Search Results

Search found 50475 results on 2019 pages for 'rpc over http'.

  • Yahoo toolbar and local sites (e.g. Intranet)

    - by Klaptrap
    We have local sites running on IIS in a regular MS Windows network. The user base has IE, Firefox and Chrome. The local sites are isolated by host headers, with DNS records created for the common IP accordingly. This is a regular set-up. Users without the Yahoo Toolbar type http://intranet and the site resolves. Users with the Yahoo Toolbar type http://intranet and the toolbar goes off to search for the site on the public Internet, irrespective of whether the address is typed into the browser address bar or the toolbar. All versions of the toolbar and IE are affected. I cannot see a setting on the toolbar to switch this "irritating" behaviour off, and simply uninstalling the toolbar is not an option. Any ideas?

  • FastCgiModule Error 500 on Windows 7 Ultimate + IIS 7.5

    - by user63179
    I'm running IIS 7.5 on Windows 7 Ultimate. I've installed PHP 5.2.14 using the Microsoft Web Platform Installer. I've created a virtual directory and a test file; when I browse it, it returns all the PHP information just fine. I'm trying to install MantisBT, and when I copy all the files to my virtual directory and browse index.php I receive the following error:

        Error Summary
        HTTP Error 500.0 - Internal Server Error
        The page cannot be displayed because an internal server error has occurred.

        Detailed Error Information
        Module:         FastCgiModule
        Notification:   ExecuteRequestHandler
        Handler:        PHP_via_FastCGI
        Error Code:     0x00000000
        Requested URL:  http://localhost:80/mantisbt/index.php
        Physical Path:  V:\wwwroot\mantisbt\index.php
        Logon Method:   Anonymous
        Logon User:     Anonymous

    I've changed these settings in php.ini:

        fastcgi.impersonate = 1
        fastcgi.logging = 0
        cgi.fix_pathinfo = 1
        cgi.force_redirect = 0

    The Handler Mappings have this information:

        Request path: *.php
        Module:       FastCgiModule
        Executable:   C:[Path to PHP installation]\php-cgi.exe
        Name:         PHP_via_FastCGI

    Thank you for any advice on this!
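
    One way to double-check what IIS actually has registered for PHP is to dump the FastCGI and handler configuration with appcmd (the paths below assume a default IIS install; adjust if yours differs):

        %windir%\system32\inetsrv\appcmd list config -section:system.webServer/fastCgi
        %windir%\system32\inetsrv\appcmd list config -section:system.webServer/handlers /text:* | findstr /i php

    A mismatch between the fastCgi application's fullPath and the executable in the handler mapping is a common cause of this kind of opaque 500.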

  • Highlighting duplicate column-pairs and counting the rows in Excel

    - by pleasehelpme
    Given the data below, a column-pair with the same values for at least 4 consecutive rows should be highlighted. Image here for better visualization: http://i49.tinypic.com/2jeshtt.jpg

        2 2
        3 4
        3 4
        3 4
        3 4
        2 3
        1 2
        2 2
        3 3
        3 3
        3 3
        3 3
        2 3
        2 3
        2 3
        2 3
        2 2
        3 4
        3 4
        3 4
        3 4
        3 4

    The output should be something like this, where the column-pairs with the same values for at least 4 consecutive rows are highlighted (marked with * here, since text can't show highlighting). Image here for better visualization: http://i48.tinypic.com/i2lzc8.jpg

        2 2
        3 4  *
        3 4  *
        3 4  *
        3 4  *
        2 3
        1 2
        2 2
        3 3  *
        3 3  *
        3 3  *
        3 3  *
        2 3  *
        2 3  *
        2 3  *
        2 3  *
        2 2
        3 4  *
        3 4  *
        3 4  *
        3 4  *
        3 4  *

    Then I need to know the number of instances of the N-consecutive equal column-pair. Considering the data above, N=4 should give 3 and N=5 should give 1, where N is the number of rows for which the column-pair is consecutively equal.
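
    A sketch of one way to get both the highlighting and the counts with helper columns, assuming the pairs sit in A1:B22. Column C numbers each row's position within its current run, column D carries the total length of the run the row belongs to, and the exact-N count falls out of two COUNTIFs:

        C1:  1
        C2:  =IF(AND(A2=A1,B2=B1),C1+1,1)         (fill down to C22)
        D22: =C22
        D21: =IF(AND(A21=A22,B21=B22),D22,C21)    (fill up to D1)

        Conditional-formatting rule applied to A1:B22:   =$D1>=4
        Runs of exactly N consecutive equal pairs:       =COUNTIF($C$1:$C$22,N)-COUNTIF($C$1:$C$22,N+1)
        e.g. N=4:  =COUNTIF($C$1:$C$22,4)-COUNTIF($C$1:$C$22,5)   returns 3 for the data above

    The COUNTIF trick works because a run of length L contributes exactly one cell where C equals N whenever L >= N, so the difference counts runs of exactly length N.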

  • .htaccess rewrite all queries to static page

    - by user127219
    I have an account where hundreds of inbound links to their calendar are showing up as 404s (they moved their site to a new platform). I would like to make a wildcard redirection of all URLs that query their old event calendar to land on a new static page, and do the same for their webstore queries. I've tried several variations, but can't seem to get it to work.

    CASE 1: I need to redirect URLs like these (note the difference between "showDay" and "showWeek"):

        apps/calendar/showWeek?calID=5107976&year=2011&month=7&day=10
        apps/calendar/showDay?calID=5107976&year=2011&month=9&day=10

    To: http://domain.com/events/

    CASE 2: And also URLs like these:

        apps/webstore/products/show/1927074

    To: http://subdomain.domain.com/

    I can't seem to get the syntax right to redirect all of these URLs. I'm looking for the equivalent of the wildcard that "apps/calendar/*" would give you at a command line. Any help is appreciated!
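
    A minimal mod_rewrite sketch for both cases, assuming the rules live in the .htaccess at the site root and that domain.com / subdomain.domain.com stand in for the real hosts as they do above. The trailing ? on each target discards the old query string, so ?calID=... doesn't get appended to the new pages:

        RewriteEngine On
        # Case 1: anything under apps/calendar/ (showDay, showWeek, ...) -> static events page
        RewriteRule ^apps/calendar/ http://domain.com/events/? [R=301,L]
        # Case 2: anything under apps/webstore/ -> the new store
        RewriteRule ^apps/webstore/ http://subdomain.domain.com/? [R=301,L]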

  • MySQL/Apache: Replace spaces with underscores only in certain URLs

    - by javipas
    I'm having a problem with some images I'm using on my WordPress blog. After a migration I renamed every image, replacing spaces with underscores, so

        HIDDEN_264_4062_FOTO_IDF los MID.jpg

    was renamed to

        HIDDEN_264_4062_FOTO_IDF_los_MID.jpg

    But although the trick was necessary and worked for most of the posts, some of them still try to find the old image, with spaces. This is not found:

        http://www.example.com/files/HIDDEN_264_4062_FOTO_IDF%20los%20MID.jpg

    and this should be the right URL:

        http://www.example.com/files/HIDDEN_264_4062_FOTO_IDF_los_MID.jpg

    Careful, though, because the "%20" is only shown in the browser: the text in the database shows spaces, not "%20". I'd like to know if I could run a SQL query on my WordPress MySQL database that replaces the spaces in .jpg filenames with underscores. The path of the images is always the same, so the rule should transform this:

        /files/HIDDEN_264_4062_FOTO_IDF los MID.jpg

    into this:

        /files/HIDDEN_264_4062_FOTO_IDF_los_MID.jpg

    The "/files/HIDDEN_264_" part is always the same, but the rest varies. Is there some way to perform this? Maybe a rewrite rule in Apache (our current webserver)?
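
    Two hedged sketches, one per layer. On the database side, MySQL (before 8.0) has no regex replace, so a single query can only fix one known filename at a time; this assumes a stock WordPress schema with the default wp_ table prefix:

        UPDATE wp_posts
        SET post_content = REPLACE(post_content,
            '/files/HIDDEN_264_4062_FOTO_IDF los MID.jpg',
            '/files/HIDDEN_264_4062_FOTO_IDF_los_MID.jpg')
        WHERE post_content LIKE '%/files/HIDDEN_264_%';

    On the Apache side, a looping rewrite can catch every such URL generically, because the [N] flag restarts the rule set until no space is left in the path (vhost/server context; by the time the rule runs, %20 has already been decoded to a space):

        RewriteEngine On
        # replace one space per pass, then start over until none remain
        RewriteRule "^(/files/[^ ]*) (.*)$" "$1_$2" [N]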

  • Apache CPU usage stays at 100% even when there are no requests

    - by Leirith
    Hi, I've been running the Apache HTTP server benchmarking tool (ab) against my new Apache server to test performance. I noticed that with a command like the following:

        ab -n 100000 -c 1000 http://www.mysite.com/

    the CPU is used 100% by the apache2 processes during the testing. The test usually concludes with the following error just before the last requests are made:

        apr_poll: The timeout specified has expired (70007)
        Total of 99960 requests completed

    After the test ends, the CPU usage remains at 100%, and it's all being consumed by Apache. I am using the worker MPM and running PHP with mod_fcgid. Any advice as to why this happens, or what can be done to stop it, would be appreciated.

  • Lighttpd with FastCGI configuration running ViewVC - rewrite problems

    - by 0xC0000022L
    At the moment I am struggling with the configuration of lighttpd together with ViewVC. The configuration was ported from Apache 2.2.x, which is still running on the machine serving the WebDAV/SVN stuff, being proxied through. Now, the problem I am having appears to be with the rewrite rules, and I'm not really sure what I am missing here. Here's my configuration (slightly condensed to keep it concise):

        var.hgwebfcgi = "/var/www/vcs/bin/hgweb.fcgi"
        var.viewvcfcgi = "/var/www/vcs/bin/wsgi/viewvc.fcgi"
        var.viewvcstatic = "/var/www/vcs/templates/docroot"
        var.vcs_errorlog = "/var/log/lighttpd/error.log"
        var.vcs_accesslog = "/var/log/lighttpd/access.log"

        $HTTP["host"] =~ "domain.tld" {
            $SERVER["socket"] == ":443" {
                protocol = "https://"
                ssl.engine = "enable"
                ssl.pemfile = "/etc/lighttpd/ssl/..."
                ssl.ca-file = "/etc/lighttpd/ssl/..."
                ssl.use-sslv2 = "disable"
                setenv.add-environment = ( "HTTPS" => "on" )
                url.rewrite-once += ( "^/mercurial$" => "/mercurial/" )
                url.rewrite-once += ( "^/$" => "/viewvc.fcgi" )
                alias.url += ( "/viewvc-static" => var.viewvcstatic )
                alias.url += ( "/robots.txt" => var.robots )
                alias.url += ( "/favicon.ico" => var.favicon )
                alias.url += ( "/mercurial" => var.hgwebfcgi )
                alias.url += ( "/viewvc.fcgi" => var.viewvcfcgi )
                $HTTP["url"] =~ "^/mercurial" {
                    fastcgi.server += ( ".fcgi" => (
                        ( "bin-path" => var.hgwebfcgi,
                          "socket" => "/tmp/hgwebdir.sock",
                          "min-procs" => 1,
                          "max-procs" => 5 )
                    ) )
                }
                else $HTTP["url"] =~ "^/viewvc\.fcgi" {
                    fastcgi.server += ( ".fcgi" => (
                        ( "bin-path" => var.viewvcfcgi,
                          "socket" => "/tmp/viewvc.sock",
                          "min-procs" => 1,
                          "max-procs" => 5 )
                    ) )
                }
                expire.url = ( "/viewvc-static" => "access plus 60 days" )
                server.errorlog = var.vcs_errorlog
                accesslog.filename = var.vcs_accesslog
            }
        }

    Now, when I access domain.tld, I correctly see the index of the repositories. However, when I look at the links for each respective repository (or click them, for that matter), they are of the form https://domain.tld/viewvc.fcgi/reponame instead of the intended https://domain.tld/reponame. What do I have to change/add to achieve this? Do I have to "abuse" the index-file mechanism somehow? The goal is to keep the /mercurial alias functional. So far I've tried sifting through the lighttpd book from Packt again, and also through the lighttpd documentation, but found nothing that seemed to match the problem.
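
    One hedged idea to sketch (not verified against this exact setup): the /viewvc.fcgi prefix appears in the generated links because ViewVC builds URLs from the script name it is invoked under. lighttpd can mount a FastCGI app at the root and hide the script name with the fix-root-scriptname option (available since 1.4.23), leaving the existing aliases to win for /mercurial and the static files:

        $HTTP["url"] !~ "^/(mercurial|viewvc-static|robots\.txt|favicon\.ico)" {
            fastcgi.server += ( "/" => (
                ( "bin-path" => var.viewvcfcgi,
                  "socket" => "/tmp/viewvc.sock",
                  "check-local" => "disable",
                  "fix-root-scriptname" => "enable",
                  "min-procs" => 1,
                  "max-procs" => 5 )
            ) )
        }

    With the script mounted at "/", ViewVC should see an empty SCRIPT_NAME and emit links like https://domain.tld/reponame directly.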

  • PDF form created in Libre Office - trouble with form fields and font sizing

    - by soawesomejohn
    I am trying to create a PDF form using LibreOffice. I can create the form elements and export as PDF. However, the form fields are giving me problems. The text in these fields always sits at the bottom of the field, and the text you input is often cut off at the bottom. I found that if I make the fields larger the text is no longer cut off, but the field is then exceptionally large, with lots of space above the text. I have made an .odt (source) and a .pdf (export) file to show what I'm running into. I tried a number of different fonts and sizes but, to make things easier, I gave all the fields the name "field1", so that once you fill out one entry all fields show as filled in.

        http://ytnoc.net/files/sampleapp.odt
        http://ytnoc.net/files/sampleapp.pdf

    My main question is: how do I make form fields that don't cut off the text, without making the fields way oversized? Made with LibreOffice 3.3.0.

  • EC2 AMI device mapping

    - by hortitude
    I have a large EC2 Ubuntu image and I'm just looking through the devices. I noticed from the metadata that:

        % curl http://169.254.169.254/latest/meta-data/block-device-mapping/ami
        sda1
        % curl http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0
        sdb

    However, when I look at what is actually mounted, there are /dev/xvda1 and /dev/xvdb (and there is no /dev/sd*). I know that both names look somewhat valid from the AWS documentation, but it looks to me like there is a mismatch between the instance metadata and what is actually on the machine. Why don't they match?

  • Differences between FCKeditor and CKEditor?

    - by matt74tm
    Sorry, but I've not been able to locate anything (even on the official pages) informing me of the differences between the two versions. Even a query on their official forum has gone unanswered for many months: http://cksource.com/forums/viewtopic.php?f=11&t=17986&start=0

    The license page talks a bit about the "internal" differences: http://ckeditor.com/license

        CKEditor Commercial License - CKSource Closed Distribution License - CDL
        ...
        This license offers a very flexible way to integrate CKEditor in your commercial application.
        These are the main advantages it offers over an Open Source license:
          * Modifications and enhancements doesn't need to be released under an Open Source license;
          * There is no need to distribute any Open Source license terms alongside with your product and no reference to it have to be done;
          * No references to CKEditor have to be done in any file distributed with your product;
          * The source code of CKEditor doesn't have to be distributed alongside with your product;
          * You can remove any file from CKEditor when integrating it with your product.

  • Script to list users' mapped drives not giving results or errors

    - by user223631
    We are in the process of migrating two file servers to a new server. We have mapped drives via user group in Group Policy. Many users have manually mapped drives, and we need to find these mappings. I have created a PowerShell script that remotely gets the drive mappings. It works on most computers, but there are many that are not returning results, and I am not getting any error messages. Each workstation on the list creates a text file, and the ones that are not returning results have no text in their files. I can ping these machines. If a machine is not turned on, I do get an error message that the RPC server is not available. My domain user account is in a group that is in the local Administrators group. I have no idea why some are not working. Here is the script:

        # Load list into variable, which will become an array of strings
        If( !(Test-Path C:\Scripts)) { New-Item C:\Scripts -ItemType directory }
        If( !(Test-Path C:\Scripts\Computers)) { New-Item C:\Scripts\Computers -ItemType directory }
        If( !(Test-Path C:\Scripts\Workstations.txt)) { "No Workstations found. Please enter a list of Workstations under Workstation.txt"; Return }
        If( !(Test-Path C:\Scripts\KnownMaps.txt)) { "No Mapping to check against. Please enter a list of Known Mappings under KnownMaps.txt"; Return }

        $computerlist = Get-Content C:\Scripts\Workstations.txt

        # Loop through each item in the array (each computer in the list of computers we loaded into the variable)
        ForEach ($computer in $computerlist) {
            $diskObject = Get-WmiObject Win32_MappedLogicalDisk -computerName $computer |
                Select Name,ProviderName |
                Out-File C:\Tester\Computers\$computer.txt -width 200
        }

        Select-String -Path C:\Tester\Computers\*.txt -Pattern cmsfiles | Out-File C:\Tester\Drivemaps-all.txt

        $strings = Get-Content C:\Tester\KnownMaps.txt

        Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -notmatch -simplematch | Out-File C:\Tester\Drivemaps-nonmatch.txt -Width 200
        Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -simplematch | Out-File C:\Tester\Drivemaps-match.txt -Width 200

  • How to use wget to grab copy of Google Code site documents?

    - by Alex Reynolds
    I have a Google Code project which has a lot of wiki documentation. I would like to create a copy of this documentation for offline browsing, using wget or a similar utility. I have tried the following:

        $ wget --no-parent \
               --recursive \
               --page-requisites \
               --html-extension \
               --base="http://code.google.com/p/myProject/" \
               "http://code.google.com/p/myProject/"

    The problem is that links within the mirrored copy look like:

        file:///p/myProject/documentName

    Renaming links in this way causes 404 (not found) errors, since the links point nowhere valid on the filesystem. What options should I use instead with wget, so that I can make a local copy of the site's documentation and other pages?
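
    The usual ingredient for offline browsing is --convert-links (-k), which rewrites each downloaded page's links to relative paths that work from disk; --base doesn't do that and can be dropped. A sketch against the same URL:

        $ wget --recursive \
               --no-parent \
               --page-requisites \
               --html-extension \
               --convert-links \
               --domains code.google.com \
               "http://code.google.com/p/myProject/"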

  • How to make my changes in httpd.conf stick on WHM/cPanel/EasyApache

    - by Seiti
    I'm setting up a server and trying to configure Apache. It only needs to work as a frontend to Tomcat. To do that I added some instructions to the VirtualHost directive, using mod_proxy:

        <VirtualHost *>
            ServerName myserver.domain.com
            ProxyRequests Off
            ProxyPass / http://myserver.domain.com:8080/
            ProxyPassReverse / http://myserver.domain.com:8080/
        </VirtualHost>

    It works fine, and if the need comes, I'll use mod_jk. But how do I do this the right way with EasyApache, so it stops overwriting my changes?
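
    A hedged pointer, not verified against this exact WHM version: cPanel's documented escape hatch for custom vhost directives is an include file under /usr/local/apache/conf/userdata, which EasyApache leaves alone across rebuilds. Something along these lines, where USERNAME and DOMAIN.TLD are placeholders for the real account and domain:

        # /usr/local/apache/conf/userdata/std/2/USERNAME/DOMAIN.TLD/proxy.conf
        ProxyRequests Off
        ProxyPass / http://myserver.domain.com:8080/
        ProxyPassReverse / http://myserver.domain.com:8080/

    followed by rebuilding the vhost includes:

        /scripts/ensure_vhost_includes --all-users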

  • Using runit and monit to run / monitor services

    - by murtaza52
    I am configuring some services to run on an Ubuntu server. I was going through the links below, where they use runit to run the services and monit to monitor them:

        http://rubyworks.rubyforge.org/manual/monit.html
        http://rubyworks.rubyforge.org/manual/runit.html

    1) The services are all started through monit.
    2) Monit in turn starts them using runit.

    What is the advantage of the above setup, where the services are run using runit via monit? Why put runit in the middle, instead of starting them directly with monit?
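
    To make the division of labour concrete, here is a minimal hand-rolled pair (the service and user names are made up): runit supervises the process and restarts it the moment it dies, while monit layers slower, condition-based checks on top and drives runit's sv command to act.

        # /etc/sv/myapp/run  (runit service definition; make it executable)
        #!/bin/sh
        exec 2>&1
        exec chpst -u appuser /usr/local/bin/myapp

        # /etc/monit/conf.d/myapp  (monit check that delegates start/stop to runit)
        check process myapp matching "myapp"
            start program = "/usr/bin/sv start myapp"
            stop program  = "/usr/bin/sv stop myapp"
            if totalmem > 300 MB for 3 cycles then restart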

  • Problem running an application on a Windows Server 2008 instance using Amazon EC2 and WAMP

    - by Siddharth
    I have a basic (small instance type) Windows Server 2008 instance running on Amazon EC2. I've installed WAMP server onto it and have also loaded my application. I did this using a Remote Desktop Connection from my Windows machine. I'm able to run my application locally on the instance; however, when I try to access it from my browser using the public DNS name Amazon gave it, I'm unable to do so. My instance has a security group that is configured to allow HTTP, HTTPS, RDP, SSH and SMTP requests on different ports. In fact, I have the exact same security group as the one used in this blog: http://howto.opml.org/dave/ec2/ I did almost everything the same as the blog, except for using a different Amazon Machine Image. This is my first time using Amazon EC2, and I can't figure out what I'm doing wrong here.
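
    Two things worth checking besides the security group, sketched as commands run on the instance itself (both assume defaults; WampServer in particular ships "offline", with Apache configured to accept only localhost until you click Put Online in its tray menu):

        REM Is Apache listening on all interfaces, not just 127.0.0.1?
        netstat -an | findstr :80

        REM Open port 80 in the Windows Firewall - the EC2 security group does not do this for you
        netsh advfirewall firewall add rule name="HTTP 80" dir=in action=allow protocol=TCP localport=80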

  • Echo 404 directly from nginx to improve performance

    - by user64204
    I am in charge of production servers serving static content for a website. Those servers are constantly being crawled by bots looking for potential exploits (which isn't that much of a problem security-wise, because no application can be reached behind the web server), but this generates thousands of 404s per day, sometimes per hour. I am looking into ways of blocking those requests, but it's tricky (you want to make sure you don't block legitimate traffic, and these bots are becoming more and more clever at looking legit), and it is going to take me a while to find an acceptable solution.

    In the meantime I would like to reduce the performance impact of serving those 404 pages. Indeed, we're using nginx, which by default is configured to serve its 404 page from disk (this can be changed using the error_page directive, but in the end the 404 will either have to be served from disk or from another external source, e.g. an upstream application, which would be worse), and that isn't ideal.

    I ran a test with ab on my local machine with a basic configuration: in one case I echo a message directly from nginx, so the disk isn't touched at all; in the other case I hit a missing page and nginx serves its 404 from disk.

        server {
            # [...] the default nginx stuff
            location / { }
            location /this_page_exists {
                echo "this page was found";
            }
        }

    Here are the test results (my laptop has an Intel(R) Core(TM) i7-2670QM + SSD, in case you're wondering why they are so high):

        $ ab -n 500000 -c 1000 http://localhost/this_page_exists
        Requests per second:    25609.16 [#/sec] (mean)

        $ ab -n 500000 -c 1000 http://localhost/this_page_doesnt_exists
        Requests per second:    22905.72 [#/sec] (mean)

    As you can see, returning a value with echo is 11% ((25609-22905)÷22905×100) faster than serving the 404 page from disk. Accordingly, I would like to echo a simple "404 - Page not found" string from nginx. I have tried many things so far, but they all failed; essentially the idea was this:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            echo "404 - Page not found";
        }

    The problem is that as soon as the echo directive is used, the HTTP response code is set to 200. I tried changing that by doing error_page 200 = 400, but that breaks the configuration. How can I serve a 404 page directly from nginx? (Without hacking the source, which may be my next step.)
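
    A sketch of the stock-nginx way to do this, with no echo module needed: on reasonably recent versions, the return directive can emit both the status code and a short in-memory body, so nothing is read from disk:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            default_type text/plain;
            return 404 "404 - Page not found\n";
        }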

  • I can't browse PHP pages on my local server

    - by tibin mathew
    Hi, I can't browse PHP pages on my local server. Before, it was working fine, but now I can't browse PHP pages. I can browse HTML pages and ASP pages with no problems, but when I try to browse a PHP page it never finishes loading. What could the problem be? I am using Windows 2000 Advanced Server and my web server is Tomcat. Please, someone help me.

    Guys, I'm not getting anything in my browser; it just continues loading, and nothing shows on the page. I'm not getting a 404 error or anything like that. For example, consider a file located inside a folder named myproject. I can reach up to this:

        http://localhost/projects/myproject

    but after that I can't browse PHP pages inside it:

        http://localhost/projects/myproject/index.php

    This will just keep loading, and nothing shows on the page.

  • Configure an HTTPS server on a Cisco router

    - by Sara
    For the past week I have been trying to configure an HTTPS server on a Cisco 2900 router. I've used the following commands and assigned a username and password to privilege 15. However, when I try to access a given IP it asks for a username and password, but when I enter the username and password I configured it does not let me in, and I'm not sure where the problem is.

        Router(config)# ip http secure-server
        Router(config)# ip http authentication local

    These were the commands I used for the HTTPS server. I also used the following to assign the username and password:

        Router(config)# username name privilege 15 secret 0 password

    where 'name' and 'password' represent the username and password respectively. I'm trying to access the 192.168.14.1 interface on the router, and the username and password I created are not authorized to enter. (I got the commands from a Cisco router manual.)
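
    A couple of hedged checks that often pin this down (the IOS command names below are standard, but output varies by release). First confirm the secure server is actually up and that you are browsing https:// rather than http://, since ip http secure-server by itself does not enable the plain HTTP server:

        Router# show ip http server status
        Router# show running-config | include ip http|username|aaa

    Also worth checking: if aaa new-model is configured, HTTP logins follow the AAA method lists rather than the simple local lookup, which is a common reason an otherwise valid local user is rejected at the prompt.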

  • nginx connection reset

    - by Steve
    When first visiting my site after not visiting it for a few minutes, the connection is "reset" 100% of the time. I get this message when debug logging is turned on, along with a 400 Bad Request status:

        client prematurely closed connection while reading client request line

    I've read that this could be caused by the large_client_header_buffers setting. I have Google Analytics on my site. Using Live HTTP Headers, I get this as the request:

        GET /__utm.gif?utmwv=5.3.7&utms=35&utmn=745612186&utmhn=domain.com&utmcs=UTF-8&utmsr=1920x1080&utmvp=1841x903&utmsc=24-bit&utmul=en-us&utmje=1&utmfl=11.4%20r402&utmdt=2006Scape%20Forums%20-%20General&utmhid=2004697163&utmr=0&utmp=%2Fservices%2Fforums%2Fboard.ws%3F3%2C4&utmac=UA-25674897-2&utmcc=__utma%3D68455186.1647889527.1351640625.1352446442.1352451659.100%3B%2B__utmz%3D68455186.1352097329.64.2.utmcsr%3Ddomain.com%7Cutmccn%3D(referral)%7Cutmcmd%3Dreferral%7Cutmcct%3D%2Fservices%2Fforums%2Fboard.ws%3B&utmu=q~ HTTP/1.1

    My large_client_header_buffers in nginx is set to 4 8k, so I don't know if this is the problem. Immediate requests after the first "reset" request are all successful.

  • 2012 R2 services will not start after promotion to Domain Controller

    - by Cybersylum
    Having a peculiar issue promoting a Windows 2012 R2 server in a domain at the 2003 domain/forest functional level. I built a new 2012 R2 server and added the following software: LabTech, AppAssure, ESET A/V, and TeamViewer. It activated and appeared to be working fine. I added the Active Directory Domain Services role and completed the configuration (domain/forest prep and DC promotion). All appeared to go well. I rebooted the server, and that's where the peculiar stuff began. I noticed the server indicated it needed to be activated again, but it would not accept the key. I verified the key was good. That's when I noticed the Software Protection service (as well as many other core services: Base Filtering Engine, DHCP Client, firewall, etc.) would not start. The error message for all of them was "Access Denied".

    I called MS, and they wanted to troubleshoot at a service level. Their fix was to use procmon, identify the resource that needed permissions (registry key, file or folder), and add "Everyone" with full control. That got the services to start, but the problem reappeared after a reboot.

    Thinking the issue might have been with the anti-virus package during the promotion process, I rebuilt the DCs from scratch and removed the metadata from AD (as I could not demote the machines: "The RPC server is unavailable"). I tried to promote the newly built machines again, the only changes to the brand-new machines being critical updates. Again the promotion appeared to work fine, but upon reboot (and a long wait to allow replication to occur) similar problems began to reappear. I have verified that the schema updates are correct (schema version is 69, which corresponds to Windows Server 2012 R2). I am not finding much about this issue through my own searches, so I thought I would post this to see if anyone else has seen anything similar.

  • Host forwarding fails, server is up, domain name tests ambiguous

    - by jayunit100
    I have a domain name registered with http://www.registryrocket.com/ The "main" site, which is called rudolfcode.net, is registered under GoDaddy and forwards to a Heroku site (rudolfcode.herokuapp.com). I have found that the main site, rudolfcode.net, works, but the HostGator forwarding has stopped working (Firefox simply fails when you point to http://www.rudolflabs.com, which is the domain name registered by HostGator). How can I debug this issue?

    Finally, I have tried to run some DNS tests. I'm not sure what the failures mean, but I'm pretty sure that "Connecting to WWW Home Page" failing is a pretty bad sign! Thanks.
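
    A minimal way to start the debugging from any machine with dig (domain names taken from the question): check who the authoritative name servers are and what the www record actually resolves to, then compare against a public resolver.

        $ dig NS rudolflabs.com +short
        $ dig www.rudolflabs.com +short
        $ dig @8.8.8.8 www.rudolflabs.com

    If the NS records point somewhere unexpected, or the www record is missing or pointing at the old platform, that narrows the problem to the registrar/DNS side rather than the forwarding service.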

  • DHCP and reservations in Windows Server 2003

    - by Fri13th
    Hello everybody! I have a problem configuring DHCP reservations. On the client, ipconfig shows the leased address is 192.168.188.20:

        http://i160.photobucket.com/albums/t171/dungttvn/123.png

    Then, on the client computer, I run ipconfig /release. But after I configure the reservation with the fixed IP address 192.168.188.100 on the server computer (through VMnet1) and run ipconfig /renew on the client, it doesn't work: the leased address is still always 192.168.188.20:

        http://i160.photobucket.com/albums/t171/dungttvn/456.png

    Someone help me! =.= Many thanks!

  • mod_ssl RPM conflict

    - by 0A0D
    I built Apache httpd into an RPM using these sites:

        http://erikwebb.net/blog/compile-and-install-apache-24-red-hat-enterprise-linux-rhel-6-or-centos-6
        http://ramblin-dude.blogspot.com/2013/04/compiling-rpm-for-httpd-on-rhel-57.html

    I was successful at building apr* and httpd*. However, when I try to install httpd using

        rpm -Uvh httpd-devel-2.2.25-1.x86_64.rpm httpd-2.2.25-1.x86_64.rpm mod_ssl-2.2.25-1.x86_64.rpm

    I get the following error:

        package mod_ssl-2.2.3-82.el5_9.x86_64 (which is newer than mod_ssl-2.2.25-1.x86_64) is already installed.

    I have httpd 2.2.3-82 installed. Do I need to remove it first? It seems counterintuitive.
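
    The "newer" in that message is most likely the package Epoch rather than the version: Red Hat's mod_ssl carries Epoch 1 (a leftover from the old mod_ssl 2.8.x numbering), and any epoch outranks version and release, so 1:2.2.3-82 beats an epoch-less 2.2.25-1. A quick way to confirm, plus the usual workarounds (sketched, not tested against this box):

        $ rpm -q --qf '%{EPOCH}:%{VERSION}-%{RELEASE}\n' mod_ssl
        # expect something like: 1:2.2.3-82.el5_9

        # Either remove the distro packages first...
        # rpm -e mod_ssl httpd httpd-devel
        # ...or force the "downgrade" in one transaction:
        # rpm -Uvh --oldpackage httpd-devel-2.2.25-1.x86_64.rpm httpd-2.2.25-1.x86_64.rpm mod_ssl-2.2.25-1.x86_64.rpm

    Adding an Epoch: 1 line to the custom spec file sidesteps the comparison permanently.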

  • How to force or redirect to SSL in nginx?

    - by Callmeed
    I have a signup page on a subdomain, like https://signup.mysite.com. It should only be accessible via HTTPS, but I'm worried people might somehow stumble upon it via HTTP and get a 404. My http/server block in nginx looks like this:

        http {
            server {
                listen 443;
                server_name signup.mysite.com;

                ssl on;
                ssl_certificate /path/to/my/cert;
                ssl_certificate_key /path/to/my/key;
                ssl_session_timeout 30m;

                location / {
                    root /path/to/my/rails/app/public;
                    index index.html;
                    passenger_enabled on;
                }
            }
        }

    What can I add so that people who go to http://signup.mysite.com get redirected to https://signup.mysite.com? (FYI, I know there are Rails plugins that can force SSL, but I was hoping to avoid that.)
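
    The standard nginx-level fix is a second, plain-HTTP server block for the same name that does nothing but redirect. A minimal sketch (return with a URL needs a reasonably modern nginx; the commented rewrite form is the older equivalent):

        server {
            listen 80;
            server_name signup.mysite.com;
            return 301 https://signup.mysite.com$request_uri;
            # on older nginx versions:
            # rewrite ^ https://signup.mysite.com$request_uri? permanent;
        }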
