Search Results

Search found 50062 results on 2003 pages for 'http 1 1'.


  • tomcat5 HTTP 400 Bad Request

    - by Oneiroi
    OS is CentOS 5.5 x64; the installed RPMs are as follows:

        tomcat5-jsp-2.0-api-5.5.23-0jpp.9.el5_5
        tomcat5-common-lib-5.5.23-0jpp.9.el5_5
        tomcat5-servlet-2.4-api-5.5.23-0jpp.9.el5_5
        tomcat5-server-lib-5.5.23-0jpp.9.el5_5
        tomcat5-5.5.23-0jpp.9.el5_5
        tomcat5-jasper-5.5.23-0jpp.9.el5_5

    The request and response:

        telnet localhost 8080
        Trying 127.0.0.1...
        Connected to localhost.localdomain (127.0.0.1).
        Escape character is '^]'.
        GET / HTTP/1.0
        Host: localhost

        HTTP/1.1 400 Bad Request
        Server: Apache-Coyote/1.1
        Date: Thu, 16 Sep 2010 15:06:21 GMT
        Connection: close

    The output of alternatives --display java:

        java - status is manual.
        link currently points to /usr/lib/jvm/jre1.6.0_21/bin/java
        /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java - priority 16000
         slave keytool: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/keytool
         slave orbd: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/orbd
         slave pack200: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/pack200
         slave rmid: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/rmid
         slave rmiregistry: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/rmiregistry
         slave servertool: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/servertool
         slave tnameserv: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/tnameserv
         slave unpack200: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/unpack200
         slave jre_exports: /usr/lib/jvm-exports/jre-1.6.0-openjdk.x86_64
         slave jre: /usr/lib/jvm/jre-1.6.0-openjdk.x86_64
         slave java.1.gz: /usr/share/man/man1/java-java-1.6.0-openjdk.1.gz
         slave keytool.1.gz: /usr/share/man/man1/keytool-java-1.6.0-openjdk.1.gz
         slave orbd.1.gz: /usr/share/man/man1/orbd-java-1.6.0-openjdk.1.gz
         slave pack200.1.gz: /usr/share/man/man1/pack200-java-1.6.0-openjdk.1.gz
         slave rmid.1.gz: /usr/share/man/man1/rmid-java-1.6.0-openjdk.1.gz
         slave rmiregistry.1.gz: /usr/share/man/man1/rmiregistry-java-1.6.0-openjdk.1.gz
         slave servertool.1.gz: /usr/share/man/man1/servertool-java-1.6.0-openjdk.1.gz
         slave tnameserv.1.gz: /usr/share/man/man1/tnameserv-java-1.6.0-openjdk.1.gz
         slave unpack200.1.gz: /usr/share/man/man1/unpack200-java-1.6.0-openjdk.1.gz
        /usr/lib/jvm/jre-1.4.2-gcj/bin/java - priority 1420
         slave keytool: /usr/lib/jvm/jre-1.4.2-gcj/bin/keytool
         slave orbd: (null)
         slave pack200: (null)
         slave rmid: (null)
         slave rmiregistry: /usr/lib/jvm/jre-1.4.2-gcj/bin/rmiregistry
         slave servertool: (null)
         slave tnameserv: (null)
         slave unpack200: (null)
         slave jre_exports: /usr/lib/jvm-exports/jre-1.4.2-gcj
         slave jre: /usr/lib/jvm/jre-1.4.2-gcj
         slave java.1.gz: (null)
         slave keytool.1.gz: (null)
         slave orbd.1.gz: (null)
         slave pack200.1.gz: (null)
         slave rmid.1.gz: (null)
         slave rmiregistry.1.gz: (null)
         slave servertool.1.gz: (null)
         slave tnameserv.1.gz: (null)
         slave unpack200.1.gz: (null)
        /usr/lib/jvm/jre1.6.0_21/bin/java - priority 2
         slave keytool: (null)
         slave orbd: (null)
         slave pack200: (null)
         slave rmid: (null)
         slave rmiregistry: (null)
         slave servertool: (null)
         slave tnameserv: (null)
         slave unpack200: (null)
         slave jre_exports: (null)
         slave jre: (null)
         slave java.1.gz: (null)
         slave keytool.1.gz: (null)
         slave orbd.1.gz: (null)
         slave pack200.1.gz: (null)
         slave rmid.1.gz: (null)
         slave rmiregistry.1.gz: (null)
         slave servertool.1.gz: (null)
         slave tnameserv.1.gz: (null)
         slave unpack200.1.gz: (null)
        Current `best' version is /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java.

    The same occurs when trying HTTP/1.1, and I am at a complete loss as to why.
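
    A hedged aside for reproducing this without a telnet session (assumes curl is installed; -0 forces HTTP/1.0 as in the session above):

        # send the same request; -v shows the raw status line from Coyote
        curl -v -0 -H 'Host: localhost' http://localhost:8080/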


  • How to disable Apache http compression (mod_deflate) when SSL stream is compressed

    - by Mohammad Ali
    I found that Google Chrome supports SSL compression, and Firefox should support it soon. To prevent CPU overhead, I'm trying to configure Apache to disable HTTP compression when SSL-level compression is already in use, with the configuration option:

        SetEnvIf SSL_COMPRESS_METHOD DEFLATE no-gzip

    While the custom log (using %{SSL_COMPRESS_METHOD}x) shows that the SSL layer's compression method is DEFLATE, the option above did not work and the HTTP response content is still being compressed by Apache. I had to use this instead:

        BrowserMatchNoCase ".*Chrome.*" no-gzip

    I would prefer a more general method, in case other browsers add SSL compression or someone runs a version of Chrome that does not have it.
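
    One hedged direction for a more general rule, sketched for Apache 2.4's expression syntax rather than verified on this setup: query mod_ssl for the negotiated compression method instead of matching browser strings.

        # a sketch, not a verified fix: suppress mod_deflate whenever the
        # SSL layer already compresses with DEFLATE
        <If "%{SSL:SSL_COMPRESS_METHOD} == 'DEFLATE'">
            SetEnv no-gzip 1
        </If>

    The SetEnvIf attempt above likely fails because mod_setenvif runs early against request headers, while mod_ssl only exports its SSL_* variables later in request processing, so the test never sees SSL_COMPRESS_METHOD.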


  • haproxy not passing X_FORWARD_FOR on HTTP POST

    - by Mark L
    Hello, I've set up HAProxy with option forwardfor so it'll pass on the user's IP to PHP via $_SERVER["HTTP_X_FORWARDED_FOR"]. If the page request isn't a POST, the variable is populated fine, but if it is a POST it won't be populated. Any ideas where I've gone wrong? Thanks everyone! My whole HAProxy conf for reference:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 4096
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen webfarm :80
            mode http
            balance roundrobin
            option forwardfor
            server webA 192.168.240.4 weight 1 maxconn 2048 check
            server webB 192.168.240.3 weight 1 maxconn 2048 check

        listen smtp :25
            mode tcp
            option tcplog
            balance roundrobin
            server smtp 192.168.240.4:25 check
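
    One hedged avenue, assuming this is HAProxy 1.3/1.4: those versions only analyse the first request of a keep-alive connection, so a POST that reuses an earlier connection arrives without the added header. Forcing one request per connection makes forwardfor apply to every request:

        listen webfarm :80
            mode http
            option httpclose    # a sketch: close after each request so each one is inspected
            option forwardfor
            balance roundrobin
            server webA 192.168.240.4 weight 1 maxconn 2048 check
            server webB 192.168.240.3 weight 1 maxconn 2048 check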


  • Big IP F5 outbound HTTP issues

    - by mbuk2k
    We've tried upgrading from 9.x to 10.2 on our F5 BIG-IP 3400, and everything went over fine apart from one thing: we're unable to establish any outbound HTTP (port 80) connections from any servers that are assigned to a virtual server. This worked before the upgrade and is required for certain calls our servers need to make. Interestingly, HTTPS (443) connections work fine; it's literally just outbound traffic over port 80 that fails. Does anyone know if anything changed between 9.4 and 10.2 that would require additional configuration to allow outbound HTTP connections? Any advice would be appreciated, thank you.


  • HTTP downloads slow - FTP of same file very fast - Windows 2003

    - by Paul Hinett
    I am having some issues with download speeds on my site over HTTP: I am averaging around 70 kbps downloading a file that is around 70 MB. But if I connect to my server via FTP and download the same file on the same computer and connection, I average about 300+ kbps. I know my server has a lot of connections at any one time, probably around 400. My server has a 1 Gbps connection to the internet, so there is plenty of bandwidth available, as the FTP transfer proves. I have no throttling of any kind enabled in IIS. If interested, there is a test file here you can download to check the speed: http://filesd.house-mixes.com/test.zip. I am based in the UK and the server is in Washington, USA, if that makes any difference. Paul


  • How to change HTTP_REFERER using Perl?

    - by zuqqhi2
    I tried changing the log format and setting HTTP_REFERER from Perl, to change the referrer that gets logged, as below.

    Pattern 1:
        Log format: %{HTTP_REFERER}o
        Perl:       $ENV{'HTTP_REFERER'} = "http://www.google.com";

    Pattern 2:
        Log format: %{X-RT-REF}o
        Perl:       addHeader('X-RT-REF' => "http://www.google.com");

    Pattern 3:
        Log format: %{HTTP_REFERER}e
        Perl:       $ENV{'HTTP_REFERER'} = "http://www.google.com";

    None of them worked. How can I do it? If you have any ideas, please share them. Note that I just want to do this as a countermeasure against illegal access in my intranet tool.
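
    A hedged note on why all three patterns may log nothing useful: in Apache log formats, %{...}o records a response header and %{...}e an environment variable, while the browser's referrer travels in the Referer request header, which is logged with %{Referer}i. Assuming mod_perl2 (where $r is the Apache2::RequestRec object), overriding the logged value could look like:

        # a sketch, assuming mod_perl2: rewrite the incoming request header
        # before the logging phase sees it
        use Apache2::RequestRec ();
        use APR::Table ();

        $r->headers_in->set('Referer' => 'http://www.google.com');

    paired with a LogFormat entry of %{Referer}i.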


  • HTTP/1.1 503 Service Unavailable Exchange 2003 OWA

    - by toups
    Receiving HTTP/1.1 503 Service Unavailable when trying to access Exchange web access, on Server 2003 and Exchange 2003. Have tried:

    - deleting the virtual directories and rebuilding them in IIS
    - restarting
    - dismounting and remounting the public folder store and mailbox store
    - verifying it's not AppPoolQueueLength
    - restarting all services (all the proper ones are running)
    - making it use SSL or not (regular HTTP); same result for both
    - installing IIS and Exchange 2003 on a new 2003 box as a new front-end server; OWA on that box gets the same error

    Not sure why it's getting a 503. It was working without an issue and is now randomly stuck at this no matter what; all users can still successfully connect via Outlook, only the web access is having issues. Any suggestions would be appreciated.


  • Get OWA and ActiveSync working on server using HTTP redirect in IIS 7

    - by eric
    We have two servers on our LAN. One is a Windows 2003 Server domain controller running Exchange 2003. The other is a stand-alone Windows 2008 server running IIS 7. Our company website runs on the IIS 7 (2008) server, so the firewall forwards port 80 to it. How can I get OWA and ActiveSync to work with this setup, without using SSL? I have tried setting up a website on the IIS 7 box (mail.ourdomain.com) and using HTTP Redirect to point to http://mailserver/exchange, but this doesn't work. Do we have to purchase an SSL certificate for this to work?
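
    For reference, a sketch of wiring up the described redirect with appcmd on the IIS 7 box (the site name and target are the poster's; treat the exact switches as assumptions to check against the httpRedirect documentation):

        rem a sketch: enable an HTTP redirect on the mail site
        %windir%\system32\inetsrv\appcmd.exe set config "mail.ourdomain.com" -section:system.webServer/httpRedirect /enabled:"true" /destination:"http://mailserver/exchange" /exactDestination:"false"

    Note that a plain redirect at best helps browsers reaching OWA; ActiveSync clients may not follow it.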


  • nginx proxypath https redirects to http

    - by Thermionix
    I'm trying to set up nginx to forward requests to several backend services using proxy_pass, however several pages load with 404s. The links on those pages have https:// in front, but result in a plain http request, which ends in a 404; I only want these services to be available through HTTPS. I've tried varied trailing slashes appended to the proxy path and location in proxy.conf, and I've also tried commenting out www.conf (just in case its location blocks caused any conflicts), to no effect. So if a link points to https://example.com/sickbeard/errorlogs: loading https://example.com/sickbeard/errorlogs in a browser gives a 404, while https://example.com/sickbeard/errorlogs/ produces this nginx error log entry:

        2011/11/23 14:21:58 [error] 28882#0: *6 "/var/www/sickbeard/errorlogs/recent.html" is not found (2: No such file or directory), client: 192.168.1.99, server: example.com, request: "GET /sickbeard/errorlogs/ HTTP/1.1", host: "example.com"

    Config files, for reference:

    proxy.conf:

        location /sickbeard {
            proxy_pass http://localhost:8081/sickbeard;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main:

        server {
            listen 80;
            include www.conf;
        }

        server {
            listen 443;
            include proxy.conf;
            include www.conf;
            ssl on;
        }

    www.conf:

        root /var/www;
        server_name example.com;

        location / {
            autoindex off;
            allow all;
            rewrite ^/$ /mainsite last;

            location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
                expires max;
            }

            location ~ \.php$ {
                fastcgi_index index.php;
                include fastcgi_params;
                if (-f $request_filename) {
                    fastcgi_pass 127.0.0.1:9000;
                }
            }
        }

    proxy.inc:

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
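
    A hedged direction rather than a verified fix: the backend services issue redirects with http:// Location headers, and proxy_redirect off in proxy.inc passes those through untouched, so the browser drops back to plain http. Rewriting Location headers back to https, and advertising TLS to the backend, often resolves this class of mismatch:

        # a sketch for one proxied service, mirroring the poster's names;
        # proxy.inc's "proxy_redirect off" would need removing for this to take effect
        location /sickbeard {
            proxy_pass http://localhost:8081/sickbeard;
            proxy_redirect http://$host/ https://$host/;  # send backend redirects back out as https
            proxy_set_header X-Forwarded-Proto https;     # assumes the backend honors this header
            include proxy.inc;
        }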


  • Allow HTTPS cookies but not HTTP?

    - by Ken
    I want to allow cookies for a domain but only over HTTPS -- not cookies from the same domain that come from HTTP. For example, I don't want any http://www.google.com cookies, but I do want to allow https://www.google.com cookies (because Calendars are there). Is there a way to do this? Does the goal even make sense? In Chrome, it only allows domain names, not URLs, to be added to the cookie exception list. In Firefox, it allows a protocol, but it only records the domain name, and if you click "Allow" or "Deny", it changes the same entry in the list.


  • How to configure Firefox to automatically reuse login credentials like IE

    - by Black Eagle
    Multiple HTTP authentication prompts in Firefox. We are currently porting our application from Internet Explorer to Firefox, and the application uses HTTP Digest Authentication. In Internet Explorer, the popup dialog to enter the username/password appears only once, and the entered login credentials are reused for subsequent HTTP requests to the web server. In Firefox, however, the authentication popup appears every time a request is made to the web server. The web server used is the Emweb server. We would like to know how to configure Firefox to automatically reuse the login credentials as IE does.


  • squid configuration change to accept HTTP requests on LAN

    - by Ratan Kumar
    I installed squid + DansGuardian to block adult content on my Linux box (Ubuntu 12.10). Everything worked fine; it blocked as expected. The problem now is that I am also running an Apache server for my LAN (a kind of website), but when accessing it via 192.168.0.1, squid blocks the connection. This is the exact error:

        The following error was encountered while trying to retrieve the URL: http://192.168.0.16/

        Connection to 192.168.0.16 failed.

        The system returned: (113) No route to host

        The remote host or network may be down. Please try the request again.

        Your cache administrator is webmaster.

    Before configuring squid it was working fine. What changes do I have to make in squid.conf? I tried:

        acl Safe_ports 80
        allow_all Safe_ports

    I want to know how I can configure it again to accept HTTP requests from the LAN.
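
    For comparison, the stock squid.conf idiom for allowing LAN clients looks roughly like this (a sketch: the subnet is an assumption, and the acl line needs the port keyword the attempt above omitted). Note too that "(113) No route to host" often points at a firewall or routing problem on the box rather than an ACL denial:

        # a sketch, assuming the LAN is 192.168.0.0/24
        acl localnet src 192.168.0.0/24
        acl Safe_ports port 80
        http_access allow localnet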


  • A simple way to redirect http://mysite.com to http://mysite.com/mylink with Apache?

    - by Bart Silverstrim
    Just starting to get the hang of how all the directives and options work under Apache. I'd like to do a redirect on my one site (the only site running on the server) so that when a request comes in to http://mysite.com, the server automatically redirects it to http://mysite.com/mylink. I have tried putting redirects into the file located in /etc/apache2/sites-enabled to rewrite this, but then the top-level domain URL complains that it isn't redirecting properly. I think what I want is a browser redirect, and thought this would work:

        RewriteEngine On
        RewriteRule ^/$ /mylink [L,R]

    but putting it into an .htaccess file didn't work (it redirected but immediately gave a 500 Internal Server Error), and putting it into the file in /etc/apache2/sites-enabled gives a configuration error when trying to restart Apache. I know it's something simple... but what am I missing?
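
    One hedged observation: in a per-directory .htaccess file, the leading slash is stripped before RewriteRule patterns are matched, so ^/$ never matches there; the equivalent pattern is ^$. Inside the vhost file, mod_alias can do the same job without mod_rewrite. Both sketches below are untested against this exact setup:

        # in .htaccess: match the empty per-directory path
        RewriteEngine On
        RewriteRule ^$ /mylink [L,R]

        # or in the VirtualHost in /etc/apache2/sites-enabled: a plain 302
        RedirectMatch temp ^/$ /mylink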


  • HTTP 401 error in Windows Authentication disappears after swapping Providers

    - by Ray Cheng
    IIS 7 on Windows 2008 R2 is acting really weird. We deploy our web apps as web sites with appcmd.exe. After they are deployed, if I browse to them, I get HTTP 401 errors. The web sites have only Windows Authentication enabled, and the providers are Negotiate and NTLM, in that order. But if I swap the providers, the HTTP 401 error goes away, and even if I swap them back, the errors are still gone. So the order of the providers doesn't seem to matter; what matters is the swapping, which must trigger something. Even if we delete the web site and application pool and reinstall the web sites, the errors are still gone. So far we can't reproduce it easily, since it happens randomly. Has anyone experienced this? How do I go about troubleshooting it?
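
    Since the sites are already deployed with appcmd.exe, the provider swap can at least be scripted while experimenting; a sketch assuming the documented collection syntax for the windowsAuthentication section (the site name is hypothetical):

        rem a sketch: remove and re-add the Negotiate provider on one site
        appcmd set config "Default Web Site" -section:system.webServer/security/authentication/windowsAuthentication /-"providers.[value='Negotiate']" /commit:apphost
        appcmd set config "Default Web Site" -section:system.webServer/security/authentication/windowsAuthentication /+"providers.[value='Negotiate']" /commit:apphost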


  • Sticky sessions on load balancers with HTTP and HTTPS

    - by javano
    How do sticky sessions relate to HTTP and HTTPS? If I place a load balancer in front of some web app servers running a front end that supports HTTPS, will the sessions remain "sticky" on a typical load balancer that lists "sticky sessions" as one of its supported features? I understand that question is partly open-ended; to clarify, would I require a load balancer that specifically supports sticky HTTPS sessions, or are "sticky sessions" a principle that functions agnostically of the HTTP payload, be it encrypted or not? Thank you.


  • 301 redirect, canonical question

    - by Dave
    I've designed my own "latest news" page for my site, and I'm trying to keep the URLs clean, e.g. it should look like this:

        http://www.domain.com/21/this-is-a-clean-url/

    When someone links to the article, they sometimes mess it up and do:

        http://www.domain.com/21/this-is-a-clean-url/#random-hash-tag

    So what I have been doing is looking for "http://www.domain.com/21", 301 (Moved Permanently) redirecting to the proper URL, and adding a canonical meta tag for it. Is this going overboard? Should I instead be using a 302 (Found) header and just let the canonical tag tell search engines what the proper URL for the article is? What is the best way of handling this?
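
    For reference, the canonical tag in question is a single link element in the page head; a minimal example using the poster's URL:

        <link rel="canonical" href="http://www.domain.com/21/this-is-a-clean-url/" />

    Worth noting: the #random-hash-tag fragment is never sent to the server, so no server-side redirect can see or strip it; only the canonical tag (or client-side script) covers that case.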


  • Reserve one http slot for /server-status?

    - by Stefan Lasiewski
    I have an Apache server which is hanging for some reason. When I want to check the load on an Apache server, I normally use mod_status via the URL http://webserver1.example.org/server-status, or from the command line with service httpd fullstatus. Today, however, the server is refusing all new connections. Some mysterious problem is causing connections to stall, which means they fill up all available slots (i.e. the number of connections exceeds the MaxClients setting), and therefore neither http://webserver1.example.org/server-status nor service httpd fullstatus can return anything. Is it possible to configure Apache to reserve one or two slots for the mod_status pages?


  • nginx: rewrite URL but have original URL stored in access.log as 200

    - by mhambra
    I'm setting up a link-tracking system, which (temporarily) involves adding /link/id/ in front of the URL (like http://server/data/id/publication/id/):

        rewrite data/id/(.*) http://server/$1;

    The request is logged as:

        ip - - [17/Nov/2011:10:07:19 +0300] "GET /data/id/publication/id.html HTTP/1.1" 302 154 "-" "UA"

    For compatibility with AWStats, we want 200 logged instead of 302. (nginx offers 301 out of the box with the permanent option, but that's inappropriate too.) What are my options here? Would a combination of location { } and rewrite do the job?
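
    A hedged sketch of one option: a rewrite to an absolute URL necessarily produces an external redirect (hence the 302), but proxying the target internally returns the content directly, so the access log records the original URL with a 200. This assumes 'server' is resolvable from nginx (e.g. as an upstream block):

        # a sketch: serve the rewritten target internally instead of redirecting
        location ~ ^/data/id/(?<rest>.*)$ {
            proxy_pass http://server/$rest;
        }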


  • Squid url rewrites https>>http

    - by bobfran
    I'm exploring some uses of Squid proxy 2.7, and I have seen a good number of examples of URL rewrites that take URLs such as http://somesitename.com and change them to https://somesitename.com. Those examples work great. What I'm wondering, though, is whether it's possible to do the reverse with a Squid URL rewriter, that is, to go from https://somesitename.com to http://somesitename.com. Simply editing the script file that handles the rewrites doesn't seem to do the trick, so I was wondering whether there are certain things I have to configure Squid to do first, and whether what I'm asking is even possible. I have my browser manually set up to use Squid as a proxy for all requests, and I can see HTTPS requests showing up in my squid access.log file (via the CONNECT method).


  • MongoDB REST interface not listening after update

    - by Ones and Zeroes
    I replaced the mongodb-10gen install with the Ubuntu packages (mongodb-server, mongodb-client and dev):

        apt-get install mongodb

    Thereafter, I am unable to connect to the REST interface, where it worked before. Doing a wget to http://127.0.0.1:27018, I receive the following response:

        Connecting to 127.0.0.1:27018... failed: Connection refused.

    My previous /etc/mongodb.conf file had the following in it:

        # enable REST
        rest = true

    Adding it to the packaged conf file does not resolve the issue, not even after restarting. I also tried changing the following, with no effect:

        # Disable the HTTP interface (Defaults to localhost:27018).
        # nohttpinterface = true

    to

        # Disable the HTTP interface (Defaults to localhost:27018).
        nohttpinterface = false

    I have searched for days, and there doesn't seem to be anything on the Mongo site about a similar anomaly. If you have encountered a similar issue on Ubuntu Oneiric, please add your comments, even if you haven't found a solution.
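
    Two quick hedged checks that narrow this down: whether anything is listening on the HTTP port at all, and which config the running daemon actually loaded. Note that mongod's REST/HTTP interface listens on the main port + 1000 (28017 for the default 27017), so if the packaged config uses the default port, the wget above is probing the wrong port:

        # is mongod listening where expected? (assumes net-tools is installed)
        sudo netstat -plnt | grep mongod

        # confirm the running daemon's flags and config file
        ps aux | grep [m]ongod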


  • Nginx 'if' statement in http context?

    - by andy
    I want to set a variable in the http context of nginx so it applies to all my servers. However, 'if' is only supported in the server and location contexts. How can I set a variable in the http context so it affects all servers? Might the lua module help with this (although I'd rather have a pure nginx solution)? If so, please provide an example. I just want the following to apply to all servers:

        # only allow gzip encoding. For all other cases, use identity (uncompressed)
        if ($http_accept_encoding ~* "gzip") {
            set $sanitised_accept_encoding "gzip";
        }
        if ($http_accept_encoding != "gzip") {
            set $sanitised_accept_encoding "";
        }
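
    A pure-nginx sketch that may fit: the map directive is valid in the http context (unlike if) and is evaluated lazily per request, so it can replace both if blocks above:

        # a sketch: declared once at http level, applies to every server
        map $http_accept_encoding $sanitised_accept_encoding {
            default  "";
            ~*gzip   "gzip";
        }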


  • Remove Windows 7's limitation on number of concurrent tcp connections (http web requests)

    - by Ghita
    I have an application that tries to open as many HTTP requests as possible (in order to stress-test a proxy implementation). It seems to me that Windows 7 (SP1) may have a limitation on the number of concurrently open connections (the so-called half-open state, if I'm not wrong). Is there something I can do on the client? I also test using a Vista PC that acts as the proxy server. It would be great if I could configure the client to sustain at least 50 newly initiated connections per second, and many more on the server. I made the modification according to this TechNet article by setting TcpNumConnections = 150, but it doesn't make a difference: using TcpView, I still only see about 20 TCP sockets associated with my HTTP client.
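
    For reference, the TechNet tweak can be scripted as below (a sketch; requires an elevated prompt and a reboot). Two hedged caveats: the half-open connection rate limit introduced in XP SP2/Vista was removed in Vista SP2 and Windows 7, so TcpNumConnections is unlikely to be the bottleneck here, and a hard ceiling near 20 sockets more often comes from application-level connection pooling:

        rem a sketch of the registry change from the article
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpNumConnections /t REG_DWORD /d 150 /f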


  • Getting Steam.exe to run through an HTTP proxy

    - by Kryptonite
    I'm sitting behind an HTTP proxy which Steam refuses to go through. Trying Proxifier as a fix rendered an error about having to use an HTTPS proxy, though research shows that it shouldn't need one. Is it possible to pass this as a target parameter in a shortcut? I.e.:

        "C:\Program Files\Steam\Steam.exe" --http-proxy=myusername:mypassword@SERVERNAME:8080

    I have the server name and port number, though I'm yet to understand the relevance of 'myusername:mypassword', or in fact which username and password these instructions refer to. Of course, if a target parameter won't work, is there another way to get Steam to work?


  • Linux QoS (Skype / BitTorrent / SIP / HTTP priority)

    - by Andre
    We are configuring a Linux box that will act as an internet gateway for an office of 30-50 computers, using iptables/HTB for traffic shaping. Is there a way to match traffic at the L7 level? It's easy to identify traffic by TCP/UDP ports (like SIP and HTTP), but what if we are dealing with Skype and BitTorrent? It was a surprise to me that there is no powerful and mature solution for tasks like this. I found only the l7-filter patch for the Linux kernel (http://l7-filter.clearfoundation.com/), but it is no longer supported (or so it seems); moreover, it can't be compiled against modern Linux kernels. The only other option I found was to use a Cisco router. Are there other ways to identify and shape Skype and BitTorrent traffic? A sketch of our current port-based approach is below.
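
    For context, a sketch of the port-based (L4) shaping already in place, which works for SIP and HTTP but not for port-hopping, encrypted protocols like Skype and BitTorrent (interface names and rates are assumptions):

        # a sketch: HTB tree with a priority class, traffic steered by port
        tc qdisc add dev eth0 root handle 1: htb default 30
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2mbit ceil 100mbit prio 1
        tc class add dev eth0 parent 1:1 classid 1:30 htb rate 90mbit ceil 100mbit prio 5

        # SIP signalling goes to the priority class; everything else defaults to 1:30
        iptables -t mangle -A POSTROUTING -o eth0 -p udp --dport 5060 -j CLASSIFY --set-class 1:10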

