Search Results

Search found 15035 results on 602 pages for 'request'.


  • Only one domain not resolving via Windows DNS server at multiple locations, but resolving at others

    - by Brett G
    I'm having quite a weird issue. We had mail delivery problems to a specific domain, and after looking closer I realized that DNS for that domain isn't resolving via the in-house Windows 2003 SP2 DNS server:

        C:\>nslookup foodmix.net
        Server:  DC.DOMAIN.com
        Address:  10.1.1.1

        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        *** Request to DC.DOMAIN.com timed-out

    (DC.DOMAIN.com and 10.1.1.1 are generic values replacing the actual ones.) Even if I run this nslookup on the DC.DOMAIN.com server itself, I get the same result, yet all other lookups work as they should. I tried it on servers at completely separate organizations on different networks (Windows 2003 AD servers); oddly, some of those had the exact same issue. Public DNS servers, however, resolve the domain fine. I have tried clearing the DNS cache, restarting the server, restarting the services, etc. Nothing has worked.

    One weird event I noticed in the DNS Server event log that might be related is event ID 5504 with the following description:

        The DNS server encountered an invalid domain name in a packet from 192.33.4.12. The packet will be rejected. The event data contains the DNS packet. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    In the data section of that event I can see ns2.webhostingstar.com mentioned, which happens to be the nameserver for the domain in question. Several discussion threads and a Microsoft KB article pointed to disabling EDNS. I have done this via "dnscmd /config /enableednsprobes 0" and it has not fixed the issue.
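    A minimal triage sketch, assuming the standard Windows tools already used above (ns2.webhostingstar.com is the nameserver pulled from the event data):

        :: ask the domain's own nameserver directly, bypassing the local DNS server
        nslookup foodmix.net ns2.webhostingstar.com

        :: repeat the lookup over TCP; EDNS trouble often shows up as dropped
        :: oversized UDP replies, and TCP sidesteps that
        nslookup
        > set vc
        > foodmix.net
        > exit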

  • OS X Apache giving 503 error for anything in /api directory

    - by WilliamMayor
    I have a locally hosted website that uses Smarty templates, and I'm trying to get started on building an API for the site. I've used virtualhost.sh to create a local virtual host for this and other sites. I've discovered that if I put a directory called api at the root of any of these virtual hosts, I get a 503 error when I try to access anything inside it. I am using mod_rewrite, but so far only to append a .php extension when needed. Here are the error log entries for a request:

        [Thu Feb 09 13:42:37 2012] [error] proxy: HTTP: disabled connection for (localhost)
        [Thu Feb 09 13:49:06 2012] [error] (61)Connection refused: proxy: HTTP: attempt to connect to [fe80::1]:8080 (localhost) failed
        [Thu Feb 09 13:49:06 2012] [error] ap_proxy_connect_backend disabling worker for (localhost)

    The middle line gave me a clue to look in my hosts file, because why would a request go to [fe80::1]:8080? I commented out that line and tried again; this time the error was about connecting to the standard 127.0.0.1 localhost. I have concluded that perhaps there is a config file somewhere picking up the underlying request to localhost/api and pointing it somewhere other than my virtual host. At this point my ability to fix the problem fails me. Can anyone help?
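    A quick way to hunt for the stray rule, sketched on the assumption that some loaded Apache config maps /api to a proxy backend (paths are the default OS X Apache locations):

        # list every vhost Apache actually loads, and where each was defined
        apachectl -S

        # search the loaded config tree for proxy or rewrite rules touching api
        grep -rin "api" /etc/apache2/ | grep -i "proxy\|rewrite"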

  • Firefox addon F@stestFox API sending/collecting data?

    - by Richard
    System: Ubuntu 64-bit / Firefox 24.0. Subject: the addon "F@stestFox". It's a nice in-browser search tool and more. What's problematic is the way the program handles search queries. When I use a search shortcut, Burp Suite shows a request to msgs.smarterfox.com:80:

        GET /log_msg?name=popup_bubble_searched&search_engine_title=Search%20Startpage&source=FastestFox&redirect_to=https%3A%2F%2Fstartpage.com%2Fdo%2Fsearch%3Fcmd%3Dprocess_search%26cat%3Dweb%26query%3Dnginx%26language%3Denglish%26no_sugg%3D1%26ff%3D%26abp%3D-1&rand=856827465 HTTP/1.1
        Host: msgs.smarterfox.com
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive

    Once I even saw a unique identifier (installation time?) sent with the request to the server. Am I right that the addon sends the website I am looking at to the server? Sometimes I only mark text (an IP address or a link) and the addon sends this data? Seriously? I did search for the URL in the code, but I don't speak Java, and I am not sure if the data from the request can actually be used for tracking :)

    Question: I want the awesome features of the addon without connecting to their server; marked text should be sent only to the search engines. What should I do next? Thank you.
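    A blunt local workaround, offered as a sketch only: assuming msgs.smarterfox.com (the host in the capture above) is the addon's sole logging endpoint, sink it in the hosts file so those requests never leave the machine. Note the capture shows the search redirect itself passing through that host, so this may break the shortcut; it is a way to test, not a verified fix.

        # /etc/hosts -- null-route the addon's logging endpoint
        127.0.0.1   msgs.smarterfox.com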

  • Why is the 2nd IP from my traceroute no longer answering pings?

    - by Pedro77
    My Internet is really laggy today. I ran a traceroute and realized that I'm getting no answer from an IP at the beginning of the route. See:

        Tracing route to 12.129.202.154 over a maximum of 30 hops

          1    <1 ms    <1 ms    <1 ms  192.168.0.1
          2     *        *        *     Request timed out.
          3     8 ms     8 ms     8 ms  bd044008.virtua.com.br [189.4.64.8]
          4     9 ms     8 ms     8 ms  bd044009.virtua.com.br [189.4.64.9]
          5    26 ms    26 ms    24 ms  embratel-T0-1-5-0-tacc01.cas.embratel.net.br [200.174.243.21]
          6   360 ms    15 ms    12 ms  ebt-T0-15-0-12-tcore01.ctamc.embratel.net.br [200.244.140.218]
          7   330 ms   349 ms   261 ms  ebt-Bundle-POS11942-intl04.mianap.embratel.net.br [200.230.220.10]
          8   139 ms   141 ms   139 ms  sl-st30-mia-.sprintlink.net [144.223.64.221]

    Connection diagram: PC - router configured as access point - router (192.168.0.1) - cable modem (192.168.100.1).

    I think it is odd that the 2nd hop is not returning the ping. I looked at some old traceroute logs to see what the 2nd IP used to be: 10.19.0.1. So, what does this 2nd IP stand for, and how can I find out why it is not answering? I don't understand it: if it does not answer the ping, how can the packets continue (yeah, newbie question)?

    Edit: since hop 3 answers in 8 ms, the timeout at hop 2 should not really be a problem in itself, but it is still odd that it stopped answering. So my doubts are: 1. Where is the IP 10.19.0.1 from? 2. Why did it stop answering ping requests? 3. How can hop 7 be smaller than hop 6, and hop 8 smaller than hops 7 and 6? Shouldn't the pings be higher for each successive hop, with each hop's time being the sum of the hops before it plus its own time (hop 3 = 1+2+3)?
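    Two quick checks, assuming the same Windows tools as the trace above. Note that each traceroute line is an independent round trip to that hop, not a running sum, and routers may deprioritize or rate-limit ICMP addressed to themselves while forwarding everything else, so a silent hop 2 next to a healthy 8 ms hop 3 is usually harmless:

        :: ping the old hop-2 address directly (10.19.0.1 is a private RFC 1918
        :: address, so it is most likely the ISP's first aggregation router)
        ping -n 5 10.19.0.1

        :: re-run the trace without reverse DNS lookups to keep timings clean
        tracert -d 12.129.202.154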

  • MS DNS lookup issue

    - by 3molo
    Hi. I've got two AD/DNS servers, and on the secondary I can't look up the external site www.iis.se (or any other hostname that their nameservers control). The central firewall at this office allows any/any outbound, TCP and UDP. The DNS server has no local firewall and no antivirus. My Windows client, located in the same subnet as the DNS server, can do the lookup by asking the nameservers that are authoritative for www.iis.se.

    'dig NS iis.se' shows:

        iis.se.  2517  IN  NS  ns2.nic.se.
        iis.se.  2517  IN  NS  ns.nic.se.
        iis.se.  2517  IN  NS  ns3.nic.se.

    On the AD/DNS server, querying the authoritative servers directly works:

        C:\Users\Administrator>nslookup www.iis.se 212.247.7.228
        Server:  UnKnown
        Address:  212.247.7.228

        Name:    www.iis.se
        Addresses:  2a00:801:f0:80::80
                    212.247.7.221

        C:\Users\Administrator>nslookup www.iis.se 194.17.45.54
        Server:  UnKnown
        Address:  194.17.45.54

        Name:    www.iis.se
        Addresses:  2a00:801:f0:80::80
                    212.247.7.221

        C:\Users\Administrator>nslookup www.iis.se 212.247.3.83
        Server:  UnKnown
        Address:  212.247.3.83

        Name:    www.iis.se
        Addresses:  2a00:801:f0:80::80
                    212.247.7.221

    And still, asking the local DNS server itself fails:

        C:\Users\administrator>nslookup www.iis.se
        Server:  UnKnown
        Address:  127.0.0.1

        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        *** Request to UnKnown timed-out

  • transparently proxying a firewalled web application from a non-standard port to port 80

    - by Terrence Brannon
    I have a web application that serves on port 8088 on $server. However, the only port accessible from remote on $server is port 80, and only CGI programs can execute on port 80. I would like to write a CGI program, accessible via port 80, that allows one to use the web app running on port 8088. From my view, an ideal solution would be some sort of Java web browser that simply opened up a window and allowed me to use the program running on that port; the CGI program would simply initiate a web browser applet or something. I wrote a Perl CGI program that does it, but I really would like a more transparent solution:

        use CGI;          # was implicit in the original snippet
        use LWP::Simple;
        use HTML::Tree;

        my $q = new CGI;
        print $q->header;

        my $base    = "http://localhost:8088";
        my $request = $base;
        my $qurl    = $q->param('url');

        if (length($qurl) > 1) {
            warn "long $qurl";
            $request = "$base$qurl";
        } else {
            warn "short $qurl";
        }

        my $content = get($request);
        my $tree    = HTML::TreeBuilder->new_from_content($content);
        my @a       = $tree->look_down('_tag' => 'a');

        for my $a (@a) {
            my $url = $a->attr('href');
            next if index($url, '#') > -1;
            $url = "?url=$url";
            $a->attr(href => $url);
        }

        print $tree->as_HTML;

  • Problem posting multipart form data using Apache with mod_proxy to a mongrel instance

    - by Ryan E
    I am attempting to simulate my site's production environment as closely as I can on my local machine. This is a Rails site that uses Apache with mod_proxy to forward requests to a mongrel cluster. On my Mac OS X Leopard machine I have the default install of Apache running and have configured a vhost to use mod_proxy to forward requests to a local mongrel instance on port 3000:

        <Proxy balancer://mongrel_cluster-development>
          BalancerMember http://127.0.0.1:3000
        </Proxy>

    For the most part this works fine: I can browse my development site using the ServerName of the vhost I configured and can confirm that requests are properly forwarded to the mongrel instance. However, one page on the site has a multipart form that is used to upload an image to the server. When I post this form, there is a delay of about 5 minutes and the browser ultimately returns a Bad Request: "Your browser sent a request that this server could not understand." In the error log for my vhost:

        [Tue Sep 22 09:47:57 2009] [error] (70007)The timeout specified has expired: proxy: prefetch request body failed to 127.0.0.1:3000 (127.0.0.1) from ::1 ()

    The same form works fine if I browse directly to the mongrel instance (http://127.0.0.1:3000). Does anybody have any idea what the problem might be and how to fix it? If there is any important information I neglected to include, post a comment and I can add to this question.

    Note: upon further investigation, this appears to be a problem specific to Safari; the form works fine in Firefox.
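    One hedged idea, assuming stock Apache 2.2 mod_proxy_http: the "prefetch request body" timeout suggests Apache stalls while reading the upload before forwarding it. mod_proxy_http honors a per-request environment variable that makes it spool the body and send a plain Content-Length upstream, which can help with clients whose multipart requests arrive without one:

        # inside the vhost doing the proxying
        SetEnv proxy-sendcl 1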

  • Dynamic forwarding with SOCKS5 proxy [on hold]

    - by bh3244
    I'm building my own SOCKS5 client and HTTP library and am having trouble figuring out how dynamic port forwarding works. So far I can connect successfully with my SOCKS5 client, but from there on I am stuck. I am using the ssh -D command. Given my local machine "home" and my server "server", if I want to use "server" as the proxy for all connections, I understand I would type ssh -D "localport" "serverhostname" on "home"; this command, I understand, has ssh accept connections speaking the SOCKS5 protocol on that local port.

    So now if I want to connect to google.com (74.125.224.72:80) and issue a GET for the front page, I assume I would send the SOCKS5 connect request, the server would respond with 0x00 ("succeeded"), and from then on I am connected; I would send the HTTP GET request and the server would respond accordingly with the data. Now if I want to navigate to a different website, must I issue another SOCKS5 connection request for that site's IP/hostname? I'm confused whether this is the way it is done, or whether there is a program listening on the local port of "server" that handles the outgoing and incoming data.

    To reiterate: do SOCKS5 proxies work by sending repeated SOCKS5 connection requests for different addresses, or is there just one connection to a local port on "server", with another program on "server" handling the outgoing connection to the internet by using that local port to send and receive data to/from "home"?
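    For illustration, a minimal SOCKS5 CONNECT in Python (a protocol sketch only: no auth and no robust recv loops). It shows the model the question asks about: each new destination means a fresh TCP connection to the local forwarded port, with the greeting and CONNECT exchange repeated on each one.

        import socket
        import struct

        def socks5_connect(proxy_host, proxy_port, dest_host, dest_port):
            # One proxied stream per destination; a new site = a new call.
            s = socket.create_connection((proxy_host, proxy_port))
            s.sendall(b"\x05\x01\x00")               # ver 5, offer 1 method: no auth
            ver, method = s.recv(2)
            assert ver == 5 and method == 0
            name = dest_host.encode()
            s.sendall(b"\x05\x01\x00\x03" +          # CONNECT, domain-name address type
                      bytes([len(name)]) + name +
                      struct.pack(">H", dest_port))
            reply = s.recv(262)                      # VER REP RSV ATYP BND.ADDR BND.PORT
            assert reply[1] == 0x00                  # 0x00 = succeeded
            return s                                 # socket now carries raw HTTP, etc.

        # usage sketch against an `ssh -D 1080 serverhostname` tunnel:
        conn = socks5_connect("127.0.0.1", 1080, "google.com", 80)
        conn.sendall(b"GET / HTTP/1.0\r\nHost: google.com\r\n\r\n")
        print(conn.recv(4096))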

  • Asterisk terminating outbound call when picked up, sends 'BYE' message

    - by vo
    I'm running Asterisk 1.6.1.10 / FreePBX 2.5.2.2 and I've got an outbound trunk set up. Everything used to work fine until recently (perhaps due to the upgrade to FC12, or other things; I'm not sure). The setup has no issues registering or setting up the call: RTP packets go both ways and you can hear the ringing from the other side. However, when the call is picked up (or thereabouts), the incoming RTP packets cease. On closer inspection with Wireshark, these particular packets seem to be the cause:

        trunk->asterisk  SIP/SD Status: 200 OK, with session description
        asterisk->trunk  SIP Request: ACK sip:<phone>@trunk:6889
        asterisk->trunk  SIP Request: BYE sip:<phone>@trunk:6889
        [..about a dozen RTP packets in/outbound..]
        trunk->asterisk  SIP Status: 200 OK, CSeq: 104 Bye
        [..outbound RTP continues, phone is silent..]

    Then the inbound RTP packets cease, but the Asterisk logs show no activity at this point; the last entry reads 'SIP/ is answered SIP/'. When you then hang up the extension, you get:

        asterisk->trunk  SIP Request: BYE sip:<phone>@trunk:6889
        trunk->asterisk  SIP Status: 481 Call Leg/Transaction does not exist

    My trunk peer settings in FreePBX are:

        username=<user>
        fromuser=<user>
        canreinvite=no
        type=friend
        secret=<pass>
        qualify=no          ; qualify=yes produces 401/forbidden messages
        nat=yes
        insecure=very
        host=<sip trunk gateway>
        fromdomain=<sip trunk gateway>
        disallow=all
        context=from-pstn
        allow=ulaw
        dtmfmode=inband

    Under sip_general_custom.conf I have:

        stunaddr=stun.xten.com
        externrefresh=120
        localnet=192.168.1.1/255.255.255.0
        nat=yes

    What is causing Asterisk to prematurely end the call and still think the call is in progress? I have no idea where to look next.

  • How do I resolve certificate errors on HP blade center

    - by Martin Hilton
    I'm trying to sort out the SSL certificate errors we get when managing our HP c7000 blade enclosures. To that end I created a signing certificate and imported it into the browser. In Onboard Administrator I created a certificate signing request, signed it with my CA, and then uploaded the certificate. This worked perfectly, and I no longer get any SSL errors when connecting to Onboard Administrator.

    The problem comes when connecting through Onboard Administrator to the iLO on the blades themselves (done by clicking the "Web Administration" link). Onboard Administrator links to the blade by its IP address rather than its hostname, but the certificate signing request that iLO creates uses the hostname. Even after that certificate is signed, the browser still complains it is for the wrong domain. I either need Onboard Administrator to connect to the blades by hostname rather than IP address, or a certificate signing request that contains the IP address as the CN rather than the hostname; it doesn't particularly matter which. Does anybody know how to configure this?
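    A third route, offered as an untested sketch: when your CA signs the CSR that iLO generated, it can add the blade's IP address as a subjectAltName; browsers match SANs before the CN, so the hostname CN in the CSR stops mattering. With an OpenSSL-based CA (file names and addresses hypothetical; bash needed for the process substitution):

        openssl x509 -req -in ilo-blade01.csr \
          -CA ca.crt -CAkey ca.key -CAcreateserial \
          -days 730 -out ilo-blade01.crt \
          -extfile <(printf "subjectAltName=DNS:blade01.example.com,IP:10.0.0.21")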

  • HTTP responses: curl and wget get different results

    - by Fab
    To check the HTTP response headers for a set of URLs, I send the following request headers with curl:

        foreach ($urls as $url) {
            $ch = curl_init(); // was missing from the snippet; one handle per URL

            // Setup headers - I used the same headers from Firefox version 2.0.0.6
            // (the Accept header must be ONE array entry, not split across two)
            $header = array();
            $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,"
                      . "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            $header[] = "Cache-Control: max-age=0";
            $header[] = "Connection: keep-alive";
            $header[] = "Keep-Alive: 300";
            $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
            $header[] = "Accept-Language: en-us,en;q=0.5";
            $header[] = "Pragma: "; // browsers keep this blank.

            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)');
            curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
            curl_setopt($ch, CURLOPT_REFERER, 'http://www.google.com');
            curl_setopt($ch, CURLOPT_HEADER, true);
            curl_setopt($ch, CURLOPT_NOBODY, true);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
            curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
            curl_setopt($ch, CURLOPT_TIMEOUT, 10); // timeout 10 seconds

            $response = curl_exec($ch); // was missing from the snippet
            curl_close($ch);
        }

    Sometimes I receive 200 OK, which is good; other times 301, 302, or 307, which I consider good as well. But other times I receive weird statuses such as 406, 500, or 504, which should identify an invalid URL, yet when I open the same URLs in a browser they are fine. For example, the script returns:

        http://www.awe.co.uk/  =>  HTTP/1.1 406 Not Acceptable

    while wget succeeds:

        wget http://www.awe.co.uk/
        --2011-06-23 15:26:26--  http://www.awe.co.uk/
        Resolving www.awe.co.uk... 77.73.123.140
        Connecting to www.awe.co.uk|77.73.123.140|:80... connected.
        HTTP request sent, awaiting response... 200 OK

    Does anyone know which request header I am missing or adding in excess?

  • apache2 + mod_fastcgi + suexec + php5.2 = unstable on high load

    I am hosting several (~30) different sites on one server with apache2 + mod_fastcgi + suexec + php5. The sites have different loads and different script execution times (some process a request for 5-7 seconds, some in under 1 second). Sometimes, when a single site receives very high load (all PHP instances of that site are created and in use), the whole Apache server hangs: Apache (worker MPM) creates new processes up to the upper limit, and it looks like it starts to queue ALL new requests for EVERY site, not only the one under high load, quickly reaching the process limits. A restart of Apache solves the problem. The config:

        FastCgiConfig -singleThreshold 1 -multiThreshold 10 -listen-queue-depth 30 -maxProcesses 80 -maxClassProcesses 12 -idle-timeout 30 -pass-header HTTP_AUTHORIZATION -pass-header If-Modified-Since -pass-header If-None-Match

    (Earlier I had the default -listen-queue-depth = 100, but it didn't change anything.)

    Any suggestions? Another question: how is this listen queue implemented? Is it one queue for the whole of Apache, or a unique queue for every defined PHP application (suexec site)? I would like to achieve something like this: when one site receives high load and its queue is full, the server bounces the next request, but only for that one site; other sites should keep working properly.

  • How can I configure Apache to cache the images it is serving? Right now it is giving headers that prevent caching

    - by Tchalvak
    There's a LAPP stack (PostgreSQL instead of MySQL) running over on http://ninjawars.net, serving up images that don't seem to cache. I just recently noticed that images don't seem to be cached with any kind of useful frequency as I was reloading a page with a few images on it: http://www.ninjawars.net/attack_player.php

    Here is an example image (they're probably all being served exactly the same): http://www.ninjawars.net/images/characters/fighter.png

    Checking the headers, caching seems to be set to Cache-Control: max-age=0. The full exchange for this image (like all the others) is:

        Request URL: http://www.ninjawars.net/images/characters/fighter.png
        Request Method: GET
        Status Code: 200 OK

        Request Headers
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
        Cache-Control: max-age=0
        Referer: http://www.ninjawars.net/images/characters/fighter.png
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.3 Safari/533.4

        Response Headers
        Accept-Ranges: bytes
        Content-Length: 938
        Content-Type: image/png
        Date: Thu, 13 May 2010 21:24:07 GMT
        ETag: "ffd4d-3aa-4837efc120540"
        Last-Modified: Mon, 05 Apr 2010 15:28:45 GMT
        Server: Apache

    So what modules, config, or htaccess changes do I make to have Apache cache images, e.g. for 24 hours?
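    A minimal sketch of the usual fix, assuming a stock Apache with mod_expires available (on Debian-style layouts: a2enmod expires, then reload). Note that the max-age=0 above sits among the request headers (a hard reload by the browser), while the response simply carries no caching policy at all; mod_expires adds one:

        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png  "access plus 24 hours"
            ExpiresByType image/gif  "access plus 24 hours"
            ExpiresByType image/jpeg "access plus 24 hours"
        </IfModule>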

  • Wget save cookies not working

    - by TrymBeast
    I've been trying to log in to pyLoad through the web API, but wget is not saving the cookies and I don't understand why. I'm using the following command:

        wget --delete-after --keep-session-cookies --save-cookies=my_cookies.txt --post-data="username=USERNAME&password=PASSWORD" http://localhost:8000/api/login

    But the content of my_cookies.txt is only:

        # HTTP cookie file.
        # Generated by Wget on 2012-06-23 22:31:33.
        # Edit at your own risk.

    When I run the same command in debug mode, I get the following output, which includes the Set-Cookie in the response header:

        DEBUG output created by Wget 1.10.2 (Red Hat modified) on linux-gnueabi.
        --22:31:11--  http://localhost:8000/api/login
        Resolving localhost... 127.0.0.1
        Caching localhost => 127.0.0.1
        Connecting to localhost|127.0.0.1|:8000... connected.
        Created socket 3.
        Releasing 0x000504d0 (new refcount 1).

        ---request begin---
        POST /api/login HTTP/1.0
        User-Agent: Wget/1.10.2 (Red Hat modified)
        Accept: */*
        Host: localhost:8000
        Connection: Keep-Alive
        Content-Type: application/x-www-form-urlencoded
        Content-Length: 32
        ---request end---
        [POST data: username=USERNAME&password=PASSWORD]
        HTTP request sent, awaiting response...

        ---response begin---
        HTTP/1.1 200 OK
        Content-Length: 34
        Content-Type: application/json
        Cache-Control: no-cache, must-revalidate
        Set-cookie: beaker.session.id=405390ddc809efed54820638c95d7997; expires=Tue, 19-Jan-2038 04:14:07 GMT; Path=/
        Connection: Keep-Alive
        Date: Sat, 23 Jun 2012 21:31:11 GMT
        Server: CherryPy/3.1.2 WSGI Server
        ---response end---
        200 OK

        hs->local_file is: login (not existing)
        Registered socket 3 for persistent reuse.
        TEXTHTML is on.
        Length: 34 [application/json]
        Saving to: `login'
        100%[=======================================>] 34  --.-K/s  in 0s
        22:31:11 (1.28 MB/s) - `login' saved [34/34]

        Removing file due to --delete-after in main(): Removing login.
        Saving cookies to my_cookies.txt.
        Done saving cookies.

    Can anyone tell me what I am doing wrong? Thanks in advance!
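    For comparison, a curl sketch of the same login (same endpoint assumed): if curl's jar ends up holding the beaker.session.id cookie, the server side is fine and the problem lies in wget's handling of it.

        curl -v -c my_cookies.txt \
          -d "username=USERNAME&password=PASSWORD" \
          http://localhost:8000/api/login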

  • Can I use IIS to do Active Directory single sign-on for another website?

    - by brofield
    I'm trying to add Active Directory single-sign-on support to an existing SOAP server. The server can be configured to accept a trusted reverse proxy and use the X-Remote-User HTTP header for the authenticated user. I want to configure IIS to be the trusted proxy for this service, so that it handles all of the Active Directory authentication for the SOAP server. Basically IIS would have to accept HTTP connections on port X and URL Y, do all the authentication, and then proxy the connection to a different server (most likely the same X and Y).

    Unfortunately I have no knowledge of IIS or AD (so I am trying my best to learn enough to build this solution), so please be gentle. I would assume this is not an uncommon scenario, so is there some easy way to do it? Is this sort of functionality built into IIS, or do I need to build some sort of IIS proxy program myself? Is there a better option for getting the authentication done and the X-Remote-User HTTP header set than requiring IIS?

    Update: for example, what I am trying to create is:

        [CLIENT]          [IIS]          [AD]          [SOAP-SERVER]
        1.  |---------------->|
        2.  |<--------------->|<----------->|
        3.                    |--------------------------->|
        4.                    |<---------------------------|
        5.  |<----------------|

        1. POST to http://example.com/foo/bar.cgi
        2. Client is not authenticated, so do authentication
        3. Once validated, send request to server (X-Remote-User: {userid})
        4. Process request, send response
        5. Forward response to client

    I need to know how to configure IIS to automatically authenticate the user against AD, and then proxy the request to the actual server, sending the userid in the X-Remote-User HTTP header.

  • Loopback connection via PHP's getimagesize crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that is crashing our nginx server running Magento down to the following point. Background: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). With the nginx error_log level set to info, we get the following line (break inserted for better readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too while sending request to upstream,
        client: XXXXXXXXX, server: test.local, request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    Checking the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType()):

        # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) =
            getimagesize($this->_fileName);

    The requested filename is a URL back to the same server that is executing the script: a link to a non-existing static .gif, e.g. http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif. When the above line executes, any subsequent request to the nginx server stops being answered; after waiting around 10 minutes, nginx starts answering requests again.

    I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but that does not crash. It simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).

  • How to block/avoid a particular IP when connecting to websites?

    - by Mark
    I'm having trouble connecting to a particular website. I can view it through a proxy, but not from home. So I ran a traceroute:

        Tracing route to fvringette.com [76.74.225.90] over a maximum of 30 hops:

          1    <1 ms    <1 ms    <1 ms  <snip>
          2     *        *        *     Request timed out.
          3     9 ms     7 ms    27 ms  rd2bb-ge2-0-0-22.vc.shawcable.net [64.59.146.226]
          4     8 ms     7 ms     7 ms  rc2bb-tge0-9-2-0.vc.shawcable.net [66.163.69.41]
          5    10 ms     9 ms     9 ms  rc2wh-tge0-0-1-0.vc.shawcable.net [66.163.69.65]
          6    27 ms    23 ms    22 ms  ge-gi0-2.pix.van.peer1.net [206.223.127.1]
          7    18 ms    18 ms    20 ms  10ge.xe-0-2-0.van-spenc-dis-1.peer1.net [216.187.89.206]
          8     9 ms    11 ms    10 ms  64.69.91.245
          9     *        *        *     Request timed out.
         10     *        *        *     Request timed out.
         ...

    Looks like this "64.69.91.245" is somehow blocking me. Can I tell my computer to avoid/bypass that IP when trying to connect?

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which caches via Memcached):

        100 concurrent calls to API call (http):  115ms (median)  260ms (max)
        100 concurrent calls to API call (https): 6.1s (median)   11.9s (max)

    I've done a bit of research, disabled the most expensive SSL ciphers, and enabled SSL caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache workers. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it. There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL sessions) seem to do anything. Sample results below:

        $ ab -k -n 100 -c 100 https://URL
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking URL.com (be patient).....done

        Server Software:        nginx/1.0.15
        Server Hostname:        URL.com
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256

        Document Path:          /PATH
        Document Length:        73142 bytes

        Concurrency Level:      100
        Time taken for tests:   12.204 seconds
        Complete requests:      100
        Failed requests:        0
        Write errors:           0
        Keep-Alive requests:    0
        Total transferred:      7351097 bytes
        HTML transferred:       7314200 bytes
        Requests per second:    8.19 [#/sec] (mean)
        Time per request:       12203.589 [ms] (mean)
        Time per request:       122.036 [ms] (mean, across all concurrent requests)
        Transfer rate:          588.25 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       65  168   64.1    162     268
        Processing:   385 6096 3438.6   6199   11928
        Waiting:      379 6091 3438.5   6194   11923
        Total:        449 6264 3476.4   6323   12196

        Percentage of the requests served within a certain time (ms)
          50%   6323
          66%   8244
          75%   9321
          80%   9919
          90%  11119
          95%  11720
          98%  12076
          99%  12196
         100%  12196 (longest request)
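    For reference, the standard nginx session-reuse knobs, with values as assumptions to tune. One observation from the numbers above: ab opens 100 fresh connections and does not resume TLS sessions, so every request pays a full RSA handshake on a single core, which fits the "https is slow yet total CPU sits at 50%" picture.

        # in the http{} or server{} block
        ssl_session_cache     shared:SSL:10m;   # share resumable sessions across workers
        ssl_session_timeout   10m;
        ssl_prefer_server_ciphers on;           # let the server pick the cheaper suite first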

  • nginx and proxy_hide_header

    - by giskard
    When I curl a URL I get this answer back:

        < HTTP/1.1 200 OK
        < Server: nginx/0.7.65
        < Date: Thu, 04 Mar 2010 12:18:27 GMT
        < Content-Type: application/json
        < Connection: close
        < Expires: Thu, 04 Mar 2010 12:18:27 UTC
        < http.context.path: /1/
        < jersey.response: com.sun.jersey.spi.container.ContainerResponse@17646d60
        < http.custom.headers: {Content-Type=text/plain}
        < http.request.path: /2/messages/latest.json
        < http.status: 200
        < Transfer-Encoding: chunked

    I want to remove these headers:

        http.context.path: /1/
        jersey.response: com.sun.jersey.spi.container.ContainerResponse@17646d60
        http.custom.headers: {Content-Type=text/plain}
        http.request.path: /2/messages/latest.json
        http.status: 200

    So I used the proxy_hide_header directive this way:

        location / {
            if ($arg_id) {
                proxy_pass http://authorized;
                break;
            }
            proxy_pass http://anonymous;
            proxy_hide_header http.context.path;
            proxy_hide_header jersey.response;
            proxy_hide_header http.request.path;
            proxy_hide_header http.status;
        }

    But it doesn't work. Any clues?

  • Disconnect from PHP after output generated

    - by Oli
    I have a LEMP stack: Nginx sitting in front of PHP-FPM. Because some of the sites are heavy and there's opcode caching, PHP is set up so that only 5 child processes run; the aim is that each child deals with any request in less than half a second and then moves on to the next one.

    One problem I've found is that when a big chunk of HTML is being sent out to a user on a slow connection, the PHP thread stays occupied until they've finished downloading. Because of my current setup, I have a pretty unforgiving timeout inside PHP where the script is killed after 20 seconds. This makes sure everybody gets a turn, but on a slow connection it can mean the user gets cut off with a 504 Gateway Timeout.

    I was wondering if there is some sort of buffer solution that I could implement within or just behind Nginx that passes the request through and then buffers the content into its own cache, feeding it to the client as and when they can download it, so that the underlying PHP thread is freed up. What I'm asking for doesn't have to be PHP-specific; anything that deals with FastCGI, or even any Nginx upstream, might have a similar issue.
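    For what it's worth, nginx buffers FastCGI responses by default and only holds the PHP worker until the response fits into its buffers (spilling to a temp file if needed). A sketch of the relevant directives, with sizes as assumptions to tune:

        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php-fpm.sock;   # assumed socket path
            include fastcgi_params;

            fastcgi_buffer_size          32k;    # first chunk, incl. headers
            fastcgi_buffers              64 16k; # ~1 MB of in-memory buffering
            fastcgi_max_temp_file_size   1024m;  # overflow spools to disk
        }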

  • nginx projects in subfolders

    - by Timothy
    I'm getting frustrated with my nginx configuration, so I'm asking for help in writing my config file to serve multiple projects from sub-directories under the same root. This isn't virtual hosting, as they all use the same host value. Perhaps an example will clarify my attempt:

        request 192.168.1.1/       should serve index.php from /var/www/public/
        request 192.168.1.1/wiki/  should serve index.php from /var/www/wiki/public/
        request 192.168.1.1/blog/  should serve index.php from /var/www/blog/public/

    These projects use PHP via FastCGI. My current configuration is very minimal:

        server {
            listen 80 default;
            server_name localhost;
            access_log /var/log/nginx/localhost.access.log;
            root /var/www;
            index index.php index.html;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I've tried various things with alias and rewrite but was not able to get things set correctly for fastcgi. It seems there should be a more eloquent way than writing location blocks and duplicating root, index, SCRIPT_FILENAME, etc. Any pointers to get me headed in the right direction are appreciated.
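    For one project, a sketch of the pair of blocks this usually ends up as (paths follow the layout above; one pair per project, as nginx offers no built-in way to remove the duplication entirely):

        # static files under /wiki/ come from the project's public/ dir
        location /wiki/ {
            alias /var/www/wiki/public/;
            index index.php;
        }

        # .php under /wiki/ goes to FastCGI with a rewritten script path;
        # regex locations win over the plain prefix block above
        location ~ ^/wiki/(.+\.php)$ {
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME /var/www/wiki/public/$1;
        }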

  • Both SSL and non-SSL on a single port

    - by Zulakis
    I would like to make my apache2 webserver serve both HTTP and HTTPS on the same port. With the different methods I tried, it was either not working for http or for https. How can I do this?

    Update: if I enable SSL and then visit the site over plain http, I get a page like this:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://server/"><b>https://server/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny16 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g Server at server Port 443</address>
        </body></html>

    Since Apache can evidently tell plain HTTP from SSL on the same port, it seems very much possible to serve both. A first step would be to change this default page so it presents a 301 Moved header instead. Update 2: according to this, it is possible; now the question is just how to configure apache to do it.
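    A commonly cited trick for the "first step" described above, sketched as an assumption (it redirects to the site root, dropping the originally requested path): Apache answers plain HTTP on the SSL port with that 400 page, and ErrorDocument can point the 400 at an https:// URL, which turns it into a redirect.

        # inside the SSL vhost
        ErrorDocument 400 https://server/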

  • .NET 2.0 Application now running slow on IIS 7.5

    - by Valien
    I recently moved (and am still testing) an application from a Windows 2003 Server (physical box) running IIS 6.x to a Windows 2008 R2 Standard (VM) IIS 7.5 server. The application is a .NET Framework 2.0 application running under a 2.0 app pool. The site works great except for one thing: it takes forever to get a request back. I've been tracking it with Chrome's Inspect Element: a query to the site can take up to 45 seconds to answer, and when it does, the page(s) render instantly. It's that initial request that's killing it.

    I see no error logs or issues with the application, Windows Event Viewer, or even the IIS logs, so I'm not sure where to look next. One change is that the app previously sat behind a PIX firewall and is now behind a larger network environment in a DMZ zone (I believe NetScaler is also being used to manage the network). I do not have the rights or ability to look at the network itself, but I can ask the data center folks to look deeper; first, though, I wanted to make sure it's not my application or IIS causing the slowdown.

    In summary: the .NET 2.0 application worked great on IIS 6.x; moved to IIS 7.5 it is slow on the initial request, but once it responds, pages come back instantly.

    Edit, for the solution: it turned out to be the SOAP calls that were slowing the site down. In the new data center my application cannot reach the SOAP endpoints, so the calls time out after 40-45 seconds or so. Now I'm trying to find out if I can install a proxy server to redirect this...

  • Why is 32-bit-mode required in IIS7.5 for my app?

    - by Jonas Lincoln
    I have a .NET 4 web application running on a 64-bit Windows Server 2008 machine. I can only get it to run when I set the app pool's "Enable 32-bit applications" to true. All DLLs are compiled for .NET 4 (verified with corflags.exe). How can I figure out why Enable 32-bit is required? The error message from the event log when starting as a 64-bit app pool:

        Event code: 3008
        Event message: A configuration error has occurred.
        Event time: 2011-03-16 08:55:46
        Event time (UTC): 2011-03-16 07:55:46
        Event ID: 3c209480ff1c4495bede2e26924be46a
        Event sequence: 1
        Event occurrence: 1
        Event detail code: 0

        Application information:
            Application domain: removed
            Trust level: Full
            Application Virtual Path: removed
            Application Path: removed
            Machine name: NMLABB-EXT01

        Process information:
            Process ID: 4324
            Process name: w3wp.exe
            Account name: removed

        Exception information:
            Exception type: ConfigurationErrorsException
            Exception message: Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format.
               at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective)
               at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory()
               at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal()
               at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig)
               at System.Web.Compilation.BuildManager.CallPreStartInitMethods()
               at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException)

            Could not load file or assembly 'System.Data' or one of its dependencies. An attempt was made to load a program with an incorrect format.
               at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
               at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks)
               at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
               at System.Reflection.Assembly.Load(String assemblyString)
               at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective)

        Request information:
            Request URL: "our url"
            Request path: "url"
            User host address: ip-address
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: "app-pool"

        Thread information:
            Thread ID: 6
            Thread account name: "app-pool"
            Is impersonating: False
            Stack trace:
               at System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective)
               at System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory()
               at System.Web.Configuration.AssemblyInfo.get_AssemblyInternal()
               at System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig)
               at System.Web.Compilation.BuildManager.CallPreStartInitMethods()
               at System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters, PolicyLevel policyLevel, Exception appDomainCreationException)

        Custom event details:
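    A hedged follow-up check: run corflags against every assembly in bin\ (hypothetical path below), not just your own, since a single 32-bit-only managed dependency, or a native 32-bit DLL (which corflags cannot inspect at all), is enough to force the 32-bit requirement:

        corflags.exe C:\inetpub\wwwroot\app\bin\SomeDependency.dll
        :: 32BIT / 32BITREQ = 1 on any assembly means it cannot load in a 64-bit process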

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but nginx is happy sending a GET to the FastCGI backend (Mercurial hgwebdir in this case), yet simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }

        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;

            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }

            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }

            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;

            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but a POST produces this in the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same URL, which sends a POST request.
