Search Results

Search found 5793 results on 232 pages for 'requests'.

Page 3 of 232

  • The incomplete list of impolite WP7 dev requests

    In my previous list of WP7 user requests, I piled all of my user hopes and dreams for my new WP7 phone (delivery date: who the hell knows) onto the universe as a way to make good things happen. And all that’s fine, but I’m not just a user; like most of my readers, I’m also a developer and have a need to control my phone with code. I have a long list of applications I want to write and an even longer list of applications I want other developers to write for me. Today at 1:30p...

    Read the article

  • Reduce HTTP Requests method for js and css

    - by Giberno
    Can these approaches reduce HTTP requests? Multiple JavaScript files combined with the & symbol:

        <script type="text/javascript"
                src="http://yui.yahooapis.com/combo?2.5.2/build/yahoo-dom-event/yahoo-dom-event.js&http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js">
        </script>

    Multiple CSS files with @import:

        <style type="text/css">
        @import url(css/style.css);
        @import url(css/custom.css);
        </style>
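
    Worth noting: the YUI combo service only concatenates files it hosts itself, so the Google-hosted jQuery above cannot be folded into the same request, and each @import still costs one HTTP request per stylesheet. The reliable way to cut requests is to concatenate at build time and reference one file of each type; a minimal sketch (file names are hypothetical):

        <!-- one script and one stylesheet, each a single request -->
        <script type="text/javascript" src="/js/combined.min.js"></script>
        <link rel="stylesheet" type="text/css" href="/css/combined.css" />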

    Read the article

  • Bingbot requests from Google IP address

    - by JITHIN JOSE
    We have some suspicious requests to our server:

        74.125.186.46 - - [24/Aug/2014:23:24:11 -0500] "GET <url> HTTP/1.1" 200 16912 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
        74.125.187.193 - - [24/Aug/2014:23:24:12 -0500] "GET <url> HTTP/1.1" 200 20119 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"

    As shown, the user agent claims to be bingbot, but whois data for the IP addresses (74.125.186.46 and 74.125.187.193) shows they belong to Google's servers. So is this Google, Bing, or some other content scraper?
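
    The standard check for a claimed crawler is a reverse DNS lookup followed by a forward confirmation: genuine bingbot addresses reverse-resolve to *.search.msn.com, and genuine Googlebot addresses to *.googlebot.com or *.google.com. A quick sketch using the first IP from the log above:

        host 74.125.186.46     # a real bingbot PTR ends in .search.msn.com
        host <name-from-PTR>   # the forward lookup should return 74.125.186.46

    If either step fails to line up with the claimed bot, the user agent is spoofed.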

    Read the article

  • HTTP requests, using sprites and file sizes

    - by crazy sarah
    Hi all. I'm in the process of finding out all about sprites and how they can speed up your pages. I've used SpriteMe to create an overall sprite image which is 130kb; this is made up of 14 images with a combined total size of about 65kb. So is it better to have one HTTP request and a file size of 130kb, or 14 requests for a total of 65kb? Also, there is a detailed image which has been put into the sprite, which caused its size to go up by about 60kb odd; this used to be a separate jpg image which was only 30kb. Would I be better off having it separate and suffering the additional request?

    Read the article

  • Google nofollow, Disavow and Link Removal Requests

    - by PsychoDad
    I am the owner of http://www.YouReview.net and I am constantly getting requests from people asking me to remove links to their sites, or else they will disavow the links, and they threaten me with Google penalties. All of this is a bit frustrating because, first, I use nofollow on any link outside the YouReview.net domain. Second, I've never heard of Google penalizing a site for linking to other websites. My question is twofold: do disavowed links penalize the site that was disavowed? And does the "nofollow" attribute on tags absolutely guarantee that the link is not followed and not counted for search engine ranking? Why don't more people know about nofollow?
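
    For reference, the attribute in question is rel="nofollow" on the anchor tag (URL illustrative):

        <a href="http://example.com/some-page" rel="nofollow">external link</a>

    Google documents it as an instruction not to pass ranking credit through the link, though "hint" rather than "guarantee" is the safer way to think about it. A disavow file, for its part, only affects how Google weighs links pointing at the site that submits it; it is not a penalty mechanism against the linking site.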

    Read the article

  • Are HTTP requests cached? [closed]

    - by nischayn22
    Many HTTP requests are sent repeatedly by browsers on almost every page load, such as the request for the jQuery .js file. Since these files are used on so many sites, don't modern browsers keep a cache for them? I am thinking of a system where the browser has a cached copy of a very frequently used .js file. On a new request for the .js file, it sends the server a request for a hash of the .js file (provided the server can reply to that) and compares the returned hash with the cached copy's hash... the rest is intuitive.
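
    What's described here is essentially HTTP's built-in validation model: browsers do cache these files, and on revalidation they send a conditional request rather than asking for a hash separately. A sketch of the exchange (ETag value hypothetical):

        GET /ajax/libs/jquery/1.4.2/jquery.min.js HTTP/1.1
        Host: ajax.googleapis.com
        If-None-Match: "abc123"

        HTTP/1.1 304 Not Modified
        ETag: "abc123"
        Cache-Control: public, max-age=31536000

    With a long max-age the browser doesn't even send the conditional request until the cached copy expires, and a shared CDN URL like the one above lets many sites hit the same cached file.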

    Read the article

  • Suddenly my server rejects all POST requests

    - by Sharen Eayrs
    Just go to meet-romance.com/test.htm. The script there is simple: a form with a button. <form action="test.htm" method="post"> <input name="Button1" type="submit" value="button" /> </form> It doesn't work. Press the button in Firefox and I get a connection reset error. I wonder why. It has been happening since yesterday. I have migrated all domains that require POST requests to another server. I suppose a reset of the server would fix it, only for it to happen again some other time. So I wonder if anyone has a clue as to why.
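
    One way to take the browser out of the picture is to replay the POST from a command line and watch where the connection dies:

        # -v shows the full exchange; -d sends the same form field the page posts
        curl -v -d "Button1=button" http://meet-romance.com/test.htm

    If curl gets the same reset, the problem is server-side (a firewall, mod_security rule, or filter acting on POST bodies) rather than anything in the page.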

    Read the article

  • *Simple* way to block DDoS by number of requests

    - by Eduard Luca
    I have 3 Varnish 3.0.2 servers with Apache 2 as backends, load balanced through a separate HAProxy server. I need to find a very simple program (I'm not much of a sysadmin) which blocks requests from an IP if that IP has made more than X requests in Y seconds. Would something like this be achievable with a simple solution? Right now I have to block all requests manually with iptables.
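
    For modest thresholds, iptables alone can do this with the recent module; a minimal sketch, assuming port 80 and a limit of 20 new connections per 10 seconds (the module's default tracking list caps --hitcount at 20 unless its ip_pkt_list_tot parameter is raised):

        # record each new connection, then drop IPs exceeding the threshold
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 10 --hitcount 20 --name HTTP -j DROP

    Two caveats: this counts TCP connections rather than HTTP requests (keep-alive lets one connection carry many requests), and the rule must sit on whichever box still sees the client's real IP, which in this topology is the HAProxy server.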

    Read the article

  • Windows Server 2008R2 IIS7.5, Requests getting 401 status

    - by TLBH
    We have a web site running on Windows Server 2008 R2 / IIS 7.5, and we are seeing several errors reported from our global error handler saying "Request Timed Out". I matched one up to the IIS log file and see the request took 135116 (presumably milliseconds), had an sc-status of 401, an sc-substatus of 0, and an sc-win32-status of 64. 2 requests failed in this way, but lots of surrounding requests (1979 successful requests vs 2 failures) for the same user went through perfectly fine, with the same cs-username, which makes a 401 seem a little odd. The target of the requests is an ASP.NET web service's web method called by the .NET client library; it's called many times per user (3 times per second) to keep a page updated. We're getting some users reporting a freezing effect and I think this may be the cause. Any ideas? Peter

    Read the article

  • ASP.NET website http requests appear to be queueing

    - by scolemann
    We cloned our servers this weekend into a colo. All non-ASP.NET sites are performing great, but ASP.NET sites are very slow. It appears to be an issue with the requests/connections, but I cannot figure out where. The reason I think it is a problem with the connections is that when I launch Fiddler and watch the requests, all requests appear to happen sequentially. Even the static image requests are taking 5 seconds, and another one doesn't start until the first one finishes. MaxConnections is set to 100 in machine.config and the "website connections" are set to unlimited. Any idea what else could be causing this? From machine.config:
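
    The excerpt cuts off before the machine.config snippet; the setting named above lives in the system.net section and, purely as an illustration (not the poster's actual file), looks like this:

        <system.net>
          <connectionManagement>
            <!-- caps outbound System.Net connections per remote host -->
            <add address="*" maxconnection="100" />
          </connectionManagement>
        </system.net>

    Note that maxconnection limits outbound System.Net connections made by the application, not inbound browser connections, so it only matters here if the pages themselves call out over HTTP.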

    Read the article

  • How to defend against botnet HTTP requests

    - by Killercode
    I have a server with WHM + cPanel, and 5 of my customers' sites got infected with zbot. This means that their domains are constantly receiving requests to certain destinations. I tried to use mod_security, but it seems that it can't filter every request, and I don't really know why. I still see the connections coming in on the access log, and they're consuming a LOT of bandwidth and server load. Those accounts have already been cleaned, so all of those requests now go to error 404 (for the ones caught by mod_security, I am dropping the connection). Are there any more ways to defend against these requests?
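
    Since the junk requests already end in 404s, one low-effort addition is to ban repeat offenders from the access log at the firewall, e.g. with fail2ban. A sketch of a jail, where the log path, filter choice, and thresholds are assumptions to adapt:

        [apache-zbot]
        enabled  = true
        filter   = apache-noscript
        logpath  = /usr/local/apache/logs/access_log
        findtime = 60
        maxretry = 10
        bantime  = 3600
        action   = iptables-multiport[name=zbot, port="http,https"]

    Banning at the packet level keeps the bots from ever reaching Apache again, which is what saves the bandwidth that mod_security alone cannot.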

    Read the article

  • Diagnose cause of long running requests in IIS 7.0

    - by Shlomi Fruchter
    We are running an ASP.NET web application on IIS 7.0, Windows Server 2008 R2, with SQL Server 2008 R2 for the DB. On weekends, when traffic is high, the request queue length on the IIS servers increases (up to 800 requests) and then drops, every minute or so. I can see that the servers are handling some requests which, according to the 'Current Requests' view in IIS Manager, are long running (their Time Elapsed value ranges from 20 to 50 seconds). Those requests are not necessarily heavy queries; actually, I can't understand why they are taking so long. Can it be because the client is closing the connection on their side? Thanks, Shlomi
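
    When it recurs, IIS can list the stuck requests from the command line, which is easier to capture than the Manager view; a sketch (threshold in milliseconds):

        %windir%\system32\inetsrv\appcmd list requests /elapsed:20000

    The output shows each long-running request's URL, site, and current pipeline module, which usually narrows down whether the time is going to the database, to authentication, or to a client that has gone away.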

    Read the article

  • shared hosting: ssl domain receives all server's ssl requests, google gets it wrong

    - by pixeline
    This website is hosted on a shared server. I've recently had my hosting provider secure our website using SSL (https://domain.com instead of http://domain.com). Ever since then, all HTTPS requests sent to the server are redirected to my website: https://otherdomain.com leads to a certificate warning and then, if you continue, to my website. OK, my fault, I should have known SSL means 1 IP; otherwise, this can happen. But Google Search results for my website's target keywords are now displaying these other websites above my own, even though they have nothing even remotely related to the target keywords! Already done: provided a canonical URL in the HTML page; reported the problem to the server manager, who tells me it's normal but that he'll look for a solution (that was one week ago, no answer since). I have no idea why Google is indexing these HTTPS URLs: I thought someone would have to submit them, or have them inside an HTML page, in order for Google to actually index dummy HTTPS domains, but I see no reason why someone would do that. Any suggestion on how to solve this situation? Go-live is in one week and SEO looks really bad because of this.
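
    For reference, the canonical hint mentioned is a link element in the page head, e.g. <link rel="canonical" href="http://domain.com/" /> (URL illustrative). It helps Google consolidate the duplicate URLs it has already picked up, but it cannot stop the server from answering HTTPS for the wrong hostnames in the first place; that part needs fixing on the hosting side.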

    Read the article

  • IIS - HTTP Redirect all requests for one virtual directory to another

    - by nekno
    How do I set up an HTTP Redirect rule to redirect all requests for a virtual directory to another virtual directory, when I don't know the hostname or complete URL, and cannot use the URL Rewrite module? The following redirects should work:

        http://host1/app/oldvdir             -> http://host1/app/newvdir
        http://host1/app/oldvdir/            -> http://host1/app/newvdir/
        http://host1/app/oldvdir/login.aspx  -> http://host1/app/newvdir/login.aspx
        http://host2/app/oldvdir/login.aspx  -> http://host2/app/newvdir/login.aspx

    I would like to place the redirect rule in the app's root web.config. I have attempted the following rules, but the end result is simply that the redirected vdir gets duplicated on the end of the original vdir until reaching the max URL length, e.g., http://host/oldvdir/login.aspx -> http://host/oldvdir/newvdir/newvdir/newvdir/... Rules in the root web.config (I have also tried all sorts of combinations of settings, with and without leading and trailing slashes, etc.):

        <location path="oldvdir">
          <system.webServer>
            <httpRedirect enabled="true" exactDestination="false" httpResponseStatus="Permanent">
              <add wildcard="*/oldvdir/*" destination="/newvdir/" />
            </httpRedirect>
          </system.webServer>
        </location>
        <location path="oldvdir/">
          <system.webServer>
            <httpRedirect enabled="true" exactDestination="false" destination="/newvdir" httpResponseStatus="Permanent" />
          </system.webServer>
        </location>
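
    One configuration often suggested for this case (a sketch, untested here): drop the wildcard collection, give destination the absolute path of the new vdir, and let exactDestination="false" append the rest of the request path:

        <location path="oldvdir">
          <system.webServer>
            <!-- all attributes shown are standard on IIS 7's httpRedirect element -->
            <httpRedirect enabled="true"
                          destination="/app/newvdir"
                          exactDestination="false"
                          httpResponseStatus="Permanent" />
          </system.webServer>
        </location>

    Because the destination is a root-relative path, the client resolves the Location header against whichever host the request arrived on, so the host-agnostic requirement comes for free.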

    Read the article

  • Tracking feature requests for small-scale components

    - by DXM
    I'm curious how other development teams (especially those that work in moderate to large development groups) track "future" features/wishlists for functionality for internally developed frameworks or components. I know the standard advice is that a development team should find one good tool for tracking bugs/features and use that for everything, and I agree with that if the future requests are for the product itself. In my company we have an engineering department, which is broken up into multiple groups, and within each there can be one to several agile teams. The bug tracking product we use has been "a leader since 1997" (its UI/usability seems to also be evaluated against that year even today), but my agile team, or even my group, doesn't really control what is used by the whole department. What we are looking to track is not necessarily product features but expansion/nice-to-have functionality for internal components that go into our product. So, to name a few examples... a framework/utility library on top of CppUnit which our developers share; a low-level IPC communications framework; a common development SDK that myself and several other team leads started, to help share some common code/tools at the department-wide level (this SDK is released as an internal "product" to each of the groups). Is the standard practice to use the one bug tracking tool? Or would it make more sense to set up something more localized specifically for our needs and maintain it ourselves? It's also unclear how management will feel if developers start performing "IT" roles of maintaining software and servers. At the same time, right now, we use Excel files, an internal wiki, and MS OneNote for this kind of stuff, and that just doesn't feel right. (I'm afraid to ask for actual software recommendations, since that might make this question more localized or something. Also, developers need this way more than management, so it would be nice to find something either free or no more than the cost of a happy hour.)

    Read the article

  • Wireless Network Found, can't connect, repeated requests for authentication

    - by Herm Holland
    After trawling through the internet, on forums, support websites, and through dozens upon dozens of answered questions on this site, I've not found a solution to what seems like a fairly common problem: I cannot connect to a wireless network, and am continually asked for the network password. I have tried countless suggested solutions in the different places I've already referred to; none of them have worked. Details of my experience are as follows. I have just recently installed Ubuntu 12.04.1 (32-bit). Ubuntu installed on my system seemingly fine, and I even formatted my hard drive during the process; it's as if it were a new desktop computer. During the installation I was asked to connect to a wireless network. I have a USB wireless card connected, which I have used to connect desktop PCs, laptops, and a Wii to the internet from approximately the same area of the house (thus the same distance from the wireless router). I chose my network, entered the correct password for it (I double-checked; it's definitely the right password) and proceeded with the installation. Several times before the installation was complete, I was asked to authenticate the connection, and this seemed to do nothing each time. On the repeated screens the password was already entered in the appropriate box. When Ubuntu booted up, the first thing I was faced with (other than something about language settings) was another request for authentication. Again, the password was already there, so I clicked connect. It did not connect. Instead, I was once again faced with repeated requests every few minutes. I went onto my laptop, which is connected to this network, checked the details of the network, and entered them manually into my Ubuntu PC (including the IPv4 and IPv6 information), but this didn't work either, so I set it back to finding the settings automatically. Note also that the "Connect automatically" and "Available to all users" boxes are checked, and have been unchecked and rechecked countless times. I have also tried having my user account connect automatically, and having a password required at the welcome screen. Whilst I've been writing this, it has gone through a spate of connecting successfully to the network for less than a minute at a time before coming offline again, only to repeat the process; but it has now returned to prompting me for a password every couple of minutes. This computer previously ran Fedora and had no trouble connecting to, and maintaining, a connection. I also have a laptop running Windows 7 less than a metre away from this desktop PC, which is connected and has no trouble maintaining a connection at 50%-100% strength (fluctuating). Therefore I know it's not the wireless card, not the PC itself, not the access point, and not the location of my PC or wireless card; it is something specific to Ubuntu. Everything else has worked fine, but the moment Ubuntu was introduced into the equation, it all went wrong. Honestly, I prefer Ubuntu as an OS to Fedora, but if I can't solve this problem it'll be straight back to Fedora. Can anyone help me at all?

    Read the article

  • /server-status shows over 240 requests like "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    - by Stefan Lasiewski
    Some details: Webserver: Apache/2.2.13 (FreeBSD) mod_ssl/2.2.13 OpenSSL/0.9.8e. OS: FreeBSD 7.2-RELEASE (this is a FreeBSD jail). I believe I use the Apache 'prefork' MPM (I run the default for FreeBSD) and the default value for MaxClients (256). I have enabled mod_status with "ExtendedStatus On". When I view /server-status, I see a handful of regular requests. I also see over 240 requests from the localhost, like these:

        37-0 - 0/0/1 . 0.00 1510 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        38-0 - 0/0/1 . 0.00 1509 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        39-0 - 0/0/3 . 0.00 1482 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        40-0 - 0/0/6 . 0.00 1445 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0

    I also saw about 2417 requests yesterday from the localhost, like these:

        Apr 14 11:16:40 192.168.16.127 httpd[431]: www.example.gov 127.0.0.2 - - [15/Apr/2010:11:16:40 -0700] "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    The page at http://wiki.apache.org/httpd/InternalDummyConnection says "These requests are perfectly normal and you do not, in general, need to worry about them", but I'm not so sure. Why are there over 230 of these? Are these active connections? If I have "MaxClients 256" and over 230 of these connections, it seems that my webserver is dangerously close to running out of available connections. It also seems like Apache should only need a handful of these "internal dummy connections". We actually had two unexplained outages last night, and I am wondering if these "internal dummy connections" caused us to run out of available connections. UPDATE 2010/04/16: It is 8 hours later. The /server-status page still shows 243 lines which say "www.example.gov OPTIONS *". I believe these connections are not active: the server is mostly idle (1 request currently being processed, 9 idle workers), and there are only 18 active httpd processes on the Unix host. If these connections are not active, why do they show up under /server-status? I would have expected them to expire a few minutes after they were initialized.
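
    Two points usually resolve this worry: the parent process opens these connections merely to wake idle children (prefork with many spare servers generates more of them), and in ExtendedStatus output the request column shows the last request each slot handled, so OPTIONS lines attached to idle or open ('.') slots are history, not live connections counting against MaxClients. If the log noise is unwanted, the usual sketch is to tag and exclude them (log path illustrative; keep your existing CustomLog target):

        SetEnvIf User-Agent "internal dummy connection" dummy
        CustomLog /var/log/httpd-access.log combined env=!dummy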

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by rbeier
    Hi, We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server. A couple times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this? There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem. If we can't find the cause, we're thinking of a few workarounds we could try: Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule. Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server... (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck. What do you think? Thanks for your help, Richard
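
    The first workaround amounts to a one-line change in the shared web.config; a sketch, using the name under which IIS 7 registers the module:

        <system.webServer>
          <modules>
            <!-- stops the Windows auth module from running for these sites -->
            <remove name="WindowsAuthentication" />
          </modules>
        </system.webServer>

    With the module removed, the Windows-auth admin section has to live in the separate site described above, since nothing in the shared sites will be able to challenge for credentials.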

    Read the article

  • Determining a realistic measure of requests per second for a web server

    - by Don
    I'm setting up an nginx stack and optimizing the configuration before going live. Running ab to stress test the machine, I was disappointed to see things topping out at 150 requests per second, with a significant number of requests taking 1 second to return. Oddly, the machine itself wasn't even breathing hard. I finally thought to ping the box and saw ping times around 100-125 ms (the machine, to my surprise, is across the country). So it seems like network latency is dominating my testing. Running the same tests from a machine on the same network as the server (ping times < 1 ms), I see 5000 requests per second, which is more in line with what I expected from the machine. But this got me thinking: how do I determine and report a "realistic" measure of requests per second for a web server? You always see claims about performance, but shouldn't network latency be taken into consideration? Sure, I can serve 5000 requests per second to a machine next to the server, but not to a machine across the country. If I have a lot of slow connections, they will eventually impact my server's performance, right? Or am I thinking about this all wrong? Forgive me if this is network engineering 101 stuff; I'm a developer by trade.
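
    The latency effect can be quantified: one connection issuing sequential requests over a 100 ms round trip can never exceed about 10 requests per second (1 / 0.1 s), regardless of server speed, so a far-away ab run needs proportionally higher concurrency to saturate the machine. A sketch of a fairer report, benchmarking with concurrency and publishing the RTT alongside (hostname hypothetical):

        ab -n 5000 -c 100 http://your-server/some-page
        ping -c 10 your-server    # report this RTT next to the req/s figure

    The same-rack number measures server capacity; the cross-country number measures one client's experience. Both are "realistic"; they just answer different questions.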

    Read the article

  • Redirect local, not internal, requests using SuSEfirewall2 or an iptables rule

    - by James
    I have a server that is running a web application deployed on Tomcat and is sitting in a test network. We're running SuSE 11 SP1 and have some redirection rules for incoming requests. For example, we don't bind port 80 in Tomcat's server.xml file; instead we listen on port 9640 and have a configuration line in SuSEfirewall2 to redirect port 80 to 9640. This is because Tomcat doesn't run as root and can't open up port 80. My web application needs to be able to make requests to port 80, since that is the port it will be using when deployed. What rule can I add so that local requests get redirected by iptables? I tried looking at this question: How do I redirect one port to another on a local computer using iptables?, but the suggestions there didn't help me. I tried running tcpdump on eth0 and then connecting to my local IP address (not 127.0.0.1, but the actual address), but I didn't see any activity. I did see activity if I connected from an external machine. Then I ran tcpdump on lo, tried to connect again, and this time I saw activity. So this leads me to believe that any requests made locally to my own IP address aren't getting handled by iptables. Just for reference, here's what my NAT table looks like now:

        Chain PREROUTING (policy ACCEPT)
        target     prot opt source      destination
        REDIRECT   tcp  --  anywhere    anywhere     tcp dpt:http redir ports 9640
        REDIRECT   tcp  --  anywhere    anywhere     tcp dpt:xfer redir ports 9640
        REDIRECT   tcp  --  anywhere    anywhere     tcp dpt:https redir ports 8443

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source      destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source      destination
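
    The diagnosis at the end is correct: locally generated packets never traverse PREROUTING, only the nat OUTPUT chain, which is empty here. A sketch of the usual fix (substitute the machine's actual address for <server-ip>):

        # mirror the PREROUTING redirect for locally generated traffic
        iptables -t nat -A OUTPUT -p tcp -d <server-ip> --dport 80 -j REDIRECT --to-ports 9640

    After this, local connections to the machine's own non-loopback IP on port 80 land on 9640 just as external ones do.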

    Read the article

  • HAProxy -- pause/queue all traffic without losing requests

    - by Marc
    I basically have the same problem as mentioned in this thread -- I would like to temporarily suspend all requests to all servers of a certain backend, so that I can upgrade the backend and the database it uses. Since this is a live system, I would like to queue up requests, and send them to the backend servers once they've been upgraded. Since I'm doing a database upgrade with the code change, I have to upgrade all backend servers simultaneously, so I can't just bring one down at a time. I tried using the tcp-request options combined with removing the static healthcheck file as mentioned in that thread, but had no luck. Setting the default "maxconn" value to 0 seems to pause and queue connections as desired, but then there seems to be no way to increase the value back to a positive number without restarting HAProxy, which kills all requests that had been queued up until that point. (The "hot-reconfiguration" options using -sf and -st start a new process, which doesn't seem to do what I want). Is what I'm trying to do possible?
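
    For what it's worth, on HAProxy builds with the admin stats socket enabled, a frontend's maxconn can be changed at runtime without a restart, which is exactly the missing piece described above; a sketch (socket path and frontend name are assumptions):

        echo "set maxconn frontend fe_main 0"    | socat stdio /var/run/haproxy.sock
        # ... upgrade the backends and database ...
        echo "set maxconn frontend fe_main 2000" | socat stdio /var/run/haproxy.sock

    Queued clients still race their own limits (timeout queue, timeout client), so the maintenance window has to stay shorter than those timeouts or requests will be lost anyway.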

    Read the article

  • Apache service hung and incoming requests not accepted

    - by Gnanam
    Hi, my production server is running Apache v2.2.4 with mod_mono v1.2.4 on CentOS release 5.2 (Final). Suddenly, the Apache service hung during normal usage (approx. 1 pm EDT); traffic is not too high at that time. This is the first time we've noticed this kind of behaviour on our server. I saw from the access log that subsequent requests were no longer being recorded, even though requests were still coming in. I then manually tried to invoke my application from a web browser; it never returned successfully but kept loading. I found no unusual behaviour or activity in: 1) the Apache access_log and error_log, or 2) /var/log/messages (no kernel-level errors). I had no other option, so I ended up restarting the Apache service. Any idea what would cause Apache to hang and thereby stop accepting incoming requests? How do I debug/diagnose this when it happens next time? Expert advice and recommendations are highly appreciated.
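
    A capture-before-restart checklist helps next time; a sketch (child PID taken from the ps output):

        apachectl fullstatus          # requires mod_status; shows each worker's current state
        ps auxw | grep httpd          # note children stuck in the D (uninterruptible) state
        strace -p <child-pid>         # shows the syscall a hung child is blocked in

    With mod_mono in the stack it is also worth checking whether the mod-mono-server backend process is still alive, since Apache workers can pile up waiting on its socket if it wedges.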

    Read the article
