Search Results

Search found 5793 results on 232 pages for 'requests'.

  • Viewing file: protocol requests made from a swf

    - by Erik Vold
    I've got a swf file in an index.html file. The swf basically loads images, but it first requests an xml file which details where the image/asset files are located. The index file (which is just html, js, and css) works (i.e. the swf works) when served from this url: http://localhost:8500/core/index.html, which I'm able to do with the ColdFusion 8 single-server development environment. But when I access the file directly with this url: file:///C:/ColdFusion8/wwwroot/core/index.html the swf does not work. So I'm guessing that the swf is having trouble locating the files that it needs. The problem is I have no idea which file urls it is trying to access at the moment, and neither Firebug nor Fiddler can tell me what requests are made over the file: protocol. So is there another tool that I can use?
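
    A workaround sketch (not from the original question): serve the same folder over HTTP locally so every request the swf makes shows up in a log line. With Python 3's built-in http.server, run this from C:\ColdFusion8\wwwroot\core and open http://localhost:8000/index.html; the port and paths are assumptions.

        import http.server
        import socketserver

        PORT = 8000  # assumption: any free local port works

        # Every file the swf asks for is printed as a request log line,
        # and anything it cannot find shows up as a 404.
        handler = http.server.SimpleHTTPRequestHandler
        with socketserver.TCPServer(("127.0.0.1", PORT), handler) as httpd:
            httpd.serve_forever()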

    Read the article

  • Vetting Github Pull requests with Hudson

    - by cdecker
    In the past I've used Gerrit and Hudson very successfully to test and automatically vote on new checkins, and now I'm wondering whether it is possible to set up Hudson so that it checks GitHub at regular intervals and looks for new pull requests. If there are any, it should apply the patch and run the unit tests against it, adding a comment to the pull request if no failure is detected. It would certainly reduce the amount of work going into vetting patches/pull requests. Is that possible at all, or should I stick with my Gerrit setup?
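
    For the polling half of the idea, a rough sketch against the GitHub pull-request list API (Python; OWNER/REPO, the poll interval, and the follow-up steps are placeholders, not something Hudson provides out of the box):

        import json
        import time
        import urllib.request

        # Placeholder repository; the apply/test/comment steps are left as notes.
        API = "https://api.github.com/repos/OWNER/REPO/pulls?state=open"

        seen = set()
        while True:
            with urllib.request.urlopen(API) as resp:
                pulls = json.load(resp)
            for pr in pulls:
                if pr["number"] not in seen:
                    seen.add(pr["number"])
                    print("New pull request #%d: %s" % (pr["number"], pr["title"]))
                    # here: fetch pr["head"]["sha"], apply it, run the unit tests,
                    # and post a comment back through the issues/comments endpoint
            time.sleep(300)  # poll every five minutes (assumption)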

    Read the article

  • Apache CPU usage stays at 100% even when there are no requests

    - by Leirith
    Hi, I've been running the Apache HTTP server benchmarking tool (ab) against my new Apache server to test performance. I noticed that with a command like the following: ab -n 100000 -c 1000 http://www.mysite.com/ the CPU is used 100% by the apache2 processes during the testing. When the test concludes, usually with the following error just before the last requests are made: "apr_poll: The timeout specified has expired (70007) Total of 99960 requests completed", the CPU usage remains at 100%, and it's all being consumed by apache. I am using the worker MPM and running PHP with mod_fcgid. Any advice as to why this happens or what can be done to stop it would be appreciated.

    Read the article

  • PHP Requests Being Blocked After Making About 25 in Ten Minutes

    - by Daniel Stern
    We have an administrative portal where we run PHP functions through a Javascript front end using ajax for administrative purposes. For example, we might have a function called updateAllDatabaseEntries() which calls AJAX functions in rapid succession, with each of those functions executing numerous SQL queries. The problem is that after making several successive requests from the same computer (not an excessive amount, maybe 30 in ten minutes) the system stops responding to any PHP or HTTP requests, etc., but only from my computer. From other computers in the office the panel can still be accessed, and access is restored to this computer after about 15 minutes. We believe this is not a glitch but some kind of security feature built into our server, possibly related to Suhosin and likely well-intentioned, but it is currently preventing us from doing our system administration. Server info: Linux 2.6.32-5-xen-amd64 #1 SMP Tue Mar 8 00:01:30 UTC 2011 x86_64 GNU/Linux. Cheers - DS

    Read the article

  • Repeated requests on our server?

    - by pitty.platsch
    I encountered something strange in the access log of our Apache server which I cannot explain. Requests for webpages that I or my colleagues make from the office's Windows network get repeated by another IP (that we don't know) a couple of seconds later. The user agent repeating our requests is Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2). Does anyone have an idea? Update: I've got some more information now. The referer of the repeated request is set to the URL I requested before, and it's not the exact same request, as the protocol version is changed from 'HTTP/1.1' to 'HTTP/1.0'. It's not just one IP either; it's always one from the same subnet (80.40.134.*). It's only the first request to a resource that gets repeated, so it seems the "spy" is building up some kind of cache of visited places. The repeater is also picky. I tried random URLs with different HTTP status codes and different file patterns: 301s and 200s are redone, 404s are not, and image extensions seem to be ignored. While doing my tests I discovered that this behavior seems to be common, as I found other clients visiting just after the first requests: 66.249.73.184 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 10952 "-" "Mediapartners-Google" and 50.17.125.180 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 41312 "-" "Mozilla/5.0 (compatible; proximic; +http://www.proximic.com/info/spider.php)". I wasn't aware of this practice, so I don't see it as much of a threat anymore. I still want to find out who this is, though, so any further help is appreciated. I'll check later whether this also happens when I query another server where I have access to the access logs, and will update here then.

    Read the article

  • 1K incoming http post requests per second, each with a 10-50K file

    - by Blankman
    I'm trying to figure out what kind of server setup I will need to support 1K http post requests per second, where each post contains an xml file between 5-50K (average of 25 kilobytes). Even if I get a 100 Mb/s connection for my dedicated box (they usually give 10 Mb/s but you can upgrade), from my calculations that is about 12,000 KB/s, which means about 480 files of 25 KB per second. So this means I need around 3 servers, each with a 100 Mb/s connection. Would a single server running HAProxy be able to redirect the requests to the other servers, or does this mean I need something else that can handle more than 100 Mb/s to proxy things out to the other servers? If my math is off I'd appreciate any corrections you may have.
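
    The arithmetic in the question can be sanity-checked in a couple of lines; the numbers below simply restate the question's assumptions (100 Mb/s uplink, 25 KB average payload) and land on roughly the same answer:

        import math

        line_mbit = 100        # dedicated box uplink, per the question
        payload_kb = 25        # average XML size, per the question
        target_rps = 1000      # desired posts per second

        bytes_per_sec = line_mbit * 1000 * 1000 / 8           # 12,500,000 bytes/s
        posts_per_sec = bytes_per_sec / (payload_kb * 1024)   # ~488 posts/s per box
        servers = math.ceil(target_rps / posts_per_sec)       # ~3, ignoring HTTP/TCP overhead

        print("%.0f posts/s per box, %d boxes needed" % (posts_per_sec, servers))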

    Read the article

  • Apache redirect some requests to another server

    - by mucie
    We just bought a new server. We want our old server to handle the https connections (because of the ssl certificate) and the new server to handle the rest. The new server is ready but I don't know how to redirect requests to it. mydomain.com => old machine, ip 10.10.10.41 => new machine. Requests will come in through mydomain.com; if the request is https, the old server should respond, otherwise it should redirect to 10.10.10.41. How should I configure Apache for this situation?

    Read the article

  • How to implement proper identification and session managent on json post requests?

    - by IBr
    I have some minor messaging from a website to a server via json requests. I have a single endpoint which distributes requests according to identification data. I am using an asynchronous server and handle data as it comes in. Now I am thinking about extending the requests with some kind of session. What is the best way to define a session? Get a cookie at registration and then send a token with each request for as long as the session runs? Should I implement a timeout for the token? Are there alternative methods? Can I cache tokens for same-origin requests? What could I use on the client side (web browser)? How about safety? What techniques should I use to throw away requests with malformed data, or with data that is too big, without choking the server? Should I worry?
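
    One common shape for the "token with each request" idea is a signed, expiring token, so the server does not need to store session state. A minimal sketch in Python, where SECRET and LIFETIME are placeholders (libraries such as itsdangerous or a JWT implementation cover the same ground more completely):

        import hashlib
        import hmac
        import time

        SECRET = b"change-me"   # placeholder signing key
        LIFETIME = 3600         # token lifetime in seconds (assumption)

        def issue_token(user_id):
            expires = str(int(time.time()) + LIFETIME)
            payload = "%s:%s" % (user_id, expires)
            sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
            return "%s:%s" % (payload, sig)

        def verify_token(token):
            try:
                user_id, expires, sig = token.rsplit(":", 2)
            except ValueError:
                return None          # malformed token: drop the request
            payload = "%s:%s" % (user_id, expires)
            expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(sig, expected):
                return None          # tampered token
            if int(expires) < time.time():
                return None          # expired token
            return user_id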

    Read the article

  • ColdFusion Server crash after thousands of HTTP requests

    - by Jason Bristol
    We are running ColdFusion 8 on a Windows Server 2003 VPS with an API that exposes student records to a partner API through a connector. Our API returns around 50k student records serialized as XML pretty seamlessly. My question stems from something very frightening that happened today when we tested our connector against our partner's API: our entire website and web host went down. We assumed that our host was just having some issues, and after 4 hours with no resolution and no response from their customer service we finally got a response from them claiming that they had an "unauthorized user" in their network. After our server was back up we were unable to connect to our website, as if the web service or ColdFusion itself had frozen. This is really where my concern comes from, as I fear we may have overloaded the web service. As I mentioned before, we tried sending over 50k HTTP POST requests to our partner's API, but everything stopped after around 1.6k. Is this bad practice, or is there some sort of rate limiting I can relax somewhere in the server configuration? We managed to find a workaround, but it bypasses our connector, which is critical to our design. This would have been a one-time deal, as the purpose of so many requests was to populate our partner's website with current data; after that, hourly syncs will keep requests down to around 100 per hour.
    UPDATE: Our partner API is owned and operated by Pardot. We are converting students to prospects by passing student data to their API, which unfortunately only seems to accept one student at a time. For that reason we have to do all 50k requests individually. Our server has 4GB of RAM and an Intel Core 2 Duo @ 2.8GHz, running Windows Server 2003 SP2. I monitored the server during a 100 student sync, a 400 student sync, and a 1.4k student sync with the following results:
    100 students - 2.25GB of memory, 30-40% CPU utilization, 0.2-0.3% network bandwidth
    400 students - 2.30GB of memory, 30-50% CPU utilization, 0.2-1.0% network bandwidth
    1.4k students - 2.30GB of memory, 30-70% CPU utilization, 0.2-1.0% network bandwidth
    I know this is a far cry from 50k students, but I don't want to risk taking down our CMS system again, assuming that was the cause.
    To give you a look at our code:

        <cfif (#getStudents.statusCode# eq "200 OK")>
          <cftry>
            <cfloop index="StudentXML" array="#XmlSearch(responseSTUD,'/students/student')#">
              <cfset StudentXML = XmlParse(StudentXML)>
              <cfhttp url="#PARDOT_CMS_UPSERT#" method="post" timeout="10000">
                <cfhttpparam type="url" name="user_key" value="#PARDOT_CMS_USERKEY#">
                <cfhttpparam type="url" name="api_key" value="#api_key#">
                <cfhttpparam type="url" name="email" value="#StudentXML.student.email.XmlText#">
                <cfhttpparam type="url" name="first_name" value="#StudentXML.student.first.XmlText#">
                <cfhttpparam type="url" name="last_name" value="#StudentXML.student.last.XmlText#">
                <cfhttpparam type="url" name="in_cms" value="#StudentXML.student.studentid.XmlText#">
                <cfhttpparam type="url" name="company" value="#StudentXML.student.agencyname.XmlText#">
                <cfhttpparam type="url" name="country" value="#StudentXML.student.countryname.XmlText#">
                <cfhttpparam type="url" name="address_one" value="#StudentXML.student.address.XmlText#">
                <cfhttpparam type="url" name="address_two" value="#StudentXML.student.address2.XmlText#">
                <cfhttpparam type="url" name="city" value="#StudentXML.student.city.XmlText#">
                <cfhttpparam type="url" name="state" value="#StudentXML.student.state_province.XmlText#">
                <cfhttpparam type="url" name="zip" value="#StudentXML.student.postalcode.XmlText#">
                <cfhttpparam type="url" name="phone" value="#StudentXML.student.phone.XmlText#">
                <cfhttpparam type="url" name="fax" value="#StudentXML.student.fax.XmlText#">
                <cfhttpparam type="url" name="output" value="simple">
              </cfhttp>
            </cfloop>
            <cfcatch type="any">
              <cfdump var="#cfcatch.Message#">
            </cfcatch>
          </cftry>
        </cfif>

    UPDATE 2: I checked the CF logs and found a couple of these:

        "Error","jrpp-8","06/06/13","16:10:18","CMS-API","Java heap space The specific sequence of files included or processed is: D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm, line: 675 "
        java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOf(Arrays.java:2882)
            at java.io.CharArrayWriter.write(CharArrayWriter.java:105)
            at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:37)
            at coldfusion.runtime.CharBuffer.replace(CharBuffer.java:50)
            at coldfusion.runtime.NeoBodyContent.write(NeoBodyContent.java:254)
            at cfapi2ecfm292155732._factor30(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:675)
            at cfapi2ecfm292155732._factor31(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:662)
            at cfapi2ecfm292155732._factor36(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:659)
            at cfapi2ecfm292155732._factor42(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:657)
            at cfapi2ecfm292155732._factor37(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor44(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:456)
            at cfapi2ecfm292155732._factor38(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor46(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:455)
            at cfapi2ecfm292155732._factor39(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm)
            at cfapi2ecfm292155732._factor47(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:453)
            at cfapi2ecfm292155732.runPage(D:\Clients\www.xxx.com\www\dev.cms\api\v1\api.cfm:1)
            at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:192)
            at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:366)
            at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
            at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:279)
            at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48)
            at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
            at coldfusion.filter.PathFilter.invoke(PathFilter.java:86)
            at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:70)
            at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
            at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
            at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46)
            at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
            at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
            at coldfusion.CfmServlet.service(CfmServlet.java:175)
            at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
            at jrun.servlet.FilterChain.doFilter(FilterChain.java:86)

    Looks like I might have crashed the JVM in CF, is there a better way to do this? We are thinking of just exporting all records initially as a CSV file and importing it into Pardot, seeing as we will never have to do a request this large again.
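
    The CSV idea at the end is easy to script outside ColdFusion. A hedged sketch, assuming the student XML has been saved locally and that the field names mirror the cfhttpparam names above; Pardot's actual import format would still need to be checked against their documentation:

        import csv
        import xml.etree.ElementTree as ET

        # Field names mirror the cfhttpparam values above; the XML layout is assumed
        # to match what XmlSearch(responseSTUD, '/students/student') was walking.
        FIELDS = ["email", "first", "last", "studentid", "agencyname", "countryname",
                  "address", "address2", "city", "state_province", "postalcode",
                  "phone", "fax"]

        tree = ET.parse("students.xml")   # assumed local copy of the API response
        with open("students.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(FIELDS)
            for student in tree.findall("./student"):
                writer.writerow([student.findtext(f, default="") for f in FIELDS])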

    Read the article

  • DHCPDISCOVER requests from an off-by-one MAC address

    - by Aleksandr Levchuk
    In a Linux DHCP server I'm getting a bunch of these log lines: dhcpd: DHCPDISCOVER from 00:30:48:fe:5c:9c via eth1: network 192.168.2.0/24: no free leases I don't have any machines with 00:30:48:fe:5c:9c and I don't intend to give out an IP to 00:30:48:fe:5c:9c (whatever that could be). I tracked down the server that this is coming from and killed all the DHCP clients that were running but the DHCPDISCOVER requests do not stop. I can prove that this is the sending server by pulling the Ethernet cable - the requests stop. The strange thing is that the sending server only has 2 interfaces which are: 00:30:48:fe:5c:9a 00:30:48:fe:5c:9b What can be the cause of the off-by-one address? Who could be sending the requests? Details On the DHCP client: root@n34:~# ip link 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 100 link/ether 00:30:48:fe:5c:9a brd ff:ff:ff:ff:ff:ff 3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 00:30:48:fe:5c:9b brd ff:ff:ff:ff:ff:ff 4: ib0: <BROADCAST,MULTICAST> mtu 2044 qdisc noop state DOWN qlen 256 link/infiniband 80:00:00:48:fe:80:00:00:00:00:00:00:00:02:c9:03:00:08:81:9f brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff 5: ib1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc pfifo_fast state UP qlen 256 link/infiniband 80:00:00:49:fe:80:00:00:00:00:00:00:00:02:c9:03:00:08:81:a0 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff Same info: root@n34:~# ifconfig -a eth0 Link encap:Ethernet HWaddr 00:30:48:fe:5c:9a inet addr:192.168.2.234 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::230:48ff:fefe:5c9a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:72544 errors:0 dropped:0 overruns:0 frame:0 TX packets:152773 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:4908592 (4.6 MiB) TX bytes:89815782 (85.6 MiB) Memory:dfd60000-dfd80000 eth1 Link encap:Ethernet HWaddr 00:30:48:fe:5c:9b UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Memory:dfde0000-dfe00000 ib0 Link encap:UNSPEC HWaddr 80-00-00-48-FE-80-00-00-00-00-00-00-00-00-00-00 BROADCAST MULTICAST MTU:2044 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:256 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) ib1 Link encap:UNSPEC HWaddr 80-00-00-49-FE-80-00-00-00-00-00-00-00-00-00-00 inet addr:192.168.3.234 Bcast:192.168.3.255 Mask:255.255.255.0 inet6 addr: fe80::202:c903:8:81a0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:2044 Metric:1 RX packets:1330 errors:0 dropped:0 overruns:0 frame:0 TX packets:255 errors:0 dropped:5 overruns:0 carrier:0 collisions:0 txqueuelen:256 RX bytes:716415 (699.6 KiB) TX bytes:17584 (17.1 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:8 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:560 (560.0 B) TX bytes:560 (560.0 B) The nodes were imaged with Perseus which uses kexec instead of rebooting.
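
    Not part of the original question, but one way to see exactly what is broadcasting as 00:30:48:fe:5c:9c is to capture the DHCP traffic on that segment and compare the Ethernet source address with the client MAC claimed inside the packet. A sketch using scapy (assumes scapy is installed, root privileges, and that eth1 is the right interface to listen on):

        from scapy.all import BOOTP, Ether, sniff

        def show(pkt):
            if pkt.haslayer(BOOTP):
                # chaddr is the client MAC the DHCP packet claims to be from;
                # comparing it with the Ethernet source can point at an embedded
                # controller (a BMC, for example) rather than the OS itself.
                claimed = ":".join("%02x" % b for b in pkt[BOOTP].chaddr[:6])
                print("frame src=%s  dhcp chaddr=%s" % (pkt[Ether].src, claimed))

        sniff(iface="eth1", filter="udp and (port 67 or 68)", prn=show, store=0)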

    Read the article

  • Belkin router issue

    - by walr1
    Hi, my cousin and I bought a wireless Belkin router for testing purposes. Please keep in mind that for all of our tests there is no ethernet cable plugged in, just the router's power cord. We have been trying to "flood" it with PING requests on its default address 192.168.2.1, but it isn't doing a thing; it isn't even logging the excessive requests. I've disabled the firewall, disabled the PING request block, etc. Any idea why this thing isn't being affected? We sent 4 million packets and it hasn't done a thing. Quite odd! Thanks.

    Read the article

  • Apachebench on node.js server returning "apr_poll: The timeout specified has expired (70007)" after ~30 requests

    - by Scott
    I just started working with node.js, and some experimental load testing with ab is returning an error at around 30 requests or so. I've found other pages showing much better concurrency numbers than I'm getting, such as: http://zgadzaj.com/benchmarking-nodejs-basic-performance-tests-against-apache-php. Are there some critical server configuration settings that need to be done to achieve those numbers? I've watched memory in top and I still see a decent amount of free memory while running ab, and I've watched mongostat as well and am not seeing anything that looks suspicious. The command I'm running, and the error, is:
        ab -k -n 100 -c 10 postrockandbeyond.com/
        This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0
        Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Copyright (c) 2006 The Apache Software Foundation, http://www.apache.org/
        Benchmarking postrockandbeyond.com (be patient)...apr_poll: The timeout specified has expired (70007)
        Total of 32 requests completed
    Does anyone have any suggestions on things I should look into that may be causing this? I'm running it on OS X Lion, but have also run the same command on the server with the same results. EDIT: I eventually solved this issue. I was using TTAPI, which connects to turntable.fm through websockets. On the homepage, I was connecting on every request, so after a certain number of connections everything would fall apart. If you're running into the same issue, check whether you are hitting external services on each request.
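
    The lesson in the EDIT (don't open a fresh connection to an external service on every request) applies to any stack; purely for illustration, the same mistake and fix in Python look like this (the URL is a placeholder):

        import requests  # assumption: the 'requests' package is installed

        # Anti-pattern: creating a new session/connection to the external service
        # inside the request handler. Fix: create it once and reuse it.
        session = requests.Session()   # one pooled connection for the whole process

        def homepage_handler():
            # Reuses the pooled connection instead of reconnecting on every hit.
            return session.get("https://api.example.com/now-playing", timeout=2).json()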

    Read the article

  • Munin Aggregated Graphs Configuration Error

    - by Sparsh Gupta
    I tried making some Munin aggregated graphs but somehow I am unable to make the configuration work. I think I have followed the instructions, but since it's not working I would love some assistance or guidance as to what I am doing wrong. I want to aggregate (sum) the total number of requests per second that all my nginx servers are doing combined. The configuration looks like:
        [TRAFFIC.AGGREGATED]
        update no
        requests.graph_title nGinx requests
        requests.graph_vlabel nGinx requests per second
        requests.draw LINE2
        requests.graph_args --base 1000
        requests.graph_category nginx
        requests.label req/sec
        requests.type DERIVE
        requests.min 0
        requests.graph_order output
        requests.output.sum \
            lb1.visualwebsiteoptimizer.com:nginx_request_lb1.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb2.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb3.visualwebsiteoptimizer.com_request.request
    The Munin graph I want to aggregate is http://exchange.munin-monitoring.org/plugins/nginx_request/details. Thanks, Sparsh Gupta

    Read the article

  • Apache / PHP Begins to Deny SQL Requests after about 2000

    - by Daniel Stern
    We have a web page on our server that we use to run administrative scripts. For example, we might run the script "unenrolStudents()", which runs 5,000 SQL SET commands one after another and sets 5,000 student entries in an SQL database to unenrolled. However, we are finding that after running a few thousand queries (it is not totally consistent) we get "locked out" by our server. Symptoms of locking out:
    - unable to connect to the server with WinSCP
    - opening PuTTY with that connection shows a blank screen (no login/pass)
    - clearing cookies/cache in Chrome does NOT fix the lockout
    - other computers in the office ALSO become locked out
    - the lockout can be triggered by a high frequency of requests (10,000 in 1 second) or by fewer over time (10,000 in 500 seconds - this still causes a lockout even though the frequency is much lower)
    We believe this is a security feature of our own Apache. I know we are using Suhosin, but I didn't configure it so I don't know. How can I disable this locking effect so that I can confidently run all my SQL requests and have them go through? Has anyone else dealt with this and found workarounds? Thanks, DS
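
    Separately from the lockout question, collapsing the thousands of per-row calls into one request and one transaction usually sidesteps this class of throttling entirely. For illustration only (the panel itself is PHP), a Python sketch with made-up table and column names:

        import sqlite3  # stands in for the real database driver

        def unenrol_students(conn, student_ids):
            # One HTTP request triggers this; one transaction covers the whole batch.
            with conn:
                conn.executemany(
                    "UPDATE students SET enrolled = 0 WHERE id = ?",
                    [(sid,) for sid in student_ids],
                )

        conn = sqlite3.connect("school.db")        # made-up database
        unenrol_students(conn, range(1, 5001))     # 5,000 rows, one round trip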

    Read the article

  • Nginx and Gunicorn hanging on GET requests

    - by whatWhat
    I'm using Nginx + Gunicorn to serve my Django project. All GET requests hang for ~1 minute. The content seems to be available immediately, as I can see it in the browser inspector, but the browser itself looks like it's still waiting for more data. Here's my Nginx config:
        #allow for up to 3 connections per second.
        limit_req_zone $binary_remote_addr zone=one:10m rate=3r/s;
        server {
            listen 80;
            server_name example.com;
            root /var/www/example.com/example/;
            # serve directly - analogous for static/staticfiles
            location /media/ {
                # this changes depending on your python version
                root /home/example/;
            }
            location /static/ {
                # if asset versioning is used
                if ($query_string) {
                    expires max;
                }
                root /var/www/example.com;
            }
            location / {
                #Allow for a burst of 50.
                limit_req zone=one burst=50 nodelay;
                proxy_pass_header Server;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_connect_timeout 10;
                proxy_read_timeout 10;
                proxy_pass http://localhost:8001/;
            }
            # what to serve if upstream is not available or crashes
            error_page 500 502 503 504 /media/50x.html;
        }
    My Gunicorn config:
        bind = "127.0.0.1:8001"
        workers = 3
        worker_class = "gevent"
    Is there anything obvious that would be causing the requests to stay open for so long?
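
    Not a diagnosis, but when a gevent-based setup appears to hold connections open it can help to make the timeout and keep-alive behaviour explicit on the Gunicorn side so it can be ruled out. A sketch of the same config file with those knobs spelled out (the values are guesses):

        # gunicorn config sketch - the same settings as above, plus explicit limits
        bind = "127.0.0.1:8001"
        workers = 3
        worker_class = "gevent"   # requires the gevent package to be importable
        timeout = 30              # kill a worker stuck on one request longer than 30s
        keepalive = 2             # seconds to hold idle keep-alive connections open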

    Read the article

  • TCP Server Memory management: #Connections Vs. #Requests

    - by Andrew
    Given that there is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle, the only thing that happens is that each connection consumes memory on the server. Unfortunately, memory is not unlimited (and I want to use only physical memory). For example, let's say we have 2GB of server memory. Now there are two extreme cases:
    Case 1: If we allocate a 64KB buffer for each connection (only to receive the incoming request), then 32,768 connections can consume all 2GB of memory. That leaves no memory to queue or process incoming requests from those connections.
    Case 2: On the other hand, let's say a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time. Those buffers pile up on the server and eventually occupy most of the server's memory, leaving no memory for new connections thereafter.
    This is the real dilemma in server design that has been bugging me badly for the last many days. If I can decide on a maximum request buffer size per connection and a maximum number of requests to allow in the queue per connection, then, based on the available server memory, that automatically sets a limit on the maximum number of concurrent connections. How do I decide on these limits to achieve the best performance and throughput? I am just looking for perfect utilization of server resources. Are there any standard guidelines or empirical data someone can share with me, please?
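
    The two extremes in the question can be turned into a simple budget: fix a per-connection receive buffer and a bounded request queue, and the memory cap then dictates the connection cap. A back-of-the-envelope sketch (buffer size and total memory from the question; queue depth and per-request cap are assumptions):

        total_memory     = 2 * 1024 ** 3    # 2 GB, as in the question
        recv_buffer      = 64 * 1024        # 64 KB receive buffer per connection
        max_request_size = 64 * 1024        # cap on one queued request (assumption)
        queue_depth      = 4                # requests queued per connection (assumption)

        per_connection = recv_buffer + queue_depth * max_request_size
        max_connections = total_memory // per_connection
        print("worst case per connection: %d KB" % (per_connection // 1024))   # 320 KB
        print("connections that fit in 2 GB: %d" % max_connections)            # 6553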

    Read the article

  • Manual NAT on Checkpoint (Redirect all http requests to a local web server)

    - by B. Kulakli
    We have a proxy server in our internal network and I want to redirect all internet http requests to a web server on the local network. It'll be like a network billboard that says "No direct connection is available. Set up your proxy, etc." For example:
    - A user starts the computer
    - Opens the browser
    - Tries to open www.google.com
    - Should see the web server output on the local network
    - Tries another web site on the internet
    - Should see the web server output on the local network
    - Sets up the proxy
    - Tries to connect to a web site
    - The web site should be loaded
    I have added a simple manual NAT rule to address translation in the Checkpoint firewall, but it simply does not work. Here is my address translation rule:
        Source | Destination | Service | T.Source | T.Destination | T.Service
        MY_PC  | A_GOOGLE_IP | ALL     | ORIGINAL | INT_WEB_SRV   | ORIGINAL
    When I ping A_GOOGLE_IP, replies come from INT_WEB_SRV, as I expected. However, when I try to connect to A_GOOGLE_IP from the browser (http://A_GOOGLE_IP), no replies come back; the connection stays in SYN_SENT and falls into a timeout. When I look at the firewall log for INT_WEB_SRV, I can see the incoming connection requests from MY_PC are accepted and there are NO denies. By the way, there is no problem viewing INT_WEB_SRV (http://INT_WEB_SRV) from the browser. My understanding is that my NAT rule on Checkpoint NGX R60 does not handle the return packets. I definitely need some help.

    Read the article

  • Munin Aggregate Graphs from several servers

    - by Sparsh Gupta
    I am using DNS round-robin load balancing and have divided my total traffic across multiple servers. Each server does around 300-400 req/second, but I am interested in having an aggregate graph telling me the TOTAL of all requests per second served by our architecture. Is there any way I can do this? Right now each graph in Munin comes as a separate graph, as each depicts things on one server. I am using the following configuration, which doesn't work for me. Does this configuration have errors?
        [TRAFFIC.AGGREGATED]
        update no
        requests.graph_title nGinx requests
        requests.graph_vlabel nGinx requests per second
        requests.draw LINE2
        requests.graph_args --base 1000
        requests.graph_category nginx
        requests.label req/sec
        requests.type DERIVE
        requests.min 0
        requests.graph_order output
        requests.output.sum \
            lb1.visualwebsiteoptimizer.com:nginx_request_lb1.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb2.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb3.visualwebsiteoptimizer.com_request.request

    Read the article

  • Lighttpd mod_accesslog not logging fastcgi requests

    - by zepatou
    I have recently installed lighttpd to serve a Python script via mod_fastcgi. Everything works fine except that the requests handled by mod_fastcgi are not logged in the access.log file (requests on port 80 are logged, though). My lighttpd version is 1.4.28 on Debian 6.0. I used the same configuration on an Ubuntu 10.04 server with lighttpd 1.4.26 and it worked. Here is my config:
    lighttpd.conf
        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_compress",
        )
        server.document-root = "/var/www/"
        server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
        server.errorlog = "/home/log/lighttpd/error.log"
        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )
        accesslog.filename = "/home/log/lighttpd/access.log"
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        server.pid-file = "/var/run/lighttpd.pid"
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
    conf-enabled/10-fastcgi.conf
        server.modules += ( "mod_fastcgi" )
        fastcgi.server = (
            "/" => (
                (
                    "min-procs" => 1,
                    "check-local" => "disable",
                    "host" => "127.0.0.1",  # local
                    "port" => 3000
                ),
            )
        )
    Any idea?

    Read the article

  • Users getting 'flooded' with not read notifications (NRNs) for old emails and meeting requests

    - by Exile
    I'm being placed under quite a lot of pressure from senior management over a relatively trivial issue. Basically the vast majority of users are complaining that they receive not read notifications (NRNs) for old emails and meeting requests in large numbers multiple times a day. I know something strange is happening because some are delivered at silly times in the morning (i.e 3AM or 4AM). The problem I have is that these some of these NRNs are from meeting requests and messages that are 120 days old, so some users have deleted the original message so I don’t actually know if the NRN is from an email or meeting request. This is typical of what users receive as a NRN: From: Sender Sent: 23 March 2012 04:16 To: Recepient Subject: Not read: Accepted: Status update Your message To: Sender Subject: Accepted: Status update Sent: Wednesday, November 23, 2011 8:59:00 AM (UTC) Dublin, Edinburgh, Lisbon, London was deleted without being read on Friday, March 23, 2012 4:15:32 AM (UTC) Dublin, Edinburgh, Lisbon, London. ... From: Sender Sent: 18 March 2012 01:13 To: Recepient Subject: Not read: Gold delivery - Sourcing module Your message To: Sender Subject: Gold delivery - Sourcing module Sent: Friday, November 18, 2011 9:37:58 AM (UTC) Dublin, Edinburgh, Lisbon, London was deleted without being read on Sunday, March 18, 2012 1:12:37 AM (UTC) Dublin, Edinburgh, Lisbon, London. I have done a search and found the following: http://support.microsoft.com/kb/2544246 http://support.microsoft.com/kb/2471964 But we already installed 'Update Rollup 6 for Exchange Server 2010 Service Pack 1' back in December, so I am not sure what we can do to fix this?

    Read the article

  • Apache intermittently aborting requests

    - by Adam Phillips
    I have just been dealing with a problem whereby http requests were being aborted, seemingly at random. When you opened any particular page on the website, a number of the assets (img, css, etc.) failed to load. If you refreshed, the page might work fine, the same set of assets might fail to load, or different assets might fail to load. The Net tab in Firefox was returning 'Aborted' in the HTTP status code column for the failed assets, even though, in the case of images, the image previews were still working. There was nothing in any of the Apache logs about the requests that failed; however, since it seemed to point to an Apache issue, we restarted Apache. The first time we tried, it made no difference, but about 10 minutes later, in the absence of a better solution, we tried again. Bizarrely, the problem disappeared immediately. So now the site seems to be running fine again, but it's rather unsettling, both the intermittent nature of the problem and the lack of an explanation for its resolution. Has anyone seen anything like this before, and if so, did you find out the reason behind it? Many thanks

    Read the article

  • Losing 'post' requests sent to Pylons paster server

    - by Philip McDermott
    I'm sending post requests to a Pylons server (served by paster serve), and if I send them with any frequency many don't arrive at the server. One at a time is OK, but if I fire off a few (or more) within seconds, only a small number get dealt with. If I send with no post data, or with get, it works fine, but putting just one character of data in the post fields causes massive losses. For example, sending 200, 2 will come back; sending 100 more slowly, 10 will come back. I'm making the requests from inside a Qt application. This works OK (no data):
        QString postFields = "";
        QNetworkRequest request(QUrl("http://server.com/endpoint"));
        QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii());
    And this results in only a fraction of the requests being dealt with:
        QString postFields = "";
        QNetworkRequest request(QUrl("http://server.com/endpoint"));
        QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii());
    I've played around with turning on use_threadpool, and with other options (threadpool_workers, threadpool_max_requests = 300), and some combinations can alter the results slightly (best case 10 responses out of 200). If I send similar requests to other (non-paster) servers, the replies come back OK, so I'm almost certain it's a paster serve config issue. Any help or advice greatly appreciated. Thanks, Philip

    Read the article

  • Hooking the http/https protocol in IE causes GET requests to be sequential

    - by watsonmw
    I'm using the PassthruAPP method to hook into HTTP/HTTPS requests made by IE. It's working well for the most part, but I've noticed a problem: only one download thread is active at a time. I can see two IInternetProtocol objects getting created, but IE uses only one at a time. This is happening with IE7. The odd thing is that the problem occurs when overriding the existing default HTTP/HTTPS handler, even if that handler is not the one being used to make the request. For example, registering a handler for the HTTPS protocol will cause HTTP requests to be made sequentially, even though HTTP requests are not hooked. I installed Google Gears and it has the same problem. This always happens for the first few items on the page, but it seems that after document complete is issued, concurrent downloads can occur again. For example, Javascript code that is executed after the page has finished loading can load images concurrently just fine. One option is to try to IAT patch the 'IInternetProtocol' registered for HTTP requests, but Google Gears does this already and it has the same problem. I know installing an HTTP proxy is another option, but I don't want to monkey with the users' HTTP proxy settings if there is another option.

    Read the article
