Search Results

Search found 10391 results on 416 pages for 'sys dm exec requests'.


  • Manual NAT on Checkpoint (Redirect all http requests to a local web server)

    - by kulakli
    Hi, we have a proxy server on the internal network, and I want to redirect all internet HTTP requests to a web server on the local network. It will act like a network billboard that says "No direct connection is available. Set up your proxy, etc." For example:

    1. A user starts the computer and opens the browser.
    2. Tries to open www.google.com: should see the local web server's output.
    3. Tries another web site on the internet: should again see the local web server's output.
    4. Sets up the proxy and tries to connect to a web site: the web site should now load.

    I have added a simple manual NAT rule to the address translation in the Checkpoint firewall, but it simply does not work. Here is my address translation rule:

        Source | Destination | Service | T.Source | T.Destination | T.Service
        MY_PC  | A_GOOGLE_IP | ALL     | ORIGINAL | INT_WEB_SRV   | ORIGINAL

    When I ping A_GOOGLE_IP, replies come from INT_WEB_SRV, as I expected. However, when I try to connect to A_GOOGLE_IP from the browser (http://A_GOOGLE_IP), no replies come back; the connection sits in SYN_SENT and falls into timeout. When I look at the firewall log of INT_WEB_SRV, I can see the incoming connection requests from MY_PC being accepted, with no denies. By the way, there is no problem viewing INT_WEB_SRV itself (http://INT_WEB_SRV) from the browser. My understanding is that my NAT rule on Checkpoint NGX R60 does not take care of the return packets. I definitely need some help. Regards, Burak

    Read the article

  • Lighttpd mod_accesslog not logging fastcgi requests

    - by zepatou
    I have recently installed lighttpd to serve a Python script via mod_fastcgi. Everything works fine except that requests handled by mod_fastcgi are not logged in the access.log file (requests on port 80 are logged, though). My lighttpd version is 1.4.28 on Debian 6.0. I used the same configuration on an Ubuntu 10.04 server with lighttpd 1.4.26 and it worked there. Here is my config:

    lighttpd.conf:

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_compress",
        )
        server.document-root = "/var/www/"
        server.upload-dirs   = ( "/var/cache/lighttpd/uploads" )
        server.errorlog      = "/home/log/lighttpd/error.log"
        index-file.names     = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )
        accesslog.filename   = "/home/log/lighttpd/access.log"
        url.access-deny      = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        server.pid-file      = "/var/run/lighttpd.pid"
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

    conf-enabled/10-fastcgi.conf:

        server.modules += ( "mod_fastcgi" )
        fastcgi.server = (
            "/" => (
                (
                    "min-procs"   => 1,
                    "check-local" => "disable",
                    "host"        => "127.0.0.1", # local
                    "port"        => 3000
                ),
            )
        )

    Any idea?

    Read the article

  • Users getting 'flooded' with not read notifications (NRNs) for old emails and meeting requests

    - by Exile
    I'm being placed under quite a lot of pressure from senior management over a relatively trivial issue. Basically, the vast majority of users are complaining that they receive not read notifications (NRNs) for old emails and meeting requests, in large numbers, multiple times a day. I know something strange is happening because some are delivered at silly times in the morning (e.g. 3AM or 4AM). The problem I have is that some of these NRNs are for meeting requests and messages that are 120 days old, and some users have deleted the original message, so I don't actually know whether a given NRN is for an email or a meeting request. This is typical of what users receive as an NRN:

        From: Sender
        Sent: 23 March 2012 04:16
        To: Recipient
        Subject: Not read: Accepted: Status update

        Your message
        To: Sender
        Subject: Accepted: Status update
        Sent: Wednesday, November 23, 2011 8:59:00 AM (UTC) Dublin, Edinburgh, Lisbon, London
        was deleted without being read on Friday, March 23, 2012 4:15:32 AM (UTC) Dublin, Edinburgh, Lisbon, London.

        ...

        From: Sender
        Sent: 18 March 2012 01:13
        To: Recipient
        Subject: Not read: Gold delivery - Sourcing module

        Your message
        To: Sender
        Subject: Gold delivery - Sourcing module
        Sent: Friday, November 18, 2011 9:37:58 AM (UTC) Dublin, Edinburgh, Lisbon, London
        was deleted without being read on Sunday, March 18, 2012 1:12:37 AM (UTC) Dublin, Edinburgh, Lisbon, London.

    I have done a search and found the following:

    http://support.microsoft.com/kb/2544246
    http://support.microsoft.com/kb/2471964

    But we already installed 'Update Rollup 6 for Exchange Server 2010 Service Pack 1' back in December, so I am not sure what else we can do to fix this.

    Read the article

  • ASP.NET Ajax Error: Sys.WebForms.PageRequestManagerParserErrorException

    - by Phil Bennett
    My website has been giving me intermittent errors when trying to perform any Ajax activities. The message I get is:

        Sys.WebForms.PageRequestManagerParserErrorException: The message received from the server could not be parsed. Common causes for this error are when the response is modified by calls to Response.Write(), response filters, HttpModules, or server trace is enabled.
        Details: Error parsing near '

    So it's obviously some sort of server timeout, or the server is just returning mangled garbage. This generally, though unfortunately not always, happens when the site has been idle for a while. Has anybody else been able to overcome this? Thanks

    Read the article

  • write() in sys/uio.h returns -1

    - by fredrik
    I'm using Ubuntu Server 9.10, an AMD Phenom II CPU, and g++ (Ubuntu 4.4.1-4ubuntu9) 4.4.1, trying to run the application pftp-shit v1.11. The following code in tcp.cc executes successfully:

        int outfile_fd = open(name, O_CREAT | O_TRUNC | O_RDWR | O_BINARY);

    which returns a file descriptor int (in my case 6). name is a char array containing a valid path to my file, which is created successfully. These also run successfully:

        fchmod(outfile_fd, S_IRUSR | S_IWUSR);
        access(name, W_OK);

    The issue occurs when running the function (from sys/uio.h):

        write(outfile_fd, this->control_buffer, read_length)

    which returns -1. -1 is returned on error; otherwise a non-negative integer equal to the number of bytes written is returned. Anyone have a clue how I can get the write function to work?

    Read the article

  • Visual Studio Add-in Exec running Automatically

    - by NewProgrammer
    Hey guys, I have a dilemma I'm uncertain about, as I'm not sure whether it's possible for a Visual Studio add-in to run its code automatically. I need an add-in that can run passively, like a logger for Visual Studio. However, the Exec method that I know of so far can only execute command-bar functionality, and I need the code to execute when the user right-clicks or selects a line of text. I was able to make an automatic logger by putting my code in QueryStatus, but that would be considered bad practice, and it does not log when I simply select a piece of text. Does anyone know how to make code run passively or automatically in Visual Studio?

    Read the article

  • Apache intermittently aborting requests

    - by Adam Phillips
    I have just been dealing with a problem whereby HTTP requests are being aborted, seemingly at random. On any particular page in the website, when you opened a page, a number of the assets (img, css, etc.) failed to load. If you refreshed, the page might work fine, the same set of assets might fail to load, or different assets might fail to load. The net tab in Firefox was returning 'Aborted' in the HTTP status code column for the failed assets, even though in the case of images the image previews were still working. There was nothing in any of the Apache logs about the requests that failed; however, since it seemed to point to an Apache issue, we restarted Apache. The first time we tried, it made no difference, but about 10 minutes later, in the absence of a better solution, we tried again. Bizarrely, the problem disappeared immediately. So now the site seems to be running fine again, but it's rather unsettling, both because of the intermittent nature of the problem and because of the lack of an explanation for its resolution. Has anyone seen anything like this before, and if so, did you find out the reason behind it? Many thanks

    Read the article

  • PHP exec problem with s3-put

    - by schneck
    Hi there, I use the s3-bash project to upload data to an S3 bucket. My command looks like this:

        /mypath/s3_bash/s3-put -v -k '123456789' -s '/mypath/secret' -T '/mypath/upload/myuploadfile' '/my.bucket/mykeyname'

    I can run the command from the command line (Mac OS X) and it works well. Now I want to execute it from a PHP script:

        exec($command, $output);

    but in $output the s3-put command only returns its help text. I log the command, and it works if I copy and paste it from the log to the command line, so the command itself is not the problem. It seems that PHP does not pass all the parameters to the command line, although I run escapeshellarg() over all the parameters. I'm using a local XAMPP test environment, and safe_mode is off. Any ideas?
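
    A minimal sketch of assembling and diagnosing the same call from PHP (the paths and key are the illustrative ones from the question; escapeshellarg() quotes each argument individually, and folding stderr into the output usually reveals why the script fell back to its help text):

        <?php
        // Build the command argument by argument; escapeshellarg() adds the quoting.
        $cmd = '/mypath/s3_bash/s3-put'
             . ' -v -k ' . escapeshellarg('123456789')
             . ' -s '    . escapeshellarg('/mypath/secret')
             . ' -T '    . escapeshellarg('/mypath/upload/myuploadfile')
             . ' '       . escapeshellarg('/my.bucket/mykeyname')
             . ' 2>&1';                        // merge stderr into $output
        exec($cmd, $output, $exitCode);
        if ($exitCode !== 0) {
            error_log("s3-put failed ($exitCode): " . implode("\n", $output));
        }

    If the help text still comes back, comparing the environment under the web server with the interactive shell is a sensible next step, since s3-bash shells out to external tools that may not be on the web server's PATH.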

    Read the article

  • AS3 : RegExp exec method in loop problem

    - by Boun
    Hi everyone, I need some help with RegExp in AS3. I have a simple pattern:

        patternYouTube = new RegExp( "v(?:\/|=)([A-Z0-9_-]+)", "gi" );

    This pattern looks for the YouTube video id. For example:

        var tmpUrl : String;
        var result : Object;
        var toto : Array = new Array();
        toto = [ "http://www.youtube.com/v/J-vCxmjCm-8&autoplay=1",
                 "http://www.youtube.com/v/xFTRnE1WBmU&autoplay=1" ];
        var i : uint;
        for ( i = 0; i < toto.length; i++ )
        {
            tmpUrl = toto[i];
            result = patternYouTube.exec( tmpUrl );
            if ( result != null && result.length != 0 )
            {
                trace( result );
            }
        }

    When i == 0, it works perfectly; Flash returns: v/J-vCxmjCm-8,J-vCxmjCm-8. When i == 1, it fails; Flash returns null. When I swap the two strings in the array:

        toto = [ "http://www.youtube.com/v/xFTRnE1WBmU&autoplay=1",
                 "http://www.youtube.com/v/J-vCxmjCm-8&autoplay=1" ];

    then when i == 0 it works perfectly (Flash returns xFTRnE1WBmU) and when i == 1 it fails (Flash returns null). Do you have any idea what the problem in the loop is?

    Read the article

  • Losing 'post' requests sent to Pylons paster server

    - by Philip McDermott
    I'm sending POST requests to a Pylons server (served by paster serve), and if I send them with any frequency, many don't arrive at the server. One at a time is OK, but if I fire off a few (or more) within seconds, only a small number get dealt with. If I send with no POST data, or with GET, it works fine, but putting just one character of data in the POST fields causes massive losses. For example, sending 200, 2 will come back; sending 100 more slowly, 10 will come back. I'm making the requests from inside a Qt application. This will work OK (no POST data):

        QString postFields = "";
        QNetworkRequest request(QUrl("http://server.com/endpoint"));
        QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii());

    And this will result in only a fraction of the requests being dealt with:

        QString postFields = "";
        QNetworkRequest request(QUrl("http://server.com/endpoint"));
        QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii());

    I've played around with turning on use_threadpool and other options (threadpool_workers, threadpool_max_requests = 300), some combinations of which alter the results slightly (best case: 10 responses out of 200). If I send similar requests to other (non-paster) servers, the replies come back OK, so I'm almost certain it's a paster serve config issue. Any help or advice greatly appreciated. Thanks, Philip

    Read the article

  • Hooking the http/https protocol in IE causes GET requests to be sequential

    - by watsonmw
    I'm using the PassthruAPP method to hook into HTTP/HTTPS requests made by IE. It's working well for the most part; however, I noticed a problem: only one download thread is active at a time. I can see two IInternetProtocol objects getting created, but IE uses only one at a time. This is happening with IE7. The odd thing is that the problem occurs when overriding the existing default HTTP/HTTPS handler, even if the handler is not the one being used to make the request. E.g. registering a handler for the HTTPS protocol will cause HTTP requests to be made sequentially, even though HTTP requests are not hooked. I installed Google Gears and it has the same problem. This always happens for the first few items on the page, but it seems that after document complete is issued, concurrent downloads can occur again. For example, Javascript code that executes after the page has finished loading can load images concurrently just fine. One option is to try to IAT-patch the IInternetProtocol registered for HTTP requests, but Google Gears does this already and it has the same problem. I know installing an HTTP proxy is another option, but I don't want to monkey with the users' HTTP proxy settings if there's another option.

    Read the article

  • How to persist objects between requests in PHP

    - by SztupY
    I've been using Rails, Merb, Django and ASP.NET MVC applications in the past. What they have in common (that is relevant to the question) is that they have code that sets up the framework. This usually means creating objects and state that is persisted until the web server is recycled (like setting up routing, or checking which controllers are available, etc.). As far as I know, PHP is more like a CGI script that gets compiled to some bytecode each time it's run and discarded after the request. Of course you can have sessions to persist data between requests from the same user, and as I see there are extensions like APC with which you can persist objects between requests at the server level. My question is: how can one create a PHP application that works like Rails and such? I mean an application that on the first request sets up the framework, and on subsequent requests uses the objects that are already set up. Is there some built-in caching facility in mod_php (for example, one that stores the compiled bytecode of executed PHP applications)? Or is using APC or some similar extension the only way to solve this problem? How would you do it? Thanks.
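
    A minimal sketch of the APC approach (the key name 'app.routes' and buildRoutes() are hypothetical, standing in for whatever expensive setup the framework does on first request):

        <?php
        // Fetch the prepared object from APC's shared memory; rebuild it
        // only on a cache miss (e.g. the first request after a restart).
        $routes = apc_fetch('app.routes', $hit);
        if (!$hit) {
            $routes = buildRoutes();           // expensive one-time setup
            apc_store('app.routes', $routes);  // shared across later requests
        }

    Note that APC serializes the stored value, so resources such as open connections cannot be kept this way. The bytecode half of the question is handled separately: APC, like other opcode caches, already keeps the compiled bytecode of scripts between requests without any code changes.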

    Read the article

  • NAnt exec relative path

    - by stacker
    How can I assign the trunk.dir property a relative path to the trunk location? This is my NAnt build file:

        <?xml version="1.0" encoding="utf-8"?>
        <project name="ProjectName" default="build" xmlns="http://nant.sf.net/release/0.85/nant.xsd">

          <!-- Directories -->
          <property name="trunk.dir" value="C:\Projects\ProjectName" /> <!-- I want a relative path over here! -->
          <property name="source.dir" value="${trunk.dir}src\" />

          <!-- Working Files -->
          <property name="msbuild.exe" value="C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\msbuild.exe" />
          <property name="solution.sln" value="${source.dir}ProjectName.sln" />

          <!-- Called Externally -->
          <target name="compile">
            <!-- Rebuild forces msbuild to clean and build -->
            <exec program="${msbuild.exe}" commandline="${solution.sln} /t:Rebuild /v:q" />
          </target>

        </project>

    Read the article

  • KERN-EXEC 3 when navigating within a text box (Symbian OS Browser Control)

    - by Andrew Flanagan
    I've had nothing but grief using Symbian's browser control on S60 3rd Edition FP1. We currently display pages and many things work smoothly. However, when entering text into an HTML text field, the user gets a KERN-EXEC 3 panic if they move left at the beginning of the text input area (which should wrap the cursor to the end) or move right at the end of the text input area (which should wrap it to the beginning). I can't seem to trap the input in OfferKeyEventL: I get the key event, I return EKeyWasConsumed, and the cursor still moves.

        TKeyResponse CMyAppContainer::OfferKeyEventL(const TKeyEvent& aKeyEvent, TEventCode aType)
        {
            if (iBrCtlInterface) // my browser control
            {
                TBrCtlDefs::TBrCtlElementType type = iBrCtlInterface->FocusedElementType();
                if (type == TBrCtlDefs::EElementActivatedInputBox || type == TBrCtlDefs::EElementInputBox)
                {
                    if (aKeyEvent.iScanCode == EStdKeyLeftArrow || aKeyEvent.iScanCode == EStdKeyRightArrow)
                    {
                        return EKeyWasConsumed;
                    }
                }
            }
            return EKeyWasNotConsumed; // fall through for all other keys
        }

    I would be okay with completely disabling arrow-key navigation, but I can't seem to do that either. Any ideas? Am I going about this the wrong way? Has anyone here worked with the browser control library (browserengine.lib) on S60 3.1?

    Read the article

  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, which make fairly simple (non-costly) queries to a PostgreSQL database and return data in JSON format; these services are monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and those respond perfectly fine, so I think I can rule out any wrongdoing by Nginx. Here is my God configuration:

        # God configuration
        APP_ROOT = File.expand_path '../', File.dirname(__FILE__)

        God.watch do |w|
          w.name     = "app_name"
          w.interval = 30.seconds # default
          w.start    = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D"
          # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing
          w.stop     = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`"
          w.restart  = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`"
          w.start_grace   = 10.seconds
          w.restart_grace = 10.seconds
          w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid"

          # User under which to run the process
          w.uid = 'web'
          w.gid = 'web'

          # Cleanup the pid file (this is needed for processes running as a daemon)
          w.behavior(:clean_pid_file)

          # Conditions under which to start the process
          w.start_if do |start|
            start.condition(:process_running) do |c|
              c.interval = 5.seconds
              c.running  = false
            end
          end

          # Conditions under which to restart the process
          w.restart_if do |restart|
            restart.condition(:memory_usage) do |c|
              c.above = 150.megabytes
              c.times = [3, 5] # 3 out of 5 intervals
            end
            restart.condition(:cpu_usage) do |c|
              c.above = 50.percent
              c.times = 5
            end
          end

          w.lifecycle do |on|
            on.condition(:flapping) do |c|
              c.to_state     = [:start, :restart]
              c.times        = 5
              c.within       = 5.minute
              c.transition   = :unmonitored
              c.retry_in     = 10.minutes
              c.retry_times  = 5
              c.retry_within = 2.hours
            end
          end
        end

    Here is my Unicorn configuration:

        # Unicorn configuration file
        APP_ROOT = File.expand_path '../', File.dirname(__FILE__)

        worker_processes 8
        preload_app true
        pid "#{APP_ROOT}/tmp/unicorn.pid"
        listen 8001

        stderr_path "#{APP_ROOT}/log/unicorn.stderr.log"
        stdout_path "#{APP_ROOT}/log/unicorn.stdout.log"

        before_fork do |server, worker|
          old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin"
          if File.exists?(old_pid) && server.pid != old_pid
            begin
              Process.kill("QUIT", File.read(old_pid).to_i)
            rescue Errno::ENOENT, Errno::ESRCH
              # someone else did our job for us
            end
          end
        end

    I have checked the God status logs, but it appears CPU and memory usage are never out of bounds. I also have something to kill high-memory workers, which can be found on the GitHub blog page here. When running tail -f on the Unicorn logs I see some requests, but they're few and far between, whereas I was seeing around 60-100 a second before this trouble arrived. The log also shows workers being reaped and started as expected. So my question is, how would I go about debugging this? What are the next steps I should take? I'm extremely baffled that the server sometimes responds quickly, but at other times is very slow for long periods (which may or may not be peak traffic times). Any advice is much appreciated.

    Read the article

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that explain how to set up nginx to redirect HTTP requests to HTTPS. Many are outdated, or just flat-out wrong.

        server {
          listen *:80;
          server_name <%= @fqdn %>;
          #root /nowhere;
          #rewrite ^ https://$server_name$request_uri? permanent;
          #rewrite ^ https://$server_name$request_uri permanent;
          #return 301 https://$server_name$request_uri;
          #return 301 http://$server_name$request_uri;
          #return 301 http://192.168.33.10$request_uri;
          return 301 http://$host$request_uri;
        }

        server {
          listen *:443 ssl default_server;
          server_name <%= @fqdn %>;
          server_tokens off;
          root <%= @git_home %>/gitlab/public;

          ssl on;
          ssl_certificate <%= @gitlab_ssl_cert %>;
          ssl_certificate_key <%= @gitlab_ssl_key %>;
          ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
          ssl_ciphers AES:HIGH:!ADH:!MDF;
          ssl_prefer_server_ciphers on;

          location / {
            # serve static files from defined root folder;.
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

          # if a file, which is not found in the root folder is requested,
          # then the proxy pass the request to the upsteam (gitlab puma)
          location @gitlab {
            proxy_read_timeout 300;    # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_redirect off;
            etc....

    I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10, whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect HTTP requests to HTTPS?

        tailf /var/log/nginx/access.log
        192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
        192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"

        tailf /var/log/nginx/gitlab_error.log
        2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10"

    Resources:

    http://wiki.nginx.org/Pitfalls
    How to make nginx redirect
    How to force or redirect to SSL in nginx?
    nginx ssl redirect
    Nginx & Https Redirection
    https://www.tinywp.in/301-redirect-wordpress/
    How to force or redirect to SSL in nginx?

    Read the article

  • Log requests in apache upon arrival

    - by Patrick Cornelissen
    Is it possible to log requests in Apache when they arrive, rather than when they have been processed? We have a pretty big web-based application, and the way it's currently logged is sometimes confusing to read, because each request with its subrequests is logged "upside down": Apache logs a request after it has been processed, so the initial request is the last entry for that "query". We're wondering if there is a way to log on arrival.

    Read the article

  • Why is this the output of this python program?

    - by Andrew Moffat
    Someone on #python suggested that it's searching for module "herpaderp" and finding all the ones listed as it searches. If this is the case, why doesn't it list every module on my system before raising ImportError? Can someone shed some light on what's happening here?

        import sys

        class TempLoader(object):
            def __init__(self, path_entry):
                if path_entry == 'test':
                    return
                raise ImportError

            def find_module(self, fullname, path=None):
                print fullname, path
                return None

        sys.path.insert(0, 'test')
        sys.path_hooks.append(TempLoader)

        import herpaderp

    Output:

        16:00:55 $> python wtf.py
        herpaderp None
        apport None
        subprocess None
        traceback None
        pickle None
        struct None
        re None
        sre_compile None
        sre_parse None
        sre_constants None
        org None
        tempfile None
        random None
        __future__ None
        urllib None
        string None
        socket None
        _ssl None
        urlparse None
        collections None
        keyword None
        ssl None
        textwrap None
        base64 None
        fnmatch None
        glob None
        atexit None
        xml None
        _xmlplus None
        copy None
        org None
        pyexpat None
        problem_report None
        gzip None
        email None
        quopri None
        uu None
        unittest None
        ConfigParser None
        shutil None
        apt None
        apt_pkg None
        gettext None
        locale None
        functools None
        httplib None
        mimetools None
        rfc822 None
        urllib2 None
        hashlib None
        _hashlib None
        bisect None
        Traceback (most recent call last):
          File "wtf.py", line 14, in <module>
            import herpaderp
        ImportError: No module named herpaderp

    Read the article

  • Pass HAProxy healthcheck requests as User-agent "LB-Check" to the backend webservers (apache)

    - by Joseph
    I have HAProxy set up in front of webservers (Apache) for load balancing. Health checks for these webservers are also configured in HAProxy:

        option httpchk HEAD /healthcheck.txt HTTP/1.0

    Is it possible to send these healthcheck requests to the backend webservers with an "LB-Check" User-Agent, or some other marker, so that I can distinguish them from other log entries? I don't want to use the "dontlog" option, as I don't want to miss these entries.

    Read the article

  • Using iptables to selectively route outgoing requests?

    - by Olivier
    Hello, I'd like to set up my DD-WRT firewall so that it uses the VPN service from a VPN provider to access a particular set of destinations, and the normal route for all other requests. In detail: giganews.com would be accessed through the VyprVPN VPN, while normal web sites such as Amazon, eBay, et al. would go through the transparent firewall. I've read so many tutorials, but I can't get my head around what the different entities are. Any help? Thanks

    Read the article

  • serving mp3s to mobile devices is flooding nginx with partial requests

    - by drumfire
    I am serving mp3s with a minimalistic nginx server. What I see in my log files is that there are a lot of requests, in particular from AppleCoreMedia and sometimes Android user agents, that flood the server with short requests. Sometimes they keep requesting to download the same partial content for a very long time; sometimes more than an hour. For example:

        "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
        "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
        "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
        [...]

    I also get a lot, though not as many, of these:

        "-" 400 0 "-"
        "-" 400 0 "-"

    The IP addresses are always from clients that start downloading shortly after that request, and they usually have roughly the same user agent as in the first example. I have enabled server throttling and connection limits in nginx to at least somewhat limit the huge number of log entries from equivalent IPs. There was a performance issue when I saw the same behaviour on the previous server, which used Apache: when Apache could no longer handle the connections from the increasing number of clients effectively, that server was effectively DDoSed. I then installed nginx on a better server and moved the site. There was no bandwidth issue with already-connected clients, and I don't know whether the already-connected clients were using more than one connection at a time. Please tell me:

    Are clients that appear to get stuck on a download a Bad Thing™? I've heard people say their mobile bandwidth use was much higher than they could account for; I'm thinking this type of client behaviour can account for that, and it costs us more bandwidth too.

    Which up-to-date alternatives exist out there that can handle serving this type of data better than plain HTTP?

    Useful general insights for someone who just came into this field straight out of the late 90s are also welcome. :-)

    Read the article

  • Ajax/PHP - should I use one long running script or polling?

    - by Brian
    Hello, I have a PHP script that is kicked off via Ajax. This PHP script uses exec() to run a separate PHP script via the shell. The script that is called via exec() may take 30 seconds or so to complete. I need to update the UI once it is finished. Which of these options is preferred?

    a) Leave the HTTP connection open for the 30 seconds and wait for it to finish.
    b) Have exec() run the PHP script in the background and then use Ajax polling to check for completion (every 5 seconds or so); see the sketch below.
    c) Something else that I haven't thought of.

    Thank you, Brian
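
    A minimal sketch of option (b), with illustrative names (longtask.php and the /tmp status file are assumptions, not a fixed convention):

        <?php
        // kickoff.php -- start the long job in the background and return at once.
        // Redirecting output and appending '&' detaches the child process,
        // so this request returns immediately instead of hanging for 30s.
        $job = uniqid('job');
        exec('php /path/to/longtask.php ' . escapeshellarg($job) . ' > /dev/null 2>&1 &');
        echo $job;

        // status.php -- the Ajax poller requests this every ~5 seconds.
        // longtask.php is assumed to write "done" to /tmp/<job>.status when it finishes.
        $job  = basename($_GET['job']);   // crude sanitisation, enough for a sketch
        $file = "/tmp/{$job}.status";
        echo is_file($file) ? trim(file_get_contents($file)) : 'running';

    Option (a) also works, but it ties up a PHP worker for the full duration and risks gateway or browser timeouts, which is why polling (or a proper job queue) is usually preferred for 30-second tasks.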

    Read the article
