Search Results

Search found 1902 results on 77 pages for 'nginx'.

  • g-wan - reproducing the performance claims

    - by user2603628
    Using gwan_linux64-bit.tar.bz2 under Ubuntu 12.04 LTS: after unpacking and running gwan, I pointed wrk at it (using an empty file, null.html):

        wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1:8080/null.html
        Running 20s test @ http://127.0.0.1:8080/null.html
          2 threads and 100 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency     11.65s    5.10s    13.89s   83.91%
            Req/Sec      3.33k    3.65k    12.33k   75.19%
          125067 requests in 20.01s, 32.08MB read
          Socket errors: connect 0, read 37, write 0, timeout 49
        Requests/sec:   6251.46
        Transfer/sec:   1.60MB

    Very poor performance; in fact there seems to be some kind of huge latency issue. During the test gwan is 200% busy and wrk is 67% busy. Pointing wrk at nginx instead, wrk is 200% busy and nginx is 45% busy:

        wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1/null.html
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency    371.81us  134.05us  24.04ms   91.26%
            Req/Sec     72.75k     7.38k  109.22k    68.21%
          2740883 requests in 20.00s, 540.95MB read
        Requests/sec: 137046.70
        Transfer/sec:   27.05MB

    Pointing weighttp at nginx gives even faster results:

        /usr/local/bin/weighttp -k -n 2000000 -c 500 -t 3 http://127.0.0.1/null.html
        weighttp - a lightweight and simple webserver benchmarking tool
        starting benchmark...
        spawning thread #1: 167 concurrent requests, 666667 total requests
        spawning thread #2: 167 concurrent requests, 666667 total requests
        spawning thread #3: 166 concurrent requests, 666666 total requests
        progress: 9% done ... progress: 99% done
        finished in 7 sec, 13 millisec and 293 microsec, 285172 req/s, 57633 kbyte/s
        requests: 2000000 total, 2000000 started, 2000000 done, 2000000 succeeded, 0 failed, 0 errored
        status codes: 2000000 2xx, 0 3xx, 0 4xx, 0 5xx
        traffic: 413901205 bytes total, 413901205 bytes http, 0 bytes data

    The server is a virtual 8-core server on dedicated (bare-metal) hardware, under KVM. Where do I start looking to identify the problem gwan is having on this platform? I have tested lighttpd, nginx and node.js on this same OS, and the results are all as one would expect. The server has been tuned in the usual way: expanded ephemeral port range, increased ulimits, adjusted TIME_WAIT recycling, etc.
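
    One hedged first step is to rule out client/server contention on the box itself: gwan pegged at 200% while wrk idles at 67% on the same machine suggests the two may be fighting over (or mis-pinned to) cores, and G-WAN reportedly manages its own CPU affinity. A diagnostic sketch, pinning each side to separate cores (the core numbers are arbitrary):

        # pin the server to cores 0-3 and the load generator to cores 4-7
        taskset -c 0-3 ./gwan &
        taskset -c 4-7 wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1:8080/null.html

    If the latency collapse persists with clean affinity, running wrk from a second machine would separate the benchmark tool from the server entirely.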

  • github url style

    - by Alex Le
    Hi all, I want users within my website to have their own URL like http://mysite.com/username (similar to GitHub; e.g. my account is http://github.com/sr3d). This would help with SEO, since every profile sits under the same domain, as opposed to the sub-domain approach. My site runs on Rails and Nginx/Passenger. Currently I have a solution using a bunch of rewrites in the nginx.conf file and hard-coded controller names (with namespace support as well). I can share the nginx.conf here if you want to take a look. I'd like to know if there's a better way of making the URL pretty like that. (If you can suggest a better place to post this question, please let me know.) Cheers, Alex
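
    For what it's worth, this is usually handled inside Rails rather than with nginx rewrites: a low-priority catch-all route at the bottom of the routing table. A minimal sketch in current Rails syntax, assuming a UsersController#show action (the controller name and the reserved /login route are illustrative):

        # config/routes.rb -- fixed routes first, so they win over the catch-all
        get '/login', to: 'sessions#new'

        # /:username falls through to the profile page
        get '/:username', to: 'users#show',
            constraints: { username: /[a-zA-Z0-9_-]+/ }

    The one design cost is that usernames share a namespace with top-level paths, so registration has to reserve words like "login" or "about".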

  • what is the responsibility of nginx or apache in a rails application?

    - by user2177410
    How do Rails applications actually work? Let's say we have nginx + Passenger + Ubuntu. My questions are: What is nginx actually doing? How does it transfer requests to the Rails app? What is Passenger responsible for? And what is Rack? How can a Rails app run on just WEBrick, without Apache? Please don't give me answers like "nginx processes the requests"; I need something deeper, or maybe you know a source where I can read about this.
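
    In rough terms: nginx accepts the HTTP connections, serves static files, and shields the app from slow clients; Passenger (as an nginx module) starts and supervises the Ruby processes and hands each request to them; Rack is the small Ruby interface both sides agree on, which is also why WEBrick alone is enough - it is simply another server that speaks Rack. A complete, minimal Rack app as a sketch of that interface:

        # config.ru -- the entire contract between a Ruby app and any Rack
        # server (WEBrick, Passenger, Unicorn, ...): a callable that takes an
        # env hash and returns [status, headers, body]
        run lambda { |env|
          # env describes the request: method, path, headers, and so on
          [200, { 'Content-Type' => 'text/plain' },
           ["You requested #{env['PATH_INFO']}\n"]]
        }

    Running rackup against this file serves it on WEBrick by default; a Rails app is the same idea with a vastly bigger callable.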

  • ApacheBench (ab) - SSL read failed - closing connection

    - by chantheman
    When I run ab against my website I get a ton of these responses: "SSL read failed - closing connection", over and over, and only some requests succeed. I am on a MacBook Pro running OS X 10.7.2. What is weird is that someone else runs the same test on a very similar machine (not running Lion) right next to me and has no problems. Any ideas? I am sure this is something on my machine, because I get ab to work all over the place. The command is simply: ab -c 100 -n 1000 https://mywebsite.com One other thing: when I look in the nginx logs, I do see some requests coming in from ab, so it is partly working. Also, the logs do not show the failed ones.
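
    One variable worth eliminating is the SSL/TLS protocol negotiation in Lion's ab build, since the failing and working machines differ mainly in OS version. ab can pin the protocol with its -f switch; a diagnostic sketch, not a fix:

        # force one protocol version to see whether negotiation is the problem
        ab -f TLS1 -c 100 -n 1000 https://mywebsite.com/
        # and drop concurrency: failures that appear only at -c 100 point
        # toward load-related behavior rather than the handshake itself
        ab -f TLS1 -c 1 -n 100 https://mywebsite.com/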

  • Recommendation for PHP-FPM pm.max_children, PHP-FPM pm.start_servers and others

    - by jaypabs
    I have the following server:

        Intel® Xeon® E3-1270 v2, single processor - quad-core dedicated server
        CPU speed: 4 x 3.5 GHz, 8 MB Smart Cache
        Motherboard: SuperMicro X9SCM-F
        Total cores: 4 cores + 8 threads
        RAM: 32 GB DDR3 1333 ECC
        Hard drive: 120 GB

    I am using Ubuntu 12.04 with nginx, PHP and MySQL, managed through ISPConfig 3. Under the ISPConfig 3 website settings I have these default values:

        PHP-FPM pm.max_children = 10
        PHP-FPM pm.start_servers = 2
        PHP-FPM pm.min_spare_servers = 1
        PHP-FPM pm.max_spare_servers = 5
        PHP-FPM pm.max_requests = 0

    My question is: what are the recommended settings for the variables above? I ask because I have found people using quite different settings.
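
    There is no single right answer, but the usual sizing heuristic is memory-based: divide the RAM you can spare for PHP by the average resident size of one PHP-FPM worker (measure it with ps or top under real load). A hedged sketch for a 32 GB box, assuming roughly 8 GB budgeted for PHP and ~60 MB per worker - both numbers are assumptions that must be measured, not copied:

        ; pool config, e.g. /etc/php5/fpm/pool.d/www.conf
        ; max_children ~= php_memory_budget / avg_worker_size (8192 / 60 ~= 130)
        pm = dynamic
        pm.max_children = 130
        pm.start_servers = 20
        pm.min_spare_servers = 10
        pm.max_spare_servers = 30
        ; recycle each worker after N requests to contain slow leaks (0 = never)
        pm.max_requests = 500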

  • Setting php values in php-fpm confs instead of php.ini

    - by zsero
    I'd like to set values in php-fpm conf files that are normally set in php.ini. I'm using nginx. I've created the following settings, but I'm not sure they will work:

        php_value[memory_limit] = 96M
        php_value[max_execution_time] = 120
        php_value[max_input_time] = 300
        php_value[post_max_size] = 25M
        php_value[upload_max_filesize] = 25M

    Is this OK like this? What happens when a value is set both in php.ini and in a php-fpm conf file - does the php-fpm value override the ini one? Finally, isn't it a problem that this way I can set different values for each virtual host? I mean, php.ini seems like a global setting, while this is host-dependent. Can different hosts run with different memory limits, etc.?
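
    For what it's worth, the precedence works the way the question guesses: php.ini supplies the defaults, php_value[] in a pool config overrides php.ini for that pool, and a script's ini_set() can still override php_value at runtime. Per-host values are a feature rather than a problem - each vhost can get its own pool. If a setting must be locked down so scripts cannot change it, php-fpm also accepts php_admin_value/php_admin_flag; a short sketch of the distinction:

        php_value[memory_limit] = 96M        ; overridable by ini_set()
        php_admin_value[memory_limit] = 96M  ; locked: ini_set() cannot change it
        php_admin_flag[display_errors] = off ; locked boolean variant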

  • Enabling FastCGI for PHP in Plesk 9.3, or using nginx

    - by Saif Bechan
    I have a server that runs Plesk, where I can set PHP to use one of three options: Apache module, FastCGI application, or CGI application. There is also a separate option that just says "enable fastCGI support". Which option is the best to choose? I run a PHP application with MySQL and some Ajax - heavy reads and writes, on a busy website. The second thing is: would it make much difference if I installed nginx to work alongside this? There is a trick I can use to make nginx listen on port 80 and Apache on port 8080, but I don't know if it is worth my while. Thanks for your time, folks!

  • Benchmarking MySQL on win7

    - by Patrick
    I've set up an nginx server running PHP 5.3.6 and MySQL 5.5.1.3. My computer is an AMD quad-core 9650 with 4 GB RAM and a 500 GB 7200 rpm hard drive. I ran the PHP MySQL Benchmark Tool v0.1 and got the following results:

        Testing a(n) MYISAM table using 100000 rows.
        Successfully created database speedtestdb
        Sucessfully created table speedtesttable
        Table Type Verified: MYISAM .. Done.
        100000 inserts in 19.73628 seconds or 5067 inserts per second. Done.
        100000 row reads in 0.2801 seconds or 357015 row reads per second. Done.
        100000 updates in 4.03876 seconds or 24760 updates per second.

    I'm wondering where this stands as far as performance goes, and what steps, if any, I can take to improve on this. I'm not trying to build anything fantastic, just getting a feel for how best to optimize a web server in this configuration.
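
    Those insert/update rates are often bounded by MyISAM's key cache more than by the hardware. A hedged my.cnf starting point for a 4 GB desktop - the values are illustrative guesses to benchmark against, not tuned recommendations:

        [mysqld]
        key_buffer_size = 512M         # MyISAM index cache, the main MyISAM knob
        bulk_insert_buffer_size = 32M  # speeds up multi-row insert batches
        query_cache_type = 1
        query_cache_size = 64M         # helps repeated identical SELECTs

    Re-running the same benchmark after each change shows which knob actually moves the numbers on this workload.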

  • Operation not permitted when starting Unicorn

    - by fiskeben
    I've created an nginx/Unicorn/Capistrano setup on Ubuntu (Amazon EC2) by following mostly this guide. I believe everything is set up as it should be, but when I start Unicorn I get (a LOT of) this error in the log:

        E, [2012-09-08T08:57:20.658092 #12356] ERROR -- : Operation not permitted (Errno::EPERM)
        /home/deployer/apps/bridgekalenderen.no/shared/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/worker.rb:82:in `initgroups'

    I can see it's related to the user's permissions, but I can't figure out what I've left out. The server starts up nicely if I start it with sudo (or rvmsudo, really). The user has sudo capabilities, and I have chmod'ed the app several times, so the file permissions there should be OK. The Unicorn socket in /tmp is owned by the deployer user, so that shouldn't be the problem either. Does anybody have a clue where to look?
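
    The initgroups frame in the backtrace is the hint: Unicorn only calls it when the configuration asks workers to switch users, and initgroups(2) requires root. A hedged guess at the mismatch, assuming a user directive in config/unicorn.rb:

        # config/unicorn.rb
        # With this line present, unicorn must be STARTED as root so the
        # workers can drop privileges to "deployer" -- that drop is the
        # initgroups call that raises EPERM when you start as a plain user.
        user 'deployer', 'deployer'

    Two consistent setups: start Unicorn via sudo and keep the user line (workers still end up running as deployer), or start it directly as deployer and delete the user line so no privilege switch is attempted.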

  • How to restart php-cgi automatically with spawn-fcgi

    - by mrm8
    I'm running nginx with PHP as FastCGI. It works just fine; however, php-cgi keeps exit()ing after serving 500 requests. I tried increasing that value (PHP_FCGI_MAX_REQUESTS), and that worked, but it seems like a workaround. Then I set it to 0, and it hasn't exit()ed yet - but I assume there's a reason why php-cgi should be restarted periodically. At the moment I'm starting php-cgi with spawn-fcgi, and when the PHP process exits, spawn-fcgi exits too. Now, is there a way to restart PHP automatically (without dirty hacks like while [ 1 ]; do spawn-fcgi; done etc.)?
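
    One clean pattern is to keep php-cgi in the foreground and let a real process supervisor own the restarts - spawn-fcgi's -n (no-fork) switch exists exactly for this. A hedged supervisord sketch; the paths, socket location and user are assumptions:

        ; /etc/supervisor/conf.d/php-cgi.conf
        [program:php-cgi]
        command=/usr/bin/spawn-fcgi -n -s /var/run/php-cgi.sock -u www-data -g www-data -- /usr/bin/php-cgi
        autostart=true
        autorestart=true    ; respawn whenever php-cgi exits
        environment=PHP_FCGI_MAX_REQUESTS="500"

    This keeps the periodic recycle (the 500-request exit guards against memory leaks in scripts and extensions) while making the restart invisible. The longer-term answer is php-fpm, which supervises and recycles its own workers.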

  • Suggest a good php-fpm configuration

    - by Werulz
    I am configuring a server for a friend. The server has the following specs:

        8 GB RAM
        Quad-core processor
        1 TB HDD
        100 Mbps port

    However, all PHP files load very slowly. I ran a speed test and the server takes 16 seconds to deliver the FIRST byte. I strongly believe it's my php-fpm configuration. The server uses nginx and PHP only - no MySQL etc. My current php-fpm configuration:

        pm.max_children = 50
        pm.start_servers = 10
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35

    Server load and RAM usage are perfectly fine. Please suggest a good configuration for this server.

    UPDATE: This configuration works fine:

        pm.max_children = 20
        pm.start_servers = 7
        pm.min_spare_servers = 5
        pm.max_spare_servers = 10
        pm.max_requests = 100

    The problem with first-byte load time is solved. However, after 15-20 hours the first-byte time increases gradually, and I have to reload php-fpm to get a small load time again. Based on my conf above, what should I modify so that the first-byte time stays small and I don't have to keep restarting it? :P
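
    Since the slowdown builds up over hours and a reload clears it, it would help to see what the workers are stuck on just before the reload. php-fpm's per-pool slow log captures a stack trace of any long-running request; a hedged addition to the pool config (the paths are assumptions):

        ; trace any request that runs longer than 5 s
        slowlog = /var/log/php-fpm/www.slow.log
        request_slowlog_timeout = 5s
        ; hard-kill runaway requests so they cannot pile up and starve the pool
        request_terminate_timeout = 30s

    If the slow log points at leaking scripts, lowering pm.max_requests further makes the recycle more aggressive, at the cost of more frequent forks.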

  • spawn-fcgi / FastCGI PHP crashes without traces in logs on Gentoo

    - by user39046
    Hello, I recently moved from Apache to an nginx/FastCGI solution. I had it running on a Fedora system with no problems, but since I moved everything to Gentoo, the spawn-fcgi / FastCGI PHP daemon dies, and I can't find any error reports in /var/log/messages, so I don't know why this happens. I've noticed that the FastCGI setup on Gentoo is somehow different from the Fedora distro - it has different conf files and init.d startup scripts. Can someone help me make it more stable? The number of requests I get is no different from what I had on Fedora, and I use the default conf that comes with the distro - and after a few hours it simply dies. Thank you very much.

  • Server Configuration / Important Parameters for 500 Req/Second

    - by Sparsh Gupta
    I am configuring a server to be used as an nginx server for a very heavy-traffic website. It is expected to receive traffic from a large number of IP addresses simultaneously: 500 req/second, with at least 20 million unique IPs connecting. One of the problems I noticed on my previous server was related to iptables / ip_conntrack. I am not familiar with this behaviour and would be glad to know which parameters of an Ubuntu / Debian (32/64-bit) machine I should tweak to get maximum performance from the server. I can put a lot of RAM in the server, but the mission-critical metric is response time. We ideally don't want any connection hanging, timing out or waiting, and want the lowest possible overall response times. P.S. We are also looking for a kick-ass freelance sysadmin who can help us figure out and set all this up. Reach me in case you have some spare time and are interested in working on some very heavy-traffic website servers.
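
    The conntrack pain is the classic one at this rate: every tracked connection occupies a slot, and when the table fills, new packets are dropped. A hedged sysctl starting point - ceilings to raise and then measure, not tuned values:

        # /etc/sysctl.conf
        net.netfilter.nf_conntrack_max = 1048576   # conntrack table ceiling
                                                   # (net.ipv4.netfilter.ip_conntrack_max on older kernels)
        net.ipv4.ip_local_port_range = 1024 65535  # widen ephemeral ports
        net.ipv4.tcp_tw_reuse = 1                  # reuse TIME_WAIT sockets for outbound connects
        net.core.somaxconn = 4096                  # larger accept() backlog
        net.core.netdev_max_backlog = 4096         # per-NIC packet queue before the stack

    The more radical option is to stop tracking web traffic altogether (iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK), at the price of losing stateful firewall rules for port 80.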

  • Issuing a Varnish purge through CloudFlare to Varnish

    - by Michael
    I've been working on this for a while and can't seem to find any solution. I have Varnish sitting in front of my nginx server, with CloudFlare sitting in front of that. When I issue curl -X PURGE host, CloudFlare picks it up and of course denies it with a 503 error. If I use direct.host to bypass CloudFlare, it hits the Varnish server, which accepts the request - but it does nothing, because direct.host isn't the hostname visitors use, so there is nothing in the cache for that URL. I am using WordPress, and the WordPress Varnish Purge plugin says to add the following line to wp-config.php:

        define('VHP_VARNISH_IP','127.0.0.1');

    This is specifically meant to work with proxy servers and/or CloudFlare, to make sure the request goes to the Varnish server rather than CloudFlare, but it doesn't seem to help. Has anyone seen this before, and do you have any ideas?
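
    A purge never needs to traverse CloudFlare at all: aim the request at the Varnish box directly but present the public hostname in the Host header, so the cache key matches what visitors populated. A hedged sketch (the IP and hostname are placeholders):

        # PURGE sent straight to Varnish, with the real site's Host header
        curl -X PURGE -H "Host: www.example.com" http://203.0.113.10/some/page/

    The plugin's VHP_VARNISH_IP does the same job, but it must hold the address Varnish actually listens on - 127.0.0.1 is only correct when WordPress and Varnish share the same machine - and Varnish's vcl_recv must still accept PURGE from that source.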

  • how to best configure SYN flood protection in csf while keeping web response fast

    - by Binh Nguyen
    My server goes down randomly 4-5 times a day because the load rises very quickly. I have installed csf, and with some configuration the server is now stable, with load around 5. BUT the big issue is that real users find it very hard to access the website, especially from the IE browser (you can test at xaluan.com). The following is the config used in csf:

        SYNFLOOD = "1"
        SYNFLOOD_RATE = "100/s"
        SYNFLOOD_BURST = "10"
        CONNLIMIT = "80;30"
        PORTFLOOD = "80;tcp;70;5"
        CT_LIMIT = "29"
        # other settings left at their defaults

    I have played around with this config for a week but still haven't found the right balance. If I increase the rate to SYNFLOOD_RATE = "140/s" or more, the website responds very fast, but the side effect is that the server load also rises very fast - normally 20, and up to a few hundred at peak time. What I need is fast response times while the load stays low. Please help, thanks. PS: the server runs an nginx frontend, Apache, MySQL and PHP; the home page has around 70 elements, which the browser caches on first access.
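
    Since nginx already fronts the stack, a hedged alternative is to rate-limit at the HTTP layer instead of dropping SYNs in the kernel; that tends to be gentler on legitimate browsers (IE included) that open many parallel connections for a 70-element page. An illustrative nginx snippet - the zone size, rate and burst are starting guesses:

        # http block: 10 MB of per-IP state, 10 requests/second per client
        limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

        server {
            location / {
                # allow a burst covering one page-load's worth of assets
                limit_req zone=perip burst=100;
            }
        }

    With floods absorbed in nginx, the kernel-level SYNFLOOD_RATE can be left loose enough that real handshakes are never dropped.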

  • How do I deny all requests not from cloudflare?

    - by phillips1012
    I've recently suffered denial-of-service attacks from multiple proxy IPs, so I installed CloudFlare to prevent this. Then I started noticing that the attackers bypass CloudFlare by connecting directly to the server's IP address and forging the Host header. What is the most performant way to return 403 for connections that aren't from the 18 IP addresses used by CloudFlare? I tried denying all and then explicitly allowing the CloudFlare IPs, but this doesn't work, because I've set things up so that CF-Connecting-IP provides the address that the allow/deny rules test against. I'm using nginx 1.6.0.
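
    That is exactly the trap: once the realip module restores the visitor address from CF-Connecting-IP, nginx's allow/deny tests the restored address, not the connecting proxy. One way out is to filter a layer lower, before nginx ever sees the packet; a hedged iptables sketch (the two ranges shown are placeholders - use CloudFlare's current published list, one ACCEPT per range):

        iptables -N CLOUDFLARE
        iptables -A INPUT -p tcp -m multiport --dports 80,443 -j CLOUDFLARE
        iptables -A CLOUDFLARE -s 103.21.244.0/22 -j ACCEPT
        iptables -A CLOUDFLARE -s 103.22.200.0/22 -j ACCEPT
        # ... remaining published ranges ...
        iptables -A CLOUDFLARE -j DROP

    This also performs better than an nginx-level check, since forged direct connections are dropped before a worker ever accepts them.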

  • Lots of 408 Request Timed Out from same IPs

    - by GreatFire
    Web server: nginx. Checking our log files, there are many log entries for connections that:

        - take 59-61 seconds
        - send an empty request (or at least none is logged)
        - result in a 408 response (request timed out)
        - do not contain any http_user_agent
        - originate from a limited number of IPs

    We are monitoring average response times, and these entries obviously inflate our statistics. Apart from that, though, is this a problem? Any idea why it is occurring? Does it suggest that somebody is intentionally messing with us? What should we do?
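
    The 59-61 second spread is a strong clue: nginx's client_header_timeout defaults to 60 s, so these look like connections that were opened but never sent a request - something browsers do routinely as speculative pre-connects, and scanners produce as well. If the goal is just cleaner stats, a hedged trim of the two relevant knobs:

        # server or http block
        client_header_timeout 20s;      # default 60s, hence the ~60 s entries
        reset_timedout_connection on;   # reset the socket instead of lingering

    The requests were never real page views, so shortening the timeout costs legitimate visitors nothing beyond redoing an occasional pre-connect.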

  • Handling a range of CNAMEs

    - by Imran
    We have different sets of CNAMEs pointing to different subdomains. These subdomains (a.domain.com, b.domain.com) point to different IPs on different machines:

        # Server A
        a1.domain.com pointing to a.domain.com
        a2.domain.com pointing to a.domain.com
        ..
        aN.domain.com pointing to a.domain.com

        # Server B
        b1.domain.com pointing to b.domain.com
        b2.domain.com pointing to b.domain.com
        ..
        bN.domain.com pointing to b.domain.com

    Currently we have to add individual CNAME entries (e.g. a1 .. aN) against a single subdomain (a.domain.com), and we repeat this process for every new server, which is in effect another subdomain (e.g. c.domain.com). Is there a way to specify a range of CNAMEs (e.g. [a1..a25].domain.com pointing to a.domain.com) instead of adding separate CNAME entries? Is there any possibility of handling this at the DNS or webserver (Apache or nginx) level?
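
    If the zone is served by BIND, the $GENERATE directive does exactly this range expansion; a hedged zone-file sketch (zone name and range assumed):

        ; in the domain.com zone file - expands to a1..a25 CNAME a, etc.
        $GENERATE 1-25 a$ CNAME a
        $GENERATE 1-25 b$ CNAME b

    At the webserver level, nginx can match the whole family with a regex server_name (e.g. server_name ~^a\d+\.domain\.com$;), but the DNS records still have to resolve, so that is usually paired with a wildcard *.domain.com record rather than replacing the DNS entries.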

  • Differences in memory consumption between two identical D7 sites?

    - by aendrew
    I'm running Drupal on a news site that has a lot of different View blocks on the front page (~5 total, all cached). In trying to reduce the memory footprint of the site, I've checked out source from SVN to a local development install to try to convert some of those blocks into more optimized code. Here's the weird thing: the Devel module lists memory consumption at 50 MB on the production site (running nginx, PHP 5.2.17, XCache and Zend Optimizer) but only 14 MB on my development site (running Apache 2, PHP 5.2.13 and XCache). These are nearly identical versions of the same site - frankly, the production site should use even less memory, as I've disabled some of the modules that run on the dev site. Any idea why this might be the case?

  • Serving Meteor on main domain and Apache on subdomain independently

    - by kinologik
    I'm running a Meteor server on my Ubuntu server, but problems arise when I try to have Apache serve a subdomain on the same machine:

        main.domain.com - Meteor
        sub.domain.com  - Apache

    Meteor is running on port 80. I previously tried running Meteor on port 3000 behind an nginx reverse proxy, but Meteor started to behave badly (tcp/websocket issues) and I spent too many evenings and nights persisting with it for my own sake. So I reverted to having Meteor as the main server (the app works fine) and then installed Apache to serve my subdomain. The problem is that Apache cannot also listen on port 80, since that conflicts with my Meteor server. From experience I try to stay away from reverse-proxying Meteor, but I'm not knowledgeable enough to get Apache to serve only my subdomain without taking over everything on port 80. How can I have both services coexist in this kind of setup?
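
    For what it's worth, the websocket trouble behind nginx usually comes down to two missing headers: nginx (1.3.13 and later) can proxy WebSockets, but only when the Upgrade handshake is forwarded explicitly. A hedged sketch that would let one nginx on port 80 serve both names, with Meteor back on 3000 and Apache moved to 8080:

        server {
            server_name main.domain.com;
            location / {
                proxy_pass http://127.0.0.1:3000;        # Meteor
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;  # forward the WebSocket
                proxy_set_header Connection "upgrade";   # upgrade handshake
                proxy_set_header Host $host;
            }
        }
        server {
            server_name sub.domain.com;
            location / {
                proxy_pass http://127.0.0.1:8080;        # Apache on an alternate port
                proxy_set_header Host $host;
            }
        }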

  • Caching without file extensions

    - by Sigurs
    I'm trying to use Varnish to show non-logged-in users a cached version of my website. I can perfectly detect whether the user is logged in or out, but I can't cache pages without extensions. There is no file extension because nginx rewrites the URL to a PHP script (so caching on .php does not work). For example, I'd like Varnish to cache:

        example.com
        example.com/forum/
        example.com/contact/

    I have tried:

        if (req.request == "GET" && req.url ~ "^/") {
            return(lookup);
        }
        if (req.request == "GET" && req.url ~ "") {
            return(lookup);
        }
        if (req.request == "GET" && req.url ~ "/") {
            return(lookup);
        }

    but nothing seems to work. Any help?
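
    All three of those regexes match every URL (every request path contains "/"), so the pattern isn't what blocks the lookup - most often vcl_recv still falls through to pass because the request carries cookies. A hedged Varnish sketch that keys the decision on the login cookie instead of the URL; the cookie name is an assumption:

        sub vcl_recv {
            if (req.request == "GET" && req.http.Cookie !~ "logged_in") {
                # anonymous visitor: drop cookies so the request is cacheable
                unset req.http.Cookie;
                return (lookup);
            }
            return (pass);   # logged-in users always reach the backend
        }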

  • How to disable proxy cache when query string is empty?

    - by chx
    With nginx I have:

        server {
            listen 1.2.3.4:80;
            proxy_cache_valid 200 302 5m;

            location / {
                try_files $uri @upstream;
                root $root;
            }
        }

    When I go to http://example.com/foobar it generates a redirect to http://example.com/foobar?filter_distance=50&... which is visitor-dependent, so I would like that redirect not to be cached. I need to bypass the cache when the query string is empty. I am a bit lost, because location /foobar will match both.
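
    The decision doesn't have to live in a location block at all: nginx exposes the query string as $args, and proxy_cache_bypass / proxy_no_cache accept arbitrary variables. A hedged sketch (the upstream name is an assumption):

        # http block: empty query string maps to 1 (bypass), anything else to 0
        map $args $no_args {
            ""      1;
            default 0;
        }

        # inside the server block
        location @upstream {
            proxy_pass http://backend;
            proxy_cache_bypass $no_args;   # don't answer from the cache
            proxy_no_cache     $no_args;   # and don't store the response
        }

    With this in place, /foobar (no query string) always goes to the backend and its per-visitor redirect is never cached, while /foobar?filter_distance=50 stays cacheable.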

  • How do you cache web pages with a personalized header using a caching reverse proxy such as Squid, Varnish or Nginx?

    - by Continuation
    Pretty much every page of my website is dynamically generated, but the pages don't change that frequently (kind of like a forum page). So I'd like to cache them using a caching reverse proxy such as Squid, Varnish or Nginx. The problem is that each of my logged-in users sees a personalized header saying "Welcome John Doe. Logout" in the upper-right corner of the page (just like Server Fault), while users who aren't logged in see a header that says "Login" instead. So even though every user sees the same page in general, they all see slightly different versions due to that personalized header. Is there any way to cache the "main" part of the page and serve it from the cache, while generating the personalized header dynamically for each individual user? This must be a very common problem. How is it solved in general?
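
    This is the textbook case for Edge Side Includes: cache the page shell and let the proxy fetch the small personalized fragment separately per user. Squid and Varnish both implement an ESI subset. A hedged sketch - the fragment URL and TTLs are assumptions - with the page template containing:

        <!-- cached shell; the proxy stitches the header fragment in -->
        <esi:include src="/user/header" />

    and, for Varnish 3, a fetch policy along these lines:

        sub vcl_fetch {
            if (req.url !~ "^/user/header") {
                set beresp.do_esi = true;    # parse ESI tags in cached pages
                set beresp.ttl = 5m;         # cache the shell
            } else {
                return (hit_for_pass);       # never cache the personal fragment
            }
        }

    The common alternative is to cache the page fully anonymous and fill in the header client-side with a small AJAX call, which also works with proxies that lack ESI.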

  • How to spawn multiple FastCGI processes with spawn-fcgi?

    - by Shrinath
    We have nginx installed and would like to use spawn-fcgi to launch multiple ".fcgi" programs, written in C. How do we spawn all the files in one go? Edit - this is the scenario: I have 3 different programs to serve. Let's say I have search results from Google, Yahoo and Bing, and I want to show 3 columns hosting the results of the above providers. I have 3 fcgi programs, one for each provider. How do you suggest I put all 3 into action?
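
    spawn-fcgi manages one binary per invocation, so "all in one go" is simply three invocations, each bound to its own socket, with nginx routing one location to each. A hedged sketch - paths, socket names and the user are assumptions:

        #!/bin/sh
        spawn-fcgi -s /var/run/google.sock -u www-data -- /srv/fcgi/google.fcgi
        spawn-fcgi -s /var/run/yahoo.sock  -u www-data -- /srv/fcgi/yahoo.fcgi
        spawn-fcgi -s /var/run/bing.sock   -u www-data -- /srv/fcgi/bing.fcgi

    and on the nginx side:

        location /google { include fastcgi_params; fastcgi_pass unix:/var/run/google.sock; }
        location /yahoo  { include fastcgi_params; fastcgi_pass unix:/var/run/yahoo.sock; }
        location /bing   { include fastcgi_params; fastcgi_pass unix:/var/run/bing.sock; }

    The page itself can then be assembled client-side, with each column populated from its own location.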

  • What solutions do I have to enforce a memory limit on a PHP server?

    - by Zulgrib
    I would like to enforce a memory limit on a per-folder basis (applied to subfolders as well), without the user being able to change the limit. I know I can disable ini_set, and I know I can enforce a hard limit or deny ini_set with Suhosin. With the first option, I doubt it blocks changes made from a user.ini file; with the second, the user may still be able to raise the value up to the hard limit I enforce with Suhosin. In both cases, I would prefer not to block ini_set entirely, because it may have legitimate uses for other settings. In case it matters, I'm using PHP 5.4.4 with nginx (PHP in FPM mode).
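
    Since PHP runs under FPM, a hedged per-directory approach is one pool per directory tree: php_admin_value pins only the settings you list (neither ini_set() nor .user.ini can touch them), leaving ini_set usable for everything else. The pool and directory names here are assumptions:

        ; /etc/php/fpm/pool.d/untrusted.conf
        [untrusted]
        listen = /var/run/php-fpm-untrusted.sock
        user = www-data
        group = www-data
        pm = dynamic
        pm.max_children = 10
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 4
        ; locked: cannot be raised by ini_set() or .user.ini
        php_admin_value[memory_limit] = 64M

    In nginx, route that folder's location block to this socket with fastcgi_pass, and every script under the folder (and its subfolders) inherits the enforced limit.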
