Search Results

Search found 10693 results on 428 pages for 'max requests'.


  • Nginx: Loopback connection via PHP's getimagesize crashes server (Magento's CMS)

    - by Alex
     We were able to trace a problem that crashes our NGINX server running Magento down to the following point. Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line breaks inserted for better readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too
        while sending request to upstream, client: XXXXXXXXX, server: test.local,
        request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1",
        upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

     When checking the code in the debugger, the following call in `Varien_Image_Adapter_Abstract::getMimeType()` never returns:

        # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

     The file name passed in is a URL to the same server that is running the script: a link to a static .gif that does not exist. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif

     Once the above line executes, the NGINX server no longer responds to any subsequent request. After waiting around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but this does not crash; it simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).

  • How do I encapsulate the application server from the web and database servers?

    - by SNyamathi
     So I've been doing some reading and it seems like the best practice would be to have separate database, application, and web servers. There are a few things that I've failed to understand - please feel free to recommend any reading materials that would address these topics.

     Database (assume MySQL) to application server communication: does the database server do any sort of checking on the SQL commands sent / returned, or is it just a "dumb pipe" that responds to SQL commands by spitting back data?

     Application server (assume Tomcat) to web server: almost the reverse here - is it the web server that is more of a pipe to the internet, forwarding requests to the application server and spitting back responses? I'm not wording this well, but I'm trying to ask: is it the application server that is responsible for validating data received from requests? For example: parsing POSTs, validating user logins, encrypting/decrypting data.

     Furthermore, how do these two servers communicate? I'm trying to keep things as flexible as possible here, so while I could write a web server in Java and use Java to communicate between the web and app server, that doesn't sound very modular. What if I want to use Python or some other language to replace the web server later on? What if I want to make a non-web-facing application, used in house, written in C++ or something?

  • Setting up ssh config file with id_rsa through tunnel

    - by Rubens
     I've been struggling to set up a valid configuration to open a connection to a third machine by passing through a second one, using an id_rsa key (which asks me for a password) to connect to that third machine. I've asked this question in another forum, but I've received no answer that could be considered very helpful. The problem, better described, goes as follows:

        Local machine:        user1@localhost
        Intermediary machine: user1@inter
        Remote target:        user2@final

     I'm able to make the entire connection using a pseudo-tty:

        ssh -t inter ssh user2@final

     (this will ask me for the password of the id_rsa file I have on machine "inter"). However, to speed things up, I'd like to set up my .ssh/config file so that I can simply connect to machine "final" using:

        ssh final

     What I've got so far - which does not work - is the following in my .ssh/config file:

        Host inter
            User user1
            HostName inter.com
            IdentityFile ~/.ssh/id_rsa

        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2
            ProxyCommand ssh inter nc %h %p

     The id_rsa file is used to connect to the middle machine (this requires no password typing), and the id_rsa_2 file is used to connect to machine "final" (this one asks for a password). I've tried mixing in some LocalForward and/or RemoteForward fields, and putting the id_rsa files on both the first and second machines, but I could not get any configuration to succeed. Hope somebody can help me here! Regards!

     P.S.: the thread where I've tried to get some help: http://www.linuxquestions.org/questions/linux-general-1/proxycommand-on-ssh-config-file-4175433750/

  • pdns-recursor allocates resources to non-existing queries

    - by azzid
     I've got a lab server running pdns-recursor. I set it up to experiment with rate limiting, so it has been resolving requests openly from the whole internet for weeks. My idea was that sooner or later it would get abused, giving me a real use case to experiment with. To keep track of the usage I set up nagios to monitor the number of concurrent-queries to the server. Today I got notice from nagios that my specified limit had been reached. I logged in to start trimming away the malicious queries I was expecting; however, when I started looking at it I couldn't see the expected traffic. What I found is that even though I have over 20 concurrent-queries registered by the server, I see no requests in the logs. The following commands describe the situation well:

        $ sudo rec_control get concurrent-queries; sudo rec_control top-remotes
        22
        Over last 0 queries:

     How can there be 22 concurrent-queries when the server has 0 queries registered?

     EDIT: Figured it out! To get top-remotes working I needed to set

        #################################
        # remotes-ringbuffer-entries   maximum number of packets to store statistics for
        #
        remotes-ringbuffer-entries=100000

     It defaults to 0, storing no information to base top-remotes statistics on.

  • reverse proxying with NGINX to two back-end servers

    - by aag
     I am trying to learn how to configure the Nginx proxy. All requests from outside (www.external.com) should go to the internal server 10.10.10.16:2080, except for www.external.com/nagios requests, which should go to the internal server 10.10.10.18. My location blocks look as follows:

        location ~* / {
            proxy_buffers 16 4k;
            proxy_buffer_size 2k;
            proxy_buffering off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Accept-Encoding "";
            proxy_pass http://10.10.10.16:2080;
        }

        #
        # nagios server
        location ~* /nagios/ {
            proxy_buffers 16 4k;
            proxy_buffer_size 2k;
            proxy_buffering off;
            # proxy_set_header Host $host;
            # proxy_set_header X-Real-IP $remote_addr;
            # proxy_set_header Accept-Encoding "";
            proxy_pass http://10.10.10.18;
        }

     The first location seems to work fine. However, any request to www.external.com/nagios sends the browser into the eternal pastures. Of course, 10.10.10.18/nagios was tested and works fine. What am I missing?
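     For illustration, a minimal sketch of one commonly used arrangement (addresses taken from the question; whether this fits the rest of the server block is an assumption): with plain prefix locations instead of regex locations, nginx selects the longest matching prefix, so /nagios/ requests are no longer captured by the catch-all.

        # sketch only: no regex locations, so the longest matching prefix wins
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://10.10.10.16:2080;
        }

        # /nagios/ is a longer prefix than /, so it takes precedence for nagios URLs
        location /nagios/ {
            proxy_pass http://10.10.10.18;
        }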

  • Optimizing Apache for large file serving

    - by D_Guy13
     I have a random problem with Apache that I can't quite figure out. Here is my setup: Windows Server 2008 R2, 64-bit, 5 GB RAM, an SSD with 200 MB/s read/write, and a dual core CPU @ 2.1 GHz. A dump from mod_status:

        Server Version: Apache/2.4.7 (Win32) mod_limitipconn/0.24 mod_antiloris/0.5.2 PHP/5.5.9
        Server MPM: WinNT
        Apache Lounge VC11 Server Built: Nov 21 2013 20:13:01
        Current Time: Thursday, 21-Aug-2014 23:38:06 W. Europe Daylight Time
        Restart Time: Thursday, 21-Aug-2014 20:30:47 W. Europe Daylight Time
        Parent Server Config. Generation: 1
        Parent Server MPM Generation: 1
        Server uptime: 3 hours 7 minutes 18 seconds
        Server load: -1.00 -1.00 -1.00
        Total accesses: 283025 - Total Traffic: 1172.2 GB
        25.2 requests/sec - 106.8 MB/second - 4.2 MB/request
        62 requests currently being processed, 388 idle workers

     I am serving large .zip and .iso files using mod_xsendfile (file size range 500 MB - 1.5 GB). The setup works and is running fine, but CPU usage is very unstable; it jumps all the time between 10% and 90%, and the server goes down when it hits 100%. In that case I have to hard-restart the server. The server is outputting traffic at 30 Mbps. Is there anything else I should think about to get more stable CPU usage? Is that CPU usage normal? Can switching to Linux help me achieve better CPU usage?

  • Apache mod_proxy

    - by mhouston100
     Uggh, I'm spewing that I can't figure this out, I'm so frustrated:

        <VirtualHost *:80>
            ServerName domain1.com.au
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/html
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            <Proxy *>
                Order Allow,Deny
                Allow from all
            </Proxy>
            RewriteEngine on
            RewriteCond %{SERVER_PORT} !^443$
            RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]
        </VirtualHost>

        <VirtualHost *:443>
            ServerName domain1.com.au
            SSLEngine on
            SSLCertificateFile /etc/apache2/ssl/owncloud.pem
            SSLCertificateKeyFile /etc/apache2/ssl/owncloud.key
            DocumentRoot /var/www/html
        </VirtualHost>

        <VirtualHost *:*>
            ServerName domain2.com.au
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / https://192.168.1.12/
            ProxyPassReverse / https://192.168.1.12/
        </VirtualHost>

     Not sure if it's clear what I'm trying to do, but I've read and read and READ, and I still can't figure it out. Basically I have a working Apache server with a rewrite to force HTTPS, as seen in the first two VirtualHost entries. I now have a webmail service I set up on another server, under another domain name; however I only have one incoming public IP address. So I'm trying to have any incoming requests for the second domain proxied to the other server to access the webmail, whether it's port 80 or 443. IMAP and POP3 are no problem, I can just forward those ports directly to the correct server. The result of the above configuration is that requests to domain2.com.au (port 80 or 443) are forwarded to https://domain1.com.au. Am I headed in the right direction?

  • Why is my server performance degrading to the point of stopping, periodically?

    - by Pascal Aschwanden
     So, once in a while, I see in Firebug that a request takes over 15 or even 60 seconds to respond, and sometimes never does. Here is what I've ruled out:

     It's not the CPU, because every time I check the server load it's less than 6 for all 3 numbers.
     It's not the memory, because that's fairly low too, less than 50%.
     It's not the I/O anymore, because I've seen the graphs that Joyent sent back to me when I requested them, and they show less than 3 MB of I/O (mostly all reads).
     It's not the SQL performance - I've profiled every last SQL command that runs, and they're all (99.9% of them anyway) running in less than 30 ms; most run in less than 5 ms.

     Oh, and I've been profiling all the script execution times, and even when the problem occurs the script always manages to finish in 50 ms or less (that's 1/20th of a second).

     Now, I do run a lot of ajax calls: 1 every 2 seconds per user, and I have 300+ DAU. But even if all 300 are playing simultaneously, that's still only 150 calls per second max. The only other thing I can think of is that one of my neighbors is funky. The problem is highly intermittent. 99% of the time it works perfectly and there's excellent performance, but 99%+ is not good enough. Eventually the performance gets so bad I have to restart the server, at which point everything is fine again. I've done this about 4 times now. Any ideas?

     Note: this is on Joyent, VPS, intro package, 256 MB of RAM with bursting. Here is the MySQL dump info:

        Traffic                         ø per hour
        Received          18 MiB          29 MiB
        Sent             134 MiB         221 MiB
        Total            151 MiB         251 MiB

        Connections                       ø per hour       %
        max. concurrent connections    5     ---          ---
        Failed attempts                0     0.00         0.00%
        Aborted                        0     0.00         0.00%
        Total                      9,418    15.59 k     100.00%

  • MongoDB and GridFS. What are the best storage options in the range of 1 TB?

    - by Nerian
     We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files, and I am pondering the different options for storing the database. But since I am inexperienced at deployment and it is my first time with MongoDB, I need your experience.

     Criteria: I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer and I do not like to mess with server configuration. Hence, I would like a fully managed hosting solution, but I would like to know about any other option if you think it is worth it. It should be able to scale - cloud style, pay as you go. The lower the price, the better.

     So far I know of these services:

        https://mongohq.com/pricing
        https://mongomachine.com/pricing
        https://mongolab.com/about/pricing/
        http://cloudcontrol.com/add-ons/mongodb/

     They seem to be OK for common needs, that is, no file storage. But I am going to use GridFS, so the size matters, and these services seem to scale quite poorly in price:

     MongoHQ: the largest plan's max storage is 20 GB. Seems like very little storage for GridFS.
     MongoMachine: flat price, $2.50 per GB. I didn't find the limit. Seems like a good price compared to the others.
     MongoLab: 3.984 GB max, which I don't think I will hit, so perfect. $8 per GB, quite costly.
     CloudControl: the largest plan is 20 GB. The custom service starts at 250€ plus some unspecified charge per GB.

     What is your experience with these services? Any downtimes? Other possibilities?

  • Apache RewriteEngine, redirect sub-directory to another script

    - by Niklas R
     I've been trying to achieve this for about 1.5 hours now. I want the following transformations when requesting pages on my website:

        homepage.com/                    => index.php
        homepage.com/archive             => index.php?archive
        homepage.com/archive/site-01     => index.php?archive/site-01
        homepage.com/files/css/main.css  => requestfile.php?css/main.css

     The first three transformations can be done by using the following:

        RewriteEngine on
        RewriteRule ^/?$ index.php
        RewriteRule ^/?(.*)$ index.php?$1

     However, I'm stuck at the point where all requests to the files subdirectory should be redirected to requestfile.php. This is one of the tries I've made:

        RewriteEngine on
        RewriteRule ^/?$ index.php
        RewriteRule ^/?files/(.+)$ requestfile.php?$1
        RewriteRule ^/?(.*)$ index.php?$1

     But that does not work. I've also tried to put [L] after the third line, but that didn't help, as I'm using this configuration in .htaccess and sub-requests will transform that URL again, etc. I fiddled with the RewriteCond directive but I couldn't get it to work. What does the configuration need to look like to achieve what I desire?

  • RAM configuration: 4x4 or 2x8?

    - by Carl B
     I am looking to upgrade my RAM to 16 GB and I am wondering if there is any distinct advantage to the way I do it, that being a 4x4 or a 2x8 setup. In all my searching there have been a number of pros for each profile, but I can find no benchmark results comparing the two setups. So, if there are two profiles of the same speed, same voltage, same timings and same CAS latency, which would perform better or have a better overall benefit?

     A few examples from my search: a 4x4 set would have the benefit that if one stick failed, you only lose 25% of your RAM vs 50% in a 2x8 setup. A 2x8 setup would put less strain on the memory controller and motherboard. A 2x8 setup would generate less heat. A 2x8 setup is easier to overclock (not part of my need, but a lot of the comparisons circled around the ease of overclocking the two-stick setup).

     There is one outstanding benefit that I have found, at least at the target company I have looked at, and that is price: the 2x8 is nearly half the cost. My motherboard supports a max of 16 GB and I have a 64-bit OS. Has anyone seen any performance comparisons, or is 16 GB just 16 GB no matter how you slice it? And is there any merit to the above pros?

     Edit: as per the mobo specs - Main Memory: supports four unbuffered DIMMs of 1.5 Volt DDR3 800/1066/1333/1600*/1800*/2133* (OC) DRAM, 16 GB max.

  • Apache forwarding to tomcat shows a blank page

    - by MNS
     I have an application running on Tomcat at http://www.example.com:9090/mycontext. The host name in server.xml points to www.example.com; I do not have localhost anymore. I am using Apache to forward requests to Tomcat using mod_proxy. Things work fine as long as the ProxyPass path is /mycontext: the ServerName set up in the virtual host is www.abc.com, and http://www.abc.com/mycontext works fine. However, I would like to ignore the context path and simply use http://www.abc.com/ to forward requests to http://www.example.com:9090/mycontext. When I do this, Apache shows me a blank page. What am I missing here? I have not changed anything in server.xml except the default host, which is now www.example.com.

        <VirtualHost *:80>
            ServerName www.abc.com
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://www.example.com:9090/mycontext
            ProxyPassReverse / http://www.example.com:9090/mycontext
        </VirtualHost>

     Thanks

  • Configuring nginx to check for hard files in only a few directories

    - by Evan Carroll
     For a node.js project I'm doing, I have a tree like this:

        +-- public
        ¦   +-- components
        ¦   +-- css
        ¦   +-- img
        +-- routes
        +-- views

     Essentially, I have the root set to public. I want all requests destined for

        /components/
        /css/
        /img/

     to check whether their destinations exist on disk. However, I don't want requests to other directories to even run an I/O operation:

        /foo/asdf
        /bar
        /baz/index.html

     None of those should result in the disk being touched. I have a stanza that does the proxy to node.js:

        location @proxy {
            internal;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }

     I just would like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first:

        location /components/ { try_files $uri @proxy; }
        location /css/        { try_files $uri @proxy; }
        location /img/        { try_files $uri @proxy; }

     However, there is nothing that I can find that will give me

        location / { try_files @proxy; }

     How do I get the effect I want?
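     One way to get roughly this effect (a minimal sketch built only from the directives already shown above, not verified against the actual app): keep try_files for the three static prefixes and let the catch-all location proxy directly, since a location that only does proxy_pass never stats the filesystem.

        # only these prefixes ever touch the disk
        location /components/ { try_files $uri @proxy; }
        location /css/        { try_files $uri @proxy; }
        location /img/        { try_files $uri @proxy; }

        # catch-all: no try_files here, so no disk I/O before proxying
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }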

  • What's Keeping My Computer Awake?

    - by phantomdata
     First the question: how do I figure out what is preventing my Windows 7 computer from going into sleep mode?

     Second, some background. I've been struggling with this for a few days and am utterly perplexed. I set up sleep mode on my Windows 7 PC a few weeks ago, and all was well. The PC would sleep as expected and I was snug in knowing that my computer was saving power and some wear and tear on the components (we'll leave the 'is it better to sleep' debate for another thread/day, please don't start it). Well, I noticed the other night that my system stopped ever going to sleep. I set the sleep time down to 1 minute and wandered fully away from the PC (ensuring that no errant mouse or keyboard movements would occur), and the PC never went to sleep. I've also observed this over longer intervals, such as overnight.

     I have sleep mode enabled, and of course "Multimedia settings - When sharing media" is set to allow the computer to sleep. "powercfg -lastwake" shows nothing of interest, since the machine never goes to sleep and so can't wake up. "powercfg /requests" shows 3 entries - all "[DRIVER] ?". I assume that 2 of these are my mouse and keyboard, as I've recently used them to run the powercfg command. I'm at a loss for the third, though. I've unhooked all USB peripherals save for my keyboard and mouse. Wake on LAN is disabled in my BIOS. I know that you can disable all apps from waking/preventing sleep, but I want that ability to remain for those apps that do legitimately need to keep the system awake.

     So: does anyone know of a way to figure out what the 3rd phantom "[DRIVER] ?" is in powercfg /requests?

  • S3sync not working

    - by user57833
     Hello, I managed to get s3sync to upload my test folder to Amazon S3 and can see it in the AWS Management Console. Downloading the data back to a test folder results in the following error message:

        root@mybucketname:/var/s3sync# ./week_download.sh
        s3Prefix backups/weekly
        localPrefix /var/s3sync/testdown/weekly
        s3TreeRecurse mybucketname backups/weekly
        Creating new connection
        Trying command list_bucket mybucketname prefix backups/weekly max-keys 200 delimiter / with 100 retries left
        Response code: 200
        prefix found: /
        s3TreeRecurse mybucketname backups/weekly /
        Trying command list_bucket mybucketname prefix backups/weekly/ max-keys 200 delimiter / with 100 retries left
        Response code: 200
        S3 item backups/weekly/
        s3 node object init. Name: Path:backups/weekly Size:0 Tag:d41d8cd98f00b204e9800998ecf8427e Date:Fri Oct 29 14:21:53 UTC 2010
        local node object init. Name: Path:/var/s3sync/testdown/weekly/ Size: Tag: Date:
        source:
        dest:
        Update node
        s3sync.rb:638:in `initialize': No such file or directory - /var/s3sync/testdown/weekly/.s3syncTemp (Errno::ENOENT)
                from s3sync.rb:638:in `open'
                from s3sync.rb:638:in `updateFrom'
                from s3sync.rb:393:in `main'
                from s3sync.rb:735

     I am using the following download script:

        #!/bin/bash
        # script to download local directory upto s3
        cd /var/s3sync/
        export AWS_ACCESS_KEY_ID=nothing to see here
        export AWS_SECRET_ACCESS_KEY=nothing to see here
        export SSL_CERT_DIR=/var/s3sync/certs
        ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown
        # copy and modify line above for each additional folder to be synced

     Any ideas? Does the download script need to download to the source folder on Amazon S3, i.e. the testup folder? I was hoping that in the event of a complete failure, where the original folders no longer exist, it would just download everything for me.

     Note: I changed my bucket names to "mybucketname" so that it is not public!

  • Nginx Multiple If Statements Cause Memory Usage to Jump

    - by Justin Kulesza
     We need to block a large number of requests by IP address with nginx. The requests are proxied by a CDN, so we cannot block using the actual client IP address (it would be the IP address of the CDN, not the actual client). Instead, we have $http_x_forwarded_for, which contains the IP that we need to block for a given request. Similarly, we cannot use iptables, as blocking the IP address of the proxied client would have no effect. We need to use nginx to block the request based on the value of $http_x_forwarded_for.

     Initially, we tried multiple simple if statements: http://pastie.org/5110910. However, this caused our nginx memory usage to jump considerably: we went from somewhere around a 40 MB resident size to over a 200 MB resident size. If we changed things up and created one large regex that matched the necessary IP addresses, memory usage was fairly normal: http://pastie.org/5110923.

     Keep in mind that we're trying to block many more than 3 or 4 IP addresses - more like 50 to 100, which may be included in several (20+) nginx server configuration blocks. Thoughts? Suggestions? I'm interested both in why memory usage would spike so greatly using multiple if blocks, and also in whether there are any better ways to achieve our goal.
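     For illustration, one commonly suggested alternative to per-IP if blocks is an http-level map keyed on $http_x_forwarded_for. This is a sketch only: the addresses are placeholders, and whether it suits the 20+ server blocks here is an assumption.

        # in the http {} context: the map sets $blocked_client per request
        map $http_x_forwarded_for $blocked_client {
            default                  0;
            "~\b192\.0\.2\.10\b"     1;   # placeholder addresses - substitute the real list
            "~\b198\.51\.100\.23\b"  1;
        }

        # in each server {} block: a single if instead of one per address
        if ($blocked_client) {
            return 403;
        }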

  • Rails application keeps timing out when attempting to connect to Postgresql DB

    - by Corillian
     I'm hosting a PostgreSQL database on a small Windows Azure Ubuntu 13.04 VM with a default postgresql.conf. I have a Rails application running on a medium Windows Azure Ubuntu 13.04 VM. When accessing the PostgreSQL database, the Rails application is constantly timing out. In its database.yml I have the connection pool size set to 120 and the timeout set to 15 seconds. Despite this, my Rails logs are full of the following error message:

        ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5 seconds (waited 5.0023203 seconds). The max pool size is currently 120; consider increasing it.

     My postgresql.conf has a max connection limit of 120; making it any larger prevents the server from being able to restart successfully. I've also made sure that ssl was off in the postgresql.conf per this article, but beyond that I have no idea what's going on. My PostgreSQL logs don't contain any info indicating something is going wrong. My website is getting ~1k hits per day, so perhaps a small VM instance just isn't powerful enough? I appreciate any assistance!

     [Edit1] The PostgreSQL database is in a separate cloud service within the same affinity group. For example:

        db small VM:       mydatabase.cloudapp.net (Affinity Group US East)
        forums medium VM:  myforums.cloudapp.net (Affinity Group US East)

     On the database server I have opened port 5432. The connection to the database server from the forums server uses its hostname. Is it possible that the DNS resolution is what's taking so long?

  • Reset sound volume in Windows 8

    - by Svish
     There seems to be a bug in Windows 8 causing the maximum volume to become lower than it really should be. I'm now at a stage where I put the volume up to max and the sound is still very low. I have a pair of Logitech Z-10 speakers with a display on them, and when I touch the increase-volume button on that it shows me the volume is actually below the middle (but it is not able to increase it), even though Windows claims it to be maxed out.

     Is there any way I can reset the volume in Windows 8 so that I can get it fully up to max again? A registry setting or something? I really don't want to have to reinstall Windows or drivers or anything like that, because if it is a bug it'll probably happen again and I really don't want to have to do that every time this happens :p Any ideas?

     Update: Managed to fix it by unplugging the USB connection to my speakers, turning the volume down on my computer and up on the speakers, and finally reconnecting the USB connection. Seems to have been an issue with the speakers and not Windows this time. BUT, I'm still curious how you can adjust/reset the Windows 8 sound volume without using the volume controls. Like, where is the value of the current volume setting(s) really stored? And can you manually adjust them?

  • Tuning Linux + HAProxy

    - by react
     I'm currently rolling out HAProxy on CentOS 6, which will send requests to some Apache HTTPD servers, and I'm having issues with performance. I've spent the last couple of days googling and still can't seem to get past 10k connections/sec consistently when benchmarking (sometimes I do get 30k/sec, though). I've pinned the IRQs of the TX/RX queues for both the internal and external NICs to separate CPU cores and made sure HAProxy is pinned to its own core. I've also made the following adjustments to sysctl.conf:

        # Max open file descriptors
        fs.file-max = 331287

        # TCP Tuning
        net.ipv4.tcp_tw_reuse = 1
        net.ipv4.ip_local_port_range = 1024 65023
        net.ipv4.tcp_max_syn_backlog = 10240
        net.ipv4.tcp_max_tw_buckets = 400000
        net.ipv4.tcp_max_orphans = 60000
        net.ipv4.tcp_synack_retries = 3
        net.core.somaxconn = 40000
        net.ipv4.tcp_rmem = 4096 8192 16384
        net.ipv4.tcp_wmem = 4096 8192 16384
        net.ipv4.tcp_mem = 65536 98304 131072
        net.core.netdev_max_backlog = 40000
        net.ipv4.tcp_tw_reuse = 1

     If I use ab to hit a webserver directly, I easily get 30k/s connections. If I stop the webservers and use ab to hit HAProxy, I also get 30k/s connections, but obviously that's useless. I've also disabled iptables for now, since I read that nf_conntrack can slow everything down - no change. I've also disabled the irqbalance service. The fact that I can hit each individual device with 30k/s makes me believe the tuning of the servers is OK and that it must be some HAProxy config issue. Here's the config, which I've built from reading tuning articles, etc.: http://pastebin.com/zsCyAtgU

     The server is a dual Xeon CPU E5-2620 (6 cores) with 32 GB of RAM, running CentOS 6.2 x64. The private and public interfaces are on separate NICs. Anyone have any ideas? Thanks.

  • Nginx conditional not evaluating correctly

    - by cjc
     I'm running into a weird problem with nginx and how it evaluates conditionals. Here's the relevant configuration:

        set $cors FALSE;

        if ($http_origin ~* (http://example.com|http://dev.example.com:8000|http://dev2.example.com)) {
            set $cors TRUE;
        }

        if ($request_method = 'OPTIONS') {
            set $cors $cors$request_method;
        }

        if ($cors = 'TRUE') {
            add_header 'Access-Test' "$cors";
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
            add_header 'Access-Control-Max-Age' '1728000';
        }

        if ($cors = 'TRUEOPTIONS') {
            add_header 'Access-Test' "$cors";
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'X-Requested-With, X-Prototype-Version';
            add_header 'Access-Control-Max-Age' '1728000';
            add_header 'Content-Type' 'text/plain';
        }

     So, the conditional blocks never trigger. When I remove the conditions, I see that the "Access-Test" and "Access-Control-Allow-Origin" headers are set correctly, but, as noted, enabling the conditionals causes the headers not to be sent. I'm testing by running:

        curl -Iv -i --request "OPTIONS" -H "Origin: http://example.com" http://staging.example.com/

     Am I missing something obvious? I've tried the "if" with and without quotes, etc. This is nginx 1.2.9.

  • JBoss database connection pool configuration

    - by Qben
     I am facing a connection pool issue in my clustered JBoss installation. From time to time one of my connection pools will hit the roof and I get a lot of these in my log file:

        java.sql.SQLException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] );

     The odd thing is that I can see in the JMX console that ConnectionCount hits the roof, but at the same time InUseConnectionCount is often quite small. The problem will resolve itself after a couple of minutes, but during the recovery phase my application will not work (for obvious reasons). The question is whether this indicates an error in the configured timeouts of the connections (I pretty much use the defaults), or whether my pool is simply too small to handle the peaks. Under normal operation I would say I use ~40% of the configured max number of connections.

     The reason I just don't increase the max number of connections is that if I actually used up all connections, I suspect that InUseConnectionCount would hit the roof. Hence I suspect I might have more issues than just a too-small pool size. Maybe InUseConnectionCount has already decreased by the time I check the JMX console and it actually does hit the roof? I tend to collect data every second minute. Any hints are more than welcome.

  • Almost All Logical Volumes Disappeared - Recovery?

    - by Alex
     We had a hard disk crash of one of two hard disks in a software RAID with LVM on top. The server is running Citrix XenServer. On the hard disk which is still intact, the volume group gets detected fine, but only one LV is left. (Some hashes are replaced by "x".)

        # lvdisplay
          --- Logical volume ---
          LV Name                /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT
          VG Name                VG_XenStorage-x-x-x-x-408b91acdcae
          LV UUID                x-x-x-x-x-x-vQmZ6C
          LV Write Access        read/write
          LV Status              available
          # open                 0
          LV Size                4.00 MiB
          Current LE             1
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           253:0

        root@rescue ~ # vgdisplay
          --- Volume group ---
          VG Name               VG_XenStorage-x-x-x-x-408b91acdcae
          System ID
          Format                lvm2
          Metadata Areas        1
          Metadata Sequence No  4
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                1
          Open LV               0
          Max PV                0
          Cur PV                1
          Act PV                1
          VG Size               698.62 GiB
          PE Size               4.00 MiB
          Total PE              178848
          Alloc PE / Size       1 / 4.00 MiB
          Free  PE / Size       178847 / 698.62 GiB
          VG UUID               x-x-x-x-x-x-53w0kL

     I could understand if a full physical volume were lost - but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes?

     EDIT: We are here in a rescue system. The problem is that the whole server does not boot (GRUB error 22). What we are trying to do is access the root filesystem. But everything was in the LVM. We have only this:

        (parted) print
        Model: ATA SAMSUNG HD753LJ (scsi)
        Disk /dev/sdb: 750GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End    Size   Type     File system  Flags
         1      32.3kB  750GB  750GB  primary               boot, lvm

     And this 750 GB LVM volume is exactly what we see on top.

  • using nginx with proxy_pass on a subdomain

    - by marcus3006
     I have a Rails app that should listen on the subdomain redmine.example.com (using proxy_pass). All other requests for *.example.com should just go to a normal index.html. Here is my configuration:

        server {
            server_name www.example.com example.com;
            root /home/deploy/static/example;
        }

        upstream redmine {
            server unix:/tmp/redmine.socket fail_timeout=0;
        }

        server {
            # you could put a list of other domain names this application answers
            server_name redmine.example.com;
            root /home/deploy/rails/redmine/public;

            access_log /var/log/nginx/redmine_access.log;
            rewrite_log on;

            location * {
                proxy_pass http://redmine;
            }

            location ~ ^/(assets)/ {
                root /home/deploy/rails/redmine/public;
                gzip_static on;  # to serve pre-gzipped version
                expires max;
                add_header Cache-Control public;
            }
        }

     Does anyone know what's going wrong here? Requests to example.com and www.example.com are handled correctly; when I try to access redmine.example.com I get "couldn't resolve host".
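     As a side note on the configuration itself (a sketch, not a diagnosis of the "couldn't resolve host" error, which is the client failing DNS lookup for the subdomain): nginx prefix locations are normally written with a leading /, and a literal * prefix will not match ordinary request URIs, which always begin with /. A minimal proxy location might look like this, assuming the upstream block above stays unchanged:

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://redmine;
        }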

  • Nginx Removes the index.php from URL

    - by codeHead
     I have a CodeIgniter PHP application on nginx. It works as expected on Apache, but after moving to nginx I noticed that index.php is automatically removed from the URL in all my links. In fact, when I try using index.php it does not go to the desired URL but gets redirected to my default controller. Below is a copy of my nginx.conf file:

        server {
            listen 80;
            server_name mydomainname.com;
            root /var/www/domain/current;
            # index index.php;
            error_log /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log main;

            location / {
                # Check if a file or directory index file exists, else route it to index.php.
                try_files $uri $uri/ /index.php;
            }

            location ~* \.php {
                fastcgi_pass backend;
                include fastcgi.conf;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_read_timeout 500;
                #fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                add_header Expires "Thu, 01 Jan 1970 00:00:01 GMT";
                add_header Cache-Control "no-cache, no-store, private, proxy-revalidate, must-revalidate, post-check=0, pre-check=0";
                add_header Pragma no-cache;
                add_header X-Served-By $hostname;
            }

            location ~* ^.+\.(css|js)$ {
                expires 7d;
                add_header Pragma public;
                add_header Cache-Control "public";
            }

            # set expiration of assets to MAX for caching
            location ~* \.(ico|gif|jpe?g|png)(\?[0-9]+)?$ {
                expires max;
                log_not_found on;
            }
        }

     I need to use my URLs with index.php - please help.
