Search Results

Search found 4578 results on 184 pages for 'connections'.

Page 32/184 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • Apache MaxClients reaching max and locking the server

    - by Rodrigo Sieiro
    Hi. I currently have an Apache2 server running with mpm-prefork and mod_php on an OpenVZ VPS with 512 MB real / 1024 MB burstable RAM (no swap). After running some tests, I found that the maximum process size Apache reaches is 23 MB, so I've set MaxClients to 25 (23 MB x 25 = 575 MB, fine for me). I decided to run some load tests on my server, and the results left me puzzled. I'm using ab on my desktop machine, requesting the main page of a WordPress blog. When I run ab with 24 concurrent connections, everything seems fine: CPU goes up, free RAM goes down, and the result is about a 2-3 s response time per request. But if I run ab with 25 concurrent connections (my server's limit), Apache just hangs after a couple of seconds. It starts processing the requests, then stops responding, CPU goes back to 100% idle and ab times out. The Apache log says it reached MaxClients. When this happens, Apache keeps itself locked up with 25 running processes (they're all in "W" if I check server-status), and only after the TimeOut period (set to 45 in my case) do the processes start to die and the server start responding again. My question: is that expected behaviour? Why does Apache just die when it reaches MaxClients? If it works with 24 connections, shouldn't it work with 25, just taking a bit more time to respond to each request and queueing up the rest? It seems strange that anyone running ab could single-handedly take down a web server just by setting the concurrent connections to the server's MaxClients.
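    For reference, a minimal mpm-prefork block reflecting the sizing described above might look like the sketch below; the StartServers/spare values are illustrative guesses, not taken from the post, and ServerLimit (default 256) is shown only for completeness:

        <IfModule mpm_prefork_module>
            StartServers           5
            MinSpareServers        5
            MaxSpareServers       10
            # hard ceiling on child processes; must be >= MaxClients (default is 256)
            ServerLimit           25
            # ~23 MB per child x 25 children = ~575 MB
            MaxClients            25
            # recycle children so leaking PHP requests don't grow them indefinitely
            MaxRequestsPerChild  500
        </IfModule>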

    Read the article

  • Virtualbox HTTP load testing, host CPU overload issues

    - by aschuler
    I'm doing HTTP load-testing benchmarks (using Apache Benchmark and Siege) on a small Java EE 1.7.0 / Tomcat 7.0.26 application running on a Debian Squeeze 6.0.4 x64 guest virtualized with VirtualBox 4.1.8. The host is Ubuntu 11.10 x64. I've modified these parameters in the Tomcat server.xml: <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="200000" redirectPort="8443" acceptCount="2000" maxThreads="150" minSpareThreads="50" /> The request handled by the application takes around 300 ms. The app runs well up to a certain number of concurrent connections, like these: ab -n 500 -c 150 http://xx.xx.xx.xx:8080/myapp/ ab -n 1000 -c 50 http://xx.xx.xx.xx:8080/myapp/ siege -b -c 100 -r 20 http://xx.xx.xx.xx:8080/myapp/ Beyond that, a lot of socket connection timeouts happen and the host processor becomes completely overloaded (while the CPU load inside the VM stays normal). Doing an htop on the host, I can see that the VirtualBox process is running at over 300% CPU and never comes down, even after the load test is finished. (I've allocated 4 processors to the VM; if I allocate only one processor, CPU load stays under 100%.) Restarting Tomcat doesn't do anything; I'm forced to restart the whole VM. I've tried launching those ab/siege commands locally on the VM and everything goes fine. I first thought it was related to a Linux network limit, as explained here: Running some benchmarks using ab, and tomcat starts to really slow down. So I've modified these TCP parameters: echo 15 > /proc/sys/net/ipv4/tcp_fin_timeout echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse It seems a bit better, but it still overloads the host CPU and reports socket connection timeouts beyond a certain number of concurrent connections. I'm wondering whether this is related to how VirtualBox handles external concurrent connections.
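    As an aside, if those TCP tweaks do turn out to help, a hedged way to make them survive a reboot on Debian is /etc/sysctl.conf (values copied from the echo commands above); note that tcp_tw_recycle in particular is widely discouraged because it breaks clients behind NAT:

        # /etc/sysctl.conf -- same values as the echo commands above
        net.ipv4.tcp_fin_timeout = 15
        net.ipv4.tcp_keepalive_intvl = 30
        net.ipv4.tcp_tw_recycle = 1
        net.ipv4.tcp_tw_reuse = 1

    Apply without rebooting with sysctl -p.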

    Read the article

  • SQL Server 2008 Remote Access

    - by Timothy Strimple
    I'm having problems connecting to my SQL Server 2008 database from my computer. I have enabled remote connections as described in this answer (http://serverfault.com/questions/7798/how-to-enable-remote-connections-for-sql-server-2008), and I have added the ports listed on the Microsoft support page to our Cisco ASA firewall, but I'm still unable to connect. The error I'm getting from SQL Server Management Studio is: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.) (Microsoft SQL Server, Error: 10060) Once again, I have double- and triple-checked that remote connections are enabled under the database properties and that TCP is enabled on the configuration page. I've added TCP ports 135, 1433, 1434, 2382, 2383, and 4022 as well as UDP 1434 to the firewall. I've also checked that 1433 is the static port set in the TCP section of the database server configuration. The ports should be configured correctly in the firewall, since HTTP/HTTPS and RDP are all working and the SQL Server ports are set up the same way. What am I missing here? Any help you could offer would be greatly appreciated. Edit: I can connect to the server via TCP on the internal network. The servers are colocated in a datacenter and I can connect from my production box to my development box and vice versa. To me that indicates a firewall issue, but I've no idea what else to open. I've even tried allowing all TCP ports to that server without success.
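    A hedged way to tell whether the ASA or the instance itself is rejecting the traffic is to probe the standard ports from the outside client (the host name below is a placeholder; PortQry is a separate Microsoft download):

        telnet sql.example.com 1433
        portqry -n sql.example.com -p tcp -e 1433
        portqry -n sql.example.com -p udp -e 1434

    If TCP 1433 times out from outside but answers from the colo network, the drop is happening in front of the server rather than in SQL Server's own configuration.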

    Read the article

  • Can't connect to Sql Server 2008 named instance

    - by eidylon
    I just installed SQL Server 2008 Express on a new server running Windows Server 2008. I know SQL Server is working properly, because I can connect to the DB fine locally, on the server. I cannot connect to it from a client machine though, neither by IP address nor by machine name (ip-or-name\instance). I know I have the correct IP address, because I am RDCing into the server to perform all this configuration and setup, and if I ping the server name it resolves to the correct IP address as well. On the server, I have set up an inbound firewall exception allowing all traffic on any port on any protocol to sqlservr.exe. In SSMS, under Server > Properties > Connections, "Allow remote connections to this server" is enabled. In SQL Server Configuration Manager, TCP/IP is enabled in both the Protocols for <instance> and the Client Protocols sections. I looked in the Windows logs, but don't see anything about connections being denied or dropped. As far as I can see, I have everything set right, but I cannot connect from a client machine. The client CAN connect to other SQL 2008 Express servers okay, so I know the client configuration is correct. Any ideas on where else I can look for info on what/where/how this connection is being dropped would be greatly appreciated! The error being returned by the client is:
        TITLE: Connect to Server
        Cannot connect to [MY.IP.ADD.RSS]\[MYINSTNAME].
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)
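    Error 26 usually points at named-instance name resolution, which is handled by the SQL Server Browser service on UDP 1434 rather than by sqlservr.exe, so a program-based firewall exception for sqlservr.exe alone would not cover it. A hedged set of checks on the server (service name and port are the standard defaults, not taken from the post):

        sc query SQLBrowser
        netstat -ano | findstr 1434
        netsh advfirewall firewall add rule name="SQL Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434

    Alternatively, pinning the instance to a fixed TCP port in Configuration Manager and connecting from the client as tcp:[MY.IP.ADD.RSS],<port> bypasses the Browser entirely.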

    Read the article

  • VPN Trunk Between Cisco ASA 5520 and DrayTek Vigor 2930

    - by David Heggie
    I'm a bit of a VPN newbie, so please go easy on me... I'm trying to use the VPN trunking capabilities of the DrayTek Vigor 2930 firewall to bond two IPSec VPN connections to a Cisco ASA 5520 device, and I'm getting myself tied in knots; I hope someone here with more knowledge / experience can help. I have a remote site with two ADSL connections and the DrayTek box. The main office site has the Cisco ASA device. I am able to set up a single IPSec connection between the two sites on either of the ADSL connections' public IP addresses, but as soon as I try to use the VPN bonding, nothing works. The VPN tunnels are both still up, but the traffic is getting lost somewhere. I suspect it's due to the ASA not knowing how to route the traffic back over the VPN - one minute traffic from my remote office's network is coming from public IP address #1, the next it's coming from public address #2, and it doesn't know what to do. Well, that's my newbie impression of what's going wrong, but I don't really know:
    - if this is really what's happening;
    - if what I'm trying to do (bond two VPN connections from a single remote network to improve the bandwidth / resiliency) is possible with the kit I've got.
    Could anyone help?

    Read the article

  • ISA Server dropping packets as it believes they are spoofed

    - by RB
    We have ISA Server 2004 running on Windows Server 2003 SP2. It has 2 NICs - one internal called LAN on 192.168.16.2, with a subnet of 255.255.255.0, and one external called WAN on 93.x.x.2. The default gateway is 93.x.x.1 (our modem). This machine also accepts VPN connections. We are having a problem with a scanner, which is trying to save a scan into a network share. Every time we try to scan, ISA Server logs the following:
        Denied Connection
        Log type: Firewall service
        Status: A packet was dropped because ISA Server determined that the source IP address is spoofed.
        Rule:
        Source: Internal (192.168.16.54:1024)
        Destination: Internal (192.168.16.255:137)
        Protocol: NetBios Name Service
    Pinging 192.168.16.54 from the ISA Server works fine. In ISA Server, going into Configuration → Networks, there are 5 networks:
    - External (inbuilt)
    - Internal (defined as 192.168.16.0 → 192.168.16.255)
    - Local Host (inbuilt)
    - Quarantined VPN Clients (inbuilt)
    - VPN Clients (inbuilt)
    Finally, under Network Connections → Advanced → Advanced Settings..., the connections are in the following order:
    - LAN
    - WAN
    - [Remote Access Connections]
    If we try to scan onto a workstation it works fine. Please let me know if you need any more info - many thanks. RB.

    Read the article

  • Cannot connect to postgres installed on Ubuntu

    - by Assaf
    I installed the Bitnami Django stack, which included PostgreSQL 8.4. When I run psql -U postgres I get the following error:
        psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
    PG is definitely running and the pg_hba.conf file looks like this:
        # TYPE  DATABASE    USER        CIDR-ADDRESS          METHOD
        # "local" is for Unix domain socket connections only
        local   all         all                               md5
        # IPv4 local connections:
        host    all         all         127.0.0.1/32          md5
        # IPv6 local connections:
        host    all         all         ::1/128               md5
    What gives? "Proof" that PG is running:
        root@assaf-desktop:/home/assaf# ps axf | grep postgres
        14338 ?        S      0:00 /opt/djangostack-1.3-0/postgresql/bin/postgres -D /opt/djangostack-1.3-0/postgresql/data -p 5432
        14347 ?        Ss     0:00  \_ postgres: writer process
        14348 ?        Ss     0:00  \_ postgres: wal writer process
        14349 ?        Ss     0:00  \_ postgres: autovacuum launcher process
        14350 ?        Ss     0:00  \_ postgres: stats collector process
        15139 pts/1    S+     0:00  \_ grep --color=auto postgres
        root@assaf-desktop:/home/assaf# netstat -nltp | grep 5432
        tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      14338/postgres
        tcp6       0      0 ::1:5432                :::*                    LISTEN      14338/postgres
        root@assaf-desktop:/home/assaf#
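    Since the netstat output shows the server clearly listening on 127.0.0.1:5432, a hedged workaround is to avoid the default Unix-socket path altogether (the bundled psql path below is a guess based on the postgres binary path shown above):

        # connect over TCP instead of the default Unix socket
        psql -h 127.0.0.1 -p 5432 -U postgres

        # or use the psql bundled with the Bitnami stack, which knows its own socket directory
        /opt/djangostack-1.3-0/postgresql/bin/psql -U postgres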

    Read the article

  • Intranet Setup for Small business any resources?

    - by Rogue
    Want to set up an intranet for a small business. Current setup:
    - 28 computers running Windows (a few older PCs run Windows XP but most run Windows 7)
    - a spare Dell Pentium 3 which can run as a server
    - 6 switches, spare NICs and lots of LAN cable available for networking
    - 3 independent Internet connections
    Currently we have 3 independent networks which share the Internet connections; each network uses a different Internet connection. The current network is set up solely to share the Internet connections. What I need to achieve in this intranet:
    - Set up one common network.
    - Instant file transfer via the local network (maybe set up a file server?)
    - Local text and voice messenger software
    - Bridge the 3 Internet connections and route all the Internet connections from the main server
    - Ability to allow or deny Internet access to any computer on the network
    - Remote access from the main server to the client PCs on the network to debug software issues
    What operating system should I use on the main server? Do I need a hardware firewall? Any setup guides / resources or how-tos on how I can achieve the above requirements?

    Read the article

  • Strange issue with 74.125.79.118

    - by Domenic
    I'm facing a strange issue on a Linux server. After frequent crashes, analysis found that the server is being brought to collapse by a huge number of connections to the IP 74.125.79.118, originating from PHP scripts of the hosted web sites. After an in-depth analysis of the files I found no malware infections. IP 74.125.79.118 belongs to Google. After a Google search I realised that the connections to this IP are generated by YouTube videos embedded on the web sites, among other Google features like SafeSearch. But I don't understand how this type of behaviour can bring the server to collapse, and the oddity of the situation leads me to think it is far from being attributable only to Google and YouTube. I've also found that blocking connections from eth0 to 74.125.79.118:80 doesn't solve the issue, but if I stop DNS traffic from eth0 to the Internet, the connections to 74.125.79.118 stop. I'm really confused about this. Any suggestions? Cheers.

    Read the article

  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization for a while now, mostly out of interest in speeding up my own Internet connections, but also to speed up the office Internet connection. At home, I have 2 cable modems plugged into a RouterBoard RB750, which load-balances the connections. In the office, we have a single connection into a NetGear router. Most of the WAN optimization products I have seen seem to be prohibitively expensive, and also seem to be based on the idea of having multiple branches around the world. What I am looking for, ideally, is as follows:
    - Software install: I am "guessing" I need to install it in 2 places: one in the office or house, and one in "the cloud".
    - Any connections going to, say, the US (we are in Europe, but our backups live in the US currently, which would be something important to speed up) would be "tunnelled" through the optimizer.
    - If downloading or uploading large files, open multiple connections between "the cloud" and the optimizer... this is where a lot of speed could be gained.
    - Finally, items not already compressed would be compressed on the cloud side of things, and items that are already on the optimizer would not be sent again - kind of like rsync or proxy servers...
    So, is there something that can be done? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape), or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking?
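    On the "SSH and duct tape" end of the spectrum, a hedged minimal sketch would be a compressed SSH tunnel to a small VPS near the backup target, with the backup traffic pointed at the resulting local SOCKS proxy (the host name is a placeholder, and this only gives compression and connection reuse, not deduplication):

        # compressed dynamic (SOCKS) forward to a box in the US
        ssh -C -N -D 1080 user@us-vps.example.com

    Tools that speak SOCKS (or that can be wrapped with one that does) would then connect via localhost:1080.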

    Read the article

  • LVS / IPVS difference in ActiveConn since upgrading

    - by Hans
    I've recently migrated from an old version of LVS / ldirectord (Ultra Monkey) to a new Debian install with ldirectord. Now the amount of Active Connections is usually higher than the amount of Inactive Connections; it used to be the other way around. Basically, on the old load balancer the connections looked something like this:
        -> RemoteAddress:Port      Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0           Masq     1       12          252
        -> 10.84.32.22:0           Masq     1       18          368
    However, since migrating to the new load balancer it looks more like this:
        -> RemoteAddress:Port      Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0           Masq     1       313         141
        -> 10.84.32.22:0           Masq     1       276         183
    Old load balancer: Debian 3.1, ipvsadm 1.24, ldirectord 1.2.3. New load balancer: Debian 6.0.5, ipvsadm 1.25, ldirectord 1.0.3 (I guess the versioning system changed). Is it because the old load balancer was running a kernel from 2005 and ldirectord from 2004, and things have simply changed in the past 7-8 years? Did I miss some sysctl settings that I should be enforcing for it to behave in the same way? Everything appears to be working fine, but can anyone see an issue with this behaviour? Thanks in advance! Additional info: I'm using LVS in masquerading mode; the real servers have the load balancer as their gateway. The real servers are running Apache, which hasn't changed during the upgrade. The boxes themselves show roughly the same amount of inactive connections as shown in ipvsadm.
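    ActiveConn counts entries in ESTABLISHED state while InActConn counts entries in every other state, and how long entries linger in each state is governed by the IPVS timeouts, so a hedged first check is to compare those on both boxes (output format varies slightly between ipvsadm versions):

        ipvsadm -L --timeout
        # example output: Timeout (tcp tcpfin udp): 900 120 300
        # the three values can be changed if needed, e.g.:
        # ipvsadm --set 900 120 300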

    Read the article

  • MySQL server high traffic makes websites really slow or unable to load

    - by Holapress
    Lately we have been having a lot of problems with our MySQL server: websites are really slow, or sometimes we're unable to load them at all. The server is a dedicated server that only runs our MySQL database. I have been running some tests using a profiler (JetProfiler) and a stress-testing tool (loadUI). If I use loadUI to connect with 50 simultaneous connections to one of our websites that runs a fairly big query, that is already enough to make the website unable to load. One of the things that worries me is that when I look at JetProfiler it always shows a Threads_connected of 1.00, and it seems that when it hits around 2.00 I'm unable to connect. The 3 big peaks are when I ran tests with loadUI: the first one was 15 simultaneous connections, which still let me load the website, just really slowly; the second one was 40 simultaneous connections, which already made it impossible to load; and the third one was 100 connections, which also didn't load anymore. Another thing that worries me is that JetProfiler says all the queries that get used are full table scans - could this be the problem? The website I run as a test runs 3 queries: one for a menu that outputs around 1000 rows, one for the ads that has around 560 rows, and a big one to get posts that has around 7000 rows (see screenshot below). I have also monitored the CPU of the server and there seems to be no problem there; even when I make a lot of connections with loadUI the CPU stays low. I can't seem to figure out what the main cause is of the websites being unable to load when there is a high amount of traffic. If anyone has other suggestions for testing, or ideas about what might cause the problem, please let me know.
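    A hedged set of things to check from the mysql client while a load test is running (credentials omitted, and the table/column names in the EXPLAIN are placeholders):

        mysql -e "SHOW VARIABLES LIKE 'max_connections';"
        mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"
        mysql -e "SHOW FULL PROCESSLIST;"    # what the stalled connections are actually waiting on
        mysql -e "EXPLAIN SELECT ... FROM posts WHERE ...;"    # confirm the full table scans and whether an index would remove them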

    Read the article

  • NPS will not add Radius client

    - by Neobyte
    Hi all, I've just installed a fresh copy of NPS on a new 2008 R2 Std server. When I go to add a Radius client, I get "NPS Error: The service being accessed is licensed for a particular number of connections. No more connections can be made to the service at this time because there are already as many connections as the service can accept. (Exception from HRESULT: 0x80070573)". What do I do? This is the first Radius client I am installing (and the first change to the vanilla NPS since running the role installation wizard) so obviously I have not hit the 50 client max. Cheers

    Read the article

  • TCP Zero Window with no corresponding Window Update

    - by Gandalf
    I am trying to debug a network issue and am using Wireshark and tcpdump to grab packets from my server. I have one client application that is grabbing all my available connections and then holding them, trying to send a LOT of data and essentially causing an unintentional DoS attack. While debugging I notice that my server is sending "Window Closed" and "Zero Window" TCP packets - but never any "Window Update" packets. I am guessing this is why the client never lets go of the connections (it still has more data to send and is waiting). Has anyone ever seen this type of behaviour before? Let's not get into the reasons why I haven't set up an iptables rule to limit concurrent connections (yeah, I know). I also recently changed the MTU from 1500 to 9000 - could this have such a negative effect? (Linux) Thanks.
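    For isolating these events in a capture, a hedged pair of Wireshark/tshark display filters (the -Y flag is for recent tshark releases; older ones use -R, and the capture file name is a placeholder):

        tshark -r capture.pcap -Y "tcp.analysis.zero_window"
        tshark -r capture.pcap -Y "tcp.analysis.window_update"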

    Read the article

  • Moving Portable Office. Have a LAN & Phone line query?

    - by John Smith
    We are about to move a portable office that has 8 phones and 12 LAN connections. They are all wired back to our main switch and Nortel BCM200 phone system, which are only about 20m (65ft) away. After the move the office will be 180m (600ft) away from these. What is the maximum cable length for a digital phone line? I am aware that a LAN connection can only be 100m (305ft) when using Cat5e UTP. Does this rule apply to phone connections also? If so, how can I extend beyond 100m for the phones? I was going to install about three cabinets, 3 switches and 6 patch panels for the LAN connections, but the idea struck me tonight that maybe I could run a fibre optic line instead. Would this be feasible? Any feedback on this is greatly appreciated. Thanks

    Read the article

  • Network keeps disconnecting - Repairing solves it. Router configuration

    - by Joao Carlos
    My network connection keeps going insane: it will keep normal applications connected, like TeamViewer and MSN, but webpages will stop loading ("Problem loading page"). Everything looks connected and works like a charm, but webpages and new connections won't work. If I press "Repair" in Network Connections to restart the adapter, it works again. This happens on wired and wireless connections, both on Windows and on Mac OS X. I have had this for years (different computers, routers, cities), but I never figured it out. I've learned to live with it, but I think there's probably a solution. What must I be doing wrong if this keeps happening? Do you guys have the same thing?

    Read the article

  • iptables dos limit for all ports

    - by user973917
    I know how to use the limit and conntrack options for DoS protection. However, I want to add protection so that each port can have no more than, say, 50 connections. Basically, I want to make sure that each port gets no more than 50 connections, rather than applying 50 connections globally (which is what the second rule below does, I believe?). Would I do something like:
        iptables -A INPUT --dport 1:65535 -m limit --limit 50/minute --limit-burst 50 -j ACCEPT
    or
        iptables -A INPUT -m limit --limit 50/minute --limit-burst 50 -j ACCEPT
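    Worth noting that -m limit rate-limits matching packets per time unit rather than counting simultaneous connections; for a cap on concurrent connections the connlimit match is the usual tool. A hedged per-port sketch (one rule per port you care about; --connlimit-mask 0 counts all source addresses together, giving a cap per destination port rather than per client):

        iptables -A INPUT -p tcp --syn --dport 80  -m connlimit --connlimit-above 50 --connlimit-mask 0 -j REJECT
        iptables -A INPUT -p tcp --syn --dport 443 -m connlimit --connlimit-above 50 --connlimit-mask 0 -j REJECT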

    Read the article

  • Memory consumption of each accept() call on server running on Windows 2008 [migrated]

    - by Atul
    I've written a simple and small server application on Windows 2008 that just accepts connections and does nothing else. I am doing a memory-footprint assessment of socket calls, and what I found is that each connection (after accept()) consumes at least 2.5 KB of memory. Interestingly, the memory is not consumed by the process that makes the accept() call but by an OS process. I believe this might be because of data structures being created inside the OS for each connection. Now, I have two key questions: Is it possible by any means to reduce this memory footprint (by changing any parameters, configuration etc.)? If yes, how? (Because 2 KB for each connection would be too much if we are planning for the server to accept millions of connections.) And if my server is intended to accept a million connections, is it a good idea to use Windows 2008, or should I switch to some other OS? Please advise.

    Read the article

  • Big IP F5 outbound HTTP issues

    - by mbuk2k
    We've tried upgrading from 9.x to 10.2 on our F5 Big IP 3400 and everything went over fine apart from one thing. We're unable to establish any outbound HTTP (80) connections from any servers that are assigned to a virtual server. This is something that worked before and is required for certain calls our servers need to make. Interestingly, HTTPS (443) connections work fine; it's literally just anything outbound over port 80 that seems to fail. Does anyone know if anything has changed between 9.4 and 10.2 that would mean additional config would need to be made to allow external HTTP connections? Any advice would be appreciated, thank you

    Read the article

  • PHP+AJAX with MySQL - Query every 2 seconds, too many in TIME_WAIT

    - by Ryan
    I have a basic HTML file, using jQuery's ajax, that is connecting to my polling.php script every 2 seconds. The polling.php simply connects to MySQL, checks for IDs newer than my hidden, stored current ID, and then echoes if there is anything new. Since the JavaScript is connecting every 2 seconds, I am getting thousands of connections in TIME_WAIT, just for my client. This is because my script is re-connecting to MySQL over and over again. I have tried mysql_pconnect but it didn't help any. Is there any way I can get PHP to open 1 connection and continue to query over it, instead of reconnecting every single time and leaving all these TIME_WAIT connections? Unsure what to do here to make this work properly.

    Read the article

  • SQL Server Management Studio Reports: Why no open transactions?

    - by Sleepless
    On a server with several hundred user connections, when I open the SQL Server 2008 SP1 Management Studio report "Database - User Statistics", the result page shows the following:
        Login Name: appUser
        Active Sessions: 243
        Active Connections: 243
        Open Transactions: 374
    Still, when I open the report "Database - All Transactions" on the same DB, it doesn't show any transactions ("Currently, there are no transactions running for [Database Name] Database"). What gives? Is this a bug in Management Studio? This is not the only report where this kind of behaviour happens... Thanks all!
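    One hedged way to cross-check the report numbers directly against the DMVs, independent of Management Studio (the database name is a placeholder, and sqlcmd here assumes a local default instance with Windows auth):

        sqlcmd -Q "SELECT COUNT(*) FROM sys.dm_tran_session_transactions"
        sqlcmd -d MyDatabase -Q "DBCC OPENTRAN"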

    Read the article

  • ASP.NET website http requests appear to be queueing

    - by scolemann
    We cloned our servers into a colo this weekend. All non-ASP.NET sites are performing great, but ASP.NET sites are very slow. It appears to be an issue with the requests/connections, but I cannot figure out where. The reason I think it is a problem with the connections is that when I launch Fiddler and watch the requests, all requests appear to happen sequentially. Even the static image requests are taking 5 seconds, and another one doesn't start until the first one finishes. MaxConnections is set to 100 in machine.config and the "website connections" are set to unlimited. Any idea what else could be causing this? From machine.config:

    Read the article

  • Nginx Static Content Server Maxing Out?

    - by Harry
    I use nginx to serve the static content for a decently busy website of mine. I have logging disabled, and 4 worker processes enabled with 5,000 connections per worker (which should yield a max connection limit of 20,000). The server is only operating at about 10% CPU usage and 50% RAM, but it's very laggy, and sometimes nginx is so slow to respond to requests that it times out. For a small number of connections it's fine, but once any load starts occurring (~2,500 connections), it backs up and bogs down. Are there any other bottlenecks or limits that I might be hitting? This is a FreeBSD server, and all the static files are located locally (not on NFS). The NIC is unmetered gigabit, and it's only pushing around 75 megabit. Any insight would be appreciated. Thanks.
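    On FreeBSD the usual suspects beyond nginx's own worker_connections are kernel file-descriptor and listen-queue limits; a hedged set of values to inspect (these are the standard sysctl names, not taken from the post):

        sysctl kern.maxfiles kern.maxfilesperproc   # global and per-process file descriptor limits
        sysctl kern.ipc.somaxconn                   # listen() backlog cap
        ulimit -n                                   # fd limit of the environment nginx is started from
        netstat -m                                  # mbuf usage / exhaustion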

    Read the article

  • How do I remove/uninstall a corporate VPN?

    - by Metro Smurf
    I have a corporate VPN installed on one of my Windows XP systems that I'd like to completely uninstall. There are no programs listed in the Add/Remove Programs dialog matching the corporate VPN name or anything similar. I can find the VPN being launched from here:
        C:\Documents and Settings\All Users\Application Data\Microsoft\Network\Connections\Cm\<Company Name>
    I can see the VPN connection under Network Connections:
        Name                           Type                 Device Name
        -----------------------------  -------------------  -------------------
        Connection to <Company Name>   Connection Manager   WAM Miniport (PPTP)
    Do I just need to delete the connection from Network Connections? And delete the directory? Or?

    Read the article

  • Funnelling http traffic

    - by spencer p
    I have a situation where a large batch of servers (X) need, on demand, to request data from a smaller set of web servers (Y). The worst-case scenario is if all servers in X decide to send different requests to one server in Y. That would be X connections at once, which could be a very large burst of traffic. The best-case scenario is if 1 server in X hits 1 server in Y at a time. Life does not work like this. One idea to entertain is placing a proxy, similar to Squid, between X and Y. All of the X servers would connect to this proxy, which would result in only a few persistent (HTTP keep-alive) connections to Y. If the few were, say, 3 or 4, then it would funnel nicely. If we could then rate-limit those connections, then if traffic decides to spike unusually high, we wouldn't hurt anyone but ourselves. Thoughts?
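    A hedged sketch of the Squid idea, with the X servers pointing their requests for Y at the proxy (network range is a placeholder); note that persistent connections are reused between sequential requests but not multiplexed, so concurrent requests still open concurrent upstream sockets:

        # squid.conf (illustrative sketch)
        http_port 3128
        # keep server-side connections to Y alive so X clients share a small pool of sockets
        server_persistent_connections on
        client_persistent_connections on
        acl xfarm src 10.0.0.0/8
        http_access allow xfarm
        http_access deny all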

    Read the article
