Search Results

Search found 17911 results on 717 pages for 'load balancing'.

Page 5/717 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • use jQuery load to load content in a div?

    - by Khalid Omar
    Simply, I'm doing a test. I have a div called "test" and an MVC action in the client controller. Here are the view and the controller. The controller action is:

        public string testout() { return DateTime.Now.ToString(); }

    and in the view I'm using jQuery to update the div:

        $("#B1").live("click", function() {
            $("#test").load("/client/testout");
            return false;
        });

    The first time I click the button I see the date and time in the div "test"; the second time I click the button, nothing changes.

    Read the article

  • nginx proxy pass redirect through load balancer

    - by Brian
    I have several backend webservers that are load-balanced using LVS. These machines have only internal non-routable IPs on them. The load-balancer is the only machine with an external IP. This setup works great. I would like to add another webserver for image serving, but it will not be part of the load-balanced pool. Is it possible to proxy pass from the load-balanced web servers to the image server and have the response redirected to the client? Client--external LB--internal web server--internal image server I've gotten proxy pass working when I remove the LB from the equation, but no luck when trying to use it.

    Read the article

  • Apache load balancer with https real servers and client certificates

    - by Jack Scheible
    Our network requirements state that ALL network traffic must be encrypted. The network configuration looks like this:

                                                       /-- https --> | server 1 |
        | Client | --- https --> | Load Balancer | ------- https --> | server 2 |
                                                       \-- https --> | server 3 |

    And it has to pass client certificates. I've got a config that can do load balancing with in-the-clear real servers:

        <VirtualHost *:8666>
            DocumentRoot "/usr/local/apache/ssl_html"
            ServerName vmbigip1
            ServerAdmin [email protected]
            DirectoryIndex index.html

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            SSLEngine on
            SSLProxyEngine On
            SSLCertificateFile /usr/local/apache/conf/server.crt
            SSLCertificateKeyFile /usr/local/apache/conf/server.key

            <Proxy balancer://mycluster>
                BalancerMember http://1.2.3.1:80
                BalancerMember http://1.2.3.2:80
                # technically we aren't blocking anyone, but could here
                Order Deny,Allow
                Deny from none
                Allow from all
                # Load Balancer Settings
                # A simple Round Robin load balancer.
                ProxySet lbmethod=byrequests
            </Proxy>

            # balancer-manager
            # This tool is built into the mod_proxy_balancer module allows you
            # to do simple mods to the balanced group via a gui web interface.
            <Location /balancer-manager>
                SetHandler balancer-manager
                Order deny,allow
                Allow from all
            </Location>

            ProxyRequests Off
            ProxyPreserveHost On

            # Point of Balance
            # Allows you to explicitly name the location in the site to be
            # balanced, here we will balance "/" or everything in the site.
            ProxyPass /balancer-manager !
            ProxyPass / balancer://mycluster/ stickysession=JSESSIONID
        </VirtualHost>

    What I need is for the servers in my load balancer to be

        BalancerMember https://1.2.3.1:443
        BalancerMember https://1.2.3.2:443

    But that does not work. I get SSL negotiation errors. Even when I do get that to work, I will need to pass client certificates. Any help would be appreciated.

    Read the article

  • Clustering with vSphere and Apache load balancing

    - by Anonymous2011
    I was looking at vSphere to automatically load balance my Apache web servers and MySQL servers, but will it actually do the job? I know it says it does automatic load balancing, but I'm not sure that means quite what I want to achieve. Is there an easier way to set up a PHP/Apache and MySQL cluster within virtual machines? Or are there any guides to clustering? (I have tried googling without much luck.) Any help towards understanding or setting up clustering and load balancing within virtual machines is appreciated.

    Read the article

  • Load balancers, multiple data centers and url based routing

    - by kunkunur
    There is one data center, dc1. There is a business need to set up another data center, dc2, in another geography, and there might be more in the future, say dc3.

    Within data center dc1 there are two web servers, say WS1 and WS2. These two web servers do not share anything currently, and there is no foreseen need for more web servers within each data center. dc1 also has a local load balancer which has been set up with session stickiness, so if a user u1 lands on dc1 and the load balancer decides to route his first request to WS1, then from there on all of u1's requests get routed to WS1. The local load balancer and the web servers are invisible to the user. The local load balancer listens for traffic on a virtual IP which is assigned to the virtual cluster of web servers WS1 and WS2; the virtual IP is the IP to which the host name resolves in DNS. There are no client-specific subdomains as of now; instead there is a client-specific URL (context), e.g. www.example.com/client1 and www.example.com/client2.

    Given the above, when dc2 is onboarded I want to route traffic between dc1 and dc2 based on the client. The options I have found so far are:

    1. Have client-specific subdomains, e.g. client1.example.com and client2.example.com, and assign each of them the virtual IP of the data center to which I want to route them.
    2. Assign www.example.com and www1.example.com to the first data center, i.e. dc1, and assign www2.example.com to dc2. All requests will first get routed to dc1, where WS1 and WS2 will redirect the user to www1.example.com or www2.example.com based on whether the URL ends with /client1 or /client2 (a sketch of this follows below).

    I need help with the following. If I set up a global load balancer between dc1 and dc2, do I have any alternative solutions; that is, can a global load balancer route traffic based on the URL? Are there drawbacks to the subdomain-based solution compared with the www1 solution? With the www1 solution I am worried that it creates a dependency on dc1, at least for the first request, and that the user will see that he is getting redirected to a different URL.
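
    For what it's worth, the redirect in option 2 boils down to something like the hypothetical servlet filter below on the dc1 web servers (the class name is a placeholder and the hostnames are the ones from the question, not an existing implementation): /client2 requests get bounced to dc2's hostname, everything else stays local.

        import javax.servlet.*;
        import javax.servlet.http.*;
        import java.io.IOException;

        public class ClientRoutingFilter implements Filter {
            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;
                String uri = request.getRequestURI();
                if (uri.startsWith("/client2")) {
                    // client2 is served out of dc2, so bounce the browser there.
                    response.sendRedirect("https://www2.example.com" + uri);
                    return;
                }
                chain.doFilter(req, res); // client1 (and everything else) stays on dc1
            }

            @Override
            public void init(FilterConfig cfg) {}

            @Override
            public void destroy() {}
        }

    The drawback the question already anticipates is visible here: every client2 user pays a first hop through dc1 and sees the hostname change to www2.example.com.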

    Read the article

  • What other ways can I load balance EC2 servers without using Elastic Load Balancing?

    - by undefined
    I have a web application that consists of a web server managed by a web hosting firm, a set of EC2 instances in Amazon's cloud, and a MySQL database (hosted on the web server). MySQL is behind a firewall and is set to allow access from localhost and from a single IP address, which is an Amazon Elastic IP address attached to the EC2 instance I have been running up to now. The problem is that I want to look at my scaling-up and load-balancing strategy for my EC2 instances. To this end I have been investigating the Elastic Load Balancing and Auto Scaling tools that Amazon provides and have managed to set this up fine, but for one thing: connecting to the MySQL database running on my web server. I realised (thanks to answers on Server Fault) that I needed to check firewall settings and add the IP address of the load balancer; however, Elastic Load Balancers give you a DNS name, not an IP address, and in fact the IP addresses change over time, so this will not work. I have been told by the company hosting the database that the firewall works by looking up the IP address behind the DNS name and storing the IP rather than the DNS name. So basically this will not work, and the only way to allow access would be to open up the SQL port to allow access from anyone! Is that a viable idea? Should I look at moving my database into the cloud? Is there another firewall that the server company can use? Should I find another way of load balancing (if so, what?) Tricky one, eh? Any help appreciated!
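
    Just to illustrate the point about the DNS name: a tiny sketch (the ELB hostname below is hypothetical) that prints the addresses the load balancer's name currently resolves to; run it a few hours apart and the list will typically have changed, which is why pinning one resolved IP in the firewall rule cannot work.

        import java.net.InetAddress;

        public class ResolveElb {
            public static void main(String[] args) throws Exception {
                // Hypothetical ELB DNS name; substitute your own balancer's name.
                String name = args.length > 0 ? args[0]
                        : "my-elb-123456789.us-east-1.elb.amazonaws.com";
                for (InetAddress address : InetAddress.getAllByName(name)) {
                    System.out.println(address.getHostAddress());
                }
            }
        }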

    Read the article

  • High CPU load - Ubuntu 14.04

    - by watt
    I noticed that sometimes when browsing (with other processes in the background), I get very high CPU load for the browser process (over 100%) and the computer becomes really slow. I tried switching from Firefox (with just a few extensions) to Chromium, but the same thing happens, without me visiting graphics-intensive sites, Flash sites, or anything like that. I also noticed python or node (when running "make") produce the same high CPU load from time to time, so this is not necessarily browser-related. When I only have a browser open, it doesn't seem to happen, and everything is fine in Windows 7. I switched from Unity to GNOME 3 with no effect. Specs: Lenovo W510 (4 GB RAM, i7 Q820 @ 1.73 GHz) with up-to-date Ubuntu 14.04 64-bit. Screenshot: http://imgur.com/8MZJNKC Do you guys have any idea why this might happen? Please let me know if there's other info you need. Thanks!

    Read the article

  • New host, high load?

    - by dotancohen
    A few minutes ago I signed up at a new web host. I have yet to move my sites over. Upon initial SSH connection, I checked the load and memory usage; they do seem rather higher than I would like:

        # uptime
         12:06:51 up 71 days, 23:23,  1 user,  load average: 9.02, 9.49, 9.45
        # free
                     total       used       free     shared    buffers     cached
        Mem:      33014800   31927192    1087608          0    2384812   17729816
        -/+ buffers/cache:   11812564   21202236
        Swap:     16787916       8584   16779332

    Is that a bit too packed? I'm only paying about $5 USD per month, so I don't expect <0.1 loads, but ~10 is worrisome. Is it not? Also, there is no /etc/issue file, so I tried other methods to guess the OS:

        # uname -a
        Linux box358.bluehost.com 2.6.32-20120131.55.1.bh6.x86_64 #1 SMP Tue Jan 31 15:43:27 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
        # which yum
        /usr/bin/yum
        # which apt-get
        #

    That looks like CentOS / RHEL 6.2, possibly?

    Read the article

  • Load balancing using Mina example with Java DSL

    - by Flame_Phoenix
    So, recently I started learning Camel. As part of the process I decided to go through all the examples (listed HERE and available when you DOWNLOAD the package with all the examples and docs) to see what I could learn. One of the examples, Load Balancing using Mina, caught my attention because it uses MINA in different JVMs and simulates a load balancer with round robin. I have a few problems with this example. First, it uses the Spring DSL instead of the Java DSL, which my project uses and which I find a lot easier to understand now (mainly because I am used to it). So the first question: is there a version of this example using only the Java DSL instead of the Spring DSL for the routes and the beans? My second question is code related. The description states, and I quote: "Within this demo every ten seconds, a Report object is created from the Camel load balancer server. This object is sent by the Camel load balancer to a MINA server where the object is then serialized. One of the two MINA servers (localhost:9991 and localhost:9992) receives the object and enriches the message by setting the field reply of the Report object. The reply is sent back by the MINA server to the client, which then logs the reply on the console." So, from what I read, I understand that MINA server 1 (for example) receives a report from the load balancer, changes it, and then sends that report back to some invisible client. Upon checking the code, I see no client Java class or XML, and when I run it, the server simply posts the results on the command line. Where is the client?? What is this client? In the MINA server 1 code presented here:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:camel="http://camel.apache.org/schema/spring"
               xsi:schemaLocation="
                 http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                 http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">

          <bean id="service" class="org.apache.camel.example.service.Reporting"/>

          <camelContext xmlns="http://camel.apache.org/schema/spring">
            <route id="mina1">
              <from uri="mina:tcp://localhost:9991"/>
              <setHeader headerName="minaServer">
                <constant>localhost:9991</constant>
              </setHeader>
              <bean ref="service" method="updateReport"/>
            </route>
          </camelContext>
        </beans>

    I don't understand how the updateReport method magically prints the object on my console. What if I wanted to send a message to a third MINA server? How would I do it? (I would have to add a new route and send it to the URI of the 3rd server, correct?) I know most of these questions may sound dumb, but I would appreciate it if anyone could help me. A Java DSL version of this would really help me.
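
    For the first question, a rough Java DSL equivalent of that Spring route might look like the sketch below. The endpoint URI, header, and Reporting bean are taken from the XML above; the surrounding main class is an assumption and untested.

        import org.apache.camel.CamelContext;
        import org.apache.camel.builder.RouteBuilder;
        import org.apache.camel.impl.DefaultCamelContext;
        import org.apache.camel.example.service.Reporting;

        public class Mina1Server {
            public static void main(String[] args) throws Exception {
                CamelContext context = new DefaultCamelContext();
                context.addRoutes(new RouteBuilder() {
                    @Override
                    public void configure() {
                        // Same endpoint and bean call as <route id="mina1"> above.
                        from("mina:tcp://localhost:9991")
                            .setHeader("minaServer", constant("localhost:9991"))
                            .bean(new Reporting(), "updateReport");
                    }
                });
                context.start();
                Thread.sleep(Long.MAX_VALUE); // keep the route running
            }
        }

    A third MINA server would presumably just be one more route like this, listening on the third server's URI, plus an extra entry in the client's load-balancer definition.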

    Read the article

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After thousands of Googles and days of trying I can't get the load balancer/failover in Squid on Windows to work. I am using Squid 2.7. My web servers are two single-NIC lighttpd boxes and one dual-NIC lighttpd. server1 in this example is running Squid on port 80 and lighttpd on port 8080 (just to test).

    Requirements:

    - All 3 web servers running lighttpd should be balanced.
    - Two options for load balancing: best would be that if server1 is busy, server2 takes over; if server2 is busy, server3 takes over, etc. Alternatively, round-robin style, evenly distributed load, e.g. server1 takes the first call, server2 the second, etc.
    - All requests should be treated the same way (no URL rewriting or so on).
    - Sent host headers have to be passed to every server as the HTTP Host header, speaking of "server1", "server1.company.internal" and "10.211.1.1".

    My approach:

        acl all src all
        acl manager proto cache_object
        http_port 80 accel defaultsite=server1.company.internal vhost

        # reverse proxy entries
        cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1
        cache_peer 10.211.1.2 parent 80 0 no-query originserver round-robin login=PASS name=server2_nic1
        cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1
        cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2

        # decl of names of squid host
        acl registered_name_hostdomain dstdomain server1.company.internal
        acl registered_name_host dstdomain server1
        # ip of squid host
        acl registered_name_ip dstdomain 10.211.2.1

        # access: redirects the correct squid hostname
        http_access allow registered_name_hostdomain
        http_access allow registered_name_host
        http_access allow registered_name_ip
        http_access deny all

        cache_peer_access server1_nic1 allow registered_name_hostdomain
        cache_peer_access server1_nic1 allow registered_name_host
        cache_peer_access server1_nic1 allow registered_name_ip
        cache_peer_access server2_nic1 allow registered_name_hostdomain
        cache_peer_access server2_nic1 allow registered_name_host
        cache_peer_access server2_nic1 allow registered_name_ip
        cache_peer_access server3_nic1 allow registered_name_hostdomain
        cache_peer_access server3_nic1 allow registered_name_host
        cache_peer_access server3_nic1 allow registered_name_ip
        cache_peer_access server3_nic2 allow registered_name_hostdomain
        cache_peer_access server3_nic2 allow registered_name_host
        cache_peer_access server3_nic2 allow registered_name_ip
        cache_peer_access server1_nic1 deny all
        cache_peer_access server2_nic1 deny all
        cache_peer_access server3_nic1 deny all
        cache_peer_access server3_nic2 deny all

        never_direct allow all

    Problems:

    - The load balancer does not balance to anything other than the first server. Only if the first server is killed in some way does the second take over. I have seen the others working at some point, but definitely not as the intended load balancing described above.
    - If cache_peer_access is not defined, sometimes the wrong hostname is sent to the backend web server, and this always depends on the defaultsite= parameter, probably because the Host header on the request to Squid is not set and is replaced by defaultsite. Leaving out defaultsite didn't solve the problem. The only workaround I found for this is the current approach with cache_peer_access.

    Questions:

    - Does cache_peer_access influence the round-robin?
    - Is there a better workaround to pass the Host header to the backend web servers?
    - Which parameters increase the speed of load balancing, or does anyone have a better approach?

    -Martin

    Read the article

  • Heuristic algorithm for load balancing among threads.

    - by Il-Bhima
    I'm working on a multithreaded program where I have a number of worker threads performing tasks of unequal length. I want to load-balance the tasks to ensure that they do roughly the same amount of work. For task T_i I have a number c_i which provides a good approximation to the amount of work that is required for that task. I'm looking for an efficient algorithm (O(N), N = number of tasks, or better) which will give me a "roughly" good load balance given the values of c_i. It doesn't have to be optimal, but I would like to be able to have some theoretical bounds on how bad the resulting allocations are. Any ideas?
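
    One standard heuristic (not from the question, just the usual answer to it) is Graham's longest-processing-time rule: sort tasks by c_i descending and always hand the next task to the currently least-loaded worker. It runs in O(N log N), and its makespan is provably at most (4/3 - 1/(3m)) times optimal for m workers; plain greedy in arrival order is O(N) with a weaker 2 - 1/m bound. A minimal sketch with hypothetical names:

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.Comparator;
        import java.util.List;
        import java.util.PriorityQueue;

        public class LptBalancer {

            /** Returns, for each of the m workers, the list of task indices assigned to it. */
            public static List<List<Integer>> assign(double[] costs, int m) {
                // Task indices sorted by approximate cost c_i, largest first.
                Integer[] order = new Integer[costs.length];
                for (int i = 0; i < order.length; i++) order[i] = i;
                Arrays.sort(order, (a, b) -> Double.compare(costs[b], costs[a]));

                // Min-heap of workers keyed by current total load: { load, workerIndex }.
                PriorityQueue<double[]> byLoad =
                        new PriorityQueue<>(Comparator.comparingDouble((double[] w) -> w[0]));
                List<List<Integer>> assignment = new ArrayList<>();
                for (int w = 0; w < m; w++) {
                    byLoad.add(new double[] { 0.0, w });
                    assignment.add(new ArrayList<>());
                }

                // Give each task (largest remaining first) to the least-loaded worker.
                for (int task : order) {
                    double[] worker = byLoad.poll();
                    assignment.get((int) worker[1]).add(task);
                    worker[0] += costs[task];
                    byLoad.add(worker);
                }
                return assignment;
            }

            public static void main(String[] args) {
                double[] c = { 7, 3, 5, 2, 8, 4 };
                System.out.println(assign(c, 3)); // three buckets with loads 10 / 10 / 9
            }
        }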

    Read the article

  • Load balancing, connection loop

    - by Iapilgrim
    Hi all, I've set up load balancing using Apache and Tomcat through the AJP connector. Here is the config in Apache's httpd.conf:

        BalancerMember ajp://10.0.10.13:8009 route=osmoz-tomcat-1 retry=5
        BalancerMember ajp://10.0.10.14:8009 route=osmoz-tomcat-2 retry=5
        <Location />
            Allow From All
            ProxyPass balancer://tomcatservers/ stickysession=JSESSIONID nofailover=off
            ProxyPassReverse /
        </Location>

    When I try to access https://mycms.net/apps/bo/signin I see the connection loop forever (it redirects forever). I don't know why, and this only happens sometimes. So my questions are: Is there any way Apache can cause a connection loop? Is there a tool to help me monitor the connections? My question may not be clear, but I don't have any clue right now. Thanks

    Read the article

  • Load balancing and scheduling algorithms.

    - by Lukas Šalkauskas
    Hello there, so here is my problem: I have several servers with different configurations. I have different calculations (jobs); I can predict approximately how long each job will take to be calculated. Also, I have priorities. My question is how to keep all machines loaded at 99-100% and schedule the jobs in the best way. Each machine can do several calculations at a time. Jobs are pushed to the machines. The central machine knows the current load of each machine. Also, I would like to apply some kind of machine learning here, because I will know the statistics of each job (started, finished, CPU load, etc.). How can I distribute jobs (calculations) in the best possible way, keeping in mind the priorities? Any suggestions, ideas, or algorithms? FYI: My platform is .NET.
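
    Leaving the machine-learning part aside, one common starting shape for this (a sketch only, with hypothetical names, written in Java to keep the examples in these listings in one language; the idea maps directly to C#) is a central dispatcher that keeps pending jobs in a priority queue and, on each tick, hands the highest-priority job to the least-loaded machine that still has spare capacity:

        import java.util.Comparator;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.PriorityQueue;

        public class Dispatcher {
            static final class Job {
                final String id; final int priority; final double predictedCost;
                Job(String id, int priority, double predictedCost) {
                    this.id = id; this.priority = priority; this.predictedCost = predictedCost;
                }
            }

            // Highest priority first; among equal priorities, longest predicted job first,
            // so the big jobs are not all stranded at the end.
            private final PriorityQueue<Job> pending = new PriorityQueue<>(
                    Comparator.comparingInt((Job j) -> -j.priority)
                              .thenComparingDouble(j -> -j.predictedCost));

            private final Map<String, Double> loadByMachine = new HashMap<>(); // reported load, 0.0-1.0

            public void submit(Job job) { pending.add(job); }

            public void reportLoad(String machine, double load) { loadByMachine.put(machine, load); }

            /** Called on a timer: picks one job for every machine below the target load. */
            public Map<String, Job> dispatch(double targetLoad) {
                Map<String, Job> toSend = new HashMap<>();
                loadByMachine.entrySet().stream()
                        .filter(e -> e.getValue() < targetLoad)
                        .sorted(Map.Entry.comparingByValue())     // least-loaded machines first
                        .forEach(e -> {
                            Job next = pending.poll();
                            if (next != null) toSend.put(e.getKey(), next);
                        });
                return toSend; // the caller then pushes each job to its machine
            }
        }

    The job statistics collected over time (predicted vs. actual run time, CPU load) would feed back into predictedCost.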

    Read the article

  • High load (and high temp) with idle processes

    - by Nanne
    I've got a semi-old laptop (Toshiba Satellite A110-228) that's been appointed 'laptop for the kids' by my sister. I've installed Ubuntu Netbook (10.10) on it because of the lack of memory, and it seems to work fine, except for some heat issues. These were never a problem under Windows. It looks like I've got something similar to this problem: load is generally 1 or higher; sometimes it's stuck at 0.80, but it's way too high. top/htop only show a couple of percent CPU use (which isn't too shocking, as I'm not doing anything). At this point all the software is stock, and I'd like to keep it that way because it's supposed to be the easy-to-maintain kids' computer. Now I'd like to find out: What could be the cause of the high load? Could it be, as this thread implies, some driver, or are there other options to check? How could I see what is really keeping the system hot and bothered? How do I check what runs, etc.? I'd like to pinpoint the culprit. What further steps should I take for debugging? The big bad internet leads me to believe that it might be the graphics drivers. The laptop has an Intel 945M chipset, but that doesn't seem to be one of the problem children in this matter (I read a lot about ATI drivers that need a special install). I'd not only welcome hints to directly solve this (duh) but also help in starting to debug what is going on. I am really hesitant to install an older kernel, as I want it to be stock and easily upgradeable (because I don't live near it, it should run without me ;) ). As an afterthought: to keep the whole thing cooler, can I 'amp up' the fan control? It only goes into "airplane" mode when the computer hits 95 Celsius, which is a tad late for my taste. (top and powertop output followed as screenshots.)

    Read the article

  • A generic error occurred in GDI+ + ABCPdf + Load Balancing

    - by jalpesh
    We are using two load-balancing servers for an ASP.NET site. In it we have functionality which creates a receipt for an order in PDF using the ABCpdf component. It was working fine without the load-balancing servers, but when we moved it to the load-balancing servers it started giving errors like "A generic error occurred in GDI+". I have given full rights to the directory which is used, but there is still a problem. Does anybody have a solution for this?

    Read the article

  • is it worth using a load balancer on a web server/website

    - by user427969
    I have a website, and a while ago the web server of the company hosting my website was down for about a day. I consulted the company for a solution on how I can stop this from happening in future, and they suggested having a second machine which will be connected to my current website/web server by a "load balancer" (at an additional huge cost!!!). The second machine will be a replica of the first one, so if one goes down, the other will always be running. ---- Explanation ----- My hosting company suggested that it would be a good idea to have a second machine running at the same time, with both machines connected by a load balancer, which reduces the risk of downtime. The second machine will be a mirror of the first, and any changes to the first must be replicated in the second. I don't mind spending money if it really saves my website from going down. I want to know: is it worth having this "load balancer" for my purpose? My website is a 24/7 service. I cannot afford an outage of 24 hours, or even 1 hour. I don't mind using this "load balancer" as long as it is really worth it. I am not sure if it's just a marketing trick by my hosting company or really the "best" solution. Thanks for the help. Regards

    Read the article

  • High load average threshold in Linux

    - by user2481010
    One of my friends said that his server load average sometimes goes above 500-1000. To me that is a strange value, because I have never seen a load average of more than 10. I asked him to give me some snapshots of top and memory usage, and he gave the following details:

    TOP USAGES

        top - 06:06:03 up 117 days, 23:02,  2 users,  load average: 147.37, 44.57, 15.95
        Tasks: 116 total,   2 running, 113 sleeping,   0 stopped,   1 zombie
        Cpu(s): 16.6%us,  6.9%sy,  0.0%ni,  9.2%id, 66.5%wa,  0.0%hi,  0.8%si,  0.0%st
        Mem:   8161648k total,  7779528k used,   382120k free,     3296k buffers
        Swap:  5242872k total,  1293072k used,  3949800k free,   168660k cached

    Free

        $ free -gt
                     total       used       free     shared    buffers     cached
        Mem:             7          6          1          0          0          4
        -/+ buffers/cache:          1          5
        Swap:            4          0          4
        Total:          12          6          6

    Total CPU

        $ nproc
        8

    My question is: is it possible to have a load average of more than 100 on an 8-core, 12 GB memory server? I have read many tutorials and articles on load average which give the rule of thumb "number of cores = max load". According to that rule of thumb the max load average here would be 16, so how is his server running with a load average of 147.37? He said that this is the lowest value (147.37); sometimes it goes above 500.

    Read the article

  • Load balance to proxies

    - by LoveRight
    I have installed several proxy programs whose addresses are, for example, 127.0.0.1:8580 (HTTP) and 127.0.0.1:9050 (SOCKS5). You may regard them as Tor and its alternatives. You know, certain proxy programs are faster than others at times, while at other times they are slower. The Firefox add-ons AutoProxy and FoxyProxy Standard can define a list of rules, such as "any URL matching the pattern *.google.com should be proxied to 127.0.0.1:8580 using the SOCKS5 protocol". But such a rule is "static". I want *.google.com to be proxied to the fastest proxy, no matter which one that is. I think that is a kind of load balancing. I thought I could set a rule that directs requests for *.google.com to the address the load balancer listens on, and the load balancer forwards each request to the fastest real proxy. I notice that Tor uses the SOCKS5 protocol and some other applications use HTTP, and I am confused about which protocol the load balancer should use. I also start to wonder about the feasibility of this solution. Any suggestions? My operating system is Windows 7 x64.

    Read the article

  • Is there any way to test how the site will perform under load

    - by Pankaj Upadhyay
    I have made an ASP.NET MVC website and hosted it with a shared hosting provider. Since my website is built around a very generic idea, it might have a number of concurrent users at some point in the future. So I was thinking of a way to test my website's performance under load, i.e. how the site will perform when 100 or 1000 users are online and surfing the website at the same time. This will also help me understand whether my LINQ queries are well written or not.
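
    Dedicated tools such as Apache JMeter or ApacheBench (ab) are the usual way to answer this; for a very rough first look, a hand-rolled client along these lines (hypothetical URL, Java 11+, a sketch rather than a real benchmark) fires N concurrent requests and reports the average response time:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.time.Duration;
        import java.util.concurrent.*;

        public class SimpleLoadTest {
            public static void main(String[] args) throws Exception {
                String url = args.length > 0 ? args[0] : "https://example.com/"; // placeholder target
                int concurrent = 100; // simulated simultaneous users

                HttpClient client = HttpClient.newBuilder()
                        .connectTimeout(Duration.ofSeconds(10)).build();
                ExecutorService pool = Executors.newFixedThreadPool(concurrent);
                CompletionService<Long> results = new ExecutorCompletionService<>(pool);

                for (int i = 0; i < concurrent; i++) {
                    results.submit(() -> {
                        long start = System.nanoTime();
                        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                        return (System.nanoTime() - start) / 1_000_000; // elapsed ms
                    });
                }
                long totalMs = 0;
                for (int i = 0; i < concurrent; i++) totalMs += results.take().get();
                pool.shutdown();
                System.out.printf("average response time: %d ms over %d requests%n",
                        totalMs / concurrent, concurrent);
            }
        }

    Keep in mind that a shared host may throttle or flag this kind of traffic, so it is worth coordinating with the provider before running anything heavier.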

    Read the article

  • Apache load balancer limits with Tomcat over AJP

    - by PAS
    Hi all, I have Apache acting as a load balancer in front of 3 Tomcat servers. Occasionally, Apache returns 503 responses, which I would like to eliminate completely. All 4 servers are not under significant load in terms of CPU, memory, or disk, so I am a little unsure what is reaching its limits or why. 503s are returned when all workers are in error state, whatever that means. Here are the details.

    Apache config:

        <IfModule mpm_prefork_module>
            StartServers          30
            MinSpareServers       30
            MaxSpareServers       60
            MaxClients           200
            MaxRequestsPerChild 1000
        </IfModule>
        ...
        <Proxy *>
            AddDefaultCharset Off
            Order deny,allow
            Allow from all
        </Proxy>

        # Tomcat HA cluster
        <Proxy balancer://mycluster>
            BalancerMember ajp://10.176.201.9:8009 keepalive=On retry=1 timeout=1 ping=1
            BalancerMember ajp://10.176.201.10:8009 keepalive=On retry=1 timeout=1 ping=1
            BalancerMember ajp://10.176.219.168:8009 keepalive=On retry=1 timeout=1 ping=1
        </Proxy>

        # Passes thru track. or api.
        ProxyPreserveHost On
        ProxyStatus On

        # Original tracker
        ProxyPass /m balancer://mycluster/m
        ProxyPassReverse /m balancer://mycluster/m

    Tomcat config:

        <Server port="8005" shutdown="SHUTDOWN">
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <Listener className="org.apache.catalina.core.JasperListener" />
          <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
          <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
          <Service name="Catalina">
            <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
            <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
            <Engine name="Catalina" defaultHost="localhost">
              <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"
                    xmlValidation="false" xmlNamespaceAware="false">
            </Engine>
          </Service>
        </Server>

    Apache error log:

        [Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.201.10:8009 (10.176.201.10) failed
        [Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.201.10)
        [Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.201.10
        [Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.201.9:8009 (10.176.201.9) failed
        [Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.201.9)
        [Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.201.9
        [Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.219.168:8009 (10.176.219.168) failed
        [Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.219.168)
        [Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.219.168
        [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
        [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
        [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
        [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
        [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
        [Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state

    Load balancer top info:

        top - 23:44:11 up 210 days,  4:32,  1 user,  load average: 0.10, 0.11, 0.09
        Tasks: 135 total,   2 running, 133 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.1%us,  0.2%sy,  0.0%ni, 99.2%id,  0.1%wa,  0.0%hi,  0.1%si,  0.3%st
        Mem:    524508k total,   517132k used,     7376k free,     9124k buffers
        Swap:  1048568k total,      352k used,  1048216k free,   334720k cached

    Tomcat top info:

        top - 23:47:12 up 210 days,  3:07,  1 user,  load average: 0.02, 0.04, 0.00
        Tasks:  63 total,   1 running,  62 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.2%us,  0.0%sy,  0.0%ni, 99.8%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   2097372k total,  2080888k used,    16484k free,    21464k buffers
        Swap:  4194296k total,      380k used,  4193916k free,  1520912k cached

    catalina.out does not have any error messages in it. According to Apache's server status, it seems to be maxing out at 143 requests/sec. I believe the servers can handle substantially more load than they are currently seeing, so any hints about low default limits or other reasons why this setup would be maxing out would be greatly appreciated.

    Read the article

  • Load-balanced ASP.NET websites and required memory usage

    - by Matt
    Each of my servers has 8 GB RAM and memory usage hovers around 7 GB. I have a load balancer available to me, but at the moment I'm worried that putting my sites through it will cause the platform to fall over. The load balancer would be configured with sticky round-robin, where a new connection is assigned round-robin but subsequent connections from the same source IP remain on the same server (until a limit is reached). That's all standard stuff. How do I know what memory usage my sites will need across the platform when I put them through the load balancer? Rather than knowing that a site is using 150 MB on a particular server, I could face a situation where 150 MB is taken up on each of the servers. I know that with only 1 GB free I could have a serious problem on my hands. If I free up some memory, how can I work out how much I need to keep free to prevent this from happening? Thanks, Matt

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >