Search Results

Search found 814 results on 33 pages for 'balancing'.

Page 3 of 33

  • Load balancing a Windows File Share using HA-Proxy

    - by NathanE
    After pulling my hair out over DFS I just had this weird and potentially dangerous idea come into my head whereby, just possibly, I might be able to use HA-Proxy to load balance a file share between servers. I've done some rudimentary packet traces and it does appear that TCP port 445 is the only thing involved in using Windows file sharing. I've always thought for many years that UDP 139, 135 etc. were also involved in at least establishing the connection - but apparently not! So I set up a basic test:

        listen SMBTest *:445
            mode tcp
            server Smb1 172.16.61.201:445
            server Smb2 172.16.61.202:445

    And you'll never guess what... it works??? (!) Now obviously there is the whole concern about synchronisation between the file servers (of course). That could easily be taken care of with a little bit of Robocopy script. And considering I only need an HA read-only file share there wouldn't be any issues with regard to file locking etc. Can anyone tell me if what I'm playing with here is fire? I really didn't think it would work at all and now I'm a little shocked. What would be the downsides? Could this be relied upon for a production environment?
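
    For reference, a minimal sketch of the same idea with health checks added, assuming HAProxy 1.4-era syntax and the addresses from the question (untested for SMB specifically):

        listen SMBTest
            bind *:445
            mode tcp
            balance roundrobin
            option tcplog
            # 'check' makes HAProxy probe port 445 and stop sending
            # connections to a member that stops accepting TCP
            server Smb1 172.16.61.201:445 check inter 5s fall 2
            server Smb2 172.16.61.202:445 check inter 5s fall 2

    Without checks, a dead file server would still receive its share of connections, which matters more here than the balancing itself.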

    Read the article

  • AWS VPC ELB vs. Custom Load Balancing

    - by CP510
    So I'm wondering if this is a good idea. I have an Amazon AWS VPC set up with public and private subnets, so I already have the Internet Gateway and NAT. I was going to set up all my web servers (Apache2 instances) and DB servers in the private subnet and use a load balancer/reverse proxy to pick up requests and send them into the private subnet's cluster of servers. My question then: are Amazon's ELBs a good fit for this, or is it better to set up my own custom instance to handle the public requests and run them through the NAT using nginx or pound? I like the second option just for the sake of having an instance I can log into and check, as well as taking advantage of caching and fail2ban DDoS prevention, and possibly using failsafes to redirect traffic. But I have no experience with ELBs, so I thought I'd ask your opinions. Also, if you have an opinion on this as well: would using the second option allow me to have only 1 public IP address and be able to route SSH connections through port numbers to the respective instances? Thanks in advance!
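
    If you go the self-managed route, a minimal nginx reverse-proxy sketch for the public instance might look like the following (backend addresses here are hypothetical placeholders for the private-subnet Apache instances):

        upstream app_servers {
            server 10.0.1.11:80;   # private-subnet Apache instance (placeholder)
            server 10.0.1.12:80;   # private-subnet Apache instance (placeholder)
        }

        server {
            listen 80;
            location / {
                proxy_pass http://app_servers;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    SSH to individual instances could then be handled separately on the same public IP, for example by forwarding a distinct external port to each private instance.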

    Read the article

  • Load Balancing Rails on Apache 2.x

    - by revgum
    My situation is that I need to proxy traffic to the root of my web server to port 81 for IIS, and then any traffic to a sub-directory needs to be directed to the Rails app:

        my-server.com/      - needs to proxy to port 81
        my-server.com/myapp - needs to point to the Rails app

    This seems to be working alright for the Rails application, but the images, javascripts, and stylesheets are not actually working (proxied). I've tried to fiddle with the ProxyPass lines but it still doesn't work for me... can anyone help? Here's my complete VirtualHost portion of the config:

        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        ProxyRequests off

        <Proxy balancer://myapp_cluster>
            BalancerMember http://127.0.0.1:3001
            BalancerMember http://127.0.0.1:3002
        </Proxy>

        <VirtualHost *:80>
            DocumentRoot "c:\ruby\apps\myapp\public"
            <Directory /myapp >
                Options FollowSymLinks
                AllowOverride None
            </Directory>

            ProxyPass /myapp/images !
            ProxyPass /myapp/stylesheets !
            ProxyPass /myapp/javascripts !
            ProxyPass /myapp/ balancer://myapp_cluster/
            ProxyPassReverse /myapp/ balancer://myapp_cluster/
            ProxyPreserveHost on
            ProxyPass / http://localhost:81/

            ErrorLog "c:\ruby\apps\myapp\log\error.log"
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog "c:\ruby\apps\myapp\log\access.log" combined
        </VirtualHost>
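
    One thing worth noting: the "ProxyPass ... !" lines only exclude those paths from proxying; Apache still needs a mapping to serve them from disk, because /myapp/images does not exist under the DocumentRoot shown. A hedged sketch of the missing piece, assuming the assets live in the Rails public directory:

        Alias /myapp/images      "c:/ruby/apps/myapp/public/images"
        Alias /myapp/stylesheets "c:/ruby/apps/myapp/public/stylesheets"
        Alias /myapp/javascripts "c:/ruby/apps/myapp/public/javascripts"

    Depending on the global configuration, the aliased directories may also need a <Directory> block that allows access.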

    Read the article

  • Load balancing without a load balancer?

    - by Tom
    I have 4 nginx-powered image servers on their own subdomains which users would access at random. I decided to put them all behind an HAProxy load balancer to improve reliability and to see the traffic statistics from a single location. It seemed like a no-brainer. Unfortunately, the move was a complete failure, as the load balancer's 100 Mbit port was completely saturated with all requests now going through it. I was wondering what to do about this - I could get a port upgrade ($$) or return to 4 separate image servers that are randomly accessed. I thought about putting HAProxy on each image server, which would in turn route to another image server if that server's nginx service was having trouble. What would you do? I would prefer not to spend too much additional money.
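
    The per-server HAProxy idea could be sketched roughly like this, assuming the local nginx is moved to port 8080 and the peer address is a placeholder for one of the other image servers; traffic stays on the local box and only fails over when the local nginx stops answering:

        listen images
            bind *:80
            mode http
            option httpchk GET /
            server local 127.0.0.1:8080 check
            # only used when the local nginx fails its health check
            server peer  192.0.2.20:80  check backup

    That keeps the normal traffic path off any single 100 Mbit port, which was the problem with the central balancer.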

    Read the article

  • Load balancing with puppet

    - by Gonçalo Queirós
    Hi there. I'm trying to set up a load balancing system. My load balancer (nginx) has a conf file where I need to list all the IPs of the upstream servers. I could put the IPs in the conf manually, but that way I would need to change the conf file every time I add or remove an upstream server. For now I have come up with two different ideas, but I don't much like either: 1 - Have every upstream machine use Exported Resources to create a file with its IP. Then the load balancer server will have an "include conf_directory/*" and load all the files created by the upstream servers. Since the load balancer is using nginx this can be done, but if I want later on to configure something that doesn't have "include" in its conf files, this solution will not work. 2 - If the config doesn't support the "include" command, then we could again have every upstream server use Exported Resources to create a file with its IP, and later on the load balancer executes a command that picks up every file and generates the config. Both versions adopt the same technique; the difference is that version 2 is used when the server (that needs to have a conf generated) doesn't recognize a command like "include" inside its own conf. Now, my question is, is there any way to do this in a different form? I suspect that there is, since Puppet is made to manage multiple servers; it seems a bit strange not to have an easy way to configure load balancers.
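
    Option 1 is the usual pattern for this; a rough sketch, assuming exported resources (storeconfigs) are enabled and the paths below are hypothetical:

        # on every upstream node: export a fragment carrying this node's IP
        @@file { "/etc/nginx/upstreams.d/${::hostname}.conf":
          content => "server ${::ipaddress};\n",
          tag     => 'nginx_upstream',
        }

        # on the load balancer node: collect every exported fragment
        File <<| tag == 'nginx_upstream' |>>

        # nginx.conf on the balancer then only needs:
        #   upstream backend { include /etc/nginx/upstreams.d/*.conf; }

    For software without an "include" mechanism (your option 2), the same exported fragments can be assembled into a single file with the concat module instead of running an external command.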

    Read the article

  • Load balancing with Cisco router

    - by you8301083
    I have a Cisco router with two bonded T1s which are set up as a VPN to the main office. We need more bandwidth but can't get other connections (or it's too costly), so I would like to have a DSL connection installed. This DSL connection will run over a VPN to the same main office, but it won't be bonded with the T1s - so it won't act as a single connection. Since the three circuits won't act as a single connection (basically there would be two connections: the bonded T1s plus the DSL), we would have to split the network in half - but I don't want to do that. Instead, would it be possible to send all HTTP/HTTPS over the DSL connection but send all mission-critical data (such as voice/Active Directory) over the T1s? I basically want to send specific ports over DSL and everything else over the T1s, without having to put half of the users' traffic on the DSL and the rest on the T1s.
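
    What is being described here is policy-based routing rather than load balancing; a hedged IOS sketch, where the LAN subnet, interface and next hop are placeholders for the real values on the DSL VPN path:

        access-list 101 permit tcp 192.168.1.0 0.0.0.255 any eq www
        access-list 101 permit tcp 192.168.1.0 0.0.0.255 any eq 443
        !
        route-map WEB-OVER-DSL permit 10
         match ip address 101
         set ip next-hop 203.0.113.1
        !
        interface FastEthernet0/0
         description LAN-facing interface
         ip policy route-map WEB-OVER-DSL

    Traffic matched by the access list takes the DSL path; everything else, including voice and Active Directory, keeps following the normal routing table over the bonded T1s.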

    Read the article

  • Requests per second slower when using nginx for load balancing

    - by Ed Eliot
    I've set up nginx as a load balancer that reverse proxies requests to 2 Apache servers. I've benchmarked the setup with ab and am getting approx 35 requests per second with requests distributed between the 2 backend servers (not using ip_hash). What is confusing me is that if I query either of the backend servers directly via ab I get around 50 requests per second. I've experimented with a number of different values in ab, the most common being 1000 requests with 100 concurrent connections. Any idea why traffic distributed across 2 servers would result in fewer requests per second than hitting either directly? Additional info: I've experimented with worker_processes values of between 1 and 8, worker_connections between 1024 and 8092, and have also tried keepalive 0 and 65. My main conf currently looks like this:

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;
        worker_rlimit_nofile 8192;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    I've got one virtual host (in sites-available) that redirects everything under / to the 2 backends across a local network.
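
    For reference, a minimal sketch of that kind of virtual host (backend addresses are placeholders), which is also where proxy timeouts and buffering would be tuned:

        upstream apache_backends {
            server 192.168.0.11:80;   # backend Apache server (placeholder)
            server 192.168.0.12:80;   # backend Apache server (placeholder)
        }

        server {
            listen 80;
            location / {
                proxy_pass http://apache_backends;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    It may also be worth repeating the ab runs with and without keepalive (ab -k), since keepalive_timeout 0 in the main conf rules out persistent client connections through the proxy.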

    Read the article

  • a load balancing scenario using HAProxy and keepalived shows no performance advantage

    - by chakoshi
    Hi, I am trying to set up a load-balanced web server scenario, using two HAProxy load balancers and two Debian web servers, following this guide: http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-haproxy-keepalived-on-debian-lenny. The setup is working, but the results of simple performance benchmarking are not what I expected. I used the Apache benchmark tool to send lots of requests to the servers (one time directly testing one of the web servers, and the other time testing through the load balancer) using the command "ab -n 1000000 -c 500 http://IP/index.html", but the test results show better performance for the single server without the load balancer. Can anyone tell me if I'm going wrong somewhere?

    Read the article

  • ARR servers in the Load Balancing pool automatically go from unavailable to available

    - by Chris
    I have 3 IIS web servers in an ARR web farm. When we do rolling releases, we take one server offline as a backup server and move it into an "Unavailable" state. I have noticed that with ARR, servers will not stay in this state... they come back online automatically hours or days later. Does anyone know how to remedy this situation? This is very bad, as the server that is down is typically not running the correct version of our code. I need to keep a server unavailable until I tell it otherwise.

    Read the article

  • Load Balancing Linux Web Services and Change Config Without Restart

    - by Eric J.
    What options are available to load balance web service traffic on Linux with the ability to add or remove servers from the server pool without restarting the load balancer? This post: http://serverfault.com/questions/71437/mod-proxy-change-without-restart looks like a very promising way to switch between two servers, but I don't know enough about mod_proxy and mod_rewrite to understand how/if I can use an external file to specify the BalancerMember entries for a balancer section. Are there other open source load balancers that support reconfiguration without a restart?
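
    HAProxy is one commonly used option here: members can be taken in and out of rotation at runtime through its stats socket, and config changes are picked up with a hitless reload rather than a restart. A rough sketch, assuming a backend named web_pool and the socket path configured as shown:

        # in haproxy.cfg
        global
            stats socket /var/run/haproxy.sock level admin

        # at runtime: drain or restore a member without touching the process
        echo "disable server web_pool/web2" | socat stdio /var/run/haproxy.sock
        echo "enable server web_pool/web2"  | socat stdio /var/run/haproxy.sock

    nginx behaves similarly in that upstream changes are applied with a configuration reload (nginx -s reload) instead of a full restart.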

    Read the article

  • nginx automatic failover load balancing

    - by robinmag
    Hi, I'm using nginx and NginxHttpUpstreamModule for load balancing. My config is very simple:

        upstream lb {
            server 127.0.0.1:8081;
            server 127.0.0.1:8082;
        }

        server {
            listen 89;
            server_name localhost;
            location / {
                proxy_pass http://lb;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    But with this config, when one of the 2 backend servers is down, nginx still routes requests to it, and that results in a timeout half of the time :( Is there any solution to make nginx automatically route requests to another server when it detects that a server is down? Thank you.
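
    nginx handles this out of the box with passive health checks; a sketch of the relevant parameters added to the config above:

        upstream lb {
            # mark a member down for 10s after 2 failed attempts
            server 127.0.0.1:8081 max_fails=2 fail_timeout=10s;
            server 127.0.0.1:8082 max_fails=2 fail_timeout=10s;
        }

        server {
            listen 89;
            server_name localhost;
            location / {
                proxy_pass http://lb;
                # retry the request on the other member on errors/timeouts
                proxy_next_upstream error timeout http_502 http_503;
                proxy_connect_timeout 2s;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    proxy_next_upstream retries a failed request on the remaining member, and max_fails/fail_timeout keep nginx from sending new requests to a member that has recently failed.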

    Read the article

  • Network load balancing, efficiency and limits?

    - by Vimvq1987
    I'm about to study NLB on Windows Server 2003. It addresses both of my current interests: scalability and high availability. But I don't know about its strength in a production environment. Is NLB an efficient solution? How is it implemented in the real world? Is it popular? What are its limits? Thank you so much for answering my questions. :)

    Read the article

  • Load balancing + NAT issue on BNT GBE 2-7 gear

    - by Clément Game
    Hi guys, I'm having trouble configuring a hardware load balancer with NAT functions. I have the following architecture:

        Internet === VIP (public) LB (private IP) ==== privately addressed servers

    When a connection is initiated from the outside (Internet), the LB correctly forwards the SYN packet to one of the private servers. But when these servers want to reply with a SYN/ACK there is a problem: the initial SYN packet had, as its IP header, VIP = Private_server_Address. But the private servers cannot reach the VIP from their side (this is normal since it's NATed), and so cannot produce a correct reply. Do you have any solution to forward the packets to their correct destination? Note: the load balancer, which is the default gateway for the servers, also has a NAT rule for "masquerading" (actually more SNAT than real masquerading). Regards, Clément.

    Read the article

  • load balancing between physical and virtual machines

    - by fefe
    First of all, sorry if my question is not relevant; I'm quite a beginner. In short: I have 2 physical machines - the first one runs Windows Server 2007 and Apache 2.2, and on the second machine ESXi is installed to host virtual machines. I have converted my physical machine (1) to a virtual machine on ESXi (2), and as the next step I would like to deploy a load balancer between the physical and the virtual machine. Would this workaround be appropriate? If yes, what are the steps to follow?

    Read the article

  • Add Machine Key to machine.config in a Load Balancing environment with multiple versions of the .NET framework

    - by davidb
    I have two web servers behind an F5 load balancer. Each web server has applications identical to the other. There was no issue until the config of the load balancer changed from source address persistence to least connections. Now in some applications I receive this error:

        Server Error in '/' Application.
        Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Web.HttpException: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.
        Source Error: The source code that generated this unhandled exception can only be shown when compiled in debug mode. To enable this, please follow one of the below steps, then request the URL: 1) add a "Debug=true" directive at the top of the file that generated the error, or 2) add the following section to the configuration file of your application. Note that this second technique will cause all files within a given application to be compiled in debug mode. The first technique will cause only that particular file to be compiled in debug mode. Important: Running applications in debug mode does incur a memory/performance overhead. You should make sure that an application has debugging disabled before deploying into a production scenario.

    How do I add a machine key to the machine.config file? Do I do it at server level in IIS, or at website/application level for each site? Do the validation and decryption keys have to be the same across both web servers, or are they different? Should they be different for each .NET version's machine.config? I cannot find any documentation for this scenario.
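
    The short answer is that the keys must be explicit and identical on every server in the farm; AutoGenerate is what breaks once least-connections balancing sends a postback to a different server than the one that rendered the page. A hedged sketch of the element, with placeholder key values; it can go in each application's web.config, or in the machine.config of every .NET version you serve:

        <system.web>
          <machineKey
            validationKey="...same generated hex value on every server..."
            decryptionKey="...same generated hex value on every server..."
            validation="SHA1"
            decryption="AES" />
        </system.web>

    Setting it per application in web.config is usually easier to manage than editing each framework version's machine.config, and it scopes the shared keys to the sites that actually need them.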

    Read the article

  • Apache Balancing by source IP

    - by Daniel
    I am using Apache's proxy balancer to balance one subdomain (e.g. subdomain.domain.com) to an application which is located on 2 servers. Here is an extract from my Apache configuration file:

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

        <Proxy balancer://cluster1>
            BalancerMember http://server1:28081 route=w1
            BalancerMember http://server2:28082 route=w2
        </Proxy>

        ProxyPass /path balancer://cluster1/path
        ProxyPassReverse /path balancer://cluster1/path

    My question is whether it's possible to decide, based on the source IP address, which BalancerMember should be used for a request - e.g. to send requests from 1.2.3.4 to member 1?
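
    mod_proxy_balancer itself does not pick members by client address, but one common workaround is to short-circuit specific sources with mod_rewrite's proxy flag before the balancer rules take effect; a sketch assuming mod_rewrite is loaded and that 1.2.3.4 should always hit the first member:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} ^1\.2\.3\.4$
        RewriteRule ^/path/(.*)$ http://server1:28081/path/$1 [P,L]

        # everyone else keeps going through the balancer
        ProxyPass /path balancer://cluster1/path
        ProxyPassReverse /path balancer://cluster1/path

    If the goal is only session stickiness rather than pinning specific clients, the balancer's stickysession parameter is usually the simpler route.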

    Read the article

  • PHP and load balancing

    - by StCee
    I have one major domain, but the server spec behind it is not good enough. Hence I want to relay the traffic, in particular the PHP/MySQL queries, to multiple smaller servers. How is that normally done? (BTW, I wonder how much traffic, or how many PHP/MySQL requests, a normal setup on an EC2 micro instance can handle?) I did have a look at the EC2 load balancer. But is it only possible to load balance across machines in your own account?
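
    On the PHP side, a common first step before a full load balancer is to split database reads from writes so the read traffic can fan out to replica servers; a very rough sketch, with placeholder hostnames and credentials:

        <?php
        // writes always go to the master (placeholder hostname/credentials)
        $master = new mysqli('db-master.internal', 'app', 'secret', 'appdb');

        // reads are spread across replicas (placeholder hostnames)
        $replicas = array('db-replica-1.internal', 'db-replica-2.internal');
        $read = new mysqli($replicas[array_rand($replicas)], 'app', 'secret', 'appdb');

        $result = $read->query('SELECT id, name FROM users LIMIT 10');   // read hits a replica
        $master->query('INSERT INTO visits (user_id) VALUES (42)');      // write hits the master

    For the web tier itself, the usual answer is several identical PHP servers behind a reverse proxy or ELB; ELB registers EC2 instances from your own account, so machines hosted elsewhere would need a self-managed proxy instead.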

    Read the article

  • HTTPS load balancing based on some component of the URL

    - by user38118
    We have an existing application that we wish to split across multiple servers (for example: 1000 users total, 100 users on each of 10 servers). Ideally, we'd like to be able to relay the HTTPS requests to a particular server based on some component of the URL. For example: users 1 through 100 go to http://server1.domain.com/, users 101 through 200 go to http://server2.domain.com/, etc. The incoming requests look like this: https://secure.domain.com/user/{integer user # goes here}/path/to/file Does anyone know of an easy way to do this? Pound looks promising... but it doesn't look like it supports routing based on the URL like this. Even better would be if it didn't need to be hard-coded - the load balancer could make a separate HTTP request to another server to ask "Hey, what server should I relay to for a request to URL {the URL that was requested goes here}?" and relay to the hostname returned in the HTTP response.

    Read the article

  • Windows/.NET Load Distribution & Balancing

    - by andrewbadera
    Hi all, Is there a vetted Windows-friendly, or even .NET-native, load-distributing/load-balancing utility out there along the lines of HAProxy? We have a .NET-stack product, and the one piece where we step out of the stack is load balancing. We need something with configurable rules for distribution - perhaps subdomain-driven - that NLB alone doesn't seem to offer. If it integrates directly with .NET, or offers an exposed API callable via web services, so much the better! Thanks in advance! Clarification: we need to logically partition across boxes. This is not just a cluster/failover/replication scenario.

    Read the article

  • Load Balancing of PHP/MYSQL script without big code changes

    - by DR.GEWA
    Sorry for my dummy question, but... I am building a script in PHP/MySQL (CodeIgniter) and I am extremely interested in knowing if there is a way, without big architectural changes to the script, to do load balancing. What I mean is: for example, now I will rent a medium dedicated server with 2GB of RAM, 200GB of storage and a good processor, and this will be enough for, let's say, half a year for the users who will come. But when they become more and more - it's a social network, and at night the server can expect 500-1500 or 5000-8000 users online - I wonder if there is a way to, let's say, just add a second server with some config which will carry the extra load. And after that another one, and so on... ????

        <? if($answer==YES) { how(??); } else { whatToDo(??); } ?>

    If there is no way, then maybe you could point me to the easiest kind of load balancing solution... I will be extremely thankful if you can tell me whether, for such purposes, I should move to, let's say, PostgreSQL or Firebird. Which of them will be easier to handle in the future? On the mysite.com/users/show/$userId page I am getting something like 60 queries for all the data... maybe too much, but anyway... after some optimization it can be 20-30...

    Read the article

  • DNS servers and load balancing

    - by RadiantHex
    Hi there! I'm wondering if a simple DNS server could offer even a limited amount of load balancing capability. I have a couple of servers, and I've been told that multiple IP addresses can be associated with one domain. Help would be very much appreciated!
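
    Yes - publishing several A records for the same name gives you basic round-robin DNS; a sketch of the zone entries, with placeholder name and addresses:

        ; round-robin A records (placeholder zone data)
        www    300    IN    A    192.0.2.10
        www    300    IN    A    192.0.2.11
        www    300    IN    A    192.0.2.12

    Most resolvers rotate the order, so traffic spreads roughly evenly, but there is no health checking: a dead server keeps receiving its share of clients until the record is removed and caches expire, which the low TTL above only partially mitigates.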

    Read the article

  • Elastic Load Balancing in EC2

    - by Dom
    Hi, it's been on the cards for a while, but now that Amazon has released Elastic Load Balancing, what are your thoughts on deploying this solution for a high-traffic web app? Should we replace HAProxy, or consider ELB as a complementary service in front of HAProxy? Appreciate any comments or suggestions. Thanks, Dom

    Read the article

  • Load balancing, connection loop

    - by Iapilgrim
    Hi all, I've set up load balancing using Apache and Tomcat through the AJP connector. Here is the config from Apache/httpd.conf:

        <Proxy balancer://tomcatservers>
            BalancerMember ajp://10.0.10.13:8009 route=osmoz-tomcat-1 retry=5
            BalancerMember ajp://10.0.10.14:8009 route=osmoz-tomcat-2 retry=5
        </Proxy>

        <Location />
            Allow From All
            ProxyPass balancer://tomcatservers/ stickysession=JSESSIONID nofailover=off
            ProxyPassReverse /
        </Location>

    When I try to access https://mycms.net/apps/bo/signin I see a connection loop - it redirects forever. I don't know why; this happens sometimes. So my questions are: Is there any way Apache itself could cause a redirect loop? Is there a tool that can help me monitor the connections? My question may not be clear, but I don't have any clue right now. Thanks

    Read the article

  • Create CRM Organizations on Load Balancing network

    - by user82613
    I'm trying to understand how to create a CRM Organization on a load-balanced network. I have three web servers (Web01, Web02, Web03), three application servers (App01, App02, App03) and a SQL Server (SQL01). I already have the load balancer set up, and there is already one organization set up by someone on all the web servers. This organization is Internet-facing. Now I want to create one more Organization on the same set of web servers. Can anyone please help me understand how to set up a new Organization behind the load balancer in this scenario?

    Read the article
