Search Results

Search found 22993 results on 920 pages for 'global load balancing'.

  • HAProxy reqrep remove URI on backend request

    - by Jim
    Real quick question regarding HAProxy reqrep. I am trying to rewrite/replace the request that gets sent to the backend. I have the following example domain and URIs: http://domain/web1 and http://domain/web2. I want web1 to go to backend webfarm1, and web2 to go to webfarm2. Currently this does happen; however, I want to strip off the web1 or web2 URI prefix when the request is sent to the backend. Here is my haproxy.cfg:

        frontend webVIP_80
            mode http
            bind :80
            #acl routing to backend
            acl web1_path path_beg /web1
            acl web2_path path_beg /web2
            #which backend
            use_backend webfarm1 if web1_path
            use_backend webfarm2 if web2_path
            default_backend webfarm1

        backend webfarm1
            mode http
            reqrep ^([^\ ]*)\ /web1/(.*) \1\ /\2
            balance roundrobin
            option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
            server webtest1 10.0.0.10:80 weight 5 check slowstart 5000ms
            server webtest2 10.0.0.20:80 weight 5 check slowstart 5000ms

        backend webfarm2
            mode http
            reqrep ^([^\ ]*)\ /web2/(.*) \1\ /\2
            balance roundrobin
            option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
            server webtest1-farm2 10.0.0.110:80 weight 5 check slowstart 5000ms
            server webtest2-farm2 10.0.0.120:80 weight 5 check slowstart 5000ms

    If I go to http://domain/web1 or http://domain/web2, the error logs on the servers in each backend show that the request is still for /web1 or /web2 respectively, so I believe there is something wrong with my regular expression, even though I copied and pasted it from the documentation: http://code.google.com/p/haproxy-docs/wiki/reqrep Summary: I'm trying to route traffic based on URI, but I want to strip the URI prefix on the backend side. Go to http://domain/web1 -- backend request of / to webfarm1. Thank you! -Jim
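
    A guess worth checking, offered as a sketch rather than a verified fix: that pattern only matches when a slash follows /web1, so a request for /web1 itself (no trailing slash) or for /web1?foo=bar never gets rewritten and reaches the backend untouched. Making the slash optional covers those cases:

        backend webfarm1
            mode http
            # also match /web1 with no trailing slash and /web1?query=...
            reqrep ^([^\ ]*)\ /web1/?(.*) \1\ /\2

    The same change would apply to the /web2 rule in webfarm2. Running haproxy -c -f /etc/haproxy/haproxy.cfg is a quick way to confirm the rewritten line still parses.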

  • Extract Key and Certificate from Kemp Loadmaster?

    - by Matt Simmons
    I'm trying very hard to get away from a set of Kemp Loadmasters that I bought years ago to provide HA access to our website. Part of that process is going to be putting the key and certificate into the new solution (HAProxy with nginx doing SSL). Unfortunately, I've come up against a problem... The Kemp has built-in certificate management, and it generates CSRs at the touch of a button. It also supports importing signed certificates; however it does not, so far as I can tell, allow any kind of export of the key itself. There is a "backup key and certificates" ability, but here's the text from the manual: "LoadMaster supports exporting of ALL certificate information. This includes private key, host and intermediate certificates. The export file is designed to be used for import into another LoadMaster and is encrypted. Export and import can be completed using the WUI at Certificates -> Backup/Restore Certs. Please make sure to note the pass phrase used to create the export, it will be required to complete the import. You can selectively restore only Virtual Service certificates including private keys, intermediate certificates or both." Well, that is great, but as for actually DEALING with the certs, I'm apparently out of luck. Of course, I'm not going to give up that easily. I ran "file" on the saved cert bundle and got this:

        $ file client1.certs.backup
        client1.certs.backup: gzip compressed data, from Unix

    Well, awesome, I thought. Maybe it's just a .tar.gz, so I unzipped it, and that went fine, but my attempts to untar it didn't work, and running "file" on it now just gives this:

        $ file client1.certs.backup
        client1.certs.backup: data

    So that's where I'm stuck. Anyone have experience with these?
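
    A few non-destructive probes that might narrow down what the gunzipped blob actually is (just a sketch; the filename is the one above, and none of this helps if Kemp encrypts the payload with a key of its own rather than with the export pass phrase):

        # look for PEM markers in case the keys are stored in the clear
        strings client1.certs.backup | grep -i 'BEGIN'

        # check whether the blob parses as raw DER/ASN.1
        openssl asn1parse -inform DER -in client1.certs.backup | head

        # try OpenSSL's generic symmetric decryption with the export pass phrase
        # (the cipher here is a guess -- Kemp does not document which one is used)
        openssl enc -d -aes-256-cbc -in client1.certs.backup -out certs.decrypted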

  • Load-Module equivalent in PowerShell v1

    - by friism
    (Cross-post from Stack Overflow) For reasons of script portability, I need to dynamically load snap-ins in a PowerShell script (i.e. I don't want to actually install them). This is easily accomplished in PowerShell v2 with the Load-Module function. I need to run this particular script on a machine where I, for various reasons, do not want to install PowerShell v2, but have v1. Is there a Load-Module equivalent in PowerShell v1?
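
    One avenue that may work in v1, offered as a rough, untested sketch: snap-ins normally have to be registered, so this sidesteps the cmdlet registration machinery by loading the assembly and wiring the cmdlet into a runspace configuration by hand. The DLL path, cmdlet name and class name below are all hypothetical:

        # load the snap-in assembly without registering it
        $asm  = [System.Reflection.Assembly]::LoadFrom("C:\tools\MySnapIn.dll")
        $type = $asm.GetType("MySnapIn.GetWidgetCommand")

        # build a runspace that knows about that one cmdlet
        $config = [System.Management.Automation.Runspaces.RunspaceConfiguration]::Create()
        $entry  = New-Object -TypeName System.Management.Automation.Runspaces.CmdletConfigurationEntry `
                      -ArgumentList "Get-Widget", $type, $null
        [void]$config.Cmdlets.Append($entry)

        $rs = [System.Management.Automation.Runspaces.RunspaceFactory]::CreateRunspace($config)
        $rs.Open()
        $rs.CreatePipeline("Get-Widget").Invoke()
        $rs.Close()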

  • GlusterFS vs Ceph, which is better for production use for the moment?

    - by Mickey Shine
    I am evaluating GlusterFS and Ceph. It seems Gluster is FUSE-based, which means it may not be as fast as Ceph, but it looks like Gluster has a very friendly control panel and is easy to use. Ceph was merged into the Linux kernel a few days ago, which suggests it has much more momentum and may be a good choice in the future. I am wondering which of the two (if either) is the better choice for production use? It would be nice if you could share your practical experiences.

  • Campus Network Design - Firewalls

    - by user3081239
    I am designing a campus network; in the design diagram, LINX is the London Internet Exchange and JANET is the Joint Academic Network. My goal is an almost fully redundant design with high availability, because it will have to support about 15k people, including academic staff, administrative staff and students. I have read some documents in the process, but I am still not sure about some aspects. I want to dedicate this question to firewalls: what are the driving factors in deciding to employ a dedicated firewall, instead of a firewall embedded in the border router? From what I can see, an embedded firewall has these advantages:

        - easier to maintain
        - better integration
        - one less hop
        - less space required
        - cheaper

    A dedicated firewall has the advantage of being modular. Is there anything else? What am I missing?

  • HTTPS Stunnel and Haproxy

    - by panalbish
    I am trying to use stunnel in front of HAProxy for SSL support. The SSL certificates are in place according to the stunnel configuration, and I am able to get an HTTPS connection, but every time I use HTTPS the session gets lost. I am not using Tomcat's 8443 port to serve the secure content. Is it possible to get HTTPS working using only stunnel and HAProxy? My requirement is to have an HTTPS connection once the user has logged in.
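
    A common reason sessions break in this kind of setup is that every decrypted connection reaches HAProxy from stunnel's own address, so any source-IP-based stickiness collapses onto a single "client". A cookie-based persistence sketch (backend and server names are made up) that avoids relying on the source address:

        backend app
            mode http
            balance roundrobin
            # stick clients to a server via an inserted cookie, not the source IP
            cookie SERVERID insert indirect nocache
            server app1 10.0.0.11:8080 cookie app1 check
            server app2 10.0.0.12:8080 cookie app2 check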

  • HAProxy -- pause/queue all traffic without losing requests

    - by Marc
    I basically have the same problem as mentioned in this thread -- I would like to temporarily suspend all requests to all servers of a certain backend, so that I can upgrade the backend and the database it uses. Since this is a live system, I would like to queue up requests, and send them to the backend servers once they've been upgraded. Since I'm doing a database upgrade with the code change, I have to upgrade all backend servers simultaneously, so I can't just bring one down at a time. I tried using the tcp-request options combined with removing the static healthcheck file as mentioned in that thread, but had no luck. Setting the default "maxconn" value to 0 seems to pause and queue connections as desired, but then there seems to be no way to increase the value back to a positive number without restarting HAProxy, which kills all requests that had been queued up until that point. (The "hot-reconfiguration" options using -sf and -st start a new process, which doesn't seem to do what I want). Is what I'm trying to do possible?
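
    For reference, newer HAProxy releases (1.5 and later) expose a runtime command that does roughly this without a restart. A sketch, assuming the admin-level stats socket is enabled and socat is installed (the frontend name and limits are placeholders):

        # haproxy.cfg
        global
            stats socket /var/run/haproxy.sock level admin

        # pause: the frontend stops accepting; new connections wait in the kernel backlog
        # (some builds require a value >= 1 here; use 1 in that case)
        echo "set maxconn frontend www 0" | socat stdio /var/run/haproxy.sock

        # ... upgrade the backends and the database ...

        # resume with the original limit
        echo "set maxconn frontend www 2000" | socat stdio /var/run/haproxy.sock

    Clients still have to out-wait their own timeouts and the listen backlog, so this buys a short maintenance window rather than an unlimited queue.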

  • NLB with two Windows Server 2003 servers

    - by Paul Hinett
    I need to configure Windows NLB on two dedicated servers. My main machine has been running for some time, with several domain names pointing to the server's primary IP address. Both servers have two NICs installed, and both have several secondary public IP addresses available if needed. What IP address would I use for the cluster IP, and does it need to be added to the address list of the public NIC on both servers? What IP addresses do I use for each host's dedicated IP? Please help, this is driving me nuts... I've taken down the server twice by accident today! Thank you in advance!

  • HAProxy: session stickiness triggered by response header possible?

    - by zoli
    I'm investigating HAProxy as a possible replacement for F5. F5 is capable of persisting a session based on a response header value:

        when HTTP_RESPONSE {
            set session [HTTP::header X-Session]
            if {$session ne ""} {
                persist add uie $session
            }
        }

    and then routing all subsequent requests which contain the same session ID in a header, query parameter, path, etc. to the same machine, e.g.:

        when HTTP_REQUEST {
            set session [findstr [HTTP::path] "/session/" 9 /]
            if {$session} {
                persist uie $session
            }
        }

    I'm wondering if this is even possible to do with HAProxy?
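
    For what it's worth, HAProxy's stick tables can express roughly the same thing. A sketch in 1.5-era syntax, untested; the header name is taken from the iRule above, the backend and servers are made up:

        backend app
            mode http
            balance roundrobin
            # one entry per session ID handed out by the servers
            stick-table type string len 64 size 1m expire 30m
            # learn: remember which server emitted this X-Session value
            stick store-response res.hdr(X-Session)
            # match: later requests carrying the same value go back to that server
            stick match req.hdr(X-Session)
            server app1 10.0.0.1:80 check
            server app2 10.0.0.2:80 check

    Pulling the ID out of the path or a query parameter instead of a header would just mean swapping the fetch expression in the stick match line.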

  • Netscaler - Response Body Replacement with GeoIP Implementation

    - by MrGoodbyte
    I have a Netscaler (10000), NS9.3: Build 52.3.cl. There is a document containing a short tutorial on GeoIP database implementation at http://support.citrix.com/article/CTX130701. I have added the location database by following this tutorial successfully, but instead of dropping connections I need to replace a string in the content body. To do that, I created a rewrite action with these parameters:

        * Name: ns_country_replacement_action
        * Type: REPLACE_ALL
        * Target Expression: HTTP.RES.BODY(50000)
        * Replacement Text: HTTP.RES.HEADER("international")
        * Pattern: \{\/\*COUNTRY_NAME\*\/\}

    To insert the header called "international", I tried to create another rewrite action with these parameters (I'll bind this one's policy with higher priority):

        * Name: ns_country_injection_action
        * Type: INSERT_HTTP_HEADER
        * Header Name: international
        * Header Value: CLIENT.IP.SRC.MATCHES_LOCATION("*.US.*.*.*.*").NOT

    When I click the Create button, it says:

        Compound expression syntax error, [.*.*").NOT^, 50]

    I'm not sure, but I think one of two things might cause this error: the expression I use for MATCHES_LOCATION is wrong (although I use the same expression as the tutorial), or MATCHES_LOCATION().NOT returns a BOOLEAN while the field expects a STRING. How can I get this to work? Am I using the right tools to accomplish what I need to do? Thank you. -Umut

  • QoS / PBR Routing Questions

    - by Bernard
    I have a 50 Mbps satellite link and a 10 Mbps microwave link supplying a very remote location. Behind these links, I have a 6,400-seat network, with about 3,000 users signed in at any one time. My goal is to send all of the VoIP traffic (Google Chat, Magic Jack, Skype, Speakeasy, Vonage, Vonage PC, Yahoo) through the microwave link, which has 100 ms latency. The rest of the traffic can utilize any remaining bandwidth of the microwave link, with the excess being diverted to the higher-latency (600 ms) satellite connection. The problem I've had so far is that most automatic routing configurations weight bandwidth heavily for path preference, and I only want latency considered. Additionally, I don't know if this can even be handled with the routing hardware I have at my disposal (Cisco 3640, 3745, & 3845). Any recommendations (or really good starting points) would be greatly appreciated.
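
    The usual building block on those IOS platforms for "this class of traffic always leaves via that link" is policy-based routing. A rough sketch (the interface name, next-hop address, and the DSCP/port matches are placeholders; classifying consumer VoIP services like Skype purely by port or DSCP marking is unreliable, so treat the ACL as illustrative only):

        ! classify VoIP: anything already marked EF, plus SIP signalling
        access-list 100 permit udp any any dscp ef
        access-list 100 permit udp any any eq 5060
        !
        route-map VOIP-VIA-MICROWAVE permit 10
         match ip address 100
         ! next hop on the microwave link (placeholder address)
         set ip next-hop 192.0.2.1
        !
        ! apply to the LAN-facing interface (placeholder name)
        interface FastEthernet0/0
         ip policy route-map VOIP-VIA-MICROWAVE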

  • HAProxy authenticated httpchk (health check)

    - by Markel
    I am using HAProxy on EC2 and using httpchk to manage node availability. I had used a pseudo-unique path as the health check route in an attempt to make sure only my servers responded to the health check. Earlier today an EC2 server fell out of existence, and before the HAProxy config was auto-regenerated (controller issues), Amazon had reassigned the IP to someone who returns 200 for every request (a honeypot?). My HAProxy host then pulled that server back into rotation and started distributing some of my traffic there until the controller recovered and removed the IP from the list. TL;DR: Is there a way to add a server authentication method to HAProxy's httpchk?
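
    Not a real authentication mechanism, but one low-tech mitigation is to make the check itself carry a shared secret: option httpchk lets you smuggle extra headers after the HTTP version, and the application can refuse the check path unless the token matches. A sketch (path, host and token are placeholders):

        backend web
            option httpchk GET /health-3f9c2a HTTP/1.1\r\nHost:\ www.example.com\r\nX-Check-Token:\ 8c1f4b2d
            http-check expect string healthy-ok
            server web1 10.0.0.10:80 check

    Having the app answer with a known body and checking it with http-check expect also guards against a stranger's server that happily 200s everything.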

  • PXE booting LACP hosts on Force10 S50N with FTOS

    - by lolwutreddit
    Hardware: S50N
    Firmware: FTOS 8.4.2.6
    Problem: We're trying to PXE boot some servers that are connected via port-channel interfaces with LACP.
    Current work-around: we PXE boot a server with a single interface (eth0), and then use a Perl script to bring up the port-channel interfaces after the server is built.
    Details: Is anyone doing anything similar on Force10 S50 switches with FTOS? If not, is anyone doing this on another S series, or a larger chassis-based Force10? I'm wondering if a native VLAN will solve this, since ports in a port-channel cannot explicitly have a VLAN set, and they don't seem to use the tagged or untagged VLAN that the port-channel is in. I will confirm this next (I think it's the only thing I haven't tried). Juniper example: http://broken.net/openindiana/how-to-pxe-boot-systems-on-lacp-using-juniper-switches/ Cisco: there are plenty of documented ways to solve this issue on IOS and Nexus. Update/Edit: since there seems to be no way to use interface or port-channel mode commands to get the individual interfaces to show up in spanning-tree (RSTP in this case), the ports should never go into a forwarding state. I'm not going to mess with it anymore unless a) someone that has experience passes it on, or b) Force10 comes up with a solution for this (I'm guessing it would only be introduced on other S platforms (S55, S60), since the S50 seems to be near EOL). I'm basing that on the fact that the Open Automation type features are only being supported on the newer switches.

  • F5/BigIP rule to redirect affinity-bound users from INACTIVE pool node to other ACTIVE node

    - by j pimmel
    We have several server nodes set up for the end users of our system, and because we don't use any kind of session replication in the app servers, F5 maintains affinity between users and the ACTIVE node the client was first bound to. When we want to re-deploy the app, we change the F5 config and take a node out of the ACTIVE pool. Gradually the users filter off and we can deploy, but the process is a bit slow. We can't just dump all the users onto a different node because, given the update-heavy nature of the user activities, we could cause them to lose changes. That said, there is one URL/endpoint, call it http://site/product/list, where we know that when the client hits it we could safely shove them off the INACTIVE node they had affinity with and onto a different ACTIVE node. We have had a few tries at writing an F5 rule along these lines, but haven't had much success, so I thought I might ask here, assuming it's possible - I have no reason to think it's not, based on what we have found so far.
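
    A starting point might be an iRule along these lines (only a sketch: the path is the endpoint named above, and it assumes the drained node is disabled rather than forced offline, so it is only the existing persistence record that keeps clients pinned to it):

        when HTTP_REQUEST {
            # on the "safe" endpoint, stop honouring the client's persistence
            # record so the load balancer is free to pick an ACTIVE pool member
            if { [HTTP::path] starts_with "/product/list" } {
                persist none
            }
        }

    Whether the client then re-sticks to the new member depends on the persistence profile in use, so this would need testing against your cookie/UIE setup.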

  • apache2 mod_proxy configuration for single threaded servers

    - by The Doctor What
    I have multiple instances of Thin running behind Apache 2.2's mod_proxy. The problem I have is that a couple of pages, by design, take a while to run. If I just configure Apache the obvious way (add the Thin URLs as BalancerMember lines and nothing else), then when someone hits a long-running page and enough other requests arrive while it is running, someone eventually gets routed to that same Thin server and has to wait. Does anyone have some best practices or a suggested configuration for mod_proxy and Thin? Ciao!
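
    One configuration that gets suggested for single-threaded backends is capping each member at one in-flight request and balancing by busyness. A sketch (ports are placeholders; note that max= is enforced per Apache child process, so under prefork it is a much weaker guarantee than under the worker MPM, and lbmethod=bybusyness needs a reasonably recent 2.2.x):

        <Proxy balancer://thin-cluster>
            # at most one connection per member (per httpd process); wait up to
            # 10 s (acquire is in milliseconds) for a member to free up
            BalancerMember http://127.0.0.1:3000 max=1 acquire=10000 timeout=30
            BalancerMember http://127.0.0.1:3001 max=1 acquire=10000 timeout=30
            BalancerMember http://127.0.0.1:3002 max=1 acquire=10000 timeout=30
        </Proxy>

        # send new requests to the least-busy member instead of plain round-robin
        ProxyPass / balancer://thin-cluster/ lbmethod=bybusyness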

  • Should I use an SSL terminator or just HAProxy?

    - by Justin Meltzer
    I'm trying to figure out how to set up my architecture for a socket.io app that will require both HTTPS and WSS connections. I've found many tutorials on the web suggesting that you put something like stud or stunnel in front of HAProxy, which then routes your unencrypted traffic to your app. If I were to go this route, is it suggested that HAProxy and the SSL terminator be on separate instances, or is it fine if they are on the same EC2 server instance? If I do not want to use a separate SSL terminator, could I use HAProxy to terminate the SSL? Or instead, would it be possible to proxy these HTTPS and WSS connections straight to my application and have the Node app terminate the SSL itself?
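
    For what it's worth, HAProxy 1.5 and later can terminate SSL/TLS natively, which removes the need for stud/stunnel altogether. A sketch (certificate path, names and timeouts are placeholders; WSS upgrades ride over the same frontend):

        frontend https_in
            bind :443 ssl crt /etc/ssl/private/example.pem
            mode http
            # generous client timeout so idle websockets are not cut off
            timeout client 1h
            default_backend socketio_nodes

        backend socketio_nodes
            mode http
            timeout server 1h
            balance source
            server node1 127.0.0.1:8000 check

    With an older HAProxy, co-locating stud/stunnel on the same EC2 instance is mostly a CPU question (TLS handshakes are the expensive part) rather than an architectural one.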

  • HAProxy - forward to a different web server based on URI

    - by Saggi Malachi
    I have an HTTP farm with the following configuration:

        listen webfarm 10.254.23.225:80
            mode http
            balance roundrobin
            cookie SERVERID insert
            option httpclose
            option forwardfor
            option httpchk HEAD /check.txt HTTP/1.0
            server webA 10.254.23.4:80 cookie A check
            server webB 10.248.23.128:80 cookie B check

    I would like to add some option which would forward all requests for a specific URI (i.e. /special) to a third web server. How should I do it?
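
    One way (a sketch; the backend name and the third server's address are made up) is an ACL plus use_backend, which work inside a listen section as well as in a frontend:

        # added to the existing "listen webfarm" section
        acl is_special path_beg /special
        use_backend special_farm if is_special

        # new backend for the third server
        backend special_farm
            mode http
            option httpchk HEAD /check.txt HTTP/1.0
            server webC 10.254.23.77:80 cookie C check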

  • How do I set up failover for a single web server using two ISPs?

    - by Travis
    I have one web server and two WAN connections (1 cable, 1 DSL). DNS is run offsite, and points to the IP address assigned by one of the ISPs. How can I have the second connection take over when the primary one fails? I have seen that it is possible to have two A records, each pointing to a different IP, but it has several problems. What's the real solution to this? I imagine this is a very common issue. Thanks in advance!

  • Bridged NICs but only one active

    - by rockinthesixstring
    I've "bridged" the NICs in my Server 2003 box, but when I do a large file transfer I see that only one is active at a time. What do I need to do to spread the love across both NICs? I'm hoping to increase transfer speeds from my server to my network. PS: I have a D-Link DGS-1016D switch.

  • F5 BigIP upgrade from 9.x to 10.x

    - by mbuk2k
    Having a few difficulties upgrading a BigIP 3400 from 9.4.8 to any version of the 10.x image. These are the versions I've tried:

        10.1.0.3341.0
        10.2.2.763.3
        10.2.3.112.0
        10.2.4.577.0

    To upgrade, I'm running the following command (replacing the version number with the relevant image each time):

        image2disk --format=volumes BIGIP-10.1.0.3341.0.iso

    The F5 reboots and starts copying packages; however, after 30 seconds or so it just stops copying. The cursor in the console is still flashing, but no matter how long it's left, the package doesn't copy. It seems to be a different package with each version/image (but always the same package per version) at the point of freezing, which I'm guessing suggests a space issue? I've checked free space on the device and it has over 2 GB free at root, which should surely be enough. If anyone has any advice or pointers, it would be kindly appreciated. Thank you.

  • Performance Tuning a High-Load Apache Server

    - by futureal
    I am looking to understand some server performance problems I am seeing with a (for us) heavily loaded web server. The environment is as follows:

        Debian Lenny (all stable packages + patched to security updates)
        Apache 2.2.9
        PHP 5.2.6
        Amazon EC2 large instance

    The behavior we're seeing is that the web typically feels responsive, but with a slight delay to begin handling a request -- sometimes a fraction of a second, sometimes 2-3 seconds in our peak usage times. The actual load on the server is being reported as very high -- often 10.xx or 20.xx as reported by top. Further, running other things on the server during these times (even vi) is very slow, so the load is definitely up there. Oddly enough Apache remains very responsive, other than that initial delay. We have Apache configured as follows, using prefork:

        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients          150
        MaxRequestsPerChild   0

    And KeepAlive as:

        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5

    Looking at the server-status page, even at these times of heavy load we are rarely hitting the client cap, usually serving between 80-100 requests and many of those in the keepalive state. That tells me to rule out the initial request slowness as "waiting for a handler" but I may be wrong. Amazon's CloudWatch monitoring tells me that even when our OS is reporting a load of 15, our instance CPU utilization is between 75-80%. Example output from top:

        top - 15:47:06 up 31 days,  1:38,  8 users,  load average: 11.46, 7.10, 6.56
        Tasks: 221 total,  28 running, 193 sleeping,   0 stopped,   0 zombie
        Cpu(s): 66.9%us, 22.1%sy,  0.0%ni,  2.6%id,  3.1%wa,  0.0%hi,  0.7%si,  4.5%st
        Mem:   7871900k total,  7850624k used,    21276k free,    68728k buffers
        Swap:        0k total,        0k used,        0k free,  3750664k cached

    The majority of the processes look like:

        24720 www-data  15   0  202m  26m 4412 S    9  0.3   0:02.97 apache2
        24530 www-data  15   0  212m  35m 4544 S    7  0.5   0:03.05 apache2
        24846 www-data  15   0  209m  33m 4420 S    7  0.4   0:01.03 apache2
        24083 www-data  15   0  211m  35m 4484 S    7  0.5   0:07.14 apache2
        24615 www-data  15   0  212m  35m 4404 S    7  0.5   0:02.89 apache2

    Example output from vmstat at the same time as the above:

        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         8  0      0 215084  68908 3774864   0    0   154   228    5    7 32 12 42  9
         6 21      0 198948  68936 3775740   0    0   676  2363 4022 1047 56 16  9 15
        23  0      0 169460  68936 3776356   0    0   432  1372 3762  835 76 21  0  0
        23  1      0 140412  68936 3776648   0    0   280     0 3157  827 70 25  0  0
        20  1      0 115892  68936 3776792   0    0   188     8 2802  532 68 24  0  0
         6  1      0 133368  68936 3777780   0    0   752    71 3501  878 67 29  0  1
         0  1      0 146656  68944 3778064   0    0   308  2052 3312  850 38 17 19 24
         2  0      0 202104  68952 3778140   0    0    28    90 2617  700 44 13 33  5
         9  0      0 188960  68956 3778200   0    0     8     0 2226  475 59 17  6  2
         3  0      0 166364  68956 3778252   0    0     0    21 2288  386 65 19  1  0

    And finally, output from Apache's server-status:

        Server uptime: 31 days 2 hours 18 minutes 31 seconds
        Total accesses: 60102946 - Total Traffic: 974.5 GB
        CPU Usage: u209.62 s75.19 cu0 cs0 - .0106% CPU load
        22.4 requests/sec - 380.3 kB/second - 17.0 kB/request
        107 requests currently being processed, 6 idle workers

        C.KKKW..KWWKKWKW.KKKCKK..KKK.KKKK.KK._WK.K.K.KKKKK.K.R.KK..C.C.K
        K.C.K..WK_K..KKW_CK.WK..W.KKKWKCKCKW.W_KKKKK.KKWKKKW._KKK.CKK...
        KK_KWKKKWKCKCWKK.KKKCK..........................................
        ................................................................

    From my limited experience I draw the following conclusions/questions:

        - We may be allowing far too many KeepAlive requests.
        - I do see some time spent waiting for IO in the vmstat, although not consistently and not a lot (I think?), so I am not sure if this is a big concern or not; I am less experienced with vmstat.
        - Also in vmstat, I see in some iterations a number of processes waiting to be served, which is what I am attributing the initial page load delay on our web server to, possibly erroneously.
        - We serve a mixture of static content (75% or higher) and script content, and the script content is often fairly processor intensive, so finding the right balance between the two is important; long term we want to move statics elsewhere to optimize both servers, but our software is not ready for that today.

    I am happy to provide additional information if anybody has any ideas; the other note is that this is a high-availability production installation, so I am wary of making tweak after tweak, and that is why I haven't played with things like the KeepAlive value myself yet.
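
    On the KeepAlive hunch: a commonly suggested experiment (a sketch, not a recommendation tuned to this site) is to keep KeepAlive on but shorten the timeout sharply, so prefork children return to the pool quickly instead of idling up to 5 s per client, and to sanity-check MaxClients against memory. With roughly 26-35 MB resident per apache2 child and ~7.9 GB of RAM, 150 children fit comfortably, so memory is not the limiter here:

        # prefork + keepalive sketch
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 2

        <IfModule mpm_prefork_module>
            StartServers         10
            MinSpareServers      10
            MaxSpareServers      20
            MaxClients          150
            # recycle children occasionally to keep per-process memory in check
            MaxRequestsPerChild 1000
        </IfModule>

    The %st (steal) column in top is also worth watching on EC2, since another tenant's load can show up as exactly this kind of intermittent latency.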

  • High Availability with 2 servers?

    - by Tom R
    Is it possible to have a high-availability setup with two servers, running Windows Web Server 2008 and MS SQL Web Edition (as I know Express isn't supported)? We're getting to the point where our one dedicated server needs to be scaled out, and going to a second server already more than doubles the cost, as we need to use Web Edition rather than Express (the DB is only 500 MB).
