Search Results

Search found 841 results on 34 pages for 'balance'.

Page 25/34 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • Issue with SSL using HAProxy and Nginx

    - by Ben Chiappetta
    I'm building a highly available site using multiple HAProxy load balancers, Nginx web servers, and MySQL servers. The site needs to be able to survive load balancer or web server nodes going offline without any interruption of service to visitors. Currently, I have two boxes running HAProxy sharing a virtual IP using keepalived, which forward to two web servers running Nginx, which then tie into two MySQL boxes using MySQL replication and sharing a virtual IP using heartbeat. Everything is working correctly except for SSL traffic over HAProxy. I'm running version 1.5-dev12 with OpenSSL support compiled in. When I try to navigate to the virtual IP for HAProxy over https, I get the message: "The plain HTTP request was sent to HTTPS port". Here's my haproxy.cfg so far, which was mainly assembled from other posts:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            # log 127.0.0.1 local0
            user haproxy
            group haproxy
            daemon
            maxconn 20000

        defaults
            log global
            option dontlognull
            balance leastconn
            clitimeout 60000
            srvtimeout 60000
            contimeout 5000
            retries 3
            option redispatch

        listen front
            bind :80
            bind :443 ssl crt /etc/pki/tls/certs/cert.pem
            mode http
            option http-server-close
            option forwardfor
            reqadd X-Forwarded-Proto:\ https if { is_ssl }
            reqadd X-Proto:\ SSL if { is_ssl }
            server web01 192.168.25.34 check inter 1s
            server web02 192.168.25.32 check inter 1s
            stats enable
            stats uri /stats
            stats realm HAProxy\ Statistics
            stats auth admin:*********

    Any idea why SSL traffic isn't being passed correctly? Also, any other changes you would recommend? I still need to configure logging, so don't worry about that section. Thanks in advance for your help.
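    One likely culprit, for what it's worth: an HAProxy server line with no explicit port connects to the backend on the same port the client used, so decrypted HTTPS requests get forwarded as plain HTTP to the web servers' port 443 - which is exactly the error Nginx reports. A minimal sketch of the fix (assuming Nginx serves plain HTTP on port 80):

        server web01 192.168.25.34:80 check inter 1s
        server web02 192.168.25.32:80 check inter 1s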

    Read the article

  • Load balancing a Windows File Share using HA-Proxy

    - by NathanE
    After pulling my hair out over DFS I just had this weird and potentially dangerous idea come into my head whereby, just possibly, I might be able to use HA-Proxy to load balance a file share between servers. I've done some rudimentary packet traces and it does appear that TCP port 445 is the only thing involved in using Windows file sharing. I've always thought for many years that UDP 139, 135 etc. were also involved in at least establishing the connection - but apparently not! So I set up a basic test:

        listen SMBTest *:445
            mode tcp
            server Smb1 172.16.61.201:445
            server Smb2 172.16.61.202:445

    And you'll never guess what... it works??? (!) Now obviously there is the whole concern about synchronisation between the file servers (of course). That could easily be taken care of with a little bit of Robocopy script. And considering I only need an HA read-only file share, there wouldn't be any issues with regard to file locking etc. Can anyone tell me if what I'm playing with here is fire? I really didn't think it would work at all and now I'm a little shocked. What would be the downsides? Could this be relied upon for a production environment?
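    On the synchronisation point, a one-way Robocopy mirror is only one line - a sketch with assumed share names and paths, run from (or against) the primary:

        robocopy \\Smb1\share \\Smb2\share /MIR /R:1 /W:1 /LOG:C:\sync.log

    /MIR mirrors (including deletions), so for a read-only share replicated in one direction this is about as simple as the question hopes.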

    Read the article

  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method / formula, etc. that I could use - hopefully based on both current and projected numbers of files - to project the 'right' length of the split and the number of nested folders? Please note that although similar, it isn't quite the same as Storing a million images in the filesystem. I'm looking for a way to help make the theories outlined there more generic.

    Assumptions: I have 'some' initial number of files. This number would be arbitrary but large - say 500k to 10m+. I have considered the underlying physical hardware disk IO requirements that would be necessary to support such an endeavor.

    Put another way: as time progresses this store will grow, and I want the best balance between current performance and future needs as my storage doubles or triples. I need to be able to address both current needs and projected future growth, planning ahead without sacrificing too much current performance.

    What I've come up with: I'm already thinking about using a hash split every so many characters to spread things out across multiple directories, keeping the trees even - very similar to what's outlined in the comments on the question above. It also avoids duplicate files, which would be critical over time. I'm sure the initial folder structure would differ based on what I've outlined, depending on the initial scale. As far as I can figure there isn't a one-size-fits-all solution here, and it would be horrendously time-intensive to work something out experimentally.
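    For illustration, the hash split described takes only a few lines - a sketch in Java (the two-level, two-hex-character split is an assumption, not a recommendation; with 256 x 256 = 65,536 leaf directories, 10m files works out to roughly 150 per directory):

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        public class HashSplit {
            // Map a filename to a nested path such as "3f\a2\<name>" so files
            // spread evenly across 256 x 256 directories.
            static String shardPath(String name) throws Exception {
                MessageDigest md = MessageDigest.getInstance("MD5");
                byte[] d = md.digest(name.getBytes(StandardCharsets.UTF_8));
                String hex = String.format("%02x%02x", d[0] & 0xff, d[1] & 0xff);
                return hex.substring(0, 2) + "\\" + hex.substring(2, 4) + "\\" + name;
            }

            public static void main(String[] args) throws Exception {
                System.out.println(shardPath("invoice-000123.pdf"));
            }
        }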

    Read the article

  • linux hardware raid 10 / lvm / virtual machine partition alignment and filesystem optimization

    - by Jason Ward
    I've been reading everything I can find about partition alignment and filesystem optimization (ext4 and xfs) but still don't know enough to be confident in setting up my current configuration. My remaining confusion comes from the LVM layer and whether I should use RAID parameters on the filesystems in guest OSes. My main questions are:

    1. When I use 'pvcreate --dataalignment', do I use the stripe-width as calculated for a filesystem on RAID (128kB for ext4 in my situation), the stripe size of the RAID set (256kB), something else altogether, or do I not need this?
    2. When I create ext2/3/4 or xfs filesystems in guests on the Logical Volumes, should I add the settings for the underlying RAID (e.g. mkfs.ext4 -b 4096 -E stride=64,stripe-width=128)?
    3. Does anyone see any glaring errors in my setup below?

    I'm running some benchmarks now but haven't done enough to start comparing results. I have four drives in RAID 10 on a 3ware 9750-4i controller (more details on the settings below) giving me a 6.0TB device at /dev/sda. Here is my partition table:

        Model: LSI 9750-4i DISK (scsi)
        Disk /dev/sda: 5722024MiB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start      End         Size        File system     Name      Flags
         1      1.00MiB    257MiB      256MiB      ext4            BOOTPART  boot
         2      257MiB     4353MiB     4096MiB     linux-swap(v1)
         3      4353MiB    266497MiB   262144MiB   ext4
         4      266497MiB  4460801MiB  4194304MiB

    Partition 1 is to be the /boot partition for my Xen host. Partition 2 is swap. Partition 3 is to be the root (/) for my Xen host. Partition 4 is to be the (only) physical volume used by LVM (for those who are counting, I left about 1.2TB unallocated for now). For my Xen guests, I usually create a Logical Volume of the needed size and present it to the guests for them to partition as needed. I know there are other ways of handling that, but this method works best for my situation.

    Here's the hardware of interest on my CentOS 6.3 Xen host:

    - 4x Seagate Barracuda 3TB ST3000DM001 drives (sector size: 512 logical / 4096 physical)
    - 3ware 9750-4i w/BBU (sector size reported: 512 logical / 512 physical)
    - All four drives make up a RAID 10 array. Stripe: 256kB. Write cache enabled. Read cache: intelligent. StoreSave: Balance

    Thanks!
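    For reference, the arithmetic usually runs like this (a sketch under the question's numbers, not advice): stride = stripe size / block size = 256kB / 4kB = 64, and stripe-width = stride x data-bearing spindles = 64 x 2 = 128 for a 4-disk RAID 10, matching the mkfs line above. A full data stripe is then 2 x 256kB = 512kB, so one plausible reading of the alignment option is:

        # values assumed from the hardware described above
        pvcreate --dataalignment 512k /dev/sda4
        mkfs.ext4 -b 4096 -E stride=64,stripe-width=128 /dev/vg0/somelv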

    Read the article

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system which has one Apache instance in front of multiple Tomcats. These Tomcats then connect to various databases. We balance the load to the Tomcats with mod_proxy_balancer. Currently we are receiving 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% (of what I estimate they can handle). In a few weeks there is an event happening and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that, instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time, and as soon as the response time to Tomcat gets above some threshold, display an error page. This means that users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly - instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense, so I was looking for suggestions on how I could achieve this. Thanks
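    mod_proxy_balancer doesn't watch average response time out of the box, but a rough approximation is possible with per-member timeouts plus a friendly 503 page - a sketch (member URLs, the 5-second threshold, and the error page are assumptions):

        <Proxy balancer://tcluster>
            BalancerMember http://tomcat1:8080 timeout=5 retry=30
            BalancerMember http://tomcat2:8080 timeout=5 retry=30
        </Proxy>
        ProxyPass /app balancer://tcluster
        ErrorDocument 503 /overloaded.html

    With something like this, a backend that takes longer than 5s triggers a proxy error and is rested for 30s, and requests that can't be served get the static error page quickly instead of queueing.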

    Read the article

  • BTrFS crashhhh?

    - by bumbling fool
    I create a new BTrFS raid10 file system using two 250GB drives and the second partition on a third 80GB drive. I create a subvol and snapshot. I mount the snapshot and start copying 8GB of data to it. It gets to around 1GB, then the desktop disappears and what looks like a non-interactive terminal comes up with dump/crash information. I don't have a camera handy or I'd take a picture and post it; it basically looks like stack trace info. CTRL-ALT-F7 will eventually bring back the desktop, but the entire BTrFS portion of the OS is hung and unresponsive until I reboot. I've reformatted and reproduced this problem 3 times now and I'm about to give up :( I realize it is possible this problem is not entirely BTrFS' fault, because I'm on Natty, which is still alpha. More granular details in case I'm an idiot:

    1. Create FS: sudo mkfs.btrfs -m raid10 -d raid10 /dev/sda2 /dev/sdb /dev/sdc
    2. Initial temporary mount: mkdir /btrfs && sudo mount -t btrfs /dev/sda2 /btrfs
    3. Create subvol: btrfs s c /btrfs/vm
    4. Create initial snapshot (optional): btrfs s sn /btrfs/cantremember.snap.something
    5. Unmount /btrfs and mount /btrfs/vm: sudo mount -t btrfs -o subvol=vm /dev/sda2 /btrfs/vm
    6. Copy data to subvolume.
    7. Balance data across drives (optional): btrfs f bal <path> (never get to this step 7...)

    Am I doing something wrong?

    Read the article

  • HAproxy with MySQL Master-Master Replication incredibly slow

    - by Yayap
    I have two MySQL servers in multi-master mode, with an HAProxy machine for simple load balancing/redundancy. When I am connected to one of the servers directly and try to update about 100,000 entries, it is completed, including replication, in about half a minute. When connecting through the proxy it usually takes over three whole minutes. Is it normal to have that type of latency? Is something amiss with my proxy configuration (included below)? This is getting really frustrating, as I assumed the proxy would do some sort of load balancing, or at least have little to no overhead.

        #---------------------------------------------------------------------
        # Example configuration for a possible web application. See the
        # full configuration options online.
        #
        #   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
        #
        #---------------------------------------------------------------------

        #---------------------------------------------------------------------
        # Global settings
        #---------------------------------------------------------------------
        global
            # to have these messages end up in /var/log/haproxy.log you will
            # need to:
            #
            # 1) configure syslog to accept network log events. This is done
            #    by adding the '-r' option to the SYSLOGD_OPTIONS in
            #    /etc/sysconfig/syslog
            #
            # 2) configure local2 events to go to the /var/log/haproxy.log
            #    file. A line like the following can be added to
            #    /etc/sysconfig/syslog
            #
            #    local2.*    /var/log/haproxy.log
            #
            log 127.0.0.1 local2
            # chroot /var/lib/haproxy
            # pidfile /var/run/haproxy.pid
            maxconn 4096
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet

            # turn on stats unix socket
            stats socket /var/lib/haproxy/stats

        #---------------------------------------------------------------------
        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        #---------------------------------------------------------------------
        defaults
            mode tcp
            log global
            #option tcplog
            option dontlognull
            option tcp-smart-accept
            option tcp-smart-connect
            #option http-server-close
            #option forwardfor except 127.0.0.0/8
            #option redispatch
            retries 3
            #timeout http-request 10s
            #timeout queue 1m
            timeout connect 400
            timeout client 500
            timeout server 300
            #timeout http-keep-alive 10s
            #timeout check 10s
            maxconn 2000

        listen mysql-cluster 0.0.0.0:3306
            mode tcp
            balance roundrobin
            option tcpka
            option httpchk
            server db01 192.168.15.118:3306 weight 1 inter 1s rise 1 fall 1
            server db02 192.168.15.119:3306 weight 1 inter 1s rise 1 fall 1
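    Two things in that config stand out, for what it's worth. HAProxy timeouts are in milliseconds unless a unit is given, so 'timeout connect 400 / client 500 / server 300' gives each leg roughly half a second of idle time before HAProxy cuts it - a plausible source of stalls on a bulk update. And 'option httpchk' sends HTTP health checks at MySQL. A hedged sketch of the relevant lines (values assumed; mysql-check needs a grant for the check user):

        defaults
            timeout connect 5s
            timeout client  8h
            timeout server  8h

        listen mysql-cluster 0.0.0.0:3306
            mode tcp
            balance roundrobin
            option mysql-check user haproxy_check
            server db01 192.168.15.118:3306 check
            server db02 192.168.15.119:3306 check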

    Read the article

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (5 or so, and it will grow). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now, and don't want to anymore (static IP addresses, disabling all unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent?

    1. Individually install CentOS on all the boxes and manage the configs with Chef/cfengine/Puppet. This would be good, as I have wanted an excuse to learn to use one of these applications, but I don't know if this is actually the best solution.
    2. Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications I can just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot?

    I'm leaning towards the PXE solution, but I think monitoring with Munin or Nagios will be a little more complicated with this. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
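    On the hwaddr question: the kernel knows each NIC's real MAC, so a first-boot script can stamp it into the ifcfg files before the network starts - a sketch (interface names and paths are assumptions):

        #!/bin/sh
        # regenerate HWADDR lines from the running hardware
        for dev in eth0 eth1 ib0; do
            f=/etc/sysconfig/network-scripts/ifcfg-$dev
            [ -f "$f" ] || continue
            mac=$(cat /sys/class/net/$dev/address)
            sed -i "s/^HWADDR=.*/HWADDR=$mac/" "$f"
        done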

    Read the article

  • AD domain on web servers behind NAT - DNS issues?

    - by Ant
    I'm trying to set up an AD domain to manage the security between two Windows Server 2008 webservers that will sooner or later use NLB to balance website requests. I've hit a problem which I think has a simple solution and comes down to DNS. My website domain is mydomain.com. The two servers are running behind a NAT firewall on the 10.0.0.0 IP range. I've set up the AD domain to be called ad.mydomain.com (as recommended by MS and a few other answers to questions on here). The second web server, however, doesn't want to join the domain, and gives an error pinning the problem on DNS - "ensure that the domain name is typed correctly" - even though it queries the SRV record successfully and gets the correct DC back - dc.ad.mydomain.com. Doing a dcdiag /test:dns on the DC gives the delegation error "DNS Server dc.mydomain.com Missing glue A record". I have a feeling I need to add something to the public DNS so that it in some way knows about ad.mydomain.com. Can anyone suggest whether I'm on the right track in adding something to the public DNS? Or whether it's something else? Many thanks
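    One thing worth checking before touching public DNS (an assumption, not a diagnosis): domain members normally point their DNS at the DC itself rather than at a public resolver, since ad.mydomain.com only exists on the DC's DNS server. A sketch for the second web server, with the DC's internal address assumed to be 10.0.0.10:

        netsh interface ip set dns "Local Area Connection" static 10.0.0.10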

    Read the article

  • Bandwidth Suggestion

    - by Campo
    I have been asked to analyze the bandwidth usage of a company and make a recommendation for upgrading their Internet connection(s). Here is the layout: 3 DSL lines, so it is 3 x (6 down, 1 up each), into a load balancer and out to the office's network; 30 VOIP phones run on a T1 (1.5 down, 1.5 up). The users at the company are heavy uploaders. It is my suspicion that the slowdown is being caused by multiple people uploading while others can't get requests out, even simple HTTP requests. My initial idea is to get them a fiber line with 10 down and 10 up. What do others think of this plan? Will that be enough to host their network traffic? What do I do about the VOIP line afterward? The fiber is expensive, and I know the T1 does a great job for their VOIP, so I do not want to suggest a DSL line because I know it may not be sufficient. I would also like to save them some money if I can - maybe even get a faster fiber line and forgo the T1, though I know their load balancer/switch can only handle 20MB/s throughput. Looking for some confirmation/suggestions on my plan. I am planning on going in to get some real diagnostic numbers. Any suggestions on software to use for that? Preferably Windows software.
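    Rough arithmetic on the proposal, under the numbers given: the three DSL lines aggregate to about 18 Mbit down / 3 Mbit up, but any single flow is capped at 6/1 since one connection can't span lines through the balancer. A 10/10 fiber therefore trades some headline downstream for more than triple the usable upstream - the stated bottleneck. Folding the VOIP in as well would add very roughly 30 x ~90 kbit ≈ 2.7 Mbit each way at worst (assuming G.711-sized calls, all active at once), so a single 10/10 line could plausibly carry both only with QoS prioritizing voice.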

    Read the article

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind Nginx, with Nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which caches via Memcached):

        100 concurrent calls to API call (http):  115ms (median)  260ms (max)
        100 concurrent calls to API call (https): 6.1s (median)   11.9s (max)

    I've done a bit of research, disabled the most expensive SSL ciphers and enabled SSL caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 Nginx workers set and a bunch of Apache ones. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it. There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL) seem to do anything. Sample results below:

        $ ab -k -n 100 -c 100 https://URL
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking URL.com (be patient).....done

        Server Software:        nginx/1.0.15
        Server Hostname:        URL.com
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256

        Document Path:          /PATH
        Document Length:        73142 bytes

        Concurrency Level:      100
        Time taken for tests:   12.204 seconds
        Complete requests:      100
        Failed requests:        0
        Write errors:           0
        Keep-Alive requests:    0
        Total transferred:      7351097 bytes
        HTML transferred:       7314200 bytes
        Requests per second:    8.19 [#/sec] (mean)
        Time per request:       12203.589 [ms] (mean)
        Time per request:       122.036 [ms] (mean, across all concurrent requests)
        Transfer rate:          588.25 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       65  168   64.1    162    268
        Processing:   385 6096 3438.6   6199  11928
        Waiting:      379 6091 3438.5   6194  11923
        Total:        449 6264 3476.4   6323  12196

        Percentage of the requests served within a certain time (ms)
          50%   6323
          66%   8244
          75%   9321
          80%   9919
          90%  11119
          95%  11720
          98%  12076
          99%  12196
         100%  12196 (longest request)
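    For reference, the usual nginx-side knobs look like this - a sketch, not a diagnosis (the other suspect is the handshake itself: 100 fresh 2048-bit RSA handshakes arriving at once can easily queue for seconds, and "Keep-Alive requests: 0" in the output suggests every request paid that cost despite -k):

        # inside the https server block; values assumed
        ssl_session_cache   shared:SSL:10m;   # reuse sessions across workers
        ssl_session_timeout 10m;
        keepalive_timeout   70;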

    Read the article

  • Experiences in Upgrading from Exchange 2003 to Exchange 2010

    - by gWaldo
    I'm currently running an Exchange 2003 SP2 cluster on a Server 2003 AD forest (in native 2003 mode), and we are beginning to plan the upgrade to Server 2008 AD and Exchange 2010. We have two main sites, one middle-sized office, and a couple of smaller sites which have DCs (which may be RODCs after the upgrade). Currently all of our Exchange cluster is in my main site, but we are considering using the new datastore paradigm for load-balance/failover at the other large site; this is not set in stone, though. Right now we are in the information-gathering and planning phases. I am looking for input on any gotchas experienced while performing either upgrade, but especially the Exchange upgrade. Gotchas? What surprised you? What wasn't documented? What said one thing but was misleading? (Confusing either in content or severity.) What is great or horrible about the new system? What worked well? What worked poorly? If you were to do it over again...? (I know that this isn't so much a question that can be definitively answered, but I'm happy to reward insight and useful resources (not the Microsoft documentation, but blog posts are welcome) with upvotes.)

    UPDATE - A couple of items of note:
    - We are not currently using OWA (currently only the admins), but it may become more of a consideration with iOS devices.
    - We do have a small number of Blackberries in the environment (< 10%).
    - In addition to the standard Exchange connectors, we have a third-party connector for Captaris RightFax integration.

    Read the article

  • Haproxy and CNAME

    - by user123354
    I want to create a simple load balancer for two servers. The problem is with CNAME records, I think. Let's say I have two copies of the same application on AppFog.com: app1.aws.af.cm and app2.aws.af.cm. Here is my haproxy.cfg file:

        global
            maxconn 2000
            daemon

        defaults
            mode http
            clitimeout 60000
            srvtimeout 30000
            contimeout 4000
            option httpclose

        listen http_proxy
            bind [myip]:80
            mode http
            stats enable
            stats auth user:passwd
            stats uri /stats
            balance source
            option httpchk
            option forwardfor
            server host01 app1.aws.af.cm:80 maxconn 300 check
            server host02 app2.aws.af.cm:80 maxconn 300 check

    But this only resolves the IPs for app1.aws.af.cm and app2.aws.af.cm, which obviously doesn't work if I open those IPs in a web browser. The problem is that AppFog doesn't have a public IP per application (same as OpenShift). How do I get HAProxy to make a proper connection between the load balancer and these two servers? Example: this is a real app - http://freechat.eu01.aws.af.cm. HAProxy only resolves the IP for this domain, which is 46.51.204.8:80. Of course this IP will not show my application, only an error page. Sorry for my poor English.
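    The missing piece is most likely the Host header: a PaaS router in front of shared IPs routes by Host, and HAProxy connects by IP while leaving the client's original Host (the balancer's own domain) intact. One hedged approach, assuming HAProxy 1.4 or later, is to name each server after its backend domain and send that name as the Host header:

        listen http_proxy
            ...
            http-send-name-header Host
            server app1.aws.af.cm app1.aws.af.cm:80 maxconn 300 check
            server app2.aws.af.cm app2.aws.af.cm:80 maxconn 300 check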

    Read the article

  • How many reverse proxies (nginx, haproxy) is too many?

    - by Alysum
    I'm setting up an HA (high availability) cluster using nginx, haproxy & apache. I've been reading great things about nginx and haproxy. People tend to choose one or the other, but I like both. Haproxy is more flexible for load balancing than nginx's simple round robin (even with the upstream-fair patch), but I'd like to keep nginx for redirecting non-https to https, among other things, right at the point of entry to the cluster. On the other hand, nginx is a lot faster for serving static content and would reduce the load on the powerful apache, which loves to eat a lot of RAM! Here is my planned setup:

    Load balancer: nginx listens on port 80/443 and proxy_forwards to haproxy on 8080 on the same server to load balance between the multiple nodes.

    Nodes: nginx on the node listens to requests coming from haproxy on 8080; if the content is static, serve it, but if it's a backend script (in my case PHP), proxy forward to apache2 on the same node server listening on a different port number.

    Technically this setup works, but my concern is whether having the requests go through several proxies is going to slow them down. Most of the requests will be PHP requests, as the backends are services (which means going from nginx - haproxy - nginx - apache). Thoughts? Cheers
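    For concreteness, the entry-point hop described is only a few lines of nginx - a sketch with assumed addresses (certificate directives omitted; 'return 301' needs a reasonably recent nginx):

        # entry point: redirect plain http, terminate ssl, hand off to haproxy
        server {
            listen 80;
            return 301 https://$host$request_uri;
        }
        server {
            listen 443 ssl;
            # ssl_certificate / ssl_certificate_key omitted in this sketch
            location / {
                proxy_pass http://127.0.0.1:8080;   # haproxy
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }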

    Read the article

  • Forward real IP through Haproxy => Nginx => Unicorn

    - by Hendrik
    How do I forward the real visitor's IP address to Unicorn? The current setup is: Haproxy => Nginx => Unicorn. How can I forward the real IP address from Haproxy, to Nginx, to Unicorn? Currently it is always only 127.0.0.1. I read that the X- headers are going to be deprecated (http://tools.ietf.org/html/rfc6648) - how will this impact us?

    Haproxy config:

        # haproxy config
        defaults
            log global
            mode http
            option httplog
            option dontlognull
            option httpclose
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        # Rails Backend
        backend deployer-production
            reqrep ^([^\ ]*)\ /api/(.*) \1\ /\2
            balance roundrobin
            server deployer-production localhost:9000 check

    Nginx config:

        upstream unicorn-production {
            server unix:/tmp/unicorn.ordify-backend-production.sock fail_timeout=0;
        }

        server {
            listen 9000 default;
            server_name manager.ordify.localhost;
            root /home/deployer/apps/ordify-backend-production/current/public;
            access_log /var/log/nginx/ordify-backend-production_access.log;
            rewrite_log on;

            try_files $uri/index.html $uri @unicorn;

            location @unicorn {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_redirect off;
                proxy_pass http://unicorn-production;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
            }

            error_page 500 502 503 504 /500.html;
            client_max_body_size 4G;
            keepalive_timeout 10;
        }
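    One gap worth noting, as a sketch of a fix rather than a certainty: the haproxy config above never adds an X-Forwarded-For header, so by the time nginx builds $proxy_add_x_forwarded_for, the only address it has ever seen is haproxy's 127.0.0.1. HAProxy can record the original client address itself:

        defaults
            ...
            option forwardfor

    Rails/Unicorn then reads the client address from the X-Forwarded-For chain as usual.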

    Read the article

  • DNS Round-robin, Load Balancing, Load sharing, and failover in 2012

    - by user1089770
    I have been reading many posts on Server Fault as well as on other sites regarding all of these. What I understand is that multiple A records (round-robin DNS) can be used for both:

    1. Load sharing (round-robin, but NOT load balancing). Many people say "load balancing", but I think there is no balancing involved, because "balance" literally means "compare two (or more) and adjust" - which is what real software or hardware load balancers do - whereas browsers never do this; instead they randomly select an IP and connect to it, with no knowledge of that server's current load (possibly the IP they picked belongs to the most loaded server!).

    2. Automatic failover (latest browsers only). Yes, I think DNS can be used as a simple failover system (at least in 2012 - I don't know when this behavior actually appeared). Please refer to: http://webmasters.stackexchange.com/questions/10927/using-multiple-a-records-for-my-domain-do-web-browsers-ever-try-more-than-one and "Browser-based DNS failover using multiple A records" and http://www.nber.org/sys-admin/dns-failover.html

    I would like to make sure my assumptions/findings are right, so let me know please.
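    For what it's worth, the setup under discussion is just two A records on the same name - a sketch with example addresses:

        ; round-robin / failover pair (illustrative values)
        www   300   IN   A   192.0.2.10
        www   300   IN   A   192.0.2.11

    A low TTL (300s here) is the usual companion, so a dead address can be pulled from the zone quickly.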

    Read the article

  • Nginx, HAproxy, Unicorn, Rails and Node settings

    - by Julien Genestoux
    Our application is currently only a "regular" web app, with no fancy things like streaming HTTP or websockets. It's mostly a Rails app, served by a few (20 on 2 machines) Unicorn workers, proxied by a venerable nginx server which deals with load balancing. This has been working quite well for the past year, and the app now serves between 400 and 800 requests per second at any point during the day. We're soon releasing 2 new APIs, which are both served by a Node application: a websocket one, as well as a long-polling HTTP one (the fancy thing, like the Twitter streaming API, where HTTP connections never end). They both use the same port on node, and since the node app is stateless, we can certainly deploy a few of them to handle the traffic. The app (node) is now deployed in 5 instances, which listen on 5 different 'private' ports on the same host. We need to put something in front of them to load balance, but also something that is able to deal with sockets (either websocket or HTTP streaming) which are intended to stay 'up' for days. The question is then: what? I read somewhere that HAProxy does a better job than Nginx at this. What do you recommend?
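    As a point of reference, the HAProxy side of such a setup is commonly sketched like this (ports and timeouts are assumptions; tcp mode sidesteps the proxy's HTTP handling so long-lived sockets aren't cut):

        listen node-apis
            bind :8000
            mode tcp
            timeout client 1d
            timeout server 1d
            balance roundrobin
            server node1 127.0.0.1:9001 check
            server node2 127.0.0.1:9002 check
            server node3 127.0.0.1:9003 check
            server node4 127.0.0.1:9004 check
            server node5 127.0.0.1:9005 check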

    Read the article

  • pfSense Load Balancer and Virtual IP

    - by jshin47
    I have two identical web servers on 10.2.1.13 and 10.2.1.113. I would like to set up the pfSense load balancer to balance requests to both of these. I set up pools that included HTTP and HTTPS for both of these hosts, then set up virtual servers that responded on HTTP and HTTPS and referred traffic to their respective pools. However, I set up the virtual server to listen on 10.2.1.213, a LAN IP rather than a WAN IP, because I want LAN traffic to be able to use the load balancer virtual server as well. So, I set up a Virtual IP for 10.2.1.213 on the LAN interface, and a NAT port forwarding rule for HTTP and HTTPS traffic on a WAN IP to forward to 10.2.1.213. It seems like this should work, but it fails. What eventually happens is that when I try to access the page from the WAN, I am directed to the login page for my pfSense device rather than the page I am expecting. When I try to access 10.2.1.213 from the LAN, the request times out. What is going wrong here? I have tried it with and without NAT reflection to no avail. Please advise.

    Read the article

  • JTextField only shows as a slit Using GridBagLayout, need help

    - by Bill Caffery
    Hi, thank you in advance for any help. I'm trying to build a simple program to learn GUIs, but when I run the code below, my JTextFields all show as a slit that's not large enough for even one character. I can't post an image, but it would look similar to: Label [| - where [| is what the text field actually looks like.

        import java.awt.*;
        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import javax.swing.*;

        public class lab6start implements ActionListener {
            JTextField custNameTxt;
            JTextField acctNumTxt;
            JTextField dateCreatedTxt;
            JButton checkingBtn;
            JButton savingsBtn;
            JTextField witAmountTxt;
            JButton withDrawBtn;
            JTextField depAmountTxt;
            JButton depositBtn;

            lab6start() {
                JFrame bankTeller = new JFrame("Welcome to Suchnsuch Bank");
                bankTeller.setSize(500, 280);
                bankTeller.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                bankTeller.setResizable(false);
                bankTeller.setLayout(new GridBagLayout());
                bankTeller.setBackground(Color.gray);
                //bankTeller.getContentPane().add(everything, BorderLayout.CENTER);

                GridBagConstraints c = new GridBagConstraints();

                JPanel acctInfo = new JPanel(new GridBagLayout());
                c.gridx = 0;
                c.gridy = 0;
                c.gridwidth = 2;
                c.gridheight = 1;
                c.insets = new Insets(5,5,5,5);
                bankTeller.add(acctInfo, c);
                c.gridwidth = 1;

                //labels
                //name acct# balance interestRate dateCreated
                JLabel custNameLbl = new JLabel("Name");
                c.gridx = 0;
                c.gridy = 0;
                c.insets = new Insets(0,0,0,0);
                acctInfo.add(custNameLbl, c);

                custNameTxt = new JTextField("customer name", 50);
                c.gridx = 1;
                c.gridy = 0;
                c.insets = new Insets(5,5,5,5);
                acctInfo.add(custNameTxt, c);
                custNameTxt.requestFocusInWindow();

                JLabel acctNumLbl = new JLabel("Account Number");
                c.gridx = 0;
                c.gridy = 1;
                c.insets = new Insets(5,5,5,5);
                acctInfo.add(acctNumLbl, c);

                acctNumTxt = new JTextField("account number");
                c.gridx = 1;
                c.gridy = 1;
                c.insets = new Insets(5,5,5,5);
                acctInfo.add(acctNumTxt, c);

                JLabel dateCreatedLbl = new JLabel("Date Created");
                c.gridx = 0;
                c.gridy = 2;
                c.insets = new Insets(5,5,5,5);
                acctInfo.add(dateCreatedLbl, c);

                dateCreatedTxt = new JTextField("date created");
                c.gridx = 1;
                c.gridy = 2;
                c.insets = new Insets(5,5,5,5);
                acctInfo.add(dateCreatedTxt, c);

                //buttons
                checkingBtn = new JButton("Checking");
                c.gridx = 0;
                c.gridy = 3;
                c.insets = new Insets(5,5,5,5);
                acctInfo.add(checkingBtn, c);

                savingsBtn = new JButton("Savings");
                c.gridx = 1;
                c.gridy = 3;
                c.insets = new Insets(5,5,5,5);
                acctInfo.add(savingsBtn, c);
                //end of info panel

                JPanel withDraw = new JPanel(new GridBagLayout());
                c.gridx = 0;
                c.gridy = 1;
                c.insets = new Insets(5,5,5,5);
                bankTeller.add(withDraw, c);

                witAmountTxt = new JTextField("Amount to Withdraw:");
                c.gridx = 0;
                c.gridy = 0;
                c.insets = new Insets(5,5,5,5);
                withDraw.add(witAmountTxt, c);

                withDrawBtn = new JButton("Withdraw");
                c.gridx = 1;
                c.gridy = 0;
                c.insets = new Insets(5,5,5,5);
                withDraw.add(withDrawBtn, c);
                //add check balance
                //end of withdraw panel

                JPanel deposit = new JPanel(new GridBagLayout());
                c.gridx = 1;
                c.gridy = 1;
                c.insets = new Insets(5,5,5,5);
                bankTeller.add(deposit, c);

                depAmountTxt = new JTextField("Amount to Deposit");
                c.gridx = 0;
                c.gridy = 0;
                c.insets = new Insets(5,5,5,5);
                deposit.add(depAmountTxt, c);

                depositBtn = new JButton("Deposit");
                c.gridx = 1;
                c.gridy = 0;
                c.insets = new Insets(5,5,5,5);
                deposit.add(depositBtn, c);

                bankTeller.setVisible(true);

                // action/event
                checkingBtn.addActionListener(this);
                savingsBtn.addActionListener(this);
                withDrawBtn.addActionListener(this);
                depositBtn.addActionListener(this);
            }

            public void actionPerformed(ActionEvent e) {
                if (e.getSource() == checkingBtn) {
                    witAmountTxt.requestFocusInWindow();
                    //checking newcheck = new checking();
                }
            }
        }

        /*
        String accountType = null;
        accountType = JOptionPane.showInputDialog(null, "Checking or Savings?");
        if (accountType.equalsIgnoreCase("checking")) {
            checking c_Account = new checking();
        } else if (accountType.equalsIgnoreCase("savings")) {
            // savings s_Account = new savings();
        } else {
            JOptionPane.showMessageDialog(null, "Invalid Selection");
        }
        */
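    The slit effect itself is standard GridBagLayout behaviour: a JTextField built without a columns argument reports a tiny preferred width, and when the container can't honour preferred sizes, GridBagLayout falls back to minimum sizes - the slit. A hedged fix sketch for one field cell (the same constraint lines apply to each field):

        c.gridx = 1;
        c.gridy = 1;
        c.fill = GridBagConstraints.HORIZONTAL;  // let the field stretch across its cell
        c.weightx = 1.0;                         // give the field's column the spare width
        acctInfo.add(acctNumTxt, c);

    Giving each field a column count as well (e.g. new JTextField("account number", 20)) sets a sane preferred width, which also explains why the 50-column name field behaves no better here: at 500px wide the frame can't fit it, so everything collapses to minimum size.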

    Read the article

  • Designing a software based load balancer

    - by Kishore pandey
    Hello to all Server Fault users. I am new to this website but have constantly been using the mother website, Stack Overflow. Well, to begin with, I would like to design a load balancer for the organization I am working for. As I am very new to this whole idea of load balancing and networks, I am finding it very difficult to start my project. I did a lot of research on already-existing load balancers and found some (HAProxy, NGINX) that could solve my problems, but the point is, I am still in a dilemma about whether they could answer the following requirements of mine:

    - The client and server in my architecture are distributed.
    - The load balancer should take care of the firewall.
    - The LB server should balance the load among all servers present in the WWW cloud.
    - The LB server should have some sort of configuration file, with the help of which it is possible to configure the servers.
    - Heartbeat: with the help of which it would be possible to check if any server is down; if any server is down, the request should be passed to some other server.
    - Various load balancing algorithms for the incoming requests.
    - Easy error handling.
    - It should be fairly possible to prioritize the incoming requests.

    Is there any already-available load balancer solution on the market that could satisfy these requirements? If not, is there any base code available with the help of which I could develop my own load balancer? If not, where should I start from scratch? I am practically new to everything. Any help from a load balancer expert is very much appreciated. Thanx a ton in advance. Cheers and regards, Kishore

    Read the article

  • What's the best way to run Drupal and Django sites behind the same Varnish server?

    - by Alexis Bellido
    I have a high-traffic website running with Drupal and Apache - five web servers behind a Varnish server, load balancing. Let's say this site is example.com. I'm using five backends and a director like this in my default.vcl:

        director balancer round-robin {
            { .backend = web1; }
            { .backend = web2; }
            { .backend = web3; }
            { .backend = web4; }
            { .backend = web5; }
        }

    Now I'm working on a new Django project that will be a new section of this site, running on example.com/new-section. After checking the documentation I found I can do something like this:

        sub vcl_recv {
            if (req.url ~ "^/new-section/") {
                set req.backend = newbackend;
            } else {
                set req.backend = default;
            }
        }

    That is, using a different backend for a subdirectory under the same domain. My question is: how do I make something like this work with my director and load balancing setup? I'm probably going to run two or more web servers (backends) with my new Django project, each one with a mix of Gunicorn, Nginx, and a few Python packages, and I would like to put all of those in their own Varnish director to load balance. Is it possible to use the above approach to decide which director to use, like this:

        sub vcl_recv {
            if (req.url ~ "^/new-section/") {
                set req.director = newdirector;
            } else {
                set req.director = balancer;
            }
        }

    All suggestions welcome. Thanks!
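    For what it's worth, in VCL a director is selected the same way a plain backend is - there is no req.director variable; the director's name is simply assigned to req.backend. A sketch under that assumption (Varnish 2.x/3.x syntax, director names taken from the question):

        sub vcl_recv {
            if (req.url ~ "^/new-section/") {
                set req.backend = newdirector;
            } else {
                set req.backend = balancer;
            }
        }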

    Read the article

  • Bash Parallelization of CPU-intensive processes

    - by ehsanul
    tee forwards its stdin to every single file specified, while pee does the same for pipes. These programs send every single line of their stdin to each and every file/pipe specified. However, I was looking for a way to "load balance" the stdin across different pipes, so one line is sent to the first pipe, another line to the second, etc. It would also be nice if the stdout of the pipes could be collected into one stream as well. The use case is simple parallelization of CPU-intensive processes that work on a line-by-line basis. I was doing a sed on a 14GB file, and it could have run much faster if I could use multiple sed processes. The command was like this:

        pv infile | sed 's/something//' > outfile

    To parallelize, the best would be if GNU parallel supported this functionality, like so (I made up the --demux-stdin option):

        pv infile | parallel -u -j4 --demux-stdin "sed 's/something//'" > outfile

    However, there's no option like this, and parallel always uses its stdin as arguments for the command it invokes, like xargs. So I tried this, but it's hopelessly slow, and it's clear why:

        pv infile | parallel -u -j4 "echo {} | sed 's/something//'" > outfile

    I just wanted to know if there's any other way to do this (short of coding it up myself). If there were a "load-balancing" tee (let's call it lee), I could do this:

        pv infile | lee >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile)

    Not pretty, so I'd definitely prefer something like the made-up parallel version, but this would work too.
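    As it happens, later GNU parallel releases grew almost exactly the made-up option (worth verifying against the installed version, where the flag may be spelled --roundrobin): --pipe splits stdin into chunks and feeds one chunk per job, --round-robin keeps the chunks flowing to whichever jobs are free, and the outputs are serialized back onto stdout:

        pv infile | parallel --pipe --round-robin -j4 "sed 's/something//'" > outfile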

    Read the article

  • limiting connections from tomcat to IIS - proxy? iptables?

    - by Chris Phillips
    Howdy. I've a webapp on Tomcat 6 which is connecting to an M$ PlayReady DRM instance on IIS 6.0. The performance is seen to be best when we benchmark (using ab) the DRM service with 25 concurrent connections, which gives about 250 requests per second, which is ace. Higher concurrent connections result in TCP/IP timeouts and other lower-level mess. But there is no way to control how the Tomcat app connects to the service - it's not internally managing a pool of connections etc.; they are all isolated HTTP connections to the server. Ideally I'd like a situation where we can have 25 HTTP 1.1 connections kept alive permanently from Tomcat, requesting the licenses through this static pool of connections, which I think would give the best performance. But this is not in the code, so I was looking for a way to possibly simulate this at the Linux level. I was thinking that iptables connlimit might be able to gracefully handle these connections, but whilst it could limit, it'd probably still annoy the app. What about a proxy? nginx (or possibly squid) seems potentially appealing to run on the tomcat server and hit on localhost, as we might want to add additional DRM servers to use under load balance anyway. Could this take 100 incoming connections from tomcat, accept them all, and proxy over to the IIS server in a more respectful manner? Any other angles?

    EDIT - looking over mod_proxy for apache, which we are already using for conventional purposes on an apache instance in front of this tomcat instance, might be ideal. I can set a max value on the proxy_pass to only allow 25 connections, and keep them alive permanently. Is that my answer? Many thanks, Chris
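    The nginx variant of that idea is commonly sketched like this (upstream keepalive needs nginx 1.1.4 or later; the hostname, port, and 25-connection pool are assumptions taken from the question):

        upstream drm {
            server iis-drm.internal:80;
            keepalive 25;                     # hold up to 25 idle upstream connections open
        }
        server {
            listen 127.0.0.1:8445;
            location / {
                proxy_pass http://drm;
                proxy_http_version 1.1;           # keepalive to upstream requires HTTP/1.1...
                proxy_set_header Connection "";   # ...and no "Connection: close" header
            }
        }

    Note that 'keepalive 25' caps the idle connection cache rather than hard-limiting concurrency, so it tempers the connection churn rather than strictly enforcing the 25-connection ceiling.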

    Read the article

  • DHCP Relay setup in ubuntu server

    - by jerichorivera
    I have a network appliance (QNO) that works as a traffic load balancer and DHCP server. I would like to add a Linux server in between the network appliance and the client computers; the Linux server will be used to monitor bandwidth usage. My problem is that I still want DHCP to be served by the network appliance so that load balancing will still work efficiently. We are afraid that if we set up the Linux server as the DHCP server, the network appliance will not be able to load balance the traffic, since it would only see the Linux server as a single client connecting to it. I've been searching all over for a tutorial on how to set up DHCP relay but have not found any. How do I set up DHCP relay on my Linux server, given there are two NICs attached to it: one connects the Linux server to the network appliance and the other connects the Linux server to the client computers?

    EDIT

    Router (DHCP) ---- [eth0] Linux Server (Relay agent) [eth1] ----- PC (network)

    Router IP is 192.168.0.100; eth0 is on DHCP; eth1 is static 192.168.2.11 (if I need to change this I can). Tried to do dhcrelay -i eth1 192.168.0.100, but the PC was not getting any DHCP lease from the DHCP router. I might be missing something here.
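    A couple of hedged pointers on that dhcrelay attempt: ISC dhcrelay usually needs to listen on both the client-facing and the server-facing interfaces, and the DHCP server must both hold a scope for the relayed subnet and have a route back to it (here, a 192.168.2.0/24 scope on the router plus a route to 192.168.2.0/24 via the relay). Something like:

        # listen on both NICs, relay to the router (addresses from the question)
        dhcrelay -i eth0 -i eth1 192.168.0.100

    plus IP forwarding enabled on the relay (sysctl net.ipv4.ip_forward=1).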

    Read the article
