Search Results

Search found 2736 results on 110 pages for 'mod balancer'.

Page 9/110 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Nginx Load Balancer 403 error

    - by user64473
    I am trying to set up nginx as a load balancer in front of Apache backends, so that when I point my sites at the nginx server it serves the content from the Apache backends. The Apache configuration is correct on both backends (i.e. when I browse to the site on the Apache servers directly it works fine), but when I go through the nginx load balancer I get a 403 error. I have no idea why, since nginx isn't even serving any files of its own, so there are no files to be forbidden. My virtual host is enabled and looks like this:

        upstream webs {
            server 10.0.0.30 weight=1;
            server 10.0.0.31 weight=1;
        }
        server {
            listen 80;
            server_name www.example.com example.com;
            access_log /var/log/nginx/access.log;
            location / {
                proxy_pass http://webs;
                include /etc/nginx/proxy.conf;
            }
        }

    and my nginx.conf looks like this:

        user www-data;
        worker_processes 4;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
            # multi_accept on;
        }
        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffers 32 4k;
        }

    Can anyone tell me what I am doing wrong?
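
    nginx normally records the reason for a 403 in its error log; in a setup like this the usual culprits are the backend rejecting the forwarded Host header, something in /etc/nginx/proxy.conf, or a stray default vhost answering instead of this server block. A minimal troubleshooting sketch (hostnames and IPs are the ones from the question; the commands are generic curl/nginx usage, not a confirmed fix):

        # Confirm which configuration nginx is actually running with
        sudo nginx -t

        # Reproduce the 403 through the balancer, then read why nginx denied it
        curl -I http://www.example.com/
        sudo tail -n 20 /var/log/nginx/error.log

        # Hit each Apache backend directly with the Host header nginx forwards
        curl -I -H "Host: www.example.com" http://10.0.0.30/
        curl -I -H "Host: www.example.com" http://10.0.0.31/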

    Read the article

  • How to benchmark a server behind a load balancer

    - by Fajkowsky
    Hey, I have four computers (all running Linux): two with MediaWiki (mirrors, both connected to one database), one with MySQL, and one server (DHCP, DNS, etc.). I configured a load balancer on that server, and now when I type name.local in the browser I get one of my MediaWiki servers. If I press F5 really fast I can see in top that both machines are being loaded, but not much. I used the ab tool (Apache Benchmark), but when I run it, it always connects to the same server, never alternately. I use these settings: ab -n 100 -c 10 http://name.local/
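
    ab resolves name.local once and then reuses that single address for every request, so if the balancing is done with round-robin DNS (or with source-IP affinity) a single ab client will land on one backend every time. A rough way to separate the effects, assuming the MediaWiki boxes can also be reached directly (the IPs are placeholders):

        # What does the client actually resolve?
        dig +short name.local

        # Baseline each backend directly
        ab -n 100 -c 10 http://192.168.1.11/
        ab -n 100 -c 10 http://192.168.1.12/

        # Several parallel ab clients against the balanced name is closer to
        # real traffic than one client reusing one resolved address
        for i in 1 2 3 4; do ab -n 100 -c 10 http://name.local/ & done; wait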

    Read the article

  • Grant a user access to directories shared by root (mod: 770)

    - by Paul Dinham
    I want to grant a user (username: paul) access to all directories shared by root with mode 770. I do it this way:

        groups root    # lists the groups the root user is in
        usermod -a -G group1 paul
        usermod -a -G group2 paul
        usermod -a -G group3 paul
        ...

    All of 'group1', 'group2', 'group3' appear in root's group list. However, after adding 'paul' to all the groups above, he still cannot write to directories shared by root with mode 770. Did I do something wrong?
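
    Two things commonly bite here, stated as general Linux behaviour rather than a diagnosis of this particular box: supplementary group membership is only picked up by new login sessions, and each directory's owning group has to be one of the groups paul was added to (he also needs execute permission on every parent directory). Quick checks ('/srv/shared' is a placeholder path):

        # What groups does a *fresh* session for paul actually get?
        su - paul -c id

        # Which group owns the directory, and is it really mode 770?
        ls -ld /srv/shared

        # Pick up a new group in the current shell without logging out
        newgrp group1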

    Read the article

  • Excel: ROUND & MOD giving me strange DATE results

    - by Mike
    This is sort of related to a previous question. My formula, which seemed to work fine yesterday, now gives strange results. Today is the 30th of March (30/03/10). It's 10:11 am on the clock that the computer is using for the timestamp in the NOW() part of my worksheet. Below is the formula and a screenshot of the results/columns. Question: why does it show 1/2 a day, and where does 23 1/2 come from? The NOW() is in a hidden column (F2), which I forgot to unhide before I took the screenshot.

        =IF(ISBLANK(I2),ROUND(MOD(H2-F2,24),2),ROUND(MOD(I2-F2,24),2))

    Thanks, Mike
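
    For anyone hitting the same behaviour: Excel stores date-times as serial days, so H2-F2 is a fraction of a day (or a few days), not a number of hours. MOD(x,24) leaves any value under 24 unchanged and maps small negative differences to values just below 24, which is where results like 1/2 (half a day, i.e. 12 hours when shown in a time format) and 23 1/2 (a difference of roughly minus half a day) come from. If the intent is elapsed hours, the usual form is to convert to hours first, e.g. =ROUND((H2-F2)*24,2), and format the result as a number rather than a date.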

    Read the article

  • Load Balancer Timeout

    - by Anilkumar
    "This website is temporarily unavailable. Please check back later. Unfortunately there were no suitable nodes available to serve this request." When I request a stored procedure from my program (SP is taking 2 minutes to execute) the above error is getting. I believe this is because of Load balancer Time out. How we can increment the load balancer time out.

    Read the article

  • PostgreSQL 9.1 Database Replication Between Two Production Environments with Load Balancer

    - by littleK
    I'm investigating different solutions for database replication between two PostgreSQL 9.1 databases. The setup will include two production servers in the cloud (Amazon EC2 X-Large instances) behind an elastic load balancer. What is the typical database implementation for this type of setup? Master-master replication (with Bucardo or rubyrep)? Or perhaps only one database shared between the two environments, with shared-disk failover? I've been getting some ideas from http://www.postgresql.org/docs/9.0/static/different-replication-solutions.html. Since I don't have a lot of experience with database replication, I figured I would ask the experts. What would you recommend for the described setup?

    Read the article

  • Load balancing and HTTPS strategies

    - by Dan
    I am faced with the following problem: servers get saturated because the current load balancing strategy is based on client IP. Some corporate clients access our servers from behind large proxies, so all of their users appear to our load balancer as the same IP. I think we are using some hardware load balancing device (I can investigate further if necessary). We need to maintain session affinity (the site is built in ASP), so all requests from the same IP get routed to the same node. Since all the communication goes over HTTPS, no request data (like a session ID) is available to the balancer as a client discriminator. Is there a way to use some other data besides the IP to distinguish between clients, and to route clients to different nodes even when they come from the same IP? Note: I need to keep the traffic between the balancer and the nodes safe (encrypted).

    Read the article

  • Is it possible to load balance requests from a single source?

    - by Shawn
    In our application, Server A establishes a TCP connection with Server B, then sends a large number of requests to Server B over that connection. The request messages are XML-based. Server B needs to respond within a very short period, and it takes time to process the requests, so we hope a load balancer can be introduced and we can speed up processing by using multiple Server B's. This is not a web application. I did some research but failed to find a similar application of a load balancer. Can anyone tell me if there's a load balancer that can help in our application?
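
    A plain TCP load balancer distributes connections, not messages, so a single long-lived connection from Server A will always stay pinned to one Server B. To spread the XML requests, either Server A needs to open a pool of connections that the balancer can fan out, or the balancer has to understand the message boundaries. A minimal sketch of the first approach, with the HAProxy directives shown as comments (every name, IP and port is a placeholder):

        # HAProxy in TCP mode spreads whatever connections Server A opens,
        # so Server A should open several of them instead of one:
        #
        #   defaults
        #       mode tcp
        #       timeout connect 5s
        #       timeout client  1m
        #       timeout server  1m
        #   frontend xml_in
        #       bind *:9000
        #       default_backend xml_workers
        #   backend xml_workers
        #       balance leastconn
        #       server b1 10.0.0.21:9000 check
        #       server b2 10.0.0.22:9000 check
        #
        # Validate a candidate configuration before pointing Server A at it:
        haproxy -c -f /etc/haproxy/haproxy.cfg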

    Read the article

  • Reset Network Load Balancer Connection Pool

    - by bill_the_loser
    I am currently working on load testing a web application on a virtual machine cluster. I am looking for a way to flush the connection pool / NLB cache so that each machine connecting to the NLB looks like it is connecting for the first time and doesn't get directed back to the node it was on last time. This is a Windows Server 2003 cluster behind a Microsoft software-based Network Load Balancer. Additional information: to do the load testing I'm using virtual machines, one for each node in the cluster. Somehow I got two virtual machines connecting to the same node, and I'm looking for an easier way to reset those connections than going into the NLB Manager and stopping and starting each node on the NLB. Update: we went ahead and changed the affinity on all of the nodes of the cluster to None. Now it's a non-issue.

    Read the article

  • Elastic Load Balancer & SSL termination

    - by Aaron Scruggs
    I am setting up a Rails app on AWS that: 1) requires all traffic to be SSL encrypted, 2) will fluctuate heavily in traffic on a weekly basis, and 3) will be maintained by someone who is a stronger coder than sysadmin but will be responsible for both. I am thinking of SSL termination on an Elastic Load Balancer backed by small EC2 instances running nginx and Unicorn. A small subset of the requests will take longer than 10s; because of this I am also debating using 'thin' instead of 'unicorn'. My question is this: is this sane, or am I stepping into a quagmire of cost, maintainability, security or performance problems?

    Read the article

  • AWS Elastic load balancer doesn't decrease instances from Alarm Trigger

    - by jchysk
    I have a load balancer that I created an auto scaling group and launch config for. I created the auto scaling group with a min size of 1 and a max size of 20. I have a scale-down policy:

        as-put-scaling-policy SBMScaleDownPolicy --auto-scaling-group SBMAutoScaleGroup --adjustment=-1 --type ChangeInCapacity --cooldown 300

    Then I set up an alarm:

        mon-put-metric-alarm SBMLowCPUAlarm --comparison-operator LessThanThreshold --evaluation-periods 1 --metric-name CPUUtilization --namespace "AWS/EC2" --period 600 --statistic Average --threshold 35 --alarm-actions arn:aws:autoscaling:us-east-1:policystuffhere:autoScalingGroupName/SBMAutoScaleGroup:policyName/SBMScaleDownPolicy --dimensions "AutoScalingGroupName=SBMAutoScaleGroup"

    When average CPU usage over 10 minutes is under 35, the alarm shows up in CloudWatch as "In Alarm State", but it doesn't decrease the number of instances. Also, if there's only one instance running, it will spin up another so that there are 2, even if a scale-up alarm isn't hit. It seems like a default value of 2 is set somewhere. How can I change this?
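
    When a group keeps itself at two instances even without a scale-up alarm firing, the usual suspects are the group's min-size or desired-capacity having ended up at 2, rather than anything wrong with the scale-down alarm. A hedged check with the same generation of command line tools as the question (verify the option names against the tools' --help output):

        # Inspect the current min-size / max-size / desired-capacity
        as-describe-auto-scaling-groups SBMAutoScaleGroup --show-long

        # Bring the desired capacity back in line with the intended minimum
        as-update-auto-scaling-group SBMAutoScaleGroup --min-size 1 --desired-capacity 1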

    Read the article

  • How to set RpcClientAccessServer for an Exchange 2010 mailbox database to a load balancer

    - by Archit Baweja
    I have two Exchange 2010 servers, each with a mailbox database. I have also set up a hardware load balancer (a KEMP LoadMaster 2200, to be precise) to load balance access to the CAS role. My HLB has an IP of 192.168.1.100, and I've set up a DNS A record pointing mail.mydomain.com to 192.168.1.100. However, when I try to set the RpcClientAccessServer on a mailbox database using

        Set-MailboxDatabase "My Mailbox Database" -RpcClientAccessServer mail.mydomain.com

    I get an error saying:

        Exchange server "mail.mydomain.com" was not found. Please make sure you have typed it correctly.
            + CategoryInfo          : NotSpecified: (:) [], ManagementObjectNotFoundException
            + FullyQualifiedErrorId : 4082394C

    Any ideas?
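
    A likely explanation, offered as a pointer rather than a confirmed answer: in Exchange 2010 the RpcClientAccessServer has to resolve to an Exchange object, either an individual Client Access server or a Client Access array, so an arbitrary DNS record for the load balancer's VIP is not enough. The usual pattern with a hardware load balancer is to create an array whose FQDN matches the balanced name (New-ClientAccessArray with -Name and -Fqdn set to mail.mydomain.com and -Site set to the appropriate AD site) and then re-run the Set-MailboxDatabase command against it.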

    Read the article

  • nginx proxy pass redirect through load balancer

    - by Brian
    I have several backend webservers that are load-balanced using LVS. These machines have only internal non-routable IPs on them. The load-balancer is the only machine with an external IP. This setup works great. I would like to add another webserver for image serving, but it will not be part of the load-balanced pool. Is it possible to proxy pass from the load-balanced web servers to the image server and have the response redirected to the client? Client--external LB--internal web server--internal image server I've gotten proxy pass working when I remove the LB from the equation, but no luck when trying to use it.
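
    Proxying from the load-balanced web servers will do what is described here: the web server fetches the image from the internal-only image server and streams the response back out through the LB, so the image server never needs a routable address and no redirect is involved. A minimal sketch, assuming the image server is reachable from the web servers at 10.0.0.40 (a placeholder), with the nginx directives shown as comments:

        # Add to the existing server {} block on each load-balanced web server:
        #
        #   location /images/ {
        #       proxy_pass http://10.0.0.40/;    # strips /images/ before proxying
        #       proxy_set_header Host $host;
        #   }
        #
        # then check the configuration and reload:
        sudo nginx -t && sudo service nginx reload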

    Read the article

  • First request too slow even if I have a load balancer in the back

    - by adrian7
    I have Apache 2 on CentOS, plus BIND, with a WordPress website on it (e.g. example.com). I have also set up, on another server in a different country, a load balancer (Varnish on :80 + nginx on 127.0.0.1:8080) for it, whose job is to serve all static content under /wp-content/. Using Simple DNS editor I added an A record pointing cdn.example.com to that server's IP, so there is no extra work from a second DNS server. Then, using .htaccess, I redirect all requests for jpg|gif|css|js files to cdn.example.com. That works, and all those files are saved on the "cdn" server and served right away. My problem is that the first time I visit example.com (e.g. after restarting the computer or closing the browser) the load time is 1 to 3 seconds, while any subsequent page load takes only 300 to 600 milliseconds. I know it might be a DNS issue, but I have done a cache check on several websites and cdn.example.com resolves to the right IP. Do you have any ideas where I should dig to solve this first-time slowness?
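
    A cold visit bundles DNS lookups for both example.com and cdn.example.com, TCP handshakes to two servers in different countries, and the page render itself, so the first step is to measure which phase is eating the time. A generic way to break it down with curl (both URLs are placeholders):

        for url in http://example.com/ http://cdn.example.com/wp-content/example.css; do
          curl -o /dev/null -s -w "$url dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n" "$url"
        done

    If dns dominates, a longer TTL on the A records (or a closer resolver) is the usual fix; if connect dominates, the cross-country round trip to the "cdn" server is being paid on every cold visit.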

    Read the article

  • Installing Mod-wsgi 3.3 for apache 2.2 and python 3.2

    - by aaronasterling
    I am attempting to install mod_wsgi 3.3 on an Ubuntu 11.10 desktop edition with Apache 2.2 and Python 3.2. I downloaded the source tarball and extracted it. I configured it using the --with-python=/usr/bin/python3 option to configure; this is the only copy of Python 3 that I have installed. I then issued the commands make and sudo make install. I attempted to restart Apache using sudo /etc/init.d/apache2 restart and get the following error message:

        apache2: Syntax error on line 203 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/wsgi.load: Cannot load /usr/lib/apache2/modules/mod_wsgi.so into server: /usr/lib/apache2/modules/mod_wsgi.so: undefined symbol: PyCObject_FromVoidPtr
        Action 'configtest' failed.
        The Apache error log may have more information.
        ...fail!

    The error logs only inform us that it's a segfault. I checked that it's linked against the right Python library with ldd mod_wsgi.so and got the output:

        linux-gate.so.1 =>  (0x00d66000)
        libpython3.2mu.so.1.0 => /usr/lib/libpython3.2mu.so.1.0 (0x0065b000)
        libpthread.so.0 => /lib/i386-linux-gnu/libpthread.so.0 (0x00a20000)
        libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0x00110000)
        libssl.so.1.0.0 => /lib/i386-linux-gnu/libssl.so.1.0.0 (0x0028c000)
        libcrypto.so.1.0.0 => /lib/i386-linux-gnu/libcrypto.so.1.0.0 (0x0044c000)
        libffi.so.6 => /usr/lib/i386-linux-gnu/libffi.so.6 (0x002d9000)
        libz.so.1 => /lib/i386-linux-gnu/libz.so.1 (0x00eb3000)
        libexpat.so.1 => /lib/i386-linux-gnu/libexpat.so.1 (0x00abe000)
        libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0x002e0000)
        libutil.so.1 => /lib/i386-linux-gnu/libutil.so.1 (0x00c47000)
        libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0x00e24000)
        /lib/ld-linux.so.2 (0x0042c000)

    It seems to be linking against the Python 3 library, so I'm not sure what the issue is. I have read in another question that mod_python can cause problems, but it was never installed. I saw that the WSGIPythonHome directive can be used to point to the correct Python version, so I created a directory /usr/bin/apache2-python/ with links named python and python3 (the name I passed to the configure script) pointing to /usr/bin/python3. This results in the same error, so I'm pretty sure it's using the correct version of Python. I am now at a loss. Thanks in advance for any help.

    Update: using the version from the repository I get the following log when I attempt to request a page:

        [Wed Mar 21 13:21:11 2012] [notice] child pid 5567 exit signal Aborted (6)
        Fatal Python error: Py_Initialize: Unable to get the locale encoding
        LookupError: no codec search functions registered: can't find encoding
        [Wed Mar 21 13:21:13 2012] [notice] child pid 5568 exit signal Aborted (6)
        Fatal Python error: Py_Initialize: Unable to get the locale encoding
        LookupError: no codec search functions registered: can't find encoding
        [Wed Mar 21 13:21:14 2012] [notice] caught SIGTERM, shutting down

    If I comment out the instruction to load mod_wsgi, the page serves normally.
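
    For what it's worth, PyCObject_FromVoidPtr is part of the Python 2 C API and was removed in Python 3.2, so a mod_wsgi.so that still references it was compiled against Python 2 headers; the usual causes are stale object files from an earlier build, or Apache loading a different copy of the module (for example a distro Python 2 build) than the one inspected with ldd. A hedged clean-rebuild sketch (paths follow the question):

        cd mod_wsgi-3.3
        make clean                                   # drop objects built against other headers
        ./configure --with-python=/usr/bin/python3.2
        make
        sudo make install

        # Make sure only one mod_wsgi.so exists and it is the freshly built one
        ls -l /usr/lib/apache2/modules/mod_wsgi*.so
        sudo /etc/init.d/apache2 restart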

    Read the article

  • OpenLDAP mirror mode replication failing with TLS behind a load balancer

    - by Lynn Owens
    I have two OpenLDAP servers that are both running TLS: ldap1.mydomain.com and ldap2.mydomain.com. I also have a load balancer cluster with a DNS name of its own: ldap.mydomain.com. The SSL certificate has a CN of ldap.mydomain.com, with SANs of ldap1.mydomain.com and ldap2.mydomain.com. Everything works... except mirror mode replication. My mirror mode replication is set up like this. In ldap.conf:

        TLS_REQCERT allow

    In cn=config.ldif:

        olcServerID: 1 ldap://ldap1.mydomain.com
        olcServerID: 2 ldap://ldap2.mydomain.com

    On ldap1, in olcDatabase{1}hdb.ldif:

        olcMirrorMode: TRUE
        olcSyncrepl: {0}rid=001 provider=ldap://ldap2.mydomain.com bindmethod=simple bindmethod=simple binddn="cn=me,dc=mydomain,dc=com" credentials="REDACTED" starttls=yes searchbase="dc=mydomain,dc=com" schemachecking=on type=refreshAndPersist retry="60 +"

    On ldap2, in olcDatabase{1}hdb.ldif:

        olcMirrorMode: TRUE
        olcSyncrepl: {0}rid=001 provider=ldap://ldap1.mydomain.com bindmethod=simple bindmethod=simple binddn="cn=me,dc=mydomain,dc=com" credentials="REDACTED" starttls=yes searchbase="dc=mydomain,dc=com" schemachecking=on type=refreshAndPersist retry="60 +"

    Here are the errors I'm getting in syslog:

        Dec  1 21:05:01 ldap1 slapd[6800]: slap_client_connect: URI=ldap://ldap2.mydomain.com DN="cn=me,dc=mydomain,dc=com" ldap_sasl_bind_s failed (-1)
        Dec  1 21:05:01 ldap1 slapd[6800]: do_syncrepl: rid=001 rc -1 retrying
        Dec  1 21:05:08 ldap1 slapd[6800]: conn=1111 fd=20 ACCEPT from IP=ldap.mydomain.com:2295 (IP=ldap1.mydomain.com:636)
        Dec  1 21:05:08 ldap1 slapd[6800]: conn=1111 fd=20 closed (TLS negotiation failure)

    Any ideas? I've been working on OpenLDAP for way too long now.
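
    The syslog shows the syncrepl bind dying during TLS negotiation, so the first thing to pin down is whether each server trusts the certificate the other one presents when contacted directly, bypassing the balancer. Generic OpenLDAP/OpenSSL checks from ldap1 (a sketch, not a confirmed diagnosis):

        # Does a StartTLS bind to the peer work with the same credentials syncrepl uses?
        ldapsearch -ZZ -H ldap://ldap2.mydomain.com -x \
            -D "cn=me,dc=mydomain,dc=com" -W -b "dc=mydomain,dc=com" -s base

        # Which certificate does the peer present on its TLS port, and does the
        # SAN list cover ldap2.mydomain.com?
        openssl s_client -connect ldap2.mydomain.com:636 -showcerts </dev/null \
            | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

    If certificate verification turns out to be the problem, the syncrepl stanza itself accepts a tls_reqcert=... parameter (documented in slapd.conf(5)) in addition to the TLS_REQCERT line in ldap.conf.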

    Read the article

  • AWS elastic load balancer basic issues

    - by Jones
    I have an array of EC2 t1.micro instances behind a load balancer, and each node can manage ~100 concurrent users before it starts to get wonky. I would think that if I have 2 such instances it would allow my network to manage 200 concurrent users... apparently not. When I really slam the server (blitz.io) with a full 275 concurrent users, it behaves the same as if there is just one node: it goes from a 400 ms response time to 1.6 seconds (which is expected for a single t1.micro, but not for 6). So the question is, am I simply not doing something right, or is ELB effectively worthless? Does anyone have some wisdom on this?

    ab logs:

    Load balancer (3x m1.medium):

        Document Path:          /ping/index.html
        Document Length:        185 bytes
        Concurrency Level:      100
        Time taken for tests:   11.668 seconds
        Complete requests:      50000
        Failed requests:        0
        Write errors:           0
        Non-2xx responses:      50001
        Total transferred:      19850397 bytes
        HTML transferred:       9250185 bytes
        Requests per second:    4285.10 [#/sec] (mean)
        Time per request:       23.337 [ms] (mean)
        Time per request:       0.233 [ms] (mean, across all concurrent requests)
        Transfer rate:          1661.35 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        1    2   4.3      2      63
        Processing:     2   21  15.1     19     302
        Waiting:        2   21  15.0     19     261
        Total:          3   23  15.7     21     304

    Single instance (1x m1.medium, direct connection):

        Document Path:          /ping/index.html
        Document Length:        185 bytes
        Concurrency Level:      100
        Time taken for tests:   9.597 seconds
        Complete requests:      50000
        Failed requests:        0
        Write errors:           0
        Non-2xx responses:      50001
        Total transferred:      19850397 bytes
        HTML transferred:       9250185 bytes
        Requests per second:    5210.19 [#/sec] (mean)
        Time per request:       19.193 [ms] (mean)
        Time per request:       0.192 [ms] (mean, across all concurrent requests)
        Transfer rate:          2020.01 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        1    9 128.9      3    3010
        Processing:     1   10   8.7      9     141
        Waiting:        1    9   8.7      8     140
        Total:          2   19 129.0     12    3020
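
    Two generic things are worth ruling out before concluding that ELB adds nothing: ELB itself scales out gradually under sustained load, so a sudden synthetic burst can hit it before it has grown, and t1.micros throttle hard once their short CPU burst is spent. A couple of hedged checks (the load balancer name is a placeholder):

        # Are both instances registered and passing the ELB health check?
        aws elb describe-instance-health --load-balancer-name my-elb

        # Watch for CPU steal/throttling on each t1.micro while the test runs
        vmstat 1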

    Read the article

  • Apache and multiple tomcats proxy

    - by Sebb77
    I have one Apache server and two Tomcat servers with two different applications. I want to use Apache as a proxy so that users can access both applications from the same URL, using different paths, e.g.:

        localhost/app1 --> localhost:8080/app1
        localhost/app2 --> localhost:8181/app2

    I tried all three Apache proxy approaches (mod_jk, mod_proxy_http and mod_proxy_ajp), but only the first application works, whilst the second is not accessible. This is the Apache configuration I'm using:

        ProxyPassMatch ^(/.*\.gif)$ !
        ProxyPassMatch ^(/.*\.css)$ !
        ProxyPassMatch ^(/.*\.png)$ !
        ProxyPassMatch ^(/.*\.js)$ !
        ProxyPassMatch ^(/.*\.jpeg)$ !
        ProxyPassMatch ^(/.*\.jpg)$ !

        ProxyRequests Off
        ProxyPass /app1 ajp://localhost:8009/
        ProxyPassReverse /app1 ajp://localhost:8009/
        ProxyPass /app2 ajp://localhost:8909/
        ProxyPassReverse /app2 ajp://localhost:8909/

    With the above, I manage to view the Tomcat root application using localhost/app1, but I get "Service Temporarily Unavailable" (an Apache error) when accessing app2. I need to keep the Tomcat servers separate because I need to restart one of the applications often, and putting both apps on the same Tomcat is not an option. Can someone point out what I'm doing wrong? Thank you all.
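
    Two things stand out, offered as a guess from the symptoms rather than a verified fix: proxying /app1 to ajp://localhost:8009/ maps it onto Tomcat's ROOT context (which matches seeing the Tomcat root application), so the context path usually has to be repeated on the backend side, and a 503 "Service Temporarily Unavailable" for /app2 typically means nothing is answering on the second AJP port. A quick check, with the corrected mappings shown as comments:

        # Is an AJP connector actually listening on both ports?
        sudo netstat -tlnp | grep -E ':(8009|8909)'

        # Corrected mappings (note the context path on the backend side):
        #
        #   ProxyPass        /app1 ajp://localhost:8009/app1
        #   ProxyPassReverse /app1 ajp://localhost:8009/app1
        #   ProxyPass        /app2 ajp://localhost:8909/app2
        #   ProxyPassReverse /app2 ajp://localhost:8909/app2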

    Read the article

  • Why do apache2 upgrades remove and not re-install libapache2-mod-php5?

    - by nutznboltz
    We repeatedly see that when an apache2 update arrives and is installed, it causes the libapache2-mod-php5 package to be removed and does not subsequently re-install it automatically. We must then re-install libapache2-mod-php5 manually in order to restore functionality to our web server. Please see the following GitHub gist; it is a contiguous section of our server's dpkg.log showing the November 14, 2011 update to apache2: https://gist.github.com/1368361 It includes:

        2011-11-14 11:22:18 remove libapache2-mod-php5 5.3.2-1ubuntu4.10 5.3.2-1ubuntu4.10

    Is this a known issue? Do other people see this too? I could not find any Launchpad bug reports about it. Platform details:

        $ lsb_release -ds
        Ubuntu 10.04.3 LTS
        $ uname -srvm
        Linux 2.6.38-12-virtual #51~lucid1-Ubuntu SMP Thu Sep 29 20:27:50 UTC 2011 x86_64
        $ dpkg -l | awk '/ii.*apache/ {print $2 " " $3 }'
        apache2 2.2.14-5ubuntu8.7
        apache2-mpm-prefork 2.2.14-5ubuntu8.7
        apache2-utils 2.2.14-5ubuntu8.7
        apache2.2-bin 2.2.14-5ubuntu8.7
        apache2.2-common 2.2.14-5ubuntu8.7
        libapache2-mod-authnz-external 3.2.4-2+squeeze1build0.10.04.1
        libapache2-mod-php5 5.3.2-1ubuntu4.10

    Thanks. At a high level the update process looks like:

        package package_name do
          action :upgrade
          case node[:platform]
          when 'centos', 'redhat', 'scientific'
            options '--disableplugin=fastestmirror'
          when 'ubuntu'
            options '-o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"'
          end
        end

    But at a lower level:

        def install_package(name, version)
          run_command_with_systems_locale(
            :command => "apt-get -q -y#{expand_options(@new_resource.options)} install #{name}=#{version}",
            :environment => {
              "DEBIAN_FRONTEND" => "noninteractive"
            }
          )
        end

        def upgrade_package(name, version)
          install_package(name, version)
        end

    So Chef is using "install" to do "update". This sort of moves the question around to: how does apt-get safe-upgrade remember to re-install libapache2-mod-php5? The exact sequence of packages that triggered this was:

        apache2
        apache2-mpm-prefork
        apache2-mpm-worker
        apache2-utils
        apache2.2-bin
        apache2.2-common

    But the code is attempting to run checks to make sure the packages in that list are installed already before attempting to "upgrade" them:

        case node[:platform]
        when 'debian', 'centos', 'fedora', 'redhat', 'scientific', 'ubuntu'
          # first primitive way is to define the updates in the recipe
          # data bags will be used later
          %w/ apache2 apache2-mpm-prefork apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common /.each{ |package_name|
            Chef::Log.debug("is #{package_name} among local packages available for changes?")
            next unless node[:packages][:changes].keys.include?(package_name)
            Chef::Log.debug("is #{package_name} available for upgrade?")
            next unless node[:packages][:changes][package_name][:action] == 'upgrade'
            package package_name do
              action :upgrade
              case node[:platform]
              when 'centos', 'redhat', 'scientific'
                options '--disableplugin=fastestmirror'
              when 'ubuntu'
                options '-o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"'
              end
            end
            tag('upgraded')
          }
          # after upgrading everything, run yum cache updater
          if tagged?('upgraded')
            # Remove old orphaned dependencies and kernel images and kernel headers etc.
            # Remove cached deb files.
            case node[:platform]
            when 'ubuntu'
              execute 'apt-get -y autoremove'
              execute 'apt-get clean'
            # Re-check what updates are available soon.
            when 'centos', 'fedora', 'redhat', 'scientific'
              node[:packages][:last_time_we_looked_at_yum] = 0
            end
            untag('upgraded')
          end
        end

    But it's clear that it fails, since the dpkg.log has

        2011-11-14 11:22:25 install apache2-mpm-worker 2.2.14-5ubuntu8.7

    on a system which does not currently have apache2-mpm-worker. I will have to discuss this with the author, thanks again.
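
    The dpkg log is consistent with a dependency swap rather than an apt quirk: on Lucid, libapache2-mod-php5 depends on apache2-mpm-prefork (or -itk), and apache2-mpm-worker conflicts with apache2-mpm-prefork, so a run that installs or upgrades apache2-mpm-worker on a prefork machine drags out prefork, and the PHP module with it. A hedged way to confirm:

        # Check the dependency / conflict relationships on this release
        apt-cache show libapache2-mod-php5 | grep -i ^depends
        apt-cache show apache2-mpm-worker  | grep -i ^conflicts

        # Dry-run what installing the worker MPM would actually do
        apt-get -s install apache2-mpm-worker | grep -E '^(Inst|Remv)'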

    Read the article

  • AWS Load balancer connection reset

    - by joshmmo
    I have an ELB set up with two instances. The issue I have with it is that when I do not add www to the hostname, the ELB just hangs. This is some info I get when I spider with wget:

        Spider mode enabled. Check if remote file exists.
        --2013-06-20 13:40:54--  http://learning.example.com/
        Resolving learning.example.com... 54.xxx.x.x53, 50.xx.xxx.x71
        Connecting to learning.example.com|54.xxx.x.x53|:80... connected.
        HTTP request sent, awaiting response... No data received.
        Retrying.

    When I add www it works great. I have a GoDaddy SSL cert that I added to the listener section; it covers 3 domains: www.learning.example.com, files.learning.example.com and learning.example.com. These are my listener settings:

        LB Protocol  LB Port  Instance Protocol  Instance Port  Cipher  SSL Certificate
        HTTP         80       HTTPS              443            N/A     N/A
        SSL          443      SSL                443            Change  canvasNew (Change)

    My EC2 instances are running apache2 on Ubuntu 12.04. I will be happy to post my vhosts file if needed. However, when I ran the server with the domains pointing to just one EC2 instance, things worked fine. How can I fix this issue for learning.example.com, and why does www work just fine? A second question would be: what is the difference between instance protocol and load balancer protocol?

    EDIT: Here are the dig results for learning.example.com from yesterday. I changed the DNS entry to point to one instance to make sure it was the ELB. When I switch it back I will do it for www.learning.example.com.

        ; <<>> DiG 9.9.1-P2 <<>> learning.example.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20210
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;learning.example.com.      IN  A

        ;; ANSWER SECTION:
        learning.example.com.  2559  IN  CNAME  canvas-22222222222.us-west-1.elb.amazonaws.com.
        canvas-22222222222.us-west-1.elb.amazonaws.com.  60  IN  A  54.xxx.x.x53
        canvas-22222222222.us-west-1.elb.amazonaws.com.  60  IN  A  50.xx.xxx.x71

        ;; Query time: 83 msec
        ;; SERVER: 10.x.xx.20#53(10.x.xx.20)
        ;; WHEN: Thu Jun 20 13:40:47 2013
        ;; MSG SIZE  rcvd: 137

    EDIT 2: Here is some more info that might be helpful. Port configuration:

        80 (HTTP) forwarding to 443 (HTTPS)
            Backend Authentication: Disabled
            Stickiness: Disabled (edit)
        443 (SSL, Certificate: canvasNew) forwarding to 443 (SSL)
            Backend Authentication: Disabled

    So I switched everything to one EC2 IP address to bypass the ELB and make sure things are working. It's running great: www and the non-www URL both work perfectly fine. It's only when I switch things to the ELB that learning.example.com hangs while www.learning.example.com works. Hopefully you can get some ideas flowing.
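
    On the terminology question: the load balancer protocol and port describe the client-facing side (browser to ELB), and the instance protocol and port describe the back-end side (ELB to the EC2 instances), so "HTTP 80 forwarding to HTTPS 443" means the ELB accepts plain HTTP from clients and re-encrypts it towards Apache. For comparing the working and hanging hostnames it can help to dump the full listener and health state (the load balancer name here is a guess based on the CNAME above):

        aws elb describe-load-balancers --load-balancer-names canvas \
            --query 'LoadBalancerDescriptions[].ListenerDescriptions'
        aws elb describe-instance-health --load-balancer-name canvas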

    Read the article

  • AWS Load Balancer with a static IP address

    - by user965904
    I have a setup running on the Amazon cloud with a couple of EC2 instances behind a load balancer. It is important that the site has a unique (static) IP or set of IPs, as I'm plugging in third-party APIs which only accept requests made from IPs that have been added to their whitelist. So basically, unless we can give these third parties a static IP or range of IPs that requests from the site will always come from, we would be unable to make any calls to them. Does anyone know how to achieve this, given that Elastic IPs are not compatible with load balancers? If I were to look up the IP of the load balancer's DNS name (e.g. dualstack.awseb-BAMobile-ENV-xxxxxxxxx.eu-west-1.elb.amazonaws.com resolves to 200.200.200.200), would that IP be static? Any help/advice is greatly appreciated, guys.
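
    Two points worth separating, hedged as general AWS behaviour of that era rather than advice specific to this stack: a classic ELB scales by changing the set of IPs behind its DNS name, so an address resolved today is not guaranteed tomorrow, and outbound calls to the third-party APIs leave from the instances (or a NAT), not from the load balancer, so the ELB's address is not what the API providers would see anyway. The usual workaround was to send the whitelisted outbound traffic through a NAT or proxy instance that owns an Elastic IP. The first point is easy to see:

        # Run this a few times over a day or two; the answer set will change
        dig +short dualstack.awseb-BAMobile-ENV-xxxxxxxxx.eu-west-1.elb.amazonaws.com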

    Read the article
