Hi,
Multiple A records for the same domain name seem to be used almost exclusively to implement DNS Round Robin as a cheap load-balancing technique.
The usual warning against DNS RR is that it is not good for high availability. When one IP goes down, clients will keep using it for minutes.
A load balancer is often suggested as a better choice.
Neither claim is completely true:
When the traffic is HTTP, most browsers are able to automatically try the next A record if the previous one is down, without a new DNS look-up. Read chapter 3.1 here, and here. (A minimal sketch of this behavior follows the next point.)
When multiple data centers are involved, DNS RR is the only option to distribute traffic across them.
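Here is a minimal Python sketch of that client-side behavior (the port and timeout are just illustrative defaults, not taken from any browser): it resolves every A record in a single lookup, then tries each address in turn until one connects, which is roughly what a "smart" browser does when the first address is dead.

```python
import socket

def connect_with_failover(host, port=80, timeout=3):
    """Resolve all A records for `host` once, then try each address in
    turn until a TCP connection succeeds."""
    # One DNS lookup returns every A record for the name.
    infos = socket.getaddrinfo(host, port, socket.AF_INET, socket.SOCK_STREAM)
    addresses = [info[4][0] for info in infos]
    for ip in addresses:
        try:
            # First reachable address (data center) wins.
            return socket.create_connection((ip, port), timeout=timeout)
        except OSError:
            # This IP is down; fall through to the next cached record,
            # without doing a second DNS lookup.
            continue
    raise OSError("all A records for %s are unreachable" % host)
```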
So, is it true that, with multiple data centers and HTTP traffic, DNS RR is the ONLY way to ensure instant fail-over when one data center goes down?
Thanks,
Valentino
Edit:
Of course each data center has a local load balancer with a hot spare.
It's OK to sacrifice session affinity for an instant fail-over.
AFAIK, the only way for a DNS server to steer clients to one data center instead of another is to reply with just the IP (or IPs) associated with that data center. If the data center becomes unreachable, then all those IPs are unreachable too. This means that, even if smart browsers are able to instantly try another A record, every attempt will fail until the local cache entry expires and a new DNS lookup is done, fetching the new working IPs (I assume the DNS automatically points to a new data center when one fails). So, "smart DNS" cannot ensure instant fail-over.
Conversely, DNS round robin permits it. When one data center fails, the smart browsers (most of them) instantly try the other cached A records, jumping to another (working) data center. So, DNS round robin doesn't ensure session affinity or the lowest RTT, but it seems to be the only way to ensure instant fail-over when the clients are "smart" browsers.
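To make the difference concrete, a back-of-the-envelope sketch (the function and the numbers are mine, purely for illustration): with RR records spanning data centers the worst case is roughly one connect timeout, while with geo-aware DNS the client also has to wait out the cached TTL before it can learn the new IPs.

```python
def worst_case_failover(records_span_datacenters, dns_ttl, connect_timeout):
    """Rough worst-case fail-over time, in seconds, for a 'smart' client
    that retries its cached A records before re-resolving."""
    if records_span_datacenters:
        # DNS RR across data centers: one failed connect, then the next
        # cached record already points at a healthy data center.
        return connect_timeout
    # Geo-aware DNS: every cached IP belongs to the dead data center, so
    # the client must wait out the TTL, re-resolve, and then connect.
    return dns_ttl + connect_timeout

print(worst_case_failover(True, dns_ttl=300, connect_timeout=3))   # ~3 s
print(worst_case_failover(False, dns_ttl=300, connect_timeout=3))  # ~303 s
```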
Edit 2:
Some people suggest TCP Anycast as a definitive solution. This paper (chapter 6) explains that Anycast fail-over depends on BGP convergence, so it can take anywhere from 20 seconds to 15 minutes to complete.
20 seconds is possible only on networks whose topology was optimized for this.
Probably only CDN operators can guarantee such fast fail-overs.
Edit 3:
I did some DNS look-ups and traceroutes (maybe some expert can double check) and:
The only CDN using TCP Anycast seems to be CacheFly; other operators like CDNetworks and BitGravity use CacheFly. It seems their edges cannot be used as reverse proxies, so they cannot be used to provide instant fail-over.
Akamai and LimeLight seem to use geo-aware DNS. But! They return multiple A records.
From the traceroutes it seems that the returned IPs are in the same data center. So, I'm puzzled about how they can offer a 100% SLA when one data center goes down.
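In case someone wants to repeat the DNS part of these checks, a minimal sketch using only the Python standard library (the hostnames are placeholders; substitute the CDN-fronted sites you care about, and keep in mind that geo-aware DNS answers depend on where your resolver sits):

```python
import socket

def a_records(hostname):
    """IPv4 addresses the local resolver returns for `hostname`."""
    infos = socket.getaddrinfo(hostname, 80, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

# Placeholders: substitute the CDN-fronted hostnames you want to inspect.
for host in ("www.example.com", "www.example.org"):
    print(host, a_records(host))
```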