Search Results

Search found 33162 results on 1327 pages for 'static ip address'.


  • Can I have the gcc linker create a static library?

    - by Lucas Meijer
    I have a library consisting of some 300 C++ files. The program that consumes the library does not want to link to it dynamically (for various reasons, the best one being that some of the supported platforms do not support dynamic linking). So I use g++ and ar to create a static library (.a). This file contains all the symbols of all those files, including ones that the library doesn't want to export. I suspect linking the consuming program against this library takes unnecessarily long, as all the .o files inside the .a still need to have their references resolved, and the linker has more symbols to process. When creating a dynamic library (.dylib / .so) you can actually use a linker, which can resolve all intra-library symbols and export only those that the library wants to export; the result, however, can only be linked into the consuming program at runtime. I would like to somehow get the benefits of dynamic linking but use a static library. If my Google searches are right that this is indeed not possible, I would love to understand why, as it seems like something that many C and C++ programs could benefit from.
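
    A partial workaround does exist on GNU toolchains, sketched below; the file names and the exported-symbol list are illustrative, not from the question. Pre-linking with ld -r resolves intra-library references once, and objcopy can then demote everything that should stay internal, so the final link sees one object with a small symbol table (though unreferenced code is still not dropped the way a shared-library link would drop it):

        # GNU binutils assumed; all names here are illustrative.
        # 1) Pre-link all the objects into a single relocatable object,
        #    resolving references between them once:
        ld -r -o combined.o *.o

        # 2) Keep only the intended public symbols global; all others become
        #    local and vanish from the final link's symbol search:
        objcopy --keep-global-symbols=exported_symbols.txt combined.o

        # 3) Archive the pre-linked object as the static library:
        ar rcs libmylib.a combined.o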

    Read the article

  • Pointers and Addresses in C

    - by Mohit
    #include "stdio.h"

        main() {
            int i = 3, *x;
            float j = 1.5, *y;
            char k = 'c', *z;

            x = &i;
            y = &j;
            z = &k;

            printf("\nAddress of x= %u", x);
            printf("\nAddress of y= %u", y);
            printf("\nAddress of z= %u", z);

            x++;
            y++; y++; y++; y++;
            z++;

            printf("\nNew Address of x= %u", x);
            printf("\nNew Address of y= %u", y);
            printf("\nNew Address of z= %u", z);
            printf("\nNew Value of i= %d", i);
            printf("\nNew Value of j= %f", j);
            printf("\nNew Value of k= %c\n", k);
        }

    Output:

        Address of x= 3219901868
        Address of y= 3219901860
        Address of z= 3219901875
        New Address of x= 3219901872
        New Address of y= 3219901876
        New Address of z= 3219901876
        New Value of i= 3
        New Value of j= 1.500000
        New Value of k= c

    The new addresses of y and z are the same. How can two variables have the same address and yet different values? Note: I used the gcc compiler on Ubuntu 9.04.
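
    A self-contained sketch of the mechanics involved (my illustration, not the asker's code): p++ advances a pointer by sizeof(*p) bytes, so after the increments none of the three pointers points at i, j or k any more, and two pointers coming to hold the same address says nothing about the variables they originally pointed to:

        #include <stdio.h>

        int main(void) {
            int i = 3;
            float j = 1.5f;
            char k = 'c';

            int   *x = &i;
            float *y = &j;
            char  *z = &k;

            /* Each increment moves a pointer by the size of its target type. */
            printf("x steps by %zu bytes per ++\n", sizeof *x);  /* typically 4 */
            printf("y steps by %zu bytes per ++\n", sizeof *y);  /* typically 4 */
            printf("z steps by %zu bytes per ++\n", sizeof *z);  /* always 1 */

            y++; z++;  /* y and z now point past j and k into unrelated memory */

            /* i, j and k keep their own addresses and values regardless. */
            printf("&i=%p &j=%p &k=%p\n", (void *)&i, (void *)&j, (void *)&k);
            printf("y=%p z=%p (may coincide; neither points at a variable now)\n",
                   (void *)y, (void *)z);
            return 0;
        }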

    Read the article

  • need an empty XML while unmarshalling a JAXB annotated class

    - by sswdeveloper
    I have a JAXB-annotated class Customer, as follows:

        @XmlRootElement(namespace = "http://www.abc.com/customer")
        public class Customer {
            private String name;
            private Address address;
            @XmlTransient
            private HashSet<String> set = new HashSet<String>();

            public String getName() {
                return name;
            }

            @XmlElement(name = "Name", namespace = "http://www.abc.com/customer")
            public void setName(String name) {
                this.name = name;
                set.add("name");
            }

            public Address getAddress() {
                return address;
            }

            @XmlElement(name = "Address", namespace = "http://www.abc.com/customer")
            public void setAddress(Address address) {
                this.address = address;
                set.add("address");
            }

            public HashSet<String> getSet() {
                return set;
            }
        }

    I need to return an empty XML document representing this to the user, so that he may fill in the necessary values and send a request. So what I require is:

        <Customer>
            <Name></Name>
            <Address></Address>
        </Customer>

    If I simply create an empty object and marshal it:

        Customer cust = new Customer();
        marshaller.marshal(cust, sw);

    all I get is the top-level element, since the other fields of the class are unset. What can I do to get such an empty XML document? I tried adding the nillable=true annotation to the elements; however, that returns an XML with xsi:nil="true", which then causes my unmarshaller to ignore the element. How do I achieve this?
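
    One common approach, sketched below and not tested against the exact classes above: marshal a template object whose fields are set to empty values rather than left null, since JAXB skips null properties but emits an element for an empty string. The EmptyXmlDemo wrapper is hypothetical, as is the assumption that Address marshals its own fields as empty strings:

        import java.io.StringWriter;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;

        public class EmptyXmlDemo {
            public static void main(String[] args) throws Exception {
                // Pre-populate the template so every element has something to emit.
                Customer template = new Customer();
                template.setName("");               // marshals as <Name></Name>
                template.setAddress(new Address()); // marshals as an empty <Address/>

                Marshaller m = JAXBContext.newInstance(Customer.class).createMarshaller();
                m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
                StringWriter sw = new StringWriter();
                m.marshal(template, sw);
                System.out.println(sw);
            }
        }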

    Read the article

  • Is this method of static file serving safe in node.js? (potential security hole?)

    - by MikeC8
    I want to create the simplest possible node.js server to serve static files. Here's what I came up with:

        fs = require('fs');
        server = require('http').createServer(function(req, res) {
            res.end(fs.readFileSync(__dirname + '/public/' + req.url));
        });
        server.listen(8080);

    Clearly this maps http://localhost:8080/index.html to project_dir/public/index.html, and similarly for all other files. My one concern is that someone could abuse this to access files outside of project_dir/public. Something like this, for example:

        http://localhost:8080/../../sensitive_file.txt

    I tried this a little bit and it wasn't working, but it seems my browser was removing the ".." itself, which leads me to believe that someone could still abuse my poor little node.js server. I know there are npm packages that do static file serving, but I'm actually curious to write my own here. So my questions are: Is this safe? If so, why? If not, why not, and what is the "right" way to do this? My one constraint is that I don't want an if clause for each possible file; I want the server to serve whatever files I throw in a directory.
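
    For the record, a hardened sketch of the same server (my illustration, with the same public/ layout assumed; it ignores query strings and streaming): resolve the requested path against the public root, then refuse anything that escapes it. A browser normalises ../ away, but a raw client such as curl or a hand-written socket does not have to:

        var fs = require('fs');
        var path = require('path');
        var http = require('http');

        var root = path.join(__dirname, 'public');

        http.createServer(function (req, res) {
            // path.join normalises ".." segments; the prefix check then
            // rejects any request that resolved to a file outside root.
            var requested = path.join(root, req.url);
            if (requested.indexOf(root + path.sep) !== 0) {
                res.statusCode = 403;
                return res.end('Forbidden');
            }
            fs.readFile(requested, function (err, data) {
                if (err) { res.statusCode = 404; return res.end('Not found'); }
                res.end(data);
            });
        }).listen(8080);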

    Read the article

  • template specialization for static member functions; howto?

    - by Rolle
    I am trying to implement a template function which handles void differently, using template specialization. The following code gives me an "Explicit specialization in non-namespace scope" error in gcc:

        template <typename T>
        static T safeGuiCall(boost::function<T ()> _f)
        {
            if (_f.empty())
                throw GuiException("Function pointer empty");
            {
                ThreadGuard g;
                T ret = _f();
                return ret;
            }
        }

        // template specialization for functions with no return value
        template <>
        static void safeGuiCall<void>(boost::function<void ()> _f)
        {
            if (_f.empty())
                throw GuiException("Function pointer empty");
            {
                ThreadGuard g;
                _f();
            }
        }

    I have tried moving it out of the class (the class is not templated) and into the namespace, but then I get the error "Explicit specialization cannot have a storage class". I have read many discussions about this, but people don't seem to agree on how to specialize function templates. Any ideas?
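
    For reference, the shape that compiles: the explicit specialization must live at namespace scope and must not repeat a storage-class specifier such as static. A self-contained sketch, substituting std::function and std::runtime_error for boost::function, GuiException and the ThreadGuard lock:

        #include <functional>
        #include <iostream>
        #include <stdexcept>

        // Primary template at namespace scope.
        template <typename T>
        T safeGuiCall(std::function<T ()> f)
        {
            if (!f)
                throw std::runtime_error("Function pointer empty");
            return f();  // a real version would take the GUI lock here
        }

        // Explicit specialization for void: legal at namespace scope, and
        // written without 'static' (a storage class is not allowed here).
        template <>
        void safeGuiCall<void>(std::function<void ()> f)
        {
            if (!f)
                throw std::runtime_error("Function pointer empty");
            f();
        }

        int main()
        {
            std::cout << safeGuiCall<int>([] { return 42; }) << "\n";
            safeGuiCall<void>([] { std::cout << "void call\n"; });
        }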

    Read the article

  • Twisted: how to bind a server to a specified IP address? (solved)

    - by daccle
    I want to have a Twisted service (started via twistd) which listens for TCP/POST requests on a specified port on a specified IP address. Right now I have a Twisted application which listens on port 8040 on localhost. It is running fine, but I want it to listen only on a certain IP address, say 10.0.0.78. How do I manage that? This is a snippet of my code:

        application = service.Application('SMS_Inbound')
        smsInbound = resource.Resource()
        smsInbound.putChild('75sms_inbound', ReceiveSMS(application))
        smsInboundServer = internet.TCPServer(8001, webserver.Site(smsInbound))
        smsInboundServer.setName("SMS Handling")
        smsInboundServer.setServiceParent(application)
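
    Since the title marks this solved, the likely fix for the record: twisted.application.internet.TCPServer forwards extra keyword arguments to reactor.listenTCP, whose interface argument binds the listening socket to a single local address. A sketch against the snippet above (ReceiveSMS is the asker's own class, so it is stubbed out here; 10.0.0.78 is the example address from the question):

        from twisted.application import internet, service
        from twisted.web import resource
        from twisted.web import server as webserver

        application = service.Application('SMS_Inbound')
        smsInbound = resource.Resource()
        # smsInbound.putChild('75sms_inbound', ReceiveSMS(application))

        # interface= is passed through to reactor.listenTCP and restricts
        # the listener to this one local IP instead of all addresses.
        smsInboundServer = internet.TCPServer(8001, webserver.Site(smsInbound),
                                              interface='10.0.0.78')
        smsInboundServer.setName("SMS Handling")
        smsInboundServer.setServiceParent(application)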

    Read the article

  • Having a problem with a crawl service in .NET: server not responding to IP ping. Is it bandwidth or ht…

    - by Hamid
    Hi all. I have developed a web crawling service (a multi-threaded Windows service). It works fine, but sometimes the server's network stops responding and I can't ping the server's IP from the internet, although I can still ping the other network card (local IP) that has no internet access. After I log into the server over Remote Desktop and stop the crawling service, I can ping it again. What's my problem: a bandwidth limit, an exceeded maximum connection limit, or something else? How can I prevent this issue? Note: when the problem occurs, I can't open any website in a browser on the server either! Could you please help me? Thanks in advance.

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically, making it much easier than in previous versions to take advantage of the compression that's built into the Web server. IIS 7 also supports dynamic compression, which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing, so it works with any kind of Web application framework. While static compression on IIS 7 is super easy to set up, and is turned on by default for most text content (text/*, which includes HTML and CSS, as well as JavaScript, Atom, XAML and XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS -> ASP.NET hierarchy. Let's take a look at the two approaches available:

    Static compression: compresses static content from the hard disk. IIS can cache this content by compressing the file once, storing the compressed file on disk and serving the compressed alias whenever the static content is requested and hasn't changed. The overhead for this is minimal and it should be aggressively enabled.

    Dynamic compression: works against application-generated output, from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that requests it regenerates its content. As such, dynamic compression has a much bigger impact than static caching.

    How Compression is configured

    Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline, all the way from ApplicationHost.config down to the local web.config file. The following is the default setting in ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5, with a couple of small adjustments (JSON output added and dynamic compression enabled):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
              <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
              <dynamicTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/x-javascript" enabled="true" />
                <add mimeType="application/json" enabled="true" />
                <add mimeType="*/*" enabled="false" />
              </dynamicTypes>
              <staticTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/x-javascript" enabled="true" />
                <add mimeType="application/atom+xml" enabled="true" />
                <add mimeType="application/xaml+xml" enabled="true" />
                <add mimeType="*/*" enabled="false" />
              </staticTypes>
            </httpCompression>
            <urlCompression doStaticCompression="true" doDynamicCompression="true" />
          </system.webServer>
        </configuration>

    You can find documentation on the httpCompression and urlCompression keys here, respectively:

        http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx
        http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx

    The httpCompression Element – What and How to compress

    Basically, httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime types, matched against the Content-Type headers returned in HTTP responses. For example, I added the application/json mime type to my dynamic compression types above to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client.

    The urlCompression Element – Enables and Disables Compression

    The urlCompression element is a quick way to turn compression on and off. By default static compression is enabled server-wide and dynamic compression is disabled server-wide. This might be a bit confusing, because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute by the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The do*Compression attributes are the final determining factor in whether compression is enabled, so it's a good idea to be explicit! The default is doDynamicCompression="false" but doStaticCompression="true"!

    Static Compression is enabled by Default, Dynamic Compression is not

    Because static compression is very efficient in IIS 7, it's enabled by default server-wide and there probably is no reason to ever change that setting. Dynamic compression, however, since it's more resource-intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks to deal with, namely that enabling it in ApplicationHost.config doesn't work. Setting:

        <urlCompression doDynamicCompression="true" />

    in ApplicationHost.config appears to have no effect, and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice, because you're not likely to want dynamic compression in every application on a server; rather, dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in ApplicationHost.config doesn't work (or, more likely, is overridden and disabled somewhere lower in the configuration hierarchy). So: remember to set doDynamicCompression="true" in web.config!!!

    How Static Compression works

    Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it's fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server's hard disk, and then reads the content from this location when already-compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder, which is located at:

        %windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\

    If you look into the subfolders you'll find the compressed files. These files are pre-compressed, and IIS serves them directly to the client until the underlying files are changed. As I mentioned before, static compression is on by default and there's very little reason to turn that functionality off, as it is efficient and just works out of the box. The one tweak you might want to make is to set the compression level to maximum: since IIS compresses content only very infrequently, it makes sense to apply maximum compression. You can do this with the staticCompressionLevel setting on the scheme element:

        <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    Other than that, the default settings are probably just fine.

    Dynamic Compression – not so fast!

    By default dynamic compression is disabled, and that's actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn't make sense to compress *all* generated content, as that would generate a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is, you can play around with compression and see what impact it has on your server's performance.

    There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically, the httpCompression key has a couple of CPU-related attributes that can help minimize the impact of dynamic compression on a busy server:

        dynamicCompressionDisableCpuUsage
        dynamicCompressionEnableCpuUsage

    By default these are set to 90 and 50, which means that when CPU usage hits 90%, compression is disabled until CPU utilization drops back down to 50%. Again, this is quite sensible, as it uses CPU power for compression when it's available and backs off when the threshold has been hit. It's a good way to put some of that extra CPU power on your big servers to use when utilization is low. These settings are something you'll likely have to play with. I would probably set the upper limit a little lower than 90%, maybe around 70%, to make this a feature that kicks in only if there's lots of power to spare. I'm also not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even during low loads. Don't trust settings – do some load testing, or monitor your server in a live environment, to see what values make sense for your environment.

    Finally, for dynamic compression I tend to add one mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type:

        <add mimeType="application/json" enabled="true" />

    What about Deflate Compression?

    The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions deflate compression, and sure enough you can change the compression settings to:

        <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    to get deflate-style compression. The deflate algorithm produces slightly more compact output, so I tend to prefer it over GZip, but more HTTP clients (other than browsers) support GZip than deflate, so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away: changing the scheme in ApplicationHost.config didn't show up on the site immediately, and it required a full IISReset before I saw the change over to deflate-compressed content. Content was slightly more compressed with deflate – I'm not sure it's worth using the slightly less common compression type, but the option at least is available.

    IIS 7 finally makes GZip Easy

    In summary, IIS 7 finally makes GZip easy, even if the configuration settings are a bit obtuse and the documentation is seriously lacking. But once you know the basic settings I've described here, and the fact that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer, as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky, as it does add some overhead to servers, so it will probably require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc. – they all use gzip compression.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, ASP.NET

    Read the article

  • Qmail Patching Makes me Nervous

    - by JM4
    We have a system running CentOS 5 with Plesk 8.6 and Qmail. Our primary domain is hosted through Media Temple. When Plesk and Qmail are hosted on a single dedicated virtual server, Qmail reads the primary server IP and domain and reports those when sending emails from the system. Our pages are written in PHP, so we are using the mail() function. While our email goes out to everybody, several enterprise email domains reject it because it shows a different originating IP (our primary server IP and domain) than the domain we list in the 'from' address, and this is not modifiable. Every domain we own does, of course, have its own IP underneath our primary server IP. I have seen several places online that provide a patch, specifically one which allows domain binding:

        "DomainBindings -- For servers that host multiple domains or have
        multiple IP addresses assigned to them, it is sometimes useful (or
        important) to have qmail use a specific IP address for its outgoing
        mail. By default, qmail uses whatever address the OS chooses for all
        outbound connections. With this patch, you can specify which address
        to use. It uses a control file similar to smtproutes to specify the
        outbound IP address to use, based on the sender's domain." (pyropus.ca)

    First off, I do not have netqmail installed, so I'll need to find another source; but I am also completely unfamiliar with applying patches to qmail. Will I lose email services if I patch? Is it a simple apply-and-use process? Will my existing email accounts and data be preserved after the patch? I am very, very new to Unix/Linux, so this does make me a bit nervous, but I am the only person who can make the change and it is one our company HAS to have. Any ideas?

    Read the article

  • Trouble with local id / remote id configuration of VPN

    - by Lynn Owens
    I have a NetGear UTM firewall and a Windows machine running NetGear's VPN client. The Windows machine I can put on the UTM network and take off of it. When I am cabled into the local (internal) network, the following configuration works:

        UTM:
            Local Id:  Local WAN IP: (the UTM's WAN IP address)
            Remote Id: User FQDN: utm_remote1.com

        Client:
            Local Id:  DNS: utm_remote1.com
            Remote Id: (the UTM's WAN IP address)
            Gateway authentication: preshared key
            Policy remote endpoint: FQDN: utm_remote1.com

    But when I'm off the UTM's internal network and simply coming in from the internet, this does not work; it simply repeats SEND phase 1 before giving up. Since I know that the UTM's WAN IP is accessible from both inside and outside the network, I figured the problem was with the client's local id. So I tried the following:

        UTM:
            Local Id:  Local WAN IP: (the UTM's WAN IP address)
            Remote Id: (the DN of a self-signed certificate I created for the
                        client and uploaded into the UTM certificates)

        Client:
            Local Id:  (the DN of the aforementioned self-signed cert)
            Remote Id: (the UTM's WAN IP address)
            Gateway authentication: (the aforementioned self-signed cert)
            Policy remote endpoint: ... er, ... my choices are IP and FQDN....
                Not sure what to put here

    No matter what I've tried, it just keeps repeating SEND phase 1. Any ideas?

    Read the article

  • DHCP forwarding behind access list on a Cisco Catalyst

    - by Ásgeir Bjarnason
    I'm having some trouble with forwarding DHCP from a subnet behind an access list on a Cisco Catalyst 4500 switch. I'm hoping somebody can see the mistake I'm making. The subnet is defined like this (first three octets of IP addresses and the vrf name anonymized):

        interface Vlan40
         ip vrf forwarding vrf_name
         ip address 10.10.10.126 255.255.255.0 secondary
         ip address 10.10.10.254 255.255.255.0
         ip access-group 100 out
         ip helper-address 10.10.20.36
         no ip redirects

    I tried turning on a VMware machine on this subnet that was configured to use DHCP, but I never got a DHCP response and the DHCP server didn't receive a request. I tried putting the following in the access list:

        access-list 100 permit udp host 10.10.10.254 host 10.10.20.36 eq bootps
        access-list 100 permit udp host 10.10.10.254 host 10.10.20.36 eq bootpc
        access-list 100 permit udp host 10.10.20.36 host 10.10.10.254 eq bootps
        access-list 100 permit udp host 10.10.20.36 host 10.10.10.254 eq bootpc

    That didn't help. Can anybody see what the problem is? For context:

        - I know that the DHCP server works; our whole network is running off of it
        - I know that the subnet works, because we have active servers running on it
        - The DHCP scope is already defined on the DHCP server
        - The subnet is correctly defined on the VMware server (there are already
          servers running on the subnet in VMware)

    Edit 2012-10-19: This is solved! The subnet had formerly been defined as a /25 network, but was then expanded into a /24 network. When the DHCP scope was altered after this change it was done incorrectly: the gateway was moved to .254 and the leasable IP range was in the lower half of the /24 subnet, but we forgot to change the CIDR prefix from /25 to /24. This happened some 2 years ago, and we didn't need to use DHCP on this server network again until this week. Thank you MDMarra and Jason Seemann for looking at the question and trying to troubleshoot. Now I'm wondering if I should mark Jason's answer as the accepted answer (I am new to the Stack Exchange network, so I don't know the etiquette for what to do if I misstated the question, as in this case).

    Read the article

  • New router messed up server 2003 setup...

    - by Aceth
    Hey, we were sent a new 2Wire router today and configured it as best we can to match the old BT Voyager. We've also got X static IPs. We've managed to get our webserver onto one of the new IPs, public-facing. We then use a hardware firewall, which is in a DMZ, again with a different static IP; this firewall is the gateway for our internal LAN, with a few servers etc. The problem we're having is that only our PDC (primary domain controller, which has Exchange 2003 on it) can't ping externally, not even an external IP. We've connected laptops to the 2Wire router; they obtain a private IP (192.168.1.x) and work fine, can ping etc. Our other servers, with internal IPs behind the firewall, can ping out fine. We've connected to the firewall's logging console and the pings from the server are allowed through, so it's fine there. The server in question is Windows Server 2003 R2 Enterprise SP2 with Exchange 2003:

        - it doesn't have the firewall turned on
        - it has a static private IP
        - its gateway is pointing to the right one
        - its external static IP is routing fine inwards

    We've run out of ideas... help??

    Read the article

  • OpenVPN IPv6 Tunnel with radvd

    - by Arenstar
    Hello, I have an interesting question regarding IPv6 + OpenVPN. My version is OpenVPN 2.1.1. I have been given a native /64 IPv6 network (for this example, 2001:acb:132:acb::/64). The plan is to route this block through OpenVPN and into an office (for testing purposes). To explain: I have a CentOS box as the first Linux "router" in a datacenter, and an Ubuntu box as the second Linux "router" in the office. I have created a simple point-to-point tunnel using tun (based on IPv4 addresses to start the tunnel).

    I have assigned to the CentOS box:

        /sbin/ip addr add fed1::1/128 dev eth0
        /sbin/ip addr add fed2::2/128 dev tun0
        /sbin/ip route add 2001:acb:132:acb::/64 dev tun0   ## ipv6 block down the tunnel
        /sbin/ip route add ::/0 dev eth0                    ## default out to the gateway

    I have assigned to the Ubuntu box:

        /sbin/ip addr add fed1::3/128 dev tun0
        /sbin/ip addr add fed1::4/128 dev eth0
        /sbin/ip route add 2001:acb:132:acb::/64 dev eth0   ## ipv6 block down to eth0
        /sbin/ip route add ::/0 dev tun0                    ## default up the tunnel

    I have also run this on both servers:

        sysctl -w net.inet6.ip6.forwarding=1

    Looks good... right??? Wrong. :( I am not able to ping fed1::1 from fed1::4 (Ubuntu), though I can ping :4, :3 and :2. However, I can ping fed1::1 and fed1::2 from :3?????? (very strange) I am able to access the internet from any IPv6 interface on the CentOS box, but clearly not from the Ubuntu box. Further, I will eventually run radvd on the Ubuntu box's eth0 and autoconfigure the network with IPv6 addresses. Anyone with some advice / tips to help me out??? Cheers
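
    For the radvd step mentioned at the end, a minimal sketch of /etc/radvd.conf (my illustration: the interface and prefix are taken from the question; every value should be adjusted):

        # Advertise the delegated /64 on the office LAN so clients autoconfigure.
        interface eth0
        {
            AdvSendAdvert on;
            prefix 2001:acb:132:acb::/64
            {
                AdvOnLink on;
                AdvAutonomous on;
            };
        };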

    Read the article

  • pfSense router on a LAN with two gateways

    - by JohnCC
    I have a LAN with an ADSL modem/router on it. We have just gained an alternative high-speed internet connection at our location, and I want to connect the LAN to it, eventually dropping the ADSL. I've chosen to use a small pfSense box to connect the LAN to the new WAN connection.

    Two servers on the LAN run services accessible to the outside via NAT using the single ADSL WAN IP, and we have DNS records which point to this IP. I want to do the same via the new connection, using the WAN IP there. That connection permits multiple IPs, so I have configured pfSense using virtual IPs, 1:1 NAT and appropriate firewall rules.

    When I change the servers' default gateway settings to the pfSense box, I can access the services via the new WAN IPs without a problem. However, I can no longer access them via the old WAN IP. If I set the servers' default gateway back to the ADSL router, then the opposite is true: I can access the services via the ADSL IP, but not via the new one.

    In the first case, I believe this is because an incoming SYN packet arrives at the ADSL WAN IP and is NATed and sent to the internal IP of the server. The server responds with a SYN/ACK, which it sends via its default gateway, the pfSense box. The pfSense box sees a SYN/ACK for which it saw no SYN, and drops the packet.

    Is there any sensible way around this? I would like the services to be accessible via both IPs, for a short period at least, since once I change the DNS it will take a while before everyone picks up the new address.

    Read the article

  • DNS failover in a two datacenter scenario

    - by wanson
    I'm trying to implement a low-cost solution for website high availability, and I'm looking for the downsides of the following scenario.

    I have two servers with the same configuration and content, with MySQL replication (dual-master). They are in different datacenters; let's call them serverA and serverB. Users use serverA; serverB is more like a backup. Now I want to use DNS failover to switch users from serverA to serverB when serverA goes down. My idea is that I set up DNS servers (BIND/PowerDNS) on serverA and serverB (call them ns1.website.com and ns2.website.com, assuming I own website.com), then configure my domain to use them as its nameservers. Both DNS servers will return serverA's IP as my website's IP. If serverA goes down, I can (either manually or automatically from serverB) change the configuration of serverB's DNS to return serverB's IP as the website's IP. Of course the TTL will be low, as it's supposed to be in DNS failover setups. I know that it may take some time to switch to serverB (DNS TTL, time to detect serverA's failure, serverB DNS reconfiguration, etc.), and that some small part of users won't reach serverB anyway; I'm OK with that. But what are the other downsides of such an approach?

    An alternative scenario is that ns1.website.com returns serverA's IP as the website's IP, and ns2.website.com returns serverB's IP. But as far as I know, clients don't always use the primary nameserver and sometimes use the secondary one, so some small part of users would use serverB instead of serverA, which is not quite what I'd like. Can you confirm that DNS clients behave like that, and can you tell what percentage of clients would, statistically, use serverB instead of serverA? This scenario also has the downside that when serverA comes back up it will automatically be used as the website's primary server again, which is also a bad situation (cold cache, MySQL replication could have failed in the meantime, etc.), so I'm adding it only as a theoretical alternative.

    I was thinking about using a professional DNS failover company, but they charge by the number of DNS requests and the fees are very high (why?).
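
    To make the moving part concrete, a minimal BIND zone sketch for the chosen scenario (the hostnames come from the question; the addresses and timers are placeholders): the short TTL on the A records is what keeps the failover window small, and the failover action is rewriting those records on serverB's nameserver:

        ; website.com zone -- illustrative values only
        $TTL 300
        @       IN SOA  ns1.website.com. hostmaster.website.com. (
                        2012010101 ; serial
                        3600       ; refresh
                        600        ; retry
                        604800     ; expire
                        300 )      ; negative-caching TTL
                IN NS   ns1.website.com.
                IN NS   ns2.website.com.
        ns1     IN A    192.0.2.1      ; serverA (placeholder address)
        ns2     IN A    198.51.100.1   ; serverB (placeholder address)
        @       IN A    192.0.2.1      ; currently serverA; rewritten on failover
        www     IN A    192.0.2.1      ; currently serverA; rewritten on failover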

    Read the article
