Search Results

Search found 504 results on 21 pages for 'failover'.

  • Secondary backup server

    - by verdy
    I've been given the task of implementing a backup solution for when our website goes down. It is a dedicated server running CentOS 6. From what I've experienced, our server may go down because of a PHP application crash or a hardware failure. I have a couple of questions:

    In the first case, is it possible to have the server restart the PHP application automatically, and how can I do that? In my mind, if it is only the application that goes down, I can probably still make use of the server itself.

    In the second case, can I redirect requests to a secondary server? How can I do that, and what do I need other than another server? For now it would just be a simple server showing a static landing page; the system would then notify us via email that the primary server went down so that we can restart it manually. Is it possible to use just a VPS or even a shared host for the secondary server, since it will only serve a static page?

    Thanks. Any help would be much appreciated.
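
    For the application-crash case, one common approach is a watchdog that polls the site and restarts the web stack when it stops answering (monit or a systemd unit can do this off the shelf). Below is a minimal sketch of that idea in Java; the health-check URL, the restart command and the polling interval are all assumptions to adapt to your own stack.

        import java.net.HttpURLConnection;
        import java.net.URL;

        /**
         * Minimal watchdog sketch: poll the site, and if it fails several times
         * in a row, run a restart command. The URL and command below are
         * placeholders -- adapt them to your own server.
         */
        public class SiteWatchdog {
            private static final String CHECK_URL = "http://localhost/";              // assumed health-check URL
            private static final String[] RESTART = {"service", "httpd", "restart"};  // assumed restart command
            private static final int MAX_FAILURES = 3;

            public static void main(String[] args) throws Exception {
                int failures = 0;
                while (true) {
                    if (isUp()) {
                        failures = 0;
                    } else if (++failures >= MAX_FAILURES) {
                        System.err.println("Site down, restarting web server");
                        new ProcessBuilder(RESTART).inheritIO().start().waitFor();
                        failures = 0;
                    }
                    Thread.sleep(30_000); // poll every 30 seconds
                }
            }

            private static boolean isUp() {
                try {
                    HttpURLConnection conn = (HttpURLConnection) new URL(CHECK_URL).openConnection();
                    conn.setConnectTimeout(5_000);
                    conn.setReadTimeout(5_000);
                    return conn.getResponseCode() < 500; // a 5xx usually means the PHP app crashed
                } catch (Exception e) {
                    return false;
                }
            }
        }

    For the hardware-failure case the check has to run from outside the box (on the secondary server or a third party monitor), since a dead machine cannot redirect its own traffic.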

  • Best way to duplicate databases nightly?

    - by Margaret
    Hey all. We just got two new servers running Windows Server 2008. The intent is to make the machines pretty much identical, copying the content of the master to the slave on a nightly basis, so that if anything fails, the second copy can stand in immediately. It doesn't need to be up-to-the-minute mirroring, though I suppose that wouldn't hurt if performance is not affected.

    The two machines will, amongst other things, each be running an instance of SQL Server 2008. The aim is to duplicate the databases on the master down to the slave on a nightly basis. Unless I'm misunderstanding, the mirror database in a mirrored pair requires the principal to be present to work correctly; I'm hoping for a solution where we have a second machine that can be up and running with minimal downtime if the first one falls over.

    Am I misunderstanding mirroring? Is it the best way to do things, or should I use some other mechanism? If so, what?

  • Oracle on windows cluster with online/offline IPs

    - by yzador
    I have a Windows cluster (on Windows Server 2008) with nodes in different subnets, so the cluster has two IP addresses, one for each node (I'm talking not about the node IPs but about the cluster IPs). One is online, the other is offline. Is it possible to run Oracle Fail Safe in this configuration? I've tried to install it, but it gives me the following error when trying to verify a group or add a database to a group: "FS-10220: Network name maps to IP address in the cluster resource but maps to IP address on the system"

  • SBD killing both cluster nodes when there are even small SAN network problems

    - by Wieslaw Herr
    I am having problems with SBD STONITH in an openais-based cluster. Some background: the active/passive cluster has two nodes, node1 and node2, configured to provide an NFS service to users. To avoid split-brain problems, they are both configured to use SBD. SBD uses two 1 MB disks available to the hosts via a multipath fibre-channel network.

    The problems start if something happens to the SAN network. For example, today one of the Brocade switches got rebooted and both nodes lost 2 out of 4 paths to each disk, which resulted in both nodes committing suicide and rebooting. This, of course, was highly undesirable because a) there were paths left, and b) even if the switch were out for only 10-20 seconds, a reboot cycle of both nodes takes 5-10 minutes and all NFS locks would be lost.

    I tried increasing the SBD timeout values (to 10 s+ values, dump attached at the end), but a "WARN: Latency: No liveness for 4 s exceeds threshold of 3 s" hints that something isn't working as I would expect it to.

    Here is what I would like to know: a) Is SBD working as it should, killing nodes even when 2 paths are still available? b) If not, is the attached multipath.conf file correct? The storage controller we use is an IBM SVC (IBM 2145); should there be any specific configuration for it (as in multipath.conf.defaults)? c) How should I go about increasing the timeouts in SBD?

    Attachments: multipath.conf and SBD dump (http://hpaste.org/69537)

  • How to replicate a Windows server (IIS, files, configuration state)?

    - by Geo
    Maybe a better question is: what is the closest competitor to DoubleTake? I am looking to replicate a Windows production server so that if it fails I have an immediate backup. Any ideas?

    NOTE 1: I forgot to add that this server is on the Amazon EC2 cloud.

    NOTE 2: The main problem we have is recreating configuration state such as IIS, the FTP server, SQL Server and the SVN server.

    NOTE 3: So far I have been given three options as answers to my original question:

    AppAssurance -- after talking to their sales team, they do not support Amazon as a cloud provider. Basically there is a technical need to be able to reboot from a disk or similar media, so an ESX virtual machine environment will work, but not EC2.

    Acronis -- works as a Ghost-style backup. This will work for other types of scenarios.

    Use the Amazon EC2 API -- this option is ideal, but only works if you are developing a cloud application rather than hosting a regular application in a cloud scenario.

    This means that I am still looking for the answer. Any other ideas?

  • Link bonding across multiple switches?

    - by Bryan Agee
    I've read up a little bit on bonding NICs with ifenslave; what I'm having trouble understanding is whether special configuration is needed to split the bonds across two switches. For example, if I have several servers that each have two NICs, and two separate switches, do I just configure the bonds and plug one NIC from each server into switch #1 and the other into switch #2, or is there more to it than that?

    If the bonds are active-backup, will a NIC failure on a single machine mean that server becomes disconnected, since the rest of the machines are using the primary NIC and it is using the secondary? Or do you link the switches with one cable as well?

  • Load Balancing and Failover for Read-Only PostgreSQL Database

    - by Eric J.
    Scenario: multiple application servers host web services written in Java, running in SpringSource dm Server. To implement a new requirement, they will need to query a read-only PostgreSQL database.

    Issue: to support redundancy, at least two PostgreSQL instances will be running. Access to PostgreSQL must be load balanced and must automatically fail over to the instances that are still running if an instance goes down. Auto-discovery of newly started instances is desirable but not required.

    Research: I have reviewed the official PostgreSQL documentation on this issue. However, it focuses on the more general case of read/write access to the database. Top Google results tend to lead to older newsgroup messages or dead projects such as Sequoia or DB Balancer, as well as one active project, pgpool-II.

    Question: what are your real-world experiences with pgpool-II? What other simple and reliable alternatives are available?
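
    Since the callers are Java services, one low-tech alternative to a dedicated balancer is to have the data-access layer itself try a list of JDBC URLs and use the first instance that accepts a connection. The sketch below shows that idea only; the host names, database name and credentials are placeholders, and in practice you would wrap this in a connection pool rather than call DriverManager directly.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.util.Arrays;
        import java.util.List;

        /** Try each read-only PostgreSQL instance in turn and return the first one that answers. */
        public class FailoverConnectionFactory {
            // Placeholder hosts -- replace with your actual read-only instances.
            private static final List<String> URLS = Arrays.asList(
                    "jdbc:postgresql://pg-replica-1:5432/appdb",
                    "jdbc:postgresql://pg-replica-2:5432/appdb");

            public static Connection connect(String user, String password) throws SQLException {
                SQLException last = null;
                for (String url : URLS) {
                    try {
                        return DriverManager.getConnection(url, user, password);
                    } catch (SQLException e) {
                        last = e; // this instance is down -- fall through to the next one
                    }
                }
                throw last != null ? last : new SQLException("No PostgreSQL instance configured");
            }
        }

    Newer versions of the PostgreSQL JDBC driver can also take a multi-host URL, which pushes the same fallback logic into the driver itself; pgpool-II or a plain TCP balancer in front of the instances keeps it out of the application entirely.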

  • Active node stops resources when passive node is shut down

    - by Wakaru44
    Two nodes, active/passive, with these resources: a virtual IP, OpenLDAP, and the NFS mount where OpenLDAP stores its data. When both nodes were up, things worked fine: you could move resources away and put the active node in standby. But when I rebooted the passive node (with the resources on the active node) and the passive node lost connectivity, all the resources on the active node were stopped by Pacemaker.

    I'm reading the documentation right now, but I just need a quick tip to figure out what could be happening here. I'm using corosync and Pacemaker on RHEL 6.

  • How browsers handle multiple IPs

    - by Sandman4
    Can someone direct me to information on exact browser behaviour when a browser gets multiple A records for a given hostname (say ip1 and ip2) and one of them is not accessible? I am interested in EXACT details, like (but not limited to):

    Will the browser get both IPs from the OS, or only one? Which IP will the browser try first (random, or always the first one)? Now, let's say the browser started with the failed ip1: for how long will it keep trying ip1? If the user hits "stop" while it waits for ip1 and then clicks refresh, which IP will the browser try? What happens when it times out -- will it start trying ip2, or give an error? (And if an error, which IP will the browser try when the user clicks refresh?) When the user clicks refresh, will any browser attempt a new DNS lookup?

    Now let's assume the browser tried the working ip2 first. For the next page request, will the browser still use ip2, or might it randomly switch IPs? For how long do browsers keep IPs in their cache? When a browser sends a new DNS request and gets the SAME IPs, will it CONTINUE to use the same known-to-be-working IP, or does the process start from scratch, so it may try either of the two?

    Of course this may all be browser dependent, and may also vary between versions and platforms; I'd be happy to have the maximum of details. The purpose of this: I'm trying to understand what exactly users will experience when round-robin DNS is used and one of the hosts fails. Please, I'm NOT asking about how bad DNS load balancing is, so please refrain from answering "don't do it", "it's a bad idea", "you need heartbeat/proxy/BGP/whatever" and so on.
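
    The exact behaviour really is browser- and version-specific, but it may help to see what a client that does fall back across all A records looks like. The sketch below resolves every address for a name and tries each one with a short connect timeout, which is roughly the best case you can hope for from round-robin DNS; the host name and timeout are arbitrary placeholders.

        import java.net.InetAddress;
        import java.net.InetSocketAddress;
        import java.net.Socket;

        /** Resolve all A records for a host and connect to the first one that answers. */
        public class MultiIpConnect {
            public static Socket connect(String host, int port, int timeoutMs) throws Exception {
                Exception last = null;
                for (InetAddress addr : InetAddress.getAllByName(host)) {   // all A records, in resolver order
                    Socket socket = new Socket();
                    try {
                        socket.connect(new InetSocketAddress(addr, port), timeoutMs);
                        return socket;                                       // first reachable IP wins
                    } catch (Exception e) {
                        socket.close();
                        last = e;                                            // unreachable -- try the next record
                    }
                }
                throw last != null ? last : new Exception("No addresses for " + host);
            }

            public static void main(String[] args) throws Exception {
                try (Socket s = connect("www.example.com", 80, 3000)) {      // example.com is just a placeholder
                    System.out.println("Connected to " + s.getInetAddress());
                }
            }
        }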

  • glusterfs mounts get unmounted when 1 of the 2 bricks goes offline

    - by Shiquemano
    I have an odd case where one of the two replicated glusterfs bricks will go offline and take all of the client mounts down with it. As I understand it, this should not be happening: the clients should fail over to the brick that is still online, but that hasn't been the case. I suspect this is due to a configuration issue.

    Here is a description of the system: 2 gluster servers on dedicated hardware (gfs0, gfs1) and 8 client servers on VMs (client1, client2, client3, ..., client8). Half of the client servers are mounted with gfs0 as the primary, and the other half are pointed at gfs1. Each of the clients is mounted with the following entry in /etc/fstab:

        /etc/glusterfs/datavol.vol /data glusterfs defaults 0 0

    Here is the content of /etc/glusterfs/datavol.vol:

        volume datavol-client-0
          type protocol/client
          option transport-type tcp
          option remote-subvolume /data/datavol
          option remote-host gfs0
        end-volume

        volume datavol-client-1
          type protocol/client
          option transport-type tcp
          option remote-subvolume /data/datavol
          option remote-host gfs1
        end-volume

        volume datavol-replicate-0
          type cluster/replicate
          subvolumes datavol-client-0 datavol-client-1
        end-volume

        volume datavol-dht
          type cluster/distribute
          subvolumes datavol-replicate-0
        end-volume

        volume datavol-write-behind
          type performance/write-behind
          subvolumes datavol-dht
        end-volume

        volume datavol-read-ahead
          type performance/read-ahead
          subvolumes datavol-write-behind
        end-volume

        volume datavol-io-cache
          type performance/io-cache
          subvolumes datavol-read-ahead
        end-volume

        volume datavol-quick-read
          type performance/quick-read
          subvolumes datavol-io-cache
        end-volume

        volume datavol-md-cache
          type performance/md-cache
          subvolumes datavol-quick-read
        end-volume

        volume datavol
          type debug/io-stats
          option count-fop-hits on
          option latency-measurement on
          subvolumes datavol-md-cache
        end-volume

    The config above is the latest attempt at making this behave properly. I have also tried the following entry in /etc/fstab:

        gfs0:/datavol /data glusterfs defaults,backupvolfile-server=gfs1 0 0

    This was the entry for half of the clients, while the other half had:

        gfs1:/datavol /data glusterfs defaults,backupvolfile-server=gfs0 0 0

    The results were exactly the same as with the above configuration. Both configs connect everything just fine; they just don't fail over. Any help would be appreciated.

  • How to achieve redundancy across data centers?

    - by BrandonBT
    I have a LAMP server with a lot of hardware redundancy built in, so I am not worried about the server itself becoming unavailable. What I am worried about are potential network issues in the data center the server is in. What I would like to have is another server in another data center for redundancy; load balancing is less of a concern.

    With that said, I am relatively clueless on two points: how to have two servers in two geographically separate data centers that hold exactly the same data, in terms of both files and MySQL databases; and how to ensure that all traffic coming into one data center is automatically transferred to the other data center in the case of a network or server failure at the first one.

    Any guidance on how to accomplish the above two problems would be greatly appreciated.

  • Many ISPs block port 25; how do I choose an alternative port?

    - by Xeoncross
    I am building an application that will act as a combined MUA/MTA on different networks. However, many of the networks are with ISPs that block port 25 for SMTP. I would therefore like to open up a secondary port so that some of the installs can communicate on it if port 25 is closed. How do I choose a second port? I know some people use port 26 or port 2525. What is the correct way to choose a port that won't interfere with existing software?

  • How to set up a standby server

    - by lasko
    I have an application that connects to SQL Server 2008. What I want is to set up a standby server that is a mirror of the primary one, so that when the connection to the primary fails, the application automatically switches over to the standby server without my having to modify the application. If there is a way to do this, please describe it in detail, or point me to a third-party product that does it. Note that I need the connection in my application to point to one server only.
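
    With SQL Server 2008 the usual mechanisms for this are database mirroring or a failover cluster behind a single virtual name. With mirroring, most client libraries let you name the mirror as a failover partner in the connection string, so the application still points at one logical database. As a hedged illustration only: if the application happened to use the Microsoft JDBC driver, the connection could look like the sketch below (ADO.NET has an equivalent "Failover Partner=" keyword); the server names, database name and credentials are placeholders.

        import java.sql.Connection;
        import java.sql.DriverManager;

        /** Illustration of a mirrored-database connection with a failover partner (placeholder names). */
        public class MirroredConnection {
            public static Connection open() throws Exception {
                String url = "jdbc:sqlserver://PRIMARYSRV;databaseName=AppDb;"
                           + "failoverPartner=STANDBYSRV";   // the driver retries the partner if the principal is down
                return DriverManager.getConnection(url, "appUser", "secret");
            }
        }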

  • Nagios: Is it possible to have multiple IPs for a host?

    - by Aknosis
    In our office we have a dual-WAN setup: if our cable connection drops, we still get connectivity via our T1. The only issue is that our office network is then no longer reachable on the same IP, so all Nagios checks go critical because they can't connect. What would be awesome is if I could have Nagios try IP 1 by default, but if for some reason it's failing on that IP, try IP 2. I doubt this is possible with a default install, but I'm wondering if there are any add-ons or other magic that could make this work.
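
    One common workaround is a small wrapper check: a plugin that tries the primary IP first and falls back to the secondary, reporting OK if either answers. Nagios only cares that the plugin prints a status line and exits 0 (OK), 1 (WARNING) or 2 (CRITICAL), so it can be written in anything; the sketch below uses Java purely for illustration, and both addresses and the port are placeholders.

        import java.net.InetSocketAddress;
        import java.net.Socket;

        /** Nagios-style wrapper check: OK if either address answers on the given port. */
        public class CheckDualWan {
            public static void main(String[] args) {
                String[] addresses = {"203.0.113.10", "198.51.100.20"};   // placeholder WAN IPs
                int port = 80;
                for (String address : addresses) {
                    try (Socket s = new Socket()) {
                        s.connect(new InetSocketAddress(address, port), 5000);
                        System.out.println("OK - reachable via " + address);
                        System.exit(0);                                    // Nagios OK
                    } catch (Exception ignored) {
                        // fall through to the next address
                    }
                }
                System.out.println("CRITICAL - no address reachable");
                System.exit(2);                                            // Nagios CRITICAL
            }
        }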

  • Difference between all servers in one cluster and multiple clusters of servers?

    - by silla
    I'm not sure I understand the difference, or how it works, when servers are running in one cluster versus when there is more than one cluster of servers, with regard to high availability and load balancing. To me they are somehow the same; there is not really a big difference. Let's take a simple example: (1) two servers in one cluster, versus (2) two clusters with one server each.

    In case 1, if one server fails, the other one is able to continue the work. The same goes for load balancing: these two servers are able to balance the work between them. In case 2, the same thing: if one server fails...

    The only thing that could be a problem with case 1 is if the cluster itself fails (then both of the servers are dead). But is that even possible? I have been reading about clustering and high availability, but I don't think I really get this; probably I have not really understood how a cluster works. Are one cluster of two servers and two clusters of one server somehow the same, or are there really some big differences? What should I know about it? Thank you.

  • Mirror SQL Server 2008 to an AWS instance from our datacenter?

    - by Alex
    We are currently running a hosted POS system locally and would like to mirror it to AWS. We are new to AWS and would like to know the most cost-effective way to do this. Right now we have 2 DB servers and 2 web servers in one cabinet in CA, plus one tape drive, one firewall, and one SNA. We are thinking of replicating our system in AWS (using SQL Server 2008), mirroring both systems, and using a witness server between them to keep the data in sync. The goal is that if the CA datacenter goes down, AWS keeps running, users see no downtime, and all data stays in sync. Is anyone doing something similar? Would it be practical to use AWS in this fashion? Thanks

  • tomcat 6 - Cluster / BackupManager

    - by Kevin
    Hi, I have a question regarding clustering (session replication/failover) in Tomcat 6 using BackupManager. The reason I chose BackupManager is that it replicates the session to only one other server. I'll run through the example below to try to explain my question.

    I have 6 nodes set up in a Tomcat 6 cluster with BackupManager. The front end is one Apache server using mod_jk with sticky sessions enabled. Each node has one session: node1 has a session from client1, node2 has a session from client2, and so on.

    Now let's say node1 goes down; assuming node2 is the backup, node2 now has two sessions (for client1 and client2). The next time client1 makes a request, what exactly happens? Does Apache "know" that node1 is down and send the request directly to node2, or does it try each of the 6 instances and find out the hard way which one is the backup?

  • What are the ways to build a failover cluster?

    - by light
    I have a task where I need to build a failover cluster in two cases: first with servers on Red Hat Enterprise Linux 5.1, and second with SUSE Linux Enterprise 11 SP1. Both cases have a SAN. I know there are many ways to build a failover cluster, but I haven't been able to find out much more, so here is what I need: What are the ways to build one? (I only know about virtualization.) Any good book or resource to broaden my mind? I'll be glad to hear any suggestions. Thanks!

    EDIT #1: Failover of servers with a business application on them.

    EDIT #2: It would be great to hear a summary of solutions for the SLES servers.

    EDIT #3: So if I understand correctly, in my cases the main ways are to use built-in solutions or virtualization. So now I have additional questions: Do the blade manufacturers, for example HP or IBM, provide some solution (without virtualization)? Do I need an additional server to control the "heartbeat" between the main and redundant servers? (With virtualization) For example, I have several physical servers with VMs: do I need an additional server to monitor the availability of the VMs and to move VMs to another physical server if their physical server fails? Sorry for my poor English.

    EDIT #4: Failover of a VM or of the OS on a physical server. In both cases a SAN will be used; it's not specified, but I think with a file system image on it. I have started to think that my question is badly posed and I need to rework it.

  • How can I set up a fault-tolerant web-service built with Erlang/OTP?

    - by Jonas
    I would like to set up a fault-tolerant web service. I will build it with Erlang/OTP. At the beginning the service will be hosted on a few VPSes. Each VPS has its own IP address, and I can use more IPs if I need them. I would like to have the domain name point to a single IP address. How can I set up my Erlang/OTP application to be fault-tolerant behind a single IP address? Do I need to use a VLAN? Is there a way my Erlang/OTP application can use heartbeats and handle virtual IP addresses to route the traffic? Or how should I solve this problem?

  • JBoss Clustered Service that sends emails from txt file

    - by michael lucas
    I need a little push in the right direction. Here's my problem: I have to create an ultra-reliable service that sends email messages to clients whose addresses are stored in a txt file on an FTP server. A single txt file may contain an unlimited number of entries; most often the file contains about 300,000. The service exposes an interface with just two simple methods:

        TaskHandle sendEmails(String ftpFilePath);
        ProcessStatus checkProcessStatus(TaskHandle taskHandle);

    The sendEmails() method returns a TaskHandle with which we can ask for the ProcessStatus. For such a service to be reliable, clustering is necessary: processing a single txt file might take a long time, and restarting one node in the cluster should have no impact on sending emails. We use JBoss AS 4.2.0, which comes with a nice HASingletonController that ensures one instance of the service is running at any given time. But once a fail-over happens, the second instance should continue working from where the first one stopped. How can I share state between nodes in a cluster in such a way that there is no possibility of sending some emails twice?
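
    One way to make the fail-over resumable is to checkpoint progress in shared state that both nodes can see (a database the cluster shares is the simplest) and send in batches, committing the offset after each batch. The node that takes over then carries on from the last committed offset instead of restarting the file. The sketch below is only an outline of that idea, not JBoss-specific code; the table name, column names and batch size are assumptions.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.util.List;

        /**
         * Checkpointed batch sender (sketch). Progress for each task lives in a shared
         * table, e.g. email_task(task_id, next_index), so a node that takes over after
         * a fail-over resumes at the last committed offset instead of resending the file.
         */
        public class CheckpointedSender {
            private static final int BATCH_SIZE = 500;   // assumed batch size

            public void process(Connection db, String taskId, List<String> addresses) throws Exception {
                int next = readCheckpoint(db, taskId);
                while (next < addresses.size()) {
                    int end = Math.min(next + BATCH_SIZE, addresses.size());
                    for (int i = next; i < end; i++) {
                        sendEmail(addresses.get(i));     // your actual mail-sending call goes here
                    }
                    writeCheckpoint(db, taskId, end);    // commit progress before taking the next batch
                    next = end;
                }
            }

            private int readCheckpoint(Connection db, String taskId) throws Exception {
                try (PreparedStatement ps = db.prepareStatement(
                        "SELECT next_index FROM email_task WHERE task_id = ?")) {
                    ps.setString(1, taskId);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() ? rs.getInt(1) : 0;
                    }
                }
            }

            private void writeCheckpoint(Connection db, String taskId, int nextIndex) throws Exception {
                // Assumes auto-commit, so each UPDATE is durable on its own.
                try (PreparedStatement ps = db.prepareStatement(
                        "UPDATE email_task SET next_index = ? WHERE task_id = ?")) {
                    ps.setInt(1, nextIndex);
                    ps.setString(2, taskId);
                    ps.executeUpdate();
                }
            }

            private void sendEmail(String address) {
                // placeholder -- wire up JavaMail or your MTA here
            }
        }

    Note that a batch in flight when the active node dies can still be resent, so if duplicates are truly unacceptable the checkpoint has to be written per message, or the send itself made idempotent (for example by recording each address as sent before handing it to the MTA).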

  • How to achieve high availability?

    - by tanyehzheng
    My boss wants a system that takes into account a continent-wide catastrophic event. He wants two servers in the US and two servers in Asia (one login server and one worker server on each continent). In the event that an earthquake breaks the connection between the two continents, both sides should keep working on their own; when the connection is restored, they should sync with each other back to normal. An external cloud system is not allowed, as he has no confidence in it.

    The system should take scalability into account, meaning that adding new servers should be easy to configure, and the servers should be load balanced. The connection between the servers should be very secure (encrypted and sent over SSL, though SSL already takes care of the encryption). The system should let one and only one user log in per account (beware of latency between continents: two users sharing an account may reach both login servers at the same time).

    Please help. I'm already at my wit's end. Thank you in advance.

  • Load balancing using Mina example with Java DSL

    - by Flame_Phoenix
    So, recently I started learning Camel. As part of the process I decided to go through all of the examples (listed HERE and available when you DOWNLOAD the package with all the examples and docs) to see what I could learn. One of the examples, Load Balancing using Mina, caught my attention because it uses Mina in different JVMs and simulates a round-robin load balancer.

    I have a few problems with this example. First, it uses the Spring DSL instead of the Java DSL, which my project uses and which I now find a lot easier to understand (mainly because I am used to it). So the first question: is there a version of this example using only the Java DSL instead of the Spring DSL for the routes and the beans?

    My second question is code related. The description states, and I quote:

    "Within this demo every ten seconds, a Report object is created from the Camel load balancer server. This object is sent by the Camel load balancer to a MINA server where the object is then serialized. One of the two MINA servers (localhost:9991 and localhost:9992) receives the object and enriches the message by setting the field reply of the Report object. The reply is sent back by the MINA server to the client, which then logs the reply on the console."

    So, from what I read, I understand that MINA server 1 (for example) receives a report from the load balancer, changes it, and then sends that report back to some invisible client. Upon checking the code, I see no client Java class or XML, and when I run it, the server simply posts the results on the command line. Where is the client? What is this client?

    Here is the MINA server 1 code:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:camel="http://camel.apache.org/schema/spring"
               xsi:schemaLocation="
                 http://www.springframework.org/schema/beans
                 http://www.springframework.org/schema/beans/spring-beans.xsd
                 http://camel.apache.org/schema/spring
                 http://camel.apache.org/schema/spring/camel-spring.xsd">

          <bean id="service" class="org.apache.camel.example.service.Reporting"/>

          <camelContext xmlns="http://camel.apache.org/schema/spring">
            <route id="mina1">
              <from uri="mina:tcp://localhost:9991"/>
              <setHeader headerName="minaServer">
                <constant>localhost:9991</constant>
              </setHeader>
              <bean ref="service" method="updateReport"/>
            </route>
          </camelContext>
        </beans>

    I don't understand how the updateReport method magically prints the object on my console. What if I wanted to send the message to a third MINA server? How would I do it? (I would have to add a new route and send it to the URI of the third server, correct?)

    I know most of these questions may sound dumb, but I would appreciate it if anyone could help me. A Java DSL version of this would really help me.
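
    On the first question: the example does not appear to ship with a Java DSL variant, but translating the Spring route above is mostly mechanical. Below is a hedged sketch of what the mina1 server route, plus a round-robin load-balancer route with a third endpoint added, could look like in the Java DSL. The endpoint URIs, the Reporting bean and the updateReport method are taken from the example; the createReport factory method, the timer period and everything else are assumptions.

        import org.apache.camel.CamelContext;
        import org.apache.camel.builder.RouteBuilder;
        import org.apache.camel.example.service.Reporting;
        import org.apache.camel.impl.DefaultCamelContext;

        public class MinaRoutes {
            public static void main(String[] args) throws Exception {
                CamelContext context = new DefaultCamelContext();
                context.addRoutes(new RouteBuilder() {
                    @Override
                    public void configure() {
                        // Equivalent of the mina1 Spring route: receive the Report,
                        // tag it with the server name and let the bean fill in the reply.
                        from("mina:tcp://localhost:9991")
                            .setHeader("minaServer", constant("localhost:9991"))
                            .bean(new Reporting(), "updateReport");

                        // Load-balancer side (sketch): every 10 s build a Report and
                        // round-robin it over the MINA servers -- adding a third server
                        // is just one more endpoint in the to(...) list.
                        from("timer://report?period=10000")
                            .bean(new Reporting(), "createReport")   // assumed factory method
                            .loadBalance().roundRobin()
                                .to("mina:tcp://localhost:9991",
                                    "mina:tcp://localhost:9992",
                                    "mina:tcp://localhost:9993")     // hypothetical third server
                            .end()
                            .log("Reply: ${body}");
                    }
                });
                context.start();
                Thread.sleep(60_000);   // let the sketch run for a minute
                context.stop();
            }
        }

    On the second question: the "client" in the demo's description is the load-balancer route itself. If the MINA endpoints are used synchronously (in-out), the reply that updateReport puts into the Report object travels back over the same TCP connection to the balancer route, and that is what ends up logged on the console.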
