Search Results

Search found 8495 results on 340 pages for 'high availability'.


  • Suggestions for a nice, small programmer notebook with high battery life

    - by Emir
    Hi guys, I'm planning to buy a new notebook, since the Toshiba I have now has terrible battery life. I'm a web designer & programmer, and the main features of the notebook should be:
    - High battery life
    - Small, 13.3" or 14" screen
    - A fast processor and RAM that won't let me down while multitasking
    Anything other than these is secondary; I wouldn't like it to be very pricey though :) I'm now leaning towards the MacBook Pro 13.3", 2.66 GHz; it has 10 hours of battery life. However, I mostly develop on Windows, so I don't know how many hours of battery I will get if I run Windows 7 on Boot Camp. Your ideas and suggestions will be much appreciated.


  • php-cgi CPU usage is super high

    - by Ryan Thompson
    I am getting constantly high and wildly fluctuating CPU usage for php-cgi processes, as seen via "top" on my CentOS server. I have a Server Density account and this seems to be a consistent trend:
    User - PID - CPU % - MEM % - VSZ - RSS - TT - Stat - Started - Time - Command
    500 - 6389 - 22.4 - 3 - 271136 - 32380 - ? - S - 20:26 - 0:40 - /usr/bin/php-cgi
    There are about six or so of those entries in my process list at any given check-in. Any ideas what's causing this? I have FastCGI installed and the module is loading; I'm not sure why it isn't handling this, though. Any help would be greatly appreciated! Ryan
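    A few generic commands can help tie a busy php-cgi worker back to the script or request driving it; this is only a diagnostic sketch using standard Linux tools, and the PID shown is a placeholder.
        # overview of all php-cgi workers and their cumulative CPU/memory
        ps -C php-cgi -o pid,pcpu,pmem,etime,args

        # watch what one busy worker is doing right now (replace 6389 with a real PID)
        strace -p 6389 -s 200 -f

        # which .php files the worker currently has open
        lsof -p 6389 | grep '\.php'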


  • HAProxy -- pause/queue all traffic without losing requests

    - by Marc
    I basically have the same problem as mentioned in this thread -- I would like to temporarily suspend all requests to all servers of a certain backend, so that I can upgrade the backend and the database it uses. Since this is a live system, I would like to queue up requests, and send them to the backend servers once they've been upgraded. Since I'm doing a database upgrade with the code change, I have to upgrade all backend servers simultaneously, so I can't just bring one down at a time. I tried using the tcp-request options combined with removing the static healthcheck file as mentioned in that thread, but had no luck. Setting the default "maxconn" value to 0 seems to pause and queue connections as desired, but then there seems to be no way to increase the value back to a positive number without restarting HAProxy, which kills all requests that had been queued up until that point. (The "hot-reconfiguration" options using -sf and -st start a new process, which doesn't seem to do what I want). Is what I'm trying to do possible?
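    Side note: on HAProxy 1.5 and later the admin stats socket can change a frontend's maxconn at runtime ("set maxconn frontend"), which would avoid the restart-to-resume problem described above. The sketch below assumes such a build; fe_main and the socket path are placeholders, and whether a value of 0 is accepted for the pause step depends on the version.
        # haproxy.cfg: expose an admin-level stats socket
        global
            stats socket /var/run/haproxy.sock mode 600 level admin

        # pause: stop accepting new connections on the frontend (they queue in the kernel backlog)
        echo "set maxconn frontend fe_main 0" | socat stdio /var/run/haproxy.sock

        # ... upgrade the backend servers and the database ...

        # resume: raise the limit again without restarting, so queued connections are accepted
        echo "set maxconn frontend fe_main 2000" | socat stdio /var/run/haproxy.sock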


  • NFS v4, HA Migration, and stale handles on clients

    - by Karl Katzke
    I'm managing a server running NFS v4 with Pacemaker/OpenAIS. NFS is configured to use TCP. When I migrate the NFS server to another node in the Pacemaker cluster, even though the metadata is persisted, connections from the clients 'hang' and eventually time out after 90 seconds. After those 90 seconds, the old mountpoint becomes 'stale' and the mounted files can no longer be accessed. The 90-second grace period seems to be part of the server configuration and not the client configuration. I see this message on the server: kernel: NFSD: starting 90-second grace period. If I restart the NFS client on the client nodes after I migrate (unmounting and then remounting the share), then I don't experience the problem, but connections and file transfers are still interrupted. Three questions: What is the 90-second grace period, and what is it there for? How can I keep the files from going stale on the clients without restarting them after I migrate the NFS server to another node? Is it actually possible to migrate the NFS server without having large file uploads drop?
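    For context, the 90 seconds comes from the NFSv4 grace/lease period the server enforces after a (re)start so clients can reclaim locks and state; shortening it reduces how long clients block after a migration, though avoiding stale handles usually also requires the exports (including an explicit fsid) to be identical on both cluster nodes. A rough sketch, assuming a kernel nfsd that exposes these knobs under /proc/fs/nfsd and that the values are written while nfsd is stopped:
        # on the node that is about to take over the NFS service
        service nfs stop
        echo 10 > /proc/fs/nfsd/nfsv4leasetime
        echo 10 > /proc/fs/nfsd/nfsv4gracetime   # only present on newer kernels
        service nfs start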


  • Software for failover across multiple external hosts

    - by Lin
    I have multiple webservers with the same content, hosted across different providers. However, I can't seem to find a nice, simple failover solution. Load-balancing software (Pound, HAProxy, etc.) is unnecessary, and I need the flexibility to manage 100+ domains, so the paid DNS failover solutions I've found are too expensive. So far the simplest solution I've thought of is just to set a very low TTL (30 min - 1 hr) in each zone entry on my nameservers (running BIND), then continuously monitor each server and temporarily remove failed servers from the zone entries. But it seems like something like this should already exist. I only have root access to different VPSes running CentOS. Any suggestions? Thanks!
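    The low-TTL plan can be automated without editing zone files by sending dynamic updates from the monitoring script; a rough sketch is below, assuming BIND is configured to allow TSIG-signed updates for the zone. The key file, hostnames, and addresses are placeholders, and curl stands in for whatever health check is appropriate.
        #!/bin/bash
        # swing www to the backup server when the primary stops answering
        PRIMARY=192.0.2.1
        BACKUP=192.0.2.2
        if ! curl -fs -m 10 "http://$PRIMARY/" > /dev/null; then
            printf 'server ns1.example.com\nzone example.com\nupdate delete www.example.com. A\nupdate add www.example.com. 300 A %s\nsend\n' "$BACKUP" \
                | nsupdate -k /etc/named/failover.key
        fi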


  • QoS - split bandwidth across all IPs during high load

    - by Matthew Iselin
    We have a Linux-based router which is currently working fairly well, but our network only has a 1.5 Mbps incoming connection. The network is small, but during high-load periods some systems can end up dominating the bandwidth. For example, a client downloading a file can easily saturate the connection, leaving everyone else with barely any access to the outside world. Naturally, I'd like to fix this. I believe a combination of iptables rules and tc is in order, but I have no idea how to go about distributing the bandwidth evenly across the clients. It would be nice if there were a way to divide the bandwidth only across clients that are actually utilising the connection, rather than hard-limiting each client to (bandwidth / number of clients).
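    A common starting point, sketched below, is an HTB class capped just under the line rate with an SFQ qdisc inside it, so active flows share the bandwidth roughly evenly instead of one download starving everyone else; true per-client fairness (rather than per-flow) needs per-host classes or an SFQ hash keyed on the internal address, which is a refinement of the same idea. Assumptions: eth1 is the LAN-facing interface, and downloads are shaped as egress towards the LAN.
        # cap traffic heading to the LAN at slightly under the 1.5 Mbps WAN rate
        tc qdisc add dev eth1 root handle 1: htb default 10
        tc class add dev eth1 parent 1: classid 1:10 htb rate 1400kbit ceil 1400kbit
        # SFQ round-robins packets between active flows inside the class
        tc qdisc add dev eth1 parent 1:10 handle 10: sfq perturb 10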


  • BizTalk 2009 log shipping with SQL 2008

    - by Manjot
    Hi, I am setting up log shipping for the BizTalk 2009 databases. Following the http://msdn.microsoft.com/en-us/library/aa560961.aspx article, I am doing the following to set up BizTalk log shipping on the destination server. Enable ad-hoc queries with:
    sp_configure 'show advanced options',1
    go
    reconfigure
    go
    sp_configure 'Ad Hoc Distributed Queries',1
    go
    reconfigure
    go
    sp_configure 'show advanced options',0
    go
    reconfigure
    go
    Execute LogShipping_Destination_Schema & LogShipping_Destination_Logic in master on the destination server, then run:
    exec bts_ConfigureBizTalkLogShipping @nvcDescription = '',
        @nvcMgmtDatabaseName = '', @nvcMgmtServerName = '',
        @SourceServerName = null, -- null indicates that this destination server restores all databases
        @fLinkServers = 1 -- 1 automatically links the server to the management database
    When I run this I am receiving the following error: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. After some research I found some info: usually this error means that the SQL Server Service Principal Name (SPN) was not configured, and NTLM was not being used as an authentication mechanism. The SQL services are running under different domain accounts. So, I asked the domain admin to create SPNs for the SQL service accounts on both the source and destination servers, using both the name and the FQDN, and to enable the computer and service accounts for delegation. When I run the following: select * from sys.dm_exec_connections I get the same error: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Any help please?
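    For reference, registering the SPNs with setspn looks roughly like this; the account and host names are placeholders, and the SPN has to be created on the domain account the SQL Server service actually runs as, once for the NetBIOS name and once for the FQDN of every instance involved in the hop.
        setspn -A MSSQLSvc/SQLSRC:1433 CORP\svc-sql-src
        setspn -A MSSQLSvc/SQLSRC.corp.local:1433 CORP\svc-sql-src
        REM verify what ended up registered on the account
        setspn -L CORP\svc-sql-src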


  • MySQL: Pacemaker cannot start the failed master as a new slave?

    - by quanta
    I'm setting up failover for MySQL replication (1 master and 1 slave) following this guide: https://github.com/jayjanssen/Percona-Pacemaker-Resource-Agents/blob/master/doc/PRM-setup-guide.rst
    Here's the output of crm configure show:
    node serving-6192 \
        attributes p_mysql_mysql_master_IP="192.168.6.192"
    node svr184R-638.localdomain \
        attributes p_mysql_mysql_master_IP="192.168.6.38"
    primitive p_mysql ocf:percona:mysql \
        params config="/etc/my.cnf" pid="/var/run/mysqld/mysqld.pid" socket="/var/lib/mysql/mysql.sock" replication_user="repl" replication_passwd="x" test_user="test_user" test_passwd="x" \
        op monitor interval="5s" role="Master" OCF_CHECK_LEVEL="1" \
        op monitor interval="2s" role="Slave" timeout="30s" OCF_CHECK_LEVEL="1" \
        op start interval="0" timeout="120s" \
        op stop interval="0" timeout="120s"
    primitive writer_vip ocf:heartbeat:IPaddr2 \
        params ip="192.168.6.8" cidr_netmask="32" \
        op monitor interval="10s" \
        meta is-managed="true"
    ms ms_MySQL p_mysql \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" globally-unique="false" target-role="Master" is-managed="true"
    colocation writer_vip_on_master inf: writer_vip ms_MySQL:Master
    order ms_MySQL_promote_before_vip inf: ms_MySQL:promote writer_vip:start
    property $id="cib-bootstrap-options" \
        dc-version="1.0.12-unknown" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        no-quorum-policy="ignore" \
        stonith-enabled="false" \
        last-lrm-refresh="1341801689"
    property $id="mysql_replication" \
        p_mysql_REPL_INFO="192.168.6.192|mysql-bin.000006|338"
    crm_mon:
    Last updated: Mon Jul 9 10:30:01 2012
    Stack: openais
    Current DC: serving-6192 - partition with quorum
    Version: 1.0.12-unknown
    2 Nodes configured, 2 expected votes
    2 Resources configured.
    ============
    Online: [ serving-6192 svr184R-638.localdomain ]
    Master/Slave Set: ms_MySQL
        Masters: [ serving-6192 ]
        Slaves: [ svr184R-638.localdomain ]
    writer_vip (ocf::heartbeat:IPaddr2): Started serving-6192
    To test failover, I edited /etc/my.cnf on serving-6192 with a syntax error, and it worked fine: svr184R-638.localdomain was promoted to become the master, and writer_vip switched to svr184R-638.localdomain. Current state:
    Last updated: Mon Jul 9 10:35:57 2012
    Stack: openais
    Current DC: serving-6192 - partition with quorum
    Version: 1.0.12-unknown
    2 Nodes configured, 2 expected votes
    2 Resources configured.
    ============
    Online: [ serving-6192 svr184R-638.localdomain ]
    Master/Slave Set: ms_MySQL
        Masters: [ svr184R-638.localdomain ]
        Stopped: [ p_mysql:0 ]
    writer_vip (ocf::heartbeat:IPaddr2): Started svr184R-638.localdomain
    Failed actions:
        p_mysql:0_monitor_5000 (node=serving-6192, call=15, rc=7, status=complete): not running
        p_mysql:0_demote_0 (node=serving-6192, call=22, rc=7, status=complete): not running
        p_mysql:0_start_0 (node=serving-6192, call=26, rc=-2, status=Timed Out): unknown exec error
    After removing the wrong syntax from /etc/my.cnf on serving-6192 and restarting corosync, what I would like to see is serving-6192 starting as a new slave, but it doesn't:
    Failed actions:
        p_mysql:0_start_0 (node=serving-6192, call=4, rc=1, status=complete): unknown error
    Here's the snippet of the logs which I'm suspecting:
    Jul 09 10:46:32 serving-6192 lrmd: [7321]: info: rsc:p_mysql:0:4: start
    Jul 09 10:46:32 serving-6192 lrmd: [7321]: info: RA output: (p_mysql:0:start:stderr) Error performing operation: The object/attribute does not exist
    Jul 09 10:46:32 serving-6192 crm_attribute: [7420]: info: Invoked: /usr/sbin/crm_attribute -N serving-6192 -l reboot --name readable -v 0
    The full logs: http://fpaste.org/AyOZ/
    The strange thing is that I can start it manually:
    export OCF_ROOT=/usr/lib/ocf
    export OCF_RESKEY_config="/etc/my.cnf"
    export OCF_RESKEY_pid="/var/run/mysqld/mysqld.pid"
    export OCF_RESKEY_socket="/var/lib/mysql/mysql.sock"
    export OCF_RESKEY_replication_user="repl"
    export OCF_RESKEY_replication_passwd="x"
    export OCF_RESKEY_test_user="test_user"
    export OCF_RESKEY_test_passwd="x"
    sh -x /usr/lib/ocf/resource.d/percona/mysql start: http://fpaste.org/RVGh/
    Did I do something wrong?
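    One thing worth trying before digging deeper: failed actions like these have to be cleared before Pacemaker will attempt the resource on that node again, and restarting corosync alone does not reset the fail count. A hedged sketch with the crm shell is below; separately, the 'readable' crm_attribute line in the log may point at a node attribute the Percona agent expected but could not find, which is worth re-checking against the PRM setup guide.
        # forget the failed operations and the fail count for p_mysql on the old master
        crm resource cleanup p_mysql
        crm resource failcount p_mysql show serving-6192
        crm resource failcount p_mysql delete serving-6192
        # then watch whether it comes back as a slave
        crm_mon -1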


  • What's the difference between keepalived, corosync, and the others?

    - by Matt
    I'm building a failover firewall for a server cluster and have started looking at the various options. I'm more familiar with CARP on FreeBSD, but need to use Linux for this project. Searching Google has produced several different projects, but no clear information about the features they provide. CARP gave me virtual interfaces that fail over; I am not really clear on whether that's what corosync does, or whether that's what pacemaker does. On the other hand, I did manage to get keepalived working. However, I noted that corosync provides native support for InfiniBand, which would be useful for me. Perhaps someone could shed some light on the differences between:
    - corosync
    - keepalived
    - pacemaker
    - heartbeat
    Which product would be the best fit for router failover?
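    Roughly speaking, corosync and heartbeat are cluster membership/messaging layers, pacemaker is the resource manager that runs on top of one of them, and keepalived implements VRRP, which is the closest Linux equivalent to CARP's floating addresses. For a two-box firewall VIP, a minimal keepalived.conf sketch looks like the following; the interface, router ID, password, and addresses are placeholders.
        vrrp_instance FW_VIP {
            state MASTER              # BACKUP on the second firewall
            interface eth0
            virtual_router_id 51
            priority 101              # lower (e.g. 100) on the backup
            advert_int 1
            authentication {
                auth_type PASS
                auth_pass s3cret
            }
            virtual_ipaddress {
                192.168.0.254/24
            }
        }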


  • Ideas for scaling out database architecture

    - by andrew
    We're looking to scale out our existing database architecture and need some advice on which way to go. We currently have 2 web servers behind a load balancer that both read & write to a single master database which replicates to a slave. Ideally, I'd like each of the webservers to point to their own master DB and have the data between the 2 synchronised but from what I've read, using any kind of master-master or ring-replication is discouraged. I'm looking for a general "what do other people do" kind of answer - database vendor isn't a concern at the moment but we'd like to stay with MySQL or convert to MSSQL. Any ideas would be gratefully received. Many thanks, Andrew


  • Performance issue when configuring non HA VM in cluster

    - by laiys
    Hi, I saw this article: http://technet.microsoft.com/en-us/library/cc764243.aspx. A quote taken from the link: "Important: It is recommended that you not deploy virtual machines that are not highly available on your host clusters. Although you can do this by using Hyper-V (VMM does not allow it), the non-highly available virtual machines will consume resources that otherwise would be available to the HAVMs." What kind of resources (CPU, memory, NIC, etc.) will a non-HA VM consume? Just curious, since not every VM in production needs to be in the failover cluster with Live Migration. If I put a VM onto the CSV but do not make it highly available, what impact does that have, given that I allocate the same vCPU, vNIC and memory to the VM (not to mention that I lose the failover feature)? Curious to understand more about this. Please advise. Thanks


  • All ESX4 hosts not responding after rebooting vCenter server

    - by hojyokinmo
    Hello. I'm building a setup with 5 ESX hosts and a vCenter server, and I'm testing FT on the system. Today the vCenter OS (Windows 2008) demanded a system reboot. After rebooting the vCenter server, all of the ESX hosts have an alert icon and indicate "not responding". I tried to connect the ESX (version 4) hosts to vCenter again. They came back briefly, but after a few seconds they turned red again. All of them have ping connectivity to the vCenter server. There are no red messages in Tasks and Events. Two messages are shown in the cluster summary:
    - There aren't sufficient resources for HA failover.
    - Can't contact primary HA agent.
    I'm also not able to reset the FT configuration because none of the VMs can be accessed from vCenter. What should I do?
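    When hosts that still answer ping show up as "not responding", a common first step is to restart the management agents on each host and let vCenter reconnect; the sketch below assumes classic ESX 4 with a service console (it does not apply to ESXi).
        # on each affected ESX 4 host, from the service console
        service mgmt-vmware restart    # hostd, the host management agent
        service vmware-vpxa restart    # vpxa, the vCenter agent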


  • How to handle server failure in an n-tier architecture?

    - by andy
    Imagine I have an n-tier architecture in an auto-scaled cloud environment with, say:
    - a load balancer in a failover pair
    - reverse proxy tier
    - web app tier
    - db tier
    Each tier needs to connect to the instances in the tier below. What are the standard ways of connecting tiers to make them resilient to failure of nodes in each tier? I.e. how does each tier get the IP addresses of each node in the tier below? For example, if all reverse proxies should route traffic to all web app nodes, how could they be set up so that they don't send traffic to dead web app nodes, and so that when new web app nodes are brought online they can send traffic to them? I could run an agent that would update all the configs on all the nodes, but it seems inefficient. I could put an LB pair between each tier, so the tier above only needs to connect to the load balancers, but how do I handle the problem of the LBs dying? This just seems to shunt the problem of tier A needing to know the IPs of all nodes in tier B to all nodes in tier A needing to know the IPs of all LBs between tiers A and B. For some applications, they can implement retry logic if they contact a node in the tier below that doesn't respond, but is there any way that some middleware could direct traffic to only live nodes in the following tier? If I were hosting on AWS I could use an ELB between tiers, but I want to know how I could achieve the same functionality myself. I've read (briefly) about heartbeat and keepalived - are these relevant here? What are the virtual IPs they talk about, and how are they managed? Are there still single points of failure when using them?
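    As a concrete sketch of "middleware that only routes to live nodes": an HAProxy backend with active health checks pulls a dead node out of rotation and re-adds it when the check passes again; the names, addresses, and /healthz endpoint below are placeholders. The proxy pair itself is then typically made redundant with a shared virtual IP via VRRP (keepalived) or heartbeat, which is what those projects provide.
        backend web_app
            option httpchk GET /healthz
            server app1 10.0.1.11:8080 check inter 2s fall 3 rise 2
            server app2 10.0.1.12:8080 check inter 2s fall 3 rise 2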


  • Amazon EC2 instance was not available for a few minutes (Amazon showed that everything was OK)

    - by Salvador Dali
    A few minutes ago my Amazon EC2 instance was unavailable for a few minutes. During this time I was able neither to connect to the web site over HTTP nor to SSH into it. I was also unable to connect to the Amazon management console for some of that time (less than the period my instance was unavailable). When I was able to connect to the management console, it showed me that everything was running smoothly (but I still could not connect to the instance in any way for a minute or two). During this time I checked their status page, just to see that there were no issues (my instance is in Ireland and there is nothing wrong there today). After that I was able to log in. I checked the logins with last to see that no one except me had logged in. I also looked in the Apache logs and there were no errors or warnings during this time. Right now in the Amazon monitoring I see a small spike in CPU in the last 15 minutes (but only from 10% to about 20%). I have no idea what it can be (I have never experienced anything like this before), and therefore I have no idea how scared I should be or what else I should look for. Can anyone give me a hint what my actions should be in such a situation?


  • What's the best way to set up an IIS7 web farm for an ASP.NET application?

    - by sontek
    We are looking to set up an IIS7 web farm. We have 2 IIS7/Windows Server 2008 boxes that will act as the load-balanced web servers. How do you set up IIS/Windows Server 2008 to handle balancing the requests between the 2 servers? What is the best way to sync deployments so we only have to deploy to one place? Can we just have them sync their structures, or do we have to use a NAS/network share?
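    One way to keep the two nodes in sync without a NAS is to deploy to one box and mirror the site to the other with Web Deploy (msdeploy); IIS7's shared configuration feature can do something similar for the IIS configuration itself. A hedged sketch, assuming Web Deploy is installed on both servers and that MyApp/WEB02 are placeholder names:
        msdeploy -verb:sync ^
            -source:contentPath="C:\inetpub\wwwroot\MyApp" ^
            -dest:contentPath="C:\inetpub\wwwroot\MyApp",computerName=WEB02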


  • X-Window Modeline for high resolution modes on NEC LCD 22WV monitor

    - by Jakub Narebski
    What should an X Window System (X.Org) modeline for xorg.conf look like for high-resolution modes (including the recommended 1680x1050 @ 50 Hz resolution) on a 22" NEC LCD 22WV monitor? X.Org unfortunately autodetects only 800x600 and lower-resolution SVGA modes correctly. How can I generate a proper "Modeline" line for xorg.conf? Is the information included in NEC_Datasheet_LCD22WV-english.pdf (found on the NEC LCD 22WV product info page) enough? What tools are available for generating a proper modeline for an LCD monitor on Linux? MS Windows (Windows XP Home) correctly detects and uses the 1680x1050 resolution; can I somehow get the modeline information from MS Windows?
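    The usual tools are cvt (or the older gtf), which print a ready-made Modeline for a given resolution and refresh rate; for an LCD driven over DVI, cvt's reduced-blanking mode (-r) at 60 Hz is often a better starting point than 50 Hz. A sketch is below; the Identifier is arbitrary and the exact timing numbers are whatever cvt prints on your system, the line shown being a typical example.
        cvt 1680 1050 60
        # paste the printed Modeline into the Monitor section of xorg.conf, e.g.:
        Section "Monitor"
            Identifier "NEC22WV"
            Modeline "1680x1050_60.00"  146.25  1680 1784 1960 2240  1050 1053 1059 1089 -hsync +vsync
        EndSection
        # and reference it from the Screen section with: Modes "1680x1050_60.00"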


  • Configuring HAProxy with memcache with failover

    - by Lawrie Matthews
    I'm configuring a new set of servers for an existing Wordpress site, and it's been requested that memcache be available and made more resilient. The idea proposed is to have HAProxy send requests to one of the two servers; if that memcache instance is inaccessible, then it should switch to the second, but it should not switch back to the first if it comes back up, unless the second is then unavailable. This doesn't appear to be a particularly common use case, and I've not found much along these lines except to possibly set up the first node with an enormous rise value, such as:
    server server1 10.112.58.16:11211 check inter 5s fall 3 rise 99999999
    server server2 10.112.58.19:11211 check backup
    which fails over as expected when server1 is unavailable. It won't ever fall back to server1, though, even if server2 goes offline. Can this be made to work?


  • Zero-channel RAID for high-performance MySQL server (IBM ServeRAID 8k): any experience/recommendations?

    - by prs563
    We are getting an IBM rack-mount server that comes with the IBM ServeRAID 8k storage controller with zero-channel RAID and 256 MB of battery-backed cache. It can support RAID 10, which we need for our high-performance MySQL server, which will have 4 x 15,000 RPM 300 GB SAS HDDs. This is mission-critical and we want as much bandwidth and performance as possible. Is this a good card, or should we replace it with another IBM RAID card? The IBM ServeRAID 8k SAS Controller option provides 256 MB of battery-backed 533 MHz DDR2 standard power memory in a fixed mounting arrangement. The device attaches directly to the IBM planar, which can provide full RAID capability.
    Manufacturer: IBM
    Manufacturer Part #: 25R8064
    Cost Central Item #: 10025907
    Product Description: IBM ServeRAID 8k SAS - Storage controller (zero-channel RAID) - RAID 0, 1, 5, 6, 10, 1E
    Device Type: Storage controller (zero-channel RAID) - plug-in module
    Buffer Size: 256 MB
    Supported Devices: Disk array (RAID)
    Max Storage Devices Qty: 8
    RAID Level: RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 1E
    Manufacturer Warranty: 1 year warranty


  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties.
    What we need:
    - HA
    - a scalable storage backend
    - offline/disconnected operation on the client to account for network outages
    - cross-platform access
    - client-side access from certainly Windows (probably XP upwards), possibly Linux
    - backend integrates with AD/LDAP (permission management (user/group management, ...))
    - should work reasonably well over slow WAN links
    Another problem is that we don't really know all possible use cases here, i.e. whether people need to be able to have concurrent access to shared files or will only be accessing their own files, so a possible solution needs to account for concurrent access and what conflict management would look like in this case from a user's point of view. This two-year-old blog post sums up the impression that I have been getting during the last couple of days of research: there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions, but none that supports disconnected operation nicely and natively. I am hoping that we have missed an obvious solution.
    What we have tried:
    OpenAFS. We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week, but there are several problems with it:
    - it's a real pain to set up
    - there are no official RHEL/CentOS packages
    - the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs, which is an absolute no-go
    - Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8; the current client for the 1.7 branch does, but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7.
    Suffice to say, we couldn't get it working, and the whole setup has been so unstable and complicated to set up that it's just not an option for production.
    Samba + Unison. Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation, we went for a simpler idea that would sync files against a Samba server using Unison. This has the following properties:
    - Samba integrates with ADs; it's a pain, but can be done.
    - Samba solves the problem of remotely accessing the storage from Windows, but introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we need an HA Samba setup on top of that to maintain HA, which probably adds a lot of additional complexity. I vaguely remember trying to implement redundancy with Samba before, and I could not silently fail over between servers.
    - Even when online, you are working with local files, which will result in more conflicts than would be necessary if a local cache were only touched when disconnected.
    - It's not automatic. We cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler, but you cannot really do it in a satisfactory way (a sketch of such a scheduled profile is shown below). On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well, or even at all.
    Samba + "Offline Files". After that we became a little desperate and gave Windows "Offline Files" a chance. We figured that having something built into the OS would reduce administrative effort, helps with blaming someone else when it's not working properly, and should just work, since people have been using it for years. Right? Wrong. We really wanted it to work, but it just doesn't. 30 minutes of copying files around and unplugging network cables/disabling network interfaces left us with undeletable files on the server (!) and conflicts that should not even be conflicts; the failures were silent, too (there is only a tiny notification in the Windows Explorer status bar, which doesn't even open Sync Center if you click on it). In the end, we had one successful sync of a tiny text file; everything else just exploded horribly. Beyond that, there are other problems:
    - Microsoft admits that "Offline Files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all, which would mean those files become unavailable if the connection drops.
    - In Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions.
    Summary: unless there is another fault-tolerant DFS that supports Windows natively, I assume that stacking an HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?
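    For reference, the scheduled Unison-against-a-share setup described above would look roughly like the sketch below; the profile name, paths, drive letter, and interval are placeholders, and it assumes unison is on the PATH. It is a sketch of the idea rather than a recommendation, given the scaling concerns already mentioned.
        # %USERPROFILE%\.unison\work.prf -- sync a local working copy against the mapped Samba share
        root = C:\Users\alice\work
        root = S:\alice\work
        batch = true
        auto = true
        prefer = newer
        times = true

        # run the profile every 15 minutes via the Windows task scheduler
        schtasks /create /tn "UnisonSync" /sc minute /mo 15 /tr "unison work"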


  • Diagnosing high CPU wait

    - by Will
    I have a monitoring server that is running icinga/collectd/graphite with about 50 hosts. I have noticed high load and sluggish performance on the box. If you take a look at top, you'll see:
    Cpu(s): 0.6%us, 0.2%sy, 0.0%ni, 7.6%id, 23.4%wa, 0.0%hi, 0.2%si, 0.0%st
    Notice the HUGE %wa value, which as far as I know means a network or disk bottleneck. ifconfig shows no dropped packets and there's not a ton of bandwidth going on, so that leaves disk issues, right? There's not a lot of disk writing going on either... iotop reports we're only writing a little over 1 MB per second, and the RAID tool reports everything is A-OK with write caching enabled. How do I go about figuring out how to fix this?
    UPDATE: iostat -x output is:
    avg-cpu:  %user %nice %system %iowait %steal %idle
               0.62  0.10    0.31    9.65   0.00 89.31
    Device: rrqm/s wrqm/s   r/s   w/s  rsec/s wsec/s avgrq-sz avgqu-sz  await svctm %util
    sda       0.21  33.34 83.55 16.54 1599.94 399.07    19.97    43.21 416.98  3.71 37.13
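    With roughly 84 reads/s and an await over 400 ms, the array looks seek-bound rather than throughput-bound, which is common for carbon/whisper workloads; per-process I/O accounting narrows down who is issuing those reads and writes. A small diagnostic sketch using standard tools (sysstat's pidstat and iotop):
        # per-process disk I/O, sampled every 5 seconds
        pidstat -d 5
        # cumulative per-process I/O, only show processes actually doing I/O
        iotop -o -P -a
        # confirm whether reads (e.g. whisper lookups) or writes dominate over time
        iostat -x 5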


  • RabbitMQ - How do I configure servers for zero-downtime upgrades?

    - by Terence Johnson
    Having read through the docs and RabbitMQ in Action, creating a RabbitMQ cluster seems straightforward enough, but upgrading or patching an existing RabbitMQ cluster seems to require the whole cluster to be restarted. Is there a way to combine clustering, shovel, federation, and load balancing to make a rolling upgrade possible without losing queues or messages, or have I missed something slightly more obvious?
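    As a point of reference, within a compatible version range the usual pattern is a rolling restart of one node at a time, while upgrades across incompatible Erlang/RabbitMQ versions have historically meant standing up a second cluster and draining traffic into it via shovel or federation plus the load balancer. A hedged sketch of the per-node step, assuming mirrored queues so the queues survive a node going away:
        # on one node at a time
        rabbitmqctl stop_app
        # upgrade the rabbitmq-server / erlang packages here
        rabbitmqctl start_app
        rabbitmqctl cluster_status   # confirm the node rejoined before moving on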


  • Unable to destroy Windows 2008 R2 failover cluster after SAN rebuild

    - by Zack
    I created a Windows 2008 R2 failover cluster for a SQL 2008 active/passive cluster. This two-node cluster was using a SAN device for a quorum disk resource as well as an MSDTC resource. Well... I decided to reconfigure the SAN device, but I didn't destroy the cluster first. Now that the quorum disk and MSDTC disk are completely gone, the cluster is obviously not working. But I can't even destroy the cluster and start again. I've tried from the Windows clustering tool as well as the command line. I was able to get the cluster service to start using the "/fixquorum" parameter. After doing this I was able to remove the passive node from the cluster, but it wouldn't let me destroy the cluster because the default resource group and MSDTC are still attached as resources. I tried to delete these resources from both the GUI tool and the command line; it either freezes for several minutes and crashes the program, and once it even BSOD'd the server. Can someone advise on how to destroy this cluster so I can start over?
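    When the cluster's disks are gone for good, the usual way out is to force-clean the cluster configuration on each node rather than removing resources one by one; both the legacy cluster.exe syntax and the 2008 R2 PowerShell module have a switch for this. A hedged sketch with a placeholder node name:
        REM legacy command line, run on each node
        cluster node NODE1 /forcecleanup

        # or with the Failover Clustering PowerShell module on 2008 R2
        Import-Module FailoverClusters
        Clear-ClusterNode -Name NODE1 -Force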


  • CentOS 7: PHP high CPU usage

    - by HTF
    I've migrated the Observium monitoring platform from CentOS 6.5 to CentOS 7 and I've noticed high CPU usage, mostly caused by PHP; the CPU load increases when the polling script (poller-wrapper.py) is running. Both VMs are running on the same physical host (KVM hypervisor) with exactly the same spec. I also tested this with a simple PHP benchmark script, and CentOS 7 is slower - is it strictly related to the PHP version (5.4.30 vs 5.4.16)?
    CentOS 6.5
    [root@centos6:~]# php -f bench.php
    --------------------------------------
    |        PHP BENCHMARK SCRIPT        |
    --------------------------------------
    Start : 2014-08-19 22:26:34
    PHP version : 5.4.30
    Platform : Linux
    --------------------------------------
    test_math                : 1.610 sec.
    test_stringmanipulation  : 1.416 sec.
    test_loops               : 0.822 sec.
    test_ifelse              : 0.729 sec.
    --------------------------------------
    Total time:              : 4.577 sec.
    CentOS 7
    [root@centos7:~]# php -f bench.php
    --------------------------------------
    |        PHP BENCHMARK SCRIPT        |
    --------------------------------------
    Start : 2014-08-19 22:27:58
    PHP version : 5.4.16
    Platform : Linux
    --------------------------------------
    test_math                : 2.117 sec.
    test_stringmanipulation  : 1.246 sec.
    test_loops               : 1.174 sec.
    test_ifelse              : 0.752 sec.
    --------------------------------------
    Total time:              : 5.289 sec.
    CPU usage increased right after the migration (graph not included here).
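    Two quick things worth comparing before blaming the minor PHP version difference: whether the same opcode cache is loaded on both boxes (this matters more for the web/poller workload than for the CLI benchmark), and whether the CentOS 7 guest is running a more conservative CPU governor or tuned profile. A small diagnostic sketch; the cpufreq path only exists if the guest exposes frequency scaling.
        # is an opcode cache (APC / OPcache) loaded on each box?
        php -m | egrep -i 'apc|opcache'
        # what governor / tuned profile is the CentOS 7 guest using?
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null
        tuned-adm active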


  • How to set up heartbeat for IP failover on SSH failure

    - by Tony
    I wonder if anyone can help me: I am trying to set up heartbeat on a Red Hat 5 box to fail over an IP address when SSH stops responding on a server. So basically you SSH to a VIP and then get put through to whichever server currently holds the floating IP.

                      192.168.0.100
                            |
                            |
    /------------------------\    /------------------------\
    |        Server 01       |    |        Server 02       |
    |   eth0 - 192.168.0.1   |    |   eth0 - 192.168.0.2   |
    | eth0:0 - 192.168.0.100 |    |     eth0:0 - down      |
    \------------------------/    \------------------------/

    If SSH stops responding, I want eth0:0 to be brought up on the second machine to allow SSH connections to carry on being served. I have tried to follow some documents I found online, so here is my current configuration:
    ha.cf:
    bcast eth0
    keepalive 2
    warntime 10
    deadtime 30
    initdead 120
    udpport 694
    auto_failback off
    node vm-bal01
    node vm-bal02
    debugfile /var/log/ha-debug
    logfile /var/log/ha-log
    authkeys:
    auth 1
    1 sha1 sshhhsecret1234
    haresources:
    server01 192.168.0.100/24/eth0:0/192.168.0.255
    Hope someone can help, as this is driving me nuts...
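    Heartbeat by itself only fails over when the other node (or its heartbeat daemon) dies, not when an individual service such as sshd stops answering, so the SSH check has to be added separately, either via a service monitor like mon/monit or a small watchdog that stops heartbeat (releasing the VIP) when sshd is unreachable. A rough sketch of the watchdog idea; the port, interval, and the decision to stop heartbeat rather than reboot are all assumptions.
        #!/bin/bash
        # crude sshd watchdog: if local sshd stops answering, stop heartbeat so the VIP moves
        while true; do
            if ! nc -z -w 5 127.0.0.1 22; then
                logger "sshd not answering on port 22, stopping heartbeat to release the VIP"
                service heartbeat stop
                exit 1
            fi
            sleep 10
        done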


  • Reaching 99.9999% uptime

    - by user35204
    I am currently developing a project which is mission-critical. The actual domain name is registered with 1 & 1, and I plan on purchasing the DynDNS Custom DNS service (which has 5 different geographical locations for DNS) and then another secondary DNS service to make sure my DNS is as failover-safe as possible. Does it matter that the registration is with 1 & 1 - are they a weak link in the chain? All I really use them for is to say that DynDNS is my primary DNS nameserver and that my secondary DNS is my other nameserver. I can transfer the registration to DynDNS - I'm just not sure if it really matters or not. Thanks
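    For what it's worth, once the NS records are set, the registrar's role is mostly limited to maintaining the delegation at the TLD registry, and that delegation can be checked directly; example.com and the gTLD server below are placeholders for the real domain and its parent zone's servers.
        # what the parent (.com) registry publishes for the delegation -- this is what the registrar controls
        dig +short NS example.com @a.gtld-servers.net
        # what the zone's own nameservers answer
        dig +short NS example.com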

