Search Results

Search found 4866 results on 195 pages for 'crm failure'.


  • Disk Redundancy across different servers

    - by Mascarpone
    I have 3 servers, all with the same specs: an Intel CPU, 8 GB RAM, Linux or BSD, and a single 2TB desktop SATA drive with more than 10K hours of operation and less than 300 GB used. My provider cannot install a second hard drive, but can guarantee me that the drive will be replaced immediately in case of failure, with another equally crappy drive. The likelihood of drive failure is high, and since I can't use RAID, I was thinking about keeping a backup of each machine on all the other machines, so that there are always 2 copies on 2 different drives, plus the original. I would synchronize the drives every hour with rsync to guarantee some sort of redundancy. Bandwidth inside the DC is free, so it would be much cheaper than offsite backup. (A daily offsite backup is kept anyhow.) What do you think? Any suggestions?
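
    For illustration, the hourly sync I have in mind would look something like this in each machine's crontab (hostnames and paths are placeholders):

        # pull the other two boxes' data onto this one every hour (hypothetical paths)
        0  * * * * rsync -aH --delete server2:/srv/data/ /backup/server2/ >>/var/log/backup-server2.log 2>&1
        30 * * * * rsync -aH --delete server3:/srv/data/ /backup/server3/ >>/var/log/backup-server3.log 2>&1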

    Read the article

  • Security log overflowing with filtering blocks

    - by Jacob
    I have a Windows 7 workstation whose security log is overflowing with the following errors: Audit Failure 3/31/2010 2:00:50 PM Microsoft-Windows-Security-Auditing 5157 Filtering Platform Connection "The Windows Filtering Platform has blocked a connection." Audit Failure 3/31/2010 2:00:50 PM Microsoft-Windows-Security-Auditing 5152 Filtering Platform Packet Drop "The Windows Filtering Platform has blocked a packet." These are not unexpected events; the firewall is expected to drop unsolicited traffic. However, I can't figure out how to tell Windows to stop writing these events to the security log. I've seen this problem before and was able to find an answer with Google, but I wasn't able to locate one this time. Thanks!
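
    One direction I'm considering is turning those two audit subcategories off with auditpol, along these lines (the subcategory names are my assumption from the event text, so please correct me if they're wrong):

        auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:disable /failure:disable
        auditpol /set /subcategory:"Filtering Platform Connection" /success:disable /failure:disable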

    Read the article

  • What is Causing this IIS 7 Web Service Sporadic Connectivity Error?

    - by dpalau
    On sporadic occasions we receive the following error when attempting to call an .asmx web service from a .Net client application: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host." By sporadic I mean that it might occur not at all, once every few days, or a half-dozen times a day for some users. It never occurs on a user's first web service call, and the subsequent (usually identical) call always works immediately after the failure. The failures happen across a variety of methods in the service and usually occur 15-20 seconds (according to the log) after the request. Looking in the IIS site log for the particular call will show one or the other of the following Windows error codes: 121: The semaphore timeout period has elapsed. 1236: The network connection was aborted by the local system. Some additional environment details: Running on an internal web farm consisting of two servers running IIS7 on Windows Server 2008. These problems did not occur when running in an older IIS6 web farm of three servers running on Windows Server 2003 (and we use a single IIS6/2003 instance for our development and staging environments with no issues). EDIT: Also, all of these server instances are VMware virtual machines; not sure if that is a surprise anymore or not. The web service is a .Net 2.0/3.5 compiled .asmx web service that has its own application pool (.Net 2.0, integrated pipeline) and only has Windows Authentication enabled. We have another web service on the farm that uses the same physical path as the primary service, the only difference being that Basic Authentication is enabled; this is used for a portion of our ERP system. We have tried using the same and a different application pool, with no effect on the error. That site isn't hit as often as the primary site and has never had an error. As mentioned, the error only happens when called from the .Net client, not from other applications. The client application always creates a new web service object for each request and sets the service credentials to System.Net.CredentialCache.DefaultCredentials. The application is either deployed locally to a client or run in a Citrix server session. Users running in Citrix don't seem to experience the issue, only locally deployed clients. The Citrix servers and the web farm are located in the same physical location and in the same IP range (10.67.xx.xx). Locally deployed clients experiencing the error are located elsewhere (10.105.xx.xx, 10.31.xx.xx). I've checked the OS logs to see if I can spot any problems but nothing really sticks out. EDIT: Actually, I myself just ran into the error a little while ago. I decided to check the logs again and saw that there was a Security log entry of "Audit Failure" at the 'same' time (IIS log entry at 1:39:59, event log entry at 1:39:50). Not sure if this is a coincidence or not; I'll have to check the logs of previous errors. I'm probably grasping at straws, but here are the details: Log Name: Security Source: Microsoft-Windows-Security-Auditing Date: 7/8/2009 1:39:50 PM Event ID: 5159 Task Category: Filtering Platform Connection Level: Information Keywords: Audit Failure User: N/A Computer: is071019.<**.net Description: The Windows Filtering Platform has blocked a bind to a local port.
    Application Information: Process ID: 1260 Application Name: \device\harddiskvolume1\windows\system32\svchost.exe Network Information: Source Address: 0.0.0.0 Source Port: 54802 Protocol: 17 Filter Information: Filter Run-Time ID: 0 Layer Name: Resource Assignment Layer Run-Time ID: 36 I've also tried to use Failed Request Tracing in IIS7, but the service call never actually gets to where FRT can capture it (even though the failure is logged in the web service log). The network infrastructure group said they checked the DNS and the NIC settings are correct, so there is no 'flapping'; everything pans out. I'm not sure that they checked any domain controller servers, though, to see if that could be an issue. Any ideas? Or any other debugging strategies to get to the bottom of this? I'm just the developer in charge of the software and don't really have the knowledge of what to investigate from the networking side of things, although it does sound like a networking issue to me based on what is happening. Thanks in advance for any help.

    Read the article

  • SVN does an unnecessary chmod on .svn/ temp files

    - by ???
    My working dir is on a TrueCrypt NTFS volume, mounted with umask 000, so I can read/write any files with no problem. But I can't run svn commands on it; for example `svn update' shows the error: svn: Can't set permissions on '.svn/tempfile.8.tmp': strace svn up gives: ... chmod("sbin/.svn/tempfile.2.tmp", 0770) = -1 EPERM (Operation not permitted) fcntl64(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 write(3, "( failure ( ( 1 76:Can't set per"..., 172) = 172 fcntl64(3, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK) fcntl64(3, F_SETFL, O_RDWR) = 0 read(3, "( abort-edit ( ) ) ( failure ( ("..., 4096) = 191 gettimeofday({1276661368, 382789}, NULL) = 0 lstat64("sbin", {st_mode=S_IFDIR|0770, st_size=0, ...}) = 0 select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout) write(2, "svn: Can't set permissions on 's"..., 82svn: Can't set permissions on 'sbin/.svn/tempfile.2.tmp': Operation not permitted) = 82 close(3) = 0 So the error occurs when svn calls chmod on some temp files. But chmod is disallowed on TrueCrypt volumes, and it's simply unnecessary here. Can I bypass the chmod library calls when launching svn on TrueCrypt volumes?
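
    The only workaround I can think of so far is an LD_PRELOAD shim that turns chmod into a no-op; a rough, untested sketch:

        /* chmod_noop.c — pretend every chmod/fchmod succeeds so svn stops failing */
        #include <sys/types.h>
        int chmod(const char *path, mode_t mode) { (void)path; (void)mode; return 0; }
        int fchmod(int fd, mode_t mode)          { (void)fd;   (void)mode; return 0; }

        $ gcc -shared -fPIC -o chmod_noop.so chmod_noop.c
        $ LD_PRELOAD=$PWD/chmod_noop.so svn update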

    Read the article

  • (Simple) Linux HA with VMware vSphere?

    - by derhelge
    I hope my upcoming question is specific enough, and you are able and willing to support :-) We have several openSUSE VMs in an ESX cluster (three ESX servers) with an attached iSCSI SAN. All of those Linux VMs are configured as single points of failure, which means, in the case of a web server: LAMP, storage, etc. all live on that one machine. This was very simple, and in case of a failure (in the last years: kernel panics or Apache crashes) a simple reboot triggered by a script took care of it. But the problem is: how do I upgrade/maintain the web application or the underlying OS without downtime? This wasn't really manageable, and I ended up doing it in the early morning ;) How can I achieve a "simple" high-availability cluster now? I thought of: DRBD with Heartbeat across 2 VMs, and for the storage an RDM (raw device mapped) LUN, changing the read-write permissions between the two VMs. Is this a good idea? Does anyone have a better solution?
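
    To make the DRBD idea concrete, the resource I have in mind would look roughly like this (hostnames, devices and IPs are placeholders, and this assumes each VM gets its own backing disk rather than a shared LUN):

        resource r0 {
          protocol C;
          on web1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.1:7788;
            meta-disk internal;
          }
          on web2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7788;
            meta-disk internal;
          }
        }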

    Read the article

  • "Account locked out" security event at midnight

    - by Kev
    The last three midnights I've gotten an Event ID 539 in the log...about my own account: Event Type: Failure Audit Event Source: Security Event Category: Logon/Logoff Event ID: 539 Date: 2010-04-26 Time: 12:00:20 AM User: NT AUTHORITY\SYSTEM Computer: SERVERNAME Description: Logon Failure: Reason: Account locked out User Name: MyUser Domain: MYDOMAIN Logon Type: 3 Logon Process: NtLmSsp Authentication Package: NTLM Workstation Name: SERVERNAME Caller User Name: - Caller Domain: - Caller Logon ID: - Caller Process ID: - Transited Services: - Source Network Address: - Source Port: - It's always within a half minute of midnight. There are no login attempts before it. Right after it (in the same second) there's a success audit entry: Logon attempt using explicit credentials: Logged on user: User Name: SERVERNAME$ Domain: MYDOMAIN Logon ID: (0x0,0x3E7) Logon GUID: - User whose credentials were used: Target User Name: MyUser Target Domain: MYDOMAIN Target Logon GUID: - Target Server Name: servername.mydomain.lan Target Server Info: servername.mydomain.lan Caller Process ID: 2724 Source Network Address: - Source Port: - The process ID was the same on all three of them, so I looked it up, and right now at least it maps to TCP/IP Services (Microsoft). I don't believe I changed any policies or anything on Friday. How should I interpret this?

    Read the article

  • Unable to connect to mysql through JDBC connector via Tomcat or externally

    - by iftrue
    I've installed a stock MySQL 5.5 instance, and while I can connect to the mysql service via the mysql command, and the service seems to be running, I cannot connect to it through Spring+Tomcat or from an external JDBC connector. I'm using the following URL: jdbc:mysql://myserver.com:myport/mydb with the proper username/password, but I receive the following message: server.com: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. And Tomcat throws: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) The same issue occurs if I try to connect externally.
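
    For what it's worth, here is how I've been checking the server side so far (paths assume a Debian-style layout):

        # is mysqld listening beyond loopback?
        netstat -ltnp | grep mysqld
        # any bind-address or skip-networking directives in effect?
        grep -E 'bind-address|skip-networking' /etc/mysql/my.cnf
        # does the account allow connections from remote hosts?
        mysql -u root -p -e "SELECT user, host FROM mysql.user;"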

    Read the article

  • Configure a Cisco Catalyst 3560G with an egress uplink

    - by imaginative
    Currently my setup has our egress uplink connected directly to an external interface on a Linux router/firewall/NAT gateway. Since the Linux box is a single point of failure, I've since set up two OpenBSD boxes using carp+pf+pfsync in order to gain some additional redundancy. The problem is, I only have one egress uplink (it's still a single point of failure), but I need it to speak to the active CARP node in my OpenBSD cluster, which will serve as my new router/firewall/NAT cluster. Is there anything specific I need to do on a 3560G in order to be able to: 1) drop the egress uplink into a port, 2) drop one link from the switch to one firewall, and 3) drop a second link from the switch to the other firewall? This is so if one box dies, the other still has the egress link to the switch. Is putting them into one VLAN enough? Does anything else need to go into the configuration for this setup to work?
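
    What I have so far on the switch is roughly this (interface numbers and the VLAN id are placeholders; please tell me if the approach itself is wrong):

        vlan 100
         name EGRESS
        !
        interface GigabitEthernet0/1
         description egress uplink
         switchport mode access
         switchport access vlan 100
        !
        interface GigabitEthernet0/2
         description link to firewall fw1
         switchport mode access
         switchport access vlan 100
        !
        interface GigabitEthernet0/3
         description link to firewall fw2
         switchport mode access
         switchport access vlan 100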

    Read the article

  • Getting ROBOCOPY to return a "proper" exit code?

    - by Lasse V. Karlsen
    Is it possible to ask ROBOCOPY to exit with an exit code that indicates success or failure? I am using ROBOCOPY as part of my TeamCity build configurations, and having to add a step to just silence the exit code from ROBOCOPY seems silly to me. Basically, I have added this: EXIT /B 0 to the script that is being run. However, this of course masks any real problems that ROBOCOPY would return. Basically, I would like to have exit codes of 0 for SUCCESS and non-zero for FAILURE instead of the bit-mask that ROBOCOPY returns now. Or, if I can't have that, is there a simple sequence of batch commands that would translate the bit-mask of ROBOCOPY to a similar value?
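
    In other words, I'd like to replace the blanket EXIT /B 0 with something like the following (%SOURCE% and %DEST% are placeholders; my understanding is that ROBOCOPY codes below 8 all mean "no files failed", but treat that as an assumption to verify):

        robocopy "%SOURCE%" "%DEST%" /MIR
        rem codes 0-7 are informational bit flags; 8 and up indicate failures
        if %ERRORLEVEL% GEQ 8 exit /b 1
        exit /b 0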

    Read the article

  • Name resolution works from desktop but not Server

    - by Joe Estes
    Sending mail via smtp.gmail.com is failing on my server. I looked on some forums and people were saying to make sure you can telnet to the smtp address first. When I telnet from my server I get this error: [root@localhost ~]# telnet smtp.gmail.com 465 telnet: smtp.gmail.com: Temporary failure in name resolution smtp.gmail.com: Host name lookup failure From my OS X desktop I do the same and get this: Macintosh-3:~ joe$ telnet smtp.gmail.com 465 Trying 74.125.127.109... Connected to gmail-smtp-msa.l.google.com. I'm running a Fedora Core 9 server with a Firestarter firewall. I have turned off the firewall and the same error persists. I'm also using port forwarding from my router to this server, and I have allowed forwarding for port 465 on my router as well. Can someone please help? Thanks, Joe
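
    My next step is to check the resolver configuration itself, along these lines (the explicit resolver address is just an example):

        cat /etc/resolv.conf                 # are any nameservers configured at all?
        grep '^hosts' /etc/nsswitch.conf     # is dns in the lookup order?
        dig smtp.gmail.com @8.8.8.8 +short   # does an explicit outside resolver work?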

    Read the article

  • DRBD Primary/Primary + iSCSI: does accessing different files avoid split brain?

    - by Eddie C.
    I have a question / curiosity about split brain on a DRBD Primary/Primary configuration. Suppose two nodes (hosts), host1 and host2, configured with DRBD Primary/Primary and two different shares (NFS, CIFS or iSCSI) of a replicated area (say, /drbd): /drbd/file1.data /drbd/file2.data If one pool of clients accessed file1.data, reading and writing, only through host1's share, and another pool accessed file2.data only through host2's share, would this scenario avoid a split-brain situation in case of one node failing, or is that just a conjecture? The final purpose is load balancing between the two nodes under normal conditions, collapsing to one node only in case of failure. Thank you! Eddie
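
    For reference, the automatic after-split-brain recovery policies I've seen in drbd.conf look like this (my reading of the options may be off, so consider it a sketch):

        net {
          allow-two-primaries;
          after-sb-0pri discard-zero-changes;
          after-sb-1pri discard-secondary;
          after-sb-2pri disconnect;
        }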

    Read the article

  • ZFS: RAIDZ versus stripe with ditto blocks

    - by RandomInsano
    I'm going to build a ZFS file server on FreeBSD. I learned recently that I can't expand a RAIDZ vdev once it's part of the pool. That's a problem, since I'm a home user and will probably add one disk a year at most. But what if I set copies=3 on my entire pool and just throw individual drives into the pool as separate vdevs? I've read somewhere that ZFS will try to distribute the copies across drives if possible. Is there a guarantee there? I really just want protection from bit rot and drive failure on the cheap. Speed's not an issue, since it'll go over a 1Gb network and at most stream 720p podcasts. Would my data be guaranteed safe from a single drive failure? Are there things I'm not considering? Any and all input is appreciated.
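
    Concretely, the setup I'm describing would be built like this (FreeBSD device names are placeholders):

        # one big stripe with ditto blocks instead of parity
        zpool create tank ada1
        zfs set copies=3 tank
        # a year later, grow it by one disk:
        zpool add tank ada2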

    Read the article

  • Calling the LWRP from the Exception Handler

    - by Sarah Haskins
    Is it possible to call out to a provider (LWRP) from a Chef exception handler? I think my provider is out of scope, but I don't know if what I am trying to do is possible or advisable. Here is my provider code (cookbooks/config/provider/signal.rb): action :failure do Chef::Log.info("Yeah success") end Here is my exception handler code (exception_handler/handlers/exceptionHandler.rb): require 'chef/handler' config_signal "signal" do action :nothing end class Chef class Handler class LogCollector < Chef::Handler notifies :failure, resources(:config_signal => signal) end end end Also, if anyone has a good recommendation for general reading about scope in the context of Chef, I'd appreciate it.

    Read the article

  • Cannot change password for user postgres in PostgreSQL

    - by dhaval
    I have made the following entry in pg_hba.conf: local all all trust but `su postgres' still does not accept a blank password. I am not able to run psql or pg_ctl either, for the same reason, as most of the files are owned by postgres. EDIT1 dhaval@ubuntu:~$ su -c "pg_ctl reload -D template1" Password: su: Authentication failure dhaval@ubuntu:~$ su -c psql Password: su: Authentication failure I am giving the root password above, but I guess it's expecting the "postgres" superuser password, which I don't have. I need to reset it. EDIT2 dhaval@ubuntu:~$ sudo -i -u postgres [sudo] password for dhaval: postgres@ubuntu:~$ psql Welcome to psql 8.3.7, the PostgreSQL interactive terminal. The above has taken me to the PostgreSQL command prompt. But I am still not sure why the "trust" entry was not working.
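
    From that prompt I assume I can now set a real password for the database superuser, something like this (syntax from memory, so treat as a sketch):

        dhaval@ubuntu:~$ sudo -u postgres psql
        postgres=# ALTER USER postgres WITH PASSWORD 'newpassword';
        postgres=# \q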

    Read the article

  • MySQL Master-ColdMaster

    - by enedebe
    Let me explain my case: I'm on Amazon AWS and I want to be fault-tolerant against an entire-region failure. My basic problem is keeping the db in sync across 2 regions. My options: Master-Master (high lag) Hand-made sync every 5 minutes Master-ColdMaster?! (copy on the fly, but the master won't wait for the other region's commit) In my system we could afford losing a little data (we're not a bank), say the last inserts in the db, but we could not afford more than 10 minutes of downtime. The database is small and the level of inserts is low, and I don't want normal usage to wait on the other region's commit. Is the third solution possible? And most importantly, once the primary fails, how can we detect it and swap the roles (master-coldmaster to coldmaster-master)? Is there any clean way to restore after a failure? Thanks!
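
    For option 3, the closest built-in mechanism I know of is plain asynchronous replication, configured on the standby roughly like this (host, credentials and log coordinates are placeholders):

        CHANGE MASTER TO
          MASTER_HOST='primary.other-region.example.com',
          MASTER_USER='repl',
          MASTER_PASSWORD='replpass',
          MASTER_LOG_FILE='mysql-bin.000001',
          MASTER_LOG_POS=4;
        START SLAVE;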

    Read the article

  • Disk Utility Restore causes "Could not validate resource - Invalid Argument"

    - by Yahoo
    I have a problem with Disk Utility on Mac OS X 10.6. I have an image of Windows that I would like to use as a bootable volume on a pen drive or external hard drive. The thing is: When I try to restore the volume from the image I get an error: "Restore Failure: Could not validate resource - Invalid Argument" I read some information about that error on the Internet. I converted the image into .iso (Mac OS Extended/ISO (Joliet) Hybrid Image) format and then got this error: "Restore Failure: Could not find any scan information. The source image needs to be imagescanned before it can be restored." When I try to scan the image for Restore, I get the first message. I really read a lot of information about this topic on the Internet, but I haven't found the solution. I tried both ISO and DMG formats; I don't know which is best.
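
    If it helps, the equivalent commands from Terminal would be along these lines (image path and disk identifier are placeholders):

        # scan the image so it can be used as a restore source
        asr imagescan --source ~/Desktop/windows.dmg
        # then restore it onto the external drive
        sudo asr restore --source ~/Desktop/windows.dmg --target /dev/disk2 --erase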

    Read the article

  • LdapErr: DSID-0C0903AA, data 52e: authenticating against AD '08 with pam_ldap

    - by Stefan M
    I have full admin access to the AD '08 server I'm trying to authenticate towards. The error code means invalid credentials, but I wish this was as simple as me typing in the wrong password. First of all, I have a working Apache mod_ldap configuration against the same domain. AuthType basic AuthName "MYDOMAIN" AuthBasicProvider ldap AuthLDAPUrl "ldap://10.220.100.10/OU=Companies,MYCOMPANY,DC=southit,DC=inet?sAMAccountName?sub?(objectClass=user)" AuthLDAPBindDN svc_webaccess_auth AuthLDAPBindPassword mySvcWebAccessPassword Require ldap-group CN=Service_WebAccess,OU=Groups,OU=MYCOMPANY,DC=southit,DC=inet I'm showing this because it works without the use of any Kerberos, as so many other guides out there recommend for system authentication to AD. Now I want to translate this into pam_ldap.conf for use with OpenSSH. The /etc/pam.d/common-auth part is simple. auth sufficient pam_ldap.so debug This line is processed before any other. I believe the real issue is configuring pam_ldap.conf. host 10.220.100.10 base OU=Companies,MYCOMPANY,DC=southit,DC=inet ldap_version 3 binddn svc_webaccess_auth bindpw mySvcWebAccessPassword scope sub timelimit 30 pam_filter objectclass=User nss_map_attribute uid sAMAccountName pam_login_attribute sAMAccountName pam_password ad Now I've been monitoring ldap traffic on the AD host using wireshark. I've captured a successful session from Apache's mod_ldap and compared it to a failed session from pam_ldap. The first bindrequest is a success using the svc_webaccess_auth account, the searchrequest is a success and returns a result of 1. The last bindrequest using my user is a failure and returns the above error code. Everything looks identical except for this one line in the filter for the searchrequest, here showing mod_ldap. Filter: (&(objectClass=user)(sAMAccountName=ivasta)) The second one is pam_ldap. Filter: (&(&(objectclass=User)(objectclass=User))(sAMAccountName=ivasta)) My user is named ivasta. However, the searchrequest does not return failure, it does return 1 result. I've also tried this with ldapsearch on the cli. It's the bindrequest that follows the searchrequest that fails with the above error code 52e. Here is the failure message of the final bindrequest. resultcode: invalidcredentials (49) 80090308: LdapErr: DSID-0C0903AA, comment: AcceptSecurityContext error, data 52e, v1772 This should mean invalid password but I've tried with other users and with very simple passwords. Does anyone recognize this from their own struggles with pam_ldap and AD? Edit: Worth noting is that I've also tried pam_password crypt, and pam_filter sAMAccountName=User because this worked when using ldapsearch. ldapsearch -LLL -h 10.220.100.10 -x -b "ou=Users,ou=mycompany,dc=southit,dc=inet" -v -s sub -D svc_webaccess_auth -W '(sAMAccountName=ivasta)' This works using the svc_webaccess_auth account password. This account has scan access to that OU for use with apache's mod_ldap.

    Read the article

  • Linux Software Raid runs checkarray on the First Sunday of the Month? Why?

    - by mgjk
    It looks like Debian defaults to running checkarray on the first Sunday of the month. This causes massive performance problems and heavy disk usage for 12 hours on my 2TB mirror. Doing this "just in case" is bizarre to me. Discovering data out of sync between the two disks, without a quorum, would be a failure anyway: this massive check could only tell me that I have an unrecoverable drive failure and corrupt data. Which is nice, but not all that helpful. Is it necessary? Given that I have no disk errors and no reason to believe my disks have failed, why is this check necessary? Should I take it out of my cron? /etc/cron.d# tail -1 /etc/cron.d/mdadm 57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet Thanks for any insight,
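
    If disabling it is reasonable, I assume either commenting out that cron line or flipping the flag in /etc/default/mdadm would do it (variable name from memory, so please verify):

        # /etc/default/mdadm
        AUTOCHECK=false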

    Read the article

  • Can Windows Home Server be used on an active directory domain?

    - by Parvenu74
    The situation: an Active Directory network with a few dozen machines. Most of the machines have the same vanilla image applied to them, so if there were a hard drive failure, getting the machine back to the standard network image would be quick and easy. However, there are a handful of machines (eight) with rather unique setups (accounting, developers, the "artist" with CS4, and such). For these machines we would like to use Windows Home Server, since the backups are automatic and recovery from a machine failure is quite painless. The question, though, is whether or not WHS can be used on an A/D network. If not, what "set it and forget it" backup/imaging product is recommended for this scenario?

    Read the article

  • RAID-3-like software backup tool

    - by Chronial
    I have a lot of data (about 7 TB), stored across multiple hard drives of varying sizes. I would like to have a backup of that data to be safe against drive failure. A RAID is not a good option for me, as I want to keep my costs low and be able to easily extend the storage capacity of my setup by buying an additional HD. I remember seeing a piece of software that generates parity data over all drives and stores it on an extra drive. That solution protects the setup from hard drive failure and works with varying drive sizes (as long as the parity drive is the biggest one). But I can't seem to find that software again. Does anybody know what I'm talking about, or have any other solution for my situation?

    Read the article

  • Difference between success and failed events in auditd/aureport

    - by user112358132134
    The aureport command has two options that limit the list of displayed events to those that were successful and those that failed. Per the man page: --failed Only select failed events for processing in the reports. The default is both success and failed events. --success Only select successful events for processing in the reports. The default is both success and failed events. What does this mean? Is the failure/success with regard to the actual event (e.g., a syscall that returned non-zero) or does the failure/success apply to auditd and whether or not there was an issue in processing the event?
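
    For a concrete example, the two report modes side by side, plus a way to inspect the raw records behind a "failed" line (the ausearch flag is my assumption of the matching filter):

        aureport --failed         # only events recorded as failed
        aureport --success        # only events recorded as successful
        ausearch --success no -i  # inspect the underlying records, interpreted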

    Read the article

  • SQL 2008 R2 Mirroring Issue

    - by CWL
    Windows 2008 R2 with SQL 2008 R2, using mirroring of a database across the WAN in an HA setup with one witness. One issue I am having is that during a failure (every so often) the system fails over, or tries to, but leaves both databases in a restoring state. My guess is the failover issue happens when the WAN bounces and the systems get confused. The usual fix is to reboot the SQL servers. Has anyone seen this type of failure? While this does not happen often, it does cause an issue, and the concern is that the HA setup cannot be fully trusted.
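
    When it happens, rather than rebooting, I've wondered whether forcing the session state from T-SQL would be safer, something like the following (database name is a placeholder, and I'm not certain these apply when both sides are stuck in Restoring):

        -- on the principal, if the session is merely suspended:
        ALTER DATABASE MyDatabase SET PARTNER RESUME;
        -- on the mirror, to force service (accepts possible data loss):
        ALTER DATABASE MyDatabase SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;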

    Read the article

  • How can I boot a vm on Hyper-V 2012 when it has a virtual hard-drive missing?

    - by Zone12
    We have a Hyper-V 2012 server with 8 VMs on it. We have attached extra virtual hard drives to each of the machines to store backups on; these drives are stored on a NAS. After a power failure, we tried to boot the VMs and found that they couldn't be booted without the attached backup drives. We couldn't boot the NAS at that point, so we had to remove all the extra drives manually, boot the VMs, and re-attach the drives at a later date once we got the NAS back up and running. These backup drives are non-essential to the running of the system. I would like to know if there is a way to boot a VM on Hyper-V 2012 with some of its (SCSI) hard drives missing, so that we can recover automatically from a power failure.
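
    For reference, the manual removal we do today amounts to something like this in PowerShell (the VM name and NAS path filter are placeholders):

        # detach any backup VHDs that live on the NAS, then start the VM
        Get-VM "MyVM" |
          Get-VMHardDiskDrive -ControllerType SCSI |
          Where-Object { $_.Path -like "\\nas\*" } |
          Remove-VMHardDiskDrive
        Start-VM "MyVM"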

    Read the article
