Search Results

Search found 28222 results on 1129 pages for 'machine config'.

  • Windows 2008 R2 Servers Sending Arp Requests for IPs outside Subnet

    - by Kyle Brandt
    By running a packet capture on my routers I see some of my servers sending ARP requests for IPs that exist outside of their network. For example, if my network is: Network: 8.8.8.0/24 Gateway: 8.8.8.1 (MAC: 00:21:9b:aa:aa:aa) Example Server: 8.8.8.20 (MAC: 00:21:9b:bb:bb:bb) By running a capture on the interface that has 8.8.8.1 I see requests like: Sender MAC: 00:21:9b:bb:bb:bb Sender IP: 8.8.8.20 Target MAC: 00:21:9b:aa:aa:aa Target IP: 69.63.181.58 Has anyone seen this behavior before? My understanding of ARP is that requests should only go out for IPs within the subnet... Am I confused in my understanding of ARP? If I am not confused, has anyone seen this behavior? Also, these seem to happen in bursts, and it doesn't happen when I do something like ping an IP outside of the network. Update: In response to Ian's questions: I am not running anything like Hyper-V. I have multiple interfaces but only one is active (using BACS failover teaming). The subnet mask is 255.255.255.0 (even if it were something different, it wouldn't explain an IP like 69.63.181.58). When I run MS Network Monitor or Wireshark I do not see these ARP requests. What happens is that in the capture on the router I see a burst of about 10 requests for IPs outside of the network from the host machine. On the machine itself, using Wireshark or NetMon, I see a flood of ARP responses for all the machines on the network. However, I don't see any requests in the capture asking for those responses. So it seems like it may be refreshing the ARP cache but including IPs that are outside of the network. Also, when it does this, NetMon doesn't show the ARP requests.
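
    A hedged way to narrow this down, assuming the router-side capture box has tshark and the interface name and MAC are as in the question: filter for ARP frames from the server's MAC and compare against a simultaneous capture on the server itself.

        # on the router-side capture box (interface name is an assumption)
        tshark -i eth0 -f "arp and ether src 00:21:9b:bb:bb:bb"

    If the frames show up router-side but never in NetMon or Wireshark on the host, one candidate worth investigating is the BACS teaming driver, since teaming software can emit frames below the layer where host-side capture tools hook in.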

    Read the article

  • How can I debug solutions in Visual Studio 2010 from a network share?

    - by alastairs
    I've recently got a new Mac laptop and am running VS2010 in a Parallels virtual machine. It's mostly working out well for me, but I'm having some problems with debugging specific project types, related to the fact that the projects are being accessed via a network share. Test projects don't run because the test runner can't load the tests' DLL. Web projects fail to run in the Visual Studio mini web server, throwing the following exception: 'An error occurred loading a configuration file: Failed to start monitoring changes to path\to\web.config'. I've spent the evening trawling the web with little luck on this. After reading these two posts, I tried out the usual CasPol changes, but then found this post from one of the early VS2010 betas indicating that CasPol is no longer needed/supported in .NET 4.0 and VS2010. The network share is accessible via both a mapped drive and the UNC path. The virtual machine runs its applications under the administrator account, which appears to have all the necessary permissions on the network share to create, read, write and delete files and folders. I say "appears to have" as I can't view the Security Properties of the appropriate folder via Explorer: the Security tab just isn't present. Has anyone managed to successfully load and debug web and test projects from a network share in VS2010?
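
    One thing worth trying, as a sketch: in .NET 4.0 the old CAS policy (and hence CasPol) is gone, and assemblies loaded from a network share are refused full trust unless the hosting process opts in. Adding the loadFromRemoteSources switch to devenv.exe.config (and to the test runner's .config) is the documented .NET 4.0 replacement; whether it also cures the web.config monitoring error is less certain.

        <!-- under <configuration> in devenv.exe.config -->
        <runtime>
          <!-- grant full trust to assemblies loaded from remote sources (.NET 4.0) -->
          <loadFromRemoteSources enabled="true" />
        </runtime>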

    Read the article

  • File upload permission problem IIS 7

    - by krish
    I am unable to upload files to a website hosted under IIS7. I have already given write permissions to "IUSR_websitename" and set the property in web.config as well. I am able to upload files without logging in to the application, at the time of user registration. But once logged in to the application, if I upload files, it gives an "Access denied" error. Please help me.
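
    A hedged sketch of the usual fix, assuming the upload folder is C:\inetpub\wwwroot\uploads (a placeholder path): once a user is logged in, the request may run as the app pool identity rather than the anonymous user, so grant modify rights to the IIS_IUSRS group as well as IUSR:

        icacls "C:\inetpub\wwwroot\uploads" /grant "IUSR:(OI)(CI)M"
        icacls "C:\inetpub\wwwroot\uploads" /grant "IIS_IUSRS:(OI)(CI)M"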

    Read the article

  • phpMyAdmin works with https only (not http)

    - by 01010011
    Hi, I've been having a problem getting phpMyAdmin to work consistently on my XP desktop and laptop computers for months now. When I typed localhost/phpmyadmin into Chrome's browser on both machines, I kept getting Error #1045 Access Denied for user at root@localhost (using password: yes). Eventually, I realized that I had two (2) versions of MySQL installed (XAMPP and MySQL Server 5.1) on both machines. So I uninstalled MySQL Server 5.1 from the desktop and phpMyAdmin worked. But when I uninstalled MySQL Server 5.1 from my laptop, it did not work. But I realized I could still get into the MySQL Command Line Client using my password and that my databases were still intact. So I uninstalled and reinstalled XAMPP on the laptop and phpMyAdmin worked after that. Now I have a new problem. phpMyAdmin's home page has a message at the bottom: Your configuration file contains settings (root with no password) that correspond to the default MySQL privileged account. Your MySQL server is running with this default, is open to intrusion, and you really should fix this security hole by setting a password for user 'root'. So I located the following lines in the config.inc.php file: /* Authentication type and info */ $cfg['Servers'][$i]['auth_type'] = 'config'; $cfg['Servers'][$i]['user'] = 'root'; $cfg['Servers'][$i]['password'] = ''; $cfg['Servers'][$i]['AllowNoPassword'] = true; and I just changed the last 2 lines as follows: $cfg['Servers'][$i]['password'] = 'mypassword'; $cfg['Servers'][$i]['AllowNoPassword'] = false; As soon as I did that and tried to access phpMyAdmin again, I got the Error #1045 message again, but when I tried https://localhost/phpmyadmin/ I got a red page saying this site's certificate is not trusted, would you like to proceed anyway. And now it only works using https. I would really like to settle all my phpMyAdmin problems once and for all, so here are my questions: 1. Why does my laptop only access phpMyAdmin via https? 2. How do I change my password in my configuration file? Also, if you have any other tips regarding phpMyAdmin, they are very welcome. Thanks in advance.
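
    On question 2, a sketch of the likely missing step: config.inc.php only tells phpMyAdmin what credentials to send; the password also has to be set in MySQL itself, otherwise the mismatch produces exactly this #1045. From the MySQL Command Line Client that still works:

        SET PASSWORD FOR 'root'@'localhost' = PASSWORD('mypassword');

    then leave the $cfg['Servers'][$i]['password'] line as already edited. (The https-only behavior is a separate issue, most likely in how the web server serves the two vhosts rather than in phpMyAdmin itself.)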

    Read the article

  • SQL Server Backup modes, and a huge log file

    - by Matt Dawdy
    Okay, I'm not a server administrator, a network guy, or a DBA. I'm merely a programmer helping out a small company. They have an IT guy who isn't MS-centric (most stuff is on Mac) and he and I are trying to figure out a solution here. We've got 1 main database. We run nightly full backups. I know they are full backups because I can take the latest file, or any of the daily backups, and go to a completely new machine and "restore" the backup to an empty database, and our app runs perfectly fine off of this backup. The backups have grown from 60 MB to 250 MB over 4 months. When running, the log file is 1.7 GB, and the data file is only 200-300 MB. Yes, the recovery model is set to full. So, my question, after all of that: if we are keeping daily backups, and we don't have the need / aren't smart enough to roll the DB back to a certain time, if I change the recovery model to simple, am I really losing anything? And, if I do change it to simple, will it completely dump the log file or at least reduce it way the hell down? And, will that make our database run faster? I know that it'll make my life easier when I copy a relatively recent backup to my local machine to do development and testing...
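
    If nightly fulls are genuinely all that's needed, switching to simple recovery loses only the ability to restore to a point in time between backups, and it stops the log growing because the log is truncated at each checkpoint. A sketch, assuming the database is called MyDb and the log's logical file name is MyDb_log (both placeholders; check with sp_helpfile):

        ALTER DATABASE MyDb SET RECOVERY SIMPLE;
        -- one-time shrink of the now-truncatable log, target ~100 MB
        DBCC SHRINKFILE (MyDb_log, 100);

    It won't make the database noticeably faster, but it does keep the log file small.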

    Read the article

  • How to diagnose Internal Server error on Lighttpd?

    - by Tomaszs
    I have Lighttpd on CentOS 5 with FastCGI and memcached. Periodically, once every week or two, I get Internal Server Error 500 and I must manually restart lighttpd to get it to work again. In my lighttpd config I've defined an error log file: server.errorlog = "/home/lxadmin/httpd/lighttpd/error.log" But when I open it, it has no rows for recent days, only from a month ago. So my question is: how do I diagnose what the issue is, and how do I enable the error log for my configuration?
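
    A sketch of config additions that usually surface the cause, assuming lighttpd 1.4.x (paths as in the question): check the log file is writable by the user lighttpd runs as, and capture the FastCGI backends' stderr, which is where many 500s explain themselves.

        server.errorlog            = "/home/lxadmin/httpd/lighttpd/error.log"
        # stderr of (Fast)CGI backends -- often the real story behind a 500
        server.breakagelog         = "/home/lxadmin/httpd/lighttpd/breakage.log"
        # very verbose; enable only while hunting the problem
        debug.log-request-handling = "enable"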

    Read the article

  • iPhone app to read text files

    - by bandito40
    Hi, I need to edit some of the local text files on my iPhone, but so far all the apps I have downloaded do not navigate the OS3 file tree for me to load and edit them. I need to do this on my iPhone as I can no longer access it via ssh or with the iPhone cable. One of the files to edit is an ssh config file, which is what is not allowing ssh connections. Any ideas on apps or other methods that I could use? Thanks,

    Read the article

  • Windows memory logged on vs logged off

    - by Adi
    Let's say I power on my freshly installed Windows 7 x64 machine. After Windows boots up, there are a bunch of services being started in the background that start allocating memory. Then I enter my user/pass and Windows logs me in. Let's suppose I don't do anything else (I don't explicitly start any application) and I don't have any other apps installed by me. So it's a fresh install of my machine. My question is: how much memory is needed for all the UI & other stuff? Is it a good indicator to look into Task Manager and check all the processes started under my user name and sum up all the memory consumed by those processes, to get the total amount of memory I am consuming just to stay logged on? Basically this is my question: how much memory is needed just to stay logged on? Now, if I log off, would all the memory be released back to the system so that the background services can benefit from it? Also, I assume that there might be a different discussion for each Windows flavor(?)
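
    Summing per-process memory is a reasonable first-order estimate, with the caveat that working sets double-count shared pages (DLLs mapped into many processes), so the total overstates what logging off would actually return. A hedged sketch with built-in tools (the user name is an example):

        tasklist /V /FI "USERNAME eq Adi" /FO CSV

    Comparing available memory in Task Manager's Performance tab before log-on and after log-off gives a cleaner answer than adding up the per-process numbers.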

    Read the article

  • Tidy up old Windows Server Backup snapshots

    - by dty
    Hi, I'm running wbadmin from a scheduled job, backing up my C: and D: drives to my E: and (I believe!) including the system state: wbadmin start backup -backuptarget:e: -include:c:,d: -allCritical -noVerify -quiet I'd like to delete old backups, but I'm concerned that all the information I can find says to use wbadmin to delete old system state backups, and vssadmin to delete other backups. As far as I know, my backups ARE system state backups, but are using VSS on E: for storage, so I'm worried about trying either of these techniques for fear of losing all my backups. This is a home network, so I don't have a spare server to test this on. I'm also happy to simply restrict the space used on E:, but I can't make sense of the difference between the /for and /on parameters of the relevant vssadmin command. For reference, here's the output of vssadmin show shadows: Contents of shadow copy set ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx} Contained 1 shadow copies at creation time: 07/01/2011 08:12:05 Shadow Copy ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx} Original Volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\ Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy83 Originating Machine: x.y.com Service Machine: x.y.com Provider: 'Microsoft Software Shadow Copy provider 1.0' Type: DataVolumeRollback Attributes: Persistent, No auto release, No writers, Differential [... repeated a lot...] vssadmin show shadowstorage: Shadow Copy Storage association For volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\ Shadow Copy Storage volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\ Used Shadow Copy Storage space: 0 B Allocated Shadow Copy Storage space: 0 B Maximum Shadow Copy Storage space: 5.859 GB Shadow Copy Storage association For volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\ Shadow Copy Storage volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\ Used Shadow Copy Storage space: 0 B Allocated Shadow Copy Storage space: 0 B Maximum Shadow Copy Storage space: 40.317 GB Shadow Copy Storage association For volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\ Shadow Copy Storage volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\ Used Shadow Copy Storage space: 168.284 GB Allocated Shadow Copy Storage space: 171.15 GB Maximum Shadow Copy Storage space: UNBOUNDED wbadmin get versions: Backup time: 07/01/2011 03:00 Backup target: 1394/USB Disk labeled xxxxxxxxx(E:) Version identifier: 01/07/2011-03:00 Can Recover: Volume(s), File(s), Application(s), Bare Metal Recovery, System State [... repeated a lot...]
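
    On the vssadmin question, the short version: /for names the volume whose shadow copies are being managed, /on names the volume that physically holds the diff area. Since these backups live as shadow copies of E: stored on E: itself, a hedged sketch for capping their space (the size is an example):

        vssadmin resize shadowstorage /for=e: /on=e: /maxsize=150GB

    Resizing downward makes VSS discard the oldest snapshots to fit, which is effectively the tidy-up being asked for. wbadmin delete systemstatebackup -keepVersions:N applies only to pure system state backups, so it is probably not the right tool for these volume backups.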

    Read the article

  • Deploying Windows Service through group policy fails with Event ID 102

    - by Sören Kuklau
    I'm trying to deploy a custom Windows Service (written in C#; installed through a VS setup project) using a group policy. To help debug this, I also have two additional MSIs in the same policy. All three packages are deployed as a machine policy, not a user one. On one machine (runs Windows Server 2008; no UAC), all three deploy fine. The service is set to Automatic, as expected. On two machines (run Windows 7; UAC), the two other MSIs deploy fine, but my service fails to install. The event log gives an event ID of 102, which appears to be a permissions problem: The install of application "Package Name" from policy "Policy Name" failed. The error was The installation source for this product is not available. Verify that the source exists and that you can access it. However, all three packages come from the same share linked through UNC, so this is unlikely. My guess is that UAC is the problem; that the service requires additional permissions. Do I need to alter the MSI somehow?
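
    One hedged avenue: machine-assigned packages are fetched by the computer account before any user logs on, so the share and NTFS permissions must be readable by the computers themselves, not just by users. Worth verifying (the share path is a placeholder):

        icacls \\fileserver\deploy /grant "Domain Computers:(OI)(CI)RX"

    Since the two other MSIs install fine from the same share, it is also worth checking whether the service's MSI references external files (a .cab, for instance) that carry different permissions.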

    Read the article

  • Win7 Credential Manager and accessing SQL Server from outside of the domain

    - by David Lively
    My SQL Server is set to use Windows authentication. If I am connected to the domain directly from my Win7 Ultimate x64 machine, SQL Management Studio (SSMS) will let me authenticate with Windows authentication. However, if I am connected via the VPN (from a different machine that is not joined to the domain), it won't. If I start SSMS with the following command line: C:\Windows\system32>runas /netonly /user:domainname\username "C:\Program Files (x86)\Microsoft SQL...\ssms.exe" then connecting to the SQL Server (which is in the domain) with Windows authentication works fine. I'd like to save these credentials so that I don't have to launch SSMS from the command line, or modify the shortcut. I know I can use the SysInternals ShellRunAs extension to do this, but I again have to enter my domain username and password each time, and shift+right-click to see that menu option. The Windows Credential Manager seems designed to solve this problem, and works for network shares. However, it doesn't seem to work for SSMS. Any suggestions? I've tried using the /savecred option with runas to create the necessary credentials, but that appears to be incompatible with the /netonly option. Running the above command line with the addition of /savecred just displays the runas help screen. Grrr. Argh.
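
    Credential Manager entries can be created from the command line, which sometimes covers the same ground as runas /netonly for a specific server; a sketch, with the SQL host name as a placeholder:

        cmdkey /add:sqlserver.domainname.com /user:domainname\username /pass:yourpassword

    This is the CLI equivalent of "Add a Windows credential" in the Credential Manager UI; whether SSMS picks it up for Windows authentication from a non-domain machine is worth a quick test rather than a promise.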

    Read the article

  • chkconfig creating service symlinks with the wrong order

    - by Robert
    On RHEL 6.3, I have a system service that should be starting after postgresql and httpd (order 64 and 85, respectively), but chkconfig always places it at order 50. I tried an experiment on a CentOS 6.0 virtual machine to make sure I understood the LSB stanza syntax. I created /etc/init.d/foo, owner root, permissions 755, with this text: ### BEGIN INIT INFO # Provides: foo # Required-Start: postgresql httpd # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Description: Foo init script ### END INIT INFO And then ran chkconfig --add foo. Result: /etc/rc5.d/S86foo is created, as expected. (The other runlevels are also as expected.) I repeated the exact same experiment on the RHEL machine, and it created /etc/rc5.d/S50foo instead. I can't see anything different between the two that would lead to different results. Both machines have postgresql and httpd starting at the same orders and runlevels. Any thoughts? I could just use # chkconfig: 2345 86 50, or manually rename the service symlinks to the correct order, but I'm trying to document an install process for later users, and I want to know how to do it right and understand why it's not working as expected.
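
    A pragmatic workaround while the LSB question stands: RHEL's chkconfig still honors the old-style header, so pinning the priorities there keeps the documented LSB block and gets the right symlinks. A sketch (86/15 are example start/stop priorities):

        #!/bin/sh
        # chkconfig: 2345 86 15
        # description: Foo init script
        ### BEGIN INIT INFO
        # Provides:       foo
        # Required-Start: postgresql httpd
        # Required-Stop:  postgresql httpd
        # Default-Start:  2 3 4 5
        # Default-Stop:   0 1 6
        # Description:    Foo init script
        ### END INIT INFO

    One plausible cause of the S50 fallback, worth checking, is that dependency-based ordering only works when the scripts named in Required-Start themselves carry matching LSB Provides headers on that particular machine.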

    Read the article

  • Can't connect to Windows via ssh

    - by Micah
    I downloaded cygwin and ran ssh-host-config. I'm trying to connect using ssh -l micah myserver it then says micah@myserver's password: I enter the same password I use to log into windows and it says Permission Denied, please try again. After the third try it says: Permission denied (publickey,password,keyboard-interactive). What am I doing wrong? Any ideas? Do I need to generate an ssh key on the client and add it somewhere on the server?
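
    Two hedged things to try. Cygwin's sshd authenticates against its own /etc/passwd, which can go stale or miss accounts, so regenerating it is a cheap first step; failing that, key-based auth sidesteps password handling entirely:

        # on the server, in a Cygwin shell: rebuild the local account list
        mkpasswd -l > /etc/passwd

        # on the client: generate a key and install it on the server
        ssh-keygen -t rsa
        cat ~/.ssh/id_rsa.pub | ssh micah@myserver "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

    (That last command itself prompts for a password, so it only helps once password auth works at least once, or if the key is copied over by other means.)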

    Read the article

  • Anti-virus for Ubuntu Hardy 8.04

    - by April
    I am using Ubuntu Hardy with Scalr and AWS; the Ubuntu instance does not come with any antivirus software. Can anyone recommend a good anti-virus software for Ubuntu? I would also need installation and config steps. Thanks.
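
    ClamAV is the usual recommendation and is in the standard Ubuntu repositories; a minimal install-update-scan sketch for Hardy:

        sudo apt-get update
        sudo apt-get install clamav
        sudo freshclam                  # pull current virus definitions
        clamscan -r --infected /home    # recursive scan, report only infected files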

    Read the article

  • How can I use fetchmail (or another email grabber) with OSX keychain for authentication?

    - by bias
    Every fetchmail tutorial I've read says putting your email account password clear-text in a config file is safe. However, I prefer security through layers (since, if my terminal is up and someone suspecting such email foolery slides over and simply types "grep -i pass ~/.*" then, oops, all my base are belong to them!). Now, with msmtp (as opposed to sendmail) I can authenticate using the OSX keychain. Is there an email 'grabber' that lets me use Keychains (or at least, that lets me MD5 the password)?
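
    fetchmail can't query Keychain itself, but the rcfile can be generated per-run so the password never rests on disk; a sketch, assuming the password was stored in Keychain with service/account names matching the -s/-a flags (pop.example.com and bias are placeholders, and -f - reading the rcfile from stdin should be confirmed against your fetchmail's man page):

        #!/bin/sh
        # fetch the password from the OSX keychain
        PASS=$(security find-internet-password -s pop.example.com -a bias -w)
        # feed fetchmail a run control file that exists only on stdin
        fetchmail -f - <<EOF
        poll pop.example.com proto pop3 user "bias" password "$PASS"
        EOF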

    Read the article

  • Trying to install Rmagick on Debian

    - by Janak
    One of the things you need to do to get Rmagick installed is apt-get install libmagick9-dev. When I try that I get the following errors: Reading package lists... Done Building dependency tree... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. Since you only requested a single operation it is extremely likely that the package is simply not installable and a bug report against that package should be filed. The following information may help to resolve the situation: The following packages have unmet dependencies: libmagick9-dev: Depends: libjpeg62-dev but it is not going to be installed Depends: libbz2-dev but it is not going to be installed Depends: libtiff4-dev but it is not going to be installed Depends: libwmf-dev (>= 0.2.7-1) but it is not going to be installed Depends: libz-dev Depends: libpng12-dev but it is not going to be installed Depends: libfreetype6-dev but it is not going to be installed Depends: libexif-dev but it is not going to be installed Depends: libdjvulibre-dev but it is not going to be installed Depends: librsvg2-dev but it is not going to be installed Depends: libgraphviz-dev but it is not going to be installed E: Broken packages I don't know what to do to fix this. EDIT 1: I tried sudo apt-get install librmagick-ruby and it worked fine, but then I needed to install the fleximage gem: gem1.8 install fleximage and I got the following error message: Building native extensions. This could take a while... ERROR: Error installing fleximage: ERROR: Failed to build gem native extension. /usr/bin/ruby1.8 extconf.rb checking for Ruby version >= 1.8.5... yes checking for cc... yes checking for Magick-config... no Can't install RMagick 2.12.2. Can't find Magick-config in /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/bin/ruby1.8 Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2 for inspection. Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2/ext/RMagick/gem_make.out
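
    Before anything drastic, a couple of hedged repair steps; stale package lists and half-configured packages produce exactly this kind of cascade, and aptitude will propose dependency resolutions that apt-get refuses to:

        sudo apt-get update
        sudo apt-get -f install              # attempt to repair broken dependencies
        sudo aptitude install libmagick9-dev # offers interactive resolution proposals

    The later Magick-config error is consistent with this: the gem's native extension needs the ImageMagick development package (the very one that failed to install) on the path before gem1.8 install fleximage can succeed.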

    Read the article

  • ports only available from the outside network

    - by ChrisJ
    This is a counter-intuitive problem for me. I have a new Win 2003 server on a static IP address w.x.y.z. Tomcat 7, PostgreSQL 9.1, and Subversion are installed. All of it appears to be working fine from the server itself. We can also access the Tomcat manager, web applications, and run "svn ls svn://w.x.y.z/" from outside our network. However, when I try from another machine in the office, phpPgAdmin and svn cannot establish connections with the server. http://w.x.y.z:5432/phppgadmin cannot connect. The svn command from above returns: svn: E730061: Unable to connect to a repository at URL 'svn://w.x.y.z/' svn: E730061: Can't connect to host 'w.x.y.z': No connection could be made because the target machine actively refused it. Tomcat manager and the other web apps we have deployed work fine. Netstat -a from the server shows this: Proto Local Address Foreign Address State TCP SERVERNAME:3690 SERVERNAME:0 LISTENING TCP SERVERNAME:5432 SERVERNAME:0 LISTENING Windows Firewall was off, but just in case I also tried to enable it and open ports 3690 (svn) and 5432 (postgres). No change. I don't have access to the router/switch because it just doesn't work that way in Port-au-Prince and our sysadmin is on R&R. Is there anything that might be causing the problem from the server side?
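
    "Actively refused" means the TCP handshake was answered with a reset, so the interesting question is which address each daemon is actually bound to; a hedged sketch of server-side checks (the repo path and subnet are examples):

        netstat -ano | findstr "3690 5432"

        rem svnserve: make sure it listens on all interfaces
        svnserve -d -r C:\repos --listen-host 0.0.0.0

    and for PostgreSQL, in postgresql.conf and pg_hba.conf respectively:

        listen_addresses = '*'
        host    all    all    192.168.1.0/24    md5

    If office machines reach the server through its public address while external clients come in through a different path, an intermediate device answering for w.x.y.z is also worth suspecting.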

    Read the article

  • How to export everything from Firefox to another PC

    - by ianix
    I'm going to do a clean install of Windows 7 on my laptop, and one of the things that I really want to keep the way it is is Firefox. I know I can get the addons easily, and the bookmarks also, but I would also like to export my browsing history, the Awesome Bar database, and my about:config customizations. Is there a way or tool to do that?
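
    All of those -- history, the Awesome Bar's frecency data, about:config overrides -- live in the profile folder (places.sqlite, prefs.js, and friends), so copying the whole profile is the usual route; a sketch (the profile folder name is machine-specific):

        rem on the old install
        xcopy "%APPDATA%\Mozilla\Firefox\Profiles\xxxxxxxx.default" E:\ff-backup\ /E /H /I

    Restore the folder into the new install's Profiles directory and make sure profiles.ini points at it. Tools like MozBackup automate the same copy.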

    Read the article

  • Hard drive spark, can it be recovered?

    - by user163558
    Alright, so I was going to install Source Filmmaker but I didn't have any space, so I decided to connect an HDD via a USB converter (image below). I shut down the machine, turned the PSU off, and connected via a Molex connector & the USB converter. I turned the PSU back on, no sparks or anything, everything normal, but when I turned on the machine, I heard some sizzling (lol?) and saw sparks flying and a little flame, but the PC was running fine. I pressed the power button instead of pulling out the plug (I panicked), so it continued to short circuit for about 10 seconds. There's a very little part on the HDD that became ash; it's near the Molex connector, and the circuit board is a little black as well. I'm afraid that I will damage the HDD more, so I haven't hooked the HDD up again. Do you think it's the PSU (came default with the Cooler Master Elite 430, 500W) or the HDD (Samsung SP1203N)? P.S: I've attached the HDD the same way before (like 3 months ago), and it worked. HDD burn: USB connector: Sorry for the bad image quality, taken with my phone.

    Read the article

  • Prevent automatic failback to the master after a failure

    - by Chrille
    I'm using keepalived to set up a virtual IP that points to a master server. When a failover happens it should point the virtual IP to the backup, and the IP should stay there until I manually enable (fix) the master. The reason this is important is that I'm running MySQL replication on the servers and writes should only go to the master. When I fail over, I promote the slave to master. The master server: global_defs { ! this is who emails will go to on alerts notification_email { [email protected] ! add a few more email addresses here if you would like } notification_email_from [email protected] ! I use the local machine to relay mail smtp_server 127.0.0.1 smtp_connect_timeout 30 ! each load balancer should have a different ID ! this will be used in SMTP alerts, so you should make ! each router easily identifiable lvs_id APP1 } vrrp_instance APP1 { interface eth0 state EQUAL virtual_router_id 61 priority 999 nopreempt virtual_ipaddress { 217.x.x.129 } smtp_alert } Backup server: global_defs { ! this is who emails will go to on alerts notification_email { [email protected] ! add a few more email addresses here if you would like } notification_email_from [email protected] ! I use the local machine to relay mail smtp_server 127.0.0.1 smtp_connect_timeout 30 ! each load balancer should have a different ID ! this will be used in SMTP alerts, so you should make ! each router easily identifiable lvs_id APP2 } vrrp_instance APP2 { interface eth0 state EQUAL virtual_router_id 61 priority 100 virtual_ipaddress { 217.xx.xx.129 } notify_master "/etc/keepalived/notify.sh del app2" notify_backup "/etc/keepalived/notify.sh add app2" notify_fault "/etc/keepalived/notify.sh add app2" smtp_alert }
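
    One hedged observation against the keepalived documentation: nopreempt only takes effect when the instance's initial state is BACKUP on both nodes, and EQUAL is not one of the documented state values (MASTER/BACKUP). A sketch of the relevant lines, priorities as above:

        # node APP1 (preferred master)
        vrrp_instance APP1 {
            state BACKUP      # nopreempt requires BACKUP as the initial state
            priority 999
            nopreempt
            ...
        }
        # node APP2
        vrrp_instance APP2 {
            state BACKUP
            priority 100
            ...
        }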

    Read the article

  • Inconsistent SMTP Access

    - by Mike Hanson
    I have a mail server set up on Windows Server 2008. All was working fine, until I wanted to map a drive on the server so that I could access files on another machine. Windows prompted me to configure Network Discovery, which I did with the "Home/Office" option rather than "Public". After that, several access points that worked before stopped working, like VNC, SMTP, etc. After reinstalling those packages, things appeared to be working again. Unfortunately, problems have returned with my SMTP server. I can use a web-based SMTP tester, and it connects in 62 msec (as expected). However, if I telnet from my machine on the same LAN, it takes more than 20 seconds to connect! When I try to send messages from Outlook, it times out entirely with the message: 'Sending' reported error (0x80042109): 'Outlook cannot connect to your outgoing (SMTP) e-mail server. If you continue to receive this message, contact your server administrator or Internet service provider (ISP).' I've checked the firewall settings, and I've tried configuring it to use port 587 instead of 25, but nothing gets around this problem. Does anyone have any useful insights? Thanks in advance!
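
    A 20-second pause before the banner, fast from outside but slow from the LAN, is the classic signature of the mail server attempting a reverse-DNS (or ident) lookup on the connecting client and timing out; a hedged pair of checks (the addresses are placeholders):

        nslookup 192.168.1.50      # does the LAN client's address reverse-resolve promptly?
        telnet mailserver 25       # time how long until the 220 banner appears

    If the reverse lookup is the culprit, adding the LAN hosts to the DNS the server uses (or to its hosts file) should collapse the delay.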

    Read the article

  • Cannot Access Shared Folder From IIS

    - by Tim Scott
    From IIS I need to access a folder on another computer. Both servers are Windows 2008 SP2, and they live in a Virtual Private Cloud on Amazon EC2. They reach one another by private IP -- they are in WORKGROUP, not a domain. I can access the shared folder manually when logged in to the client as Administrator. But IIS gets "access denied." Here's what I have done: Set File Sharing = ON; Set Password Protected Sharing = OFF; Set Public Folder Sharing = ON; Shared the folder; Added permission to the share: Everyone, Full Control; Added permission to the share: NETWORK SERVICE, Full Control; Verified that File & Printer Sharing is checked in Windows Firewall; Opened port 445 to inbound traffic from local sources. I tried adding <remote-machine-name>\NETWORK SERVICE to the share but it says it does not recognize the machine, which makes sense, I guess. As I said, from the other computer I have no trouble accessing the shared folder from my user account, but IIS is shut out. How does the file server even know the difference? I would assume that with Everyone given full control and password-protected sharing turned off, it would not matter what the client user account is. In any case, how do I solve this? UPDATE: To clarify, I am not trying to serve up files on the share directly through IIS. Rather, I am writing files to the share from my code (System.IO).
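
    In a workgroup, NETWORK SERVICE presents itself to the file server as the machine account, which the file server has no way to map, and "Everyone" on the share does not necessarily rescue an unmappable identity. The standard workaround is mirrored local accounts: create the same username and password on both machines and run the app pool as that user. A sketch (svc_share and the password are placeholders):

        rem on BOTH machines, identical password
        net user svc_share Str0ngP@ss! /add

        rem on the IIS box: run the app pool as that account
        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" ^
          /processModel.identityType:SpecificUser ^
          /processModel.userName:svc_share /processModel.password:Str0ngP@ss!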

    Read the article

  • Intel RST accidentally selected wrong drive as system drive -- how to fix?

    - by Sean Killeen
    Question / TL;DR: If Intel RST has marked a drive other than my RAID set as the system drive, how can I get it so that the RAID set is now seen as the system drive, and catch it up to my drive now? What Happened (NOTE: some perhaps unwise decisions are ahead; this is, as best as I can recall, the order of things): I had a 2x1TB RAID1 config. I bought the drives around the same time, and they started to die around the same time. I replaced the 1st drive with a 2 TB drive before the other one's SMART errors got more serious. I waited for the RAID to replicate, then replaced the 2nd drive with a manufacturer's replacement. I got a second manufacturer's drive replacement and used it as a spare, so now I had a 1TB/2TB pair in a RAID1 and another 1TB as a spare. The 1TB drive in the replacement set was bad from the manufacturer. Rather than mess with their refurbished stuff, I bought another 2 TB drive and upped the config to a 2x2TB RAID1 with the other, functioning manufacturer's drive as a spare. I made the mistake of trying to bring the other drive online to clean it out, and the signature clash killed my machine. When the machine rebooted, that drive was marked as the system drive. So, I have a 2x2TB RAID1 that is apparently offline, and 1 spare 1 TB refurbished drive that everything is being run from. Not a great idea. Options I'm considering: 1. Bring the 2x2TB set back online, and then unplug the spare until I can format it in another system. This would involve some data loss, but the more I think about it, I actually think I haven't modified any data that isn't backed up or synced somewhere (go me!). Anything that isn't is likely trivial, enough that I'm willing to take the risk. One downside here is that if the 2 TB doesn't have data on it for some reason, I could be screwed trying to put the other drive back in, no? 2. Try to somehow get the RAID1 updated with the data from the current system drive. 3. Option 3?

    Read the article
