Search Results

Search found 2226 results on 90 pages for 'promise raid'.

Page 35 of 90

  • How can I get my SATA DVDs working again?

    - by user269051
    My hard drive crashed (Windows XP Pro), so I took a C drive from a broken PC. The new C drive is Windows 7 Pro. The motherboard is an MSI K8N Neo4 Platinum, with 4 hard drives installed on SATA 1-4 (nForce4 Ultra); the two DVD drives are connected to SATA 7-8 (Silicon Image SATARAID5). I've tweaked BIOS settings every which way. The closest thing to success was when each DVD drive showed both a CD and a DVD icon and blinked green, but no CD or DVD could be read in either drive. I assume the problem is that my new C drive does not have the RAID drivers? I've tried loading them from floppy (doesn't work). I can't boot off the DVD/CD, and switching a DVD drive's SATA cable to the SATA 3 slot (and pulling one of the hard drives) didn't work. I'd like to be able to use the other two available SATA slots for a mirrored RAID drive, and get my DVDs working again. Any suggestions?

    Read the article

  • NFS high CPU usage

    - by user269836
    Hello, I have a very strange issue. I have the following server:

      CPU: Intel(R) Xeon(TM) MP 3.16GHz (cat /proc/cpuinfo | grep proce | wc -l returns 8)

      free -m
                   total   used   free  shared  buffers  cached
      Mem:         28203  27606    596       0    10789    9714
      -/+ buffers/cache:   7103  21100
      Swap:        24695      0  24695

      RAID card:
      *-storage
         description: RAID bus controller
         product: MegaRAID
         vendor: LSI Logic / Symbios Logic
         physical id: 7
         bus info: pci@0000:13:07.0
         logical name: scsi2
         version: 01
         width: 32 bits
         clock: 66MHz
         capabilities: storage pm bus_master cap_list rom
         configuration: driver=megaraid latency=32
         resources: irq:134 memory:d8ff0000-d8ffffff(prefetchable) memory:df600000-df60ffff(prefetchable)

      HDD: 10x 148 GB SCSI U320 15k - RAID5
      /dev/sdb1  807G  674G  93G  88%  /storage
      /dev/sdb1 /storage ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,noatime,nodiratime,noacl,errors=remount-ro 0 1

      Network cards:
      ethtool -i eth0
      driver: tg3
      version: 3.116
      firmware-version: 5704-v3.36, ASFIPMIc v2.36
      bus-info: 0000:10:02.0
      ethtool -i eth1
      driver: tg3
      version: 3.116
      firmware-version: 5704-v3.36, ASFIPMIc v2.36
      bus-info: 0000:10:02.0

      ifconfig bond0:
        Link encap:Ethernet  HWaddr 00:0f:1f:ff:d6:4d
        inet addr:192.168.15.71  Bcast:192.168.15.255  Mask:255.255.255.0
        inet6 addr: fe80::20f:1fff:feff:d64d/64 Scope:Link
        UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
        RX packets:1062818202 errors:0 dropped:3918 overruns:0 frame:0
        TX packets:1041317321 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:10000
        RX bytes:258867684559 (241.0 GiB)  TX bytes:396569192650 (369.3 GiB)

      uname -a
      Linux nas2-backup 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux (Debian 6)

    This server runs only nfs-kernel-server. What happens is that once every day or two the load average climbs, sometimes reaching around 40. If I restart nfs-kernel-server, everything is OK again, but the next day (or a little later) the load goes up once more. The servers are connected to a D-Link DGS 1016D switch with 24 gigabit ports. I have tried everything to find out what the problem is and why it is happening, but I still cannot resolve this issue. Any ideas on what is going on here?
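    A minimal first diagnostic pass for this kind of NFS load spike, assuming Debian's stock nfs-kernel-server packaging (the RPCNFSDCOUNT variable, the thread count of 32 and the file paths below are Debian defaults and illustrative values, not details taken from the question):

      # The "th" line shows the nfsd thread count and how often every thread was
      # busy at once; persistent non-zero tail values suggest too few threads.
      cat /proc/net/rpc/nfsd

      # Server-side RPC/NFS statistics (badcalls, retransmits, per-op counts):
      nfsstat -s

      # Raise the number of nfsd threads (Debian's default is 8), then restart:
      sed -i 's/^RPCNFSDCOUNT=.*/RPCNFSDCOUNT=32/' /etc/default/nfs-kernel-server
      /etc/init.d/nfs-kernel-server restart

    If the load spikes correlate with all nfsd threads being busy, the thread count (or a misbehaving client) is the first thing to rule out before blaming the RAID or the network.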

    Read the article

  • dd with oflag=direct is 5x faster

    - by César
    I have CentOS 6.2 on a server with these specs:

      2x 16-core AMD Opteron 6282 SE CPUs, 64GB RAM
      RAID controller H700, 1GB NV cache:
        - 2x 74GB SAS 15Krpm, RAID1, 16k stripe (OS, CentOS 6.2) -> sda
        - 4x 146GB SAS 15Krpm, RAID10, 16k stripe (ext4, bs 4096, no barriers) -> sdb -> /vol01
      RAID controller H800, 1GB NV cache:
        - MD1200 with 12x 300GB SAS 15Krpm, RAID10, 256k stripe (for the Postgres 8.3.18 DB)
          (ext4, bs 4096, stride 64, stripe-width 384, no barriers) -> sdc -> /vol02

    I'm benchmarking I/O speed with dd, and I see that on the 12-disk RAID10, if I run:

      dd if=/dev/zero of=DD bs=8M count=10000 oflag=direct
      10000+0 records in
      10000+0 records out
      83886080000 bytes (84 GB) copied, 126,03 s, 666 MB/s

    but if I remove the "oflag=direct" option I get about 80 MB/s. In the read benchmark the results are similar:

      dd of=/dev/null if=DD bs=8M count=10000 iflag=direct
      10000+0 records in
      10000+0 records out
      83886080000 bytes (84 GB) copied, 79,5918 s, 1,1 GB/s

    If I remove iflag=direct I get about 150 MB/s... I don't understand these huge differences; on other machines I don't see this behavior. Could I have some kernel parameter misconfigured? Thanks!
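    When buffered dd is much slower than O_DIRECT, the page-cache writeback settings are a reasonable first thing to inspect. A minimal sketch, assuming stock CentOS 6 sysctl names (the byte values are only illustrative, not recommendations taken from the question):

      # Current dirty-page thresholds; with 64GB RAM the default ratios allow many
      # gigabytes of dirty pages to accumulate before writeback throttles the writer.
      sysctl vm.dirty_ratio vm.dirty_background_ratio

      # Experiment with absolute byte limits instead of ratios, then re-run the
      # buffered dd to see whether throughput changes:
      sysctl -w vm.dirty_background_bytes=268435456   # 256 MiB
      sysctl -w vm.dirty_bytes=1073741824             # 1 GiB

      # For the buffered read case, also compare readahead on the device:
      blockdev --getra /dev/sdc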

    Read the article

  • Is current SATA 6 gb/s equipment simply unreliable?

    - by korkman
    I have a 45-disk array of Seagate Barracuda 3 TB ST3000DM001 drives (yes, these are desktop drives; I'm aware of that) in a Supermicro SC847 JBOD, connected via an LSI 9285. I have found a workaround for the problem described below by reducing the link speed via

      MegaCli -PhySetLinkSpeed -phy0 2 -a0
      for i in $(seq 48); do MegaCli -PhySetLinkSpeed -phy${i} 2 -a0; done

    and rebooting. The question remains: is this typical for current 6 Gb/s equipment? Is this the sad state of SATA storage? Or is some of my equipment (the SFF-8088 cables come to mind) bad?

    The problem was: while synchronizing the hardware RAID-6, disks kept going offline. Fetching SMART values revealed that the drives which went offline no longer increased their powered-on hours; that is, their firmware (CC4C) seems to crash. Digging into the matter by switching to software RAID-6, with the disks passed through, I got tons of kernel messages scattered across all disks at 6 Gb/s:

      sd 0:0:9:0: [sdb] Sense Key : No Sense [current]
      Info fld=0x0
      sd 0:0:9:0: [sdb] Add. Sense: No additional sense information

    And finally, when a disk goes offline:

      megasas: [ 5]waiting for 160 commands to complete ...
      megasas: [35]waiting for 159 commands to complete ...
      megasas: [155]waiting for 156 commands to complete ...
      megaraid_sas: pending commands remain after waiting, will reset adapter.

    Ugly controller reset here, then minutes later:

      megaraid_sas: Reset successful.
      sd 0:0:28:0: Device offlined - not ready after error recovery
      ...
      sd 0:0:28:0: [sdu] Unhandled error code
      sd 0:0:28:0: [sdu] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
      sd 0:0:28:0: [sdu] CDB: Read(10): 28 00 23 21 2f 40 00 00 70 00
      sd 0:0:28:0: [sdu] killing request

    After reducing the speed to 3 Gb/s as described above, all the problems vanished.
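    A small sketch of the SMART check described above, assuming smartmontools is installed and the disks sit behind a megaraid_sas controller; the /dev/sda node and the 0-44 device-ID range are assumptions to adjust to the actual enclosure:

      # Poll Power_On_Hours for every disk behind the controller; a drive whose
      # counter stops advancing is a hint that its firmware has hung.
      for id in $(seq 0 44); do
          hours=$(smartctl -A -d megaraid,$id /dev/sda 2>/dev/null \
                  | awk '/Power_On_Hours/ {print $10}')
          echo "device $id: ${hours:-no reply}"
      done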

    Read the article

  • Can I take my ReadyNAS drive in Raid1 and plug it straight into new different machine?

    - by jacko
    I would assume that I can just take my HDD out of my NAS (a RAID1 mirror), plug it into another enclosure and have it work right off the bat, but I'd like to make sure... Any ideas?

    Edit: My current setup is a Netgear ReadyNAS in (hardware) RAID1. I'm hoping to replace this with a home-theatre-type PC (possibly running Ubuntu), and I would like to migrate my data without having to do a bulk transfer over the network between the two machines. Can anyone confirm whether this works for the Netgear ReadyNAS?

    Edit: OK, after further reading it seems that the ReadyNAS Duo formats my drive as ext3 with 16k blocks. There are instructions for mounting a drive in a Linux box here: Mounting Sparc-based ReadyNAS Drives in x86-based Linux. There is also talk about a Linux image here: ReadyNAS Data Recovery - VMware recovery tool. I'm not sure whether this means the ReadyNAS actually implements software RAID under the hood, or what? So it appears to be doable, but do any of you Linux gurus know whether this is viable, and whether the fact that the drives are in RAID1 affects matters?
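    A quick, read-only way to answer the "software RAID under the hood?" part once the drive is attached to a Linux box; the /dev/sdb device name and partition numbers are assumptions:

      # List the partitions the ReadyNAS created on the disk
      parted -s /dev/sdb print

      # If a partition carries an md (Linux software RAID) superblock, mdadm will
      # report its array UUID, RAID level and member role here:
      mdadm --examine /dev/sdb1 /dev/sdb2 /dev/sdb3

      # Identify the filesystem on each partition without mounting anything
      blkid /dev/sdb1 /dev/sdb2 /dev/sdb3

    One caveat: a Sparc-based ReadyNAS writes ext3 with a 16 KB block size, which a stock x86 kernel (4 KB page size) cannot mount directly, so seeing the filesystem reported is not yet a guarantee that it will mount.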

    Read the article

  • Drobo-like linux file server - how do I do it?

    - by John Hunt
    I've been pondering for a long time how I can set up a server which operates much like the Drobo storage appliance. The reason I don't actually want a Drobo is that I've heard scare stories, plus I'd like to do this on the cheap. Ideally I'm looking for something like LVM, so I can create a logical volume that spans many hard disks of varying sizes... though obviously that only offers redundancy if I put the LV on a RAID array (as far as I know). I have, however, been reading about technologies such as Microsoft's Drive Extender, which duplicates files at the filesystem level and makes sure that the mirrored files are on a different physical disk. Does anyone know of, or recommend, a filesystem or method like this? It will hopefully make much better use of the available space than RAID ever could. Performance isn't an issue; I'd just really like to make the most of the hard disks I have lying around, whilst having a bit of redundancy in case a disk dies. I understand full well that this is no replacement for a backup, but I'll only be storing files of medium importance and using the NAS itself as a backup of my main PC and other systems. Thanks in advance! I'm hoping ZFS or btrfs or something can do something clever for me :)
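    As a concrete illustration of filesystem-level mirroring across mismatched disks, here is a minimal btrfs sketch; the device names are placeholders, and the point is that btrfs raid1 keeps two copies of every block on two different devices even when the devices differ in size:

      # Pool three differently sized disks with mirrored data and metadata
      # (two copies, always on two distinct devices).
      mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
      mount /dev/sdb /mnt/pool

      # Later, grow the pool with another spare disk and rebalance:
      btrfs device add /dev/sde /mnt/pool
      btrfs balance start /mnt/pool

      # Check usable capacity and per-profile allocation:
      btrfs filesystem df /mnt/pool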

    Read the article

  • SBS 2003 boot stalls at acpitabl.dat

    - by John
    I have an SBS 2003 server that ran for three years without any problems; a few days ago it started freezing during boot. The system uses two 500 GB drives in RAID1 (Intel Matrix 7.5). When trying to load in safe mode, the boot stops at acpitabl.dat. My first idea was that there was a problem with the RAID, although the disk status was OK and the RAID status was Rebuild. I tried to boot from each drive on its own: one gives me the same problem, and the other fails to load at all. I took both drives out and checked them on a different machine: one drive is dead, the other has no problems. I returned the good drive to the SBS 2003 box (its status changed to Degraded), but the problem is still the same. I also have a clean SBS 2003 copy installed on this drive (a previous installation), which loads smoothly and quickly. So I believe the main problem is this particular installation of SBS 2003. I did not make any hardware changes and did not install any updates (though I'm not sure about recent automatic Windows updates). Since there are tons of posts about this problem and no clear solution, I am trying to figure out how to repair the SBS 2003 installation, since it has some installed programs that I cannot reinstall without additional issues.

    Read the article

  • Converting software RAID1 to RAID10 for /boot

    - by luckytaxi
    Array info:

      /dev/md0 - /dev/sda1 and /dev/sdb1
      /dev/md2 - /dev/sda2 and /dev/sdb2

    Partition info:

      /boot - /dev/md0
      /     - /dev/md1

    I have two drives that are set up as RAID1 using software RAID on Red Hat. I added two additional drives (same size) and I would like to convert the RAID1 to a RAID10. The problem I'm having is adding the last drive to the array. I've gotten as far as creating a RAID10 with two missing devices, but as soon as I add the last drive, all hell breaks loose. It seems /dev/sda1 is the culprit. What I'm not too sure about is how to create the RAID10. I've tried the following:

      mdadm --create /dev/md2 --level=raid10 --raid-device=4 /dev/sdc1 missing /dev/sdd1 missing

    I then proceeded to fail /dev/sdb1 out of /dev/md0 and added that partition to /dev/md2. I installed the MBR on EACH drive, since /boot resides on /dev/sdX1 on each one. As a test all is well: I'm able to boot back into the system after a quick reboot. Now, when I go to add the last drive, /dev/sda1, it breaks. I attempted to install grub on /dev/sda1 and I get the following:

      grub> root (hd0,0) /dev/sda
      root (hd0,0) /dev/sda
       Filesystem type is ext2fs, partition type 0xfd
      grub> setup (hd0)
      setup (hd0)
       Checking if "/boot/grub/stage1" exists... no
       Checking if "/grub/stage1" exists... no

      Error 2: Bad file or directory type

    At this point I believe the array is hosed. I rebooted the server and it refuses to boot.
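    For completeness, a sketch of how grub legacy is usually pointed at each member disk of a mirrored /boot; the device and partition names follow the question, and this is only an illustration of the device-mapping step, not a tested recovery recipe:

      # Map hd0 to each disk in turn, point root at its /boot partition, and
      # embed stage1/stage1.5 into that disk's MBR.
      grub --batch << EOF
      device (hd0) /dev/sda
      root (hd0,0)
      setup (hd0)
      device (hd0) /dev/sdb
      root (hd0,0)
      setup (hd0)
      quit
      EOF

    If setup still reports Error 2, it may simply be that grub legacy cannot read /boot on the new array: it only copes with RAID1 members whose superblock sits at the end (0.90/1.0 metadata), where each member looks like a plain filesystem, and it cannot read a striped RAID10 at all.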

    Read the article

  • Join multiple filesystems (on multiple computers) into one big volume

    - by jm666
    Scenario: I have 10 computers, each with 12x 2TB HDDs (currently) in a raidz2 (10+2) configuration, so each computer has one volume of roughly 20TB. Now I need to join those 10 separate computers (separate RAID groups) into one big volume. What is the recommended solution? I'm thinking about FCoE (10Gb Ethernet), i.e. putting an FCoE 10Gb Ethernet card into each computer - and what else would be needed on the hardware side? (Probably another computer and an FCoE switch, like a Cisco Nexus?) The main question is: what do I need to install and configure on each computer? Currently they run FreeBSD/raidz2, but it is possible to change to Linux or Solaris if needed. Any helpful resource on how to build a big volume from smaller RAID groups (on the software side) is very welcome: which OS, which filesystem, which software, etc. In short: I want roughly 200TB of storage in one filesystem out of the already existing computers/storage. I don't need fast writes, but I do need good read performance (it's a big fileserver), and it should work transparently, so that when storing data I don't have to care which computer the data goes to (i.e. not 10 mountpoints, but one big logical filesystem). Thanks.
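    One common software-side answer to this kind of aggregation is a distributed filesystem layered over the existing per-node volumes. A minimal GlusterFS sketch, assuming hostnames node01..node10 and that each node exposes its raidz2 pool at /export/brick (both names are placeholders, not from the question):

      # On one node, after installing the GlusterFS server on all ten:
      for n in node02 node03 node04 node05 node06 node07 node08 node09 node10; do
          gluster peer probe "$n"
      done

      # A plain distributed volume spreads files across the ten bricks, while each
      # node's raidz2 keeps providing the redundancy within that node:
      gluster volume create bigvol transport tcp \
          node01:/export/brick node02:/export/brick node03:/export/brick \
          node04:/export/brick node05:/export/brick node06:/export/brick \
          node07:/export/brick node08:/export/brick node09:/export/brick \
          node10:/export/brick
      gluster volume start bigvol

      # Clients then see one large namespace:
      mount -t glusterfs node01:/bigvol /mnt/bigvol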

    Read the article

  • How to force mdadm to stop RAID5 array?

    - by lucek
    I have a /dev/md127 RAID5 array that consisted of four drives. I managed to hot-remove them from the array and currently /dev/md127 does not have any drives:

      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid1 sdd1[0] sda1[1]
            304052032 blocks super 1.2 [2/2] [UU]

      md1 : active raid0 sda5[1] sdd5[0]
            16770048 blocks super 1.2 512k chunks

      md127 : active raid5 super 1.2 level 5, 512k chunk, algorithm 2 [4/0] [____]

      unused devices: <none>

    and

      mdadm --detail /dev/md127
      /dev/md127:
              Version : 1.2
        Creation Time : Thu Sep 6 10:39:57 2012
           Raid Level : raid5
           Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
        Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
         Raid Devices : 4
        Total Devices : 0
          Persistence : Superblock is persistent
          Update Time : Fri Sep 7 17:19:47 2012
                State : clean, FAILED
       Active Devices : 0
      Working Devices : 0
       Failed Devices : 0
        Spare Devices : 0
               Layout : left-symmetric
           Chunk Size : 512K
          Number   Major   Minor   RaidDevice   State
             0       0        0        0        removed
             1       0        0        1        removed
             2       0        0        2        removed
             3       0        0        3        removed

    I've tried to stop it, but:

      mdadm --stop /dev/md127
      mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

    I made sure that it's unmounted with umount -l /dev/md127, and confirmed that it indeed is unmounted:

      umount /dev/md127
      umount: /dev/md127: not mounted

    I've tried to zero the superblock of each drive and I get (for each drive):

      mdadm --zero-superblock /dev/sde1
      mdadm: Unrecognised md component device - /dev/sde1

    Here's the output of lsof | grep md127:

      md127_rai 276 root cwd DIR 9,0 4096 2 /
      md127_rai 276 root rtd DIR 9,0 4096 2 /
      md127_rai 276 root txt unknown /proc/276/exe

    What else can I do? LVM is not even installed, so it can't be a factor.
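    A few read-only checks that often reveal what still holds an md device open when --stop refuses; nothing here is specific to this setup beyond the md127 name:

      # Kernel-level users (device-mapper targets, stacked md arrays) show up here:
      ls /sys/block/md127/holders/

      # Userspace processes with the node open:
      fuser -v /dev/md127
      lsof /dev/md127

      # device-mapper mappings or swap that might sit on top of the array:
      dmsetup table
      swapon -s

      # If nothing claims it, stopping should now succeed:
      mdadm --stop /dev/md127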

    Read the article

  • New PC: win7 install issues with 2x3TB in RAID1

    - by goober
    Background / components: Not sure what I've done wrong. I built one PC before successfully in a similar way, but this one seems to be struggling. I have the following components:

      - ASUS P8Z68-V/Pro Gen 3 (updated to latest firmware)
      - 16GB (2x8GB) RAM
      - Corsair HX850 power supply
      - 2x 3TB drives on the Intel Z68 controller
      - 1x 128GB SSD on the Marvell controller
      - Sapphire 7950

    The problem: I set up my 3TB disks in RAID1.

      - The controller appears to recognize them fine during boot as one 2.7TB RAID1 volume.
      - Windows setup sees two disks, both 746 GB, but will only let me install to one, and the install appears to work fine.
      - After the installer reboots, I receive a "Windows failed to start" error referencing code 0xc000000e and "\Windows\system32\winload.exe".
      - Every time I do an install, an additional "win7" entry is added to the boot menu; all of them lead to this error.

    What I've tried:

      - updated the BIOS to the latest firmware
      - attempted to repair the install
      - tried clearing / removing the RAID / re-creating the RAID
      - tried formatting the drives during install
      - attempted to clear the menu of boot entries (can't figure out how to do that)

    No matter how many times I destroy the RAID array, format the disks, etc., the boot entries keep piling up. Any idea where I'm going wrong?

    Read the article

  • Creating a partitioned raid1 array for booting a debian squeeze system

    - by gucki
    I'd like to have the following RAID1 (mirror) setup: /dev/md0 consists of /dev/sda and /dev/sdb. I created this RAID1 device using:

      mdadm --create --verbose /dev/md0 --auto=yes --level=1 --raid-devices=2 /dev/sda /dev/sdb

    This gave a warning about the metadata being 1.2 and my system possibly not booting. I cannot use 0.9 because it restricts the size of the RAID to 2TB, and I assume the grub shipped with the latest Debian (squeeze) should be able to handle metadata 1.2. So then I created the needed partitions like this:

      # creating new label (partition table)
      parted -s /dev/md0 mklabel 'msdos'

      # creating partitions
      sfdisk -uM /dev/md0 << EOF
      0,4096
      ,1024,S
      ;
      EOF

      # making root filesystem
      mkfs -t ext4 -L boot -m 0 /dev/md0p1

      # making swap filesystem
      mkswap /dev/md0p2

      # making data filesystem
      mkfs -t ext4 -L data /dev/md0p3

    Then I mounted the root partition, copied a minimal Debian install into it and temporarily mounted /dev, /proc and /sys. After this I chrooted into the new root folder and executed:

      grub-install --no-floppy --recheck /dev/md0

    However this fails badly with:

      /usr/sbin/grub-probe: error: unknown filesystem.
      Auto-detection of a filesystem of /dev/md0p1 failed.
      Please report this together with the output of "/usr/sbin/grub-probe --device-map=/boot/grub/device.map --target=fs -v /boot/grub" to

    I don't think it's a bug in grub (so I didn't report it yet) but a fault of mine. So I really wonder how to properly set up my RAID1; everything I have tried so far has failed.
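    For contrast, a sketch of the more conventional layout that avoids running grub against a partitioned whole-disk array: partition both disks first, then build one md array per partition, keeping the /boot array on 1.0 metadata (superblock at the end). The partition sizes, the swap placement and the sfdisk dialect (the old syntax used in the question) are assumptions for illustration only:

      # Identical layout on both disks: boot/root, swap, data (types fd / S / fd)
      sfdisk -uM /dev/sda << EOF
      0,4096,fd
      ,1024,S
      ,,fd
      EOF
      sfdisk -d /dev/sda | sfdisk /dev/sdb   # copy the layout to the second disk

      # /boot mirror with end-of-device metadata, which boot loaders tolerate best:
      mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
      # The large data mirror can use the default 1.2 metadata:
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

      mkfs -t ext4 -L boot /dev/md0
      mkfs -t ext4 -L data /dev/md1
      grub-install /dev/sda && grub-install /dev/sdb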

    Read the article

  • Cannot get at data in my NAS

    - by Ben
    I've got a bit of an issue that I'm hoping you can help me with. I have an Iomega ix4 as my NAS. This runs Linux, and each drive in the box has two partitions: one for the OS and RAID info, and a second for the actual data. I had it configured as RAID5. Recently one of the drives failed. At this point all of the data was still available; it was just reporting a failed drive. I had a drive of the same capacity (although not the exact same spec) which I swapped in place of the failed drive. It recognised it and started to rebuild the data protection. So far so good... or so I thought. The next day, after the data protection had finished reconstructing, the NAS was telling me that 4 new drives had been added and wanted confirmation to overwrite the data. Obviously I declined to do this. I swapped the failed drive back in again, in the hope that it would return to its previous state (data accessible, one failed disk), but it didn't - it still tells me that the NAS has 4 new drives in it. I am hopeful that the actual data is untouched, so what I need to do is get it to rebuild the RAID without touching the data on the disks. I have ssh access, and have run things like mdadm --examine to see what I can find. The mdadm.conf file has no entry in the "definitions of existing MD arrays" section. I have not run any actual rebuilding commands yet, because this is entering an area in which I am out of my depth. Please can someone advise the best way of getting my data back? Thanks.
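    A cautious, mostly read-only starting point for this situation; the device names and the partition number are assumptions (on many ix4 units the data partition is the second one on each disk), so adjust them to whatever --examine actually reports:

      # Read the md superblock on each candidate data partition and compare the
      # Array UUID, RAID level and Events counters across the four disks:
      for d in /dev/sd[abcd]2; do mdadm --examine "$d"; done

      # If the members still agree, try assembling without allowing a degraded
      # start, then mount read-only so nothing is written while copying data off:
      mdadm --assemble /dev/md1 /dev/sd[abcd]2 --no-degraded
      mount -o ro /dev/md1 /mnt/recovery

      # Avoid --force and especially --create until the data is safely copied.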

    Read the article

  • Stop Windows 7 from accessing or writing to hard drive unless "told" to by me? (More info inside...)

    - by Jeff
    A confusing question, perhaps, but bear with me. I have two internal HDDs set up in a RAID0 array which I use as mass storage. I access the drive very infrequently (once a day at most), so I have set Windows 7's power options to turn off idle disks after only 1 minute. This is fine, and the disks are turned off most of the time. However, I notice that Windows sometimes spins up the drives when I really, really don't want or need it to. This causes a 30-second delay as both drives spin up, and it locks up my system. Some examples of when this happens:

      1) When I'm installing something using Windows Installer or InstallShield; it seems as if they use the drive with the most available free space as the installer cache location... so my big RAID drive has to spin up! Most annoying.
      2) Apparently, when I open a Java-based program which resides on my system drive and has nothing to do with my RAID drive!
      3) At boot-up and shut-down time. At shutdown the drives spin up only for the computer to immediately shut down! Incredibly frustrating!

    I've already tried changing the letter of the drive, and at some points have removed the drive letter entirely, which solves the first two issues above. So my question (FINALLY!) is this: is there any way I can mark this drive as being for "storage only", so Windows basically does not see it at all until I actually invoke it somehow? Or is there any way I could set it up so that only specific programs have write access to it? For example, download managers, TeraCopy, etc.? Basically I want it to be a "ghost drive" until I'm ready to use it, and to stop Windows from spinning it up all the damn time! Thank you. :)

    Read the article

  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10) to improve persistence. However, EBS IO is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers). What I was imagining is using some form of replicated file system such that I could have:

      - a filesystem on top of a RAID0 of ephemeral volumes to maximise performance
      - all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes to ensure no data loss

    The advantages of this would be:

      - best possible IO performance for the DB server; no network delay in IO
      - decreased IO on EBS volumes (as all read IO will be done on the ephemeral volumes), so decreased cost
      - good data security, as it's backed onto redundant EBS volumes

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed file systems, e.g. GlusterFS, DRBD etc., seem to focus on replicating disks between servers - can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough, so this whole idea is redundant)? Is there some flaw in the plan?
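    One variation worth noting in this direction is plain md RAID1 with one leg marked write-mostly: reads are served from the fast leg while every write still lands on both. A sketch with assumed device names (an ephemeral RAID0 already built as /dev/md0 and an EBS-backed array as /dev/md1):

      # RAID1 whose members are themselves arrays. Reads prefer the ephemeral leg;
      # the EBS leg is write-mostly and, thanks to the bitmap, may lag slightly
      # behind via write-behind instead of stalling every write on EBS latency.
      mdadm --create /dev/md2 --level=1 --raid-devices=2 \
            --bitmap=internal --write-behind=256 \
            /dev/md0 --write-mostly /dev/md1
      mkfs.ext4 /dev/md2
      mount /dev/md2 /var/lib/mysql

    If the instance is lost, the array comes back degraded on the EBS leg alone, which is essentially the data-security property described above.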

    Read the article

  • 20 1TB drives vs. 10 2TB drives in RAID5/6 server

    - by Hunter
    Hi everyone, I will be setting up a server at work and I need some advice on the details. The setup will be one blade-type server (8-core, 16GB RAM) with two subsystems: one for the main storage, the other to back it up. I'm shooting for a 20TB array (I know it'll be less after formatting and parity drives). So is there any advantage one way or the other between 20x 1TB drives and 10x 2TB drives? I'm also not sure right now how many controllers I should have (the quote I have includes a dual-port controller). I would think two controllers for a server of this size would be a better choice than the dual-port controller (but I really don't know). And would an array of this size have any performance issues in RAID 5 or 6 (I know RAID 5 and 6 are "slower" because of all the parity calculations)? Also, these will be either WD RE3 (1TB) or RE4 (2TB) drives. Oh, and for the backup array, would it be OK to use the WD 2TB green drives (also in RAID5 or 6)?
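    For the capacity side of the comparison only, the usable space works out as follows (RAID6 reserves two drives' worth of parity, RAID5 one); a trivial sketch of the arithmetic:

      # RAID6 usable capacity = (drives - 2) x drive size
      echo "20x1TB RAID6: $(( (20 - 2) * 1 )) TB usable"   # 18 TB
      echo "10x2TB RAID6: $(( (10 - 2) * 2 )) TB usable"   # 16 TB
      # RAID5 would be (drives - 1) x size: 19 TB vs 18 TB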

    Read the article

  • mdadm lvm and ext4 slowness - How can I speed it up?

    - by beatbreaker
    I can't figure out why I'm getting such terrible times out of my mdadm array, and in particular out of the LVM partitions on it. I made the RAID with:

      mdadm --create --verbose /dev/md0 --level=5 --chunk=1024 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

      # cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      md0 : active raid5 sda1[0] sdd1[3] sdc1[2] sdb1[1]
            2930279424 blocks level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]

    I then created the physical volume, volume group, and logical volumes, and formatted the logical volumes to ext4 using the following command, which I got from http://busybox.net/~aldot/mkfs_stride.html:

      mkfs.ext3 -b 4096 -E stride=256,stripe-width=768 /dev/datavg/blah

    Now I'm confused. These LVs ran really quickly on mdadm before, but now that I've 'optimized' everything it's slower. For example, before:

      /dev/datavg/lv_audio:
       Timing buffered disk reads: 598 MB in 3.01 seconds = 198.85 MB/sec

    but now, after:

      /dev/datavg/audio:
       Timing buffered disk reads: 198 MB in 3.00 seconds = 65.96 MB/sec

    That's pitiful! What happened here? Did I not follow the instructions correctly? Can I reshape the ext4 partitions to default back to what they were? (I used the defaults before and they were fine!)
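    For reference, a short sketch of how the stride/stripe-width numbers relate to that array, plus a read-ahead comparison that often explains regressions in "Timing buffered disk reads"; the device names follow the question, and the setra value is an experiment, not a claimed optimum:

      # 1024 KiB chunk, 4-disk RAID5 => 3 data disks per stripe
      # stride       = chunk / block  = 1024 KiB / 4 KiB = 256
      # stripe-width = stride * data  = 256 * 3          = 768
      # (mkfs.ext4 -b 4096 -E stride=256,stripe-width=768 <LV> - destructive, shown for the math only)

      # hdparm-style buffered read numbers track the block device's readahead
      # closely, and logical volumes often default to a smaller value than the md:
      blockdev --getra /dev/md0
      blockdev --getra /dev/datavg/audio
      blockdev --setra 4096 /dev/datavg/audio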

    Read the article

  • Server RAID 5 failed...all I have left is my compiled website

    - by David Murdoch
    Yesterday, 2 of the 3 drives in my dev server's RAID 5 decided to die on me (with no warning). I've come to grips with the fact that my data is most likely lost unless I shell out some major bucks for professional data restoration. People, don't be an idiot like me and treat your RAID as a data backup! Luckily I published the site about 4 hours before my files went bye-bye. Is there any way to run some [magical] program to restore my compiled site to its original source files? Also: I develop on one machine with the files stored on the server... is there some Visual Studio 2010 web cache on my local machine (the one that didn't crash) that I may be able to use?

    Read the article

  • megacli forceWB

    - by Pascal den Bekker
    We are using this RAID controller: LSI Logic / Symbios Logic MegaRAID SAS 2008. There is no BBU on board - is there a way to force the write-back (WB) cache? I tried the following:

      /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp CachedBadBBU -L0 -a0

      Failed to set Write Cache OK if bad BBU on Adapter 0, VD 0.
      FW error description: The requested command has invalid arguments.

    Can anyone help? Cheers, Pascal
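    A sketch of the cache-policy commands usually combined for this, assuming adapter 0 / logical drive 0 as in the question; MegaCli property spellings differ between builds, so verify them against your MegaCli version's help output:

      # Request write-back on the virtual drive...
      /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp WB -L0 -a0
      # ...and allow it to stay write-back even with a bad or absent BBU:
      /opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp CachedBadBBU -L0 -a0
      # Confirm the resulting cache policy:
      /opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp Cache -L0 -a0

    Bear in mind that write-back without a BBU means the contents of the controller cache are lost on power failure.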

    Read the article

  • Is splitting IDE HDDs between the Primary and Secondary channels faster?

    - by earlz
    Hello, I'm doing RAID 0 on two IDE hard drives (yes, this is old hardware). Will the hard drives be faster if I attach them separately, so that one is on the Primary IDE controller and the other is on the Secondary IDE controller? Or would that be no better than having them both on the Primary IDE channel as master and slave?

    Read the article

  • Adaptec 5805 does not start after a reboot

    - by Rakedko ShotGuns
    After rebooting the system, the controller does not come up; it only works if the computer is fully shut down and powered off first. I recently updated the firmware to "Adaptec RAID 5805 Firmware Build 18948". How can I fix this problem? The log is added below.

      Configuration summary: Server name.....................raid_test Adaptec Storage Manager agent...7.31.00 (18856) Adaptec Storage Manager console.7.31.00 (18856) Number of controllers...........1 Operating system................Windows

      Configuration information for controller 1: Type............................Controller Model...........................Adaptec 5805 Controller number...............1 Physical slot...................2 Installed memory size...........512 MB Serial number...................8C4510C6C9E Boot ROM........................5.2-0 (18948) Firmware........................5.2-0 (18948) Device driver...................5.2-0 (16119) Controller status...............Optimal Battery status..................Charging Battery temperature.............Normal Battery charge amount (%).......37 Estimated charge remaining......0 days, 16 hours, 12 minutes Background consistency check....Disabled Copy back.......................Disabled Controller temperature..........Normal (40C / 104F) Default logical drive task priority.High Performance mode................Dynamic Number of logical devices.......1 Number of hot-spare drives......0 Number of ready drives..........0 Number of drive(s) assigned to MaxCache cache.0 Maximum drives allowed for MaxCache cache.8 MaxCache Read Cache Pool Size...0 GB NCQ status......................Enabled Stay awake status...............Disabled Internal drive spinup limit.....0 External drive spinup limit.....0 Phy 0...........................No device attached Phy 1...........................No device attached Phy 2...........................No device attached Phy 3...........................1.50 Gb/s Phy 4...........................No device attached Phy 5...........................No device attached Phy 6...........................No device attached Phy 7...........................No device attached Statistics version..............2.0 SSD Cache size..................0 Pages on fetch list.............0 Fetch list candidates...........0 Candidate replacements..........0 69319...........................31293

      Logical device 0: Logical device name............. RAID level......................Simple volume Data space......................148,916 GB Date created....................09/19/2012 Interface type..................Serial ATA State...........................Optimal Read-cache mode.................Enabled Preferred MaxCache read cache setting.Enabled Actual MaxCache read cache setting.Disabled Write-cache mode................Enabled (write-back) Write-cache setting.............Enabled (write-back) Partitioned.....................Yes Protected by hot spare..........No Bootable........................Yes Bad stripes.....................No Power Status....................Disabled Power State.....................Active Reduce RPM timer................Never Power off timer.................Never Verify timer....................Never Segment 0.......................Present: controller 1, connector 0, device 0, S/N 9RX3KZMT Overall host IOs................99075 Overall MB......................4411203 DRAM cache hits.................71929 SSD cache hits..................0 Uncached IOs....................29239 Overall disk failures...........0 DRAM cache full hits............71929 DRAM cache fetch / flush wait...0 DRAM cache hybrid reads.........3476 DRAM cache flushes..............-- Read hits.......................0 Write hits......................0 Valid Pages.....................0 Updates on writes...............0 Invalidations by large writes...0 Invalidations by R/W balance....0 Invalidations by replacement....0 Invalidations by other..........0 Page Fetches....................0 0...............................0 73..............................10822 8...............................3 46138...........................4916 27184...........................15226 20875...........................323 16982...........................1771 1563............................5317 1948............................2969

      Serial attached SCSI: Type............................Disk drive Vendor..........................Unknown Model...........................ST3160815AS Serial Number...................9RX3KZMT Firmware level..................3.AAD Reported channel................0 Reported SCSI device ID.........0 Interface type..................Serial ATA Size............................149,05 GB Negotiated transfer speed.......1.50 Gb/s State...........................Optimal S.M.A.R.T. error................No Write-cache mode................Write back Hardware errors.................0 Medium errors...................0 Parity errors...................0 Link failures...................0 Aborted commands................0 S.M.A.R.T. warnings.............0 Solid-state disk (non-spinning).false MaxCache cache capable..........false MaxCache cache assigned.........false NCQ status......................Enabled Phy 0...........................1.50 Gb/s Power State.....................Full rpm Supported power states..........Full rpm, Powered off 0x01............................113 0x03............................98 0x04............................99 0x05............................100 0x07............................83 0x09............................75 0x0A............................100 0x0C............................99 0xBB............................100 0xBD............................100 0xBE............................61 0xC2............................39 0xC3............................69 0xC5............................100 0xC6............................100 0xC7............................200 0xC8............................100 0xCA............................100 Aborted commands................0 Link failures...................0 Medium errors...................0 Parity errors...................0 Hardware errors.................0 SMART errors....................0

      End of the configuration information for controller 1

    Read the article

  • How to address a recurring low temperature error seen at every boot-up?

    - by GregC
    After updating to the latest controller firmware, I started receiving the following error messages:

      LSI 2208 ROC: Temperature sensor below error threshold on enclosure 1, sensors 5 thru 7

    Is this something I should worry about, or is it a red herring? Details: I have a Sans Digital NexentaSTOR 24-disk JBOD enclosure connected to an LSI 9286-8e RAID-on-Chip controller with two SAS cables. Seagate ES.2 3TB SAS hard drives populate every bay in the enclosure.

    Read the article
