Search Results

Search found 1607 results on 65 pages for 'disks'.

Page 6 of 65

  • In Windows 7 power management, is it possible to set different sleep settings for different SATA disks?

    - by Ben Voigt
    I'm having an issue with Windows 7 either freezing up or generating a BSOD coming out of sleep. I suspect that it is related to my boot/OS drive, an OCZ Vertex SE SSD, because numerous other Vertex users have reported sleep problems. Notably, if I put the computer to sleep, it almost always wakes correctly. If it goes to sleep after a timeout, it almost always BSODs. I disabled timed sleep and now it freezes when left unattended. My next step is to disable "Put hard disks to sleep after X minutes", but I'd like to change this setting only for the SSD and not for the rotating data disks, which I would like to spin down normally. Does anyone know a place to configure sleep on a per-disk basis? I don't need to set different timeouts on different disks (although that would be nice); simply being able to set "this disk sleeps" and "sleep is disabled for this disk" would be great. Additional system information: Windows 7 Ultimate x64, Core i5 - P55 chipset, Intel RST drivers are installed. One SSD, two rotating HDDs, and a DVD-RW drive are all connected to the Intel SATA ports. I could potentially move some of these to my motherboard's other SATA controller if that would help.
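
    As far as I know, the stock Windows 7 power plans expose only a single, global "turn off hard disk after" timer rather than per-disk control. A minimal command-line sketch of disabling that global timer as a stopgap might be:

        rem Disable the global "turn off hard disk after" timer (0 = never), for AC and battery:
        powercfg /change disk-timeout-ac 0
        powercfg /change disk-timeout-dc 0

    This trades the data disks' spin-down for stability, so it only works around the per-disk question rather than answering it.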

    Read the article

  • How are large companies handling the storage and cataloging of software installation disks?

    - by CT
    I just started working in the IT department of a small-to-medium sized construction company with about 200 users. One of my responsibilities is to set up and configure all new machines that come in. I would like suggestions on how best to manage the installation disks and licenses of the software that comes with them, plus any additional licensed software such as AutoCAD, Photoshop, etc., as well as peripheral driver disks for printers and scanners. Right now every machine is associated and labeled with an asset ID. All asset IDs are kept in a spreadsheet with applicable serial numbers, current user, warranty info, and software licenses. The physical disks are then kept in folders in a cabinet, each folder marked with the asset ID as well as the current user. My problem with this is that the system was not maintained very well before I came to the company. There are plenty of software folders with no asset IDs labeled on them, plenty of missing software folders (most likely many of the unlabeled ones), and folders with names but no asset IDs. Machines get switched to different users without the folders and spreadsheet being updated. I am not saying this method would necessarily be bad if it were better implemented and managed, but I am going to have to take a lot of time to fix the system currently in place. I thought I would ask the community first how others manage this process, in case there are easier, more efficient ways of doing so. Thank you.

    Read the article

  • How to partition and format multiple disks using a batch script?

    - by chandu
    I am trying to format 'n' number of disks using a batch script. My script runs: diskpart /s "abc.txt", where abc.txt contains:

        sel disk 1
        create part primary
        format FS=NTFS label=label2 quick compress

    My problem is that I want to loop the commands in abc.txt over however many disks exist, but I cannot pass an argument like %1 into abc.txt, since it is a plain .txt file, and diskpart /s only takes a .txt file as its argument. How can I overcome this? Could anybody please help?
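
    One common workaround (a sketch only; the disk numbers, label and NUM_DISKS value are assumptions to adjust, and it will destroy data on the selected disks) is to have the batch file generate a temporary diskpart script per disk, since diskpart scripts have no parameter substitution of their own:

        @echo off
        rem Sketch: generate a per-disk diskpart script on the fly and run it.
        rem Assumes disks 1..%NUM_DISKS% exist and are safe to wipe - adjust before use!
        set NUM_DISKS=3
        for /L %%d in (1,1,%NUM_DISKS%) do (
            > "%TEMP%\dp_disk%%d.txt" echo sel disk %%d
            >> "%TEMP%\dp_disk%%d.txt" echo create part primary
            >> "%TEMP%\dp_disk%%d.txt" echo format FS=NTFS label=label2 quick compress
            diskpart /s "%TEMP%\dp_disk%%d.txt"
        )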

    Read the article

  • Are Blu-ray disks the cheapest storage medium per GB?

    - by oshirowanen
    The question is as simple as that, really: are Blu-ray disks the cheapest storage medium per GB? I am recording video that uses about 32 GB per day, so a month of that would be almost 1 TB, and a year around 12 TB. I want to store at least a year's worth, with the possibility of more if needed. To me it seems that cheap Blu-ray disks would be the cheapest solution, but I wanted to get this confirmed.

    Read the article

  • VMWare Workstation Dev Machine Disks: one fast disk or four eco-friendly disks in RAID?

    - by Avi
    I'm building a new dev computer. It will be running a few VMWare Workstation virtual machines - a dev machine running VS 2010, a build machine, a version-control machine, a web server for testing, and a "personal" machine running Office etc. I'll be connecting the computer to my stereo, so I'll also be running iTunes (possibly on a dedicated VM), and I want the computer to be a silent one. I'll probably use an Antec P183 case. I was advised on Server Fault to use RAID 10 for performance. RAID 10 uses 4 disks. So, my question is as follows: in terms of heat, noise, reliability, warranty, price, capacity and performance, what would you suggest: a RAID 10 array of four eco-friendly disks such as the $94 1TB Western Digital Caviar Green, or one high-performance disk such as the 2TB Western Digital Caviar Black at $280?

    Read the article

  • A space-efficient guest filesystem for grow-as-needed virtual disks?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks. Since they only grow as needed, this makes them ideal for fast backup, overallocation, and creation speed. Since file systems are usually designed for physical disks, they tend to use the whole available area [1] in order to increase speed [2] or reliability [3]. I'm searching for a filesystem that does the exact opposite: one that tries to touch the minimum number of blocks needed, through aggressive block reuse. I would happily trade some performance for space efficiency. There is already a similar question, but it is rather general; I have a very specific goal: space efficiency.
    [1] Like page caching uses all the free physical memory.
    [2] Canonical example: online defragmentation.
    [3] Canonical example: snapshotting.

    Read the article

  • Mimicking Google's Persistent Disks -- Is this a logical FreeBSD disaster recovery strategy?

    - by Casey Jordan
    I am looking into FreeBSD to provide a more comprehensive backup and disaster recovery strategy for database servers. Ideally I want to mimic what Google is doing with "Persistent Disks": https://developers.google.com/compute/docs/disks#snapshots I am hoping someone who knows more about FreeBSD can validate these ideas/questions: I have read that FreeBSD can take instant disk snapshots; therefore, if our databases are brought to a consistent state (block all writes and flush buffers to disk), I would assume I could take snapshots every hour without service interruption of more than a few seconds. Is this true? Is there a way to take snapshots and back them up offsite easily? Can this be done incrementally, so as to save on the amount of disk space actually used? If a rollback needed to be done, how long would it typically take? Is a rollback also instantaneous? Thanks!
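
    For what it's worth, if the data lives on ZFS (an assumption - the question doesn't say whether UFS or ZFS is in use, and the pool/dataset/host names below are made up), the snapshot, incremental offsite copy and rollback steps might look roughly like this:

        # take a near-instant snapshot once the database reports a consistent state
        zfs snapshot tank/db@hourly-0300
        # send only the changes since the previous snapshot to an offsite host
        zfs send -i tank/db@hourly-0200 tank/db@hourly-0300 | ssh backuphost zfs receive backuppool/db
        # rolling back to the latest snapshot is also effectively instantaneous
        zfs rollback tank/db@hourly-0300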

    Read the article

  • Windows server's HDDs spin down daily/nightly - does it make sense?

    - by Riccardo
    A Windows Server 2003 R2 machine has the following hard disk configuration:
    - 3 internal hard disks attached to a 3Ware unit, configured as RAID 1 plus a spare unit
    - 3 external USB backup disks: 2 Verbatim 1TB (Samsung HD103SI) + 1 Western Digital 1TB (WD10EADS)
    The server runs 24h a day, 365 days per year; however, during the day the server/user usage is limited to the internal hard disks, and at night there is no user usage apart from scheduled maintenance tasks, so the server is basically idle from 7PM to 8AM except for nightly backups (a few hours). I was wondering whether it makes more sense to: (a) let Windows manage power savings, allowing the disks to spin down accordingly, OR keep the disks always on to avoid premature wear due to continuous spin up/down; or (b) leave the internal disks always on and force the external disks to power down while idle (this requires third-party tools, such as Verbatim's Green Button utility). Your thoughts?

    Read the article

  • High disk time on SQL Server

    - by Patrik
    Hi. We have a dedicated SQL Server 2008 R2 Enterprise Edition machine. The setup is:
    - D: (data files) - stored on local SSDs (RAID 10; not the same disks as the log files)
    - E: (log files) - stored on local SSDs (RAID 1; not the same disks as the data files)
    - F: (transaction log backups) - stored remotely on a SAN
    Today we moved our log files to new disks (from F: to E:), i.e. from a shared SAN volume to dedicated local disks. What then happened was that "disk time", "avg. transfer time" and "avg. disk write queue length" increased on the volume holding the data files (D:), not on the volume where the log files are located. The data volume and log volume do not share disks; however, they do share the same controller card. "Disk idle time" is low for all volumes. One thought, of course, is that the controller card might be overloaded, but we need more ideas on where the problem might be.
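
    As a small aside, the counters in question can be watched from the command line while reproducing the load; a sketch (the instance name assumes the D: volume described above):

        typeperf "\LogicalDisk(D:)\Avg. Disk sec/Read" "\LogicalDisk(D:)\Avg. Disk sec/Write" "\LogicalDisk(D:)\Current Disk Queue Length" -si 5

    On SSDs, sustained per-operation latencies well above a few milliseconds would tend to point at the shared controller or its cache rather than the drives themselves.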

    Read the article

  • Can't start system due to mdadm failing

    - by user101212
    I used to have a 5-disk RAID 5 partition, all working very well. I then decided to add 3 more disks to it, for a total of 8 equal disks. I opened Webmin and just asked it to add the disks. Then I realized the three new disks had NTFS partitions, which mdadm didn't complain about, so I tried to stop the grow operation in order to remove the Windows partitions first. I tried to remove a disk using the same Webmin, but (as you might guess, and you can call me a fool...) the system became unstable. After restarting the system, I started receiving these messages: "udev[126]: timeout: killing '/sbin/mdadm --incremental /dev/sdh1' [311]" "udev[124]: timeout: killing '/sbin/mdadm --detail --export /dev/md0' [316]" I formatted the system disk, hoping to get a system up and running. I did that with all the RAID disks disconnected, so everything was fine. I then reconnected the disks, which was also OK, and finally installed mdadm using apt-get. On reboot, the system found mdadm's intention of growing the array, so the same messages appear again. In other words: I can't even reach a command prompt to do something. Any ideas of what to do? I believe I could turn off the system, disconnect the disks and look for the mdadm.conf file. Would that be a good idea? I'm no Linux expert, so I'm really lost here.
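
    Before changing anything, it may help to see what the array members actually report, from a rescue/live environment with the array not assembled (a sketch only; the device names are guesses based on the eight-disk setup described, and this only inspects and records state - recovering the interrupted grow itself is a separate, delicate step):

        # inspect the md superblock on each member: array state, reshape position, etc.
        mdadm --examine /dev/sd[b-i]1
        # stop the half-assembled array so udev stops retrying it
        mdadm --stop /dev/md0
        # once the correct members are known, record them and update the initramfs
        mdadm --examine --scan >> /etc/mdadm/mdadm.conf
        update-initramfs -u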

    Read the article

  • Resize Win2003 system+boot partitions to bigger disks & different controller?

    - by ane
    Have an old Win2003 server with 1 SCSI hard drive partitioned as follows:
    - D: boot (includes D:\ntldr, boot.ini, etc.)
    - C: system (includes C:\WINDOWS)
    Want to move the whole system to new hardware with bigger drives and different controllers; specifically, C: to a 300GB SAS drive and D: to a 2TB SATA drive. Tried:
    - VMWare Converter - VMWare Server - Diskpart. Result: Diskpart refuses to resize system or boot disks.
    - VMWare Converter - VMWare Server - GParted. Result: will not boot (see http://serverfault.com/questions/219868/resize-ntfs-system-partitions-with-gparted ).
    - Attach original VMWare disk to a duplicate VMWare install - Diskpart. Result: will not boot (goes to Directory Services Restore mode).
    - Backup Exec System Recovery Server Edition 2010 with Restore Anywhere (tried restoring both to VMWare and to the bare system, without VMWare). Result: Windows boot error: "Could not read from the selected boot disk. Check boot path." Supposedly this is a boot.ini problem, so I tried bootcfg /rebuild from the recovery console; it says it can't find a Windows partition, so it can't rebuild.
    Thought about Ghost, but it's completely different hardware/controllers that we're restoring to, so I doubt it would boot. Reinstalling Windows from scratch is not an option due to critical custom software heavily embedded on the original machine. Has anyone been in a similar situation (with unusual boot/system partitions) before and figured out how to resize onto different disks?

    Read the article

  • How many disks is too many in this RAID 5 configuration??

    - by Tom
    HP 2012i SAN, 7 disks in RAID 5 with 1 hot spare; it took several days to expand the volume from 5 to 7 300GB SAS drives. I'm looking for suggestions about when and how I would determine that having 2 volumes in the SAN, each with RAID 5, would be better. I can add 3 more drives to the controller someday; the SAN is used for ESX/vSphere VMs. Thank you...

    Read the article

  • Win 8: Adding a boot volume to an MBR dynamic disk [NOT about changing to basic disks]

    - by Stilez
    (This is NOT about converting to basic disks. In this question the disk stays dynamic but becomes bootable.) There doesn't seem to be a clear, well-stated answer that I can find for the question "What are the criteria for Windows 8 to successfully boot from an MBR dynamic disk?", or "How do I fix a dynamic MBR partition that's failing to boot?" I've tried to educate myself but can't find the crucial information to clear it all up. My existing HDD/SSD setup:
    - DISK 0 ~ 60GB SSD/MBR/basic: (350MB recovery)(60GB Windows 8, bootable)
    - DISK 1 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB unallocated)(410GB mirrored data)
    - DISK 2 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB unallocated)(410GB mirrored data)
    - DISKS 3, 4, 5: (ignored for simplicity: 2xHDD RAID1 + caching SSD)
    I'm heavy duty on data crunching and virtualisation, just maxed out 32GB RAM @ 2133 and moved to a 4960X + 64GB. Disk 0 is a pure system disk of little value, and virtualisation runs off mirrored SSDs (Samsung 840 Pro 512 x 2) for double-speed reading, so they snapshot in reasonable time. I'm using 4 SATA3 ports and the board only has two decent Intel ports (the onboard Marvell ones are poorer quality). I'm wary of choosing between LSI, HighPoint and other 3rd-party controllers as I'm unfamiliar with the maze of decent RAID cards (that's a whole other issue!). I want to cut down my SSD needs by moving the boot volume and caching volume to the 840 Pros, giving a setup with 2 fewer SSDs:
    - DISK 0 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB boot)(410GB mirrored data)
    - DISK 1 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(30GB cache for the ICH10R mirror)(30GB temp)(410GB mirrored data)
    - DISKS 2, 3: (2xHDD RAID1)
    Intel's RST allows this, Win 8 allows booting off an MBR/dynamic disk, and the two 60GB SSDs are hardly the fastest SSDs anyway; they'll get repurposed. Moving the caching volume is easy. Moving the boot volume has me stumped. The difficulty is that I'm hitting a wall of knowledge here. I have a UEFI Asus motherboard with a previous traditional MBR/basic boot disk, and I want it to boot from a disk and volume that are MBR/dynamic. The disk copy is physically OK (Partition Wizard Server will copy to dynamic volumes) but then hits a light blue 0xc000000e boot error. No real surprise - I expected to have some boot fixing to do, but had expected Windows to boot-fix it (all drivers exist), or the usual manual fixes to work. Specifically, I don't know enough to know what has to be manually checked and perhaps corrected for the disk to boot (legacy/UEFI/BIOS, odd partitions, boot tables, disk IDs, hidden boot files, oh my!), or whether I need to change any of the Secure Boot/UEFI/legacy settings in the BIOS, convert a 512GB SSD to basic and then back to dynamic once it works, or whether the issue is pure OS config using "diskpart", "bootsect" and "bootrec" from the Win8 DVD. The old system disk still boots, but I don't know enough to figure out what to fix to make the system boot the way I want. The answers probably aren't hard, but the real issue is my confusion and missing information. Thanks for helping!
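
    For the 0xc000000e error specifically, the usual starting point (a sketch only, run from the Win8 DVD's recovery command prompt; the drive letters seen in that environment may differ from C:) would be something like:

        rem rebuild the BCD store and boot files so they point at the copied Windows volume
        bootrec /scanos
        bootrec /rebuildbcd
        bcdboot C:\Windows /s C:
        rem if the MBR boot code or active partition flag is suspect:
        bootsect /nt60 SYS

    Whether bootrec/bcdboot handle a dynamic-disk boot volume the same way they handle a basic one is exactly the uncertainty in the question, so treat this as a checklist of things to try rather than a known fix.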

    Read the article

  • Reasonable size for "filesystem reserved blocks" for non-OS disks?

    - by j-g-faustus
    When creating a file system (mkfs ...), the file system reserves 5% of the space for its own use because, according to man tune2fs: "Reserving some number of filesystem blocks for use by privileged processes is done to avoid filesystem fragmentation, and to allow system daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem." But with large drives 5% is quite a lot of space. I have 4x1.5 TB drives for data storage (the OS runs on a separate disk), so the default setting would reserve 300 GB, which is an order of magnitude more than the entire OS drive. The reserved space can be tweaked, but what is a reasonable size for a data disk? Can I set it to zero, or could that lead to issues with fragmentation?
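
    For what it's worth, the reservation can be changed at any time without reformatting; a sketch (the device name is just an example):

        # reduce the root-reserved blocks to 1% on an existing ext3/ext4 data filesystem
        tune2fs -m 1 /dev/sdb1
        # or set it at creation time (0% is common for pure data volumes)
        mkfs.ext4 -m 0 /dev/sdb1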

    Read the article

  • RAID 10 not being found by installer

    - by dko
    I had Ubuntu installed with RAID 0 enabled. I have since added 2 more disks, gone into the BIOS, deleted the old array and created a new RAID setup using RAID 10 (a total of 4 disks now). However, during the install of Ubuntu Server it asks if it should activate the RAID SATA disks, and I tell it yes. The next step shows up blank for available disks when determining where to mount the root, etc. Anyone have a clue as to why this would be?

    Read the article

  • How to disable automatic and forced fsck on disks in a Linux software RAID?

    - by mit
    This is the /etc/fstab entry for a RAID volume /dev/md4 that is controlled with mdadm and Webmin on an Ubuntu 10.04 64-bit server:
    /dev/md4 /mnt/md4 ext3 relatime 0 0
    We tried to switch off the automatic forced fsck on reboots by setting the last parameter of the line to 0 (zero), as we prefer to implement our own scheduled fsck routine. But we found that the forced, automatic check still occurs on the underlying real disks, let's say sdb1 and sdc1. How can we switch that off?
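
    If what is being triggered is the mount-count/interval based check stored in the ext3 superblock (an assumption; the device to target is whichever filesystem fsck is actually complaining about), a sketch of turning it off would be:

        # disable the "every N mounts" and "every N days" forced checks on the filesystem
        tune2fs -c 0 -i 0 /dev/md4
        # verify the current counters
        tune2fs -l /dev/md4 | grep -iE 'mount count|check'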

    Read the article

  • Program to remove exact duplicate files while caching search results

    - by John Thomas
    We need a Windows 7 program to remove/check duplicates, but our situation is somewhat different from the standard one for which there are enough programs. We have a fairly large static archive (collection) of photos spread over several disks. Let's call them Disks A..M. We also have some disks (let's call them Disks 1..9) which contain some duplicates of photos that are to be found on Disks A..M. We want to add to our collection new disks (N, O, P... and so on) which will contain the photos from Disks 1..9, but, of course, we don't want to have any photo two (or more) times. Theoretically the task could be solved with a regular duplicate-file remover, but the time needed would be very long. Ideally, as far as I can see now, the real solution would be a program which scans Disks A..M, stores the file sizes/hashes of the photos in an indexed database/file(s), and then checks the new disks (1..9) against this database. However, I am having a hard time finding such a program (if one exists). Other things to note:
    - we consider that Disks A..M (the collection) don't have any duplicates on them
    - the file names might have been changed
    - we aren't interested in the approximate (fuzzy) comparison found in some photo-comparison programs; we hunt for exact duplicate files
    - we aren't afraid of the command line :-)
    - we need it to work on Win7/XP
    - we prefer (of course) freeware
    TIA for any suggestions, John Th.

    Read the article

  • VMWare Setup with 2 Servers and a DAS (DELL MD3220)

    - by Kumala
    I am planning a VMware-based setup consisting of two VMware servers (2 CPUs, 256GB memory) and a DAS (Dell MD3220 with 24x900GB disks). Half of the virtual machines will be running MS SQL databases (application, SharePoint, BI) and the other half will be file services and IIS. To increase the storage capacity, we'll be adding an MD1220 enclosure with another 24x900GB disks to the MD3220. Both DAS units will have 2 controllers. Our current measured load is 1000 IOPS average, 7000 IOPS peak (the peaks happen maybe twice per hour). We are in the planning phase now and are looking at the proper setup of the disks. The intention is to set up one of the DAS units with RAID 10 only and the other with RAID 5; that will allow us to put each application on the DAS that best suits its performance needs. The question is how best to partition the two DAS units to get the best possible IOPS/MBps; each DAS will have to have 2 hot spares.
    For the RAID 5 setup: generally speaking, would it be better to have one single disk group across all 22 disks (24 minus 2 hot spares) with both controllers assigned to that one disk group, or is it better to have 2 disk groups of 11 disks each, each assigned to one of the two controllers?
    Same question for the RAID 10 setup. The plan is: 2 disks for logs (RAID 1), 2 hot spares and 20 disks for RAID 10. Option 1: 5 groups of 4 disks (RAID 10), with two groups assigned to one controller and three groups to the other. Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group.
    I assume there is no absolute right or wrong and that it all depends very much on the specific application behaviour, so I am looking for some general ideas on the pros and cons of the different options. If there are other meaningful options, feel free to propose them.
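
    A rough back-of-the-envelope check of the two layouts (a sketch only; it assumes roughly 140 IOPS per 10k-rpm 900GB SAS drive, a write penalty of 2 for RAID 10 and 4 for RAID 5, and a 70/30 read/write mix - the actual drive speed and mix may differ):

        front-end load X at 70% reads / 30% writes
        back-end IOPS, RAID 10: 0.7*X + 0.3*X*2 = 1.3*X  ->  7000 peak becomes ~9100 back-end
        back-end IOPS, RAID 5:  0.7*X + 0.3*X*4 = 1.9*X  ->  7000 peak becomes ~13300 back-end
        20-disk RAID 10 group:  ~20 * 140 = ~2800 back-end IOPS from spindles
        22-disk RAID 5 group:   ~22 * 140 = ~3080 back-end IOPS from spindles

    On those assumptions, either layout absorbs the 1000 IOPS average easily, but the 7000 IOPS peaks would be served largely from controller cache rather than spindles, which is worth weighing in the one-group vs. two-group decision.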

    Read the article

  • Tips on self-learning boot-time fundamentals (grub, disks, partitions, LVMs, etc)?

    - by Harry
    Is there any good resource I can use to self-learn all the low-level system administration details of GRUB, GRUB 2, disks, partitioning, LVM, etc.? I'm comfortable with sysadmin tasks post-boot, but I lack knowledge of both the fundamentals and the actual details of everything that happens during boot on a Linux system such as Fedora. Any recommendations on how to set up a testbed on my desktop for learning the above? I may not be able to get another machine / hard disk, so I may have to rely on something like VirtualBox, but I don't know if there are other (better) options... so I'm asking for tips from those who have self-learned / mastered this track themselves.
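
    VirtualBox works well for this kind of experimentation, since extra virtual disks are cheap to create and safe to destroy; a rough command-line sketch (the VM name, OS type, controller name and sizes here are made up):

        # create a throwaway VM with two small disks to practice partitioning, LVM and GRUB on
        VBoxManage createvm --name bootlab --ostype Fedora_64 --register
        VBoxManage storagectl bootlab --name SATA --add sata
        VBoxManage createhd --filename bootlab-disk1.vdi --size 8192
        VBoxManage createhd --filename bootlab-disk2.vdi --size 8192
        VBoxManage storageattach bootlab --storagectl SATA --port 0 --device 0 --type hdd --medium bootlab-disk1.vdi
        VBoxManage storageattach bootlab --storagectl SATA --port 1 --device 0 --type hdd --medium bootlab-disk2.vdi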

    Read the article

  • How to re-do the hard disks in a WD World Book Edition II?

    - by jfmessier
    I recently purchased a WD World Book II, a 2 TB one. I call it the "White Box". It has two 1 TB drives, which came in a RAID 1 configuration, only giving me about 1 TB. I could not delete the RAID array, so I put the drives in a Linux box. But I also deleted the partitions on the disks entirely, and now I cannot even get the existing RAID array back on the WD White Box. The drives are fine, but I cannot get them to work in the WD White Box. My goal was to get back to a real 2 TB of storage space. If I cannot get those drives back into the White Box, I can re-use them elsewhere, but this would mean wasting the firmware and network connection. After the fact, I read that the network performance is rather poor anyway. Thanks :-)

    Read the article

  • Commercial NAS RAID1 disks moved to a software RAID system?

    - by Rolnik
    I've got a couple of commercial NAS boxes and I'm wondering if they (a ReadyNAS Duo and a D-Link DNS-323), or any other NAS, are suitable for having their RAIDed disks moved to a software-based NAS. To be specific, I'm a big fan of the (largely) Debian-based Ubuntu. Can the aforementioned NAS drives be migrated to Ubuntu (e.g. using the mdadm Linux command)? Secondly, is there any commercial NAS that can be migrated over? Incidentally, here is a link to somebody who succeeded in such a migration: http://www.linuxquestions.org/questions/slackware-14/moving-raid1-drives-into-computer-with-same-md-numbers-862312/ The specific scenario I'd like to prepare for is the eventual (sudden) death of one of the NAS motherboards.
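
    When disks from such a NAS are attached to an Ubuntu box, the first check is whether mdadm recognises their superblocks at all; a sketch (the device names are just examples):

        # look for md superblocks written by the NAS firmware
        sudo mdadm --examine /dev/sdb1 /dev/sdc1
        # try to assemble whatever arrays are found, then check the result
        sudo mdadm --assemble --scan
        cat /proc/mdstat

    Some NAS firmwares use plain Linux md RAID 1 and assemble cleanly this way; others add vendor-specific partition layouts that need extra steps.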

    Read the article

  • How do I know if my disks are being hit with too many I/O reads or writes or both?

    - by Mark F
    I know a bit about disk I/O and the bottlenecks related to it, especially where databases are concerned. How do I really know what the maximum I/O numbers for my disks will be? What metric might be available to me for working out, roughly (but it needs to be a good approximation), how much I/O capacity (if you will) I have left? I've seen it before where things are bubbling along nicely and then, all of a sudden, everything grinds to a halt, and it ends up being an I/O-bound problem. Is there a better way to predict when I/O is reaching its limits? This article was interesting but didn't give the answer I desire. So, is my best bet just looking at 'CPU I/O wait'? There must be a more proactive method than this.
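
    On Linux, one concrete way to watch for the ceiling is extended iostat output while the system is under its normal load (a sketch; assumes the sysstat package is installed):

        # extended per-device statistics every 5 seconds: watch await (latency), avgqu-sz and %util
        iostat -x 5
        # per-process view of who is generating the I/O, where available
        iotop -o

    For a single spindle, sustained %util near 100% together with climbing await is the classic sign of hitting the practical limit; for RAID arrays and SSDs, %util is less reliable, so the latency trend is the better guide.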

    Read the article
