Search Results

Search found 25442 results on 1018 pages for 'disk size'.


  • Dell R320 RAID 10 with CacheCade

    - by Geekman
    I'm looking for a higher-performance build for our 1RU Dell R320 servers, in terms of IOPS. Right now I'm fairly settled on a 4 x 600 GB 3.5" 15K RPM SAS RAID 1+0 array. This should give good performance, but if possible I also want to add an SSD cache into the mix, and I'm not sure there's enough room. According to the tech specs, there are only four 3.5" drive bays available in total. Is there any way to fit at least a single SSD alongside the 4 x 3.5" drives? I was hoping there's a special spot to put the cache SSD (though from memory, I doubt there'd be room). Or am I right in thinking that the cache drives are simply drives plugged in "normally" just like any other drive, but nominated as CacheCade drives in the PERC controller? Are there any options for having the 4 x 600 GB RAID 10 array and the SSD cache drive, too? Based on the tech specs (with up to 8 x 2.5" drives), maybe I need to use 2.5" SAS drives, leaving another four bays spare and plenty of room for the SSD cache drive. Has anyone achieved this using 3.5" drives, somehow?
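    On the last point: where CacheCade is available on the controller, the cache SSDs are ordinary drives in ordinary bays that get nominated as cache at the controller, so on a 4-bay 3.5" chassis the usual way out is the 8 x 2.5" backplane option you mention. Purely as an illustration of the "nominate a drive as cache" step, here is a rough sketch with LSI's MegaCli; the enclosure:slot numbers are assumptions and the exact CacheCade switch should be verified against your controller's MegaCli documentation before use:

        MegaCli -PDList -aAll                       # find the SSD's enclosure:slot
        MegaCli -CfgCacheCadeAdd -Physdrv[32:4] -a0 # nominate it as a CacheCade device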

    Read the article

  • SQL Server Replication Agent priority

    - by Wikser
    Every hour a server replicates SQL Server data with an external web server. During this time, which takes about 2-5 minutes, the database slows down seriously; colleagues who work with the front-end applications on another terminal server regularly complain about it. The databases are also synchronously mirrored (via SQL Server mirroring, not replication) to a third server. Note that 99% of the data is replicated outgoing, so the server should rarely need to update its own data. As the (merge and transactional) replication tasks are not time-critical, I would like to reduce their priority or somehow slow them down so they don't affect database performance as much. How would you implement that?
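    One low-tech angle, offered only as a sketch and not a tested fix: the replication agents are ordinary executables, so instead of the default SQL Agent job step they can be launched at reduced OS scheduling priority. Every path, server and publication name below is a placeholder, and note this lowers CPU priority only, not I/O priority:

        :: run the Merge Agent at low priority (all names and paths are hypothetical)
        start /low /wait "MergeAgent" ^
          "C:\Program Files\Microsoft SQL Server\100\COM\replmerg.exe" ^
          -Publisher MYSERVER -PublisherDB MyDB -Publication MyPub ^
          -Subscriber WEBSRV -SubscriberDB MyDB -Distributor MYSERVER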

    Read the article

  • Why does dstat show zeroes for disk activity on my virtual private server running Ubuntu?

    - by Jonathan Berger
    I'm trying to monitor the number of disk reads and writes on my VPS (Rackspace in this case) running Ubuntu 9.04. I realize there are many tools to do this, but when using dstat 0.7 I tried the following command: dstat -d The output is just two columns of zeroes even when I upload a large file via scp that should be causing a large number of disk writes. Why is this, and how do I get dstat to correctly display the number of disk reads and writes?
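    If dstat simply isn't seeing the virtual block devices, naming them explicitly often helps. On Xen-based VPSes the disks typically appear as xvda/xvdb rather than sda - treat those device names as an assumption and check /proc/diskstats first:

        cat /proc/diskstats          # confirms which block devices the kernel is counting
        dstat -d -D xvda,xvdb        # tell dstat exactly which disks to report on
        iostat -dx 2                 # cross-check with sysstat, if installed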

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, the computer or operating system normally represents it in terms of powers of 1024 - a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or gigabytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". It feels as though using the binary definition of "gigabyte" artificially inflates the number of bytes a unit stands for, forcing drive makers either to pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or to pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
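    The "931 GB" and "1.1 terabytes" figures follow directly from the two conventions; a quick check at the shell (assuming bc is installed) makes the arithmetic concrete:

        echo 'scale=2; 10^12 / 1024^3' | bc     # a decimal terabyte expressed in binary gigabytes: 931.32
        echo '1024^4 / 10^12' | bc -l           # a binary terabyte in decimal terabytes: ~1.0995 (the "1.1 TB")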

    Read the article

  • losetup does not decrypt device in Ubuntu 11.04 as before

    - by Kay
    I had an external volume mounted using losetup for about two years. It was created using Ubuntu 9.04, and I kept the same Ubuntu installation throughout all dist-upgrades. Now that I have bought a new laptop, I set up a fresh Ubuntu 11.04 installation on it. The problem is: losetup -e twofish /dev/loop0 /dev/sdb2 does not decrypt the volume anymore; /dev/loop0 contains apparently random data. I am sure I entered the correct password, and I modprobe'd cryptoloop and twofish. My question is: has Canonical made some obscure change to losetup, like adding a salt? Does losetup depend on configuration files I did not know about? How can I decrypt the volume on my new laptop?
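    One avenue worth trying, offered strictly as an experiment rather than a confirmed fix: the passphrase-to-key hashing and key size have to match whatever the old losetup used, and cryptsetup in plain (non-LUKS) mode lets you state them explicitly instead of relying on defaults that may have changed between releases. The cipher/hash/key-size combination below is a guess to iterate over; a wrong guess is harmless, the mapping just won't mount:

        sudo cryptsetup --cipher twofish --key-size 256 --hash ripemd160 create oldvol /dev/sdb2
        sudo mount -o ro /dev/mapper/oldvol /mnt     # if this fails, close and retry
        sudo cryptsetup remove oldvol                # ...with sha256 or other key sizes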

    Read the article

  • Solaris 10: How to image a machine?

    - by nonot1
    I've got a Solaris 10 workstation that I'd like to create a full image backup of. The machine has two drives: one UFS for the system root, and one ZFS for data storage. I intend to add a third HD to keep the backup images of both primary drives (including any ZFS snapshots). The purpose is not disaster recovery, but rather to allow me to easily blow away a series of application installation/configuration changes I intend to try. What's the best way to do this? I'm not too familiar with Solaris, but have some basic Linux knowledge. I looked at Clonezilla, but it does not support Solaris. I'm OK with just a dd | gzip > image style solution, but I'd need some way to first zero out the unused blocks on the primary drives to aid gzip. They are much larger than my third drive, but hardly hold any real data. Update to clarify: I specifically want to avoid using any file-system snapshot functionality, because part of the app configuration changes involve/depend slightly on existing and new snapshots; ideally the full collection of snapshots should be part of the backup. Virtualization is not an option, because the goal is to do performance evaluation on a very specific HW configuration. For the same reason, spurious "backup" snapshots could skew the performance data. Thank you
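    A minimal sketch of the dd | gzip route, with device names as assumptions (check yours with format and zpool status); zeroing free space is done per mounted filesystem by writing and then deleting a large file of zeros:

        # zero out unused blocks so gzip can collapse them (repeat for each filesystem)
        dd if=/dev/zero of=/zerofile bs=1024k ; rm /zerofile
        dd if=/dev/zero of=/export/zerofile bs=1024k ; rm /export/zerofile

        # raw image of each disk onto the third drive (s2 is the traditional whole-disk slice)
        dd if=/dev/rdsk/c0t0d0s2 bs=1024k | gzip -c > /backup/rootdisk.img.gz
        dd if=/dev/rdsk/c0t1d0s2 bs=1024k | gzip -c > /backup/datadisk.img.gz

    Restoring is the same pipeline reversed (gzip -dc image.gz | dd of=/dev/rdsk/...), which should put the ZFS pool back bit-for-bit, existing snapshots included, without ever creating a new one.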

    Read the article

  • SSH with authorized_keys to an Ubuntu system with encrypted homedir?

    - by Josh
    I recently set up a new server with Ubuntu karmic 9.10, and when I created my home directory I chose to make it encrypted. Now, after loading my authorized_keys file into ~/.ssh, it isn't recognized because my home directory isn't decrypted until after I log in. Is there a way to make SSH keys work with encrypted home directories under Ubuntu?
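    A common workaround, sketched below with the usual paths (treat the exact locations and the username as assumptions): keep the public keys outside the encrypted home and point sshd at them. Note that even then the encrypted home itself is not auto-mounted by a key-only login, so you may still need to run ecryptfs-mount-private (which prompts for your passphrase) once you are in:

        sudo mkdir -p /etc/ssh/authorized-keys
        sudo cp ~/.ssh/authorized_keys /etc/ssh/authorized-keys/josh    # one file per user
        # in /etc/ssh/sshd_config:
        #     AuthorizedKeysFile /etc/ssh/authorized-keys/%u
        sudo /etc/init.d/ssh restart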

    Read the article

  • Missing HDD space - says 65GB used, selecting all folders shows 40GB used

    - by Igor K
    Hi. We're running Windows Server 2008 on a 74 GB Raptor drive and noticed we only had about 500 MB left - yikes! So I deleted some old backups we don't need, but I can't track down where about 25 GB is being taken up. If I go to C:, select all folders and go to Properties, this comes to around 40 GB, but in My Computer I can see 65 GB is used. How can I find out what's eating the space? There's just IIS + MSSQL Express + SmarterMail on the server. EDIT: Checked with hidden folders plus protected operating system files shown - 41.8 GB usage, so 24.6 GB is missing somewhere. There's no System Restore even installed on the server.
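    Two things worth checking from an elevated command prompt (offered as common culprits, not a diagnosis): Volume Shadow Copy storage, which backup software can grow to many GB without it ever showing in folder properties, and files that Explorer's selection misses, which Sysinternals du (a separate download) will count:

        vssadmin list shadowstorage
        du -l 1 C:\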

    Read the article

  • Encrypt remote linux server

    - by Margaret Thorpe
    One of my customers has requested that their web server be encrypted, to prevent offline attacks on highly sensitive data contained in a MySQL database and also in /var/log. I have full root access to the dedicated server at a popular host. I am considering three options: (1) FDE - this would be ideal, but with only remote access (no console) I imagine it would be very complex; (2) Xen - installing Xen, moving their server into a Xen virtual machine and encrypting the VM, which seems easier to do remotely; (3) Partition - encrypt the non-static partitions where the sensitive data resides, e.g. /var, /home etc. What would be the simplest approach that satisfies the requirements?
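    For concreteness, a minimal sketch of the third option using LUKS; device names and mount points are assumptions, and the data must be copied elsewhere first since luksFormat destroys it:

        cryptsetup luksFormat /dev/sda5
        cryptsetup luksOpen /dev/sda5 var_crypt     # newer versions: cryptsetup open
        mkfs.ext4 /dev/mapper/var_crypt
        mount /dev/mapper/var_crypt /var            # then restore the data; repeat for /home

    The operational catch on a remote box is the same for all three options: someone has to supply the passphrase after every reboot, over SSH or an out-of-band console, or the encrypted data stays offline.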

    Read the article

  • How do you passthrough native SATA drives to a guest on ESXi?

    - by John
    I have ESXi 4.0 running on an Intel DX58S0 motherboard with an Intel Core i7 930 processor. VT-d is also enabled. I have three drives in the system; drive 0 is used for ESXi. Drives 1 and 2 contain data from an older machine and show up under the "Storage Adapters" section in the configuration. I would like to allow a guest machine to access the data on these drives (as natively as possible). I have enabled passthrough of the motherboard's built-in SATA controller (Intel/Marvell 88SE6121). This controller shows up in my guest OS, but the guest shows no drives aside from the normal virtual drive. I have tried a Linux guest and Windows 7. I have also configured the host machine to try IDE/RAID/AHCI modes for the SATA controller. Any ideas how I can configure one of my guests to get at the raw data on these drives?
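    If the PCI passthrough route stays stubborn, a commonly suggested fallback (not the controller passthrough you asked for, and the disk identifier below is an assumption) is a physical-mode Raw Device Mapping per disk, created from the ESXi console or SSH:

        ls /vmfs/devices/disks/                     # note each disk's t10.* / vml.* identifier
        vmkfstools -z /vmfs/devices/disks/t10.ATA_____YOUR_DISK_ID \
            /vmfs/volumes/datastore1/rdm/data1-rdm.vmdk

    The generated .vmdk is then attached to the VM as an existing disk, and the guest sees the raw device with its original partitions.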

    Read the article

  • FDE with DiskCryptor V1 (TRIM Support) on Crucial M4 SSD on W7 x64 - Unpartitioned Space?

    - by JamesM
    I have a new Alienware m11 laptop with a brand new Crucial M4 128 GB SSD. I have installed the SSD but not used it yet. I am thinking of using FDE with DiskCryptor 1.0.732.111, and I will install Win7 Pro x64. I have read about support for TRIM in v1.0.732.111, but also about leaving unpartitioned space. My SSD, the Crucial M4, does not have a manufacturer built-in reserve like some other drives. My questions are: Should I leave some free unpartitioned space with v1 of DC and W7 x64 even though it will support TRIM, or should I not do this? Should I install DC v1 first, or after installing Windows 7 (assuming it is a brand new, never-used SSD)?
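    On the first question, if you do decide to hold back some space, the simplest moment is before installing Windows: from the installer's Shift+F10 command prompt, create a system partition smaller than the disk and simply never partition the remainder. The size below is an arbitrary example, not a recommendation:

        rem from the Windows 7 installer: Shift+F10, then run diskpart and enter:
        diskpart
        select disk 0
        clean
        create partition primary size=110000
        exit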

    Read the article

  • Nesting TrueCrypt File Volumes

    - by Maxim Z.
    Is it possible to nest TrueCrypt file volumes? In other words, if I create a TrueCrypt volume as a file and store it inside another TrueCrypt file volume, will it work? (As for what I'm going to store there, I don't know; I'm just experimenting with TrueCrypt.)

    Read the article

  • Why do "ls" in UNIX and "dir" in DOS have different names?

    - by bizso09
    Why do they have different names for the same command, listing a directory? Surely they could have talked to each other and agreed on one common name - such as, for example, cd, which is the same in both Unix and DOS. This decision to have different names has created many headaches for developers and users and has also increased the incompatibility between the two systems. Did they do it on purpose? Then how come "cd" is the same?

    Read the article

  • How to properly remove a disk from a PERC 6/i RAID controller?

    - by Stefano Borini
    I have a Dell T710, which comes with a PERC 6/i RAID controller. The current setup has 2 x 500 GB hard drives (with the OS) and 6 x 1000 GB hard drives (in RAID-6, currently empty). I would like to take one 1000 GB disk physically out to keep as an immediate spare in case of a crash, and configure the remaining 5 x 1000 GB as a single RAID-6 VD. This is all nice and clean and works, until I realized that the display on the machine reports the lack of the 8th disk as an error. It's marked as an error, but appears to be only a warning, since the machine is fully functional. My question is: what is the best way to keep one disk as a spare out of the array? Should I remove the disk from its cradle and insert the empty cradle back into the array? Or should I just silence the error on the display in some way (and how?). I know that what I am doing sounds pretty strange, but this is academia, and procuring a spare disk could take weeks. Better to have one ready in my drawer for any emergency.
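    An alternative worth weighing, offered as a suggestion rather than what was asked: leave the eighth disk in its bay and mark it as a global hot spare, so it stays out of the RAID-6 but takes over automatically when a member fails, and the chassis stops complaining about a missing drive. With LSI's MegaCli the enclosure:slot numbers are read from -PDList first (32:7 below is only an example); the same operation is available from the PERC BIOS (Ctrl+R) or Dell OpenManage:

        MegaCli -PDList -aAll | grep -Ei 'enclosure device|slot number'
        MegaCli -PDHSP -Set -PhysDrv[32:7] -a0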

    Read the article

  • Security of BitLocker with no PIN from WinPE?

    - by Scott Bussinger
    Say you have a computer with the system drive encrypted by BitLocker and you're not using a PIN so the computer will boot up unattended. What happens if an attacker boots the system up into the Windows Preinstallation Environment? Will they have access to the encrypted drive? Does it change if you have a TPM vs. using only a USB startup key? What I'm trying to determine is whether the TPM / USB startup key is usable without booting from the original operating system. In other words, if you're using a USB startup key and the machine is rebooted normally then the data would still be protected unless an attacker was able to log in. But what if the hacker just boots the server into a Windows Preinstallation Environment with the USB startup key plugged in? Would they then have access to the data? Or would that require the recovery key? Ideally the recovery key would be required when booted like this, but I haven't seen this documented anywhere.
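    For poking at this concretely, manage-bde (included with Windows 7 / Server 2008 R2, and available in WinPE when the BitLocker optional component is added) shows which protectors guard the volume and what would have to be supplied to unlock it; the drive letter and key below are placeholders:

        manage-bde -status C:
        manage-bde -protectors -get C:
        :: the following fails unless a valid recovery password is supplied
        manage-bde -unlock C: -RecoveryPassword 111111-222222-333333-444444-555555-666666-777777-888888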

    Read the article

  • how to install debian from a rescue cd (via ssh)

    - by tommy
    Situation: a server with RAID 1 (2 x 1000 GB), currently logged in via SSH (network-based Debian rescue CD). What I need to accomplish: install a Debian-based Xen host (maybe with: http://wiki.xen.org/xenwiki/LiveCD ?) while keeping the RAID 1. Problem: I have no physical access to the server, so I can't just drop in a CD or plug in a USB drive. Does anyone have ideas (or a tutorial handy) on how I can mount the LiveCD (on a read-only rescue CD??) and then install the distro without breaking the RAID?
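    The route usually taken from a rescue system is debootstrap rather than mounting an installer CD at all. A rough sketch, with the array name, filesystem choice and Debian release as assumptions (and note the mkfs step wipes the array, so only run it if you are reinstalling onto it):

        mdadm --assemble --scan                 # bring up the existing RAID 1
        mkfs.ext4 /dev/md0
        mount /dev/md0 /mnt
        debootstrap squeeze /mnt http://ftp.debian.org/debian
        mount --bind /dev /mnt/dev
        mount --bind /proc /mnt/proc
        chroot /mnt /bin/bash                   # then install a kernel, mdadm, grub and the Xen packages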

    Read the article

  • What happens to a Windows Dynamic Drive

    - by GruffTech
    In my Windows (Windows 7) install I have two primary logical volumes. One is on an SSD hard drive for my operating system and installed software, and the other is a dynamic volume for my stored media (I do a lot of work with HD footage). I have all my media on the original DV tapes for backup purposes; having it all available on my hard drive at all times is just a major convenience for me, well worth the few hundred dollar investment those 2 TB drives were. Anyway, long story short, my Windows install has become problematic and I want to reformat Windows. Does this, or will this, affect my dynamic drive in any way? I've got almost 3 TB of video on there and I really don't want to re-import all my DV tapes.
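    In general a reinstall confined to the SSD doesn't touch the data disk, provided you don't repartition it during setup, but a dynamic disk created by the old install can show up as "Foreign" in Disk Management afterwards and needs to be imported before its volume mounts. This is generic guidance, not a guarantee, so the tape backups stay valuable. Importing can be done from the Disk Management GUI or from diskpart (the disk number below is an assumption):

        diskpart
        list disk
        select disk 1
        import
        exit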

    Read the article

  • How to see what's taking up space on your hard drive

    - by sam
    I'm thinking of switching to one of the MacBook Pro Retinas with a 256 GB SSD. I'm making the move from a 512 GB HD, which at the moment is split into a 60 GB Windows partition (which I don't think I'm going to have on my new machine) and a 440 GB main partition; of the main partition I've got 140 GB free. So all in, if I disregard the 60 GB Windows part, I've got about 200 GB free, so I'm using 300 GB, which is still a bit too much. I've had this machine 4-5 years, so it's likely to be clogged with files that I no longer need. Is there a tool I can use to view my current HD and see what's taking up the most room, so I can begin to see where I can cut down? Just for a bit of background, my current machine runs OS X.
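    The quickest built-in answer on OS X is du from Terminal; a per-directory tally sorted by size shows where to start digging (the paths are just examples):

        sudo du -x -d 1 -k / | sort -n          # top-level usage in KB, biggest directories last
        du -d 1 -k ~ | sort -n                  # then drill into your home folder

    GUI tools people commonly reach for on the Mac are GrandPerspective, OmniDiskSweeper and DaisyDisk; WinDirStat does the same job for the Windows partition.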

    Read the article

  • How to Get the Folder Name of a USB Disk?

    - by Kate Moss' Open Space
    When a USB disk is plugged into a CE/Mobile-based device, how do you know the folder name of its mount point? Usually it will be "USB Disk", but it really depends on the OS image builder; they may change the folder name for whatever reason. FindFirstFlashCard looks simple and promising; the drawback is that it is only available on Windows Mobile. In fact, this family of find-flash-card APIs enumerates all of the mountable file systems, which includes SD cards, CF and other media that we don't expect to get. So I am going to introduce another way, via the Storage Manager. Here are the steps.

    Read the article

  • FoxPro 2.6 DOS on Windows 7 64-bit

    - by Rolando
    I support a company that has a very old, mission-critical FoxPro for DOS 2.6 (FPD) application. For various reasons the company didn't adapt/migrate their app, which, ironically, has been running even better under Windows XP (and 32-bit Win7) because the OS allowed new features like more reliable networking, distributed printing and email integration. Unfortunately for this company, most new machines now come with a 64-bit version of Windows 7, which is incompatible with their FPD app. I know this time the writing is on the wall: the only long-term solution is to migrate the app. But I wonder if anyone can suggest a temporary alternative path which doesn't involve either downgrading 64-bit Windows to 32-bit, or running the app in a virtualized 32-bit XP.
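    One stopgap people sometimes use for 16-bit DOS applications on 64-bit Windows is a DOS emulator such as DOSBox (or one of its business-oriented forks); whether FoxPro 2.6's multi-user record locking behaves correctly against a shared data directory inside an emulator is something to test carefully, not a given. A minimal dosbox.conf [autoexec] sketch, where C:\apps\foxapp stands for the application directory, Z: for a Windows-mapped network drive holding the shared data, and the executable name is a placeholder:

        [autoexec]
        mount c C:\apps\foxapp
        mount f Z:\
        c:
        foxprox.exe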

    Read the article

  • help building a PC that can image a dozen hard drives simultaneously

    - by Bigbio2002
    Not sure if this belongs on here or SuperUser, but here goes... I'm trying to figure out how to make a mass hard drive imaging PC out of COTS parts. A dedicated imaging device can do 10 drives at a time, but costs several thousand dollars. So far, I'm thinking of using several 3-port PCI-E FireWire cards and some kind of FireWire-to-IDE adapter to connect the drives themselves. The "software" would consist of scripting diskpart, or some other imaging utility. The problem is that I can't seem to find any sort of adapter. I could use standard external hard drive bays, but then I'd have a dozen power cables that I need to plug in - ugly, messy, and inefficient. I picked FireWire over USB not only for better transfer speeds, but also because FireWire can deliver power over the bus (and could theoretically power a hard drive). Does anyone have any input on this?
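    On the software side, if a Linux-based imaging station is acceptable, the scripting reduces to running one dd (or ddrescue, for drives with bad sectors) per device in parallel. This is a sketch only - the device names below are assumptions and must be mapped to physical bays with great care before anything is written:

        for dev in sdb sdc sdd sde sdf sdg; do
            dd if=/dev/$dev of=/images/$(date +%Y%m%d)-$dev.img bs=4M conv=noerror,sync &
        done
        wait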

    Read the article

  • Hard Drive missing drive space

    - by Chance Robertson
    I have a 500 GB hard drive which I previously attached to my Mac. I detached the drive without going through the eject procedure; when I did this a message showed up, which of course I did not read, and I could not use the drive until I formatted it again. Now, when I attach the drive it says it is formatted NTFS and has 280.39 of 500 GB free. But when I open the drive in Windows Explorer, Finder, or in Linux, it only shows a handful of files totalling 54 MB. How can I find out what is taking up all the space?
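    Two quick checks from Windows, with the drive letter as an assumption; chkdsk is suggested here because the unclean removal may have left orphaned clusters that no longer appear as files:

        :: count hidden and system files that Explorer's selection misses
        dir E:\ /a /s
        :: check the filesystem and reclaim orphaned space
        chkdsk E: /f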

    Read the article

  • lvm disappeared after disc replacement on raid10

    - by user142295
    Here is my problem: I am running Ubuntu 12.04 on a RAID10 (4 disks), on top of which I installed an LVM with two volume groups (one for /, one for /home). The layout of the disks is as follows:

        Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003f3b6

           Device Boot      Start          End       Blocks  Id  System
        /dev/sda1   *           63       481949      240943+ 83  Linux
        /dev/sda2           481950   2910640634  1455079342+ fd  Linux raid autodetect
        /dev/sda3       2910640635   2930272064      9815715 82  Linux swap / Solaris

        Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00069785

           Device Boot      Start          End       Blocks  Id  System
        /dev/sdb1               63   2910158684   1455079311 fd  Linux raid autodetect
        /dev/sdb2       2910158685   2930272064     10056690 82  Linux swap / Solaris

        Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start          End       Blocks  Id  System
        /dev/sdc1               63   2910158684   1455079311 fd  Linux raid autodetect
        /dev/sdc2       2910158685   2930272064     10056690 82  Linux swap / Solaris

        Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000f14de

           Device Boot      Start          End       Blocks  Id  System
        /dev/sdd1               63   2910158684   1455079311 fd  Linux raid autodetect
        /dev/sdd2       2910158685   2930272064     10056690 82  Linux swap / Solaris

    The first disk (/dev/sda) contains the /boot partition on /dev/sda1. I use grub2 to boot the system off this partition. On top of this RAID10 I installed two volume groups, one for /, one for /home. This system worked well; I even exchanged two disks during the last two years, and it always worked. But not this time. For the first time, /dev/sda broke. I do not know if this is an issue - I know I would have struggled anyway to overcome the problem of /boot being installed on that disk and grub2 being installed in the MBR of /dev/sda.
    Anyway, I did what I always did:

        start Knoppix
        fire up the raid:                        sudo mdadm --examine --scan
            which returns: ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95
        start it up:                             sudo mdadm --assemble /dev/md127
        fail the failing disk (SMART event):     sudo mdadm /dev/md127 --fail /dev/sda2
        remove the failing disk:                 sudo mdadm /dev/md127 --remove /dev/sda2
        stop the raid:                           sudo mdadm -S /dev/md127
        take out the disk, replace it with a new one
        create the same partitions as on the failing one
        add it to the raid:                      sudo mdadm --assemble /dev/md127
                                                 sudo mdadm /dev/md127 --add /dev/sda2
        wait 4 hours

    All looks fine: cat /proc/mdstat returns:

        Personalities : [raid10]
        md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1]
              2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU]

        unused devices: <none>

    and sudo mdadm --detail /dev/md127 returns:

        /dev/md127:
                Version : 0.90
          Creation Time : Wed Jun 10 13:08:46 2009
             Raid Level : raid10
             Array Size : 2910158464 (2775.34 GiB 2980.00 GB)
          Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB)
           Raid Devices : 4
          Total Devices : 4
        Preferred Minor : 127
            Persistence : Superblock is persistent

            Update Time : Thu Mar 21 16:27:40 2013
                  State : clean
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 0

                 Layout : near=2
             Chunk Size : 64K

                   UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix)
                 Events : 0.4824680

            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       8       17        1      active sync   /dev/sdb1
               2       8       33        2      active sync   /dev/sdc1
               3       8       49        3      active sync   /dev/sdd1

    However, there is no trace of the volume groups. Rebooting into Knoppix does not help. Restarting the old system (I actually replugged and re-added the failing disk for that - the system begins to start, but then fails to see the / partition; no wonder, if the volume group is gone) does not help either. sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay and sudo vgscan --mknodes all returned "No volume groups found". I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!
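    A recovery avenue worth trying before anything destructive, offered as a sketch with assumptions baked in (that the volume group sat directly on /dev/md127, and that a copy of the LVM metadata backup, normally kept under /etc/lvm/backup and /etc/lvm/archive on the old root filesystem, can still be obtained from somewhere):

        # 1. look for an LVM label or metadata text near the start of the assembled array
        sudo pvscan
        sudo dd if=/dev/md127 bs=512 count=4096 | strings | grep -i -A2 'LVM2'

        # 2. if a copy of /etc/lvm/backup/<vgname> is available, recreate the PV with its
        #    original UUID and restore the VG metadata from that file
        sudo pvcreate --uuid "<PV-UUID-taken-from-the-backup-file>" \
             --restorefile /path/to/etc/lvm/backup/<vgname> /dev/md127
        sudo vgcfgrestore -f /path/to/etc/lvm/backup/<vgname> <vgname>
        sudo vgchange -ay <vgname>

    If pvscan finds nothing and no metadata backup can be located, scanning /dev/md127 for the LVM2 metadata (it is stored as plain text near the start of the physical volume) is the usual next step before considering anything more drastic.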

    Read the article
