Search Results

Search found 8937 results on 358 pages for 'disk defragmenting'.

  • Write-through RAM disk, or massive caching of the file system?

    - by Will
    I have a program that hits the file system very heavily, reading and writing randomly to a set of working files. The files total several gigabytes, but I can spare the RAM to keep them mostly in memory. The machines this program runs on are typically Ubuntu Linux boxes. Is there a way to configure the file system to have a very large cache, and even to cache writes so they hit the disk later? I understand the issues with power loss and such, and am prepared to accept that risk. Crashing aside, in normal operation the writes should eventually reach the disk! Or is there a way to create a RAM disk that writes through to a real disk?
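
    A hedged sketch of one common approach on Linux: leave the files on a normal disk, let the page cache hold them, and raise the kernel's dirty-page thresholds so writes are buffered in RAM and flushed later. The sysctl values below are illustrative, not tuned recommendations:

      # Let most of RAM hold dirty (not-yet-written) pages before writeback kicks in
      sysctl -w vm.dirty_background_ratio=50     # background flushing starts at 50% of RAM
      sysctl -w vm.dirty_ratio=80                # writers are only throttled past 80%
      sysctl -w vm.dirty_expire_centisecs=30000  # dirty pages may sit ~300 s before flushing
      # Pre-warm the cache by reading the working set once (path illustrative):
      cat /data/workfiles/* > /dev/null

    Unlike a plain tmpfs RAM disk, everything still reaches the real disk; only the flush is deferred.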

  • Ghost Image - Windows asks for activation when deployed to VM

    - by Chris Sobolewski
    I have several images created with Ghost Solution Suite (v11, I believe); the images have been in use for a few years now, but I finally have enough time to attempt to virtualize them for easier updates. I am running VMWare and attempting to image the virtual machines with my Ghost image files. For my images I run Sysprep with minisetup and use reseal. The image deploys successfully; however, when I start the VM for the first time, it demands Windows activation. This doesn't happen when I image a physical computer, even a different model with different hardware. The idea of virtualizing my images becomes rather worthless if I cannot deploy them without having to activate every time (especially as Microsoft keeps declaring our volume license key invalid for activations). Does anyone know why it asks for activation on a virtual machine, but not on a physical PC? How can I prevent this?

  • SQL Server: One 12-drive RAID-10 array or two arrays of 8 drives and 4 drives

    - by ben
    Setting up a box for SQL Server 2008, which layout would give the best performance (heavy OLTP)? The more drives in a RAID-10 array, the better the performance, but would losing 4 drives to dedicate them to the transaction logs give us more performance? 12 drives in RAID-10 plus one hot spare, OR 8 drives in RAID-10 for the database and 4 drives in RAID-10 for the transaction logs, plus 2 hot spares (one for each array). We have 14 drive slots to work with, and it's an older PowerVault that doesn't support global hot spares.

  • ESX 4.0 space: DASD, NAS, or ?

    - by thormj
    I put together an ESX box for better management, but its performance is a WTF item; I'm a noob at dealing with ESX, so I'm looking for a laundry list of reading material to help me straighten this out so I can go back to .NET programming.

    Current storage system: we're running RAID5 + hot spare (8x 500 GB spindles) on a PERC6i in a Dell 2910. Due to ESX limitations, the PERC is showing the storage as 1x 2TB + 1x 800GB "partitions." I'm not sure of the setup's configuration (stride / stripe / ???) at all.

    Our applications: we have an SBS server as well as a minor (2x 50 GB, but growing at 10 GB/month) database server. The application that lives on the database VM is CPU- and I/O-intensive; it's a database-churning exercise mixed with a lot of computation on the data (fixing that performance is what I'm supposed to be working on).

    Performance issue: when I do a backup, restore, or worse (copy a backup from one VM to another to move it to the QA VM), the entire system slows to a crawl (even "unrelated" VMs). I originally thought a DASD situation would be quite good since you have PCI-X bandwidth, but the system-wide slowdown is killing productivity.

    Questions: what should I do to make an intelligent decision about NAS vs RAID vs SAN vs DASD? Are there sweet spots/ugly spots in the storage setup? Can you use an SSD PCI-X card in ESX for the tempdb - good or bad idea? Is there any way to "share" an image in a copy-on-write fashion? Most of the backup-copy-restore cycle is to "put a clean image on the dev boxes"; if I could have them share the master image, the big copy (2x 50 GB) would only need to be done once per week instead of once per dev per week. (Runtime performance isn't a concern with the dev boxes, but the backup/copy/restore kills production, SBS, and everything else on the box.)

  • Identifying Hard Drive as performance bottleneck for desktop machines

    - by Programming Hero
    I'm working in a development team where we all use laptops so we can work in multiple locations. These laptops are proving notoriously slow for development work, but at a glance they all look to have the specification for a much faster experience: an Intel Core 2 Duo T7500 CPU and 2 GB of RAM. We all experience the biggest delays when the hard drives are being accessed, particularly when swap files are being thrashed. After doing a little profiling, a colleague discovered that our HDDs are seeing read/write speeds of about 10 MB/sec. This seems abnormally low, and we believe it is the cause of the problem. Sensibly (though somewhat annoyingly) our business won't blow money on faster drives just to see if it fixes the problem; we need to illustrate that this is definitely the problem and that buying some solid-state drives will make it go away. I need some way of showing how 90% of the system resources aren't being used over the course of a day, and that whenever there is utilization, it's all HDD reads or writes. Are there any tools I could use to provide this information? Does it seem likely the problem will be fixed by a faster drive? Should I be looking for alternatives?
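
    A hedged sketch of how to gather that evidence with Windows' built-in typeperf, logging PerfMon disk and CPU counters to a CSV over a working day (the counter names are the standard English ones; the 5-second interval and output path are illustrative):

      rem Log disk vs. CPU utilization all day, then chart the CSV afterwards
      typeperf "\PhysicalDisk(_Total)\% Disk Time" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\Processor(_Total)\% Processor Time" -si 5 -o disklog.csv

    Sustained high disk time and queue length alongside a mostly idle CPU is exactly the "it's the drive" picture described above.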

  • RAID10 without write-back cache = horrible write performance?

    - by Harry Mexican
    I have just provisioned a dedicated server at SingleHop. I'm running it through some tests to know what to expect performance-wise. On the I/O side (with 4x 1TB disks in RAID 10) I get: with the write cache disabled, 200 MB/s read throughput and 30 MB/s write throughput. I thought that was really low compared to my desktop HD, which gets 150/150 or so. So I had a chat with them and they suggested enabling the write cache. New results: with the write cache enabled, 280 MB/s read and 260 MB/s write, which is great and all, but means I'd have to add a BBU for an additional monthly cost. Is it normal for the write throughput to be 1/4 of a regular drive on RAID10 if you don't have the write cache? It almost feels like it's intentionally bad to force you to pony up for the BBU. I'd be happy with normal non-RAID performance of 150/150.
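
    For anyone wanting to reproduce numbers like these, a minimal sketch with dd, assuming a Linux box (file path and sizes are illustrative); oflag=direct/iflag=direct bypass the page cache so the array is measured rather than RAM:

      dd if=/dev/zero of=/tmp/ddtest bs=1M count=4096 oflag=direct   # sequential write test
      dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct              # sequential read test
      rm /tmp/ddtest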

  • Simultaneous read/write to RAID array slows server to a crawl

    - by Jeff Leyser
    Fairly beefy NFS/SMB server (32 GB RAM, 2 quad-core Xeons) with an LSI MegaRAID 8888ELP controlling 12 drives configured into 3 different arrays. Five 2TB drives are grouped into a RAID 6 array. As expected, write performance to the array is slow. However, sustained, simultaneous read/write to the array (whether through NFS or done locally) seems to practically block any other access to anything else on the controller. For example, if I do: cp /home/joe/BigFile /home/joe/BigFileCopy, where BigFile is 20 GB, then even a simple ls /home/jane will take many tens of seconds to complete. In addition, an ls /backup will also take many tens of seconds, even though /backup is a different array on the same controller. As soon as the cp is done, everything is back to normal. cp /home/joe/BigFile /backup/BigFile does not exhibit this behavior; it only happens when doing read/write to the same array.
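
    Two hedged mitigations for the big copy itself, assuming a Linux box (ionice only has an effect under the CFQ I/O scheduler, and the rsync rate cap is an illustrative value):

      ionice -c3 cp /home/joe/BigFile /home/joe/BigFileCopy          # run the copy in the idle I/O class
      rsync --bwlimit=50000 /home/joe/BigFile /home/joe/BigFileCopy  # or cap it (~50 MB/s; value is in KiB/s)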

  • Simple Workstation Imaging Solution?

    - by Will
    Hey guys, I need a fairly cheap imaging solution for Windows XP corporate desktops. Ideally, I'd be able to set up a desktop exactly as we want it, create an image, deploy this image to a server, then boot a new desktop from a CD/USB drive/network and quickly set up the workstation. Ideally, each computer would also get a unique workstation name. Any ideas? Right now I'm using a custom-built Linux dd solution, but it's slow, isn't network-based, can't image multiple computers at the same time (there's only one copy, on a USB drive), and can't uniquely name the computers. Thanks, Will
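
    For what it's worth, a raw dd workflow can be made network-based with very little tooling; a hedged sketch (server name, port, and target device are illustrative, and netcat flag syntax varies between implementations). Unique workstation naming would still need a sysprep step or similar afterwards:

      # On the machine holding the master image:
      nc -l 9000 < winxp-master.img
      # On each target desktop, booted from a Linux live CD/USB:
      nc imageserver 9000 | dd of=/dev/sda bs=1M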

  • Can't access USB drive anymore

    - by marie
    I have a 32 GB LaCie CooKey USB flash disk that doesn't show in the Computer window, but it's visible as a device.

      cmd > diskpart

      DISKPART> list disk

        Disk ###  Status    Size
        --------  --------  ------
        Disk 0    Online    149 G
        Disk 1    No Media  0

      DISKPART> select disk 1
      Disk 1 is now the selected disk.

      DISKPART> clean
      Virtual Disk Service error: There is no media in the device.

    It also appears in the Disk Management tool, but the box is empty. Is there anything I can do, or is it dead?

    Output from ChipGenius:

      Description: [F:]USB Mass Storage Device(LaCie CooKey)
      Device Type: Mass Storage Device
      Protocol Version: USB 2.00
      Current Speed: High Speed
      Max Current: 200mA
      USB Device ID: VID = 059F PID = 103B
      Serial Number: 070535924B170C18
      Device Vendor: LaCie
      Device Name: CooKey
      Device Revision: 0100
      Manufacturer: LaCie
      Product Model: CooKey
      Product Revision: PMAP
      Controller Vendor: Phison
      Controller Part-Number: PS2251-67(PS2267) - F/W 06.08.53 [2012-09-26]
      Flash ID code: 983AA892 - Toshiba [TLC]
      Tools on web: http://dl.mydigit.net/special/up/phison.html

  • Missing HDD space - says 65GB used, selecting all folders shows 30GB used

    - by Igor K
    Hi. Running Windows Server 2008 with a 74 GB Raptor drive, I noticed we only had about 500 MB left - yikes! So I deleted some old backups we don't need, but I can't track down where about 30 GB seems to be taken up. If I go to C:, select all folders, and go to Properties, this comes to around 30 GB, but in My Computer I can see 65 GB is used. How can I find out what's eating the space? It's just IIS + MSSQL Express + SmarterMail on the server.
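
    A hedged first check: folder Properties misses the pagefile, hiberfil.sys, and anything the logged-in account can't read, and Volume Shadow Copies in particular can quietly hold tens of GB. The built-in vssadmin tool reports their usage:

      rem Show how much space shadow copies are allowed to use and actually use
      vssadmin list shadowstorage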

  • How to re-do the hard disks in a WD World Book Edition II?

    - by jfmessier
    I recently purchased a WD World Book II, a 2 TB one. I call it the "White Box". It has those two 1 TB drives, which came in a RAID 1 configuration, only giving me about 1 TB. I could not delete the RAID array, so I put the drives in a Linux box. But I also deleted the entire partitions of the disks, and now I cannot even get the existing RAID array back on this WD White Box. The drives are fine, but I cannot get them to work in the WD White Box. My goal was to get back to a real 2 TB of storage space. If I cannot get those drives back into the White Box, I can re-use them elsewhere, but this would mean wasting the firmware and network connection. After the fact, I read that the network performance is rather poor anyway. Thanks :-)

  • USB 3.0 hard disk not detected on a particular host controller?

    - by Alvin Wong
    I have a USB 3.0 hard disk which has always worked on my desktop, which has an XHCI. Now I just bought a notebook with an XHCI (something with Intel's Ivy Bridge setup). The first time I plugged the hard disk into its USB 3.0 port, it was detected and working. A few hours later I tried to connect it again, but the notebook just ignored it! The light on the hard disk didn't blink as usual (instead it stayed on). I then tested it with my desktop again and it worked perfectly. It gets trickier: when I plug it into the USB 2.0 port of that notebook, it is detected and works perfectly (despite the slower speed). Then I tried plugging a USB 2.0 flash drive into that USB 3.0 port, and it was detected (of course, as USB 2.0). So, there are two USB 3.0 ports on my notebook's XHCI; neither works with my hard disk, but both work perfectly fine with my USB 2.0 UFD. What's wrong with it? When I plug in the hard disk, Device Manager doesn't change. I've tried re-installing the driver for the XHCI, but it changed nothing. Have I broken the USB 3.0-specific pins of both USB 3.0 ports?

  • How can I keep a file in Windows 7's cache?

    - by netvope
    Sometimes you know better than Windows which files will be re-used later. Suppose you have 8 GB of memory, and you use the same 1 GB file every hour in an I/O-bound application (which takes 1 second to finish if the file is cached, and 1 minute if not). Now you process some other 16 GB of data that is not going to be re-used. Naturally, the frequently used 1 GB file will be pushed out of the cache. It would be beneficial if one could tell Windows to keep that 1 GB file in memory. (Better yet, it would be great if I could tell Windows not to cache those 16 GB of data, but I'm not optimistic that this can be done.) The poor man's way to keep a file in the cache would be to keep reading it. Are there any better ways? Are you aware of any programs that do this? (If this can be easily done under Linux, please let me know too.)
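
    On the Linux side of the question, the vmtouch utility does exactly this; a hedged sketch (file name illustrative; locking pages resident requires suitable privileges):

      vmtouch bigfile.dat        # report how much of the file is currently in the page cache
      vmtouch -t bigfile.dat     # "touch": read the whole file into the cache
      vmtouch -dl bigfile.dat    # daemonize and mlock its pages so they stay resident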

  • Conflicting answers from du with different units

    - by dpitch40
    My question is quite simple. I get this output when checking the total amount of space I'm using on my Walkman:

      david@Milton:/media$ du -b --max-depth=0 WALKMAN/
      14823290693     WALKMAN/
      david@Milton:/media$ du -k --max-depth=0 WALKMAN/
      14523776        WALKMAN/

    Last I checked, 14,523,776 KB * 1024 = 14,872,346,624 B, not 14,823,290,693. Dividing the two, their "K" unit seems to be equal to about 1020.62 rather than 1024 as advertised. This is causing some errors in the program I wrote to sync my Walkman, so it fills up faster than it claims to. Can anyone explain this discrepancy?
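
    A likely explanation (hedged, but this is documented du behavior): -b implies --apparent-size, the sum of the files' byte lengths, while -k reports allocated space, with each file rounded up to the filesystem's cluster size. The two are different measurements rather than different units, and the ~49 MB gap is the per-file rounding summed over many files. A quick demonstration (file name illustrative):

      echo -n x > tiny.txt
      du -b tiny.txt    # prints 1  (apparent size: one byte)
      du -k tiny.txt    # prints 4  (allocation: one 4 KiB block on a typical filesystem)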

  • Trying to delete a directory with "rm -rf", but getting the message that it's not empty

    - by Ben Hocking
    I've tried deleting a directory using "rm -rf" and I'm getting the message "Directory not empty":

      Bens-MacBook-Pro:please benjaminhocking$ ls -lart empty_directory/
      total 16
      drwxr-xr-x  5 benjaminhocking  staff  170 Aug 27 14:46 .
      drwxr-xr-x  3 benjaminhocking  staff  102 Aug 27 15:28 ..
      Bens-MacBook-Pro:please benjaminhocking$ rm -rf empty_directory/
      rm: empty_directory/: Directory not empty
      Bens-MacBook-Pro:please benjaminhocking$ rmdir empty_directory/
      rmdir: empty_directory/: Directory not empty

    If I try the same thing using Finder (dragging the folder to the Trash), I get the message "The operation can’t be completed because the item “empty_directory” is in use." I've tried doing xattr -d com.apple.quarantine, purely out of superstition, but it did no good. A probably important piece of context is that this directory was initially in a directory that should have been deleted by a "make clean" command I issued prior to Terminal locking up on me, after which a little over half of the other programs I had running also locked up, including Skype, and eventually the OS itself. I ended up having to reboot the computer by pressing and holding the power key.

    Edit to add: Another important piece of information I left off was that this was happening in an encrypted folder à la encfs. I was able to track down the corresponding folder on the encrypted side of things and delete it there. I still don't know why I couldn't do it from the decrypted side like I normally do. I'll leave this unanswered for now in case anyone has a good answer for that.
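
    Given that Finder reports the item as "in use", a hedged first step is to find which process holds it open (lsof works the same way on macOS as on Linux):

      lsof +D empty_directory/    # list processes with open handles anywhere under this directory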

  • Looking to replace Ghost with FSArchiver or Clonezilla, a few questions about capabilities

    - by Daniel Wright
    I work for a PC repair company and we are looking into setting up a dedicated machine with externally accessible SATA bays to clone hard drives as a safety net in case something goes wrong during a repair. We currently use a SATA/PATA-to-USB bridge called MagicBridge and Norton Ghost on any workstation, but we're looking to move away from Ghost. We have a computer with a large RAID5 array with Windows Server 2008 Standard currently installed, but this can be replaced with a flavour of *nix. I have some experience with Clonezilla, but FSArchiver also seems like a suitable replacement. My head technician wants to know if my chosen solution (probably Clonezilla or FSArchiver, but I'm open to free suggestions) is capable of: cloning a degraded RAID, such as a single drive from a RAID1 mirror, without complaining; and producing images that are easily mountable (he'd prefer them to be mountable in Windows, but if there is no other easy way, *nix should be fine), akin to Ghost Explorer, so individual files can be restored as well as being able to do bare-metal restores. My apologies for the wordiness, but I wanted to be thorough in my explanation. Thanks for any suggestions or tips :) EDIT: I've just found out that Clonezilla has a workaround for cloning RAID1 drives. EDIT2: Found the answer to both of my questions; apparently I wasn't phrasing my searches right. Could this question be deleted please?
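
    For reference, a hedged sketch of the FSArchiver workflow (device names and paths illustrative). Note that .fsa archives are restored rather than mounted, so for directly mountable images a raw dd/loopback image is the usual alternative:

      fsarchiver savefs /backups/sda1.fsa /dev/sda1              # archive one filesystem
      fsarchiver archinfo /backups/sda1.fsa                      # inspect what the archive contains
      fsarchiver restfs /backups/sda1.fsa id=0,dest=/dev/sdb1    # bare-metal restore to another partition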

  • Calculate minimum ext3 partition size for certain amount of data

    - by Daniel Beck
    The following ext3 partitions contain identical data. As we can see, the larger the partition, the more space is required for the same files:

      Filesystem     1K-blocks    Used  Available  Use%  Mounted on
      /dev/loop11      3965777  561064    3199964   15%  [...]
      /dev/loop19       573029  543843      29186   95%  [...]

      Filesystem     Size  Used  Avail  Use%  Mounted on
      /dev/loop11    3.8G  548M   3.1G   15%  [...]
      /dev/loop19    560M  532M    29M   95%  [...]

      Filesystem      Inodes  IUsed    IFree  IUse%  Mounted on
      /dev/loop11    1024000   1656  1022344     1%  [...]
      /dev/loop19    1024000   1656  1022344     1%  [...]

    I start with a partition of fixed size that possibly wastes a lot of space, and I want to create a partition that is able to hold that data but with (almost) minimal size. How can I reliably calculate the minimal partition size needed for storing a certain amount of data? The amount of data changes over time, and I need to automate these calculations.
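
    One hedged way to automate the calculation is to let the ext tools compute it: resize2fs can report, or directly apply, the minimum size of a filesystem that already holds the data. A sketch, assuming the image or partition is unmounted (names illustrative):

      e2fsck -f disk.img       # resize2fs requires a freshly checked filesystem
      resize2fs -P disk.img    # print the estimated minimum size, in filesystem blocks
      resize2fs -M disk.img    # or shrink the filesystem to that minimum directly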

  • Install Chromium OS to SECOND internal drive on EEE 901?

    - by Andrew Swift
    I am trying to install Chromium OS on an EEE PC 901, and I have succeeded in using Image Writer for Windows 0.2r23 to copy the IMG file to an SDHC card. Since the OS speed is limited by slow card access, I'd like to install Chromium OS on the second, unused, internal SSD, D:. However, Image Writer doesn't allow me to restore to an internal drive from an IMG file. To be clear: I boot into XP on C:, then run Image Writer to install Chromium OS. Does anyone know how I can either convince Image Writer that D: is a removable drive, or of an alternative program that will let me restore D: from an IMG file (non-Windows file system)?
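
    A hedged alternative that sidesteps Image Writer entirely: boot the EEE from a Linux live USB and write the image raw with dd. The device name below is illustrative; verify it with lsblk first, since dd will overwrite whichever disk it is pointed at:

      lsblk                                      # identify the second internal SSD
      dd if=chromiumos.img of=/dev/sdb bs=4M     # write the raw image to that disk
      sync                                       # flush buffers before rebooting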

  • Using a non-validated SED on a Dell R720

    - by a coder
    We were given a Dell R720 a couple of years ago, and the machine currently has standard 300 GB 3.5" SAS 15k drives. Our RAID controller is a PERC H710. We need to update our disks to FIPS 140-2 certified SEDs (self-encrypting drives). According to Dell, they have only one tested/validated FIPS SED for this machine/controller combination, but it is a 7200 rpm 3.5" unit. I see that Dell offers a 600 GB 15k FIPS SED in a 3.5" configuration (Dell part number 342-0605), but they say they haven't validated or tested it, so they don't know if it works. They informed us that using this non-validated drive would not void our warranty. How likely is it that our R720 with the H710 controller will work with the non-validated drive? Are there significant differences in how drive manufacturers build SEDs that would prevent them from working consistently across different controllers?

  • Troubleshooting: Monitor never turns on, system fans running, DVD-ROM does not open.

    - by Wesley
    Hi all, here are my specs beforehand:

      - ECS P4VXASD2+ (V5.0) motherboard, FSB 533 MHz
      - Intel Pentium 4 2.40A GHz Prescott, Socket 478
      - 2x 256 MB PC2100 DDR RAM, 2x 256 MB PC133 SDRAM
      - CoolMax 350W PSU
      - DVD-ROM (will edit with brand & model)
      - 128 MB ATi Radeon 9800 Pro AGP
      - No hard drive

    So, I just put those parts together today and tried to power it up, with the monitor connected to the Radeon 9800 in the AGP slot (the mobo does not have a VGA port). After turning it on, the CPU fan, graphics fan and system fan all spin up. However, the monitor remains in standby mode, despite being plugged in. Also, after pushing the button on the DVD-ROM drive, it does not open. I've used the DVD-ROM drive before with absolutely no issues. The graphics card was slightly buggy when I put it in another machine, which had been left outside in winter weather for 3 months. (Still, that computer's integrated graphics worked fine.) The CMOS battery was replaced and the jumpers are all set correctly. Now, I'm wondering whether the motherboard, CPU, PSU or GPU is the problem. What can I do to test which part is at fault? Just to clarify, I don't have a hard drive, so I usually boot Ubuntu from the disc drive. Anyways, thanks in advance!

  • Why does SharePoint claim not enough disk space for backup when there is lots available?

    - by Mr Shoubs
    I'm trying to run the following command:

      Backup-SPFarm -Directory E:\Backups -BackupMethod full -Verbose

    However, it errors out saying there isn't enough disk space... the backup will be about 1.8 GB in size and I have 27.52 GB free, so why does it think I need 30 GB?

      VERBOSE: Leaving BeginProcessing Method of Backup-SPFarm.
      VERBOSE: Performing operation "Backup-SPFarm" on Target "SHAREPOINTSERV".
      Backup-SPFarm : There is not enough disk space. Free additional space on your
      hard disk and then try again. Approximate amount of space needed: 30.12 GB.
      Amount of space free on disk: 27.52 GB.
      At E:\Backups\Script\BackupSharePointFarm.ps1:3 char:14
      + Backup-SPFarm <<<< -Directory E:\Backups -BackupMethod full -Verbose
          + CategoryInfo          : InvalidData: (Microsoft.Share...mdletBackupFarm:SPCmdletBackupFarm) [Backup-SPFarm], SPException
          + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletBackupFarm
      VERBOSE: Leaving ProcessRecord Method of Backup-SPFarm.
      VERBOSE: Leaving EndProcessing Method of Backup-SPFarm.
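
    The cmdlet's free-space figure is an estimate, padded well beyond the real backup size. If memory serves, Backup-SPFarm also accepts a -Force switch that skips this disk-space check; treat that as an assumption and confirm it against your build first:

      Get-Help Backup-SPFarm -Parameter Force    # confirm the switch exists and what it does
      Backup-SPFarm -Directory E:\Backups -BackupMethod full -Force -Verbose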

  • Should I disable write caching on my Windows 2008 VM?

    - by javano
    I have a Windows Server 2008 x64 Standard virtual machine that runs on a host with a hardware RAID controller, a PERC 6/i, which has an on-board battery. Doing everything I can for additional performance, I think I should disable this. Is this very dangerous, though? My understanding is that battery-backed write caching gives a performance boost to the host OS, telling it a write is complete while the data is still sitting in the controller's cache waiting to be written. However, I can't see how it would be detrimental to performance, but is there a gain (even if marginal) to enabling or disabling it? P.S. The machine has backup power. Here is a screenshot for clarification:

  • SQL Server Replication Agent priority

    - by Wikser
    Every hour, a server replicates SQL Server data with an external web server. During this time, which takes about 2-5 minutes, the database slows down seriously. Colleagues who work with the front-end applications on another terminal server regularly complain about it. The databases are also synchronously mirrored (via SQL Server mirroring, not replication) to a third server. Note that 99% of the data is replicated outgoing, so the server should rarely need to update its own data. As the (merge and transactional) replication tasks are not time-critical, I would like to reduce their priority or somehow slow them down so they don't affect database performance as much. How would you implement that?
