Search Results

Search found 8937 results on 358 pages for 'disk defragmenting'.


  • Using hdparm for better performance on Web Servers

    - by Rishav
    I just heard about using hdparm to optimize the hard disk performance of a server. Is this common practice? What file systems do you use? I generally deploy on the second-to-last release of Ubuntu for stability reasons; do you use other filesystems, or distributed file systems from the get-go? Do the hdparm settings change for different file systems? I haven't tried this yet, so how much difference do changes like this make?
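
    For context, a minimal hedged sketch of how hdparm is typically used to benchmark and tune a drive; the device name /dev/sda is an assumption, and the read-only benchmark is worth running before changing any setting:

        # Benchmark cached and buffered read speeds (read-only, safe to run)
        sudo hdparm -Tt /dev/sda

        # Show the drive's current settings and capabilities
        sudo hdparm -I /dev/sda

        # Enable the drive's write cache (faster writes, at a small risk of data loss on power failure)
        sudo hdparm -W 1 /dev/sda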

    Read the article

  • Problems installing iPhone final 3.2 SDK on OSX

    - by user34475
    I downloaded the latest iPhone SDK (xcode_3.2.2.2_and_iPhone_SDK_3.2_final.dmg) from Apple and uninstalled the old SDK before the install. When I double-clicked the .dmg file I got the following pop-ups: "The following disk images couldn't be found" and "xcode_3.2.2.2_and_iPhone_SDK_3.2_final.dmg is not recognized". I am using Mac OS X 10.6.3. How do I solve this?
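
    As a hedged first check (not part of the original question), it may be worth confirming that the image downloaded intact before reinstalling, for example:

        # Verify the disk image's internal checksum; a failure usually indicates a corrupt download
        hdiutil verify xcode_3.2.2.2_and_iPhone_SDK_3.2_final.dmg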

    Read the article

  • DVD drive recognizes everything except blank DVDs

    - by Jack
    My DVD drive works fine for playback (I can play DVDs on it), but when I insert a blank DVD the drive refuses to recognize it. So I can't burn DVDs anymore, because the drive shows up as empty in my disc-burning software (DVD Flick). Any idea what the problem is and how to solve it? PC: Windows Vista Home Basic, 32-bit.

    Read the article

  • How do I create a snapshot volume on a remote server using KVM?

    - by Purres
    I want to back up a few virtual machines to a backup server. Here are the backup steps:

    1. Suspend a virtual machine
    2. Create a snapshot of the virtual machine using lvcreate -s
    3. Resume the virtual machine
    4. dd if=/virtual_machine_path | lzop > /temp/backup.lzo
    5. rsync /temp/backup.lzo -e "ssh " 1.2.3.4:/backup_path/

    However, the hypervisor server doesn't have enough hard disk space to create a snapshot in step 2. Is there a way to create a logical volume snapshot on a remote server?
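
    A hedged sketch of one common workaround (volume group, logical volume, and host names below are placeholders): keep the copy-on-write snapshot local, where it only needs space for blocks that change during the copy, and stream the image straight to the backup server so no local scratch file is required:

        # Create a small copy-on-write snapshot; it only consumes space for blocks changed while it exists
        lvcreate -s -L 2G -n vm1-snap /dev/vg0/vm1

        # Stream the snapshot, compressed, directly to the backup server over ssh
        dd if=/dev/vg0/vm1-snap bs=4M | lzop | ssh user@1.2.3.4 'cat > /backup_path/vm1.lzo'

        # Remove the snapshot once the copy has finished
        lvremove -f /dev/vg0/vm1-snap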

    Read the article

  • Windows 7 backup and restore: Is each backup incremental or complete?

    - by Margaret
    I have a computer that's been taking backups using Windows 7's Backup and Restore feature. However, I now need to reclaim hard disk space, and I'm trying to figure out what I can safely delete. When I go into the Backup and Restore options on the machine, it shows several backups. Is it safe to delete the older ones? Or are the backups incremental, meaning that files unchanged since before the last backup would be lost if I deleted them?

    Read the article

  • Deleting windows.edb and unchecking the Indexing Service led to hard drive file records swapping

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on my hard drive by deleting the windows.edb indexing file. I also stopped the Windows Search service as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the properties dialog for the C:\ drive, I did the same for two USB-connected hard drives (J:\ and I:\). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky to me at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3 GB of space, yeehaw.

    However, as soon as I had done this I could not access the USB-connected hard drives anymore. The error I got was "I:\photos is not accessible" / "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\.

    Here is where I enter the twilight zone... I tried disconnecting the I:\ USB hard drive, but XP showed that the J:\ drive had disconnected instead and that I:\ was still there. So I disconnected both drives and restarted the computer. I then connected one drive, but it listed the contents of the other drive at the root level. I tried connecting the drives vice versa and the same thing happened. I took one of the hard drives to another computer, and when I connected it there it listed not its own contents but the contents of the other hard drive, and gave the same error as above when I tried to access any of the folders (even folders on the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos). And no, this is not me mixing up my drive letters.

    Computer Management - Disk Management shows the same result as Explorer: the drive size is correct (one is 500 GB, the other is 640 GB), but the drive name is that of the opposite drive, as are the contents. Also, one drive was full of data and the other almost empty, but each incorrectly shows the free-space status of the other drive. Somehow the USB drives seem to have switched file tables, file records, boot records or something - extremely weird! Even weirder, if I try to create a text file or folder on the drive, it works fine (accessing them, saving, whatever, all good), but accessing any other data on the drive gives me an error.

    Does anyone have a clue what is going on and, more importantly, how I can restore the correct folder listings to access my family photos? cheers, linni

    Read the article

  • VMWare Server - Writing files to virtual hard drive performance

    - by Ardman
    We have just moved our infrastructure from physical servers to virtual machines. Everything is running great and we are happy with the result of the move. We have identified one problem, and that is reading/writing performance. We have an application that compiles files and writes to disk. This is considerably slower on the new virtual machines compared to the physical machines. Is there a performance bottleneck when writing to a virtual hard drive compared to a physical hard drive?
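
    A hedged way to quantify the gap (not part of the original question): run the same write benchmark inside a VM and on a comparable physical machine and compare throughput. For a Linux guest, for example (path and sizes are placeholders):

        # Write 1 GiB with the page cache bypassed, so the figure reflects actual disk/virtual-disk throughput
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
        rm /tmp/ddtest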

    Read the article

  • Directly reading an LTO tape drive

    - by John
    On our server (Windows Server 2003), is it possible to directly read our LTO-4 tape drive and copy the entire ntbackup-created .bkf file on it to an external hard disk? (Is the backup even stored on the tape as a .bkf file? I'm going by how things worked when we only used external USB hard drives.)

    Read the article

  • How do I use Group Policy on a domain to delete Temporary Internet Files?

    - by Muhammad Ali
    I have a domain controller running Windows Server 2008 R2, and users log in to application servers running Windows Server 2003 SP2. I have applied a Group Policy to clean Temporary Internet Files on exit, i.e. to delete all Temporary Internet Files when users close the browser. But the Group Policy doesn't seem to work: user profile sizes keep increasing, and most of the space is occupied by Temporary Internet Files, which drives up disk usage. How can I enforce automatic deletion of Temporary Internet Files?

    Read the article

  • HVM virtualization with PV drivers on XenServer

    - by Nathan
    Is it possible to create an HVM guest in XenServer 5.5 that uses PV drivers for disk and network without being fully paravirtualized? This should give me decent performance from the VM without having to jump through hoops to create a PV guest when a pre-built template doesn't exist. Since PV drivers exist for Windows, and XenServer provides templates for Windows that use HVM virtualization, this must be possible; I just don't see how to configure it myself.

    Read the article

  • Windows SteadyState - system's security log is full

    - by Matt
    Quick version: a new computer, attached to a Windows domain, with SteadyState and Disk Protection turned on, cannot log on as a domain user; Windows states that the system security log is full.

    Troubleshooting performed:

    - Disabled all 'restrictions' listed in SteadyState
    - Cleared the system security log
    - Changed the security log settings to overwrite entries when the log becomes full
    - Restarted the computer to commit the changes and verified they were committed - still cannot log on as a domain user
    - Moved the Documents and Settings folder to another partition - still cannot log on as a domain user

    Let me know if you need a more detailed description of any of the steps performed. I appreciate any help you can give me.

    Read the article

  • EFS recovery given everything but the Registry

    - by Joel in Gö
    I have an unfortunate problem: my old Windows XP installation has died, probably due to the hard drive failing. The drive now fails all SMART tests, but I can still get files off it. I have now installed Windows 7 on a new drive and want to transfer files from the old drive. However, some sensitive files were in an encrypted folder (EFS, I think). How can I decrypt them, given that I have essentially my entire old XP installation on disk? Thanks!

    Read the article

  • Amazon S3 tools for Debian?

    - by Jonik
    I need to (programmatically, in a shell script) upload an EAR file to an Amazon S3 bucket on Debian (5.0.4). What, if any, Debian package provides simple, scriptable tools for that? (I want raw S3 bucket access, so please don't suggest solutions like Jungle Disk.)
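
    A hedged sketch of one common option (bucket and file names are placeholders): the s3cmd package, if available for your Debian release, provides a simple scriptable command-line client with raw bucket access:

        # Install the command-line S3 client
        apt-get install s3cmd

        # One-time interactive setup of the AWS access and secret keys
        s3cmd --configure

        # Upload the EAR file to the bucket
        s3cmd put myapp.ear s3://my-bucket/myapp.ear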

    Read the article

  • Clonezilla restore from Samba - no 'restoredisk' option

    - by MT_Head
    I used a CloneZilla LiveCD to back up a couple of Windows machines to a Samba share. Now I'm trying to restore those images, and CloneZilla won't even give me the 'restoredisk' or 'restorepart' options on the menu. I'm guessing that this is because CZ isn't recognizing a valid image... but why? Here's a listing of the folder on the Samba share:

        -rwxrwxrwx 1 marc users        319 May 31 03:45 blkdev.list
        -rwxrwxrwx 1 marc users       5307 May 31 04:41 clonezilla-img
        -rwxrwxrwx 1 marc users          4 May 31 04:31 disk
        -rwxrwxrwx 1 marc users      16091 May 31 04:31 Info-dmi.txt
        -rwxrwxrwx 1 marc users      11029 May 31 04:31 Info-lshw.txt
        -rwxrwxrwx 1 marc users       1502 May 31 04:31 Info-lspci.txt
        -rwxrwxrwx 1 marc users        170 May 31 04:31 Info-packages.txt
        -rwxrwxrwx 1 marc users         80 May 31 04:41 Info-saved-by-cmd.txt
        -rwxrwxrwx 1 marc users         10 May 31 04:31 parts
        -rwxrwxrwx 1 marc users 2097152000 May 31 04:06 sda1.ntfs-ptcl-img.gz.aa
        -rwxrwxrwx 1 marc users  247361656 May 31 04:08 sda1.ntfs-ptcl-img.gz.ab
        -rwxrwxrwx 1 marc users  823182034 May 31 04:31 sda2.ntfs-ptcl-img.gz.aa
        -rwxrwxrwx 1 marc users         36 May 31 03:45 sda-chs.sf
        -rwxrwxrwx 1 marc users      31744 May 31 03:45 sda-hidden-data-after-mbr
        -rwxrwxrwx 1 marc users        512 May 31 03:45 sda-mbr
        -rwxrwxrwx 1 marc users        315 May 31 03:45 sda-pt.parted
        -rwxrwxrwx 1 marc users        285 May 31 03:45 sda-pt.parted.compact
        -rwxrwxrwx 1 marc users        259 May 31 03:45 sda-pt.sf

    (I've been experimenting with various permissions trying to get this to work; that's why they're currently all "rwxrwxrwx"...)

    I've got my CZ LiveCD stuck in a (different) machine with a 160GB SATA disk that I'm fine with overwriting; although CZ doesn't show a directory listing, it does show that the correct folder is mounted as /home/partimag. But a moment later, after selecting either Beginner or Expert, I'm only presented with the "savedisk", "saveparts", and "exit" options. What am I doing wrong? I am confident that the initial backup was successful; I can post the log if desired, or any other information that might be germane.

    Edit: I've copied the contents of the folder onto a 16GB USB stick and set THAT as /home/partimag. Still nothing. What the hell is CZ looking for?

    Read the article

  • MSMQ Resilience

    - by Paddy Carroll
    I have a requirement for a resilient MSMQ setup on VMware ESX 5. I am aware that we cannot allow the queue storage to be shared, as it must be installed on a physical disk mount, e.g. it can't be a CIFS or DFS share. The following constraints apply:

    - We don't use Windows clustering
    - We don't rely on hot standbys

    Is there a way I can replicate the queue storage to another platform so that it can assume MSMQ duties on failure of the primary platform, using any method including queue forwarding?

    Read the article

  • How to backup millions of small files?

    - by grassbl8d
    What is the best way to back up millions of small files in a very short time period? We have less than 5 hours to back up a file system which contains around 60 million files, most of them small. We have tried several solutions, such as richcopy, 7z and rsync, and all of them seem to have a hard time. We are looking for the most optimal way. We are open to putting the files in an archive first, or to transferring them to another location over the network or via hard disk transfer. Thanks
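
    A hedged sketch of the archive-first approach the question mentions (paths and host name are placeholders): with tens of millions of small files, per-file protocol overhead usually dominates, so packing the tree into a single tar stream and sending that is often much faster:

        # Stream the whole tree as one tar archive over ssh; no intermediate archive file is written locally
        tar -cf - /data | ssh backupuser@backuphost 'cat > /backups/data.tar'

    Adding compression (tar -czf -) only helps if the network, rather than the disks, is the bottleneck.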

    Read the article

  • Will Windows 7 work at all on my old Toshiba? [closed]

    - by andrew
    Windows 7 requires the following specifications:

    - 1 gigahertz (GHz) or faster 32-bit (x86) or 64-bit (x64) processor
    - 1 gigabyte (GB) RAM (32-bit) or 2 GB RAM (64-bit)
    - 16 GB available hard disk space (32-bit) or 20 GB (64-bit)
    - DirectX 9 graphics device with WDDM 1.0 or higher driver

    Will it work at all on my old Toshiba Satellite A100 PSAA8C-SK400E?

    - Intel® Core™ Solo processor T1350 (1.86GHz, 533MHz FSB, L1 Cache 32KB/32KB, L2 Cache 2MB)
    - Standard Memory: 2x512 MB DDR2
    - Intel® Graphics Media Accelerator 950 with 8MB-128MB

    The main problem I can see is that the graphics is not up to it.

    Read the article

  • Free space on a dedicated server in CentOS

    - by Trance84
    It may sound stupid, but I need to figure out how much disk space I have on my dedicated server. It runs CentOS 6. The last command I issued was this:

        [root@ks34900 ~]# df -h
        Filesystem      Size  Used Avail Use% Mounted on
        rootfs          9.7G  6.4G  2.9G  69% /
        /dev/root       9.7G  6.4G  2.9G  69% /
        none           1000M  288K 1000M   1% /dev
        /dev/sda2       914G  200M  868G   1% /home

    But again, stupid as it may sound, I can't figure out how much space I have in the "/" (root) folder. And is it possible that "/usr" has a different amount of space (a separate partition)?
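
    A hedged note on reading this output: df reports space per mounted filesystem, so you can ask it directly which filesystem any given path lives on, for example:

        # Show the filesystem (and its free space) that /usr lives on
        df -h /usr
        # If "Mounted on" shows /, then /usr is not a separate partition and shares the 2.9G free on /

        # Summarise how much space a directory tree actually uses
        du -sh /usr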

    Read the article

  • du excluding hard links possible?

    - by balor123
    I'm trying to determine how big a cloned Git repository is on a local file system. The clone creates hard links for some, but not all, files. How can I determine its disk usage? The best I can come up with right now is to run "du -a" on the original and then again with the clone included, and take the difference, since each hard-linked file will be counted only once. Ideally, I would just run du on the clone and count each hard-linked file zero times.
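
    A hedged sketch of that approach with GNU du (directory names are placeholders): within a single invocation du counts each inode only once, so listing the original first means the clone's total reflects only data that isn't hard-linked from the original:

        # The size reported for clone/ excludes files hard-linked from original/
        du -sh original/ clone/

        # For comparison, -l (--count-links) counts hard-linked files every time they are seen
        du -shl clone/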

    Read the article

  • How do I recover the Ubuntu system?

    - by Hoang
    I installed an Ubuntu virtual machine on VMware. However, at some point the disk filled up while the system was installing some updates, and it quit without giving any message. Now the system is broken; I can't even launch Firefox to download data. How can I recover this virtual machine to a previous state?
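
    A hedged sketch of the usual first-aid steps (not part of the original question), assuming you can still reach a console or a recovery-mode shell in the guest: free some disk space and let the interrupted update finish:

        # Remove cached package downloads to free space on the full disk
        sudo apt-get clean

        # Resume the package configuration that was interrupted mid-update
        sudo dpkg --configure -a
        sudo apt-get -f install

    Reverting the whole VM to an earlier state, as asked, is only possible if a VMware snapshot of the machine was taken beforehand.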

    Read the article

  • Can someone explain RAID-0 in plain English?

    - by Edward Tanguay
    I've heard about and read about RAID throughout the years and understand it theoretically as a way to help e.g. server PCs reduce the chance of data loss, but now I am buying a new PC which I want to be as fast as possible, and I have learned that having two drives can considerably increase the perceived performance of your machine.

    In the question Recommendations for hard drive performance boost, the author says he is going to RAID-0 two 7200 RPM drives together. What does this mean in practical terms for me with Windows 7 installed? E.g., can I buy two drives, go into the Device Manager and "RAID-0 them together"? I am not a network administrator or a hardware guy; I'm just a developer who is going to have a computer store build me a super fast machine next week. I can read the Wikipedia page on RAID, but it is just way too many trees and not enough forest to help me build a faster PC:

    RAID-0: "Striped set without parity" or "Striping". Provides improved performance and additional storage but no redundancy or fault tolerance. Because there is no redundancy, this level is not actually a Redundant Array of Inexpensive Disks, i.e. not true RAID. However, because of the similarities to RAID (especially the need for a controller to distribute data across multiple disks), simple stripe sets are normally referred to as RAID 0. Any disk failure destroys the array, which has greater consequences with more disks in the array (at a minimum, catastrophic data loss is twice as severe compared to single drives without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 drive, the data is broken into fragments. The number of fragments is dictated by the number of disks in the array. The fragments are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking, so any error is unrecoverable. More disks in the array means higher bandwidth, but greater risk of data loss.

    So in plain English, how can "RAID-0" help me build a faster Windows 7 PC that I am going to order next week?

    Read the article
