Search Results

Search found 8937 results on 358 pages for 'disk defragmenting'.


  • Failing Windows Updates with Error Code 800719e4

    - by Kev
    On a number of Vista machines I have now come across the same error: when installing updates everything works fine until after the reboot, when the updates roll back during step 3. On every occasion (where a simple retry hasn't worked) the error code has been 800719e4. On my own laptop I have so far tried the following:
    - Installed the updates one by one manually. I started with the smallest and worked towards the largest, which has left me with "Security Update for Windows (KB2286198)" refusing to install.
    - Renamed all the files in "C:\Windows\Logs\CBS" to "xxx.old" (where xxx was the original name) with Windows Update turned off - no change.
    - Renamed all the folders in "C:\Windows\SoftwareDistribution" in the same manner - no change.
    - Attempted to install it manually from "Windows6.0-KB2286198-x86.msu" - no change.
    - Tried to uninstall IE8 - doesn't work, rolls back at the end. (Installing the IE9 Beta when it launched was what alerted me to the issue on this laptop.)
    - Ran a "Fix It" tool from the Microsoft website - no help (can't find the link now).
    - Tried to recover from disc - but alas my laptop only has a recovery partition (and shipped without a service pack).
    - Ran with nothing on startup and only MS services - again no change.
    Google is being useless, with a load of posts trying to get me to call a telephone number with letters in it (presumably an American number). The error code appears to mean "error log full", but no one has any idea which log! The WindowsUpdate log does indicate the error point, though:
        2010-10-23 13:54:48:230 1240 738 Handler WARNING: Got extended error: "POQ Operation SetKeyValue OperationData \Registry\machine\Schema\wcm://Microsoft-Windows-shell32?version=6.0.6002.18287&language=neutral&processorArchitecture=x86&publicKeyToken=31bf3856ad364e35&versionScope=nonSxS&scope=allUsers\metadata\elements\HKEY_CLASSES_ROOT_lnkfile_shellex_DropHandler_defaultValue, @default, , ewAwADAAMAAyADEANAAwADEALQAwADAAMAAwAC0AMAAwADAAMAAtAEMAMAAwADAALQAwADAAMAAwADAAMAAwADAAMAAwADQANgB9AAAA"
    Has anyone any idea how to fix this once and for all? Reinstalling laptop after laptop from scratch is mildly annoying at work, where Office and Firefox are the only extras, but even more annoying at home - I don't fancy going through the palaver of reinstalling everything yet again.
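    For what it's worth, the generic Windows Update reset sequence (run from an elevated prompt, with the stock Vista service names) is often suggested for rollback loops like this; this is only a sketch, not a confirmed fix for 800719e4:

        net stop wuauserv
        net stop bits
        net stop trustedinstaller
        ren C:\Windows\SoftwareDistribution SoftwareDistribution.old
        ren C:\Windows\Logs\CBS CBS.old
        net start trustedinstaller
        net start bits
        net start wuauserv
        :: then run the System Update Readiness tool (CheckSUR, KB947821) before retrying the KB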

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). Is the fs dead?

    - by ip
    My company has a server with one big partition holding a MySQL database and PHP files. Now this partition seems to be corrupted, as reported by kernel messages when I tried to mount it manually:
        [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [329862.817846] EXT3-fs: group descriptors corrupted!
    I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested, without any success:
    - e2retrieve
    - testdisk
    - photorec
    - dd_rescue/dd_rhelp
    - ddrescue
    - fsck.ext2
    - e2salvage
    Output of dumpe2fs:
        dumpe2fs 1.41.3 (12-Oct-2008)
        Filesystem volume name:   /dev/sda3
        Last mounted on:          <not available>
        Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal filetype sparse_super
        Default mount options:    (none)
        Filesystem state:         not clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              9502720
        Block count:              18987570
        Reserved block count:     949378
        Free blocks:              11555345
        Free inodes:              11858398
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         16384
        Inode blocks per group:   512
        Last mount time:          Wed Mar 24 09:31:03 2010
        Last write time:          Mon Apr 12 11:46:32 2010
        Mount count:              10
        Maximum mount count:      30
        Last checked:             Thu Jan  1 01:00:00 1970
        Check interval:           0 (<none>)
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               128
        Journal inode:            8
        Journal backup:           inode blocks
        dumpe2fs: A block group is missing an inode table while reading journal inode
    And e2fsck:
        e2fsck 1.41.3 (12-Oct-2008)
        fsck.ext3: Group descriptors look bad... trying backup blocks...
        fsck.ext3: A block group is missing an inode table while checking ext3 journal for /dev/sda3
    I also tried the backup superblocks, with the same error as a result. Are there any other tools I should try before considering this disk definitely unrecoverable? Many thanks, ip
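    One further hedged avenue, assuming the partition was also imaged with ddrescue: run e2fsck against a backup superblock on a copy of the image. The 32768 location is the usual first backup for a 4096-byte block size, not something confirmed from this particular filesystem:

        # work on a copy of the ddrescue image, never the original disk
        cp sda3.img sda3-work.img
        # list where the backup superblocks would be for this geometry
        # (-n prints only, nothing is written; -F allows running against a plain file)
        mke2fs -n -F -b 4096 sda3-work.img
        # try fsck against the first backup superblock for 4k blocks
        e2fsck -B 4096 -b 32768 -y sda3-work.img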

    Read the article

  • Dell XPS 15 L502X hard drive Partition

    - by Mohan Gajula
    I have a situation here. I got my new Dell XPS 15 laptop. The configuration of the hard drive is as below:
    - Volume 1 (OEM Partition): 133 MB
    - Volume 2, OS (C:): 685.25 GB
    - Volume 3, Recovery: 13.25 GB
    Now I am trying to re-partition my C: drive into a C: drive with 100 GB and a new drive with 585 GB. Earlier I tried using Windows 7 Disk Management to shrink and extend the volume. That led to the OS and hard drive not working. Dell tech support tried to fix the issue, but they were not able to fix it online. Later a Dell technician came to my place and replaced the hard drive with a new one. Please help me re-partition the C: drive with 100 GB and a new D: drive with 585 GB. I don't want to lose my recovery partition. SOLUTION: As suggested by KCotreau below, I did exactly that. I resized the C: drive to 100 GB and then applied the changes. Windows restarted, and the partitioning took place on the boot screen; it took around 30 minutes (approx.). After the restart I can see my C: drive is 100 GB. I then opened EaseUS again and created a new partition from the free space (585 GB); this took 10 seconds. Here is the screenshot after partitioning. Thanks to KCotreau, you are amazing.

    Read the article

  • The USB mouse sticks in Windows 7 after automatic attempt to fix the boot

    - by chelder
    Avast antivirus asked me to delete a probable virus and to restart to perform a check. I had to stop the check in the middle of the procedure because I needed the computer. It was impossible to turn off the computer by pushing the power button (it just entered suspend mode, no matter how long I kept the power button pressed), so I removed the battery as the only way to restart the computer. Windows 7 then said there was a problem starting Windows and tried to fix it, without success, although it did start after that. Everything is OK except the USB mouse: it sticks and freezes every couple of seconds or so. The touchpad (PS/2) works fine. I googled for solutions, but the possible fixes didn't work for me. What happened? How could I fix it without formatting and reinstalling everything? UPDATE: this is what I have tried so far:
    - Changed the mouse from one USB port to another
    - Tested another mouse
    - Set the number of CPU cores manually with msconfig
    - Power management: stopped Windows from disabling the USB ports
    - Checked the hard disk for errors

    Read the article

  • MicroSD card getting corrupted for no good reason

    - by ChaosR
    I recently bought a MicroSD card online. It's a SanDisk 16GB class 2. However, it has a nasty problem: every time I fill it with my data, the FAT tables get corrupted. I've tried reformatting it and blanking it, which doesn't seem to solve the problem. I have tried Windows and Linux (Ubuntu); both have the problem. I've used my USB MicroSD readers, and even tried putting it in my phone and writing data to it from there. All have this problem. Now the really odd thing is, besides the corrupted file tables, no program can find anything wrong with the hardware. I've tried both chkdsk and "badblocks -w"; neither gives any type of error. Now I don't know if the actual data gets corrupted, or if it's just the filesystem tables. What happens is that one or more folders start showing a load of garbled names (random UTF-8 symbols, I suppose), and it is impossible to do anything with those folders and files. All the other data (outside of the corrupted folders) seems fine. I've tried to test it, and the problem doesn't seem to show up until I fill the disk up to about 3-4 GB. After that I can still access the data, but as soon as I eject/safely remove/unmount it, the bad things happen somehow. Next time I plug it in, the folders I most recently wrote to (but sometimes also the folders I wrote to the time before) are all gibberish. Does anybody have any clue what might be going on here? EDIT: It seems I can't even put ext3 or ext4 on it; they both complain about a corrupted journal. Gheh, guess something is really broken here.
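    Corruption that only appears once writes pass a few GB is a classic symptom of a counterfeit card reporting more capacity than it really has; a hedged way to test for that, assuming the F3 utilities are installed and the card is mounted at /media/sdcard (both are assumptions, not details from the question):

        # fill the mounted card with numbered test files
        f3write /media/sdcard
        # unmount and re-insert the card to defeat caching, then verify:
        # a fake-capacity card shows overwritten/corrupted sectors on read-back
        f3read /media/sdcard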

    Read the article

  • How do I create an MBR on a USB stick using the dd command-line tool?

    - by Lana Miller
    Okay, I'm trying to create a bootable Windows 7 image on a USB key from a Mac running Lion. My image is in .iso format. I tried:
        sudo dd if=/Users/myusername/Win7.iso of=/dev/disk1 bs=1m
    This succeeded in writing the files, except that in Disk Utility on the Mac the partition type shows as "GUID Partition Table" and not "Master Boot Record". Booting the key on my Vista computer yields the error "No boot sector on USB device". From what I can tell, bs=1m in the dd command should have left 1 megabyte for the boot sector, but for some reason this area of the USB key is not set up correctly so that it will boot. How can I fix this, or correctly use dd to write a bootable CD image so that it becomes a bootable USB drive? Note: in the instructions I read, they recommended renaming my Win7.iso to Win7.dmg before using dd, which made absolutely no sense to me, so I didn't do it. I could try that step now, but it takes 1.99 hours to write the image to the USB drive, so there is a huge penalty to trial and error here. Thank you.
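    The rename-to-.dmg advice usually corresponds to converting the ISO with hdiutil first and then writing to the raw device node; a hedged sketch follows, with disk1 taken from the question and everything else an assumption. Note that the stick still ends up with whatever partition scheme the image itself contains, so this alone may not produce an MBR-partitioned key:

        # convert the ISO to a raw read/write image; hdiutil appends ".dmg"
        hdiutil convert -format UDRW -o "$HOME/Win7" "$HOME/Win7.iso"
        # unmount (not eject) the stick, then write to the raw device (rdiskN is much faster)
        diskutil unmountDisk /dev/disk1
        sudo dd if="$HOME/Win7.dmg" of=/dev/rdisk1 bs=1m
        diskutil eject /dev/disk1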

    Read the article

  • Cannot remove storage account because of lease, but I already deleted the server [closed]

    - by djechelon
    I recently created a temporary virtual server on Azure, then deleted it. I wanted to delete the storage account associated with it because I didn't need it any more. The problem is that the VHD file is still associated with a non-existing virtual machine! If I try to delete the VHD from Virtual Machines\Disks, the Delete button is greyed out and the table tells me it's still associated with the old VM. If I go to storage administration and try to delete the blob from the vhds/ directory, I'm told there is an active lease. I've read on the Azure forums that in these cases one should try to force-release the lease on the blob. I followed their instructions and downloaded their script, but running it failed: the script detected that the disk is associated with a virtual machine and can't be deleted. The problem is that I'm 1000000% sure that I already deleted the VM. In fact, I currently have only a single VM, which has its own HD and is up and running fine! What can I do to delete that storage account that is probably sucking money from my pocket?

    Read the article

  • Get Illegal Instruction error when booting Linux in VirtualBox, works fine when booted directly

    - by rkjnsn
    I have a computer on which I am dual-booting Windows 7 and Gentoo Linux (both 64-bit). I want to be able to load my Linux installation in a VM while I am booted into Windows. I have installed VirtualBox and followed the instructions for creating a raw disk VMDK. When I start the VM, Linux starts booting, but then fails with the following error when unlocking my root partition:
        truecrypt[441] trap invalid opcode ip:373615538e0 sp:3dd0e0dfb60 error:0 in libpixman-1.so.0[373614d6000+8d000]
    Everything works fine when I boot into Linux directly. What could cause an illegal instruction to be hit in libpixman only when booting in VirtualBox? Update: as a troubleshooting step, I recompiled pixman without "-march", and no longer get an illegal instruction error in that library. (The boot fails at the same spot with the same error in a different library, however.) How can I determine the specific opcode that isn't working in VirtualBox, so I can disable it in my CFLAGS without having to disable all CPU-specific optimizations? I am still confused as to why there would be any user-mode instruction that would fail to work in a VM. Is this a known limitation? My CPU is an Intel Core i7 3720QM, and I have hardware virtualization support enabled.
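    To pin down the exact opcode, one hedged approach is to subtract the library load base from the faulting instruction pointer and disassemble around that offset; the library path below is a guess for a 64-bit Gentoo install, and the build disassembled must be the same one that crashed:

        # offset of the faulting instruction inside libpixman:
        #   0x373615538e0 - 0x373614d6000 = 0x7d8e0
        objdump -d /usr/lib64/libpixman-1.so.0 | grep -B2 -A2 '7d8e0:'
        # compare the CPU feature flags the guest actually sees with the host's
        grep -m1 flags /proc/cpuinfo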

    Read the article

  • How to set up RAID-0 for the first time on a new PC?

    - by jasondavis
    I have built basic PCs in the past but have never used a RAID array at all. Now I am buying parts to build my new PC; it will have an Intel i7 processor. My motherboard will have RAID support, which I will use instead of an aftermarket RAID controller for now. I also plan to use 2 SSD drives in RAID-0 for my Windows 7 OS. (Please note that I am aware of the issues with doing this, including the lack of TRIM support when using RAID with SSD drives. I am OK with that, as I can just replace the drives in a year or so, or whenever they become more sluggish.) So here is my question. If I assemble the motherboard, PSU, processor, RAM, video card, etc. and then turn the PC on with the 2 SSD drives hooked up, I assume I will see the BIOS screen before I install Windows? How do I go about making the 2 drives work in RAID-0 at that point? I do the RAID part before installing my OS, right? Please help with the steps involved, from assembling the parts of the PC and turning it on, to getting the RAID-0 set up between the 2 drives and then installing my Windows 7 OS from an optical drive. All advice, instructions, and tips are appreciated as long as they are on topic. I do not need to be told that this is a bad idea because if 1 drive fails I lose it all; I plan on having a disk image so I can restore my OS and software to a new set of drives at any time in the event of drive failure. Same goes for the lack of TRIM support. Thanks for reading, and for the help =)

    Read the article

  • Error during Time Machine backups on OS X Lion

    - by user92401
    After I turn on my machine, the first couple of Time Machine backups seem to go OK, but after about an hour I get this error:
        Unable to complete backup. An error occurred while creating the backup folder.
        Latest successful backup: 7/31/11 at 12:32 PM
    I'm running 10.7. Time Machine is backing up an internal HD to an external USB HD. I've already run Disk Utility to repair the Time Machine partition. It's a relatively new hard drive and didn't have any issues. Here's what I've found in the Console's log filtered for backupd:
        7/31/11 12:31:21.223 PM com.apple.backupd: Starting standard backup
        7/31/11 12:31:21.447 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb
        7/31/11 12:31:29.146 PM com.apple.backupd: 983.7 MB required (including padding), 391.90 GB available
        7/31/11 12:32:19.471 PM com.apple.backupd: Copied 3156 files (36.0 MB) from volume Macintosh HD.
        7/31/11 12:32:20.017 PM com.apple.backupd: Copied 3173 files (36.0 MB) from volume LI.
        7/31/11 12:32:20.136 PM com.apple.backupd: 934.8 MB required (including padding), 391.86 GB available
        7/31/11 12:32:54.755 PM com.apple.backupd: Copied 916 files (117.8 MB) from volume Macintosh HD.
        7/31/11 12:32:54.894 PM com.apple.backupd: Copied 933 files (117.8 MB) from volume LI.
        7/31/11 12:32:55.937 PM com.apple.backupd: Starting post-backup thinning
        7/31/11 12:32:55.937 PM com.apple.backupd: No post-back up thinning needed: no expired backups exist
        7/31/11 12:32:55.960 PM com.apple.backupd: Backup completed successfully.
        7/31/11 1:21:28.624 PM com.apple.backupd: Starting standard backup
        7/31/11 1:21:28.631 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb
        7/31/11 1:21:28.682 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37
        7/31/11 1:21:28.683 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37
        7/31/11 1:21:38.694 PM com.apple.backupd: Backup failed with error: 2
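    One hedged troubleshooting step, not a confirmed fix: inspect the HostUUID extended attribute that the log complains about and, if it looks stale, remove it so backupd can rewrite it on the next run (the path is copied from the log above):

        # list the extended attributes on the machine directory named in the log
        xattr -l "/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro"
        # if com.apple.backupd.HostUUID is present but stale, remove it and let
        # Time Machine recreate it
        sudo xattr -d com.apple.backupd.HostUUID \
            "/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro"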

    Read the article

  • running chkdsk /F on a large mounted NTFS image file gets BSOD (Windows Vista)

    - by Citizentools
    Using ddrescue, I've created ISO files from the C: and D: drives of my Windows XP laptop's hard disk (after the laptop stopped booting and chkdsk etc. wouldn't fix it). I was able to mount the 60 GB D.iso file using OSFMount, and successfully recreated the D: drive on another laptop. The C.iso image is more problematic. ddrescue left about 3 MB of 85 GB total unrecovered after multiple passes (no big worries about this), and I'm able to mount the image with OSFMount on a Windows Vista laptop. However, when I run chkdsk /F /V on the mounted drive (which was mounted as H:), I consistently get a blue screen (BSOD). chkdsk makes it through the first three passes, including index fixing and security descriptor fixes, without errors, but triggers a BSOD when it attempts to fix the volume records or bitmap. If I attempt to fix the drive by clicking Properties - Tools - Error checking - Check Now - Automatically fix file system errors, I get an alert box reading "Windows was unable to complete the disk check." I'd try a tool other than OSFMount, but it's the only thing I've found so far that will mount large ISO files, and it has worked for me up to now in this process. [Update 2011-11-13 18:41 EST] I just ran the same process using the original Windows XP laptop with a different internal drive, and chkdsk worked like a champ. So the question is still interesting, but decidedly less urgent.

    Read the article

  • Asus K55VM usb 3.0 issue

    - by user2141481
    Good day, superusers! I own the above laptop and I have found some unknown and unusual issues with the USB 3.0 ports; I hadn't noticed anything strange until now. I got a new Toshiba USB 3.0 external HDD, and when I try to copy a larger amount of data from my disk to the external HDD, the OS (Windows 7) randomly starts ignoring the external HDD. It doesn't shut it down; it just stops responding, though the light on the HDD is still lit, and I get an error that the files cannot be copied. I have reinstalled Windows 7 and installed all drivers (including the Intel chipset drivers, of course), and the issue is still present. It acts normally when copying small amounts of data. Also, I heard that some Intel chipsets have an issue with USB, something about the connectors not supplying power when the USB device enters some kind of "low power mode", causing the device to stop responding until you plug it out and in again. But the thing is, my Intel® Chief River Chipset HM76 is not on the list of affected hardware (not ENTIRELY sure, though). If anyone has any idea what the problem might be, I'd be grateful. Edit: The HDD works perfectly fine, even for large amounts of data, if plugged into a USB 2.0 port!

    Read the article

  • How is it ensured that magnetic or electric fields from devices like transformers or fans nearby do not harm a hard drive?

    - by matnagel
    Fans and transformers inside a server case create magnetic and electric fields. Electric fields can be easily shielded, but what about magnetic fields? They can only be shielded with high-cost materials like mu-metal (http://en.wikipedia.org/wiki/Mu-metal). If a hard drive is installed too close to an intense transformer field, how is the magnetically stored information on the ferromagnetic surfaces of the disk kept safe? Even if drives are shielded, where are the limits? Is there any technical investigation or recommendation from manufacturers about this? (I have never heard anything about it and never had any problem, but I am interested in some facts; facts are much preferred over what you believe or a habit you developed, so please try to give some solid information.) I have built and repaired many servers, and sometimes I put the hard drive on top of the power supply. Edit: This question is not about frequencies that could affect the drive via its power or data connectors; those are electronically decoupled, and that's another question. Edit 2: The Wikipedia page states that the motor inside the drive is shielded with mu-metal, so it is obvious that manufacturers have to take care of this. This question is about such influences from outside the drive.

    Read the article

  • How to set up the .ssh directory inside an encrypted volume on Mac OS X and still have public key logins?

    - by Vitaly Kushner
    I have my .ssh directory inside an encrypted sparse image, i.e. ~/.ssh is a symlink to /Volumes/VolumeName/.ssh. The problem is that when I try to ssh into that machine using a public key, I see the following error message in /var/log/secure.log:
        Authentication refused: bad ownership or modes for directory /Volumes
    Any way to solve this in a clean way? Update: The permissions on ~/.ssh and authorized_keys are right:
        > ls -ld ~
        drwxr-xr-x+ 77 vitaly staff 2618 Mar 16 08:22 /Users/vitaly/
        > ls -l ~/.ssh
        lrwxr-xr-x 1 vitaly staff 22 Mar 15 23:48 /Users/vitaly/.ssh@ -> /Volumes/Astrails/.ssh
        > ls -ld /Volumes/Astrails/.ssh
        drwx------ 3 vitaly staff 646 Mar 15 23:46 /Volumes/Astrails/.ssh/
        > ls -ld /Volumes/Astrails/
        drwx--x--x@ 18 vitaly staff 1360 Jan 12 22:05 /Volumes/Astrails//
        > ls -ld /Volumes/
        drwxrwxrwt@ 5 root admin 170 Mar 15 20:38 /Volumes//
    The error message says the problem is with /Volumes, but I don't see the problem. Yes, it is o+w, but it is also +t, which should be OK but apparently isn't. The problem is I can't (or rather shouldn't) change the /Volumes permissions, but I do want public key login to work. First I thought of mounting the image somewhere other than /Volumes, but it is automounted at login by the standard OS X mounting. I asked about it here: How to change disk image's default mount directory on osx. The only answer I got is "you can't" ;) I could hack my way around it by writing a shell script that manually mounts the volume at a non-standard location, but that would be a gross hack. I'm still looking for a cleaner way to do what I need.
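    For reference, two standard sshd_config directives bear on this check; neither comes from the thread, the key-file path is only an example, and the second option disables a deliberate safety check:

        # /etc/sshd_config on OS X of that era (/etc/ssh/sshd_config on most other systems)

        # Option 1: keep authorized_keys outside the encrypted image entirely,
        # so sshd never has to walk /Volumes
        AuthorizedKeysFile /etc/ssh_authorized_keys/%u

        # Option 2: turn off the ownership/mode check altogether (weigh the tradeoff)
        StrictModes no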

    Read the article

  • How to figure out what VirtualBox did?

    - by AndrejaKo
    I'm trying to boot a custom made-in-ASM OS on my recent laptop. The OS is intended to be installed on a floppy, and during make it creates a bootable floppy image. Since I don't have a floppy drive, I installed it on a virtual floppy. After that I used WinToFlash's "create bootable MS-DOS USB drive" option to transfer the floppy image to a USB flash drive. Then I tried to boot my computer from it, but got only a repeating broken string on screen. After all that, I made a virtual hard disk image from the flash drive using this tutorial and tried to boot a virtual machine from it. The first time, I got the same problem as on the real computer. I then used the reset option, and the next time, and every time after that, the OS booted correctly. My question is: how do I figure out what exactly happened to the virtual machine between the first and second boot? UPDATE: I just created a new VM with default settings for Windows XP, and it has the same problem that I have on the real computer. I was unable to reproduce the procedure which made the first VM work correctly.
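    A hedged way to compare the two machines is to diff their full configurations and read the per-boot logs; the VM names below are placeholders, not names from the question:

        # dump the full configuration of both VMs and diff them to spot what changed
        VBoxManage showvminfo "WorkingVM" --details > working.txt
        VBoxManage showvminfo "BrokenVM" --details > broken.txt
        diff working.txt broken.txt
        # the per-boot log (Logs/VBox.log in each VM's folder) also records the emulated
        # chipset, boot device order and BIOS messages for every start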

    Read the article

  • Is there a way to exclude a specific drive vdi from "snapshots" in VirtualBox?

    - by Graza
    ...or is there another space-efficient way of dealing with the page/swap file of the guest OS? I've realised that it's quite possible/likely that one of the things which "bloats" the snapshot/diff VDIs when a snapshot is taken is the guest operating system's pagefile. For example, say I have a 2 GB swap file in a Windows guest OS, and over the course of a few weeks the usage of the swap file has gone over 1 GB a couple of times. When I next create a snapshot, it seems likely that I'd be almost guaranteed around 1 GB of space taken up in the new differencing disk, just because of changes in the swap file. Obviously (provided I never did "live" snapshots on running or paused machines, and only ever did them when the machine was shut down), I would not need any of the information in the swap file to be saved, so this would simply be a waste of 1 GB. I'm wondering if there's a way to attach a VDI to a VM and flag it as "exclude from snapshots", which would mean I could put the swap file on a separate VDI that would never be included in a snapshot. Or if anyone has any other suggestions. Or an explanation of why it might not be an issue. I could obviously delete and recreate a swap-drive VDI every time I did a snapshot to achieve the same effect, but this is a little more effort than simply clicking "create snapshot"...
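    VirtualBox's closest match to an "exclude from snapshots" flag is the write-through disk type; a sketch, with the file path as a placeholder and the subcommand name varying slightly between VirtualBox versions (modifyhd in older releases, modifymedium in newer ones):

        # mark the swap-file disk as "writethrough" so snapshots leave it alone
        VBoxManage modifyhd /path/to/swap.vdi --type writethrough
        # verify the disk type afterwards
        VBoxManage showhdinfo /path/to/swap.vdi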

    Read the article

  • Ubuntu won't boot from USB memory stick

    - by mackenir
    I used the instructions on this webpage to create a bootable USB drive for running Ubuntu 9.10. Unfortunately it doesn't work on my Eee PC. Even with 'Removable Dev.' selected in the BIOS as the first boot device, the PC just boots into Windows 7. How do I troubleshoot this problem? The drive is readable and looks like this:
        Directory of E:\
        28/10/2009  21:14    <DIR>          .disk
        28/10/2009  21:14               222 README.diskdefines
        28/10/2009  21:14               143 autorun.inf
        28/10/2009  21:14    <DIR>          casper
        28/10/2009  21:14    <DIR>          dists
        28/10/2009  21:14    <DIR>          install
        28/10/2009  21:14    <DIR>          syslinux
        28/10/2009  21:14             4,098 md5sum.txt
        28/10/2009  21:14    <DIR>          pics
        28/10/2009  21:14    <DIR>          pool
        28/10/2009  21:14    <DIR>          preseed
        28/10/2009  21:14                 0 ubuntu
        26/10/2009  16:16         1,468,640 wubi.exe
        25/02/2010  00:28     2,147,483,648 casper-rw
                   8 Dir(s)   5,290,307,584 bytes free
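    If the stick's boot flag or boot code never got written, one hedged repair from any Linux live session looks like the following; the device name /dev/sdb and the mbr.bin path are assumptions that vary by system and distro:

        # 1) make sure the FAT partition is flagged bootable
        sudo parted /dev/sdb set 1 boot on
        # 2) reinstall the syslinux boot code on that partition
        sudo syslinux -i /dev/sdb1        # on older syslinux versions, omit -i
        # 3) write a standard MBR so the BIOS can hand over to the active partition
        sudo dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdb bs=440 count=1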

    Read the article

  • Best way to integrate applications into a Windows 7 install.wim image

    - by cyph3r
    I currently have an unmodified .iso of a Windows 7 32-bit and 64-bit installation disk, and I need to integrate into it some applications (Office, Adobe Reader, etc.) and Windows updates, so that when Windows is installed the above applications/updates are already installed and working. Requirements:
    - My output has to be an install.wim image containing the new/improved Windows installation files, because deployment is done via a PXE server and a custom Windows PE environment.
    - The procedure to create the install.wim has to be as automatic as possible. I can't create it manually every time I want to incorporate a new Windows or application update into the image.
    - The image will be installed on 100+ computers, so it needs to be 'generic'.
    I've never done something like this before, but from what I've searched, a possible solution would be:
    1. Create a reference installation (preferably on a VM so I can take snapshots), complete with its applications/updates/settings.
    2. After the complete setup, take a snapshot of the installation.
    3. Run C:\Windows\System32\sysprep\sysprep.exe /oobe /generalize /shutdown to sysprep the machine.
    4. Boot into a Windows PE environment and capture the .wim image using GImageX.
    5. Deploy the .wim and enjoy the rapid installation times. :D
    Does that sound OK? Would you recommend anything else? Right now the applications are installed after the installation of Windows is complete, so the total installation time is quite long. That's why I need a different approach.
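    For step 4, the command-line tool behind GImageX would be invoked roughly as in the sketch below (run inside Windows PE after the sysprep shutdown; the drive letters and image name are examples, not confirmed values):

        rem C: = the sysprepped OS volume, D: = wherever the new install.wim should land
        imagex /capture C: D:\install.wim "Windows 7 with applications" /compress maximum /check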

    Read the article

  • Does btrfs balance also defragment files?

    - by pauldoo
    When I run btrfs filesystem balance, does this implicitly defragment files? I could imagine that balance simply reallocates each file extent separately, preserving the existing fragmentation. There is an FAQ entry, 'What does "balance" do?', which is unclear on this point:
        btrfs filesystem balance is an operation which simply takes all of the data and metadata on the filesystem, and re-writes it in a different place on the disks, passing it through the allocator algorithm on the way. It was originally designed for multi-device filesystems, to spread data more evenly across the devices (i.e. to "balance" their usage). This is particularly useful when adding new devices to a nearly-full filesystem. Due to the way that balance works, it also has some useful side-effects:
        - If there is a lot of allocated but unused data or metadata chunks, a balance may reclaim some of that allocated space. This is the main reason for running a balance on a single-device filesystem.
        - On a filesystem with damaged replication (e.g. a RAID-1 FS with a dead and removed disk), it will force the FS to rebuild the missing copy of the data on one of the currently active devices, restoring the RAID-1 capability of the filesystem.
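    For reference (not quoted from the FAQ), btrfs exposes defragmentation as its own subcommand, separate from balance; run as root, with /mountpoint as a placeholder:

        # defragmentation is a separate subcommand from balance
        btrfs filesystem defragment -r /mountpoint    # -r needs newer btrfs-progs
        # on older btrfs-progs, defragment files one by one instead:
        find /mountpoint -xdev -type f -exec btrfs filesystem defragment {} \;
        # balance, by contrast, rewrites whole chunks through the allocator:
        btrfs filesystem balance /mountpoint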

    Read the article

  • How to make a Linux software RAID1 detect disc corruption?

    - by Paul
    This is one of those nightmare days: a virtualized server running on Linux software RAID-1 hosts a VM that exhibits random segfaults in seemingly random code chunks. While debugging, I found that a file gives a different md5sum on each and every run. Digging deeper, I found this: the raw disc partitions that make up the RAID-1 mirror contain 2 bit differences, and about 9 sectors are completely empty on one disc but filled with data on the other. Obviously Linux returns a sector from a non-deterministically chosen disc of the mirror set, so sometimes the intact sector is returned and sometimes the corrupted one. The docs say:
        RAID cannot and is not supposed to guard against data corruption on the media. Therefore, it doesn't make any sense either, to purposely corrupt data (using dd for example) on a disk to see how the RAID system will handle that. It is most likely (unless you corrupt the RAID superblock) that the RAID layer will never find out about the corruption, but your filesystem on the RAID device will be corrupted.
    Thanks. That will help me sleep. :-/ Is there a way to have Linux at least detect this corruption, by using sector checksumming or something like that? Would this be detected in a RAID-5 setup? Is this the moment I wish I had used ZFS or btrfs (once it becomes usable without uber-admin capabilities)?
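    The md layer can at least count mismatches between the mirror halves via its sysfs scrub interface; a sketch, run as root, with md0 as an example device name:

        echo check > /sys/block/md0/md/sync_action    # read-and-compare scrub
        cat /proc/mdstat                              # shows progress of the check
        cat /sys/block/md0/md/mismatch_cnt            # >0 means the two halves disagree
        # 'repair' forces the halves back into sync, but md cannot tell which copy
        # is the good one - it restores consistency, not correctness
        echo repair > /sys/block/md0/md/sync_action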

    Read the article

  • Command line scripts to restore the 4 system databases of MS SQL Server 2008

    - by ciscokid
    Hi there, can someone give me some advice on how to restore the 4 system databases (master, msdb, model, tempdb) of a SQL Server 2008 instance? I've already done some testing myself (on restoring the master database) with the following command-line script as a result:
        ::set variables
        set dbname=master
        set dbdirectory=C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA
        title Restoring %dbname% database
        net stop mssqlserver
        cd C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn
        sqlservr -m
        sqlcmd -Slocalhost -E -Q "restore database master from disk='c:\master.bak' WITH REPLACE"
        net start mssqlserver
        pause
    After the execution of the 'sqlservr -m' command (used to start the server instance in single-user mode, which is only necessary when restoring the master database), the script stops. So in order to execute the last 2 commands I need to split the script into 2 smaller scripts and run them one after the other. Does anyone have an idea how I can merge them into one single script that runs completely without any interruption? I also want to restore the other 3 system databases using command-line scripts like this one. Can someone please advise me how to proceed? I've already noticed that restoring tempdb is not so easy, but there has to be a way... Looking forward to your advice!
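    One hedged way to merge the two halves is to pass the single-user switch to the service itself instead of launching sqlservr in the foreground; a sketch assuming a default instance and backups in c:\, and noting that tempdb is rebuilt from model at every startup and is never restored from a backup:

        ::restore master with the instance in single-user mode, then the rest normally
        net stop MSSQLSERVER
        net start MSSQLSERVER /m"SQLCMD"
        sqlcmd -Slocalhost -E -Q "RESTORE DATABASE master FROM DISK='c:\master.bak' WITH REPLACE"
        ::restoring master shuts the instance down by itself; bring it back up normally
        net start MSSQLSERVER
        sqlcmd -Slocalhost -E -Q "RESTORE DATABASE model FROM DISK='c:\model.bak' WITH REPLACE"
        ::SQL Server Agent must not be running while msdb is restored
        sqlcmd -Slocalhost -E -Q "RESTORE DATABASE msdb FROM DISK='c:\msdb.bak' WITH REPLACE"
        ::tempdb is recreated from model at every startup, so it is not restored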

    Read the article

  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (e.g. the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to the HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get:
        Windows failed to start. A recent hardware or software change might be the cause. To fix the problem...
        Status: 0xc000000e
        Info: The boot selection failed because a required device is inaccessible.
    The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD and choose "Repair". It indicated my recovery environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted; however, I got the same issue as above. In addition, when I booted into Windows 7 and looked at the boot menu options, the recovery/repair option was not there, only my Windows installation. Finally, I ran the Disk Management tool (diskmgmt.msc) and took a look at the contents of my "System Reserved" partition (which was set to "Active", as normal). It's unclear to me what the contents should look like; however, it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process which led to http://www.sevenforums.com/tutorials/668-system-recovery-options.html.)
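    Windows 7 ships the REAgentC tool for exactly this registration step; a hedged sketch, with the WinRE path given only as an example of where winre.wim commonly lives:

        rem check whether a Windows RE image is registered and enabled
        reagentc /info
        rem if it is disabled but winre.wim exists (often under C:\Recovery\... or on the
        rem System Reserved partition), point REAgentC at that folder and enable it again
        reagentc /setreimage /path C:\Recovery\WindowsRE
        reagentc /enable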

    Read the article

  • Can I use a Drobo FS over the internet? Or an alternative?

    - by SeniorShizzle
    I have a lot of files. A HUGE Aperture library, lots of design work from Photoshop, and a rather large iTunes library too. I really want to get a Drobo FS, the networked one, to store all of my stuff on, so that I can get to it from my MacBook Air (which obviously with its minuscule 64GB drive can't hold my Aperture library by itself) and my iMac which is my main powerhouse. The dealbreaker for me is that I NEED to be able to access my Aperture library, and especially my iTunes library from across the internet. I understand that it will probably be slow and everything, but there's nothing else I can do besides hauling around a huge hard drive with me. So, is there any way I can somehow share my Drobo across the internet, on a VPN or something? My other alternative is to upload everything to my web host, FatCow, which offers unlimited disk space (something I hope to make them regret) and then access it using Expandrive. My only thoughts with this are that with the Drobo, any work that I do locally will be much snappier than if I have to work everything off the cloud. Any suggestions about alternatives would also be welcome.

    Read the article

  • Need help trying to diagnose Symmetrix SAN performance issues

    - by arcain
    I am helping to benchmark hardware for a new SQL Server instance, and the volume presented to the OS for the data files is carved from a set of spindles on a Symmetrix SAN. The server has yet to have SQL Server installed, so the only activity on the box is our benchmarking. Now, our storage engineers say that this volume and its resources are dedicated to our new server (I don't have access to see the actual SAN config); however, the performance benchmarks are troubling. For example, the numbers look good until suddenly, and randomly, we see wait times of 100 seconds in our IO benchmarking tool, and disk queue lengths of 255 in perfmon. This SAN has an 8 GB cache, plus there are other applications besides ours that use the SAN. I'm wondering whether (even though the spindles for our volumes should be dedicated to us) the cache may be getting hammered during the performance testing, or perhaps the spindles our volumes are on aren't really dedicated to us. We're not getting much traction from our storage engineers in helping us track down the problem, so if anybody has experience diagnosing a problem like this and would like to share insights and troubleshooting methodologies, I'd appreciate it.

    Read the article

  • Free software for backing up an attached network drive

    - by Richard
    My wireless router has a USB connector which allows me to plug in an external hard drive and have it act as Network Attached Storage. The problem is that I want to back up this hard drive to the external drive of another computer, so that if the NAS drive fails I don't lose everything. However, Windows 7 Backup refuses to include the NAS as a location to back up, and I can't fool it by mapping the NAS to a drive letter either. Google presents lots of pages on how to back up files to a NAS, but not the other way around. Can anyone advise me on free software which can do incremental backups of a NAS drive to an external drive attached to the computer it is running on? I'm aware of this question, but the top answers have one or more of the following issues:
    - They aren't free.
    - The free version cannot back up a NAS.
    - They cannot do incremental backups.
    - They're just a script and therefore have limited other functionality (e.g. disk space management, scheduling, compression, etc.).
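    As a point of reference only: the built-in robocopy can mirror a NAS share to a local drive, copying just the files that changed on each run, though it keeps no version history; the share and folder paths below are placeholders:

        rem mirror the NAS share to a local folder (schedule this with Task Scheduler)
        robocopy \\ROUTER\share E:\NAS-Backup /MIR /FFT /Z /R:2 /W:5 /LOG+:E:\robocopy.log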

    Read the article
