Search Results

Search found 6200 results on 248 pages for 'recovery partition'.

  • How to clone or copy running Windows 7 to a child partition

    - by saad
    Is there any way to clone one partition to another in Windows 7, for free, using some kind of command-line tool that lets me set the block size to increase speed? I googled and found some tools like dd for Windows and dcfldd, but when I use them I get errors like "access denied" and "permission denied". I tried to log in as administrator using:

        net user administrator on

    but it's the same problem. This fails:

        dcfldd bs=4096 if=\\.\k: of=\\.\m:

    while it works when creating an image file:

        dcfldd bs=4096 if=\\.\k: of=\\.\M:\filename.ext

    Some help on this would be appreciated, thanks.
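    Two separate restrictions are likely in play here, which would explain why writing an image file works but writing to the volume fails: \\.\ device access requires an elevated process, and since Vista, Windows also refuses raw writes to the sectors of a volume that has a mounted filesystem. A minimal sketch of the first check, assuming the dd-style dcfldd Windows port and K:/M: as source and target volumes:

        rem from cmd.exe started via right-click -> "Run as administrator":
        dcfldd bs=4096 if=\\.\K: of=\\.\M:

    If the elevated run still fails with access denied, the mounted-volume write protection is the remaining suspect; running from a live environment (or any setup where M: carries no mounted filesystem) sidesteps it.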

  • Unencrypted Image of TrueCrypt-Encrypted System Partition

    - by Dexter
    The general tenor around the internet seems to be that you can't create images of system partitions that have been encrypted with TrueCrypt other than with dd or similar sector-by-sector copy tools. These files, however, are very impractical given their size (and are obviously incompressible), which makes keeping multiple states/backups of your system partition rather expensive (especially considering current HDD prices). The problem is that backup tools (like Acronis True Image, Clonezilla, etc.) won't give you the option to create an image of (mounted/opened) TrueCrypt partitions, or there is no recovery environment for restoring the backup that would allow running TrueCrypt before doing any actual restoring.

    After some trial and error, however, I believe I have found a very simple way. Since TrueCrypt (running in Linux) creates a virtual block device that it uses for mounting the unencrypted partitions into the file system, partclone can be used for creating/restoring images. What I did:

    1. Boot up a Linux live disk.
    2. Mount/open the drive/device/partition in TrueCrypt.
    3. Unmount the filesystem mount point again, like so: umount /media/truecryptX ("X" being the partition number assigned by TrueCrypt).
    4. Use partclone (this is what Clonezilla would do too, except that Clonezilla only offers to back up real drive partitions, not virtual block devices):

        partclone.ntfs -c -s /dev/mapper/truecryptX -o nameOfBackupFile

    For restoring, steps 1-3 remain the same, and step 4 becomes:

        partclone.ntfs -r -s nameOfBackupFile -o /dev/mapper/truecryptX

    A backup and test-restore of the system with this method seems to have worked fine (and the changed settings were reverted to the backup state). The backup file is ~40 GB (and compressible down to <8 GB with 7zip/LZMA2 on the "fast" setting). I can't quite believe that I'm the only one who wants to create images of encrypted drives but doesn't want to waste 100 GB on the backup of one single system state. So my question now is: given how simple this was, and that no one seems to mention anywhere that this is possible - did I miss something, or did I do something wrong? Is there any situation I didn't think of where this method will fail?

    Obviously, the backup file needs to be stored in some other encrypted place in order to remain confidential, since it is unencrypted. Also, in order to do a full "bare metal" restore, one would have to actually first (re-)install Windows, encrypt it, and only then restore the backup file. The funny thing, however, is that you won't need to back up any partition tables, etc., since the reinstall will effectively take care of that. Is there anything else? This is IMHO still a lot better than having sector-by-sector images.
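    For reference, the compression step mentioned above could be scripted with p7zip; the archive name is illustrative, and -mx=3 selects the "fast" preset described:

        # compress the partclone image with LZMA2 on the fast preset
        7z a -t7z -m0=lzma2 -mx=3 sysbackup.7z nameOfBackupFile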

  • Unable to delete all partitions on a flash drive using Windows 7?

    - by irrational John
    Recently I purchased an ADATA C802 8GB flash drive. Since the drive was new, I decided to run some of the HD Tune Pro (v4.50) performance tests on it, mostly just for the heck of it. To avoid accidentally destroying data, HD Tune refuses to write to a drive unless there are no partitions on it. If you do attempt to write to a drive with partitions, it posts the message "Writing is disabled. To enable writing please remove all partitions."

    As you would expect, the ADATA came formatted with a single primary FAT32 partition in the Master Boot Record. But a number of unexpected things happened when I attempted to delete that partition. The first thing I tried was the Windows 7 (64-bit) Disk Management tool (diskmgmt.msc). It would not let me: the context menu choice to delete that volume was not available. Next I opened a command prompt window with admin authority and ran diskpart. Diskpart deleted the volume for me. However, when I attempted to run an HD Tune write test on the drive, I still got the "Writing is disabled" message. Huh?

    So I fired up a utility I have which allows viewing drives at the sector level and verified that the partition table in the Master Boot Record was empty. No partitions. Yet HD Tune still thought there were partitions on the drive? So why was I still getting the "Writing is disabled" message from HD Tune Pro? And why wouldn't the Windows 7 Disk Management tool let me change the partitions on this drive?

    After doing the above, I plugged the ADATA into my MacBook. I was then able to format it as either a GPT or MBR partitioned drive with no problems. I am not looking for suggestions on how to format this drive; I can do that. What I do not understand, and was hoping I might get insight into, is why this drive behaves so strangely under Windows 7. And BTW, what's up with HD Tune Pro?

    BTW, if I plug the drive I formatted on my MacBook back into my Windows 7 64-bit system, I still run into roadblocks with the Disk Management tool. For example, I cannot delete all the GPT partitions on the ADATA so I can convert it into an MBR drive. I followed Microsoft's instructions; the instructions just do not work with this ADATA flash drive. Anyone know what's up with this? It makes no sense to me. Has something changed since Windows 7 (Vista)?
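    For what it's worth, the usual way to strip every partition structure (MBR or GPT alike) from a removable drive on Windows 7 is diskpart's clean command. A sketch - the disk number is a placeholder and must be verified against the list disk output before cleaning:

        C:\> diskpart
        DISKPART> list disk
        DISKPART> select disk 2    (pick the entry matching the 8 GB flash drive!)
        DISKPART> clean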

  • Disadvantages of not having a swap partition

    - by Bo Tian
    I recently installed Ubuntu 10.04 on my laptop. Due to the space constraints of the SSD, I did not set up a swap partition for the OS, and I have 1.5 GB of RAM. There was a warning during installation, but I think it's not a big deal, since everything went smoothly. In the long term, would there be any drawbacks to not having a swap partition?

  • Different files on shared partition?

    - by Matt Robertson
    I am dual-booting Windows 8 and Ubuntu 12.04. My partition scheme looks like this:

        /dev/sda1 - Windows 8 (ntfs)
        /dev/sda2 - Ubuntu / (ext4)
        /dev/sda3 - Ubuntu home (ext4)
        /dev/sda5 - swap
        /dev/sda6 - Shared data partition (exfat)

    (First off, yes, I do have the exfat libraries installed on Ubuntu.)

    I created some PNG images in Windows and saved them on my shared partition. From Ubuntu, I edited the images in GIMP and saved them, replacing the ones on the shared partition. When I boot into Windows, the files appear unchanged - exactly like they did before I edited them from Ubuntu. I even added a folder and deleted some other files, but none of these changes exist in Windows. When I boot into Ubuntu, all of the changes are still there. It is as if Windows is caching the old file structure... How is this possible? Thanks in advance.

    Edit -- command output:

    lsblk:

        NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
        sda      8:0    0 465.8G  0 disk
        +-sda1   8:1    0 165.1G  0 part
        +-sda2   8:2    0  21.3G  0 part /
        +-sda3   8:3    0  98.9G  0 part /home
        +-sda4   8:4    0     1K  0 part
        +-sda5   8:5    0   7.8G  0 part [SWAP]
        +-sda6   8:6    0 172.7G  0 part /mnt/shared_data

    /etc/fstab:

        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # /dev/sda2
        UUID=8f700f65-b5c7-4afc-a6fb-8f9271e0fb5e / ext4 errors=remount-ro 0 1
        # /dev/sda3
        UUID=f0d688b7-22bd-4fa7-bc1b-a594af2933fa /home ext4 defaults 0 2
        # /dev/sda5
        UUID=3bc2399b-5deb-4f04-924b-d4fc77491997 none swap sw 0 0
        # /dev/sda6
        UUID=F2DE-BC47 /mnt/shared_data exfat defaults 0 3

    /etc/mtab:

        /dev/sda2 / ext4 rw,errors=remount-ro 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        /dev/sda3 /home ext4 rw 0 0
        /dev/sda6 /mnt/shared_data fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/matt/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matt 0 0

  • Convert partition to Virtual Disk image

    - by Rick
    I have a 250 GB HD with an XP partition. I resized the XP partition to 112 GB, since the maximum disk Virtual PC can load is 127 GB. I have a new motherboard and can't boot into the partition, so I am using Windows 7. I have tried using WinImage to create the image, but it creates an image of the whole disk (250 GB) and will not load in Virtual PC because of the size limit. What would be the best way to convert the partition to a VHD correctly?

  • Read NTFS partition on RHEL 5.8

    - by Alex Farber
    I have RHEL 5.8 64-bit, and an NTFS partition on the same disk. How can I get access to this partition? This answer ("Unable to mount NTFS drive with RHEL 6") doesn't work for me:

        [root@localhost alex]# rpm -Uvh http://download.fedora.redhat.com/pub/epel/6/i386/epel-release-6-5.noarch.rpm
        Retrieving http://download.fedora.redhat.com/pub/epel/6/i386/epel-release-6-5.noarch.rpm
        error: skipping http://download.fedora.redhat.com/pub/epel/6/i386/epel-release-6-5.noarch.rpm - transfer failed - Unknown or unexpected error
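    Worth noting: that URL points at the EPEL 6 repository, while this machine runs RHEL 5, so the matching package would come from EPEL 5 instead. A sketch of the usual ntfs-3g route, assuming the EPEL 5 repository has been enabled, with the device and mount point as placeholders:

        yum install ntfs-3g                       # packaged in EPEL for RHEL 5
        mkdir -p /mnt/windows
        mount -t ntfs-3g /dev/sda3 /mnt/windows   # substitute the actual NTFS partition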

  • Delete data from a SQL Server database on a full partition

    - by aleroot
    I have a SQL Server 2005 database on a dedicated partition. Over time the database has grown, and now it occupies all the space on the partition. The problem is that the only operation I can still do on the database is detach, but I want to remove old data from some tables to save space. How can I remove old data from the database if the SQL Server interface won't let me run queries against it?

  • LVM extend... not sure of the filesystem

    - by Dan
    I would like to extend my LVM partition. First I did:

        lvextend -L +100G /dev/server/home

    Now I still have to extend the filesystem. The tutorials tell me to use resize2fs, but that only works for ext2 and ext3. I'm not even sure what filesystem I have... fdisk /dev/server/home doesn't work... How do I find out what kind of filesystem I have on my LVM partition?
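    A quick sketch of how to identify the filesystem on a logical volume - and note that resize2fs in fact handles ext4 as well as ext2/ext3:

        df -T /home                     # shows the type of a mounted filesystem
        sudo blkid /dev/server/home     # works whether or not it is mounted
        # an ext3/ext4 volume can then usually be grown online with:
        sudo resize2fs /dev/server/home

    If blkid reports something else (xfs, reiserfs, ...), the matching grow tool differs (e.g. xfs_growfs for XFS).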

  • How to verify /boot partition on encrypted LVM setup

    - by ml43
    Isn't an unencrypted /boot partition a weakness of an encrypted LVM setup? An attacker could install malware on the /boot partition so that it sniffs the encryption password the next time the system boots. This could even be done by malware installed in Windows on a dual-boot system, without any physical access. Am I missing some protection scheme, or is there at least a way to verify that the /boot contents didn't change since the last system shutdown?
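    As far as I know there is no built-in verification in a stock encrypted-LVM install, but a manual check is easy to sketch: store checksums of /boot on the encrypted root, where an attacker without the passphrase cannot adjust them, and compare after booting. File locations are illustrative:

        # before shutdown: record checksums inside the encrypted root
        find /boot -type f -print0 | sort -z | xargs -0 sha256sum > /root/boot.sha256
        # after the next boot: verify that nothing under /boot changed
        sha256sum -c /root/boot.sha256

    The caveat is that an already-tampered kernel could lie about the comparison, so running the check from trusted external media is the stronger variant.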

  • Optimizing a Windows Server 2003 storage capacity

    - by Hosni
    I have a Windows Server 2003 machine with a hard drive partitioned into 10 GB and 80 GB parts, and I want to improve the storage capacity, as the small 10 GB partition is almost full. So I have the choice between repartitioning the hard drive into equal parts, or setting up a new hard drive with better storage capacity, knowing that the server has to be back in service as soon as possible. Which would be the better solution, taking less time and carrying less risk?

  • Can't use Bootcamp partition for Windows 8 installation

    - by Hedge
    I'm trying to install Windows 8 with Boot Camp on my MacBook Pro. Sadly, it won't let me get past the partition-selection step (even after formatting the Boot Camp partition). It says (freely translated):

    "Windows can't be installed on this storage device. The selected hard disk contains an MBR partition table. On EFI systems, Windows can only be installed on GPT disks."

    What is going wrong here? Here's a photo:

  • Migrating a VMware Fusion Boot Camp Partition to a New Mac

    - by 107217170653252726124
    What is the best way for me to migrate my VMware Fusion Boot Camp partition to a new Mac? Preferably I would like to import the current Boot Camp partition as a purely virtual machine on the new Mac, instead of into Boot Camp. The older Mac is about two years old, so I don't think there will be any compatibility issues, but I do not have enough disk space on the old machine to import it as a virtual machine and then migrate it. Thanks for the help.

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same - too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don't hear about so many working on their log files.

    How can a log file get fragmented? I'm glad you asked. When you create a database, there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that's off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a 'wrap-around' method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

    Let's see how many VLFs we have in a brand new database:

        USE master
        GO
        CREATE DATABASE VLF_Test
        ON
        ( NAME = VLF_Test,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
          SIZE = 100,
          MAXSIZE = 500,
          FILEGROWTH = 50 )
        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5MB,
          MAXSIZE = 250MB,
          FILEGROWTH = 5MB );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    The results of this are, firstly, that a new database is created with the specified file sizes, and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let's first note that there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB each.

    Let's alter the CREATE DATABASE script to create a log file that's a bit bigger and see what happens. Alter the code above so that the log file details are replaced by:

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 1GB,
          MAXSIZE = 25GB,
          FILEGROWTH = 1GB );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5GB,
          MAXSIZE = 250GB,
          FILEGROWTH = 5GB );

    This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria - what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic:

    - If the file growth is lower than 64MB, then 4 VLFs are created
    - If the growth is between 64MB and 1GB, then 8 VLFs are created
    - If the growth is greater than 1GB, then 16 VLFs are created

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6MB; let's add some data and see what happens:

        USE vlf_test
        GO
        -- we need somewhere to put the data, so a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
          DROP TABLE A_Table
        GO
        CREATE TABLE A_Table
        ( Col_A int IDENTITY,
          Col_B CHAR(8000) )
        GO
        -- Let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO
        -- We can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b)
        SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Let's insert more rows
        INSERT A_Table (col_b)
        SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it's well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Creating the database with larger files, and with larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress, so that you don't affect system users. The steps are:

    1. Back up the log:
           BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
    2. Shrink the log file to its smallest possible size:
           DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
    3. Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year:
           ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *

    * If you don't know the file name of your log file, run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while.

    This is already detailed far better than I can explain it by Kimberly Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above.

    Knowing when VLFs are being created

    By complete coincidence, while I have been writing this blog (it's been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as this is going to catch any sneaky auto grows that take place and let you know about them right away.

    Hassle-free monitoring of VLFs

    If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

    Resources

    - MSDN - http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
    - Kimberly Tripp from SQLSkills.com - http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
    - Thomas LaRock at Simple-Talk.com - http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure

    I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

  • How do I recover a BTRFS filesystem with "parent transid verify failed" errors?

    - by Evan P.
    I've got an external USB disk running btrfs. I use it for backups, and each time I do a backup I take a snapshot. However, it's giving me this error now:

        parent transid verify failed on 109973766144 wanted 1823 found 1821
        parent transid verify failed on 109973766144 wanted 1823 found 1821

    Obviously, this is a non-critical disk, but I have a few files on here that aren't available elsewhere. Is there any way to recover data from this disk? Maybe by mounting one of the snapshots as root?
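    One avenue worth sketching (not a guaranteed fix): btrfs-progs ships an offline restore tool that copies files out of an unmountable filesystem without writing to it, and kernels of this era also accept a recovery mount option that falls back to an older tree root. The device and destination paths are placeholders:

        # read-only salvage: copy whatever is recoverable onto another disk
        btrfs restore /dev/sdb1 /mnt/salvage
        # or try mounting read-only using the backup tree roots
        mount -o ro,recovery /dev/sdb1 /mnt/usb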

  • sudo dhclient eth0 | sudo: unable to resolve host ubuntu

    - by Merianos Nikos
    I have a computer of a friend of mine that runs Ubuntu (I don't know what version, due to the current system status). While he was updating the kernel, he rebooted the computer (yes, that can happen!!; anyway). Currently I am trying to recover the system by using a live USB with Ubuntu installed on it. What I am doing is the following: Update Failure

    The problem is that when I try to execute the fifth step, I get an error because I do not have Internet access. The computer is properly wired to my router, and I have Internet access everywhere apart from the shell (this message, for example, is sent via the live USB), but I cannot access the Internet via the shell. In my shell I try to use this command:

        sudo dhclient eth0

    but the result of this command is the following message:

        sudo: unable to resolve host ubuntu

    My hosts file has the following content:

        127.0.0.1 localhost
        127.0.1.1 ubuntu

        # The following lines are desirable for IPv6 capable hosts
        ::1 ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        ff02::3 ip6-allhosts

    How can I get connected to the Internet in order to download the appropriate updates?

    UPDATE 1: I just noticed that when I execute ifconfig, I get the following warning: Warning: cannot open /proc/net/dev (No such file or directory). Limited output.

    UPDATE 2: I just found that, and it looks like it solves the problem with the dhclient eth0 command, but I still cannot ping Google.

    UPDATE 3: Now sudo dhclient eth0 returns the following message: RTNETLINK answers: File exists

    UPDATE 4: I just pinged my router and I get a response, so it looks like I cannot ping outside the router (i.e. Google).

    Kind regards...
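    The "cannot open /proc/net/dev" warning suggests these commands are running inside a chroot of the installed system without the virtual filesystems bound into it, which also breaks sudo's hostname lookup. A sketch of the usual preparation from the live session - the root partition device is a placeholder:

        sudo mount /dev/sda1 /mnt                        # the installed system's root
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
        sudo cp /etc/resolv.conf /mnt/etc/resolv.conf    # reuse the live session's DNS
        sudo chroot /mnt

    With those binds in place, networking done by the live session carries over into the chroot, so apt-get inside it can usually reach the mirrors without running dhclient at all.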

  • Cluster Nodes as RAID Drives

    - by BuckWoody
    I'm unable to sleep tonight, so I thought I would push this post out VERY early. When you don't sleep, your mind takes interesting turns, which can be a good thing.

    I was watching a briefing today by a couple of friends as they were talking about various ways to arrange a Windows Server cluster for SQL Server. I often see an "active" node of a cluster with a "passive" node backing it up. That means one node is working and accepting transactions, and the other is not doing any work but simply "standing by", waiting for the first to fail over.

    The configuration in the demonstration I saw was a bit different. In this example, there were three nodes that were actively working, and a fourth standing by for all three. I've put configurations like this one into place before, but as I was looking at their architecture diagram, it looked familiar - it looked like a RAID drive setup! And that's not a bad way to think about your cluster arrangements. The same concerns you might think about for a particular RAID configuration provide a good way to think about protecting your systems in general.

    So even if you're not staying awake all night thinking about SQL Server clusters, take this post as an opportunity for "lateral thinking" - a way of combining in your mind the concepts from one piece of knowledge to another. You might find a new way of making your technical environment a little better.

  • RAID 5 mdadm Problem - Help Please

    - by user66260
    My RAID 5 array (4x 1TB WD10EARS SATA disks) was showing as degraded. I looked and one of the disks wasn't installed, so I re-added it with the mdadm add command. The array is now showing as a (null) array, but can't be mounted. If I run:

        root@warren-P5K-E:/home/warren# sudo mdadm --misc --detail /dev/md0

    I get:

        mdadm: cannot open /dev/md0: No such file or directory

    and running:

        root@warren-P5K-E:/home/warren# cat /proc/mdstat

    gives me:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        unused devices: <none>

    The data is very important.

        root@warren-P5K-E:/home/warren# mdadm --examine /dev/sda
        /dev/sda:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 00000000:00000000:00000000:00000000
          Creation Time : Sat May 26 12:08:14 2012
             Raid Level : -unknown-
           Raid Devices : 0
          Total Devices : 4
        Preferred Minor : 0
            Update Time : Sat May 26 12:08:40 2012
                  State : active
         Active Devices : 0
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 4
               Checksum : 82d5b792 - correct
                 Events : 1

            Number   Major   Minor   RaidDevice State
        this     1       8        0        1      spare   /dev/sda
           0     0       8       16        0      spare   /dev/sdb
           1     1       8        0        1      spare   /dev/sda
           2     2       8       32        2      spare   /dev/sdc
           3     3       8       48        3      spare   /dev/sdd

    The output of mdadm --examine for /dev/sdb, /dev/sdc and /dev/sdd is identical apart from the checksum (82d5b7a0, 82d5b7b4 and 82d5b7c6 respectively, all "correct") and the "this" line, which shows each drive in its own slot (sdb = 0, sda = 1, sdc = 2, sdd = 3), all marked as spares. That is on the 4 drives.
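    Those superblocks - zeroed UUID, a raid level of -unknown- and every member demoted to spare - suggest the metadata was overwritten rather than one disk simply failing. Before anything that writes to the disks, the usual first step is a forced, non-destructive assemble attempt; a sketch:

        mdadm --stop /dev/md0        # release the disks if a half-built array holds them
        mdadm --assemble --force /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd
        cat /proc/mdstat             # see whether the array came back

    If that fails, recreating the array over the same disks with --assume-clean in the original member order is the commonly documented last resort, but it overwrites the remaining metadata and should only be tried after imaging the drives.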

  • Troubleshoot broken ZFS

    - by BBK
    I have one zpool called tank, in RAIDZ1, with 5x 1TB SATA HDDs. I'm using Ubuntu Server 11.10 Oneiric, kernel 3.0.0-15-server, with ZFS installed from the PPA, and I'm also using zfs-auto-snapshot. When the zfs module is loaded into the kernel, the ZFS file system hangs my computer. Before this started, I created a few new file systems:

        zfs create -V 10G tank/iscsi1
        zfs create -V 10G tank/iscsi2
        zfs create -V 10G tank/iscsi3

    I shared them through iSCSI by the /dev/tank/iscsiX path, and my computer started hanging sometimes when I used tank/iscsiX over iSCSI; I do not know why exactly. I switched off iSCSI and started to remove these file systems:

        zfs destroy tank/iscsi3

    Since I'm using zfs-auto-snapshot, I had snapshots, and without the -r key my command would not destroy the FS. So I issued the next command:

        zfs destroy -r tank/iscsi3

    The tank/iscsi3 FS was clean and contained nothing - it was destroyed without an issue. But tank/iscsi2 and tank/iscsi1 contained a lot of information. I tried:

        zfs destroy -r tank/iscsi2

    After some time my computer hung. I rebooted. It didn't boot very fast; the HDDs started working like crazy, making a lot of noise, and after 15 minutes they quieted down and the OS booted at last. All seemed to be OK - tank/iscsi2 was destroyed. Once the file systems in the tank were accessible again, zpool status showed no corruption. I issued a new command:

        zfs destroy -r tank/iscsi1

    The situation repeated itself - after some time my computer hung. But this time ZFS seems not to have healed itself. After the computer is switched on, it starts to work - loading scripts and kernel modules - and once ZFS starts working, it hangs the machine. I need to recover the other ZFS file systems lying in the same zpool. A few months ago I backed up the OS to a flash drive. Booting from the backed-up OS and importing has the same result - the OS starts hanging. How do I recover my data from the ZFS tank?
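    If the hang happens while the pool import resumes the interrupted destroy, one commonly suggested avenue - assuming the installed ZFS-on-Linux build supports it - is importing the pool read-only from a rescue boot, so the pending destroy is never replayed, and then copying the data off:

        zpool import -o readonly=on -f tank    # read-only import, nothing gets replayed
        zfs list -r tank                       # confirm the surviving filesystems
        # then copy/rsync the data to another disk before any read-write import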

  • How can one unlock a fully encrypted Ubuntu 11.10 system over SSH at boot?

    - by Jeff
    In previous versions of Ubuntu, and current versions of Debian, you can unlock a fully encrypted system (using dm-crypt and LUKS) at boot time over SSH. It was as easy as:

    1. Installing the encrypted system using the Ubuntu alternate installer disk or the normal Debian installer disk, and choosing to encrypt the system.
    2. After the system is installed, adding the dropbear and busybox packages.
    3. Updating the initramfs to authorize your SSH key.

    At boot time, you'd just ssh to the machine and do:

        echo -ne "keyphrase" > /lib/cryptsetup/passfifo

    The machine would then unlock and boot the encrypted system. Using the exact same steps on Ubuntu 11.10, I can ssh to the machine, but /lib/cryptsetup/passfifo doesn't exist. There appears to be no way to unlock the system over ssh. I'm not sure where to look to see if this functionality changed or whether it was removed.
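    For step 3, key authorization on the Debian-era setups looked roughly like the following; the paths are from memory and may differ between releases, so treat them as assumptions:

        # append the admin machine's public key where the dropbear initramfs hook looks
        cat id_rsa.pub | sudo tee -a /etc/initramfs-tools/root/.ssh/authorized_keys
        sudo update-initramfs -u    # rebuild the initramfs so the key is baked in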
