Search Results

Search found 159 results on 7 pages for 'lv'.

  • LVM mirror attempt results in "Insufficient free space"

    - by MattK
    Attempting to add a disk to mirror an LVM volume on CentOS 7 always fails with "Insufficient free space: 1 extents needed, but only 0 available". Having searched for a solution, I have tried specifying target disks, multiple logging options, and adding a third partition for the log, but have not found a fix. I am not sure whether I am making a rookie mistake or something more subtle is wrong (I am more familiar with ZFS and new to using LVM):

      # lvconvert -m1 centos_bi/home
      Insufficient free space: 1 extents needed, but only 0 available
      # lvconvert -m1 --corelog centos_bi/home
      Insufficient free space: 1 extents needed, but only 0 available
      # lvconvert -m1 --corelog --alloc anywhere centos_bi/home
      Insufficient free space: 1 extents needed, but only 0 available
      # lvconvert -m1 --mirrorlog mirrored --alloc anywhere centos_bi/home /dev/sda2
      Insufficient free space: 1 extents needed, but only 0 available
      # lvconvert -m1 --corelog --alloc anywhere centos_bi/home /dev/sdi2 /dev/sda2
      Insufficient free space: 1 extents needed, but only 0 available

    The two disks are of the same size and have identical partition layouts (copied via "sfdisk -d /dev/sdi > part_table; sfdisk /dev/sda < part_table"). The current configuration is detailed below.

      # pvs
        PV         VG        Fmt  Attr PSize   PFree
        /dev/sda1  centos_bi lvm2 a--  496.00m 496.00m
        /dev/sda2  centos_bi lvm2 a--  465.27g 465.27g
        /dev/sdi2  centos_bi lvm2 a--  465.27g      0
      # vgs
        VG        #PV #LV #SN Attr   VSize   VFree
        centos_bi   3   3   0 wz--n- 931.02g 465.75g
      # lvs -a -o +devices
        LV   VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert Devices
        home centos_bi -wi-ao---- 391.64g                                              /dev/sdi2(6050)
        root centos_bi -wi-ao----  50.00g                                              /dev/sdi2(106309)
        swap centos_bi -wi-ao----  23.63g                                              /dev/sdi2(0)
      # pvdisplay
        --- Physical volume ---
        PV Name               /dev/sdi2
        VG Name               centos_bi
        PV Size               465.27 GiB / not usable 3.00 MiB
        Allocatable           yes (but full)
        PE Size               4.00 MiB
        Total PE              119109
        Free PE               0
        Allocated PE          119109

        --- Physical volume ---
        PV Name               /dev/sda2
        VG Name               centos_bi
        PV Size               465.27 GiB / not usable 3.00 MiB
        Allocatable           yes
        PE Size               4.00 MiB
        Total PE              119109
        Free PE               119109
        Allocated PE          0

        --- Physical volume ---
        PV Name               /dev/sda1
        VG Name               centos_bi
        PV Size               500.00 MiB / not usable 4.00 MiB
        Allocatable           yes
        PE Size               4.00 MiB
        Total PE              124
        Free PE               124
        Allocated PE          0
      # vgdisplay
        --- Volume group ---
        VG Name               centos_bi
        System ID
        Format                lvm2
        Metadata Areas        3
        Metadata Sequence No  10
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                3
        Open LV               3
        Max PV                0
        Cur PV                3
        Act PV                3
        VG Size               931.02 GiB
        PE Size               4.00 MiB
        Total PE              238342
        Alloc PE / Size       119109 / 465.27 GiB
        Free  PE / Size       119233 / 465.75 GiB
      # lvdisplay
        --- Logical volume ---
        LV Path                /dev/centos_bi/swap
        LV Name                swap
        VG Name                centos_bi
        LV Write Access        read/write
        LV Creation host, time localhost, 2014-08-07 16:34:34 -0400
        LV Status              available
        # open                 2
        LV Size                23.63 GiB
        Current LE             6050
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           253:1

        --- Logical volume ---
        LV Path                /dev/centos_bi/home
        LV Name                home
        VG Name                centos_bi
        LV Write Access        read/write
        LV Creation host, time localhost, 2014-08-07 16:34:35 -0400
        LV Status              available
        # open                 1
        LV Size                391.64 GiB
        Current LE             100259
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           253:2

        --- Logical volume ---
        LV Path                /dev/centos_bi/root
        LV Name                root
        VG Name                centos_bi
        LV Write Access        read/write
        LV Creation host, time localhost, 2014-08-07 16:34:37 -0400
        LV Status              available
        # open                 1
        LV Size                50.00 GiB
        Current LE             12800
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           253:0
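    One avenue suggested by the outputs above, sketched here rather than offered as a confirmed fix: lvconvert places the mirror image and its on-disk log on different physical volumes by default, and /dev/sda1 (496 MiB, entirely free) is available to hold the one-extent log while /dev/sda2 holds the new image. Naming both PVs explicitly keeps the allocator from trying to fit everything on one device:

      # Check per-PV free extents first
      pvs -o pv_name,pv_free,pv_pe_count
      # Place the mirror image on sda2 and leave sda1 available for the log
      lvconvert -m1 --mirrorlog disk centos_bi/home /dev/sda2 /dev/sda1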

  • Deleting duplicates in Delphi listview

    - by radick
    I am trying to remove duplicates in my listview. This function:

      procedure RemoveDuplicates(const LV: TbsSkinListView);
      var
        i, j: Integer;
      begin
        LV.Items.BeginUpdate;
        LV.SortType := stText;
        try
          for i := 0 to LV.Items.Count-1 do
          begin
            for j := i+1 to LV.Items.Count-1 do
            begin
              if SameText(LV.Items[i].SubItems[0], LV.Items[j].SubItems[0]) and
                 SameText(LV.Items[i].SubItems[1], LV.Items[j].SubItems[1]) and
                 SameText(LV.Items[i].SubItems[2], LV.Items[j].SubItems[2]) and
                 SameText(LV.Items[i].SubItems[3], LV.Items[j].SubItems[3]) then
                LV.Items.Delete(j);
            end;
          end;
        finally
          LV.SortType := stNone;
          LV.Items.EndUpdate;
        end;
        ShowMessage('Deleted');
      end;

    does not delete the duplicates. What is wrong with it?

  • Removing vg and lv after physical drive has been removed

    - by Rene
    We had a disk fail and replaced it, but forgot to run vgreduce first. The drive was in its own VG with two now-empty LVs, and these cause LVM to complain every time any command is run, e.g.:

      # lvscan
        /dev/vg04/swap: read failed after 0 of 4096 at 4294901760: Input/output error
        /dev/vg04/swap: read failed after 0 of 4096 at 4294959104: Input/output error
        /dev/vg04/swap: read failed after 0 of 4096 at 0: Input/output error
        /dev/vg04/swap: read failed after 0 of 4096 at 4096: Input/output error
        /dev/vg04/vz: read failed after 0 of 4096 at 995903864832: Input/output error
        /dev/vg04/vz: read failed after 0 of 4096 at 995903922176: Input/output error
        /dev/vg04/vz: read failed after 0 of 4096 at 0: Input/output error
        /dev/vg04/vz: read failed after 0 of 4096 at 4096: Input/output error

    The two LVs are not important; I just want to stop this error from being displayed.
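    The usual cleanup when the underlying disk is gone and the LVs are expendable looks roughly like this (a sketch, assuming vg04 contains nothing worth keeping):

      # Deactivate the stale LVs so device-mapper stops probing the dead disk
      vgchange -an vg04
      # Drop the missing PV; --force also removes the partial LVs on it
      vgreduce --removemissing --force vg04
      # Or, if the whole VG is disposable, remove it outright
      vgremove -f vg04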

  • Issues with LVM partition size in Server 13.04

    - by Michael
    I am new to Ubuntu and a little confused about how hard drive partitions and LVM work. I remember setting up Ubuntu Server 13.04 and telling it to use 1TB of a 3TB server. Well, I have maxed that out with Blu-ray rips and want the rest of the drive as usable space. On log-in it says:

      System load:  2.24               Processes:           179
      Usage of /:   88.7% of 912.89GB  Users logged in:     0
      Memory usage: 6%                 IP address for p5p1: 192.168.0.100
      Swap usage:   0%

      => / is using 88.7% of 912.89GB

    lvdisplay outputs:

      --- Logical volume ---
      LV Path                /dev/DeathStar-vg/root
      LV Name                root
      VG Name                DeathStar-vg
      LV Write Access        read/write
      LV Creation host, time DeathStar, 2013-05-18 22:21:11 -0400
      LV Status              available
      # open                 1
      LV Size                2.70 TiB
      Current LE             707789
      Segments               2
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:0

      --- Logical volume ---
      LV Path                /dev/DeathStar-vg/swap_1
      LV Name                swap_1
      VG Name                DeathStar-vg
      LV Write Access        read/write
      LV Creation host, time DeathStar, 2013-05-18 22:21:11 -0400
      LV Status              available
      # open                 2
      LV Size                3.75 GiB
      Current LE             959
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:1

    vgdisplay outputs:

      VG Name               DeathStar-vg
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  4
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                2
      Open LV               2
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               2.73 TiB
      PE Size               4.00 MiB
      Total PE              715335
      Alloc PE / Size       708748 / 2.70 TiB
      Free  PE / Size       6587 / 25.73 GiB

    df outputs:

      Filesystem                     1K-blocks      Used Available Use% Mounted on
      /dev/mapper/DeathStar--vg-root 957238932 848972636  59634696  94% /
      none                                   4         0         4   0% /sys/fs/cgroup
      udev                             1864716         4   1864712   1% /dev
      tmpfs                             374968      1060    373908   1% /run
      none                                5120         4      5116   1% /run/lock
      none                             1874824       148   1874676   1% /run/shm
      none                              102400        24    102376   1% /run/user
      /dev/sda2                         234153     56477    165184  26% /boot

    And fdisk /dev/sda -l outputs:

      Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
      255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Device Boot      Start         End      Blocks   Id  System
      /dev/sda1            1  4294967295  2147483647+  ee  GPT
      Partition 1 does not start on physical sector boundary.

    I just don't know what to make of all this and am not sure how I can make it use all 2.73TB. Thanks in advance for any help.

    EDIT -- Yes, I did make changes to the LVM config, but it didn't do anything. As requested, output of parted -l:

      Model: ATA WDC WD30EFRX-68A (scsi)
      Disk /dev/sda: 3001GB
      Sector size (logical/physical): 512B/4096B
      Partition Table: gpt

      Number  Start   End     Size    File system  Name  Flags
       1      1049kB  2097kB  1049kB                     bios_grub
       2      2097kB  258MB   256MB   ext2
       3      258MB   3001GB  3000GB                     lvm

      Model: ATA WDC WD30EFRX-68A (scsi)
      Disk /dev/sdb: 3001GB
      Sector size (logical/physical): 512B/4096B
      Partition Table: msdos

      Number  Start  End  Size  Type  File system  Flags

      Model: Linux device-mapper (linear) (dm)
      Disk /dev/mapper/DeathStar--vg-swap_1: 4022MB
      Sector size (logical/physical): 512B/4096B
      Partition Table: loop

      Number  Start  End     Size    File system     Flags
       1      0.00B  4022MB  4022MB  linux-swap(v1)

      Model: Linux device-mapper (linear) (dm)
      Disk /dev/mapper/DeathStar--vg-root: 2969GB
      Sector size (logical/physical): 512B/4096B
      Partition Table: loop

      Number  Start  End     Size    File system  Flags
       1      0.00B  2969GB  2969GB  ext4
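    Reading the output above: the root LV already spans 2.70 TiB (per lvdisplay), while the ext4 filesystem on it is still about 912 GB (per df). So the step that appears to be missing is growing the filesystem itself, which ext4 supports online. A plausible next step, assuming root really is ext4 as parted reports:

      # Grow ext4 to fill the already-extended logical volume (works while mounted)
      sudo resize2fs /dev/mapper/DeathStar--vg-root
      df -h /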

  • Extending an ext4 partition on Debian 7.0 on vSphere

    - by VoidPointer
    I grew a thin-provisioned disk to 15GB after the original 8GB proved insufficient. Now the Debian guest does not recognize the change of size.

      root@debian7-x64:~# lvdisplay
      --- Logical volume ---
      LV Path                /dev/debian7-x64/root
      LV Name                root
      VG Name                debian7-x64
      LV UUID                EU6mg0-XTXC-ci3D-bQJi-7XN6-r8Hp-SYxcj0
      LV Write Access        read/write
      LV Creation host, time debian7-x64, 2013-06-25 12:02:49 +0530
      LV Status              available
      # open                 1
      LV Size                7.39 GiB
      Current LE             1892
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           254:0

      --- Logical volume ---
      LV Path                /dev/debian7-x64/swap_1
      LV Name                swap_1
      VG Name                debian7-x64
      LV UUID                xDNtoz-tJUq-M5D6-GGCN-gzcD-fwUv-fYYDR1
      LV Write Access        read/write
      LV Creation host, time debian7-x64, 2013-06-25 12:02:49 +0530
      LV Status              available
      # open                 2
      LV Size                376.00 MiB
      Current LE             94
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           254:1

      root@debian7-x64:~# pvdisplay
      --- Physical volume ---
      PV Name               /dev/sda5
      VG Name               debian7-x64
      PV Size               7.76 GiB / not usable 2.00 MiB
      Allocatable           yes (but full)
      PE Size               4.00 MiB
      Total PE              1986
      Free PE               0
      Allocated PE          1986
      PV UUID               SehkzH-Gq8Y-jI2f-27Tb-uv1Z-tR1R-5OnTxR

      root@debian7-x64:~# sfdisk -s
      /dev/sda: 15728640
      /dev/mapper/debian7--x64-root: 7749632
      /dev/mapper/debian7--x64-swap_1: 385024
      total: 23863296 blocks

    Help me extend this partition. Rebooting is not a problem, but I don't have any live CD. Environment: Debian 7 with LVM, on vSphere, ext4 partition. I can provide more details if needed.
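    The usual sequence on a VMware guest is sketched below; it is untested against this exact layout, and the delicate step is that /dev/sda5 sits inside an extended partition, so both sda2 and sda5 have to be grown in the partition table before LVM can see the space:

      # 1. Make the kernel notice the grown virtual disk
      echo 1 > /sys/class/block/sda/device/rescan
      # 2. Grow the extended partition (sda2) and logical partition (sda5)
      #    to the new end of disk with fdisk/parted, then reboot or re-read
      #    the partition table
      # 3. Grow the PV, then the LV together with its filesystem (-r runs resize2fs)
      pvresize /dev/sda5
      lvextend -r -l +100%FREE /dev/debian7-x64/root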

  • Getting an error when mounting LVM snapshot

    - by Sandra
    I have migrated a file-based Xen guest to LVM using

      dd bs=1M if=/dev/zero of=/dev/vg00/vm10
      qemu-img convert ~/vm10.qcow2 -O raw /dev/vg00/vm10

    and changed the Xen domain file for the VM to use the LV instead of the old file. The VM boots up, and now, on the Xen host, I would like to make a snapshot of the running VM.

      # lvcreate --size 10G --snapshot --name vm10-snapshot /dev/vg00/vm10
        Logical volume "vm10-snapshot" created
      # mount /dev/vg00/vm10-snapshot /mnt/snapshot/
      mount: you must specify the filesystem type
      # dmesg |tail
      EXT3 FS on dm-3, internal journal
      EXT3-fs: mounted filesystem with ordered data mode.
      hfs: unable to find HFS+ superblock
      VFS: Can't find ext3 filesystem on dev dm-4.
      hfs: unable to find HFS+ superblock
      hfs: unable to find HFS+ superblock
      VFS: Can't find ext3 filesystem on dev dm-2.
      hfs: unable to find HFS+ superblock
      hfs: unable to find HFS+ superblock
      hfs: unable to find HFS+ superblock

    For some reason it can't see that it is an EXT3 filesystem. I have also tried to mount with -t ext3, but it still didn't mount.

      # lvdisplay
      --- Logical volume ---
      LV Name                /dev/vg00/vm10
      VG Name                vg00
      LV UUID                I1y1vQ-Bac5-5jwW-melh-TY5h-l9NO-qaelKk
      LV Write Access        read/write
      LV snapshot status     source of /dev/vg00/vm10-snapshot [active]
      LV Status              available
      # open                 2
      LV Size                8.00 GB
      Current LE             2048
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:2

      --- Logical volume ---
      LV Name                /dev/vg00/vm10-snapshot
      VG Name                vg00
      LV UUID                GWsOx3-TPpr-GW64-uiMz-u1YN-QU4h-l0Kala
      LV Write Access        read/write
      LV snapshot status     active destination for /dev/vg00/vm10
      LV Status              available
      # open                 0
      LV Size                8.00 GB
      Current LE             2048
      COW-table size         10.00 GB
      COW-table LE           2560
      Allocated to snapshot  0.00%
      Snapshot chunk size    4.00 KB
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:4

    What could the problem be?
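    One likely explanation, offered as a guess rather than a recorded answer: the qcow2 image held a whole virtual disk (partition table plus filesystems), not a bare ext3 filesystem, so the snapshot LV has no superblock at offset 0. Mapping the partitions inside the LV would then expose something mountable:

      # Create device-mapper entries for partitions found inside the snapshot LV
      kpartx -av /dev/vg00/vm10-snapshot
      # A partition device appears under /dev/mapper (the exact name varies);
      # check `ls /dev/mapper` and mount the first partition, e.g.:
      mount /dev/mapper/vg00-vm10--snapshot1 /mnt/snapshot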

  • Extended a partition with GParted on Linux, but no more space in the VM

    - by Asken
    I have a VM test installation of Linux running a build server. Unfortunately I just pressed OK when adding the disk and ended up with an 8GB drive to play with. Well into the test, the builds are consuming more and more space, of course. The VM drive was resized to 21GB, and using GParted I expanded the drive partitions; that all worked fine, but when I go back into the console and run df there is still only 8GB available. How can I claim the other 13GB I added?

      fdisk -l

      Disk /dev/sda: 21.0 GB, 20971520000 bytes
      255 heads, 63 sectors/track, 2549 cylinders, total 40960000 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0006d284

      Device Boot      Start       End    Blocks  Id  System
      /dev/sda1   *     2048    499711    248832  83  Linux
      /dev/sda2       501758  40959999  20229121   5  Extended
      /dev/sda5       501760  40959999  20229120  8e  Linux LVM

      vgdisplay

      --- Volume group ---
      VG Name               ct
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  4
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                2
      Open LV               2
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               19.29 GiB
      PE Size               4.00 MiB
      Total PE              4938
      Alloc PE / Size       1977 / 7.72 GiB
      Free  PE / Size       2961 / 11.57 GiB
      VG UUID               MwiMAz-52e1-iGVf-eL4f-P5lq-FvRA-L73Sl3

      lvdisplay

      --- Logical volume ---
      LV Name                /dev/ct/root
      VG Name                ct
      LV UUID                Rfk9fh-kqdM-q7t5-ml6i-EjE8-nMtU-usBF0m
      LV Write Access        read/write
      LV Status              available
      # open                 1
      LV Size                5.73 GiB
      Current LE             1466
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:0

      --- Logical volume ---
      LV Name                /dev/ct/swap_1
      VG Name                ct
      LV UUID                BLFaa6-1f5T-4MM0-5goV-1aur-nzl9-sNLXIs
      LV Write Access        read/write
      LV Status              available
      # open                 2
      LV Size                2.00 GiB
      Current LE             511
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:1
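    GParted grew the partition, and the vgdisplay above shows the VG already has 11.57 GiB free, so the missing pieces are extending the LV and then the filesystem. A sketch of the remaining steps, assuming root is ext3/ext4:

      # Give all free extents in the VG to the root LV
      lvextend -l +100%FREE /dev/ct/root
      # Grow the filesystem into the enlarged LV; df should then show ~19G
      resize2fs /dev/ct/root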

  • LVM / Device Mapper maps wrong device

    - by DaDaDom
    Hi, I run an LVM setup on a raid1 created by mdadm. md2 is based on sda6 (major:minor 8:6) and sdb6 (8:22). md2 is partition 9:2. The VG on top of md2 has 4 LVs: var, home, usr, tmp. First the problem: while booting, it seems as if the device mapper takes the wrong partition for the mapping! Immediately after boot the information looks like this:

      ~# dmsetup table
      systemlvm-home: 0 4194304 linear 8:22 384
      systemlvm-home: 4194304 16777216 linear 8:22 69206400
      systemlvm-home: 20971520 8388608 linear 8:22 119538048
      systemlvm-home: 29360128 6291456 linear 8:22 243270016
      systemlvm-tmp: 0 2097152 linear 8:22 41943424
      systemlvm-usr: 0 10485760 linear 8:22 20971904
      systemlvm-var: 0 10485760 linear 8:22 10486144
      systemlvm-var: 10485760 6291456 linear 8:22 4194688
      systemlvm-var: 16777216 4194304 linear 8:22 44040576
      systemlvm-var: 20971520 10485760 linear 8:22 31457664
      systemlvm-var: 31457280 20971520 linear 8:22 48234880
      systemlvm-var: 52428800 33554432 linear 8:22 85983616
      systemlvm-var: 85983232 115343360 linear 8:22 127926656

      ~# cat /proc/mdstat
      Personalities : [raid1]
      md2 : active (auto-read-only) raid1 sda6[0]
            151798080 blocks [2/1] [U_]
      md0 : active raid1 sda1[0] sdb1[1]
            96256 blocks [2/2] [UU]
      md1 : active raid1 sda2[0] sdb2[1]
            2931776 blocks [2/2] [UU]

    I have to manually "lvchange -an" all LVs, add /dev/sdb6 back to the raid and reactivate the LVs, and then all is fine. But it prevents me from automounting the partitions and obviously leads to a bunch of other problems. If everything works fine, the information looks like this:

      ~$ cat /proc/mdstat
      Personalities : [raid1]
      md2 : active raid1 sdb6[1] sda6[0]
            151798080 blocks [2/2] [UU]
      ...

      ~# dmsetup table
      systemlvm-home: 0 4194304 linear 9:2 384
      systemlvm-home: 4194304 16777216 linear 9:2 69206400
      systemlvm-home: 20971520 8388608 linear 9:2 119538048
      systemlvm-home: 29360128 6291456 linear 9:2 243270016
      systemlvm-tmp: 0 2097152 linear 9:2 41943424
      systemlvm-usr: 0 10485760 linear 9:2 20971904
      systemlvm-var: 0 10485760 linear 9:2 10486144
      systemlvm-var: 10485760 6291456 linear 9:2 4194688
      systemlvm-var: 16777216 4194304 linear 9:2 44040576
      systemlvm-var: 20971520 10485760 linear 9:2 31457664
      systemlvm-var: 31457280 20971520 linear 9:2 48234880
      systemlvm-var: 52428800 33554432 linear 9:2 85983616
      systemlvm-var: 85983232 115343360 linear 9:2 127926656

    I think that LVM for some reason just "takes" /dev/sdb6, which is then missing in the raid. I tried almost all options in lvm.conf, but none seems to work. Below is some more information, like config files. Does anyone have any idea what is going on here and how to prevent it? If you need any additional information, please let me know. Thanks in advance!
    Dominik

    The information (off a "repaired" system):

      ~# cat /etc/debian_version
      5.0.4
      ~# uname -a
      Linux kermit 2.6.26-2-686 #1 SMP Wed Feb 10 08:59:21 UTC 2010 i686 GNU/Linux
      ~# lvm version
        LVM version:     2.02.39 (2008-06-27)
        Library version: 1.02.27 (2008-06-25)
        Driver version:  4.13.0
      ~# cat /etc/mdadm/mdadm.conf
      DEVICE partitions
      ARRAY /dev/md1 level=raid1 num-devices=2 metadata=00.90 UUID=11e9dc6c:1da99f3f:b3088ca6:c6fe60e9
      ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=92ed1e4b:897361d3:070682b3:3baa4fa1
      ARRAY /dev/md2 level=raid1 num-devices=2 metadata=00.90 UUID=601d4642:39dc80d7:96e8bbac:649924ba
      ~# mount
      /dev/md1 on / type ext3 (rw,errors=remount-ro)
      tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
      proc on /proc type proc (rw,noexec,nosuid,nodev)
      sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
      procbususb on /proc/bus/usb type usbfs (rw)
      udev on /dev type tmpfs (rw,mode=0755)
      tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
      devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
      /dev/md0 on /boot type ext3 (rw)
      /dev/mapper/systemlvm-usr on /usr type reiserfs (rw)
      /dev/mapper/systemlvm-tmp on /tmp type reiserfs (rw)
      /dev/mapper/systemlvm-home on /home type reiserfs (rw)
      /dev/mapper/systemlvm-var on /var type reiserfs (rw)
      ~# grep -v ^$ /etc/lvm/lvm.conf | grep -v "#"
      devices {
          dir = "/dev"
          scan = [ "/dev" ]
          preferred_names = [ ]
          filter = [ "a|/dev/md.*|", "r/.*/" ]
          cache_dir = "/etc/lvm/cache"
          cache_file_prefix = ""
          write_cache_state = 1
          sysfs_scan = 1
          md_component_detection = 1
          ignore_suspended_devices = 0
      }
      log {
          verbose = 0
          syslog = 1
          overwrite = 0
          level = 0
          indent = 1
          command_names = 0
          prefix = "  "
      }
      backup {
          backup = 1
          backup_dir = "/etc/lvm/backup"
          archive = 1
          archive_dir = "/etc/lvm/archive"
          retain_min = 10
          retain_days = 30
      }
      shell {
          history_size = 100
      }
      global {
          umask = 077
          test = 0
          units = "h"
          activation = 1
          proc = "/proc"
          locking_type = 1
          fallback_to_clustered_locking = 1
          fallback_to_local_locking = 1
          locking_dir = "/lib/init/rw"
      }
      activation {
          missing_stripe_filler = "/dev/ioerror"
          reserved_stack = 256
          reserved_memory = 8192
          process_priority = -18
          mirror_region_size = 512
          readahead = "auto"
          mirror_log_fault_policy = "allocate"
          mirror_device_fault_policy = "remove"
      }
      :~# vgscan -vvv
      Processing: vgscan -vvv
      O_DIRECT will be used
      Setting global/locking_type to 1
      File-based locking selected.
      Setting global/locking_dir to /lib/init/rw
      Locking /lib/init/rw/P_global WB
      Wiping cache of LVM-capable devices
      /dev/block/1:0: Added to device cache
      /dev/block/1:1: Added to device cache
      /dev/block/1:10: Added to device cache
      /dev/block/1:11: Added to device cache
      /dev/block/1:12: Added to device cache
      /dev/block/1:13: Added to device cache
      /dev/block/1:14: Added to device cache
      /dev/block/1:15: Added to device cache
      /dev/block/1:2: Added to device cache
      /dev/block/1:3: Added to device cache
      /dev/block/1:4: Added to device cache
      /dev/block/1:5: Added to device cache
      /dev/block/1:6: Added to device cache
      /dev/block/1:7: Added to device cache
      /dev/block/1:8: Added to device cache
      /dev/block/1:9: Added to device cache
      /dev/block/253:0: Added to device cache
      /dev/block/253:1: Added to device cache
      /dev/block/253:2: Added to device cache
      /dev/block/253:3: Added to device cache
      /dev/block/8:0: Added to device cache
      /dev/block/8:1: Added to device cache
      /dev/block/8:16: Added to device cache
      /dev/block/8:17: Added to device cache
      /dev/block/8:18: Added to device cache
      /dev/block/8:19: Added to device cache
      /dev/block/8:2: Added to device cache
      /dev/block/8:21: Added to device cache
      /dev/block/8:22: Added to device cache
      /dev/block/8:3: Added to device cache
      /dev/block/8:5: Added to device cache
      /dev/block/8:6: Added to device cache
      /dev/block/9:0: Already in device cache
      /dev/block/9:1: Already in device cache
      /dev/block/9:2: Already in device cache
      /dev/bsg/0:0:0:0: Not a block device
      /dev/bsg/1:0:0:0: Not a block device
      /dev/bus/usb/001/001: Not a block device
      [... many more "not a block device"]
      /dev/core: Not a block device
      /dev/cpu_dma_latency: Not a block device
      /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895: Aliased to /dev/block/8:16 in device cache
      /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part1: Aliased to /dev/block/8:17 in device cache
      /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part2: Aliased to /dev/block/8:18 in device cache
      /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part3: Aliased to /dev/block/8:19 in device cache
      /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part5: Aliased to /dev/block/8:21 in device cache
      /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part6: Aliased to /dev/block/8:22 in device cache
      /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800: Aliased to /dev/block/8:0 in device cache
      /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part1: Aliased to /dev/block/8:1 in device cache
      /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part2: Aliased to /dev/block/8:2 in device cache
      /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part3: Aliased to /dev/block/8:3 in device cache
      /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part5: Aliased to /dev/block/8:5 in device cache
      /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part6: Aliased to /dev/block/8:6 in device cache
      /dev/disk/by-id/dm-name-systemlvm-home: Aliased to /dev/block/253:2 in device cache
      /dev/disk/by-id/dm-name-systemlvm-tmp: Aliased to /dev/block/253:3 in device cache
      /dev/disk/by-id/dm-name-systemlvm-usr: Aliased to /dev/block/253:1 in device cache
      /dev/disk/by-id/dm-name-systemlvm-var: Aliased to /dev/block/253:0 in device cache
      /dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr25N7CRZpUMzR18NfS6zeSeAVnVT98LuU: Aliased to /dev/block/253:0 in device cache
      /dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr3TpFXtLjYGEwn79IdXsSCZPl8AxmqbmQ: Aliased to /dev/block/253:1 in device cache
      /dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvrc5MJ4KolevMjt85PPBrQuRTkXbx6NvTi: Aliased to /dev/block/253:3 in device cache
      /dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvrYXrfdg5OSYDVkNeiQeQksgCI849Z2hx8: Aliased to /dev/block/253:2 in device cache
      /dev/disk/by-id/md-uuid-11e9dc6c:1da99f3f:b3088ca6:c6fe60e9: Already in device cache
      /dev/disk/by-id/md-uuid-601d4642:39dc80d7:96e8bbac:649924ba: Already in device cache
      /dev/disk/by-id/md-uuid-92ed1e4b:897361d3:070682b3:3baa4fa1: Already in device cache
      /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895: Aliased to /dev/block/8:16 in device cache
      /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part1: Aliased to /dev/block/8:17 in device cache
      /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part2: Aliased to /dev/block/8:18 in device cache
      /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part3: Aliased to /dev/block/8:19 in device cache
      /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part5: Aliased to /dev/block/8:21 in device cache
      /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part6: Aliased to /dev/block/8:22 in device cache
      /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800: Aliased to /dev/block/8:0 in device cache
      /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part1: Aliased to /dev/block/8:1 in device cache
      /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part2: Aliased to /dev/block/8:2 in device cache
      /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part3: Aliased to /dev/block/8:3 in device cache
      /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part5: Aliased to /dev/block/8:5 in device cache
      /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part6: Aliased to /dev/block/8:6 in device cache
      /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0: Aliased to /dev/block/8:0 in device cache
      /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part1: Aliased to /dev/block/8:1 in device cache
      /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part2: Aliased to /dev/block/8:2 in device cache
      /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part3: Aliased to /dev/block/8:3 in device cache
      /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part5: Aliased to /dev/block/8:5 in device cache
      /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part6: Aliased to /dev/block/8:6 in device cache
      /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0: Aliased to /dev/block/8:16 in device cache
      /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part1: Aliased to /dev/block/8:17 in device cache
      /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part2: Aliased to /dev/block/8:18 in device cache
      /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part3: Aliased to /dev/block/8:19 in device cache
      /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part5: Aliased to /dev/block/8:21 in device cache
      /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part6: Aliased to /dev/block/8:22 in device cache
      /dev/disk/by-uuid/13c1262b-e06f-40ce-b088-ce410640a6dc: Aliased to /dev/block/253:3 in device cache
      /dev/disk/by-uuid/379f57b0-2e03-414c-808a-f76160617336: Aliased to /dev/block/253:2 in device cache
      /dev/disk/by-uuid/4fb2d6d3-bd51-48d3-95ee-8e404faf243d: Already in device cache
      /dev/disk/by-uuid/5c6728ec-82c1-49c0-93c5-f6dbd5c0d659: Aliased to /dev/block/8:5 in device cache
      /dev/disk/by-uuid/a13cdfcd-2191-4185-a727-ffefaf7a382e: Aliased to /dev/block/253:1 in device cache
      /dev/disk/by-uuid/e0d5893d-ff88-412f-b753-9e3e9af3242d: Aliased to /dev/block/8:21 in device cache
      /dev/disk/by-uuid/e79c9da6-8533-4e55-93ec-208876671edc: Aliased to /dev/block/253:0 in device cache
      /dev/disk/by-uuid/f3f176f5-12f7-4af8-952a-c6ac43a6e332: Already in device cache
      /dev/dm-0: Aliased to /dev/block/253:0 in device cache (preferred name)
      /dev/dm-1: Aliased to /dev/block/253:1 in device cache (preferred name)
      /dev/dm-2: Aliased to /dev/block/253:2 in device cache (preferred name)
      /dev/dm-3: Aliased to /dev/block/253:3 in device cache (preferred name)
      /dev/fd: Symbolic link to directory
      /dev/full: Not a block device
      /dev/hpet: Not a block device
      /dev/initctl: Not a block device
      /dev/input/by-path/platform-i8042-serio-0-event-kbd: Not a block device
      /dev/input/event0: Not a block device
      /dev/input/mice: Not a block device
      /dev/kmem: Not a block device
      /dev/kmsg: Not a block device
      /dev/log: Not a block device
      /dev/loop/0: Added to device cache
      /dev/MAKEDEV: Not a block device
      /dev/mapper/control: Not a block device
      /dev/mapper/systemlvm-home: Aliased to /dev/dm-2 in device cache
      /dev/mapper/systemlvm-tmp: Aliased to /dev/dm-3 in device cache
      /dev/mapper/systemlvm-usr: Aliased to /dev/dm-1 in device cache
      /dev/mapper/systemlvm-var: Aliased to /dev/dm-0 in device cache
      /dev/md0: Already in device cache
      /dev/md1: Already in device cache
      /dev/md2: Already in device cache
      /dev/mem: Not a block device
      /dev/net/tun: Not a block device
      /dev/network_latency: Not a block device
      /dev/network_throughput: Not a block device
      /dev/null: Not a block device
      /dev/port: Not a block device
      /dev/ppp: Not a block device
      /dev/psaux: Not a block device
      /dev/ptmx: Not a block device
      /dev/pts/0: Not a block device
      /dev/ram0: Aliased to /dev/block/1:0 in device cache (preferred name)
      /dev/ram1: Aliased to /dev/block/1:1 in device cache (preferred name)
      /dev/ram10: Aliased to /dev/block/1:10 in device cache (preferred name)
      /dev/ram11: Aliased to /dev/block/1:11 in device cache (preferred name)
      /dev/ram12: Aliased to /dev/block/1:12 in device cache (preferred name)
      /dev/ram13: Aliased to /dev/block/1:13 in device cache (preferred name)
      /dev/ram14: Aliased to /dev/block/1:14 in device cache (preferred name)
      /dev/ram15: Aliased to /dev/block/1:15 in device cache (preferred name)
      /dev/ram2: Aliased to /dev/block/1:2 in device cache (preferred name)
      /dev/ram3: Aliased to /dev/block/1:3 in device cache (preferred name)
      /dev/ram4: Aliased to /dev/block/1:4 in device cache (preferred name)
      /dev/ram5: Aliased to /dev/block/1:5 in device cache (preferred name)
      /dev/ram6: Aliased to /dev/block/1:6 in device cache (preferred name)
      /dev/ram7: Aliased to /dev/block/1:7 in device cache (preferred name)
      /dev/ram8: Aliased to /dev/block/1:8 in device cache (preferred name)
      /dev/ram9: Aliased to /dev/block/1:9 in device cache (preferred name)
      /dev/random: Not a block device
      /dev/root: Already in device cache
      /dev/rtc: Not a block device
      /dev/rtc0: Not a block device
      /dev/sda: Aliased to /dev/block/8:0 in device cache (preferred name)
      /dev/sda1: Aliased to /dev/block/8:1 in device cache (preferred name)
      /dev/sda2: Aliased to /dev/block/8:2 in device cache (preferred name)
      /dev/sda3: Aliased to /dev/block/8:3 in device cache (preferred name)
      /dev/sda5: Aliased to /dev/block/8:5 in device cache (preferred name)
      /dev/sda6: Aliased to /dev/block/8:6 in device cache (preferred name)
      /dev/sdb: Aliased to /dev/block/8:16 in device cache (preferred name)
      /dev/sdb1: Aliased to /dev/block/8:17 in device cache (preferred name)
      /dev/sdb2: Aliased to /dev/block/8:18 in device cache (preferred name)
      /dev/sdb3: Aliased to /dev/block/8:19 in device cache (preferred name)
      /dev/sdb5: Aliased to /dev/block/8:21 in device cache (preferred name)
      /dev/sdb6: Aliased to /dev/block/8:22 in device cache (preferred name)
      /dev/shm/network/ifstate: Not a block device
      /dev/snapshot: Not a block device
      /dev/sndstat: stat failed: Datei oder Verzeichnis nicht gefunden (file or directory not found)
      /dev/stderr: Not a block device
      /dev/stdin: Not a block device
      /dev/stdout: Not a block device
      /dev/systemlvm/home: Aliased to /dev/dm-2 in device cache
      /dev/systemlvm/tmp: Aliased to /dev/dm-3 in device cache
      /dev/systemlvm/usr: Aliased to /dev/dm-1 in device cache
      /dev/systemlvm/var: Aliased to /dev/dm-0 in device cache
      /dev/tty: Not a block device
      /dev/tty0: Not a block device
      [... many more "not a block device"]
      /dev/vcsa6: Not a block device
      /dev/xconsole: Not a block device
      /dev/zero: Not a block device
      Wiping internal VG cache
      lvmcache: initialised VG #orphans_lvm1
      lvmcache: initialised VG #orphans_pool
      lvmcache: initialised VG #orphans_lvm2
      Reading all physical volumes. This may take a while...
      Finding all volume groups
      /dev/ram0: Skipping (regex)
      /dev/loop/0: Skipping (sysfs)
      /dev/sda: Skipping (regex)
      Opened /dev/md0 RO
      /dev/md0: size is 192512 sectors
      Closed /dev/md0
      /dev/md0: size is 192512 sectors
      Opened /dev/md0 RW O_DIRECT
      /dev/md0: block size is 1024 bytes
      Closed /dev/md0
      Using /dev/md0
      Opened /dev/md0 RW O_DIRECT
      /dev/md0: block size is 1024 bytes
      /dev/md0: No label detected
      Closed /dev/md0
      /dev/dm-0: Skipping (regex)
      /dev/ram1: Skipping (regex)
      /dev/sda1: Skipping (regex)
      Opened /dev/md1 RO
      /dev/md1: size is 5863552 sectors
      Closed /dev/md1
      /dev/md1: size is 5863552 sectors
      Opened /dev/md1 RW O_DIRECT
      /dev/md1: block size is 4096 bytes
      Closed /dev/md1
      Using /dev/md1
      Opened /dev/md1 RW O_DIRECT
      /dev/md1: block size is 4096 bytes
      /dev/md1: No label detected
      Closed /dev/md1
      /dev/dm-1: Skipping (regex)
      /dev/ram2: Skipping (regex)
      /dev/sda2: Skipping (regex)
      Opened /dev/md2 RO
      /dev/md2: size is 303596160 sectors
      Closed /dev/md2
      /dev/md2: size is 303596160 sectors
      Opened /dev/md2 RW O_DIRECT
      /dev/md2: block size is 4096 bytes
      Closed /dev/md2
      Using /dev/md2
      Opened /dev/md2 RW O_DIRECT
      /dev/md2: block size is 4096 bytes
      /dev/md2: lvm2 label detected
      lvmcache: /dev/md2: now in VG #orphans_lvm2 (#orphans_lvm2)
      /dev/md2: Found metadata at 39936 size 2632 (in area at 2048 size 194560) for systemlvm (rL8Oq2-dA7o-eRYe-u1or-JA7U-fnb1-kjOyvr)
      lvmcache: /dev/md2: now in VG systemlvm with 1 mdas
      lvmcache: /dev/md2: setting systemlvm VGID to rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr
      lvmcache: /dev/md2: VG systemlvm: Set creation host to rescue.
      Closed /dev/md2
      /dev/dm-2: Skipping (regex)
      /dev/ram3: Skipping (regex)
      /dev/sda3: Skipping (regex)
      /dev/dm-3: Skipping (regex)
      /dev/ram4: Skipping (regex)
      /dev/ram5: Skipping (regex)
      /dev/sda5: Skipping (regex)
      /dev/ram6: Skipping (regex)
      /dev/sda6: Skipping (regex)
      /dev/ram7: Skipping (regex)
      /dev/ram8: Skipping (regex)
      /dev/ram9: Skipping (regex)
      /dev/ram10: Skipping (regex)
      /dev/ram11: Skipping (regex)
      /dev/ram12: Skipping (regex)
      /dev/ram13: Skipping (regex)
      /dev/ram14: Skipping (regex)
      /dev/ram15: Skipping (regex)
      /dev/sdb: Skipping (regex)
      /dev/sdb1: Skipping (regex)
      /dev/sdb2: Skipping (regex)
      /dev/sdb3: Skipping (regex)
      /dev/sdb5: Skipping (regex)
      /dev/sdb6: Skipping (regex)
      Locking /lib/init/rw/V_systemlvm RB
      Finding volume group "systemlvm"
      Opened /dev/md2 RW O_DIRECT
      /dev/md2: block size is 4096 bytes
      /dev/md2: lvm2 label detected
      lvmcache: /dev/md2: now in VG #orphans_lvm2 (#orphans_lvm2) with 1 mdas
      /dev/md2: Found metadata at 39936 size 2632 (in area at 2048 size 194560) for systemlvm (rL8Oq2-dA7o-eRYe-u1or-JA7U-fnb1-kjOyvr)
      lvmcache: /dev/md2: now in VG systemlvm with 1 mdas
      lvmcache: /dev/md2: setting systemlvm VGID to rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr
      lvmcache: /dev/md2: VG systemlvm: Set creation host to rescue.
      Using cached label for /dev/md2
      Read systemlvm metadata (19) from /dev/md2 at 39936 size 2632
      /dev/md2 0:      0     16: home(0:0)
      /dev/md2 1:     16     24: var(40:0)
      /dev/md2 2:     40     40: var(0:0)
      /dev/md2 3:     80     40: usr(0:0)
      /dev/md2 4:    120     40: var(80:0)
      /dev/md2 5:    160      8: tmp(0:0)
      /dev/md2 6:    168     16: var(64:0)
      /dev/md2 7:    184     80: var(120:0)
      /dev/md2 8:    264     64: home(16:0)
      /dev/md2 9:    328    128: var(200:0)
      /dev/md2 10:   456     32: home(80:0)
      /dev/md2 11:   488    440: var(328:0)
      /dev/md2 12:   928     24: home(112:0)
      /dev/md2 13:   952    206: NULL(0:0)
      Found volume group "systemlvm" using metadata type lvm2
      Read volume group systemlvm from /etc/lvm/backup/systemlvm
      Unlocking /lib/init/rw/V_systemlvm
      Closed /dev/md2
      Unlocking /lib/init/rw/P_global

      ~# vgdisplay
      --- Volume group ---
      VG Name               systemlvm
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  19
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                4
      Open LV               4
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               144,75 GB
      PE Size               128,00 MB
      Total PE              1158
      Alloc PE / Size       952 / 119,00 GB
      Free  PE / Size       206 / 25,75 GB
      VG UUID               rL8Oq2-dA7o-eRYe-u1or-JA7U-fnb1-kjOyvr

      ~# pvdisplay
      --- Physical volume ---
      PV Name               /dev/md2
      VG Name               systemlvm
      PV Size               144,77 GB / not usable 16,31 MB
      Allocatable           yes
      PE Size (KByte)       131072
      Total PE              1158
      Free PE               206
      Allocated PE          952
      PV UUID               ZSAzP5-iBvr-L7jy-wB8T-AiWz-0g3m-HLK66Y

      :~# lvdisplay
      --- Logical volume ---
      LV Name                /dev/systemlvm/home
      VG Name                systemlvm
      LV UUID                YXrfdg-5OSY-DVkN-eiQe-Qksg-CI84-9Z2hx8
      LV Write Access        read/write
      LV Status              available
      # open                 2
      LV Size                17,00 GB
      Current LE             136
      Segments               4
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:2

      --- Logical volume ---
      LV Name                /dev/systemlvm/var
      VG Name                systemlvm
      LV UUID                25N7CR-ZpUM-zR18-NfS6-zeSe-AVnV-T98LuU
      LV Write Access        read/write
      LV Status              available
      # open                 2
      LV Size                96,00 GB
      Current LE             768
      Segments               7
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:0

      --- Logical volume ---
      LV Name                /dev/systemlvm/usr
      VG Name                systemlvm
      LV UUID                3TpFXt-LjYG-Ewn7-9IdX-sSCZ-Pl8A-xmqbmQ
      LV Write Access        read/write
      LV Status              available
      # open                 2
      LV Size                5,00 GB
      Current LE             40
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:1

      --- Logical volume ---
      LV Name                /dev/systemlvm/tmp
      VG Name                systemlvm
      LV UUID                c5MJ4K-olev-Mjt8-5PPB-rQuR-TkXb-x6NvTi
      LV Write Access        read/write
      LV Status              available
      # open                 2
      LV Size                1,00 GB
      Current LE             8
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:3
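    The lvm.conf filter quoted above already restricts scanning to /dev/md*, so one common cause of this symptom on Debian Lenny-era systems, suggested here rather than recorded as the answer, is that the initramfs still carries an older, unfiltered copy of lvm.conf and activates LVs directly on /dev/sdb6 before md2 finishes assembling. Regenerating the initramfs after editing the filter would rule that out:

      # Rebuild the initramfs so the boot-time copy of lvm.conf matches /etc/lvm/lvm.conf
      update-initramfs -u -k all
      # Verify the filter inside the generated image
      zcat /boot/initrd.img-2.6.26-2-686 | cpio -t | grep lvm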

  • Where on disk is space allocated for new files inside an LVM LV with an ext4 file system?

    - by Jost
    I run a multi-disk server with LVM2. Several large disks serve as LVM2 physical volumes for one volume group, containing one logical volume formatted with ext4. Nothing fancy, just your standard linear setup. Recently an additional, very small disk was added as a physical volume to that volume group, and I expanded both the logical volume and the ext4 file system therein onto that disk. This LV is used to store incremental backups using rsync and is only about 30% full; files have rarely been deleted from it, only incrementally written. Now this new HDD I added to the pre-existing volume group has unexpectedly died on me, and the volume group won't come up because it is missing one physical volume. As fate would have it, this WAS the "in the event of catastrophic failure on the primary server" backup, the event happened, the boss is not happy, so this kinda has to work... According to this (Part 3): http://www.novell.com/coolsolutions/appnote/19386.html it is possible to trick LVM into starting anyway by creating a new PV with identical metadata to the failed disk, which will make the volume accessible, but of course leave giant holes in the file system. I haven't tried it yet, because it involves repairing (writing to) the file system, which eliminates the possibility of trying other things if it fails. Now my questions:

    How does this setup actually allocate disk space for new data? Is it allocated linearly from beginning to end of the PVs, in the order they were added to the VG? Is it striped somehow in order to increase performance or balance load?

    Since this defective disk was added only later to an existing LVM2 VG and LV containing a half-empty ext4, what are the chances that there was never any data written to the defective disk? In other words: what are the chances of recovering all my data, even without the defective disk, by just starting the volume group as-is? Am I about to spend $1500 on having 250GB of empty space recovered when I send the defective disk in for repair?

    Is there a way to check without mounting the file system and opening the files, hoping they contain something other than zeros? (Comparing addresses of used data blocks inside ext4 to the address ranges that were on the missing PV, something like that, preferably easy to automate.) I know bitwise-copying the entire LV into an image file before trying to repair the ext4 would probably be a good idea, but since this LV is very large and I just suffered major file system failure on several systems, it is probably a luxury I don't have... Any suggestions?
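    One quick check along the lines the poster suggests, sketched as an idea rather than an answer: with the default linear allocation policy, LVM fills PVs segment by segment, and the segment-to-PV mapping can be listed without mounting anything, so it is possible to see whether any LV segment was ever placed on the dead PV:

      # Activate despite the missing PV (read-only inspection only)
      vgchange -ay --partial
      # Show which PV and extent ranges back each segment of each LV
      lvs -a -o +devices,seg_pe_ranges
      # The per-PV view of the same mapping
      pvs --segments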

  • Almost All Xenserver Logical Volumes Disappeared - Recovery?

    - by Alex
    We had a hard disk crash of one of two hard disks in a software RAID with LVM on top. The server is running Citrix XenServer. On the hard disk which is still intact, the volume group is detected fine, but only one LV is left. (Some hashes replaced by "x".)

      # lvdisplay
      --- Logical volume ---
      LV Name                /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT
      VG Name                VG_XenStorage-x-x-x-x-408b91acdcae
      LV UUID                x-x-x-x-x-x-vQmZ6C
      LV Write Access        read/write
      LV Status              available
      # open                 0
      LV Size                4.00 MiB
      Current LE             1
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:0

      root@rescue ~ # vgdisplay
      --- Volume group ---
      VG Name               VG_XenStorage-x-x-x-x-408b91acdcae
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  4
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               0
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               698.62 GiB
      PE Size               4.00 MiB
      Total PE              178848
      Alloc PE / Size       1 / 4.00 MiB
      Free  PE / Size       178847 / 698.62 GiB
      VG UUID               x-x-x-x-x-x-53w0kL

    I could understand if a full physical volume were lost - but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes?

    EDIT: We are in a rescue system. The problem is that the whole server does not boot (GRUB error 22). What we are trying to do is access the root filesystem, but everything was in the LVM. We have only this:

      (parted) print
      Model: ATA SAMSUNG HD753LJ (scsi)
      Disk /dev/sdb: 750GB
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos

      Number  Start   End    Size   Type     File system  Flags
       1      32.3kB  750GB  750GB  primary               boot, lvm

    And this 750GB LVM volume is exactly what we see on top.

    EDIT 2: Output of vgcfgrestore, but from the rescue system, as there is no root to chroot to:

      # vgcfgrestore --list VG_XenStorage-x-b4b0-x-x-408b91acdcae

      File:         /etc/lvm/archive/VG_XenStorage-x-x-x-x-408b91acdcae_00000.vg
      VG name:      VG_XenStorage-x-x-x-x-408b91acdcae
      Description:  Created *before* executing '/sbin/vgscan --ignorelockingfailure --mknodes'
      Backup Time:  Fri Jun 28 23:53:20 2013

      File:         /etc/lvm/backup/VG_XenStorage-x-x-x-x-408b91acdcae
      VG name:      VG_XenStorage-x-x-x-x-408b91acdcae
      Description:  Created *after* executing '/sbin/vgscan --ignorelockingfailure --mknodes'
      Backup Time:  Fri Jun 28 23:53:20 2013
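    Since vgcfgrestore already lists an archive taken before that vgscan run, one recovery avenue, sketched here and worth attempting only after imaging the surviving disk, is restoring the older metadata. This only rewrites LVM metadata, so if the data extents are intact the LV definitions should reappear:

      # Restore the pre-vgscan metadata archive (metadata only, data extents untouched)
      vgcfgrestore -f /etc/lvm/archive/VG_XenStorage-x-x-x-x-408b91acdcae_00000.vg \
          VG_XenStorage-x-x-x-x-408b91acdcae
      vgchange -ay VG_XenStorage-x-x-x-x-408b91acdcae
      lvscan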

  • Volume expanded in Volume Group, old disk reduced but still in use in system

    - by Yurij73
    I tried to add a new, unformatted hard disk (sdb) to my VirtualBox CentOS guest, and successfully extended the existing vg_localhost onto /dev/sdb.

      # lvdisplay
      --- Logical volume ---
      LV Path                /dev/vg_localhost/lv_root
      LV Name                lv_root
      VG Name                vg_localhost
      LV UUID                DkYX7D-DMud-vLaI-tfnz-xIJJ-VzHz-bRp3tO
      LV Write Access        read/write
      LV Creation host, time localhost.centos, 2012-12-17
      LV Status              available
      # open                 1
      LV Size                18,03 GiB
      Current LE             4615
      Segments               2
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:0

      # lsblk
      NAME                            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      sdb                               8:16   0   20G  0 disk
      +-vg_localhost-lv_root (dm-0)   253:0    0   18G  0 lvm  /
      +-vg_localhost-lv_swap (dm-1)   253:1    0    2G  0 lvm  [SWAP]
      sda                               8:0    0    9G  0 disk
      +-sda1                            8:1    0  500M  0 part /boot
      +-sda2                            8:2    0  8,5G  0 part
      sr0                              11:0    1 1024M  0 rom

      # df -h
      /dev/mapper/vg_localhost-lv_root  6,5G  6,2G  256M  97% /
      tmpfs                             499M  200K  499M   1% /dev/shm
      /dev/sda1                         485M   78M  382M  17% /boot

    The old sda is still in use - what do I have to do next?
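    Two things appear outstanding in the output above: the root filesystem (6.5G) has not been grown to match the 18G LV, and /dev/sda2 is still a PV in the VG. A sketch of the usual follow-up, assuming the root filesystem is ext4 and sda2 is to be retired:

      resize2fs /dev/vg_localhost/lv_root    # grow the fs into the enlarged LV
      pvmove /dev/sda2                       # migrate any remaining extents off sda2
      vgreduce vg_localhost /dev/sda2        # drop sda2 from the volume group
      pvremove /dev/sda2                     # wipe the PV label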

  • [C#] Startup List

    - by Power-Mosfet
    Hi, how do I get the full .exe path?

      ManagementClass mangnmt = new ManagementClass("Win32_StartupCommand");
      ManagementObjectCollection mcol = mangnmt.GetInstances();
      foreach (ManagementObject strt in mcol)
      {
          string[] lv = new String[4];
          lv[0] = strt["Caption"].ToString();
          lv[1] = strt["Location"].ToString();
          lv[2] = strt["Command"].ToString();
          lv[3] = strt["Description"].ToString();
          listView1.Items.Add(new ListViewItem(lv, 0));
      }

  • Is /boot safe on top of an LVM LV (logical volume)?

    - by fantoman
    The title already asks the question. More specifically, I read in some documents that logical volumes are nice in general but not for /boot on a Linux system. They say that bootloaders don't understand LVM volumes, so you should create a separate partition for /boot outside of LVM. I recently installed Ubuntu Server (9.10) for my home server, and by default /boot is created inside the LVM. Everything is fine now, but I am not sure it is safe to keep /boot in LVM. The second question is: do I really need a physical partition (volume, PV) for /boot, or is it equally fine to put it into a logical volume (LV) on top of a single shared volume group? Thanks in advance.

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume?

      mgorven@moab:~% sudo lvdisplay /dev/moab/backup
      --- Logical volume ---
      LV Name                /dev/moab/backup
      VG Name                moab
      LV UUID                nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5
      LV Write Access        read/write
      LV Status              available
      # open                 1
      LV Size                500.00 GiB
      Current LE             128000
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     2048
      Block device           252:3

      mgorven@moab:~% sudo cryptsetup status backup
      /dev/mapper/backup is active and is in use.
        type:    LUKS1
        cipher:  aes-cbc-essiv:sha256
        keysize: 256 bits
        device:  /dev/mapper/moab-backup
        offset:  3072 sectors
        size:    1048572928 sectors
        mode:    read/write

      mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup
      tune2fs 1.42 (29-Nov-2011)
      Filesystem volume name:   backup
      Last mounted on:          /srv/backup
      Filesystem UUID:          63877e0e-0549-4c73-8535-b7a81eb363ed
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
      Filesystem flags:         signed_directory_hash
      Default mount options:    (none)
      Filesystem state:         clean with errors
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              32768000
      Block count:              131071616
      Reserved block count:     0
      Free blocks:              112894078
      Free inodes:              32044830
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Reserved GDT blocks:      992
      Blocks per group:         32768
      Fragments per group:      32768
      Inodes per group:         8192
      Inode blocks per group:   512
      RAID stride:              128
      RAID stripe width:        128
      Flex block group size:    16
      Filesystem created:       Sun Mar 11 19:24:53 2012
      Last mount time:          Sat May 19 13:29:27 2012
      Last write time:          Fri Jun  1 11:07:22 2012
      Mount count:              0
      Maximum mount count:      100
      Last checked:             Fri Jun  1 11:03:50 2012
      Check interval:           31104000 (12 months)
      Next check after:         Mon May 27 11:03:50 2013
      Lifetime writes:          118 GB
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:               256
      Required extra isize:     28
      Desired extra isize:      28
      Journal inode:            8
      Default directory hash:   half_md4
      Directory Hash Seed:      383bcbc5-fde9-4720-b98e-2d6224713ecf
      Journal backup:           inode blocks
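    A shrink in this stack has to proceed top-down, and LUKS1 itself stores no payload size (the dm-crypt mapping is simply re-read from the device), so the delicate step is leaving margin before cutting the LV. A rough, untested sketch with deliberately conservative sizes, assuming the data fits well under 100GiB:

      umount /srv/backup
      e2fsck -f /dev/mapper/backup
      resize2fs /dev/mapper/backup 90G      # shrink ext4 well below the target first
      lvreduce -L 100G /dev/moab/backup     # cut the LV to its final size
      cryptsetup resize backup              # re-read the mapping to fill the smaller LV
      resize2fs /dev/mapper/backup          # grow ext4 back to fill the mapping
      e2fsck -f /dev/mapper/backup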

  • Alignment of an ext3 partition on an LVM RAID volume group

    - by John P
    I'm trying to add a partition on an LVM volume that resides on a RAID6 volume group, and fdisk is complaining that the partition does not reside on a physical sector boundary. My question is: how do you calculate the correct starting sector for a partition on an LVM volume? This partition will be formatted ext3. Would it be better to just format the LVM volume directly instead of creating a new partition?

      Disk /dev/dedvol/backup: 2199.0 GB, 2199023255552 bytes
      255 heads, 63 sectors/track, 267349 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 1048576 bytes / 8388608 bytes
      Disk identifier: 0x4e428f49

      Device Boot           Start       End       Blocks  Id  System
      /dev/dedvol/backup1      63    267349  2146982827+  83  Linux
      Partition 1 does not start on physical sector boundary.

      lvdisplay /dev/dedvol/backup
      --- Logical volume ---
      LV Name                /dev/dedvol/backup
      VG Name                dedvol
      LV UUID                OV2n5j-7LHb-exJL-t8dI-dU8A-2vxf-uIicCt
      LV Write Access        read/write
      LV Status              available
      # open                 0
      LV Size                2.00 TiB
      Current LE             524288
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     32768
      Block device           253:1

      vgdisplay dedvol
      --- Volume group ---
      VG Name               dedvol
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  3
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                2
      Open LV               1
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               14.55 TiB
      PE Size               4.00 MiB
      Total PE              3815448
      Alloc PE / Size       3670016 / 14.00 TiB
      Free  PE / Size       145432 / 568.09 GiB
      VG UUID               8fBcOk-aXGx-P3Qy-VVpJ-0zK1-fQgy-Cb691J
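    On the second question, a common approach, offered as a suggestion rather than a recorded answer, is to skip the nested partition table entirely: an LV is already a block device, so a filesystem can go straight onto it, which sidesteps the alignment arithmetic altogether. If a partition is kept, its start should be a multiple of the 8 MiB optimal I/O size reported above (16384 sectors), not the legacy sector 63:

      # Simplest: no partition table inside the LV at all
      mkfs.ext3 /dev/dedvol/backup
      # Or, if partitioning, start the partition at an 8 MiB-aligned sector,
      # e.g. sector 16384 rather than 63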

  • Accessing host LVM partition from Windows XP through Virt.manager 0.8.5 / Qemu / KVM

    - by Nico de Smidt
    Hi, the requested use case is having a Windows XP SP3 guest running in 64-bit Ubuntu (Linux pcs 2.6.35-22-server #35-Ubuntu SMP Sat Oct 16 22:02:33 UTC 2010 x86_64 GNU/Linux). I want this guest to access an LVM LV on the Ubuntu disk. I've set up the following LVM config:

      --- Logical volume ---
      LV Name                /dev/storage/sdc1
      VG Name                storage
      LV UUID                Zg5IMC-OlqB-prL5-fgg4-3A9A-OgKP-oZ0QkJ
      LV Write Access        read/write
      LV Status              available
      # open                 0
      LV Size                1.01 GiB
      Current LE             259
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           251:3

    Then: 1) I set up a storage pool for /dev/storage, 2) ran mkfs.vfat /dev/storage/sdc1, and 3) made a virtual IDE disk in the virt-manager setup for the guest:

      Target device: IDE Disk 2
      Source path: /dev/storage/sdc1

    Now when running XP (the guest), Windows sees a new disk in Disk Manager and wants to create a partition on it, since it believes the drive is empty. After formatting from within Windows I can put data on the new disk volume. Back in Ubuntu, however, I cannot access it any more, since Windows created a partition table inside an LVM logical volume. Running fdisk -l shows the following:

      root@pcs:/media# fdisk -l /dev/storage/sdc1

      Disk /dev/storage/sdc1: 1086 MB, 1086324736 bytes
      32 heads, 63 sectors/track, 1052 cylinders
      Units = cylinders of 2016 * 512 = 1032192 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x8d72e4f4

      Device Boot            Start       End     Blocks  Id  System
      /dev/storage/sdc1p1        1      1050   1058368+   c  W95 FAT32 (LBA)

    which seems fine to me, but when trying to mount /dev/storage/sdc1p1 I get the following error:

      mount /dev/storage/sdc1p1 /media/xp
      mount: special device /dev/storage/sdc1p1 does not exist

    which makes sense, since sdc1p1 does not exist as a device node. Main question: I want to mount the VFAT partition in both Ubuntu and XP. What am I missing here? Regards, and thanks for your consideration. Nico
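    The guest wrote a partition table at the start of the LV, so the host needs a partition mapping on top of the LV before it can mount the VFAT filesystem inside it. A sketch of one way to get there (kpartx ships in the kpartx package on Ubuntu; the generated mapper name may differ):

      # Create device-mapper entries for partitions found inside the LV
      kpartx -av /dev/storage/sdc1
      # The partition then appears under /dev/mapper; check `ls /dev/mapper`
      mount -t vfat /dev/mapper/storage-sdc1p1 /media/xp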

  • How to install Ubuntu 12.04.1 in EFI mode with Encrypted LVM?

    - by g0lem
    I'm trying to properly install Ubuntu 12.04.1 LTS 64-bit PC (AMD64) with the alternate install CD ".iso" on a Lenovo ThinkPad X220. The default hard disk (with a pre-installed version of Windows 7) has been replaced with a brand new SSD. The UEFI BIOS of the ThinkPad X220 is set to "UEFI Boot only" and "USB UEFI BIOS Support" is enabled (I'm using an external USB DVD reader to perform the installation). The BIOS is a Phoenix SecureCore Tiano, BIOS version 8DET56WW (1.26). The attempts below were made with the UEFI BIOS settings described above. Here's what I've tried so far:

    Boot on a live GParted CD:

      Create a GPT partition table
      Create a FAT32 partition for the UEFI System Partition, set the partition to "EF00" type ("boot" flag)
      Leave the remaining space unformatted

    Boot on Ubuntu 12.04.1 LTS 64-bit PC (AMD64) with the alternate CD:

      Perform the install with network updates enabled
      Use manual partitioning
      The FAT32 partition created with GParted is used as the "EFI System Partition"
      The remaining space is set to be used as "Physical volume for LVM"
      Then "Configure encrypted volumes", using the previous "Physical volume for LVM" as the encrypted container; a passphrase is set up
      "Configure the Logical Volume Manager", creating a volume group on the encrypted container /dev/mapper/sda2_crypt
      Create the logical volumes ("Create logical volume", choosing the previously created volume group)
      Assign a mount point and file system to the logical volumes:
        LV-root for /
        LV-var for /var
        LV-usr for /usr
        LV-usr-local for /usr/local
        LV-swap for swap
        LV-home for /home
      NOTE: /tmp would be in RAM only, using TMPFS

    Bootloader step: neither my ESP partition (/dev/sda1), /dev/sda, nor the MBR seems to be the right place for GRUB; I get the following message (the X suffix is for demonstration only):

      Unable to install GRUB in /dev/sdaX
      Executing 'grub-install /dev/sdaX' failed
      This is a fatal error.

    I finish the installation without the bootloader and reboot. The system doesn't start; there's no EFI/GRUB menu at startup. What are the steps to perform a clean and working installation of Ubuntu 12.04.1 Precise Pangolin, 64-bit version, in (U)EFI mode using the encrypted LUKS + LVM scheme described above?
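    One possibility worth checking, sketched rather than confirmed: the 12.04-era alternate installer sometimes attempted to install the BIOS bootloader (grub-pc) even on EFI systems, which fails exactly like this. Booting a 64-bit live USB in UEFI mode, unlocking the LUKS container, chrooting into the installed system (after bind-mounting /dev, /proc and /sys), and installing grub-efi manually can finish the job; flag names below follow the GRUB 2 documentation:

      # Inside the chroot, with the ESP mounted at /boot/efi (path assumed
      # from the layout above):
      apt-get install grub-efi-amd64
      grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
      update-grub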

  • Is It Possible To Recover A Partial LVM Logical Volume?

    - by Terry Wang
    Background: it is an Ubuntu 12.04 VirtualBox VM with 5 virtual HDDs (VDI). NOTE: this is just a test VM, so it was not well planned ahead:

      ubuntu.vdi for / (/dev/mapper/ubuntu-root AKA /dev/ubuntu/root) and /home (/dev/mapper/ubuntu-home)
      weblogic.vdi - /dev/sdb (mounted on /bea for weblogic and other stuff)
      btrfs1.vdi - /dev/sdc (part of a btrfs -m raid1 -d raid1 configuration)
      btrfs2.vdi - /dev/sdd (part of a btrfs -m raid1 -d raid1 configuration)
      more.vdi - /dev/sde (added this virtual HDD because / ran out of inodes and it wasn't easy to figure out what to delete to free up inodes, so I just added the new virtual HDD, created a PV, added it to the existing volume group ubuntu, and grew the root logical volume to work around the inode issue -_-)

    What happened? Last Friday, before finishing up, I wanted to free up some disk space on that box. For some reason I thought more.vdi was useless and tried to detach it from the VM; I then clicked Delete (should have clicked Keep Files, damn!) by mistake when detaching. Unfortunately I didn't have a backup of it. All too late.

    What I have tried: I tried to undelete the VDI files (using testdisk and photorec), but it took too long and recovered heaps of .vdi files that I didn't want (huge, filled the disk, damn!). I finally gave up. Fortunately most of the data is on the separate ext4 partition and the btrfs volumes. Out of curiosity, I still tried to mount the logical volumes and see if it was possible to at least recover /var and /etc. I used a System Rescue CD to boot and activate the volume groups, and got:

      Couldn't find device with uuid xxxx.
      Refusing activation of the partial LV root. Use --partial to override.
      1 logical volume(s) in volume group "ubuntu" now active.

    I was able to mount the home LV but not the root LV. I am wondering if it is possible to access the root LV any more. Under the bonnet, data (on LV root - /) was striped to more.vdi (PV); I know it's almost impossible to recover. But I am still curious about how system administrators/DevOps guys deal with this sort of situation ;-) Thanks in advance.
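    As the rescue message itself hints, a partial activation is possible; extents that lived on the lost PV then come back as read errors, but files stored entirely on the surviving PV are often still salvageable. A sketch:

      # Activate despite the missing PV, then mount read-only and copy out what is readable
      vgchange -ay --partial ubuntu
      mount -o ro /dev/ubuntu/root /mnt
      rsync -a /mnt/etc /mnt/var /safe/location/   # unreadable files are reported and skipped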

  • Reversing an lvreduce of LVM to original size

    - by praspa
    On a RHEL system that uses LVM2 with 4K blocks, I have been successful in reducing the LV, but am trying to work out steps to reverse the operation so that the LV returns to its original size. I am using these steps to reduce the LV by 1GB:

      # umount /foo
      # e2fsck -f /dev/mylvm/foo
      # resize2fs /dev/mylvm/foo <current LV block count - 1GB/4K>
      # lvreduce --size <current #GB - 1GB> /dev/mylvm/foo

    Then, to reverse the reduction:

      # lvextend --size <original #GB> /dev/mylvm/foo
      # resize2fs /dev/mylvm/foo

    The reversal gets close to the original size, but 'df -h' reports that it is about ~0.1GB shy of the original. Using these utilities, is there a better procedure to shrink and grow the LV so that the original state can be recovered exactly?
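    A guess at the discrepancy, based only on the steps quoted: sizing in whole GB makes lvreduce/lvextend round to extent boundaries, and the intermediate resize2fs leaves the filesystem a little smaller than the LV. Working in exact extent counts, and letting LVM drive the filesystem resize with -r, avoids the rounding:

      # Shrink and grow by an exact extent count; -r resizes the ext filesystem to match.
      # 256 extents = 1GiB only with the default 4MiB extent size (an assumption here).
      lvreduce -r -l -256 /dev/mylvm/foo
      lvextend -r -l +256 /dev/mylvm/foo   # exact inverse; fs grown to fill the LV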

  • Error while mounting home directory on different logical volume

    - by RCola
    I created a RAID 5 array from 3 hard drives and formatted it as ext4. I then created the VG0 group and the lv_home logical volume in LVM. When I try to mount lv_home on /home (the folder containing the user profiles), I get this error:

      mount: wrong fs type, bad option, bad superblock on /dev/mapper/VG0-lv_home

    The following seems to be a symbolic link:

      # file -s /dev/VG0/lv_home
      /dev/VG0/lv_home: symbolic link to `../mapper/VG0-lv_home'

    then

      # file -s /dev/mapper/VG0-lv_home
      /dev/mapper/VG0-lv_home: data

    and

      lvm> pvs
        PV         VG   Fmt  Attr PSize PFree
        /dev/md0   VG0  lvm2 a-   2.02g 68.00m

      lvm> lvdisplay
      --- Logical volume ---
      LV Name                /dev/VG0/lv_home
      VG Name                VG0
      LV UUID                WzJus7-2yV8-yhog-Ju1b-TpWH-IIAI-LIutwe
      LV Write Access        read/write
      LV Status              available
      # open                 0
      LV Size                1.17 GiB
      Current LE             300
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           251:0
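    Reading between the lines of the output ('file -s' reports plain "data"): the ext4 filesystem was made on the raw md array before it became a PV, so the new LV itself has never been formatted. A sketch of the likely fix, which will of course erase anything currently on the LV:

      # Create a filesystem on the logical volume, then mount it
      mkfs.ext4 /dev/VG0/lv_home
      mount /dev/VG0/lv_home /home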
