Search Results

Search found 273 results on 11 pages for 'tmpfs'.


  • After using lvextend, I can't recover unused space

    - by Cory Gagliardi
    I needed to add more disk space to my CentOS VM, so I added another virtual disk and then used lvextend to add the space to the existing partition. The steps I followed were:

        echo "- - -" > /sys/class/scsi_host/host0/scan
        pvcreate /dev/sdb
        vgextend VolGroup00 /dev/sdb
        lvextend -l +100%FREE /dev/VolGroup00/LogVol00
        resize2fs /dev/VolGroup00/LogVol00

    This worked fine. I subsequently filled up the VM, then deleted most of the used disk space. However, the unused disk space was never recovered after I deleted all of the files. This will illustrate what I'm saying better:

        # df -h
        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00   61G   32G   26G  56% /
        /dev/sda1                         99M   20M   75M  21% /boot
        tmpfs                           1006M     0 1006M   0% /dev/shm

        # pwd; du -h --max-depth=0
        /
        5.1G    .

    I cannot figure out how to get the partition to see that only 5.1 GB is used. Any ideas what I'm doing wrong?
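
    A common reason for df and du to disagree like this is space still held by files that were deleted while a process had them open; it is only released when that process closes them or restarts. A minimal check, assuming lsof is installed (the restart target is just an example):

        lsof +L1                      # open files whose on-disk link count is 0 (deleted but still open)
        # restart whichever service holds them, e.g.:
        # service httpd restart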

    Read the article

  • EBS full device confusion

    - by Mike
    I have a 500GB EBS device (/dev/xvdf) mounted to /vol and all data on the box seems to be writing to /vol correctly (see du output below). For some reason /dev/xvda1 is totally full. Any idea what's going on here?

        $ df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1       32G   30G  8.0K 100% /
        udev             34G  8.0K   34G   1% /dev
        tmpfs            14G  176K   14G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none             34G     0   34G   0% /run/shm
        /dev/xvdb       827G  201M  785G   1% /mnt
        /dev/xvdf       500G  145G  356G  29% /vol

        $ du -sh *
        8.7M    bin
        18M     boot
        8.0K    dev
        5.1M    etc
        48K     home
        0       initrd.img
        80M     lib
        4.0K    lib64
        16K     lost+found
        4.0K    media
        20K     mnt
        4.0K    opt
        0       proc
        40K     root
        176K    run
        7.1M    sbin
        4.0K    selinux
        4.0K    srv
        0       sys
        4.0K    tmp
        414M    usr
        356M    var
        0       vmlinuz
        145G    vol
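
    When a root filesystem reports far more usage than the directories visible on it, one hedged explanation is data written to a mount point (here /vol) before the EBS volume was mounted over it; such files stay on xvda1 but are hidden by the mount. A sketch of how to check with a temporary bind mount (/mnt/rootcheck is just an example name):

        mkdir /mnt/rootcheck
        mount --bind / /mnt/rootcheck        # a view of / that ignores everything mounted on top of it
        du -sh /mnt/rootcheck/vol            # anything large here lives on xvda1, not on the EBS volume
        umount /mnt/rootcheck && rmdir /mnt/rootcheck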

    Read the article

  • Retrieving a specific value from “df -h” using shell

    - by diegodias
    When I use df -h, I get the following output:

        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00   59G  2.2G   54G   4% /
        /dev/sda1                        122M   38M   78M  33% /boot
        tmpfs                            1.1G     0  1.1G   0% /dev/shm
        10.10.0.105:/somepath             11T  8.4T  2.1T  81% /storage4
        10.11.0.101:/somepath             15T  8.9T  5.9T  61% /storage1
        /dev/mapper/patha                5.0T  255G  4.8T   5% /storage5_vol0
        /dev/mapper/pathb                5.0T  195G  4.9T   4% /storage5_vol1
        /dev/mapper/pathc                5.0T  608G  4.5T  12% /storage5_vol2

    I want to write a script that gets the value of the Avail column for a specific storage. I used to use:

        df -k /storage_name | tail -1 | awk '{print $3}'

    But depending on whether the long filesystem name wraps onto its own line, the last line may or may not start with the Filesystem column, which changes the field my script needs from $3 to $4. How can I get Avail with a single command line even if the earlier columns are missing?
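
    One hedged way around the wrapping problem is to count fields from the end of the line, which works whether or not the long filesystem name spills onto its own line; df -P (POSIX output) avoids the wrapping altogether. A sketch, using /storage4 as the example mount:

        # Avail is always the third field from the end of the data line
        df -k /storage4 | tail -1 | awk '{print $(NF-2)}'

        # or suppress the line wrapping entirely and use a fixed column
        df -Pk /storage4 | awk 'NR==2 {print $4}'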

    Read the article

  • What can I do to give some more love and disk space to my database on Ubuntu?

    - by Yaron Naveh
    I'm new to Linux. I've deployed a database to an Ubuntu server on Amazon and found out I'm low on disk space. I ran df (see below) and found that I'm at 89% capacity on one filesystem, but less on others. What does this mean? Do I have a few partitions and can I now utilize others besides /dev/xvda1? Also, /dev/xvdb seems large; is it safe to put the database on it and use only that? If so, do I need to mount it or do something special?

        $> df -lah
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      8.0G  6.7G  914M  89% /
        proc               0     0     0    - /proc
        sysfs              0     0     0    - /sys
        none               0     0     0    - /sys/fs/fuse/connections
        none               0     0     0    - /sys/kernel/debug
        none               0     0     0    - /sys/kernel/security
        udev            3.7G  8.0K  3.7G   1% /dev
        devpts             0     0     0    - /dev/pts
        tmpfs           1.5G  164K  1.5G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            3.7G     0  3.7G   0% /run/shm
        /dev/xvdb       414G  199M  393G   1% /mnt
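
    A hedged note before using /dev/xvdb: on many EC2 instance types it is ephemeral instance storage, so anything on it is lost when the instance is stopped; for durable data an attached EBS volume is the safer home. If ephemeral storage is acceptable, the df output above shows xvdb is already formatted and mounted on /mnt, so the remaining steps are roughly (the directory name is only an example):

        mkdir -p /mnt/db-data
        # point the database's data directory at /mnt/db-data in its own configuration, then restart it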

    Read the article

  • Ubuntu 12 Server messing up my hard disk

    - by Jeroen Jacobs
    I'm installing Ubuntu Server on a disk with 12 GB available. During setup I chose the default LVM-based partition layout, but for some reason Ubuntu decided to use only about 4 GB of the disk. How do I reclaim the remaining space? lvextend doesn't work, by the way.

    Output of df -h:

        Filesystem               Size  Used Avail Use% Mounted on
        /dev/mapper/ubuntu-root  4.3G  3.4G  754M  82% /
        udev                     3.9G  4.0K  3.9G   1% /dev
        tmpfs                    1.6G  756K  1.6G   1% /run
        none                     5.0M     0  5.0M   0% /run/lock
        none                     3.9G     0  3.9G   0% /run/shm
        /dev/sda1                228M   25M  192M  12% /boot

    Output of pvdisplay:

        --- Physical volume ---
        PV Name               /dev/sda5
        VG Name               ubuntu
        PV Size               12.32 GiB / not usable 2.00 MiB
        Allocatable           yes
        PE Size               4.00 MiB
        Total PE              3154
        Free PE               8
        Allocated PE          3146
        PV UUID               dD06RZ-kGcL-1tTX-Ruds-XIDG-ssMd-FIUkzZ

    My partitions:

        Device Boot      Start       End    Blocks  Id System
        /dev/sda1   *     2048    499711    248832  83 Linux
        /dev/sda2       501758  26343423  12920833   5 Extended
        /dev/sda5       501760  26343423  12920832  8e Linux LVM

    When I try lvextend, it says there is not enough disk space.
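
    The pvdisplay output shows only 8 free physical extents (about 32 MB), so nearly the whole volume group is already allocated, most likely to another logical volume such as swap. A hedged sketch for confirming where the space went and, if a large swap LV turns out to be the culprit, shrinking it and giving the rest to root (the LV name swap_1 is an assumption; check lvs first):

        lvs ubuntu                          # list every LV in the VG with its size
        vgs ubuntu                          # how much of the VG is actually free

        swapoff /dev/ubuntu/swap_1
        lvremove /dev/ubuntu/swap_1
        lvcreate -L 2G -n swap_1 ubuntu && mkswap /dev/ubuntu/swap_1 && swapon /dev/ubuntu/swap_1
        lvextend -l +100%FREE /dev/ubuntu/root
        resize2fs /dev/mapper/ubuntu-root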

    Read the article

  • Compiling the Linux kernel, how much space is needed?

    - by ant2009
    I have downloaded the newest stable Linux kernel, 2.6.33.2. I thought I would test building it using VirtualBox, so I created a dynamically sized hard disk of 4 GB and installed CentOS 5.3 with just the minimum packages. I ran make menuconfig with just the default settings. After that I ran make and got the following error:

        net/bluetooth/hci_sysfs.o: final close failed: No space left on device
        make[2]: *** [net/bluetooth/hci_sysfs.o] Error 1
        make[1]: *** [net/bluetooth] Error 2
        make: *** [net] Error 2

    The amount of space I have left is:

        # df -h
        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00  3.3G  3.3G     0 100% /
        /dev/hda1                         99M   12M   82M  13% /boot
        tmpfs                            125M     0  125M   0% /dev/shm

    My virtual disk size is 4 GB, but the actual size is 3.5 GB:

        $ ls -hl
        total 7.5G
        -rw-------. 1 root root 3.5G 2010-04-13 14:08 LFS.vdi

    How much space should I allow when compiling and installing a Linux kernel? Are there any guidelines to follow when doing this? This is my first time, so I'm just experimenting.
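
    As a rough, hedged rule of thumb, a default-config 2.6.3x build tree grows to several gigabytes once object files and modules pile up, so a 4 GB root filesystem is tight. One way to avoid growing the root disk is to build out of tree on a second, larger disk with the kernel's O= option (the mount path is just an example):

        mkdir /mnt/bigdisk/kbuild
        make O=/mnt/bigdisk/kbuild menuconfig
        make O=/mnt/bigdisk/kbuild
        make O=/mnt/bigdisk/kbuild modules_install install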

    Read the article

  • Unable to access intel fake RAID 1 array in Fedora 14 after reboot

    - by Sim
    Hello everyone. First, I am relatively new to Linux (but not to *nix). I have 4 disks assembled in the following Intel AHCI BIOS fake-RAID arrays:

        2x320GB RAID1 - used for operating systems, md126
        2x1TB   RAID1 - used for data,              md125

    I used the 320GB array to install my operating system; the second array I didn't even select during the installation of Fedora 14. After successful partitioning and installation of Fedora, I tried to make the second array available. It was possible to make it visible in Linux with mdadm --assemble --scan; after that I created one maximum-size partition and one maximum-size ext4 filesystem in it, mounted it, and used it. After a restart there were a few I/O errors during boot regarding md125, the filesystem on it could not be mounted, and I was dropped into the repair shell. I commented the filesystem out of fstab and it booted. To my surprise, the array was marked as "auto-read-only":

        [root@localhost ~]# cat /proc/mdstat
        Personalities : [raid1]
        md125 : active (auto-read-only) raid1 sdc[1] sdd[0]
              976759808 blocks super external:/md127/0 [2/2] [UU]

        md127 : inactive sdc[1](S) sdd[0](S)
              4514 blocks super external:imsm

        md126 : active raid1 sda[1] sdb[0]
              312566784 blocks super external:/md1/0 [2/2] [UU]

        md1 : inactive sdb[1](S) sda[0](S)
              4514 blocks super external:imsm

        unused devices: <none>

    and the partition in it was not available as a device special file in /dev:

        [root@localhost ~]# ls -l /dev/md125*
        brw-rw---- 1 root disk 9, 125 Jan  6 15:50 /dev/md125

    But the partition is there according to fdisk:

        [root@localhost ~]# fdisk -l /dev/md125
        Disk /dev/md125: 1000.2 GB, 1000202043392 bytes
        19 heads, 10 sectors/track, 10281682 cylinders, total 1953519616 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x1b238ea9

              Device Boot      Start         End      Blocks   Id  System
        /dev/md125p1             2048  1953519615   976758784   83  Linux

    I tried to "activate" the array in different ways (I'm not experienced with mdadm and the man page is gigantic, so I was only browsing it looking for my answer), but it was impossible: the array would stay in "auto-read-only" and the device special file for the partition would not appear in /dev. It was only after I recreated the partition via fdisk that it reappeared in /dev... until the next reboot. So, my question is: how do I make the array automatically available after reboot?

    Here is some additional information. I am able to see the UUID of the array in blkid:

        [root@localhost ~]# blkid
        /dev/sdc: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
        /dev/sdd: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
        /dev/md126p1: UUID="60C8D9A7C8D97C2A" TYPE="ntfs"
        /dev/md126p2: UUID="3d1b38a3-b469-4b7c-b016-8abfb26a5d7d" TYPE="ext4"
        /dev/md126p3: UUID="1Msqqr-AAF8-k0wi-VYnq-uWJU-y0OD-uIFBHL" TYPE="LVM2_member"
        /dev/mapper/vg00-rootlv: LABEL="_Fedora-14-x86_6" UUID="34cc1cf5-6845-4489-8303-7a90c7663f0a" TYPE="ext4"
        /dev/mapper/vg00-swaplv: UUID="4644d857-e13b-456c-ac03-6f26299c1046" TYPE="swap"
        /dev/mapper/vg00-homelv: UUID="82bd58b2-edab-4b4b-aec4-b79595ecd0e3" TYPE="ext4"
        /dev/mapper/vg00-varlv: UUID="1b001444-5fdd-41b6-a59a-9712ec6def33" TYPE="ext4"
        /dev/mapper/vg00-tmplv: UUID="bf7d2459-2b35-4a1c-9b81-d4c4f24a9842" TYPE="ext4"
        /dev/md125: UUID="b9a1149f-ae11-4fc8-a600-0d77354dc42a" SEC_TYPE="ext2" TYPE="ext3"
        /dev/sda: TYPE="isw_raid_member"
        /dev/md125p1: UUID="420adfdd-6c4e-4552-93f0-2608938a4059" TYPE="ext4"

    Here is what /etc/mdadm.conf looks like:

        [root@localhost ~]# cat /etc/mdadm.conf
        # mdadm.conf written out by anaconda
        MAILADDR root
        AUTO +imsm +1.x -all
        ARRAY /dev/md1 UUID=89f60dee:e46a251f:7475814b:d4cc19a9
        ARRAY /dev/md126 UUID=a8775c90:cee66376:5310fc13:63bcba5b
        ARRAY /dev/md125 UUID=b9a1149f:ae114fc8:a6000d77:354dc42a

    Here is what /proc/mdstat looks like after I recreate the partition in the array so that it becomes available:

        [root@localhost ~]# cat /proc/mdstat
        Personalities : [raid1]
        md125 : active raid1 sdc[1] sdd[0]
              976759808 blocks super external:/md127/0 [2/2] [UU]

        md127 : inactive sdc[1](S) sdd[0](S)
              4514 blocks super external:imsm

        md126 : active raid1 sda[1] sdb[0]
              312566784 blocks super external:/md1/0 [2/2] [UU]

        md1 : inactive sdb[1](S) sda[0](S)
              4514 blocks super external:imsm

        unused devices: <none>

    Detailed output regarding the array in question:

        [root@localhost ~]# mdadm --detail /dev/md125
        /dev/md125:
              Container : /dev/md127, member 0
             Raid Level : raid1
             Array Size : 976759808 (931.51 GiB 1000.20 GB)
          Used Dev Size : 976759940 (931.51 GiB 1000.20 GB)
           Raid Devices : 2
          Total Devices : 2

            Update Time : Fri Jan  7 00:38:00 2011
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0

                   UUID : 30ebc3c2:b6a64751:4758d05c:fa8ff782

            Number   Major   Minor   RaidDevice State
               1       8       32        0      active sync   /dev/sdc
               0       8       48        1      active sync   /dev/sdd

    And /etc/fstab, with /data commented out (the filesystem that is on this array):

        #
        # /etc/fstab
        # Created by anaconda on Thu Jan  6 03:32:40 2011
        #
        # Accessible filesystems, by reference, are maintained under '/dev/disk'
        # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
        #
        /dev/mapper/vg00-rootlv                    /          ext4    defaults        1 1
        UUID=3d1b38a3-b469-4b7c-b016-8abfb26a5d7d  /boot      ext4    defaults        1 2
        #UUID=420adfdd-6c4e-4552-93f0-2608938a4059 /data      ext4    defaults        0 1
        /dev/mapper/vg00-homelv                    /home      ext4    defaults        1 2
        /dev/mapper/vg00-tmplv                     /tmp       ext4    defaults        1 2
        /dev/mapper/vg00-varlv                     /var       ext4    defaults        1 2
        /dev/mapper/vg00-swaplv                    swap       swap    defaults        0 0
        tmpfs                                      /dev/shm   tmpfs   defaults        0 0
        devpts                                     /dev/pts   devpts  gid=5,mode=620  0 0
        sysfs                                      /sys       sysfs   defaults        0 0
        proc                                       /proc      proc    defaults        0 0

    Thanks in advance to everyone who read this whole issue :-)
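
    One hedged avenue, given the output above: an IMSM member array that comes up as "active (auto-read-only)" can be switched to normal read-write mode explicitly, and the kernel can then be told to re-read its partition table so /dev/md125p1 reappears without recreating the partition. Whether this also fixes the boot-time ordering is an assumption worth testing:

        mdadm --readwrite /dev/md125      # clear the auto-read-only state
        partprobe /dev/md125              # re-read the partition table so md125p1 shows up in /dev
        mount /dev/md125p1 /data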

    Read the article

  • df shows partition as full, but du shows it as only 25% full

    - by Jakobud
    I have a humble Linux server to do some stuff for me. I only have an old 16 GB drive in it.

        # df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/md1         16G   16G     0 100% /
        /dev/md0        121M   14M  101M  13% /boot
        tmpfs           502M     0  502M   0% /dev/shm

    But when I do:

        # du -sh /
        3.5G    /

    So one says that my 16G drive is full. The other says only 3.5G is used. Why the discrepancy? I cannot write any new files, as it says the drive is indeed full. But if I can't find the files taking up all the space on the filesystem, how can I delete them to free up space?

    Read the article

  • CentOS Insufficient space in download directory /var/cache/yum/base/packages

    - by Joao Heleno
    Hello! I was trying to yum install libpcap when I got:

        Error Downloading Packages:
          14:libpcap-0.9.4-15.el5.i386: Insufficient space in download directory /var/cache/yum/base/packages
            * free   0
            * needed 108 k

    Here's the output from df -h:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda1        20G   19G     0 100% /
        /dev/sda3       202G   38G  154G  20% /home
        tmpfs           1.5G     0  1.5G   0% /dev/shm

    And fdisk -l:

        Disk /dev/sda: 250.0 GB, 250000000000 bytes
        255 heads, 63 sectors/track, 30394 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        2611    20972826   83  Linux
        /dev/sda2            2612        3251     5140800   82  Linux swap / Solaris
        /dev/sda3            3252       30394   218026147+  83  Linux

    I have run yum clean all with no success in clearing up space. Please advise. Thanks.
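
    Since yum clean all freed nothing, the space is being used elsewhere on /. A hedged way to find the biggest directories one level at a time, staying on the root filesystem (sizes in KB so the older coreutils on CentOS 5 can sort them numerically):

        du -xk --max-depth=1 / 2>/dev/null | sort -n | tail -15
        # then descend into the largest directory and repeat, e.g.
        du -xk --max-depth=1 /var 2>/dev/null | sort -n | tail -15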

    Read the article

  • How to add a new entry to fstab?

    - by Roei
    I mount a device:

        mount /dev/xvdf /mnt/mongo

    and verify the mount using df -h:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      7.9G  955M  6.9G  12% /
        tmpfs           299M   44K  299M   1% /dev/shm
        /dev/xvdf        20G  589M   19G   4% /mnt/mongo

    Now I'm trying to figure out how to make it mount automatically at boot. I understand I need to add a new entry to /etc/fstab, so I ran:

        $ sed -i '$ a\/dev/xvdf /mnt/mongo xfs defaults 1 1' /etc/fstab

    But after a reboot it seems the automatic mount didn't work: the device doesn't appear in the df -h list. Should I not use sed to add the entry? Is the entry I entered incorrect?
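
    A hedged way to debug this without another reboot: confirm what actually landed in fstab, check the real filesystem type on the device (the entry above assumes xfs), and then let mount apply the file; any error from mount -a points straight at the problem:

        tail -1 /etc/fstab          # did sed really append the line?
        blkid /dev/xvdf             # actual filesystem type on the device
        umount /mnt/mongo
        mount -a                    # mounts everything in fstab; errors appear here
        df -h /mnt/mongo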

    Read the article

  • /tmp/ read-only

    - by Chirag
    When I try to delete some of the old eaccelerator files, I get errors like the following:

        rm: cannot remove `/tmp/eaccelerator/7/2/eaccelerator-0502.02065984': Read-only file system

    What can I do to fix it?

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda2       226G  127G   88G  60% /
        /dev/sdc1       227G  102G  114G  48% /disk1
        /dev/sda1        99M   18M   77M  19% /boot
        tmpfs           4.0G     0  4.0G   0% /dev/shm
        /dev/sdb1       459G  182G  255G  42% /home4
        /usr/tmpDSK     485M  325M  135M  71% /tmp

    That's my output from the server. Also, what commands can I use to unmount and mount it? And should I do it while my web server is running?
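
    Here /tmp is a loopback filesystem backed by the file /usr/tmpDSK (a common cPanel setup), and a filesystem usually flips to read-only after the kernel hits errors on it. A hedged sketch of checking and remounting it, assuming the loopback file holds an ext3 filesystem; fsck needs /tmp unmounted, so briefly stopping services that write to /tmp (web server, MySQL) is the cautious move:

        dmesg | tail                       # look for the ext3/loop errors that triggered the read-only remount
        umount /tmp                        # if it is busy: fuser -km /tmp
        e2fsck -y /usr/tmpDSK              # repair the filesystem inside the loopback file
        mount -o loop,noexec,nosuid,rw /usr/tmpDSK /tmp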

    Read the article

  • No free disk space ;[

    - by skomak
    Hi, I have a weird situation, because the Linux df command says there is no free disk space:

        [root@backup cache]# df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda3        72G   70G     0 100% /
        /dev/sda1       190M   11M  170M   7% /boot
        tmpfs           248M     0  248M   0% /dev/shm

    but du -sh /* says:

        [root@backup cache]# du -sh /*
        4.0K    /bacula-restores
        7.4M    /bin
        5.4M    /boot
        3.6T    /data
        116K    /dev
        55M     /etc
        204K    /home
        76M     /lib
        16K     /lost+found
        12K     /media
        0       /misc
        16K     /mnt
        8.0K    /mount
        0       /net
        8.0K    /opt
        0       /proc
        2.3G    /root
        32M     /sbin
        8.0K    /selinux
        168K    /share
        8.0K    /srv
        0       /sys
        361M    /test
        20K     /tmp
        3.2G    /usr
        1.5G    /var

    Could you tell me where the problem is? Where is my space? I can't figure it out :(
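
    One detail in the du output above: /data reports 3.6T, far more than the 72G root filesystem, so it is almost certainly a separate mount that du is descending into, which makes the totals misleading. A hedged way to measure only what actually sits on /dev/sda3:

        du -x -sh /                                           # -x stays on the root filesystem, skipping /data and other mounts
        du -xk --max-depth=1 / 2>/dev/null | sort -n | tail -15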

    Read the article

  • Root partition full? CentOS

    - by Joao Heleno
    Hi! I'm running CentOS 5.4 and my / is full. I wanted to install gparted, but in order to do that I must install Priorities, and that's when I get an error saying / is full, so I can't go forward. Here's some output:

        fdisk -l

        Disk /dev/sda: 250.0 GB, 250000000000 bytes
        255 heads, 63 sectors/track, 30394 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        2611    20972826   83  Linux
        /dev/sda2            2612        3251     5140800   82  Linux swap / Solaris
        /dev/sda3            3252       30394   218026147+  83  Linux

        df
        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/sda1             20315812  19365152         0 100% /
        /dev/sda3            211196248  49228164 151066780  25% /home
        tmpfs                  1552844         0   1552844   0% /dev/shm

    I'm not using LVM. Please advise. Thanks.

    Read the article

  • Webserver: Performance impact when storing session files on /dev/shm

    - by GetFree
    I have a website running on a typical setup: Linux, Apache, PHP, MySQL. What's not typical about it is that it gets a lot of traffic (400,000+ visits a day), so efficiency is becoming more and more important to me. I'm constantly looking for things I could optimize, and right now my attention is focused on PHP's session files. There are a huge number of session files constantly being read and created in the /tmp directory. So my question is: is it a good idea to store the session files in /dev/shm (tmpfs) in order to speed things up a little bit?
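
    If you do try it, the change is a single php.ini setting plus a private directory; the directory name below is only an example, and the web-server user varies by distribution. Two caveats worth keeping in mind: tmpfs is cleared on reboot (all sessions are lost then), and under memory pressure it can be swapped out like anything else.

        mkdir -p /dev/shm/php_sessions
        chown apache:apache /dev/shm/php_sessions     # adjust the user to whatever Apache/PHP runs as
        chmod 700 /dev/shm/php_sessions
        # then in php.ini:
        #   session.save_path = "/dev/shm/php_sessions"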

    Read the article

  • How to determine if a CentOS system is Raid-1?

    - by Tedd Johnson
    I've tried searching for this answer, but haven't found anything elegant. I have numerous servers in a colo that is in another state. I need to find a way to check that the servers have RAID-1 on them, so that I can determine whether they were set up correctly by my colo. df -h shows:

        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00  442G  1.5G  418G   1% /
        /dev/sda1                         99M   19M   75M  20% /boot
        tmpfs                            4.0G     0  4.0G   0% /dev/shm

    However, as CentOS uses LVM by default, this doesn't indicate whether RAID-1 is present. It is supposed to be software RAID, so I'm pretty sure there should be a way to check. Thanks.
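
    For Linux software RAID (md), a couple of hedged checks that need nothing beyond a stock CentOS install: if the LVM physical volume sits on an md device, mirroring is in place; if it sits directly on a disk partition (as the /dev/sda1 /boot entry above hints it might), it is not. The /dev/md0 name is only an example; use whatever /proc/mdstat lists:

        cat /proc/mdstat              # active md arrays and their member disks, if any
        pvs                           # which block device backs the VolGroup00 physical volume
        mdadm --detail /dev/md0       # level, members and state of a specific array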

    Read the article

  • How can I view updatedb database content, and then exclude certain files/paths?

    - by rubo77
    The updatedb database on my Debian server is quite slow. Where is the database located, and how can I view its content to find out whether there are paths full of useless stuff that I could add to the prune paths? My /etc/updatedb.conf looks like this:

        ...
        # filesystems which are pruned from updatedb database
        PRUNEFS="NFS nfs nfs4 afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre_lite tmpfs usbfs udf"
        export PRUNEFS
        # paths which are pruned from updatedb database
        PRUNEPATHS="/tmp /usr/tmp /var/tmp /afs /amd /alex /var/spool /sfs /media /var/backups/rsnapshot /var/mod_pagespeed/"
        ...

    And how can I prune all paths that contain */.git/* and */.svn/*?
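
    A hedged sketch, assuming the mlocate implementation that current Debian installs by default (the "export" lines above could also mean the older findutils updatedb, in which case the paths differ): the database lives at /var/lib/mlocate/mlocate.db, locate itself can summarize or dump it, and directory names like .git and .svn are pruned everywhere with PRUNENAMES rather than PRUNEPATHS:

        locate -S                     # statistics: database path, number of directories and files
        locate -r .                   # dumps every entry (regex matching any character); pipe through less or grep
        # in /etc/updatedb.conf:
        PRUNENAMES=".git .svn .hg .bzr"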

    Read the article

  • Increasing a Linux partition once VM size increased in vSphere?

    - by dannymcc
    I have an Ubuntu 12.04 VM running on VMware ESXi 5.1. The server (VM) itself has run out of space; the results of df -h are as follows:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda1        19G   17G  1.2G  94% /
        udev            490M  4.0K  490M   1% /dev
        tmpfs           200M  232K  199M   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            498M     0  498M   0% /run/shm

    The original VM HDD size was just under 19 GB, which I have now increased to 100 GB within the vCenter GUI. Is there a simple way of doing this? The VM doesn't seem to acknowledge the increase at all.
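
    A hedged outline of what usually has to happen inside the guest after the virtual disk is grown: make the kernel rescan the disk so it sees the new size, grow the partition, then grow the filesystem. Because /dev/sda1 is the mounted root partition, the partition step is the delicate one; growpart (from the cloud-guest-utils package) can do it in place, but booting a GParted live CD is the more cautious route:

        echo 1 > /sys/class/block/sda/device/rescan    # notice the new 100 GB size
        growpart /dev/sda 1                            # extend partition 1 to the end of the disk
        resize2fs /dev/sda1                            # grow the ext4 filesystem online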

    Read the article

  • centos 100% disk full - How to remove log files, history, etc?

    - by kopeklan
    mysqld won't start because disk space is full:

        101221 14:06:50 [ERROR] /usr/libexec/mysqld: Error writing file '/var/run/mysqld/mysqld.pid' (Errcode: 28)
        101221 14:06:50 [ERROR] Can't start server: can't create PID file: No space left on device

    Running df -h:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda2        16G  3.2G   12G  23% /
        /dev/sda5       4.8G  4.6G     0 100% /var
        /dev/sda3       430G  855M  407G   1% /home
        /dev/sda1        76M   24M   49M  33% /boot
        tmpfs           956M     0  956M   0% /dev/shm

    du -sh * in /var:

        12K     account
        56M     cache
        24K     db
        32K     empty
        8.0K    games
        1.5G    lib
        8.0K    local
        32K     lock
        221M    log
        16K     lost+found
        0       mail
        24K     named
        8.0K    nis
        8.0K    opt
        8.0K    preserve
        8.0K    racoon
        292K    run
        70M     spool
        8.0K    tmp
        76K     webmin
        2.6G    www
        20K     yp

    On /dev/sda5 the website files live in /var/www. Because this is my first time, I have no idea which files to remove, other than moving /var/www to another partition. And one more thing: what is the right way to remove log files, history, etc. on /dev/sda5?
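
    In the du output the two big consumers are /var/www (2.6G) and /var/lib (1.5G, much of it likely MySQL's data directory); the logs themselves are only 221M. A hedged sketch of the two usual moves: truncate, rather than delete, any log a daemon is still writing (deleting an open file frees nothing until the daemon restarts), and relocate /var/www to the nearly empty /home partition with a symlink so existing paths keep working (if SELinux is enforcing, the moved files may need their contexts restored):

        > /var/log/messages                   # truncate an active log in place
        logrotate -f /etc/logrotate.conf      # or force a full rotation now

        mv /var/www /home/www
        ln -s /home/www /var/www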

    Read the article

  • Can't mount home after trying to resize (bad geometry: block count exceeds size of device).

    - by Lynn
    This is on a fresh computer (a supercomputer, actually). It got to me with 15T on the home mount and 50G on the root. I tried allocating 7T to root and resizing (since I'm putting a local yum repo on this machine, as it has no internet access nor will it ever). I tried following the instructions here: Centos 6.3 disk space allocation. But something went wrong and now home won't mount. Instead I get this from dmesg | tail:

        EXT4-fs (dm-2): bad geometry: block count 4294967295 exceeds size of device (1342177280 blocks)

    df -h gives this output:

        Filesystem                    Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup-lv_root  7.0T  3.6G  6.6T   1% /
        tmpfs                         190G  216K  190G   1% /dev/shm
        /dev/sda1                     485M   38M  422M   9% /boot

    I didn't have any files on /dev/mapper/VolGroup-lv_home. Will simply running mke2fs fix it so it's mountable? What sort of options should I run it with? I've never resized volumes before or used mke2fs, and I don't want to make this mess worse.
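
    The error means the ext4 superblock still records the old, larger block count while the logical volume underneath was shrunk, which is what happens when an LV is reduced without shrinking the filesystem first. Since lv_home holds no data, the simplest hedged fix is to recreate the filesystem at the LV's current size (this destroys whatever is left on it):

        lvs                                           # sanity-check lv_home's current size
        mkfs.ext4 /dev/mapper/VolGroup-lv_home        # rebuild the filesystem to match the LV
        mount /dev/mapper/VolGroup-lv_home /home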

    Read the article

  • df command shows no output

    - by user119720
    I'm running a Linux distro on my server. When I want to check the size of the disks, I issue this command:

        df -h

    But it does not produce ANY output. Strangely enough, when I issue other commands such as fdisk -l or du -h, they show output normally. Does anyone know why this is happening? Thanks.

    Edit: here is the output of cat /etc/fstab:

        none /dev/pts devpts rw 0 0

    and this is for the mount command:

        none on /dev/pts type devpts (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

    Edit (2): here is the output of cat /proc/mounts:

        /dev/vzfs / vzfs rw,relatime,usrquota,grpquota 0 0
        proc /proc proc rw,relatime 0 0
        sysfs /sys sysfs rw,relatime 0 0
        none /dev tmpfs rw,relatime 0 0
        none /dev/pts devpts rw,relatime 0 0
        none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
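
    df reports the filesystems listed in /etc/mtab, and on this container (the vzfs root suggests OpenVZ) mtab appears not to contain the root entry, so df has nothing to print, while fdisk and du, which do not consult mtab, still work. A hedged fix is to rebuild mtab from the kernel's own view:

        grep -v rootfs /proc/mounts > /etc/mtab    # regenerate mtab from /proc/mounts
        df -h                                      # should now list / again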

    Read the article

  • /dev/shm (shared memory) on linux

    - by Kirzilla
    Hello. Let's imagine we have 8 GB of RAM on a server, and I mount /dev/shm with 4 GB:

        mount -o remount,size=4G /dev/shm

    Will this memory be strictly reserved for shared memory, or, if /dev/shm is empty, can the memory be used by regular applications (web server, PHP, etc.)? PS: Sorry for my English. I'm asking because I've just checked df -h and found:

        tmpfs           6.0G     0  6.0G   0% /dev/shm

    on an 8 GB RAM server. I don't know who made this setup, but it seems awful to me. Thank you!

    Read the article

  • Resize the /var directory in redhat enterprise edition 4

    - by Sri
    I am running NDB MySQL. The log files fill up the /var directory, so now I can't start the ndbd service. As a temporary fix I deleted the log files and it is working again, but the log files will fill up /var once more. I have plenty of space on another partition, so I would like to move space from there to /var. Here is my output from df -h:

        Filesystem                       Type    Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00  ext3     54G  2.9G   49G   6% /
        /dev/cciss/c0d0p1                ext3     99M   14M   81M  14% /boot
        none                             tmpfs  1013M     0 1013M   0% /dev/shm
        /dev/cciss/c0d0p2                ext3    9.7G  9.7G     0 100% /var

    There is plenty of space on /dev/mapper/VolGroup00-LogVol00, so I would like to move about 10 GB from that volume to /var. Could you please help me solve this problem?
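
    Because /var here is a plain partition (/dev/cciss/c0d0p2) rather than a logical volume, it cannot simply be extended from the LVM pool. One hedged approach is to carve a new logical volume out of VolGroup00's free space, copy /var onto it, and mount that as /var; doing the copy from single-user mode, or with everything that writes to /var stopped, avoids files changing mid-copy:

        lvcreate -L 10G -n lv_var VolGroup00
        mkfs.ext3 /dev/VolGroup00/lv_var
        mkdir /mnt/newvar && mount /dev/VolGroup00/lv_var /mnt/newvar
        rsync -aH /var/ /mnt/newvar/
        # then add  /dev/VolGroup00/lv_var  /var  ext3  defaults 1 2  to /etc/fstab and reboot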

    Read the article

  • Volume expanded in Volume Group, old disk reduced but still in use in system

    - by Yurij73
    I tried to add a new, unformatted hard disk (sdb) to my VirtualBox CentOS machine, and successfully extended the existing volume group vg_localhost onto /dev/sdb:

        # lvdisplay
        --- Logical volume ---
        LV Path                /dev/vg_localhost/lv_root
        LV Name                lv_root
        VG Name                vg_localhost
        LV UUID                DkYX7D-DMud-vLaI-tfnz-xIJJ-VzHz-bRp3tO
        LV Write Access        read/write
        LV Creation host, time localhost.centos, 2012-12-17
        LV Status              available
        # open                 1
        LV Size                18,03 GiB
        Current LE             4615
        Segments               2
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           253:0

        # lsblk
        NAME                          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
        sdb                             8:16   0   20G  0 disk
        +-vg_localhost-lv_root (dm-0) 253:0    0   18G  0 lvm  /
        +-vg_localhost-lv_swap (dm-1) 253:1    0    2G  0 lvm  [SWAP]
        sda                             8:0    0    9G  0 disk
        +-sda1                          8:1    0  500M  0 part /boot
        +-sda2                          8:2    0  8,5G  0 part
        sr0                            11:0    1 1024M  0 rom

        # df -h
        /dev/mapper/vg_localhost-lv_root  6,5G  6,2G  256M  97% /
        tmpfs                             499M  200K  499M   1% /dev/shm
        /dev/sda1                         485M   78M  382M  17% /boot

    The old sda is still in use. What do I have to do next?
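
    One thing stands out in the output above: lv_root is already 18 GiB, but the filesystem on it still reports only 6.5 GiB in df, so the step most likely still missing is growing the filesystem to fill the enlarged logical volume (online resize works for ext4):

        resize2fs /dev/vg_localhost/lv_root
        df -h /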

    Read the article

  • How to extend a logical volume in VMware

    - by Mercer
    I have CentOS 6.3 in my virtual machine. I have 2 disks: disk #1 is 18G and disk #2 is 20G.

        [root@vm ~]# df -h
        Filesystem                         Size  Used Avail Use% Mounted on
        /dev/mapper/vg_system-lv_root     1008M  250M  708M  27% /
        tmpfs                              1.9G     0  1.9G   0% /dev/shm
        /dev/sda1                          194M   31M  154M  17% /boot
        /dev/mapper/vg_system-lv_home      504M   17M  462M   4% /home
        /dev/mapper/vg_system-lv_opt       2.0G   68M  1.9G   4% /opt
        /dev/mapper/vg_produits-lv_grid    6.9G  2.5G  4.1G  38% /opt/grid
        /dev/mapper/vg_produits-lv_oracle  6.9G  144M  6.4G   3% /opt/oracle
        /dev/mapper/vg_system-lv_tmp       2.8G   71M  2.6G   3% /tmp
        /dev/mapper/vg_system-lv_usr       2.5G  1.6G  799M  67% /usr
        /dev/mapper/vg_system-lv_var       2.0G  278M  1.6G  15% /var

    I want to extend /tmp to 10 GB and /opt/oracle to 13 GB. Thanks.
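
    A hedged sketch, assuming the volume groups have (or are first given) enough free extents: check free space with vgs, then grow each LV and its filesystem in one step with lvextend -r, which runs resize2fs for ext4 under the hood:

        vgs                                              # free space in each volume group
        lvextend -r -L 10G /dev/vg_system/lv_tmp         # grow /tmp to 10 GB and resize the filesystem
        lvextend -r -L 13G /dev/vg_produits/lv_oracle    # grow /opt/oracle to 13 GB and resize the filesystem
        # if a VG has no free space, add the second disk first: pvcreate /dev/sdb && vgextend vg_produits /dev/sdb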

    Read the article
