Search Results

Search found 2282 results on 92 pages for 'filesystem'.


  • Redis 2.0.3 would not let go of deleted appendonly.aof file after BGREWRITEAOF

    - by Alexander Gladysh
    Ubuntu 10.04.2, Redis 2.0.3 (more details at the end of the question). My AOF file for Redis is getting too large, to the point where it will soon eat all the free disk space on my small-HDD VPS box:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda              32G   24G  6.7G  78% /

        $ ls -la
        total 3866688
        drwxr-xr-x  2 redis redis       4096 2011-03-02 00:11 .
        drwxr-xr-x 29 root  root        4096 2011-01-24 15:58 ..
        -rw-r-----  1 redis redis 3923246988 2011-03-02 00:14 appendonly.aof
        -rw-rw----  1 redis redis   32356467 2011-03-02 00:11 dump.rdb

    When I run BGREWRITEAOF, the AOF file shrinks, but disk space is not freed:

        $ ls -la
        total 95440
        drwxr-xr-x  2 redis redis     4096 2011-03-02 00:17 .
        drwxr-xr-x 29 root  root      4096 2011-01-24 15:58 ..
        -rw-rw----  1 redis redis 65137639 2011-03-02 00:17 appendonly.aof
        -rw-rw----  1 redis redis 32476167 2011-03-02 00:17 dump.rdb

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda              32G   24G  6.7G  78% /

    Sure enough, Redis is still holding the deleted file:

        $ sudo lsof -p6916
        COMMAND   PID   USER FD  TYPE DEVICE   SIZE/OFF   NODE NAME
        ...
        redis-ser 6916 redis  7r REG  202,0  3923957317 918129 /var/lib/redis/appendonly.aof (deleted)
        ...
        redis-ser 6916 redis 10w REG  202,0    66952615 917507 /var/lib/redis/appendonly.aof
        ...

    How can I work around this issue? I can restart Redis this time, but I would really like to avoid doing this on a regular basis. Note that I can not upgrade to 2.2 (an upgrade to 2.0.4 is feasible, though). More information on my system:

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 10.04.2 LTS
        Release:        10.04
        Codename:       lucid

        $ uname -a
        Linux my.box 2.6.32.16-linode28 #1 SMP Sun Jul 25 21:32:42 UTC 2010 i686 GNU/Linux

        $ redis-cli info
        redis_version:2.0.3
        redis_git_sha1:00000000
        redis_git_dirty:0
        arch_bits:32
        multiplexing_api:epoll
        process_id:6916
        uptime_in_seconds:632728
        uptime_in_days:7
        connected_clients:2
        connected_slaves:0
        blocked_clients:0
        used_memory:65714632
        used_memory_human:62.67M
        changes_since_last_save:8398
        bgsave_in_progress:0
        last_save_time:1299014574
        bgrewriteaof_in_progress:0
        total_connections_received:17
        total_commands_processed:55748609
        expired_keys:0
        hash_max_zipmap_entries:64
        hash_max_zipmap_value:512
        pubsub_channels:0
        pubsub_patterns:0
        vm_enabled:0
        role:master
        db0:keys=1,expires=0
        db1:keys=18,expires=0
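
    A quick way to spot this state on any box is lsof's link-count filter; a minimal sketch, assuming lsof is installed (the PID is the one from the question and purely illustrative):

        # List open files whose on-disk link count is 0 (deleted but still held)
        sudo lsof +L1 | grep '(deleted)'

        # Inspect the descriptors of one process, e.g. Redis at PID 6916
        ls -l /proc/6916/fd | grep deleted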


  • Debian Squeeze Linux 9p virtfs guest mount failure

    - by Tero Kilkanen
    First some background information on the server. Host OS: Debian Linux Squeeze + qemu-kvm version 1.0+dfsg-8~bpo60+1. Guest OS: Debian Linux Squeeze. I use qemu-kvm via libvirt. I have set up 9p VirtFS with the following in the guest's XML config:

        <filesystem type='mount' accessmode='passthrough'>
          <source dir='/srv/www'/>
          <target dir='wwwdata'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </filesystem>

    That is, I want to share /srv/www to the guest OS using the mount tag wwwdata. When I try to mount the VirtFS share from the guest, I get an error message:

        root@server:~# mount -t 9p -o trans=virtio,version=9p2000.L2 wwwdata /srv/www/
        mount: wwwdata: can't read superblock

    I also tried the virtfs target dir/mount_tag www at first. I got the same error message. However, I was able to mount the VirtFS share using mount tag www1111, or www1 or similar. Some more notes on this one: dmesg doesn't show anything useful either in the guest or the host. The only sign is this entry in the guest dmesg:

        [   36.054936] Installing v9fs 9p2000 file system support

    Does anyone know how to get this working correctly? Google gives no useful information on this issue; I've tried several searches.
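
    One detail worth double-checking when reproducing this: the failing command passes version=9p2000.L2, while the version string the Linux 9p client ordinarily accepts is 9p2000.L. A hedged sketch of the usual invocation (mount tag and paths taken from the question):

        # Typical 9p/VirtFS mount from inside the guest
        mount -t 9p -o trans=virtio,version=9p2000.L wwwdata /srv/www/

        # Equivalent /etc/fstab entry
        wwwdata  /srv/www  9p  trans=virtio,version=9p2000.L  0  0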


  • Find out the type of an automounted device

    - by Steve Bennett
    I'm working on a system (Ubuntu Precise) with a mount defined in /etc/fstab as follows:

        /dev/vdb  /mnt  auto  defaults,nobootwait,comment=cloudconfig  0  2

    Originally I just wanted to find out if it's NFS (due to potential MySQL locking issues). Judging from man mount, it's not:

        If no -t option is given, or if the auto type is specified, mount will try to
        guess the desired type. Mount uses the blkid library for guessing the
        filesystem type; if that does not turn up anything that looks familiar, mount
        will try to read the file /etc/filesystems, or, if that does not exist,
        /proc/filesystems. All of the filesystem types listed there will be tried,
        except for those that are labeled "nodev" (e.g., devpts, proc and nfs). If
        /etc/filesystems ends in a line with a single * only, mount will read
        /proc/filesystems afterwards.

    But, out of curiosity now, how can I find out more about what type of device it actually is? (For context, this is a VM running on OpenStack. The device is a 60GB allocation mounted from somewhere - but I don't know how.)

    EDIT: Including answers here:

        $ mount
        /dev/vdb on /mnt type ext3 (rw,_netdev)

        $ df -T
        /dev/vdb  ext3  61927420  2936068  55845624  5% /mnt
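
    Since mount guesses types via the blkid library anyway, the same information can be queried directly; a short sketch (all three commands are standard on Ubuntu Precise, though output formats vary by version):

        # Ask the blkid library what it detects on the device
        sudo blkid /dev/vdb

        # Filesystem type, label and UUID for all block devices
        lsblk -f

        # Filesystem type as reported by the kernel for the mounted path
        stat -f -c %T /mnt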


  • Is there any limit to AIX 5.3 pipe size?

    - by snowflake
    Hello, I'm in trouble while performing cat/tail/head operations on large files on AIX 5.3. When asking for a cat of several 1GB files redirected into another one:

        cat file1 file2 file3 > outputfile

    the output file is limited to 2GB (cat: output error, and the result file is 2147483647 bytes). The filesystem is jfs2. I successfully uploaded 10GB files through ftp onto the filesystem without problems. I found nothing relevant in /etc/security/limits:

        default:
            fsize = -1
            core = 2097151
            cpu = -1
            data = 262144
            rss = 65536
            stack = 65536
            nofiles = 20000

    and in ulimit -a:

        core file size (blocks)     unlimited
        data seg size (kbytes)      245759
        file size (blocks)          unlimited
        max memory size (kbytes)    unlimited
        open files                  2000
        pipe size (512 bytes)       64
        stack size (kbytes)         32768
        cpu time (seconds)          unlimited
        max user processes          2048
        virtual memory (kbytes)     278527

    The problem does not occur on another AIX 5.3 server; I'm just looking for a configuration difference that might be the source of the problem. /etc/security/limits on the server without the problem:

        default:
            fsize = -1
            core = 2097151
            cpu = -1
            data = 262144
            rss = 65536
            stack = 65536
            nofiles = 20000

    ulimit -a on the server without the problem:

        core file size (blocks, -c)   1048575
        data seg size (kbytes, -d)    131072
        file size (blocks, -f)        unlimited
        max memory size (kbytes, -m)  32768
        open files (-n)               20000
        pipe size (512 bytes, -p)     64
        stack size (kbytes, -s)       32768
        cpu time (seconds, -t)        unlimited
        max user processes (-u)       262144
        virtual memory (kbytes, -v)   unlimited
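
    One thing worth ruling out here (a sketch, not a definitive diagnosis): 2147483647 bytes is exactly 2^31 - 1, the classic 32-bit offset ceiling, and the fsize limit is per-user, so the ulimit output above only proves anything for the shell it was run from. Checking in the exact shell that performs the concatenation:

        # Run as the same user, in the same shell, just before the cat:
        ulimit -f     # file size limit in 512-byte blocks; expect "unlimited"
        cat file1 file2 file3 > outputfile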


  • Accidentally mounted a ReiserFS drive as MBR on my Windows box - how do I recover?

    - by Ryan
    I had a WD Netcenter with a 160GB drive that kept dropping off the network. I opened up the enclosure, removed the hard drive, and connected it to a Windows box without knowing the drive used ReiserFS. When mounting on the Windows box, I chose "MBR" as the filesystem. 70GB of data corrupted: 90% of the data is Word documents, Excel spreadsheets, and JPGs - all mission critical. Attempted recovery on a Linux box (Ubuntu) using TestDisk: I could see the container, but couldn't get anything out - according to TestDisk this was because I chose "none" as the filesystem. Attempted recovery using Nucleus Kernel Recovery for Windows: 98% of what was recovered is incomplete and/or unusable. I need to know if a way exists to recover or rebuild the original ReiserFS MBR, or what tools/techniques might give me the best results in recovering the data. I found a Windows version of TestDisk and ran it yesterday - here are the results:

        TestDisk 6.14-WIP, Data Recovery Utility, May 2012
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org

        Disk /dev/sda - 160 GB / 149 GiB - CHS 19457 255 63
        The harddisk (160 GB / 149 GiB) seems too small! (< 519 GB / 483 GiB)
        Check the harddisk size: HD jumpers settings, BIOS detection...

        The following partitions can't be recovered:
             Partition              Start        End      Size in sectors
        >  ReiserFS 3.6         62 241  8   19458  0 18     311581568
           ReiserFS 3.6         62 248 55   19458  8  2     311581568
           ReiserFS 3.6         62 254 37   19458 13 47     311581568
           ReiserFS 3.6         63   6 28   19458 20 38     311581568
           ReiserFS 3.6         63  13 11   19458 27 21     311581568
           ReiserFS 3.6         63  21 43   19458 35 53     311581568
           ReiserFS 3.6         63  27 41   19458 41 51     311581568
           ReiserFS 3.6         63  37 35   19458 51 45     311581568
           ReiserFS 3.6         63  54 20   19458 68 30     311581568
           ReiserFS 3.6         63  76 26   19458 90 36     311581568
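
    Before any further recovery passes touch the disk itself, it is usually safer to take a sector-level image and point the tools at that; a minimal sketch, assuming GNU ddrescue is installed and /dev/sdX is the 160GB drive (double-check the device name first):

        # Image the whole drive, with a map file so the copy can resume
        sudo ddrescue /dev/sdX drive.img drive.map

        # Run recovery tools against the image, never the original
        testdisk drive.img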


  • Mac Mini drive problems but SMART verified: bad hard drive or controller?

    - by Zac Thompson
    I have a 3-year-old Intel Mac Mini at home. About a month ago, it stopped booting from the hard drive (internal, SATA, 80GB). I tried booting from the Install Disc to repair the filesystem, but Disk Utility was unable to do so ("invalid node structure"). I was also unable to use the hard drive in the Terminal, either from the Install Disc or from an Ubuntu boot CD ("DRDY err"). I could see the contents of some directories, but others would give an error, and I would get failures when trying to copy files. At this point I was sure the filesystem was hosed and I'd want to reformat at least.

    DiskWarrior was able to let me retrieve the data files I was interested in, which are now copied to an external hard drive, but it reported a high number of problems (the "speed reduced by disk malfunction" count was over 2000) while trying to rebuild the directory for the drive. It also would not let me use the rebuilt directory to replace the one on the drive; it claimed the disk errors prevented recovery in this way.

    Under normal circumstances I would now assume that the drive itself was going bad: DiskWarrior's "disk malfunction" error above is supposed to imply hardware problems. My initial plan was to buy a replacement for the internal 2.5" drive. However: Disk Utility, command-line tools and DiskWarrior had reported all along that the SMART status of the drive was okay/Verified. So I'm now worried that the drive hardware is actually fine, and that the problems were due to a disk controller that has gone "bad" somehow. If this is the case, I'll probably just replace the whole computer.

    Any advice on how I can tell what is to blame? I don't have a lot of extra hardware sitting around, so I don't have the option of simply dropping the drive into another machine or popping another hard drive inside the Mini.
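
    The overall SMART "Verified" status is only the drive's pass/fail self-assessment; the per-attribute counters say much more about platters versus cable/controller. From the Ubuntu boot CD, something like this would show them (a sketch; assumes the smartmontools package is available and the internal disk appears as /dev/sda):

        # Full attribute dump: watch Reallocated_Sector_Ct and
        # Current_Pending_Sector (drive) vs UDMA_CRC_Error_Count (cable/controller)
        sudo smartctl -a /dev/sda

        # Optionally run an extended self-test, then re-read the log
        sudo smartctl -t long /dev/sda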


  • Geographically distributed file system with preferred locality

    - by dpb
    Hi All -- I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of misc files of varying size (some in the 100s of MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications:

    - Each site can store files in a shared "namespace". That is, all the files would show up in the same filesystem.
    - Each site would not send data over the WAN unless necessary. I.e., there would be local storage on each side of the WAN that would be "merged" into the same logical filesystem.
    - Linux & free ($$$) is a must.

    Basically, something like a central NFS share would meet most of the requirements; however, it would not allow the locally written data to stay local. All data from remote sides of the WAN would be copied locally all the time. I have looked into Lustre, and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage. I have dug through the documentation and have not found anything that will automatically "prefer" local storage over remote storage. Even something that went with the lowest-latency storage would be fine. It would work most of the time, which would meet this application's requirements. Any ideas?


  • Why is rdiff-backup not compatible with encfs --reverse?

    - by user330273
    I'm trying to use encfs with rdiff-backup to ensure that my backups to a remote server are encrypted. The easiest way to do this would be to use encfs --reverse, which means encfs will create a virtual encrypted file system, which I can then back up using rdiff-backup. Except that it doesn't work: rdiff-backup fails every time with an "input/output error" on the encfs virtual filesystem.

    It seems I'm not the only one with this problem, but no one has said what the problem is: this person reported the same issue, but was just told to use sshfs instead (see below on that); in this question on Server Fault, one of the answers just states that "rdiff-backup seems to have trouble accessing the EncFS-reverse filesystem." There's an open bug report on the Debian bug tracker (bug 731413, I can't post the link) on this bug, but it's been open since December 2013 with no response.

    Does anyone know what the problem actually is? Is there a workaround? I can't use the two most commonly suggested alternatives - sshfs and then running encfs on that, or using Duplicity - as both require a much higher bandwidth connection than I have access to (Duplicity requires regular full backups).
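
    For reference, the setup being described looks roughly like this (a sketch with illustrative paths; it reproduces the failing configuration rather than fixing it):

        # Present an encrypted view of the plaintext tree at ~/encrypted-view
        encfs --reverse ~/data ~/encrypted-view

        # Back the encrypted view up to the remote server
        rdiff-backup ~/encrypted-view user@backuphost::/backups/data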


  • Zenoss No space left on device Error

    - by Pastelinux
    Site Error: An error was encountered while publishing this resource. Sorry, a site error occurred.

        Traceback (innermost last):
          Module ZPublisher.Publish, line 231, in publish_module_standard
          Module ZPublisher.Publish, line 165, in publish
          Module Zope2.App.startup, line 211, in __call__
          Module Products.ZenUI3.browser, line 105, in __call__
          Module Products.Five.browser.pagetemplatefile, line 60, in __call__
          Module zope.pagetemplate.pagetemplate, line 115, in pt_render
          Module zope.tal.talinterpreter, line 271, in __call__
          Module zope.tal.talinterpreter, line 343, in interpret
          Module zope.tal.talinterpreter, line 858, in do_defineMacro
          Module zope.tal.talinterpreter, line 343, in interpret
          Module zope.tal.talinterpreter, line 533, in do_optTag_tal
          Module zope.tal.talinterpreter, line 518, in do_optTag
          Module zope.tal.talinterpreter, line 513, in no_tag
          Module zope.tal.talinterpreter, line 343, in interpret
          Module zope.tal.talinterpreter, line 620, in do_insertText_tal
          Module Products.PageTemplates.Expressions, line 203, in evaluateText
          Module Products.PageTemplates.Expressions, line 222, in _handleText
          Module zope.component._api, line 174, in queryUtility
          Module zope.component.registry, line 165, in queryUtility
          Module ZODB.Connection, line 834, in setstate
          Module ZODB.Connection, line 884, in _setstate
          Module ZEO.ClientStorage, line 815, in load
          Module ZEO.cache, line 143, in call
          Module ZEO.cache, line 607, in store
        IOError: [Errno 28] No space left on device

    I went in to check my server through Zenoss today and it looks like somehow my server is full. But when I look at the server, it's only 85% full:

        unclebob:~# df -h
        Filesystem                                Size  Used Avail Use% Mounted on
        /dev/mapper/unclebob--vg0-unclebob--root  1.9G  1.5G  335M  82% /
        tmpfs                                     471M     0  471M   0% /lib/init/rw
        udev                                       10M  820K  9.2M   9% /dev
        tmpfs                                     471M     0  471M   0% /dev/shm
        overflow                                  1.0M  1.0M     0 100% /tmp
        /dev/hde1                                 942M   36M  859M   5% /boot

        unclebob:/tmp# df -i
        Filesystem                                Inodes IUsed  IFree IUse% Mounted on
        /dev/mapper/unclebob--vg0-unclebob--root  121920 54844  67076   45% /
        tmpfs                                     120489     3 120486    1% /lib/init/rw
        udev                                      120489  1520 118969    2% /dev
        tmpfs                                     120489     1 120488    1% /dev/shm
        overflow                                  120489    14 120475    1% /tmp
        /dev/hde1                                  61312    33  61279    1% /boot

    It looks like there are these two files: .ICE-unix/ and .X11-unix/. They had been hidden; I'll remove those. Any idea what they may be? Any ideas on a fix? It probably has something to do with Zenoss.
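
    One detail stands out in the df output above: the 1.0M "overflow" mount on /tmp is 100% full. Debian mounts this tiny tmpfs over /tmp when / is too full at boot, and anything writing under /tmp (a ZEO client cache, for instance) will then hit Errno 28 even though / has room. A hedged cleanup sketch:

        # See what is eating the 1 MB overflow tmpfs
        du -a /tmp | sort -n | tail

        # After freeing space on / and cleaning /tmp, drop the overflow mount
        umount /tmp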


  • Make case-sensitive SMB share case-insensitive

    - by fungs
    I am running a legacy XP app that I would like to move onto a network share. It is very simple and works in theory, but the server providing the share is based on Linux (which I cannot configure), and the software does not work correctly because it is programmed case-insensitively, it seems. After some research: network shares behave like the filesystem they use underneath. This is normal. Unfortunately I cannot fix the software myself.

    Is there any way to turn the case-sensitivity into case-insensitivity for a Windows network drive on the client side? I found two approaches:

    - First, something like icasefile (http://wnd.katei.fi/icasefile/) that wraps around the program and intercepts the file I/O. This is for UNIX only.
    - Secondly, a proxy virtual file system (e.g. something using Dokan). Unfortunately I couldn't find any suitable fs; the only possibility would be to put a case-insensitive filesystem on an image file and put this on the share using, for example, ImDisk (http://www.ltr-data.se/opencode.html/#ImDisk).

    Do you have any better ideas?


  • vmware vmdk disk problem

    - by dmtr
    Hello, I have a VMware ESXi 4 server and 2 storage servers (mounted as NFS). Between the storage servers (Fedora 14) there is a drbd cluster (dual primary) with an ocfs2 filesystem; each server also has a local partition with an ext4 filesystem, and both are mounted as NFS on the ESXi server. When I tried to copy a virtual machine's files (naturally, with it powered off) from the ext4 partition to the ocfs2 partition, the vmdk total file size is different, but the md5sum is the same.

    On the ext4 partition:

        # ls -la
        total 28492228
        -rw------- 1 root root 42949672960 Jan 14 14:46 disk-flat.vmdk
        # md5sum disk-flat.vmdk
        0eaebe3138beb32f54ea5de6dfe5a987

    On the ocfs2 partition:

        # ls -la
        total 13974660
        -rw------- 1 root root 42949672960 Jan 14 16:16 disk-flat.vmdk
        # md5sum disk-flat.vmdk
        0eaebe3138beb32f54ea5de6dfe5a987

    When I power on the virtual machine from the ocfs2 partition it doesn't work. I have Windows on the virtual machine and it freezes after the Windows logo. From the ext4 partition the virtual machine works. A test with Linux (create and install on the ext4 partition and copy) shows the same problem. When I create a virtual machine directly on the ocfs2 partition, there are no problems. I tried to copy via the vSphere client, and I have the same problem. Any suggestions?
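
    The two listings hint at sparseness: the byte size (42949672960) and md5sum match, but the block totals differ, so the copy changed how many blocks are actually allocated. Whether that explains the boot hang is a separate question, but it is easy to check; a sketch with illustrative paths:

        # Allocated size vs apparent size on each copy
        du -h disk-flat.vmdk
        du -h --apparent-size disk-flat.vmdk

        # Copy while forcing holes to stay sparse
        cp --sparse=always disk-flat.vmdk /mnt/ocfs2/disk-flat.vmdk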


  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size <5MB) on an ext3 filesystem; this is what I have for now. After some searching here on Server Fault I have decided on a structure of directories like this:

        000/000/000000001.jpg
        ...
        236/519/236519107.jpg

    This structure will allow me to save up to 1,000,000,000 images, as I'll store a max of 1,000 images in each leaf. I've created it; from a theoretical point of view it seems ok to me (though I've no experience with this), but I want to find out what will happen when there are directories full of files in there.

    A question about creating this structure: is it better to create it all in one go (takes approx 50 minutes on my PC), or should I create directories as they are needed? From a developer's point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin's point of view, is this ok?

    I've thought I could test as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following (see the sketch below):

    - how much time does it take for an image to be saved when there is no or little space used?
    - how does this change when the space starts to be used up?
    - how much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files?

    Does launching this command

        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

    make any sense at all? Is it the only thing I have to do to have a clean start if I want to start over again with my tests? Do you have any suggestions or corrections?
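
    A minimal timing harness along those lines might look like this (a sketch; dd stands in for the real application writes, and paths are illustrative):

        # Time one 5 MB "image" write, forced to disk before dd exits
        time dd if=/dev/urandom of=000/000/test.jpg bs=1M count=5 conv=fsync

        # Clean start between runs: flush dirty pages, then drop caches
        sync
        echo 3 | sudo tee /proc/sys/vm/drop_caches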


  • How to use WPA2 client mode in the Linux-based Cisco WAP4410N access point

    - by joechip
    I have a Cisco WAP4410N access point that I want to use as a client to connect to a WPA2 wireless network (for WLAN service monitoring purposes). Supposedly this access point supports a "Wireless Client/Repeater" mode that allows this. The Repeater function is optional (I have that box unchecked so that nobody can connect to this access point wirelessly). I have verified through SSH that the access point gets configured as a client and not as a Master. But it never associates to the SSID I ask it to. This is what iwconfig shows:

        ath04     IEEE 802.11ng  ESSID:"myownssid"
                  Mode:Managed  Channel:0  Access Point: Not-Associated
                  Bit Rate:0 kb/s   Tx-Power:14 dBm   Sensitivity=1/3
                  Retry:off   RTS thr:off   Fragment thr:off
                  Encryption key:off
                  Power Management:off
                  Link Quality=0/94  Signal level=161/162  Noise level=161/161
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0

    Although I've never done this from the command line, I suppose I could use wpa_supplicant or wpa_cli to associate it, but I don't know how to do that without editing configuration files, and the filesystem is read-only. Besides, I would have to run those commands manually after every reboot. I'd like to know how to do this the Cisco way, if possible. If not, any trick to make this work would be useful.

    Edit: This is with the latest firmware, 2.0.4.2. And I found that not all of the filesystem is read-only, since /var and /tmp are mounted with type ramfs.
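
    Since /var and /tmp are writable, a manual association is at least testable from the SSH shell, assuming the firmware ships the standard wpa_supplicant binaries (that assumption may well not hold on this AP; names and paths are illustrative):

        # Generate a config block for the target network in writable /tmp
        wpa_passphrase myownssid 'the-passphrase' > /tmp/wpa.conf

        # Try to associate on the client interface
        wpa_supplicant -B -i ath04 -c /tmp/wpa.conf

        # Watch the association state
        wpa_cli -i ath04 status

    The Encryption key:off line in the iwconfig output above also suggests the WPA2 key never got applied to the interface.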


  • Drobo-like linux file server - how do I do it?

    - by John Hunt
    I've been pondering for a long time how I can set up a server which operates much like the Drobo storage thing. The reason I don't actually want a Drobo is that I've heard scare stories, plus I'd like to do this on the cheap. So ideally I'm looking for something like LVM, so I can create a logical volume that spans many hard disks of varying sizes... obviously that only offers redundancy if I put the LV on a RAID array (as far as I know).

    I have, however, been reading about technologies such as Microsoft's Drive Extender, which duplicates files at the filesystem level and makes sure that the mirrored files are on a different physical disk. Does anyone know of or recommend a filesystem or method like this? It'll hopefully make much better use of the space available than RAID ever could.

    Performance isn't an issue; I'd just really like to make the most of the hard disks I have lying around, whilst having a bit of redundancy in case a disk dies. I understand full well that this is no replacement for a backup, but I'll only be storing files of medium importance and using the NAS itself as a backup of my main PC and other systems.

    Thanks in advance! I'm hoping zfs or btrfs or something can do something clever for me :)
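
    On the closing hope: btrfs does pool devices of different sizes with redundancy handled at the filesystem level, which is close to the Drive Extender behaviour described. A sketch (device names are illustrative; raid1 here means two copies of every chunk on different disks, not a conventional mirror pair):

        # Pool three mismatched disks, mirroring both data and metadata
        mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
        mount /dev/sdb /srv/pool

        # Later: grow the pool with another spare disk and rebalance
        btrfs device add /dev/sde /srv/pool
        btrfs balance start /srv/pool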


  • Intermittent NFS lockups on Isilon cluster

    - by blackbox222
    We have an Isilon cluster with 8 IQ 12000x nodes which exports storage via several NFS shares for a handful of Linux and Solaris clients. There is a Linux system that has one of these NFS filesystems mounted, and I/O to this filesystem from it is moderately heavy. Every 3-4 weeks (it's not on any kind of discernible schedule, and sometimes is more or less frequent than this), we notice that all activity ceases on this NFS mount (the process hangs, as if the network stopped working, so the process is stuck in uninterruptible sleep) - 30 minutes later, the share recovers and things continue to work normally. The kernel log from the affected machine is as follows:

        Dec  3 10:07:29 redacted kernel: [8710020.871993] nfs: server nfs-redacted not responding, still trying
        Dec  3 10:37:17 redacted kernel: [8711805.966130] nfs: server nfs-redacted OK

    The relevant /etc/fstab line:

        nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs defaults 0 0

    I've checked whether there are any scheduled processes (e.g. cron jobs) or Isilon-related functions (e.g. snapshots) that might be causing these hangups, but I can't seem to find anything. I'm also not aware of any network-related issues or maintenance that would cause this. All of the lockups last almost exactly 30 minutes per the kernel logs. Perhaps someone has some suggestions I could try? (I considered a soft mount to avoid the problems associated with processes hanging while accessing the filesystem; however, I am wary of the corruption that could result, and it would not really solve the underlying issue anyway.)
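
    For experimentation, the usual middle ground between the default hard mount and a corruption-prone soft mount is to keep hard semantics but tune the retry behaviour; a sketch of the fstab line (option values are illustrative starting points, not recommendations, and intr is a no-op on kernels 2.6.25 and later):

        nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs hard,intr,timeo=150,retrans=3 0 0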


  • Centos INODES usage

    - by MSTF
    We are using a CentOS & cPanel server, but we have an important problem with inode usage. The "df -i" command shows the / filesystem using 6 million inodes! When I check the number of files in the / directory, it has only a few thousand files.

        df -i
        Filesystem  Inodes    IUsed    IFree    IUse% Mounted on
        /dev/sda4   6578176   6567525  10651    100%  /
        tmpfs       8238094   1        8238093  1%    /dev/shm
        /dev/sdi1   61054976  169      61054807 1%    /backup
        /dev/sda1   51296     38       51258    1%    /boot
        /dev/sda2   0         0        0        -     /boot/efi
        /dev/sdc1   7290880   1252     7289628  1%    /database
        /dev/sdb2   4096000   53258    4042742  2%    /home
        /dev/sdd1   7290880   3500     7287380  1%    /home2
        /dev/sde1   7290880   68909    7221971  1%    /home3
        /dev/sdg1   7290880   68812    7222068  1%    /home5
        /dev/sdh1   7290880   695076   6595804  10%   /home6
        /dev/sdf1   7290880   58658    7232222  1%    /tmp

        df -h
        Filesystem  Size  Used  Avail Use% Mounted on
        /dev/sda4   99G   30G   65G   32%  /
        tmpfs       32G   0     32G   0%   /dev/shm
        /dev/sdi1   917G  270G  601G  32%  /backup
        /dev/sda1   788M  80M   669M  11%  /boot
        /dev/sda2   400M  296K  400M  1%   /boot/efi
        /dev/sdc1   110G  1.5G  103G  2%   /database
        /dev/sdb2   62G   1.1G  58G   2%   /home
        /dev/sdd1   110G  79G   26G   76%  /home2
        /dev/sde1   110G  3.9G  101G  4%   /home3
        /dev/sdg1   110G  51G   54G   49%  /home5
        /dev/sdh1   110G  64G   41G   62%  /home6
        /dev/sdf1   110G  611M  104G  1%   /tmp

    The SDA disk just has the operating system and cPanel. There are no accounts, databases, or tmp on the SDA disk. Why is SDA using so many inodes? Note: all disks are 120GB SSDs. Thanks.
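
    A way to see where the inodes actually live is to count entries per directory on the root filesystem only; a sketch (-xdev stops find from descending into the other mounts):

        # Largest directory populations on / only, biggest last
        find / -xdev -printf '%h\n' | sort | uniq -c | sort -n | tail -20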


  • Disk Partitioning problem with fdisk.

    - by MA1
    Currently I am using fdisk to create/resize Windows partitions. The following is a sample invocation, feeding fdisk an input script:

        fdisk /dev/sda < partInput

    The contents of partInput are as follows:

        d      # delete the partition
        3      # partition number to be deleted
        n      # add a new partition
        p      # primary: type of new partition
        3      # new partition number
        18804  # start cylinder of new partition
        77433  # end cylinder of new partition
        t      # change the type of partition
        3      # partition number whose type (filesystem) is to be changed
        7      # HPFS/NTFS: partition type (filesystem)
        n      # add a new partition
        p      # primary: type of partition
        77434  # first cylinder of new partition
        77825  # end cylinder of new partition
        w      # write all the above changes

    As you see in the above input, we are using cylinders for start and end. Earlier I was using sectors as the unit and everything was working fine, but I ran into problems when partitioning a 1.5TB hard drive. Then I changed the unit to cylinders, but it works on some machines, not all: on some machines fdisk fails to create the partition table correctly. So I am thinking of moving to parted if there is no way to do the above with fdisk. Please also tell me how to correctly convert sectors to cylinders. How do I perform all the above steps using parted without losing data, or how do I use fdisk correctly?
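
    On the conversion question: with the CHS geometry fdisk assumes here (255 heads, 63 sectors/track), one cylinder is 255 * 63 = 16065 sectors, so cylinder = sector / 16065, rounding down (fdisk displays cylinders 1-based). A scripted parted equivalent that sidesteps cylinders entirely might look like this (a sketch; the device, partition numbers and boundaries are illustrative, and parted's mkpart with an ntfs fs-type sets MBR type id 7):

        # Delete partition 3, recreate it as NTFS, then add the next partition
        parted -s /dev/sda rm 3
        parted -s /dev/sda mkpart primary ntfs 154GiB 634GiB
        parted -s /dev/sda mkpart primary ntfs 634GiB 637GiB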


  • Determining the Source of a Given File System Mount on Unix [migrated]

    - by phobos51594
    Background: Recently I have run into a bit of a snag on my home FreeBSD server. I recently upgraded it to the latest stable release, and I have noticed some strange behavior with the /var partition. Originally, I had the system configured such that /var had its own partition, with /var/run and /var/log on memory disks (/tmp, too). After the upgrade, I notice there is a new, fourth memory disk mounted directly on /var that I had not set up manually and that is not in my fstab. It is only 28 megs or so in size and is causing problems when trying to update my ports collection. The ramdisk mounts automagically at boot and cannot be unmounted while in multi-user mode. If I drop to single-user mode, I am able to unmount it without issue; however, rebooting causes it to pop right back up. System specifications are included at the end of the post.

    Question: Is there any way to determine exactly what is mounting a given memory disk (or any filesystem, for that matter) after it has been mounted? Alternately, does anybody have any ideas what might have caused the new /var ramdisk to pop up?

    System specification:

        # uname -a
        FreeBSD sarge 9.1-PRERELEASE FreeBSD 9.1-PRERELEASE #0: Thu Nov 22 14:02:13 PST 2012     donut@sarge:/usr/obj/usr/src/sys/GENERIC  i386

        # df
        Filesystem  1K-blocks    Used   Avail Capacity  Mounted on
        /dev/da0s1a    515612  410728   63636    87%    /
        devfs               1       1       0   100%    /dev
        /dev/da0s1d    515612  287616  186748    61%    /var
        /dev/da0s1e   6667808 2292824 3841560    37%    /usr
        /dev/md0        63004      32   57932     0%    /tmp
        /dev/md1         3484       8    3200     0%    /var/run
        /dev/md2        31260       8   28752     0%    /var/log
        /dev/md3        31260     512   28248     2%    /var   <-- This

        # cat /etc/fstab
        # Device     Mountpoint  FStype  Options           Dump  Pass#
        /dev/da0s1a  /           ufs     rw,noatime        1     1
        /dev/da0s1d  /var        ufs     rw,noatime        2     2
        /dev/da0s1e  /usr        ufs     rw,noatime        2     2
        md           /tmp        mfs     rw,-s64M,noatime  0     0
        md           /var/run    mfs     rw,-s4M,noatime   0     0
        md           /var/log    mfs     rw,-s32M,noatime  0     0

    Thank you in advance for any assistance.
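
    Two places worth checking (a sketch; both are stock FreeBSD): mdconfig can describe each configured memory disk, and the rc system itself will create a ramdisk over /var when varmfs is enabled, or when the boot scripts decide /var is not usable:

        # Describe every memory disk: unit, type, size, backing store
        mdconfig -l -v

        # See whether the rc scripts are set to create /var or /tmp ramdisks
        grep -E 'varmfs|tmpmfs' /etc/rc.conf /etc/defaults/rc.conf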


  • RAID6 mdraid -> LVM -> EXT4 root with GRUB2?

    - by Rotonen
    2012-03-31 Debian Wheezy daily build in VirtualBox 4.1.2, 6 disk devices. My steps to reproduce so far:

    1. Set up one partition per disk, using the entire disk, as a physical volume for RAID
    2. Set up a single RAID6 mdraid array out of all of those
    3. Use the resulting md0 as the only physical volume for the volume group
    4. Set up your logical volumes, filesystems and mount points as you wish
    5. Install your system

    Both / and /boot will be in this stack. I've chosen EXT4 as my filesystem for this setup. I can get as far as the GRUB2 rescue console, which can see the mdraid, the volume group and the LVM logical volumes (all named appropriately on all levels) on it, but I cannot ls the filesystem contents of any of those and I cannot boot from them. As far as I can see from the documentation, the version of GRUB2 shipped there should handle all of this gracefully: http://packages.debian.org/wheezy/grub-pc (1.99-17 at the time of writing.)

    It is loading the ext2, raid, raid6rec, dosmbr (this one is in the list of modules once per disk) and lvm modules, according to the generated grub.cfg file. Also, it defines the list of modules to be loaded twice in the generated grub.cfg file, and according to quick Googling around this seems to be the norm and OK for GRUB2. How do I get GRUB2 to actually read the contents of the filesystems and boot the system? What am I wrong about in my assumptions of functionality here?

    EDIT (2012-04-01): My generated grub.cfg: http://pastie.org/3708436 It seems it first makes my /usr logical volume the root, and that might be the source of the failure? A grub-mkconfig bug? Or is it supposed to get access to stuff from /usr before / and /boot? /boot is on / for me - no separate boot logical volume.


  • 2011 i7 Macbook Pro unable to boot from any Windows CD?

    - by Craig Otis
    I'm encountering issues installing Windows alongside my Lion install. I'm attempting to install from the internal SuperDrive, after using Boot Camp to partition what was a single HFS+ volume. When holding down Option at boot, the CD appears in the startup list, but upon selecting it, I get a gray screen for 5 minutes, then a flashing white folder. I tried installing rEFIt and using it to boot the CD, but I receive an error about "Not Found" being returned from "LocateDevicePath", and a mention of the firmware not supporting booting using legacy methods. In the Console, when opening the Startup Disk preference pane (which never presents the CD as a selectable option), I see:

        11/25/11 4:39:31.159 PM System Preferences: isCDROM: 0 isDVDROM:1
        11/25/11 4:39:31.159 PM System Preferences: mountable disk appeared: /Volumes/GRMCPRFRER_EN_DVD
        11/25/11 4:39:33.214 PM System Preferences: - So far so good, passing disk to System Searcher.
        11/25/11 4:39:33.218 PM System Preferences: OSXCheck: No boot.efi in System Folder or volume root.
        11/25/11 4:39:33.220 PM System Preferences: WinCheck: Not a valid windows filesystem: /Volumes/GRMCPRFRER_EN_DVD
        11/25/11 4:39:33.220 PM System Preferences: WinCheck: Not a valid windows filesystem: /Volumes/GRMCPRFRER_EN_DVD

    I'm at a loss here. I've done my research, but it sounds like most of the rEFIt errors of this nature are caused by installing from a thumb drive or an external drive. I'm using the internal SuperDrive. Also, I've tried this with two different discs:

    - A Windows XP SP2 CD
    - A Windows 7 x86 DVD

    Both are discs I've had around for years, and I've used them reliably in the past. The system is an early 2011 15" MacBook Pro, with all firmware updates installed.


  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 * 2 TB disks. The disk array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an xfs and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful:

    - writing inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, and then the next 50 are written (according to the console display)
    - when I try to cancel the operation with Control+C, it hangs for half a minute before it is really cancelled

    The performance of the disks individually is very good; I've run bonnie++ on each one separately with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel, the values are only reduced by about 10 MB. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the filesystem creation operation.

    The server details:

        Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux
        mdadm - v2.6.7.1

    Does anyone have a suggestion on how to further debug this?
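
    One knob worth trying when timing this is aligning ext3 to the RAID geometry at mkfs time; a sketch (the numbers follow from the array described: a 64k chunk over 4k blocks gives stride=16, and a 4-disk RAID5 has 3 data disks, hence stripe-width=48):

        # Align the filesystem to the RAID5 layout (64k chunk, 3 data disks)
        mkfs.ext3 -E stride=16,stripe-width=48 /dev/md0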


  • RHEL Java Application returns "No space left on device" but only 3% used

    - by FiveO
    My Java application throws the following exception when saving a new file in /opt/wso2 on CentOS 6.4:

        Caused by: java.io.FileNotFoundException: /opt/wso2/FrameworkFiles/trk_2014062500042488825_TRCK_PatfallHospis_pFromHospis_66601fb3-a03c-4149-93c3-6892e0a10fea.txt (No space left on device)
            at java.io.FileOutputStream.open(Native Method)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:212)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:99)
            at com.avintis.esb.framework.adapter.wso2.FrameworkAdapterWSO2.sendMessages(FrameworkAdapterWSO2.java:634)
            ... 23 more

    But when I run df -a I can see that the partition still has plenty of space available:

        [root@stzsi466 wso2]# df -a
        Filesystem                       1K-blocks    Used Available Use% Mounted on
        /dev/mapper/vg_stzsi466-lv_root   12054824 2116092   9326380  19% /
        proc                                     0       0         0   -  /proc
        sysfs                                    0       0         0   -  /sys
        devpts                                   0       0         0   -  /dev/pts
        tmpfs                              4030764       0   4030764   0% /dev/shm
        /dev/sda1                           495844   53858    416386  12% /boot
        /dev/sdb1                         51605436 1424288  47559744   3% /opt/wso2
        none                                     0       0         0   -  /proc/sys/fs/binfmt_misc

        [root@stzsi466 ~]# df -i
        Filesystem                       Inodes IUsed   IFree IUse% Mounted on
        /dev/mapper/vg_stzsi466-lv_root  765536 45181  720355    6% /
        tmpfs                           1007691     1 1007690    1% /dev/shm
        /dev/sda1                        128016    44  127972    1% /boot
        /dev/sdb1                       3276800  6137 3270663    1% /opt/wso2

    What is the problem here? Is it caused by Java on CentOS 6.4? I have another server running Red Hat RHEL 6.4 and all works fine - same Java, etc. Does anyone know of this problem?
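
    When df and df -i both look healthy, tracing the failing syscall shows which errno the kernel actually returns and on which path; a sketch (the PID is illustrative):

        # Attach to the running JVM and watch file-creation syscalls fail
        strace -f -e trace=open,creat -p <java-pid> 2>&1 | grep ENOSPC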


  • Fix bad superblock on logical partition

    - by Chris
    I was following http://www.howtoforge.com/linux_resi...xt3_partitions and when I reboot and run:

        root@Microknoppix:/home/knoppix# fsck -n /dev/sda7
        fsck from util-linux-ng 2.17.2
        e2fsck 1.41.12 (17-May-2010)
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/sda7

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    So I ran e2fsck with all the block numbers that you need (I forget exactly what tool I used to find where the superblocks are hidden) - no dice. Then I ran testdisk and had it look for the superblock; no results. Anyone have any ideas? fdisk -l for reference:

        root@Microknoppix:/home/knoppix# fdisk -l

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x97646c29

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1          64      512000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              64       38912   312046593    f  W95 Ext'd (LBA)
        /dev/sda5              64         326     2104320   82  Linux swap / Solaris
        /dev/sda6   *         327        2938    20972544   83  Linux
        /dev/sda7            2938       38912   288968672+  83  Linux

    To be honest, it looks like I lost it... The next step, if that happens, is to dump the partition to an image file and hope I can find or write some software to parse through the data looking for known file headers, I think.
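
    For reference, the usual tool for listing where the backup superblocks should live is a dry-run mke2fs; a sketch (-n prints the would-be layout without writing anything, but the reported locations only match if the same block size and options as the original mkfs are assumed):

        # Dry run: prints backup superblock locations, writes nothing
        mke2fs -n /dev/sda7

        # Then try each backup in turn, e.g.:
        e2fsck -b 32768 /dev/sda7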


  • How to log kernel panics without KVM

    - by Spacedust
    My server is crashing and I can't find an answer as to why. It all started after my datacenter upgraded the RAM from 16 GB to 32 GB. I also found logs like these in dmesg - they started to show up just before the first kernel panic:

        EXT4-fs error (device md2): ext4_ext_find_extent: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0)
        EXT4-fs error (device md2): ext4_ext_remove_space: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0)
        EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 20974: 8589 blocks in bitmap, 54896 in gd
        JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        EXT4-fs error (device md2): ext4_ext_split: inode #97911179: (comm pdflush) eh_entries 28769 != eh_max 26988!
        EXT4-fs (md2): delayed block allocation failed for inode 97911179 at logical offset 1039 with max blocks 1 with error -5
        This should not happen!! Data will be lost
        EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 21731: 5 blocks in bitmap, 60762 in gd
        JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.

    My system is CentOS 5.8 64-bit with the latest kernel, 2.6.18-308.20.1.el5. How can I check what the reason for the kernel panic is without having access to KVM? I have told my datacenter admins to check the memory in the server.
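
    Without KVM, the two standard ways to capture panic output are netconsole (stream kernel messages over UDP to another machine) and kdump. A minimal netconsole sketch (IPs, interface and MAC address are illustrative; netcat flag syntax varies between variants):

        # On the crashing server: send kernel messages to 192.168.1.50:6666
        modprobe netconsole netconsole=@/eth0,6666@192.168.1.50/00:11:22:33:44:55

        # On the receiving machine: log whatever arrives, panics included
        nc -u -l 6666 | tee kernel-panic.log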

