Search Results

Search found 2515 results on 101 pages for 'distributed filesystems'.


  • How do I convert a Linux disk image into a sparse file?

    - by endolith
    I have a bunch of disk images, made with ddrescue, on an EXT partition, and I want to reduce their size without losing data, while still being mountable. How can I fill the empty space in the image's filesystem with zeros, and then convert the file into a sparse file so this empty space is not actually stored on disk? For example:

        > du -s --si --apparent-size Jimage.image
        120G Jimage.image
        > du -s --si Jimage.image
        121G Jimage.image

    This actually only has 50G of real data on it, though, so the second measurement should be much smaller. This supposedly fills the empty space with zeros:

        cat /dev/zero > zero.file
        rm zero.file

    But if sparse files are handled transparently, it might actually create a sparse file without writing anything to the virtual disk, ironically preventing me from turning the virtual disk image into a sparse file itself. :) Does it? Note: for some reason, sudo dd if=/dev/zero of=./zero.file works on a mounted disk image when cat does not.
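
    A minimal sketch of one way to do this (assuming the image is mounted at /mnt/image, a hypothetical path, and GNU coreutils): zero the free space from inside the mounted filesystem, unmount, then rewrite the image so zeroed blocks become holes.

        # fill free space with zeros; dd stops at "No space left on device"
        sudo dd if=/dev/zero of=/mnt/image/zero.file bs=1M
        sudo rm /mnt/image/zero.file
        sudo umount /mnt/image
        # rewrite the image, punching holes wherever blocks are all-zero
        cp --sparse=always Jimage.image Jimage.sparse
        mv Jimage.sparse Jimage.image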

    Read the article

  • Removing duplicate files, keeping only the newest file

    - by pinkie_d_pie_0228
    I'm trying to clean up a photo dump folder in which several files are duplicated, but under different filenames or lost in subfolders. I've looked at tools like rmlint, duff and fdupes, but I can't seem to find a way to have them keep only the file with the most recent timestamp. I suspect I have to postprocess the results, but I don't even know where to start. Can anyone guide me on how to get the duplicate file list and delete everything but the newest file?
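
    One hedged sketch of that postprocessing (assumes fdupes is installed, filenames contain no newlines, and /path/to/photo-dump is a hypothetical path): fdupes prints duplicate groups separated by blank lines, so a loop can keep the newest file of each group and delete the rest.

        #!/usr/bin/env bash
        group=()
        while IFS= read -r line; do
          if [ -z "$line" ]; then                      # blank line ends a group
            if [ "${#group[@]}" -gt 1 ]; then
              # ls -t lists newest first, so delete everything after the first entry
              ls -t -- "${group[@]}" | tail -n +2 | xargs -d '\n' rm --
            fi
            group=()
          else
            group+=("$line")
          fi
        done < <(fdupes -r /path/to/photo-dump; echo)  # trailing echo flushes the last group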

    Read the article

  • Opinions on NTFS for Mac solution?

    - by AngryHacker
    I am currently using the free NTFS-3G to access my NTFS drive from the Mac. It seems pretty stable (except once in the very beginning, it locked up the Mac and corrupted my NTFS drive, which I then fixed with chkdsk from a PC). However, speed is NOT one of its virtues. In fact, it's painfully slow. I've been looking at buying Paragon NTFS for Mac OS X 8.0. Their product comparison PDF claims nearly double the speed (vs NTFS-3G) in almost every category (read, write, etc.). In addition, there is now an unsupported native solution in Snow Leopard. Can folks here share their experiences? Is the native solution stable enough to be used for daily work? Should I just go with Paragon?

    Read the article

  • Why do I get xfs_freeze "Operation not supported" error with ec2-consistent-snapshot? Debian Squeeze w/ext4 filesystem

    - by Michael Endsley
    I'm running the following command:

        [root@somehost ~]# ec2-consistent-snapshot --aws-credentials-file '/some/dir/file' --mysql --mysql-socket '/var/run/mysqld/mysql.sock' --mysql-username 'backup' --mysql-password 'password' --freeze-filesystem '/dev/xvda1' vol-xxxxxx

    It returns this error:

        xfs_freeze: cannot freeze filesystem at /dev/xvda1: Operation not supported
        ec2-consistent-snapshot: ERROR: xfs_freeze -f /dev/xvda1: failed(256) snap-eeb66393
        xfs_freeze: cannot unfreeze filesystem mounted at /dev/xvda1: Invalid argument
        ec2-consistent-snapshot: ERROR: xfs_freeze -u /dev/xvda1: failed(256)

    This is being run on Debian Squeeze with an ext4 filesystem. Can anyone explain this error to me, or what might be its cause? When googling, I found information about it needing to be executed with sudo, but I'm performing the entire operation as root. I also found some posts about trying to run it after a CentOS upgrade using yum, but the situation appeared dissimilar. It's difficult to find anything that refers to this exact situation. xfs_freeze is available for use on the filesystem. Is it possible that the filesystem, despite being ext4, somehow doesn't support freezing? Sorry if I've missed some bit of StackExchange etiquette with this post -- it's my first venture here!
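
    One thing worth checking (an assumption drawn from the error text, not a confirmed diagnosis): xfs_freeze operates on a mount point, not a device node, so passing /dev/xvda1 may itself be the problem. A quick sketch for testing:

        # if /dev/xvda1 is mounted at /, freeze by mount point instead:
        ec2-consistent-snapshot ... --freeze-filesystem / vol-xxxxxx
        # test freezing directly with util-linux's fsfreeze (ext4 supports it);
        # do this briefly -- a frozen root filesystem blocks all writes:
        sudo fsfreeze --freeze / && sudo fsfreeze --unfreeze /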

    Read the article

  • gpfs: adding a new nsd server to a cluster

    - by alessandra
    I have a GPFS cluster composed of 10 Linux nodes, managed by a primary server A, which also acts as NSD server for a first stack of disks. I attached a new JBOD to one of the nodes (call it node B), which I would like to become an NSD server for this new stack of disks, while still being included in the cluster so that the disks are available to all the nodes. Node B is connected to the cluster via ethernet. How can I make the new NSDs visible to all the nodes of the cluster? I can create the new NSDs, but when I try to create the filesystem on node B the mmcrfs command times out. It looks like the nodes of the cluster cannot resolve the filesystem location, even though I specify the disks as attached to server B in the descriptor file. Would it be better to remove node B from the cluster, create a cluster of its own with its attached filesystem, and connect it remotely to the previous cluster? Or would a clustered NFS solution be a better fit? Can you please give me any suggestions?
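
    A hedged sketch of the usual route (disk and node names here are hypothetical, and the exact descriptor syntax varies by GPFS release): declare node B as the NSD server in the disk stanza, create the NSDs, then build or grow the filesystem from any cluster node.

        # nsd.stanza -- newer stanza-style descriptor format:
        %nsd:
          device=/dev/sdb
          nsd=nsd_b_01
          servers=nodeB
          usage=dataAndMetadata
          failureGroup=2

        mmcrnsd -F nsd.stanza               # define the NSDs, served by nodeB
        mmcrfs gpfs1 -F nsd.stanza -T /gpfs1  # or mmadddisk to grow an existing fs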

    Read the article

  • Ensuring a repeatable directory ordering in linux

    - by Paul Biggar
    I run a hosted continuous integration company, and we run our customers' code on Linux. Each time we run the code, we run it in a separate virtual machine. A frequent problem is that a customer's tests will sometimes fail because of the directory ordering of their code checked out on the VM. Let me go into more detail. On OS X, the HFS+ file system ensures that directories are always traversed in the same order. Programmers who use OS X assume that if it works on their machine, it must work everywhere. But it often doesn't work on Linux, because Linux filesystems do not offer ordering guarantees when traversing directories. As an example, consider two files, a.rb and b.rb. a.rb defines MyObject, and b.rb uses MyObject. If a.rb is loaded first, everything works. If b.rb is loaded first, it tries to access an undefined constant MyObject, and fails. Worse still, it doesn't always just fail: since directory order on Linux is not deterministic, the order differs between machines, so sometimes the tests pass and sometimes they fail. This is the worst possible result. So my question is: is there a way to make filesystem ordering repeatable? Some flag to ext4, perhaps, that says it will always traverse directories in some order? Or maybe a different filesystem that has this guarantee?
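
    For what it's worth, a small illustration of the difference (not an ext4 flag; as far as I know no mainstream Linux filesystem guarantees readdir order, so the robust fix is to sort wherever order matters):

        # raw readdir order -- filesystem-specific, not repeatable across machines:
        ls -f /some/project/dir
        # deterministic: sort explicitly (shell glob expansion is also sorted per POSIX)
        ls -f /some/project/dir | sort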

    Read the article

  • Linux: Case-INSENSITIVE Filesystem

    - by Quandary
    What methods are there to make the Linux filesystem case-INSENSITIVE? I have asp.net applications developed on Windows, but there are always issues with capitalization/spelling on Mono when putting them on Linux. One way is to mount a localhost SMB share to /var/www. Are there any others?
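
    One option that targets the Mono case specifically (a sketch; behavior depends on the Mono version): Mono's IOMAP layer remaps mismatched case and path separators at runtime, without touching the filesystem.

        # set in the environment of the web server / mod_mono / xsp process
        export MONO_IOMAP=all    # "case" remaps case only; "all" also maps \ to /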

    Read the article

  • Mac OS X - rmdir fails with "Operation not permitted" for a folder created by a PC on a removable dr

    - by maxint
    Hello. I have a problem (using Mac OS X 10.5.8) with the access rights of a folder that was presumably created by a virus on a disk-on-key drive when I used it with a PC. I can't remove the folder or change its name. In Finder's Info window the Lock box is unchecked and uncheckable - if I try to check it, it flips back to off. Please see the details:

        MaxBookAir:GARMIN'S maxint$ rmdir winamp_cache_0001/
        rmdir: winamp_cache_0001/: Operation not permitted
        MaxBookAir:GARMIN'S maxint$ mv winamp_cache_0001 test
        mv: rename winamp_cache_0001 to test: Operation not permitted
        MaxBookAir:GARMIN'S maxint$ GetFileInfo winamp_cache_0001
        directory: "/Volumes/GARMIN'S/winamp_cache_0001"
        attributes: avbstclinmedz
        created: 12/23/2009 14:34:52
        modified: 02/13/2010 22:52:36
        MaxBookAir:GARMIN'S maxint$ stat -x winamp_cache_0001
        File: "winamp_cache_0001"
        Size: 32768 FileType: Directory
        Mode: (0777/drwxrwxrwx) Uid: ( 502/ maxint) Gid: ( 20/ staff)
        Device: 14,5 Inode: 7439 Links: 1
        Access: Wed Dec 23 00:00:00 2009
        Modify: Sat Feb 13 22:52:36 2010
        Change: Sat Feb 13 22:52:36 2010
        MaxBookAir:GARMIN'S maxint$ stat -r winamp_cache_0001
        234881029 7439 040777 1 502 20 0 32768 1261506600 1266081756 1266081756 1261559092 131072 64 32768 winamp_cache_0001
        MaxBookAir:GARMIN'S maxint$ ls -lTd winamp_cache_0001/
        drwxrwxrwx 1 maxint staff 32768 Feb 13 22:52:36 2010 winamp_cache_0001/
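
    A couple of things worth trying (a sketch; it assumes the volume is FAT-formatted, and the device node below is hypothetical - check diskutil list):

        sudo chflags -R nouchg "/Volumes/GARMIN'S/winamp_cache_0001"  # clear immutable flags
        sudo rm -rf "/Volumes/GARMIN'S/winamp_cache_0001"
        # if that fails, unmount and repair the FAT structures:
        diskutil unmountDisk /dev/disk2
        sudo fsck_msdos /dev/disk2s1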

    Read the article

  • How to find cause of main file system going to read only mode

    - by user606521
    Ubuntu 12.04. The file system goes into read-only mode frequently. First of all, I have already read this question: file system is going into read only mode frequently. But I need to know whether it's caused by something other than a dying hard drive. This is a server provided by my client; I'm just running some node.js workers + one node.js server there, and I'm using mongodb. From time to time (every 20-50h) the system suddenly makes the filesystem read-only, the mongodb process fails (due to the read-only fs) and my node workers/server (which are started by forever) are just killed. Here is the log from dmesg - I can see some errors there, messages that the FS is going read-only, and also a JOURNAL error, but I would like to find the cause of those errors: http://speedy.sh/Ux2VV/dmesg.log.txt

    edit

        smartctl -t long /dev/sda
        smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.5.0-23-generic] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
        SMART support is: Unavailable - device lacks SMART capability.
        A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    What am I doing wrong? The same happens for sda2. Moreover, when I now type any command that doesn't exist in the shell I get this:

        Sorry, command-not-found has crashed! Please file a bug report at:
        https://bugs.launchpad.net/command-not-found/+filebug
        Please include the following information with the report:
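
    A few diagnostics that may narrow it down (a sketch; smartctl often needs an explicit device type on virtual or USB-attached disks):

        sudo smartctl -a -T permissive /dev/sda   # force past the "lacks SMART" check
        sudo smartctl -a -d sat /dev/sda          # try an explicit passthrough type
        dmesg | grep -iE 'ext4|journal|remount|i/o error'
        sudo tune2fs -l /dev/sda1 | grep -i -A1 'filesystem state'
        grep errors= /etc/fstab                   # errors=remount-ro triggers this behavior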

    Read the article

  • What's the fastest filesystem for developer builds?

    - by Dan Fabulich
    I'm putting together a Linux box that will act as a continuous integration build server; we'll mostly build Java stuff, but I think this question applies to any compiled language. What filesystem and configuration settings should I use? (For example, I know I won't need atime for this!) The build server will spend a lot of time reading and writing small files, and scanning directories to see which files have been modified.
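
    Whatever filesystem wins your benchmarks, here is a minimal sketch of the mount options that usually help this workload (ext4 assumed; device and mount point are hypothetical):

        # /etc/fstab -- kill atime updates, batch journal commits
        /dev/sdb1  /build  ext4  noatime,commit=60  0 2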

    Read the article

  • How to only allow particular programs to modify certain files?

    - by Mehrdad
    I want to make certain directories on my drives read-only except to particular programs, which will have full permissions. For example, Microsoft Word might be allowed to modify the files in my Documents folder, but other programs (such as the Command Prompt) would not be allowed to. I'm guessing this requires a file system filter driver of some sort, but I don't know which programs have this capability. Is there any (free) program that can do this for me?

    Read the article

  • FAT filesystem analysis tool

    - by Andy
    I have a dump of a FAT file system. Is there a Windows tool I can use to analyse it, including to:

        - Provide basic information (sector size etc.)
        - Validate the file system, basic corruption checking
        - Allow the files and directory structure to be viewed and possibly edited (i.e. mounting as a Windows partition)

    Thanks, Andy

    Read the article

  • Best alternatives to recover lost directories in FAT32 external hard drive?

    - by Sergio
    Hi: I have a 320 GB ADATA CH91 external hard drive. I guess it has some problems with the connector of the USB jack. The point is that on certain occasions it fails on write operations, generating data loss. Right now I've lost a directory with several GBs of very useful information. I have not attempted to write to the disk since. What tool would you recommend to recover the lost data? The disk is FAT32-formatted (only one partition) and I use both Linux and Windows. What filesystem format would you recommend to avoid future data loss? I currently only use this external hard drive in Linux, so there are several available choices (FAT, NTFS, ext3, ext4, reiser, etc.). Regards, Sergio
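
    A sketch of the usual free recovery route on Linux (the device node is hypothetical - check dmesg or lsblk after plugging the drive in):

        sudo apt-get install testdisk   # ships both tools below
        sudo testdisk /dev/sdX          # interactive; for FAT try "Advanced > Undelete"
        sudo photorec /dev/sdX          # file carving, if directory entries are gone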

    Read the article

  • Deleting windows.edb and unchecking Indexing service lead to hard drive file records swapping

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on the hard drive by deleting the windows.edb indexing file. I also stopped the Windows Search service, as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the properties dialog for the C:\ drive, I did the same for two USB-connected hard drives (J:\ and I:\). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky to my ears at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3 GB of space, yeehaw. However, as soon as I had done this I could not access the USB-connected hard drives anymore. The error I got was "I:\photos is not accessible" "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\. Here is where I enter the twilight zone... I try disconnecting the I:\ USB hard drive, but XP shows me that the J:\ drive has disconnected instead and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists the contents of the other drive at root level. I tried connecting the drives vice versa and the same thing happens. I tried taking one of the hard drives to another computer, and when I connect it there it lists not its own contents but the contents of the other hard drive, and gives the same error as above when I try to access any of the folders (even folders on the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos)??? And no, this is not me mixing up my drive letters. Computer Management - Disk Management shows the same result as Explorer: the drive size is correct (one is 500GB, the other is 640GB) but the drive name is that of the opposite drive, as are the contents. Also, one drive was full of data and the other almost empty, but each incorrectly shows the free-space status of the other drive. Somehow the USB drives seem to have switched file tables, file records, boot records or something. Extremely weird! Even weirder, if I try to create a text file or folder on either drive, it works fine - accessing them, saving, whatever, all good - but accessing any other data on the drive gives me an error. Does anyone have a clue what is going on and, more importantly, how I can restore the correct folder listings to access my family photos??? cheers, linni

    Read the article

  • How do I debug this FS error on a flash device?

    - by abc
    I have console access to an embedded Linux device. This device has flash memory, part of which is partitioned as a FAT filesystem. It's running linux-2.6.31. However, I have been seeing these errors on the console lately, and the FAT file system becomes read-only:

        111109:154925 FAT: Filesystem error (dev loop0)
        111109:154925 fat_get_cluster: invalid cluster chain (i_pos 0)
        111109:154925 FAT: Filesystem error (dev loop0)
        111109:154925 fat_get_cluster: invalid cluster chain (i_pos 0)

    I cannot understand why this happened. What is the root cause? And what is the fix? I would appreciate answers that can point me to how to investigate the possible root cause of this issue on the device.
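
    As a first step, a sketch of an offline check on the loop-mounted FAT partition (the mount path is hypothetical; fsck.vfat is from dosfstools and may be named dosfsck on older systems):

        umount /mnt/fat-partition        # never fsck a mounted filesystem
        fsck.vfat -v -n /dev/loop0       # -n: report only, change nothing
        fsck.vfat -a /dev/loop0          # -a: auto-repair if the report looks sane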

    Read the article

  • What is the meaning of those numbers in the second column after typing "ls -l"?

    - by Nick Dong
        drwxr-xr-x. 2 root root 4096 Jun 29 16:44 db
        drwxr-xr-x. 2 root root 4096 Jun 29 16:44 djproject
        -rwxr-xr-x. 1 root root   38 Jun 29 16:44 index.html
        drwxr-xr-x. 2 root root 4096 Jun 29 16:44 jobs
        -rwxr-xr-x. 1 root root  252 Jun 29 16:44 manage.py
        drwxr-xr-x. 3 root root 4096 Jun 29 16:44 templates

    What is the meaning of those numbers in the second column? Do they have some relation to file and folder permissions? How do I change the numbers?
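
    For what it's worth, that column is the hard-link count, which this small sketch demonstrates (names are arbitrary; stat -c is GNU coreutils):

        mkdir -p demo/sub1 demo/sub2
        stat -c '%h' demo      # prints 4: "demo", its own ".", and one ".." per subdir
        touch file; ln file mirror
        stat -c '%h' file      # prints 2: each hard link raises the count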

    Read the article

  • How can you make a Windows USB HDD Modify All for All Users?

    - by David Allan Finch
    Hi, I use a USB HDD a lot between lots of different Windows boxes. What I find after a while is that there end up being lots of different permissions on the files, in some cases stopping me from looking at files or removing them. They want admin rights, or sometimes you even need to put the disk back into the original machine with the original user. This is a right pain. Is there a way of making the disk have Modify All for All Users, and making this the default for all files on the disk? Thanks
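
    A sketch with the built-in icacls tool (X: is a hypothetical drive letter; *S-1-1-0 is the locale-independent SID for Everyone):

        icacls X:\ /grant *S-1-1-0:(OI)(CI)M /T
        :: (OI)(CI) makes the grant inherit to files and subfolders; M = Modify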

    Read the article

  • Why does the file date always change to the current date?

    - by Marshall
    We are a programming shop, but this is not a programming question. My boss has put an external HD on the network. It contains the 'home' folders for users on the network. He uses it to place VB projects that he wants me to work on. But no matter what date and time he places a project on the drive, the file dates (modified) always show the current date, though nothing in the files has changed. It makes it very hard to confirm that he has given me the latest versions. (He is not a fan of version control and nothing I do will convince him otherwise.) Any ideas why this happens and how to prevent it from happening? P.S. As I wrote this I decided to add the last-accessed date to the file display, and those dates happen to show the dates I expect to see. Why is the modified date getting changed, but not the accessed date? Does the accessed date change only when the files are opened or read, whether changed or not? Note: I use Directory Opus 9, a replacement for the Windows file browser. Thanks, Marshall
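
    To check which timestamp a file browser is actually showing, a small sketch with cmd's dir switches (the filename is hypothetical):

        dir /T:W project.vbp   :: last write (the "modified" column)
        dir /T:A project.vbp   :: last access
        dir /T:C project.vbp   :: creation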

    Read the article

  • Read-only file system RHEL

    - by gthm geeky
    I am using RHEL 5.5 on my PC. I was playing around with chmod and chown, and suddenly my home folder became read-only: all the folders in /home/goutham/, where goutham is the username, became read-only. I can delete files for a few seconds after turning on the system; after that it says Permission denied: read-only file system. I can't even create a folder with sudo mkdir. Please help me. My OS is on /dev/sda3.
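
    A sketch of the first things to check (device names taken from the question; run fsck only with the filesystem unmounted, e.g. from rescue media):

        dmesg | tail -n 30                 # look for the error that triggered the remount
        mount | grep ' / '                 # confirm it really is mounted ro
        sudo mount -o remount,rw /         # try flipping it back
        sudo fsck -f /dev/sda3             # from rescue mode if / won't stay rw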

    Read the article

  • Only show hidden files in certain directories

    - by Joseph Silvashy
    On a Unix system (or more specifically on OS X), is it possible to show hidden files in only some directories? For example, as a developer I want to see the hidden files in my Rails projects but not on my desktop as well. I guess I'm just tired of seeing all these .DS_Store and .trashes files swimming around; any remedies not directly related are welcome too!
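
    Finder's hidden-file toggle is global rather than per-directory, but here is a sketch of two common remedies (both are real defaults keys on OS X):

        defaults write com.apple.finder AppleShowAllFiles -bool true && killall Finder
        # stop .DS_Store litter on network volumes entirely:
        defaults write com.apple.desktopservices DSDontWriteNetworkStores true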

    Read the article

  • Linux's best filesystem to work with 10000's of files without overloading the system I/O

    - by mhambra
    Hi all. It is known that certain AMD64 Linuxes are subject to becoming unresponsive under heavy disk I/O (see Gentoo forums: AMD64 system slow/unresponsive during disk access (Part 2)); unfortunately I have such a one. I want to put the /var/tmp/portage and /usr/portage trees on a separate partition, but what FS should I choose for it? Requirements:

        - for journaling, performance is preferred over safe data read/write operations
        - optimized for reading/writing 10000s of small files

    Candidates:

        - ext2 without any journaling
        - BtrFS

    In Phoronix tests, BtrFS demonstrated good random access performance (far better than XFS; it may thereby be less CPU-aggressive). However, the unpacking operation seems to be faster with XFS there, yet I have tested that unpacking a kernel tree to XFS makes my system react 51% slower, regardless of any renice'd processes and/or schedulers. Why no ReiserFS? Google'd this (q: reiserfs ext2 cpu): "1 Apr 2006 ... Surprisingly, the ReiserFS and the XFS used significantly more CPU to remove file tree (86% and 65%) when other FS used about 15% (Ext3 and ..." Is it the same now?
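
    Whatever the choice, a sketch of fstab lines commonly used for portage trees (mount points from the question; the device and tmpfs size are hypothetical):

        /dev/sdb2  /usr/portage      ext2   noatime,nodiratime  0 0
        # build scratch space is frequently put on tmpfs instead of disk:
        tmpfs      /var/tmp/portage  tmpfs  size=4G,noatime     0 0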

    Read the article

  • Umount stale glusterfs partition

    - by Khaled
    I am using glusterfs on several Ubuntu servers: two of them are running glusterfs servers in replication mode. Without any clear error, the glusterfs partition became stale, and the system shows this error when I try to access the stale partition:

        Transport endpoint is not connected

    Also, when running ls -l on the parent folder I get:

        d????????? ? ? ? ? ? myfolder

    I tried all the commands I could find to umount this partition, but I could not get it done:

        umount -l /path/to/mount/point
        umount -f /path/to/mount/point

    Also, using the fuser command to show processes accessing this folder did not work. Unloading the fuse kernel module cannot be done, as it is clear from the kernel config that fuse is built into the kernel and not a loadable module; I found this line in /boot/config-2.6.32-24-server:

        CONFIG_FUSE_FS=y

    I have been left with two options:

        1. Reboot the system.
        2. Create another mount point like myfolder2 and mount this again using sudo glusterfs -f /etc/glustefs/glusterfs.vol /path/to/folder2.

    Of course, I have chosen to go with option 2. Has anyone faced such an issue before? Does anyone have a better solution for such a case?
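
    One more thing worth trying before a reboot (a sketch; fusermount ships with fuse and sometimes succeeds on stale FUSE mounts where umount fails):

        sudo fusermount -uz /path/to/mount/point   # -u unmount, -z lazy detach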

    Read the article

  • zfs rename/move root filesystem into child

    - by Anton
    A similar question exists, but the solution (using mv) is awful because in this case it works as "copy, then remove" rather than a pure "move". So, I created a pool:

        zpool create tank /dev/loop0

    and rsynced my data from another storage in there directly, so that my data is now in /tank:

        zfs list
        NAME   USED  AVAIL  REFER  MOUNTPOINT
        tank   591G  2.10T   591G  /tank

    Now I've realized that I need my data to be in a child filesystem, not in the /tank filesystem directly. So how do I move or rename the existing root filesystem so that it becomes a child within the pool? A simple rename won't work:

        zfs rename tank tank/mydata
        cannot rename to 'tank/mydata': datasets must be within same pool

    (Btw, why does it complain that the datasets are not within the same pool when in fact I only have one pool?) I know there are solutions that involve copying all the data (mv, or sending the whole dataset to another device and back), but shouldn't there be a simple, elegant way? Just noting that I do not care about snapshots at this stage (there are none yet to care about).
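
    For reference, a sketch of the usual workaround (it does copy blocks, but it stays inside the pool; as far as I know the root dataset of a pool cannot be renamed in place):

        zfs snapshot tank@move
        zfs send tank@move | zfs receive tank/mydata
        # verify the copy, then drop the snapshots and the originals under /tank
        zfs destroy tank@move
        zfs destroy tank/mydata@move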

    Read the article

  • Hard drive failed, suspected filesystem corruption, still cannot salvage any data from harddrive

    - by Hippy-Head
    Firstly, I am terribly sorry if this is a duplicate, but I couldn't find a similar issue to mine, so here goes. I have a 1TB HDD, bought around 8 months ago, used as a backup hard drive. I had not used the drive for a period of time whatsoever, and when I was trying to get back to some files on it, it was completely wiped, just like that. At first it would not boot. I tried everything from command-line chkdsk and filesystem recovery software to rebuild it. After a few attempts I managed to initialize it; at that time that was an achievement. The problems started when I tried to recover the data inside. I have used A LOT of software, free and commercial, on both Mac and Windows, with the help of cmd or Terminal commands, however no data of any kind was recovered, even after leaving it to scan thoroughly for around 9-10 hours all night, sometimes longer, with no results at all. I am somewhat desperate; I am usually good at retrieving data from corrupt hard drives, but not in this case. Call me paranoid, but I do not want to give it to someone to fix it for me, as I have a lot of photos and personal stuff that I do not want anyone to see.
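
    If the drive still responds at the block level, a sketch of a safe next step (GNU ddrescue assumed; the device node is hypothetical) is to image it once and run all further recovery against the image:

        sudo ddrescue -d -r3 /dev/sdX drive.img drive.mapfile   # read raw, retry bad areas 3x
        sudo photorec drive.img                                 # carve files from the image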

    Read the article
