Search Results

Search found 2515 results on 101 pages for 'distributed filesystems'.


  • Fixing corrupt files or corrupt file table on a USB drive?

    - by Kelsey
    I was doing a virus scan on an external USB drive while copying data over to it. While AVG was scanning, my system locked up (I think because the USB drive ran out of space) and required a reboot. Since then, none of the data on the external drive is accessible. I can see all the files and directories in the root, but I cannot browse into any of them; Windows 7 gives an error stating they are corrupt. I think the file table, or whatever it uses to store the index of what exists on the drive, has been corrupted, since it still shows the drive as being almost full, yet everything I do a properties check on says it is 0 bytes. Does anyone know how to 'unlock' or recover this data? Is there a way to rebuild the file table somehow? Luckily I can recover this data from other sources as a last resort, but I would like to fix this if possible. Any help would be appreciated. Thanks.
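
    A minimal first step worth trying, assuming the drive is FAT32 or NTFS and still appears as G: (the letter is only an example; and if the data is irreplaceable, image the drive before letting anything write to it):

        rem Check the file system and repair what can be rebuilt from its own metadata.
        chkdsk G: /f
        rem If that is not enough, /r additionally scans for bad sectors (much slower).
        chkdsk G: /r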

    Read the article

  • Out of disk space on 4GB partition yet it's only using 2GB

    - by Camsoft
    I'm running Ubuntu and have a problem where the root partition has run out of disk space. When I run df -h I get the following:

        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/sda6   4.6G  4.5G  0      100%  /

    Yet there are only 2GB of files actually using up this partition. I then ran df -i and got the following:

        Filesystem  Inodes  IUsed   IFree   IUse%  Mounted on
        /dev/sda6   305824  118885  186939  39%    /

    I have no idea what the -i flag does (it reports inode usage rather than block usage), but it clearly shows that only 39% is used. Can anyone explain where my disk space has gone?
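
    The usual suspects for this symptom can be checked without unmounting anything; a short diagnostic sketch (the device name /dev/sda6 is taken from the question):

        # Files deleted while still held open by a process keep consuming blocks.
        sudo lsof +L1
        # ext filesystems reserve ~5% of blocks for root by default; on 4.6G that is ~230MB.
        sudo tune2fs -l /dev/sda6 | grep -i 'reserved block'
        # Sum what is actually visible on this filesystem only (-x stays on one fs).
        sudo du -xsh /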

    Read the article

  • Cannot find my hard disk while installing Linux: “No root file system defined” error

    - by Syam Kumar S
    I am trying to install Linux on my computer (I tried Ubuntu 10.04 and Linux Mint 9). I started the installation wizard, but on the hard disk selection page no hard disk is displayed. I have a 500GB disk with 5 partitions and Windows 7 Ultimate in one partition. If I click the Forward button, it shows an error: "No root file system defined". I have tried to install by booting from CD and from a pendrive, but both show the same error. When I load Linux as a live CD it doesn't show the hard disk either. My hard disk works fine in Windows 7. System config: Intel i3 2100, 500GB HDD, 2GB RAM
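
    From the live session you can at least confirm whether the kernel sees the disk; a hedged diagnostic sketch (one commonly reported culprit for this exact symptom is leftover BIOS/fakeRAID metadata, which the dmraid steps below check for; treat the erase step as a last resort and back up first):

        # Does the kernel see the disk at all?
        sudo fdisk -l
        # List any leftover fakeRAID metadata claiming the disk.
        sudo dmraid -r
        # Only if stale metadata shows up AND the disk is not really part of a RAID set:
        # erasing it often makes the installer see the disk.
        # sudo dmraid -rE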

    Read the article

  • zram trimming by writing zero pages

    - by qdot
    I'm using zram as the backing block device for my /tmp filesystem, in the following manner:

        echo 8000000000 > /sys/block/zram0/disksize
        mkfs.ext4 -O dir_nlink,extent,extra_isize,flex_bg,^has_journal,uninit_bg -m0 \
            -b 4096 -L "zram0" /dev/zram0
        mount -o barrier=0,commit=240,noatime,nodev,nosuid /dev/zram0 /tmp
        chmod aogu+rwx /tmp

    It works out reasonably well for me. However, there is an issue: when files are removed, their blocks are not zeroed, so zram does not release the compressed pages. Obviously, running dd if=/dev/zero of=/tmp/ZERO bs=1M count={free-space-some-rest}; rm /tmp/ZERO clears it up; zram gets notified of the zero pages and shrinks its store. How can I get ext4 to zero used pages on delete? Also, any other suggestions on how to optimize this?
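
    A less brute-force approach, assuming the running kernel's zram driver honors discard requests (true for reasonably recent kernels), is to let ext4 send TRIM/discard instead of writing zeroes:

        # Continuous: tell ext4 to issue a discard for each freed extent.
        mount -o remount,discard /tmp
        # Or batched: trim all free space in one pass, e.g. from cron.
        fstrim -v /tmp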

    Read the article

  • What does the filesystem give me?

    - by Alec
    I'm writing a specialised database. It currently sits atop ext4, with one big file that it reads and writes to. I'm wondering whether it would be worth forgoing the filesystem and reading and writing directly to the device. I already use O_DIRECT, so as far as I can tell it wouldn't require much of a code change. What might the risks, advantages and disadvantages of forgoing the filesystem be? Is it likely to improve performance, or harm it?
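
    One cheap way to estimate the answer before committing to a rewrite is to benchmark the same I/O pattern against a file on ext4 and against a spare raw partition; a rough sketch using dd with O_DIRECT (the device name is a placeholder, and writing to it destroys its contents):

        # Through the filesystem, bypassing the page cache as the database already does.
        dd if=/dev/zero of=/mnt/ext4/testfile bs=1M count=1024 oflag=direct
        # Straight to a scratch block device. DESTRUCTIVE: double-check the device name.
        dd if=/dev/zero of=/dev/sdX1 bs=1M count=1024 oflag=direct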

    Read the article

  • How can I tell whether an interrupted rm -r removed any files?

    - by Jake Petroules
    I installed sshfs on a Linux box and then mounted my Mac home directory. In the middle of troubleshooting a configuration issue, I did an ls -l on the mount directory (as a normal user), receiving: total 0 d????????? ? ? ? ? ? sl I then ran sudo rm -r on that directory but pressed Ctrl+C to terminate it immediately, before it looked like the command had done anything. I notice no files missing, but I want to be sure: is there a way I can somehow inspect the filesystem log on my Mac to see if any files were actually removed?
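
    macOS doesn't keep a user-inspectable deletion log by default, but if a recent Time Machine (or any rsync-style) backup exists, a dry-run comparison will list anything present in the backup and missing from the live directory; a sketch with paths as placeholders:

        # -a compare everything, -v list differences, -n dry run (changes nothing).
        rsync -avn /Volumes/Backup/Users/jake/ /Users/jake/ | less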

    Read the article

  • running automated fsck on remote server

    - by GriffinHeart
    I had another question about df, and now I've come to the conclusion that I need to run fsck on my partition. I've been reading about it and would like some advice, if possible. The situation is this: I have no physical access to the server and I want to run fsck. From what I've read, I just need to touch /forcefsck and fsck will run on the next reboot. My question is: with what arguments will fsck run? Will it need user input to correct errors, etc.? And after running, will it save a log of what happened? If it ran like this it would be perfect; is there any way of enforcing that on reboot?

        fsck -v -p /machine/disk/p1 > fscklog.txt 2>&1

    Also, here they describe this: it's a good idea on Debian and Debian-derivatives like Ubuntu to edit /etc/default/rcS on remote servers and set "FSCKFIX=yes"; that adds "-y" to the boot-time fsck, so it doesn't risk the remote server being stuck waiting for someone to log in at the console and run fsck. But on CentOS that doesn't seem to exist. I only have ssh access at the moment, so that is why I'm being so picky about it. Here's some info about disks and mounted volumes on the server: http://pastebin.centos.org/33314 Thanks.
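
    A hedged sketch for sysvinit-era CentOS (5/6); details vary by release, so verify against your init scripts before relying on it remotely:

        # Request a filesystem check on the next boot...
        touch /forcefsck
        # ...or do both at once: -F forces fsck, -r reboots (sysvinit shutdown).
        shutdown -rF now
        # Afterwards, the results normally land in the boot logs:
        grep -i fsck /var/log/boot.log /var/log/messages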

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or, in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
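
    For a ballpark: with hash-prefix buckets, two levels of two hex characters each give 256 × 256 = 65,536 directories, so three million files average out to about 46 per directory; a single level (256 directories) would still leave roughly 11,700 per directory. A sketch of the bucketing, where hashing the name with md5 is just one convention:

        name="abc.ext"
        h=$(printf '%s' "$name" | md5sum | cut -c1-4)
        # e.g. h=9e10 -> ./9e/10/abc.ext
        mkdir -p "${h:0:2}/${h:2:2}"
        mv "$name" "${h:0:2}/${h:2:2}/$name"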

    Read the article

  • OS X 10.6 Snow Leopard no longer mounting an external USB drive

    - by Brant Bobby
    I have a 1TB generic external hard drive containing a single HFS partition. I originally formatted this using Disk Utility and it worked fine. Now, for some reason, it's not auto-mounting when I start up. Using mount at the command line gives the following error:

        $ sudo mount /dev/disk1s2 /Volumes/Test
        /dev/disk1s2 on /Volumes/Test: Incorrect super block.

    ... but if I use the mount_hfs command it works fine, mounts, and is readable:

        $ mount_hfs /dev/disk1s2 /Volumes/Test/

    fsck gives me an error about a bad superblock:

        $ fsck /dev/disk1
        ** /dev/rdisk1 (NO WRITE)
        BAD SUPER BLOCK: MAGIC NUMBER WRONG

    ... but fsck_hfs -fn /dev/disk1s2 doesn't find any problems and reports that the volume appears to be OK. In Disk Utility, the drive appears to have a single MS-DOS partition, with a curious notice that it appears to be partitioned for Boot Camp. I have the Boot Camp HFS driver installed in Windows 7, and that OS sees the drive/partition normally. What's wrong with my disk?
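
    Worth noting when reproducing this: plain mount has to guess the filesystem type, while mount_hfs (or an explicit -t) is told it; and the failing fsck was pointed at the whole disk (/dev/disk1) rather than the HFS slice (/dev/disk1s2), so it has no valid superblock to find there. A sketch of the more explicit equivalents:

        # Same effect as mount_hfs: state the type instead of letting mount probe.
        sudo mount -t hfs /dev/disk1s2 /Volumes/Test
        # Or let the macOS disk-management layer do the mounting.
        diskutil mount disk1s2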

    Read the article

  • can 'Percona MySQL Data Recovery' be used to recover dropped tables if the datadir filesystem is mounted as /

    - by Tom Geee
    According to Percona: "Unmount the filesystem or make it read-only if... you have filesystem corruption OR you have dropped tables in innodb_file_per_table format." If I have innodb_file_per_table enabled and accidentally dropped a table while the datadir is mounted within the / partition, can data still be recovered? Obviously you can't work with an unmounted root filesystem. Our VPS host has a default filesystem table which we cannot customize; I was wondering in case of any future scenario. Edit: would mounting the / filesystem through NFS onto another system as read-only be a workaround? TIA.
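
    The point of the Percona advice is simply to stop anything from reusing the freed blocks, and most of that benefit is available on a root filesystem without unmounting it; a hedged sketch (the clean remount will fail if files are open for writing, and the sysrq fallback, which needs sysrq enabled, bluntly remounts every filesystem read-only):

        # Try a clean read-only remount of /.
        mount -o remount,ro /
        # Emergency fallback: ask the kernel to remount all filesystems read-only.
        echo u > /proc/sysrq-trigger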

    Read the article

  • What was scientifically shown to support productivity when organizing/accessing files and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in the same folder, a folder that could be seen as a kind of Inbox where most files (non-installations) enter my system. This way I end up with a big collection of files that is hard to organize properly. I mostly end up making folders that match the file type, but then I still have several gigabytes of data per folder, which isn't efficient enough for me to use the folders productively. I'd rather do a few clicks than have to search through the files, whether by some software product or by looking through the folder. Often the file names themselves are not descriptive, so it would be easier to recognize files if there were few in a folder rather than thousands of them. "Scaling in the structure of directory trees in a computer cluster" summarizes this problem as follows: "The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information." [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004). [2] S. Lawrence and C. L. Giles, Nature 400, 107-109 (1999). [3] R. Ferrer i Cancho and R. V. Solé, Proc. R. Soc. London, Ser. B 268, 2261 (2001). [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002). The paper goes on to explain how such data is usually organized by taking a general look at it, but judging from the abstract and conclusion it doesn't arrive at a conclusion or approach that results in a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific solution to it. Searching further, I don't seem to find anything useful, or any free papers that approach this problem, so it might be that I'm looking in the wrong place. I've also noticed that there are different ways to term this problem, which lead to different sets of papers; perhaps a paper is out there and I'm just not using the same, more scientific, terms it uses. I once heard a story about an advocate with a laptop who simply outperformed an advocate who had tons of papers, which shows how proper organization leads to productivity; but that story didn't share details on how the advocate used the laptop or how he had organized his data. In any case, it was far more effective than how most of us organize our data these days... Note that I'm not asking for advice on how I should organize my data; I'm not looking for suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm that an approach actually helps me reach my goal.

    Read the article

  • How to convert ext3 partition to use encrypted file system without losing data?

    - by User1
    My embedded Linux device has 2 partitions: a small root partition containing the OS, and a big data partition which uses ext3. I want to encrypt the data partition by using an encrypted file system, and I don't want to lose any data on the partition. The root partition is too small to hold all the data of the data partition, and it is not possible to use any external data storage. Are there any tools that can convert the filesystem of the data partition from ext3 to an encrypted fs without copying all the files to another place?
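
    For what it's worth, recent cryptsetup (2.x) can do exactly this in place with LUKS2 re-encryption, though whether it fits an embedded device's toolchain is another matter; a hedged sketch (the device name is a placeholder, an interrupted run can be resumed, but a backup is still strongly advised):

        # Shrink the filesystem by at least 32M to make room for the LUKS2 header
        # (the size argument is left as a placeholder).
        e2fsck -f /dev/mmcblk0p2
        resize2fs /dev/mmcblk0p2 <size-minus-32M>
        # Encrypt in place; data is shifted down and the header written in the freed space.
        cryptsetup reencrypt --encrypt --reduce-device-size 32M /dev/mmcblk0p2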

    Read the article

  • Folder/File permission transfer between alike file structure

    - by Tyler Benson
    So my company has recently upgraded to a new SAN, but the person who copied all the data over must have done a drag-and-drop or basic copy to move everything; apparently xcopy is not something he cared to use. So now I am left with the task of duplicating all the permissions. The structure has changed a bit (more files/folders have been added) but for the most part has stayed unchanged. I'm looking for suggestions to help automate this process. Can I use xcopy to transfer ONLY permissions from one tree to another? Would I just ignore any folders/permissions that don't line up correctly? Thanks a ton in advance, Tyler
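
    xcopy can't re-apply ACLs to files that already exist at the destination, but icacls can; a hedged sketch (paths are placeholders, and entries whose relative paths no longer match are reported as errors and skipped rather than fixed):

        rem Save the ACLs of the whole old tree (/t = recurse, /c = continue past errors).
        icacls "D:\OldShare" /save acls.txt /t /c
        rem Re-apply them at the new location; /restore resolves the saved (relative)
        rem paths against the directory given here, so point it at the matching parent.
        icacls "E:\" /restore acls.txt /c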

    Read the article

  • Deleting windows.edb and unchecking Indexing service led to hard drive file records swapping

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on my hard drive by deleting the windows.edb indexing file. I also stopped the Windows Search service, as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the Properties dialog for the C:\ drive, I did the same for two USB-connected hard drives (J:\ and I:\). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky to me at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3 GB of space, yeehaw. However, as soon as I had done this I could not access the USB-connected hard drives anymore. The error I got when I tried to open the photos directory on I:\ was "I:\photos is not accessible. The file or directory is corrupted and unreadable."

    Here is where I enter the twilight zone... I try disconnecting the I:\ USB hard drive, but XP shows me that the J:\ drive has disconnected instead, and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists the contents of the other drive at root level. I tried connecting the drives vice versa and the same thing happens. I took one of the hard drives to another computer, and when I connect it there it lists not its own contents but the contents of the other hard drive, and gives the same error as above when I try to access any of the folders (even folders on the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos)??? And no, this is not me mixing up my drive letters.

    Computer Management - Disk Management shows the same result as Explorer: the drive size is correct (one is 500GB, the other is 640GB) but the drive name is that of the opposite drive, and so are the contents. Also, one drive was full of data and the other almost empty, but each incorrectly shows the free-space status of the other drive. Somehow the USB drives seem to have switched file tables, file records, boot records or something; extremely weird! Even weirder, if I create a text file or folder on one of these drives, it works fine: accessing it, saving, whatever, all good. But accessing any other data on the drive gives me an error. Does anyone have a clue what is going on and, more importantly, how I can restore the correct folder listings to access my family photos??? cheers, linni

    Read the article

  • Local links (in browsers) on *nix systems

    - by meder
    On Windows I can access files directly from the browser (or at least I have it configured that way currently; I forget whether it was native like this) with the file:// protocol, so I can access files from, say, the C drive. I'm wondering what the equivalent would be for accessing my files from the browser, if that is at all possible, on a *nix system such as Debian.
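
    For the record, the file:// scheme is not Windows-specific; browsers on Linux accept the same URL form with a POSIX path, e.g.:

        file:///home/meder/
        file:///etc/fstab

    (three slashes: two for the scheme, one for the root of the path; there is no drive letter).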

    Read the article

  • On Windows 7, the last access date is not changed even after reading the file?

    - by Jian Lin
    I have some files on Windows 7 and want to see at what time I read one of them this morning (the morning of February 27), but when I right-click on the file and choose Properties, I see: Accessed: Yesterday, Feb 26, 2011, 2:12:37 PM. So I open the file to read the content again, and then open up Properties again, and still the Accessed date is the same (Feb 26). Even if I add a column to the folder for "Date Accessed", it still shows Feb 26. But today is Feb 27 and I have clearly "accessed" it... so how can I see the true last-accessed date?
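
    Some context that may explain the observation: since Vista, NTFS ships with last-access-time updates turned off by default for performance. The setting can be inspected and changed with fsutil from an administrator prompt (a reboot is needed for the change to take effect):

        rem 1 = updates disabled (the Vista+ default), 0 = updates enabled.
        fsutil behavior query disablelastaccess
        fsutil behavior set disablelastaccess 0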

    Read the article

  • How to clean up an unprocessed orphan inode list?

    - by bmk
    I tried to remount a formerly read-only mounted filesystem read-write:

        mount -o remount,rw /mountpoint

    Unfortunately it did not work:

        mount: /mountpoint not mounted already, or bad option

    dmesg reports:

        [2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list. Please umount/remount instead

    An umount does not work either:

        umount /mountpoint
        umount: /mountpoint: device is busy.
                (In some cases useful info about processes that use
                 the device is found by lsof(8) or fuser(1))

    Unfortunately, neither lsof nor fuser shows any process accessing anything located under the mount point. So, how can I clean up this unprocessed orphan inode list so that I can mount the filesystem again without rebooting the computer?
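
    A hedged sequence that can get out of this state without a reboot: lazily detach the mount point, then let e2fsck process the orphan list (never run e2fsck against a device that is still actually mounted read-write; the lazy unmount only hides the mount point, so make sure nothing is really using it first):

        # Show holders that plain lsof/fuser calls may have missed (-m = mounted fs).
        fuser -vm /mountpoint
        # Lazily detach the mount point.
        umount -l /mountpoint
        # Process the orphan list; -f forces a full check (device name from the dmesg line).
        e2fsck -f /dev/dm-0
        mount /dev/dm-0 /mountpoint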

    Read the article

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions: dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many FSs use COW at a block level (all updates are done via writes to new blocks), but this is not what I want.
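
    On filesystems with reflink support (btrfs, and more recently XFS), exactly this exists: the copy shares blocks with the original and diverges only as files are modified. A sketch, assuming a reflink-capable filesystem:

        # Per-tree reflink copy: near-instant; space is consumed only by subsequent edits.
        cp -r --reflink=always src-tree/ work-tree/
        # On btrfs, a writable snapshot of a subvolume does the same at directory granularity.
        btrfs subvolume snapshot src-subvol work-subvol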

    Read the article

  • External hard drive FAT32 to NTFS conversion fails

    - by Pieter
    I'm trying to convert the FAT32 file system of an external hard drive to NTFS. Here's what happened:

        C:\Windows\system32>chkdsk G:
        The type of the file system is FAT32.
        Volume PIETEREXT created 3/19/2008 12:43
        Volume Serial Number is 1806-2E30
        Windows is verifying files and folders...
        File and folder verification is complete.
        Windows has scanned the file system and found no problems.
        No further action is required.
          488,264,768 KB total disk space.
               72,192 KB in 1,503 hidden files.
            1,281,792 KB in 40,029 folders.
          309,235,168 KB in 199,915 files.
          177,675,584 KB are available.
               32,768 bytes in each allocation unit.
           15,258,274 total allocation units on disk.
            5,552,362 allocation units available on disk.

        C:\Windows\system32>cd \

        C:\>convert g: /fs:ntfs
        The type of the file system is FAT32.
        Enter current volume label for drive G: PIETEREXT
        Volume PIETEREXT created 3/19/2008 12:43
        Volume Serial Number is 1806-2E30
        Windows is verifying files and folders...
        File and folder verification is complete.
        Windows has scanned the file system and found no problems.
        No further action is required.
          488,264,768 KB total disk space.
               72,192 KB in 1,503 hidden files.
            1,281,792 KB in 40,029 folders.
          309,235,168 KB in 199,915 files.
          177,675,584 KB are available.
               32,768 bytes in each allocation unit.
           15,258,274 total allocation units on disk.
            5,552,362 allocation units available on disk.
        Determining disk space required for file system conversion...
        Total disk space:               488384001 KB
        Free space on volume:           177675584 KB
        Space required for conversion:     975155 KB
        Converting file system
        The conversion failed.
        G: was not converted to NTFS

    I looked at the TechNet page for my error, but after closing every app the conversion was still failing halfway through. Why does it keep failing? I kept an eye on Task Manager, but it didn't look like my system resources were anywhere near depletion. I'm using Windows 8.
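
    Two low-risk things worth trying before suspecting the volume itself, sketched below: run a repairing chkdsk first (convert aborts on inconsistencies that its own read-only verification pass misses), and let convert force-dismount the volume so nothing can touch files mid-conversion:

        chkdsk G: /f
        convert G: /fs:ntfs /X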

    Read the article

  • How can I tell if ZFS (zfs-fuse) dedup/compression is applied to a particular file?

    - by asari
    I have a zfs-formatted partition using zfs-fuse for Linux (Ubuntu). I had used it for a while, and then enabled dedup and compression on it (zfs set compression=on and zfs set dedup=on). Now I think I have some files that are dedup'ed and compressed, and some files that are not yet. It was OK, but sometimes I was confused. Let's see: the following command would consume almost 4GB of my zfs storage:

        cp oldfile.4GB newfile.4GB

    ... and this would consume almost zero:

        cp newfile.4GB newfile.4GB.2

    This is because the old file is not yet compressed, so dedup hasn't happened, I think. My idea is: if I can find old files that are not yet dedup'ed/compressed, I can perform a batch copy/rename/remove on them to eliminate duplicity and redundancy. But how can I check that? I know I can just re-copy the whole contents of my storage (even better while checking the time stamp of each file), but I'd be happier if I had a zfsstat-like tool that shows these file properties.
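
    One heuristic that needs no special tool: compression shows up as a gap between a file's logical length and the blocks actually allocated for it, so comparing ls-style sizes with du flags the untouched files. A sketch (/tank is a placeholder for the pool's mountpoint, and the 90% threshold is arbitrary):

        # Files whose on-disk usage is >= 90% of their logical size are likely uncompressed.
        find /tank -type f -printf '%s %p\n' 2>/dev/null | while read -r size path; do
            disk=$(du -B1 "$path" | cut -f1)
            [ "$size" -gt 0 ] && [ $((disk * 100 / size)) -ge 90 ] && echo "$path"
        done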

    Read the article

  • scp No such file or directory

    - by Joe
    I have a confusing question for which Super User doesn't seem to have a good answer, and neither does Google. I'm trying to scp a file from a remote server to my local machine. The command is this:

        scp user@server:/path/to/source/file.gz /path/to/destination

    The error I get is:

        scp: /path/to/source/file.gz: No such file or directory

    user is my username on the server. The command syntax appears fine to me. ssh works fine and I can cd to the file, and it doesn't seem to be an access control issue. Thanks.

    Edit: Thank you, John. I spotted the issue. ls returned this:

        -r--r--r-- 1 nobody users 168967171 Mar 10 2009 /path/to/source/file.gz

    So the file was on a read-only file system, and user is able to read it but not scp it. I just copied the file to a different directory, chowned it, and it worked fine. It would be good if someone could explain why this is the case though.

    Read the article

  • Archiving old, outdated hard drives

    - by Calvin
    I have a number of old hard drives that I've decided to throw out. But before I do, I'd like to keep the contents of the drives intact. I tried to use the ISO file format to archive them, but the major problem is that it loses file attributes and can't create directories more than 8 levels deep. The drives span a variety of file systems — FAT, NTFS, ext2, ext3 and HFS — and I'd like to archive them without any loss of information.
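
    A format-agnostic option is to image each drive at the block level: the image preserves every filesystem exactly (attributes, deep paths, even deleted-file slack) and can later be loop-mounted or written back to a new disk. A sketch, with device and file names as placeholders:

        # Straight image, compressed to save space.
        dd if=/dev/sdb bs=4M status=progress | gzip > old-drive-1.img.gz
        # For drives with failing sectors, ddrescue (a separate package) retries and logs.
        ddrescue /dev/sdb old-drive-2.img old-drive-2.map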

    Read the article

  • Program complains of not enough disk space even though the disk space exists

    - by user1189899
    I have an ext3 partition mounted in ordered data mode. If a power failure occurs while a program is creating files on that partition, I see that the space usage reported is normal and I don't see any partially written files. But when I try to run the same program again after the system comes back up, it complains that there is not enough disk space, even though the free space reported is far more than required. The program always succeeds under normal conditions, and the problem seems to disappear when the partition is remounted. I was wondering what the right way to handle this situation would be, other than unmounting and remounting.
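
    A way to narrow down where the two views of free space diverge, sketched with placeholder names: compare what the kernel reports to applications against what the on-disk superblock says, since after a crash the latter can lag until the journal/orphan cleanup completes on remount:

        # What statfs() reports to applications (this is what df and the program see).
        stat -f /mnt/data
        # What the on-disk superblock thinks (may be stale while the fs is mounted).
        dumpe2fs -h /dev/sdXN | grep -i 'free blocks'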

    Read the article
