Search Results

Search found 2515 results on 101 pages for 'distributed filesystems'.


  • Best way to compare (diff) a full directory structure?

    - by Adam Matan
    Hi, what's the best way to compare directory structures? I have a backup utility which uses rsync. I want to see the exact differences (in terms of file sizes and last-changed dates) between the source and the backup. Something like: Local file Remote file Compare /home/udi/1.txt (date)(size) /home/udi/1.txt (date)(size) EQUAL /home/udi/2.txt (date)(size) /home/udi/2.txt (date)(size) DIFFERENT The tool can be a ready-made one or just an idea for a Python script. Many thanks! Udi
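
    One ready-made way to get roughly that report, sketched below with a placeholder backup host and placeholder paths, is rsync's own dry run: --itemize-changes prints one line per file whose size or modification time differs and stays silent about identical files.

      # -r recurse, -n dry run (change nothing), -i itemize differences;
      # by default rsync flags a file when its size or mtime differs.
      rsync -rni /home/udi/ backuphost:/srv/backup/udi/

      # Two local trees can also be compared file-by-file with diff:
      diff -qr /home/udi /mnt/backup/udi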

    Read the article

  • Can Airport Extremes handle NTFS external drives?

    - by Electrons_Ahoy
    I've got an Airport Extreme and an external USB Hard Drive formatted with NTFS. (And a LAN of Windows XP Machines.) The drive works perfectly when connected directly to a PC. When it's connected to the AE, however, the Airport Utility sees the drive and lists it in the Disks list, but the drive doesn't appear on the network (as near as I can tell.) Can the AE handle NTFS formatted disks? The documentation is vague on that point.

    Read the article

  • Question about the Linux root file system.

    - by smwikipedia
    I read the manual page of the "mount" command, and it reads as below: All files accessible in a Unix system are arranged in one big tree, the file hierarchy, rooted at /. These files can be spread out over several devices. The mount command serves to attach the file system found on some device to the big file tree. My questions are: Where is this "big tree" located? Suppose I have 2 disks; if I mount them onto some point in the "big tree", does Linux place some "special marks" in the mount points to indicate that these 2 "mount directories" are indeed separate disks?
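
    A quick way to see how this works in practice (device and mount-point names below are only examples): the kernel does not write any marker into the mount-point directory itself; it keeps a mount table, and every mounted filesystem gets its own device number.

      stat -c '%d  %n' /  /mnt/disk1  /mnt/disk2   # different numbers = different filesystems
      findmnt /mnt/disk1                           # which device backs this directory, fs type, options
      cat /proc/mounts                             # the kernel's own table of attached filesystems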

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). Is the fs dead?

    - by ip
    My company has a server with one big partition with Mysql database and php files. Now this partition seems to be corrupted, as reported from kernel messages when I tried to mount it manually: [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)! [329862.817846] EXT3-fs: group descriptors corrupted! I've tried to recovery it running tools from a PLD livecd. These are the tools I have tested: - e2retrieve - testdisk - photorec - dd_rescue/dd_rhelp - ddrescue - fsck.ext2 - e2salvage without any success. dumpe2fs 1.41.3 (12-Oct-2008) Filesystem volume name: /dev/sda3 Last mounted on: <not available> Filesystem UUID: dd51610b-6de0-4392-a6f3-67160dbc0343 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal filetype sparse_super Default mount options: (none) Filesystem state: not clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 9502720 Block count: 18987570 Reserved block count: 949378 Free blocks: 11555345 Free inodes: 11858398 First block: 0 Block size: 4096 Fragment size: 4096 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 16384 Inode blocks per group: 512 Last mount time: Wed Mar 24 09:31:03 2010 Last write time: Mon Apr 12 11:46:32 2010 Mount count: 10 Maximum mount count: 30 Last checked: Thu Jan 1 01:00:00 1970 Check interval: 0 (<none>) Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 128 Journal inode: 8 Journal backup: inode blocks dumpe2fs: A block group is missing an inode table while reading journal inode e2fsck 1.41.3 (12-Oct-2008) fsck.ext3: Group descriptors look bad... trying backup blocks... fsck.ext3: A block group is missing an inode table while checking ext3 journal for /dev/sda3 I tried also backup superblocks, same error result. There's any other tools I have to test before considering these disk definitely unrecoverable? Many thanks, ip
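
    One thing that may still be worth trying before writing the disk off, sketched here with example device and file names: copy what is readable into an image and run all further repair attempts against that copy, so nothing can make the original worse. The destination needs at least as much free space as the partition.

      ddrescue /dev/sda3 /mnt/spare/sda3.img /mnt/spare/sda3.map   # the map file records unreadable areas
      losetup -f --show /mnt/spare/sda3.img                        # attach the image, prints e.g. /dev/loop0
      e2fsck -f -y /dev/loop0                                      # repair the copy, not the disk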

    Read the article

  • What's the best way to format an external HDD for both OSX and Windows?

    - by George Profenza
    I have an external HDD (1TB) and I'd like to use it on OSX and Windows. I had another external HDD using NTFS and I used NTFS-3G on OSX to write files, but I found the reading/writing very slow. Googling a bit, I see many people recommend HFS+ in conjunction with HFS Explorer for Windows. Is this the best way? Is it possible to have two partitions, one HFS+ and one NTFS? Is that a good option, or is it better to use one partition? I've seen this thread on using UDF for a USB flash drive. Would that be suited to a USB external HDD?
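
    If a split layout is preferred, something along these lines can be done from the OS X side; the disk identifier, sizes and volume names are examples only, and repartitioning erases the drive, so check the identifier with diskutil list first.

      diskutil list
      # Two partitions on an MBR map: a FAT32 slice both OSes write natively,
      # and a journaled HFS+ slice for the Mac; "R" means "use the remaining space".
      diskutil partitionDisk disk2 2 MBR MS-DOS SHARED 300G JHFS+ MacOnly R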

    Read the article

  • Is there a successor to NTFS?

    - by hak8or
    What I am asking is whether there is any file system known to be a possible successor of NTFS. I am asking because I just bought a new external drive and realized that the path to a file, including the file name itself, cannot add up to more than 255 characters. This is known as the "Long File Name" limit by Microsoft. I am assuming this is due to a file system limitation, so I am searching for any possible alternatives. I have a Windows 7 based machine, but I am under the assumption that there would be third-party software that would work with Windows to make the new file system accessible from Windows Explorer.

    Read the article

  • "There is not enough space on the disk" when there is?

    - by Lee Tickett
    Permissions are fine (inherited), and checking effective permissions everything is OK. As you can see, I can make a file in the docs folder but not the pdf_docs subfolder. The folder has a lot of files and is quite large; I wonder if I've reached a limit? I couldn't find anything on Google. Size: 51.0 GB (54,819,804,885 bytes) Size on disk: 52.0 GB (55,925,719,040) Contains 554,697 Files EDIT: I've just checked and I can delete files... and for every file I delete I appear to be able to create a new one. This definitely points toward a limit on the number of files?

    Read the article

  • DFS - Stop sync of large folder that has since been removed

    - by g18c
    We have site-to-site DFSR on Windows Server 2008 R2 that has been running perfectly between site A and site B until someone dumped a 20 GB folder. This has overwhelmed the upload and made the internet almost useless at site A (the upload bandwidth is low at the branch office). We have removed this folder from the DFS share on site A; however, the internet is still really slow. Is there any way to cancel this sync, or another way to get DFSR back into a happy state?

    Read the article

  • Does this exist: a standardized way of documenting a file-system structure

    - by eegg
    At work, I'm in charge of maintaining the organization of a whole lot of varied data on a standard file-system. Part of this is coming up with sensible classification (by similarity, need, read/write access, etc), but the bigger part is actually documenting it: what documents/files/media should go where, what should not be in this directory, "for something slightly different, see ../../other-dir", etc. At the moment, I've documented this using a plaintext file filing.txt in every directory I want to document. If someone is unsure what's meant to be in any directory, they read that file. This works alright, but it seems odd that I have this primitive custom solution to a problem that any maintainer of a non-trivial directory structure must experience. Every company I've known of, for example, has some kind of shared file-system where agreed terminology for categorization is important. In my experience, people just have to learn what's what by trial-and-error and experimentation. So allow me to propose a better solution, and hopefully you can tell me if it exists. Any directory on any filesystem can have a hidden plaintext file named .filing. Its contents are descriptive human language. It uses some markup like Markdown, with little more than bold, italic, and (relative) hyperlinks to other directories. Now a suitably-enabled file browser will check for a file named .filing whenever it displays a directory. If it exists, its contents are parsed and displayed in an unobtrusive pane near the directory-path widget. Any links therein can be clicked, and the user will be taken to the target directory of that link. I think that the effort of implementing such a standard would pay back many times over in usability gains. We would have, say, plugins for Nautilus, Konqueror, etc.. It could be used to display directory information in the standard file lists served by webservers. And so on. So, question: does such a thing exist? If not, why not? Do people think it's a worthwhile idea?
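
    Until such a standard and its file-browser plugins exist, the idea can be approximated in a shell with a few lines; a minimal sketch using the proposed .filing name, to be dropped into ~/.bashrc:

      # After a successful cd, print the directory's notes, if any.
      cd() {
          builtin cd "$@" || return
          if [ -r .filing ]; then cat .filing; fi
      }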

    Read the article

  • Create a replica of an ext4 filesystem and re-use it

    - by Jatin
    Is there a way that I can take my Linux ext4 file system as such and then use it on some other computer? I have a dual-boot of Windows 7 and Ubuntu 10.04, and my partition table looks like this: My question might not be clear, so I'll explain it with an example. Can I copy my Linux partition onto a flash drive and then use it on a different PC, with or without any need to install Ubuntu on the new PC, by simply booting from the copied ext4 partition? This way, I can easily port my Ubuntu packages and other applications, settings, etc. from one PC to the other. If it's a very stupid question, please don't mind.
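
    In principle yes; a rough sketch of the block-copy route follows. The device names are examples (verify them with lsblk first), the stick partition must be at least as large as the source, and the copy only boots on another PC once a bootloader is installed on the stick.

      lsblk                                          # identify the Ubuntu root (e.g. sda5) and the stick partition (e.g. sdb1)
      dd if=/dev/sda5 of=/dev/sdb1 bs=4M conv=fsync  # raw copy of the ext4 partition
      e2fsck -f /dev/sdb1                            # check the copied filesystem...
      resize2fs /dev/sdb1                            # ...then grow it to fill the stick partition
      # A bootloader (e.g. GRUB installed onto the stick) is still required to boot it directly.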

    Read the article

  • Are flash drives and hard drives thought of as "an ocean of bytes"?

    - by Jian Lin
    Why can a USB Flash drive be formatted as NTFS or FAT32? Is the USB Flash Drive and Hard Drive just to be thought of as "an ocean of bytes"? I get very used to hearing formatting a hard drive as FAT32 or NTFS, but we can also format a USB Flash drive as NTFS or FAT32? Is it because a hard drive or Flash drive both can be thought of as "an ocean of bits" or "an ocean of bytes"? I remember RAM as: it takes 16 bit or 32 bit as an address signal (the 16 or 32 copper footing on the circuit board), and give out 8 bit of data (the other 8 copper footing on the circuit board). So can a hard drive be thought of as working that way too? So that's why a Flash drive can be the same too? Just an "ocean of bytes". But is it true that hard drive's hardware make it an ocean of sector or something else, that is, the smaller unit of read / write is not byte but something else? So with this "ocean of bytes", NTFS has the format that says, "if the first byte is __, then it means __ (it is a file or folder, and link to which sector, indicated by byte 2 and 3, etc, etc)"
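
    Essentially yes: to the operating system both a hard drive and a flash drive appear as a block device, a flat array of fixed-size sectors, and the filesystem is just a data structure written into those bytes. A small illustration (the device name is an example and reading it needs root):

      # Dump the first 512-byte sector of the raw device; on a FAT32-formatted
      # drive this is the boot sector and BIOS parameter block, laid out as plain bytes.
      dd if=/dev/sdb bs=512 count=1 2>/dev/null | xxd | head -n 16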

    Read the article

  • How can I format an SD card with a more robust Linux-usable filesystem with a specific cluster size for better write performance?

    - by Harvey
    Goal: microSD card formatted... for best write performance for use only with embedded Linux for better reliability (random power failures may occur) using an 64kB cluster size I'm using an 8GB microSD card for data storage inside an embedded Linux/ARM device. The SD card is not removable. I've been using ext3 instead of the pre-installed FAT32 because it seems to better handle random power failures during writes. However, I kept noticing that my write performance is always best with the pre-installed FAT32 from Kingston. If I reformat the card with FAT32, the performance still suffers. After browsing wikipedia, I stumbled upon the following comment saying that some cards are optimized for specific cluster sizes. In my case, the Kingston comes pre-formatted for an 64kB cluster size. Risks of reformatting Reformatting an SD card with a different file system, or even with the same one, may make the card slower, or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of the file allocation table on a FAT16 or FAT32 device.[60] In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient.
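
    Two hedged options, using /dev/mmcblk0p1 as an example device name: FAT32 lets the cluster size be set directly in sectors, while for ext3 the closest equivalent is telling mke2fs about the card's erase-block geometry.

      # FAT32 with 64 KiB clusters: 128 sectors of 512 bytes per cluster.
      mkfs.vfat -F 32 -s 128 /dev/mmcblk0p1
      # ext3 with 4 KiB blocks, hinting a 4 MiB erase block (4 MiB / 4 KiB = 1024 blocks).
      # The erase-block size is an assumption; check the card's datasheet.
      mkfs.ext3 -b 4096 -E stride=1024,stripe-width=1024 /dev/mmcblk0p1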

    Read the article

  • How to remove a $DATA stream from a file in Windows 8

    - by chris.w.mclean
    For a while now, Windows has added an additional hidden stream to files that were downloaded from the internet. If you attempt to use these files, you get all kinds of odd behavior, as Windows detects this additional stream and then prevents the app/exe from getting all sorts of security clearance. But in previous versions of Windows you could right-click on a file, go to Properties, then click 'Unblock', which removed the extra stream. Windows 8 seems to be doing the additional-stream trick too, but I haven't yet found a way to remove it using the Windows 8 UI. Anyone know how to do this?

    Read the article

  • Large temp files created in Windows Server 2003 temp folder

    - by BlueGene
    I'm managing a Windows Server 2003 machine with around 30 GB of space in the primary partition. A couple of times the server has crashed with an error message saying that the C: drive is full. After searching folders to free up space, I found that a lot of temp files are being created in C:\WINNT\Temp, some of them of enormous size, more than 2 GB. The temp files have a common name, Efs###.tmp. Since we encrypt files frequently using Windows's EFS, I initially suspected Windows encryption. But after reading the documentation, I found that Efs###.tmp files are in fact created by EFS, but only under the folder which you're currently encrypting, not in the Temp folder. This looks very strange, since Efs###.tmp files shouldn't be created under C:\WINNT\Temp unless someone tried to encrypt that Temp folder itself. The server has the Tivoli Backup client. Could that be messing with Windows encryption? Can anyone shed some light on what could be causing the issue?

    Read the article

  • BackupExec 2012 File System Archiving - Access is denied to Remote Agent

    - by AllisZero
    Gentlemen, I've been struggling with a Trial version of Symantec Backup Exec 2012 for about a week now. It was installed as an upgrade to our 12.5 license, and the setup completed with no issues. The reason I upgraded is solely for the File System Archiving option as I'm working to reduce the amount of live data in my servers. Backups work A-Ok and I have followed the instructions in the Admin Manual to make sure I had filled all requirements. The account BE is running under is a member of the Local administrators group as required and has been added to the test share that I'm using to evaluate the archiving function. Testing the credentials in the job setup window always works fine, and I am able to add both regular and Admin$ shares to my Archive selection. However, every time I run the Archive job, I get the following message: https://dl.dropbox.com/u/59540229/BEXec.png I've already tried to troubleshoot DNS resolution issues as suggested in the Symantec KB to no avail. The only thing I can think of, at this point, is that a trial license doesn't allow me to use the Archiving function, although that would seem silly on their part. Appreciate any assistance or information. Thanks.

    Read the article

  • Windows 7 ReadyBoost - What File System To Use With Flash Card/Drive?

    - by Boris_yo
    NTFS, exFAT or FAT32? I know that FAT32 has a 4 GB per-file limit, but is it faster and better than NTFS or exFAT? Since Windows 7 itself uses NTFS, it seems logical to format the flash card/drive with the NTFS file system; however, will NTFS or even exFAT be fine for a flash card/drive? P.S. In case I decide to use an SD flash card, what should I do if it is already plugged in and I decide to use another SD flash card in order to transfer photos? What will happen if I suddenly take out the ReadyBoost SD flash card?

    Read the article

  • Win7 System folder contains infinitely looping SYSTEM(!) directory

    - by Matt
    My Windows 7 Enterprise computer has been crashing fairly frequently recently, so I decided to boot up in safe mode and run the TrendMicro client I have installed. It froze about 10 minutes into the full system scan, so in the spirit of http://whathaveyoutried.com, I started scanning each folder individually. When I got to ProgramData, the AV failed with an uncaught exception. I then went down a level and tried scanning Application Data, which failed as well. Imagine my surprise when I opened the folder just to see the same folder again! As far as I can tell, this folder loop continues indefinitely. (If you are trying to recreate this, keep in mind that ProgramData is a hidden folder.) I'm actually a bit concerned that these are system folders, as this is a brand-new computer with a clean installation. I guess I have three questions: Has anyone else seen/experienced this before? I'm running Win7 SP1. How do I fix this? I've run CHKDSK /F with no success (although it was incredibly slow). What are the ramifications of an infinitely recursive directory? Theoretically speaking, each link takes up memory, so shouldn't I have no space available on my hard drive? (I've got about 180GB left.) I noticed that the tree view on the left only shows the "linked folder" icon on the deeper folders--does this mean anything special? (I've circled the icons or lack thereof in red.) How can the OS even resolve this aberration? And above all, what would happen if I were to select "Expand all folders"??? :P Matt

    Read the article

  • Why can't I copy a 7 GB file to an external USB HD with 120 GB free?

    - by Johann Gerell
    Yes, why can't I? I was stashing away some old photography backup zips last night. I could copy 4 of my 1 GB backup zips to my external USB-connected hard drive, but then I got the error message "Cannot copy file. Not enough free space." (sort of) for a zip of roughly 7 GB. But there are 120 GB free. Why is this? EDIT: Clarification - the files that I could copy were smaller than 4 GB. The failing one was 7 GB. The cause seems to be the FAT32 4 GB file-size limit.

    Read the article

  • Formatting 1TB External Drive - Mac/PC

    - by The Woo
    We have 1 Mac user in a PC environment... and I have bought a 1 TB WD external hard drive and need to format it so that both PC and Mac can read/write to it. Doing this from the Mac should be easy, but I do not know where to format the drive from, or what the best option is to format it to. Thanks.

    Read the article

  • How to get the maximum file size of a VZFS partition?

    - by Nulldevice
    I have VPS hosting with a VZFS file system. How can I determine the maximum file size of a VZFS partition? UPD: Free space (or total space) is not what I need. Sometimes a file cannot occupy a whole partition volume - FAT16 with its 2 GB limit is a good example. I need to use a large database file (say, 64 GB), so I need to know whether the file system of the VPS hosting will cope with it. It is easy to calculate for an ext3 filesystem using tune2fs, but the VPS uses VZFS by Virtuozzo, and it is poorly documented. Is there any generic way to calculate the maximum file size for a given filesystem in Linux?
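
    One generic check that does not depend on ext-specific tools, sketched with an example mount point: getconf reports FILESIZEBITS, the number of bits a file offset may use on that filesystem (an upper bound rather than the exact limit), and a sparse-file test confirms in practice that a file of the required size can exist.

      getconf FILESIZEBITS /var/lib/mysql            # bits available for a file offset on this mount
      # A sparse 64 GB file occupies almost no disk space, but creating it fails
      # immediately if the filesystem cannot represent a file that large.
      truncate -s 64G /var/lib/mysql/filesize.test && stat -c '%s %n' /var/lib/mysql/filesize.test
      rm -f /var/lib/mysql/filesize.test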

    Read the article

  • Which Large File System Format to use for USB Flash drive compatible with Ubuntu/Mac/Windows?

    - by wajiw
    I've had this problem for a long time and can't find a solution. I switch between the 3 OSes all the time and use a 1 TB USB drive to do so. I can't seem to find a format that is compatible across all systems and handles large files (at least 8-9 GB). Does anyone have a solution for this? Recently I've tried exFAT, but that messes up the filesystem when reading on Windows after adding files from Ubuntu (using the FUSE driver). The OSes I'm currently using are Windows Vista/7, Mac OS X (10.6.5) and Ubuntu 10.10.
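
    If exFAT keeps getting corrupted, NTFS is a common fallback for this mix: Windows and Ubuntu (via ntfs-3g) write it natively, and OS X reads it out of the box, needing a third-party driver only for writing. A sketch of formatting it from Ubuntu, with an example device name and label:

      sudo mkfs.ntfs --fast -L PORTABLE /dev/sdc1   # quick format with volume label PORTABLE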

    Read the article

  • Access denied on file system for System Administrator

    - by NLV
    Hello. Yesterday I got the win32.Saltiy virus, and it did some damage before my Kaspersky suite caught it. Now I've cleaned all the viruses using Kaspersky, but I believe the changes it made to the registry/policies are still there. I don't have write access anywhere on the file system; it shows an access-denied error. I'm in the local Administrators group. I've tried removing and re-adding my account (with a reboot), but still no luck. Any ideas on how I can fix this?

    Read the article

  • GlusterFS as elastic file storage?

    - by Christopher Vanderlinden
    Is there any way to run GlusterFS in a replicated mode, but with the ability to dynamically scale the volume up and down? Say you have 3 servers all running glusterd. your Gluster volume would have to be setup with replica 3 gluster volume create test-volume replica 3 192.168.0.150:/test-volume 192.168.0.151:/test-volume 192.168.0.152:/test-volume You would then mount it as say \mnt\gfs_test What happens when I want to add 2 more servers to the storage pool and then also use them in this volume? Is there any easy way to expand the volume AND increase that replica count to 5? My end goal is to run this on EC2 instances, say 3 Apache front ends, with the webroot setup on the gluster volume mount. My concern is that if I ever need to spin up a server, I would want the server to not only be an additional Apache front end, but also another server in the gluster file system, adding to fault tolerance as well as possibly giving a slight boost in read speed. Maybe there are better options that would fit the bill here? Thanks.
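
    Recent GlusterFS releases can raise the replica count in the same step that adds bricks, which would cover this scale-up case; a hedged sketch using the volume from the question follows, where the two new brick addresses are examples and older releases may not accept the replica keyword on add-brick.

      gluster volume add-brick test-volume replica 5 \
          192.168.0.153:/test-volume 192.168.0.154:/test-volume
      gluster volume heal test-volume full      # copy existing data onto the new bricks
      gluster volume info test-volume           # confirm the new replica count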

    Read the article

  • Filesystem fragmentation at the level of a set of files

    - by trismarck
    Files are stored in blocks by the file system. The block is the smallest amount of data the file system can assign to store a file. The classical definition of a fragmented file is that the file is stored in blocks that are 'scattered' (physically non-contiguous) around the hard drive. What I want to ask about is a second type of fragmentation I've come up with. Let's suppose we install a program. This program has very many files. When the program starts, it always loads the contents of those files sequentially. Now, even if the hard disk is defragmented, there is still a possibility that the files (but not the blocks making up the files) will be scattered on the disk, and thus the program launch time will be longer. Actually, this time could be longer because of defragmenting the disk, as the defragmentation process not only glues fragmented files together but also moves some files to optimize free-space chunks. The questions: is the type of fragmentation I mentioned relevant for the file system? Is it possible to remedy this kind of fragmentation and, if yes, how would you do it? Also, I'm not sure whether this question belongs on Super User or Server Fault (as I guess filesystem fragmentation matters more in a server environment).
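
    The effect can at least be measured on ext-family filesystems with filefrag (paths below are examples): the extent count shows classic within-file fragmentation, while the physical offsets reported with -v show how far apart related files sit even when each is internally contiguous.

      filefrag /opt/app/*.dat              # prints "N extents found" per file
      filefrag -v /opt/app/data1.dat       # -v lists each extent's physical (on-disk) offset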

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there is some problem that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg ... [ 113.084079] usb 2-1: new high-speed USB device number 3 using ehci_hcd [ 113.217783] usb 2-1: New USB device found, idVendor=0bc2, idProduct=3320 [ 113.217787] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1 [ 113.217790] usb 2-1: Product: Expansion Desk [ 113.217792] usb 2-1: Manufacturer: Seagate [ 113.217794] usb 2-1: SerialNumber: NA4J4N6K [ 113.435404] usbcore: registered new interface driver uas [ 113.455315] Initializing USB Mass Storage driver... [ 113.468051] scsi5 : usb-storage 2-1:1.0 [ 113.468180] usbcore: registered new interface driver usb-storage [ 113.468182] USB Mass Storage support registered. [ 114.473105] scsi 5:0:0:0: Direct-Access Seagate Expansion Desk 070B PQ: 0 ANSI: 6 [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! ... So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. 
This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and e2fsck scans the disk like this completly and only once. Do you have some advice for me? 
I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So the numbers in the lseek lines before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if those numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. UPDATE2: Okey, big disappointment, the numbers are back to very small again (2012-11-07_0720) lseek(4, 52174548992, SEEK_SET) = 52174548992 read(4, "\374\312\22\\\325\215\213\23\0357U\222\246\370v^f(\312|f\212\362\343\375\373\342\4\204mU6"..., 4096) = 4096 lseek(4, 46603526144, SEEK_SET) = 46603526144 write(4, "\370\261\223\227\23?\4\4\217\264\320_Am\246CQ\313^\203U\253\274\204\277\2564n\227\177\267\343"..., 4096) = 4096 so either e2fsck goes over the data multiple times, or it just hops back and forth multiple times. Or my assumption that those numbers are bytes is wrong. UPDATE3: Since it's mentioned here http://forums.fedoraforum.org/showthread.php?t=282125&page=2 that you can testisk while e2fsck is running, i tried that, though not with a lot of success. When asking testdisk to display the data of my partition, this is what I get: TestDisk 6.13, Data Recovery Utility, November 2011 Christophe GRENIER <[email protected]> http://www.cgsecurity.org 1 P Linux 0 4 5 45600 40 8 732566272 Can't open filesystem. Filesystem seems damaged. And this is what strace currently gives me (2012-11-07_1030) lseek(4, 212460343296, SEEK_SET) = 212460343296 read(4, "\315Mb\265v\377Gn \24\f\205EHh\2349~\330\273\203\3375\206\10\r3=W\210\372\352"..., 4096) = 4096 lseek(4, 47347830784, SEEK_SET) = 47347830784 write(4, "]\204\223\300I\357\4\26\33+\243\312G\230\250\371*m2U\t_\215\265J \252\342Pm\360D"..., 4096) = 4096 (times are in CET)
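
    Two hedged ideas for a future run rather than the pass already in progress (the cache directory is an example path): ask e2fsck for a progress display, and let it keep its large scratch tables on disk via /etc/e2fsck.conf so a 3 TB filesystem does not have to be tracked entirely in RAM.

      mkdir -p /var/cache/e2fsck
      printf '[scratch_files]\ndirectory = /var/cache/e2fsck\n' >> /etc/e2fsck.conf
      e2fsck -f -y -C 0 /dev/sdb1      # -C 0 adds a completion-progress indicator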

    Read the article
