Search Results

Search found 859 results on 35 pages for 'filesystems'.

Page 4/35 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Is ext4 ready for production use?

    - by Konstantin
    Hi. What do you think about the ext4 filesystem in a production environment? We are very close to launching our project, which will use tens of millions of fairly small, frequently updated files, and we need to decide which FS to use. So far, our thoughts on the other Linux filesystems are: ext3 is rock stable, but not well suited to handling millions of small files; XFS looks very nice, and we'll probably use it; ReiserFS... well, its future is vague -- who will end up fixing the bugs?

    Read the article

  • Can't Delete Old Windows Directory

    - by David Mullin
    I got a new SSD drive for my computer, and have installed Windows on this drive. This left an old Windows directory on my old normal drive. I am now attempting to delete this old Windows directory, but am getting blocked by security. If I crawl down into each subdirectory, I can manually change the ownership and access rights for each file, but if I attempt to do it from the root directory, I get a "Failed to enumerate objects in the container. Access is denied" error. I have tried logging in as local Administrator, but this had the same effect. I figure that I am missing something stupid, but I just can't determine what it is.
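
    A hedged sketch of the usual fix from an elevated Command Prompt -- take ownership of the whole tree, re-grant full control, then delete (D:\Windows.old is a placeholder for the old Windows directory):

        takeown /F D:\Windows.old /R /D Y
        icacls D:\Windows.old /grant Administrators:F /T /C
        rd /S /Q D:\Windows.old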

    Read the article

  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for superuser.] This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me... I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them—copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS handles these tasks consistently slower than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly), but the delay between each individual file is just a tiny bit longer. That little delay really adds up for thousands of files. (Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.) Granted, my evidence is merely anecdotal—I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?
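
    If you want more than anecdotes, a minimal bash timing sketch (runs as-is on Linux/OS X; on Windows it assumes something like Git Bash or Cygwin) that exercises exactly the create/copy/delete pattern described above:

        mkdir -p scratch
        time bash -c 'for i in $(seq 1 5000); do echo data > "scratch/f$i.txt"; done'   # create many small files
        time cp -r scratch scratch-copy                                                 # copy them
        time rm -rf scratch scratch-copy                                                # delete them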

    Read the article

  • What is meant by "streaming data access" in HDFS?

    - by Van Gale
    According to the HDFS Architecture page, HDFS was designed for "streaming data access". I'm not sure what that means exactly, but I would guess it means an operation like seek is either disabled or has sub-optimal performance. Would this be correct? I'm interested in using HDFS for storing audio/video files that need to be streamed to browser clients. Most of the streams will be read start to finish, but some could involve a high number of seeks. Maybe there is another file system that could do this better?

    Read the article

  • mac os x - detect file system read

    - by quano
    I want to know which files a specific application is trying to access on my disk. I know that you can use fs_usage, but this outputs events from all applications. I know that you can target a single application, but only one that is already running. I want to capture every file-read event an application performs, from the moment it is started. I don't want to miss any events. How do you achieve this?
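
    One hedged approach (a sketch, not a guaranteed recipe): opensnoop, the DTrace-based tool bundled with OS X, filters by process name, so starting it before launching the application should catch file opens from the very first event (AppName is a placeholder):

        sudo opensnoop -n AppName    # start the trace first; it matches processes by name
        open -a AppName              # then launch the application from another terminal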

    Read the article

  • cPanel Virtfs won't umount

    - by JPerkSter
    Anyone have any experience with virtfs on cPanel servers? I can't seem to get them to unmount, as they say they are already unmounted:

        [root@Server ~]# cat /proc/mounts | grep user
        /dev/root /home/virtfs/user/lib ext3 rw,errors=continue,data=ordered 0 0
        /dev/root /home/virtfs/user/opt ext3 rw,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/lib ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/sbin ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/share ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/bin ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/man ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/X11R6 ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/kerberos ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/libexec ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/bin ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/share ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/Zend ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/IonCube ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/include ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/lib ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/spool ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/lib ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/cpanel ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/run ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/log ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda6 /home/virtfs/user/tmp ext3 rw,nosuid,nodev,noexec,noatime,errors=continue,data=ordered 0 0
        /dev/root /home/virtfs/user/bin ext3 rw,errors=continue,data=ordered 0 0

        [root@Server ~]# for i in `cat /proc/mounts |grep virtfs |grep user |awk '{print$2}'`; do umount $i; done
        umount: /home/virtfs/user/lib: not mounted
        umount: /home/virtfs/user/opt: not mounted
        umount: /home/virtfs/user/usr/lib: not mounted
        umount: /home/virtfs/user/usr/sbin: not mounted
        umount: /home/virtfs/user/usr/share: not mounted
        umount: /home/virtfs/user/usr/bin: not mounted
        umount: /home/virtfs/user/usr/man: not mounted
        umount: /home/virtfs/user/usr/X11R6: not mounted
        umount: /home/virtfs/user/usr/kerberos: not mounted
        umount: /home/virtfs/user/usr/libexec: not mounted
        umount: /home/virtfs/user/usr/local/bin: not mounted
        umount: /home/virtfs/user/usr/local/share: not mounted
        umount: /home/virtfs/user/usr/local/Zend: not mounted
        umount: /home/virtfs/user/usr/local/IonCube: not mounted
        umount: /home/virtfs/user/usr/include: not mounted
        umount: /home/virtfs/user/usr/local/lib: not mounted
        umount: /home/virtfs/user/var/spool: not mounted
        umount: /home/virtfs/user/var/lib: not mounted
        umount: /home/virtfs/user/var/cpanel: not mounted
        umount: /home/virtfs/user/var/run: not mounted
        umount: /home/virtfs/user/var/log: not mounted
        umount: /home/virtfs/user/tmp: not mounted
        umount: /home/virtfs/user/bin: not mounted
        umount: /home/virtfs/user/dev: not mounted
        umount: /home/virtfs/user/proc: not mounted
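
    For what it's worth, a hedged sketch of the workaround often suggested for stale virtfs entries -- lazy (and, failing that, forced) unmounts over the same list; this assumes nothing under /home/virtfs/user is still in use:

        for i in $(awk '/\/home\/virtfs\/user\// {print $2}' /proc/mounts); do
            umount -l "$i" 2>/dev/null || umount -f "$i"
        done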

    Read the article

  • Does this file format exist?

    - by Jon Chase
    Is there a file format that handles the following use case... I'd like to create a tar file (or whatever - I'm just using tar here b/c it's a well known file format for containing multiple files) that would be usable even if I only had access to specific chunks of said file. For example, say I tar up my mp3 and photo collection into a 100GB tar file, then put the file into some long term storage somewhere. Later, I want to access a specific mp3 file. I don't want to download the entire 100GB tar file just to get to one mp3. In fact, let's say I can't download the entire 100GB tar file. Instead, I'd like to say "give me megabytes 10 through 19 of the 100GB tar file" and then have the mp3 magically extracted from those 10 megabytes. Does a file format like this exist?
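
    One hedged option: the ZIP format keeps a central directory at the end of the archive, so tools can list members and extract a single file without reading the rest -- a rough sketch (archive and file names are made up):

        zip -r media.zip music/ photos/        # build the archive
        unzip -l media.zip                     # list members without extracting anything
        unzip media.zip "music/some-song.mp3"  # pull out just one member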

    Read the article

  • How to recover a file using the FAT cluster chain instead of using the stored length in the FAT table?

    - by cadrian
    I'm trying to recover movie files from my TNT receiver's hard drive, but it corrupts its FAT32 allocation table (crappy cheap device...). Using dosfsck is useless, because the correct file length is the one implied by the cluster chain, not the (shorter) one in the table, and dosfsck only proposes to shorten the file, which I won't do. Question: how do I recover a file using the FAT cluster chain instead of the stored length in the FAT table? Edit: I forgot to say, Linux solutions only please (I have no Windows box).

    Read the article

  • What are the pitfalls of hardlinked files on my desktop PC?

    - by MountainX
    All the identical-content files on my PC are now hardlinked. (My data is completely de-duplicated. It is a consequence of the way I copied my data from my old computer.) What pitfalls do I need to be aware of now that certain actions on one file could silently affect a number of other files? I know that deleting the file I'm working on is not a problem (assuming I deleted it on purpose). It doesn't affect any of the other hardlinked files and I don't see that the delete action would lead to unexpected side effects. Moving or renaming the file is not a problem. I don't see any unexpected consequences. I don't think copying hardlinked files is a problem, but I'm not as confident about any unexpected consequences in this regard. What I have seen is that making a copy (to the same disk) of a hardlinked file with cp keeps the copy hardlinked (i.e., inode number doesn't change in the copy). Copying to another filesystem obviously breaks the hardlink. (I guess one pitfall is forgetting this fact, given that my PC has 3 hard disks.) Changing permissions does affect all linked files. So far this has proven handy. (I made a large number of the hardlinked files read-only.) None of the operations above seem to produce any major unexpected consequences. However, as was pointed out to me by Daniel Beck in a comment, editing or modifying a file can sometimes be a problem. It depends on the tool and maybe the type of edit. (For example, editing small text files using sed seems to always break the link while using nano doesn't.) This introduces the chance that editing one file could affect all the hardlinked files (i.e., alter the original inode). My proposed solution to this is to make all hardlinked files read-only (and that is already mostly the case). If I can't do that for some files, I will unlink those particular files. Is there any problem with this read-only approach? I'm assuming that if I go to edit a file and find it to be read-only, I'll remember to unlink that filename while making it writable. So one pitfall might be forgetting this rule. In that case, I'll have to rely on my backups. Am I correct in the above statements? And what else do I need to know? BTW, I'm running Kubuntu 12.04. I'm also using btrfs. (I have 2 SSD's and 1 HDD in the PC. I will also be adding an external USB HDD. I'm also connected to a network and I mount some NFS shares. I don't assume any of these last bits are relevant to the question, but I'm adding them just in case.) BTW, since I have more than one drive (with separate file systems), to unlink any file all I have to do is copy it to another drive, then move it back. However, using sed also works (in my testing). Here's my script: sed -i 's/\(.\)/\1/' file1 Surprisingly, this even unlinks zero byte files. In my testing it also appears to work on non-text files without any special options. (But I understand that the --binary option might be needed on Windows, MS-DOS and Cygwin.) However, copying to another disk and moving back may be the best way to unlink. For my use-case, unlink command doesn't really "unlink", rather it "removes".
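
    A small hedged sketch of the checks discussed above (paths are placeholders): find files that still carry extra hard links, and break a link by rewriting the file under a new inode instead of relying on a particular editor's behaviour:

        find /data -type f -links +1 -printf '%n %i %p\n'          # link count, inode, path
        cp -p file.txt file.txt.tmp && mv file.txt.tmp file.txt    # rewrite under a new inode
        stat -c '%h %i' file.txt                                   # confirm the link count is now 1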

    Read the article

  • Can't copy files with 'additional permissions' to ext4 drive -- files that have @ after permissions,

    - by 99miles
    I am copying files from Snow Leopard to a mounted ext4 share via Samba that's on a Fedora machine. Some files cannot be copied and give this error: "The operation can’t be completed because you don’t have permission to access some of the items." I've noticed that the files that can't be copied have an @ at the end of their permissions when I do 'ls -l' on the command line. For example, I can copy the second file but not the first:

        -rwxrwxrwx@ 1 miles staff 1448 May 14 22:55 test.txt
        -rw-r--r--  1 miles staff  136 Apr  5 17:06 image.psd.zip

    From what I've found, the @ means the file has 'additional properties'. Does anyone know how I can resolve this issue so I can copy the files to the fileshare? Thanks!
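
    A hedged sketch using OS X's xattr command-line tool -- list the extended attributes behind the '@', and strip them if they aren't needed before copying (test.txt is the file from the listing above):

        xattr -l test.txt    # show the extended attributes
        xattr -c test.txt    # clear them from a single file
        xattr -rc .          # or clear them recursively before copying a whole folder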

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume?

        mgorven@moab:~% sudo lvdisplay /dev/moab/backup
          --- Logical volume ---
          LV Name                /dev/moab/backup
          VG Name                moab
          LV UUID                nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5
          LV Write Access        read/write
          LV Status              available
          # open                 1
          LV Size                500.00 GiB
          Current LE             128000
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     2048
          Block device           252:3

        mgorven@moab:~% sudo cryptsetup status backup
        /dev/mapper/backup is active and is in use.
          type:    LUKS1
          cipher:  aes-cbc-essiv:sha256
          keysize: 256 bits
          device:  /dev/mapper/moab-backup
          offset:  3072 sectors
          size:    1048572928 sectors
          mode:    read/write

        mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   backup
        Last mounted on:          /srv/backup
        Filesystem UUID:          63877e0e-0549-4c73-8535-b7a81eb363ed
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131071616
        Reserved block count:     0
        Free blocks:              112894078
        Free inodes:              32044830
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stride:              128
        RAID stripe width:        128
        Flex block group size:    16
        Filesystem created:       Sun Mar 11 19:24:53 2012
        Last mount time:          Sat May 19 13:29:27 2012
        Last write time:          Fri Jun 1 11:07:22 2012
        Mount count:              0
        Maximum mount count:      100
        Last checked:             Fri Jun 1 11:03:50 2012
        Check interval:           31104000 (12 months)
        Next check after:         Mon May 27 11:03:50 2013
        Lifetime writes:          118 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      383bcbc5-fde9-4720-b98e-2d6224713ecf
        Journal backup:           inode blocks
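
    A heavily hedged sketch of the usual shrink order -- filesystem first, then the dm-crypt mapping, then the LV, then grow the filesystem back to fit. The intermediate sizes are deliberately conservative examples only; double-check every number against your own volume and take a backup first:

        umount /srv/backup
        e2fsck -f /dev/mapper/backup
        resize2fs /dev/mapper/backup 90G              # shrink ext4 to below the final target
        cryptsetup resize backup --size 188743680     # shrink the mapping to 90GiB (512-byte sectors)
        lvreduce -L 100G /dev/moab/backup             # shrink the LV itself
        cryptsetup resize backup                      # grow the mapping back to fill the smaller LV
        resize2fs /dev/mapper/backup                  # grow ext4 to fill the mapping
        e2fsck -f /dev/mapper/backup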

    Read the article

  • Which file system to use for portable hard drive shared among different operating systems?

    - by Jonathon Watney
    Something similar has been asked already, but my criteria are a little different. I need to share a portable hard drive (USB/FireWire) between Mac OS X, Linux and Windows XP systems, where the files being shared are sometimes 4GB. Is there a file system, available out of the box on all these operating systems, that supports this and allows read/write access? If not, what's the next best solution in terms of installing additional software on these operating systems?

    Read the article

  • Copy File Contiguously to Disk from OSX/Unix/Linux to FAT32 FS?

    - by alharaka
    So the Sysinternals guys have that cool contig.exe utility that lets me ensure a file is contiguous. I need to copy over ISO files to a FAT32 USB flash key. Grub4DOS requires the files to be contiguous, but I do not have Windows access at the moment. Is there a way to copy a file so it is contiguous on the target drive, or a tool like the aforementioned that will make an existing file contiguous? Again, I need it on FAT32, and there lies the rub.
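
    A hedged workaround sketch (assuming a Linux box and that the key's contents can be wiped): copy each ISO onto a freshly formatted FAT32 key so there are no free-space holes for it to fragment into, then verify with filefrag, which needs the kernel's vfat driver to support the block-mapping ioctl (/dev/sdX1 is a placeholder):

        sudo mkfs.vfat -F 32 /dev/sdX1        # reformat the key -- destroys everything on it
        sudo mount /dev/sdX1 /mnt/usb
        cp some.iso /mnt/usb/                 # copy the ISOs one at a time, largest first
        sudo filefrag /mnt/usb/some.iso       # "1 extent found" means the file is contiguous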

    Read the article

  • Removing duplicate files, keeping only the newest file

    - by pinkie_d_pie_0228
    I'm trying to clean up a photo dump folder, in which several files are duplicated but with different filenames or lost in subfolders. I've looked at tools like rmlint, duff and fdupes, but I can't seem to find a way to have them keep only the file with the most recent timestamp. I suspect I have to postprocess the results, but I don't even know where to start to do this. Can anyone guide me on how to get the duplicate files list and delete everything but the newest file?
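
    A hedged post-processing sketch over fdupes output (fdupes prints each duplicate set as a block of paths separated by a blank line): keep the newest file in each set and list the rest for deletion. It assumes bash with GNU coreutils/findutils and filenames without embedded newlines; leave the 'echo' in place until the output looks right:

        { fdupes -r /path/to/photo-dump; echo; } |
        while IFS= read -r f; do
            if [ -n "$f" ]; then
                dups+=("$f")                                   # collect one duplicate set
            else
                if [ "${#dups[@]}" -gt 1 ]; then
                    # newest first; everything after the first line is a stale duplicate
                    printf '%s\n' "${dups[@]}" | xargs -d '\n' ls -t -- | tail -n +2 |
                        xargs -d '\n' echo rm --               # drop 'echo' to actually delete
                fi
                dups=()
            fi
        done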

    Read the article

  • How do I convert a Linux disk image into a sparse file?

    - by endolith
    I have a bunch of disk images, made with ddrescue, on an EXT partition, and I want to reduce their size without losing data, while still being mountable. How can I fill the empty space in the image's filesystem with zeros, and then convert the file into a sparse file so this empty space is not actually stored on disk? For example:

        > du -s --si --apparent-size Jimage.image
        120G    Jimage.image
        > du -s --si Jimage.image
        121G    Jimage.image

    This actually only has 50G of real data on it, though, so the second measurement should be much smaller. This supposedly will fill empty space with zeros:

        cat /dev/zero > zero.file
        rm zero.file

    But if sparse files are handled transparently, it might actually create a sparse file without writing anything to the virtual disk, ironically preventing me from turning the virtual disk image into a sparse file itself. :) Does it? Note: For some reason, sudo dd if=/dev/zero of=./zero.file works when cat does not on a mounted disk image.
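
    A hedged sketch of the whole round trip (mount point and image name are placeholders): zero the free space inside the mounted image, then punch holes in the image file, either by re-copying it with cp --sparse=always or, on a recent util-linux, in place with fallocate --dig-holes:

        sudo mount -o loop Jimage.image /mnt/img
        sudo dd if=/dev/zero of=/mnt/img/zero.file bs=1M   # runs until the image is full, then errors out
        sudo rm /mnt/img/zero.file
        sudo umount /mnt/img
        cp --sparse=always Jimage.image Jimage.sparse && mv Jimage.sparse Jimage.image
        # or, in place, if your util-linux is new enough:
        fallocate --dig-holes Jimage.image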

    Read the article

  • Opinions on NTFS for Mac solution?

    - by AngryHacker
    I am currently using the free NTFS-3G to access my NTFS drive from the Mac. It seems pretty stable (except once in the very beginning, it locked up the Mac and corrupted my NTFS drive, which I then fixed with chkdsk from a PC). However, speed is NOT one of its virtues. In fact, it's painfully slow. I've been looking at buying Paragon NTFS for Mac OS X 8.0. Their product comparison PDF claims nearly double the speed (vs NTFS-3G) in almost every category (read, write, etc...). In addition, there is now an unsupported native solution in Snow Leopard. Can folks here share their experiences? Is the native solution stable enough to be used for daily work? Should I just go with Paragon?

    Read the article

  • Why do I get xfs_freeze "Operation not supported" error with ec2-consistent-snapshot? Debian Squeeze w/ext4 filesystem

    - by Michael Endsley
    I'm running the following command:

        [root@somehost ~]# ec2-consistent-snapshot --aws-credentials-file '/some/dir/file' --mysql --mysql-socket '/var/run/mysqld/mysql.sock' --mysql-username 'backup' --mysql-password 'password' --freeze-filesystem '/dev/xvda1' vol-xxxxxx

    It returns this error:

        xfs_freeze: cannot freeze filesystem at /dev/xvda1: Operation not supported
        ec2-consistent-snapshot: ERROR: xfs_freeze -f /dev/xvda1: failed(256)
        snap-eeb66393
        xfs_freeze: cannot unfreeze filesystem mounted at /dev/xvda1: Invalid argument
        ec2-consistent-snapshot: ERROR: xfs_freeze -u /dev/xvda1: failed(256)

    This is being run on Debian Squeeze with the ext4 Linux filesystem. Can anyone explain this error to me, or what might be its cause? When googling, I found information about it needing to be executed with sudo, but I'm performing the entire operation as root. I also found some posts about trying to run it after a CentOS upgrade using yum, but the situation appeared dissimilar. It's difficult to find things referring to this situation exactly. xfs_freeze is available for use on the filesystem. Is it possible that the filesystem, despite being ext4, somehow doesn't support freezing? Sorry if I've missed some bit of StackExchange etiquette with this post -- it's my first venture here!
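
    One hedged way to narrow it down: both xfs_freeze and the newer fsfreeze expect a mount point, not a block device, so testing the mount point directly shows whether the kernel can freeze this ext4 filesystem at all (the example assumes /dev/xvda1 is mounted at /, and writes will stall between the freeze and the thaw):

        mount | grep xvda1                  # confirm where /dev/xvda1 is actually mounted
        xfs_freeze -f / && xfs_freeze -u /  # freeze the mount point, then thaw immediately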

    Read the article

  • gpfs: adding a new nsd server to a cluster

    - by alessandra
    I have a GPFS cluster composed of 10 Linux nodes, managed by a primary server A, which also acts as the NSD server for a first stack of disks. I attached a new JBOD to one of the nodes (call it node B), which I would like to become an NSD server for this new stack of disks, while still being included in the cluster so that the disks are available to all the nodes. Node B is connected to the cluster via Ethernet. How can I make the new NSDs visible to all the nodes of the cluster? I can create the new NSDs, but when I try to create the filesystem on node B, the mmcrfs command times out. It looks like the nodes of the cluster cannot resolve the filesystem location, even though I specify the disks as attached to server B in the description file. Would it be better to remove node B from the cluster, create a cluster of its own with its attached filesystem, and connect it remotely to the previous cluster? Or would a clustered NFS solution fit better? Can you please give me any suggestions?

    Read the article

  • Mac OS X - rmdir fails with "Operation not permitted" for a folder created by a PC on a removable dr

    - by maxint
    Hello. I have a problem (using Mac OS X 10.5.8) with the access rights of a folder that was presumably created by a virus on a disk-on-key drive when I used it with a PC. I can't remove the folder or change its name. In Finder's Info window the Lock box is unchecked and uncheckable -- if I try to check it, it flips back to off. Please see the details:

        MaxBookAir:GARMIN'S maxint$ rmdir winamp_cache_0001/
        rmdir: winamp_cache_0001/: Operation not permitted

        MaxBookAir:GARMIN'S maxint$ mv winamp_cache_0001 test
        mv: rename winamp_cache_0001 to test: Operation not permitted

        MaxBookAir:GARMIN'S maxint$ GetFileInfo winamp_cache_0001
        directory: "/Volumes/GARMIN'S/winamp_cache_0001"
        attributes: avbstclinmedz
        created: 12/23/2009 14:34:52
        modified: 02/13/2010 22:52:36

        MaxBookAir:GARMIN'S maxint$ stat -x winamp_cache_0001
          File: "winamp_cache_0001"
          Size: 32768        FileType: Directory
          Mode: (0777/drwxrwxrwx)   Uid: ( 502/ maxint)  Gid: ( 20/ staff)
        Device: 14,5   Inode: 7439   Links: 1
        Access: Wed Dec 23 00:00:00 2009
        Modify: Sat Feb 13 22:52:36 2010
        Change: Sat Feb 13 22:52:36 2010

        MaxBookAir:GARMIN'S maxint$ stat -r winamp_cache_0001
        234881029 7439 040777 1 502 20 0 32768 1261506600 1266081756 1266081756 1261559092 131072 64 32768 winamp_cache_0001

        MaxBookAir:GARMIN'S maxint$ ls -lTd winamp_cache_0001/
        drwxrwxrwx 1 maxint staff 32768 Feb 13 22:52:36 2010 winamp_cache_0001/
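
    A hedged sketch of things worth trying (pure assumptions -- the folder may carry a stuck BSD immutable flag, or the FAT volume itself may be damaged): clear the flags and retry as root, and if that still fails, unmount the key and check it with fsck_msdos (the device node is a guess; confirm it with diskutil list):

        sudo chflags -R nouchg,noschg "/Volumes/GARMIN'S/winamp_cache_0001"
        sudo rm -rf "/Volumes/GARMIN'S/winamp_cache_0001"
        # if it still refuses:
        diskutil unmount "/Volumes/GARMIN'S"
        sudo fsck_msdos /dev/disk1s1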

    Read the article

  • Ensuring a repeatable directory ordering in linux

    - by Paul Biggar
    I run a hosted continuous integration company, and we run our customers' code on Linux. Each time we run the code, we run it in a separate virtual machine. A frequent problem that arises is that a customer's tests will sometimes fail because of the directory ordering of their code checked out on the VM. Let me go into more detail. On OSX, the HFS+ file system ensures that directories are always traversed in the same order. Programmers who use OSX assume that if it works on their machine, it must work everywhere. But it often doesn't work on Linux, because Linux file systems do not offer ordering guarantees when traversing directories. As an example, consider two files, a.rb and b.rb: a.rb defines MyObject, and b.rb uses MyObject. If a.rb is loaded first, everything works. If b.rb is loaded first, it tries to access an undefined constant MyObject and fails. Worse than this is that it doesn't always just fail. Because directory ordering on Linux is not deterministic, the order differs between machines, so sometimes the tests pass and sometimes they fail -- the worst possible result. So my question is: is there a way to make file system ordering repeatable? Some flag to ext4, perhaps, that says it will always traverse directories in a fixed order? Or maybe a different file system that offers this guarantee?
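
    For what it's worth, a small sketch that makes the difference visible, plus the usual workaround of sorting in the loader instead of relying on the filesystem (the spec/ directory and file pattern are placeholders):

        ls -U spec/        # raw readdir order -- varies between ext4 machines (and over time)
        ls spec/           # ls sorts its own output, which is why casual inspection hides the problem
        # workaround: make the loader sort explicitly; shell globs are already sorted, e.g.
        for f in spec/*_spec.rb; do echo "would load $f"; done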

    Read the article

  • Linux: Case-INSENSITIVE Filesystem

    - by Quandary
    What methods are there to make a Linux filesystem case-INSENSITIVE? I have ASP.NET applications developed on Windows, but there are always issues with capitalization/spelling under Mono when putting them on Linux. One way is to mount a localhost SMB share at /var/www. Are there any others?
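
    One hedged alternative to the SMB mount: keep /var/www on a small case-insensitive filesystem image mounted over loopback -- a sketch using FAT32 (size, paths and the www-data owner are placeholders, and FAT32 costs you Unix permissions and symlinks):

        sudo dd if=/dev/zero of=/srv/www.img bs=1M count=2048     # 2 GiB backing file
        sudo mkfs.vfat -F 32 /srv/www.img
        sudo mount -o loop,uid=www-data,gid=www-data /srv/www.img /var/www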

    Read the article

  • How to find cause of main file system going to read only mode

    - by user606521
    Ubuntu 12.04. The file system goes into read-only mode frequently. First of all, I have already read the existing question "file system is going into read only mode frequently". But I need to know whether this could be caused by something other than a dying hard drive. This is a server provided by my client; I am just running some node.js workers + one node.js server there, and I am using MongoDB. From time to time (every 20-50h) the system suddenly remounts the filesystem read-only, the mongodb process fails (due to the read-only fs), and my node workers/server (which are started by forever) are just killed. Here is the log from dmesg -- I can see some errors there, and messages that the FS is going read-only; there is also a JOURNAL error, but I would like to find the cause of those errors: http://speedy.sh/Ux2VV/dmesg.log.txt

    Edit:

        smartctl -t long /dev/sda
        smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.5.0-23-generic] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
        SMART support is: Unavailable - device lacks SMART capability.
        A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    What am I doing wrong? The same goes for sda2. Moreover, now when I type any command that doesn't exist in the shell I get this:

        Sorry, command-not-found has crashed! Please file a bug report at:
        https://bugs.launchpad.net/command-not-found/+filebug
        Please include the following information with the report:
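
    A hedged set of checks (assumptions throughout -- /dev/sda may well be a virtual disk, which would explain the missing SMART support): force the SMART query in permissive mode, and look for the I/O or journal errors that normally precede an ext4 remount-ro:

        sudo smartctl -a -T permissive /dev/sda               # retry despite "device lacks SMART capability"
        dmesg | grep -iE 'i/o error|ext4|journal|remount'     # what triggered the read-only remount
        grep -iE 'remount|ext4|journal' /var/log/syslog | tail -n 50
        sudo tune2fs -l /dev/sda1 | grep -i error             # ext4 records its last error here (partition is a guess)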

    Read the article

  • What's the fastest filesystem for developer builds?

    - by Dan Fabulich
    I'm putting together a Linux box that will act as a continuous integration build server; we'll mostly build Java stuff, but I think this question applies to any compiled language. What filesystem and configuration settings should I use? (For example, I know I won't need atime for this!) The build server will spend a lot of time reading and writing small files, and scanning directories to see which files have been modified.
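
    Whichever filesystem wins, mount options matter almost as much; a hedged /etc/fstab sketch for a dedicated build partition (device, mount point and sizes are placeholders):

        # build workspace on its own partition, no access-time updates
        /dev/sdb1   /srv/ci   ext4    noatime,commit=60   0  2
        # or, if every workspace is disposable and fits in RAM:
        tmpfs       /srv/ci   tmpfs   size=8G,noatime     0  0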

    Read the article
