Search Results

Search found 859 results on 35 pages for 'filesystems'.

  • What is the best vfat driver for FUSE? (Lightweight, not mountlo)

    - by Vi
    The FUSE filesystem list shows FuseFat and FatFuse. Both are old: FatFuse is read-only, and FuseFat doesn't build and probably depends on glib. Right now I'm using mountlo for the task (mounting USB drives in a generic way without root access or suid binaries, except for fusermount itself), but it seems too heavyweight for such a simple job. Is there a good vfat FUSE driver?

    Read the article

  • create symlink to another machine

    - by microchasm
    Hi, I have two machines, both running CentOS. Box1 is a webserver with Apache and PHP. Box2 handles MySQL and file storage. The files will only be accessed from Box1, within the webapp. I'd like to create a symlink or something similar on Box1 pointing to a folder on Box2 where uploaded files can be stored and retrieved. With security in mind, what would be the best way to link these two boxes in a way that is transparent to Apache? NB: the boxes are connected directly to each other via a crossover cable; there is no LAN access to Box2. Many thanks!
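
    One possibility, sketched under assumptions: NFS over the crossover link, with hypothetical addresses 10.0.0.1 (Box1) and 10.0.0.2 (Box2) and a hypothetical storage path /srv/uploads. Since the link is a private cable, exporting to exactly one address keeps the exposure small, and the mount then looks like any local directory to Apache:

        # On Box2 (/etc/exports): export the folder to Box1 only
        /srv/uploads 10.0.0.1(rw,sync,root_squash)
        # then reload the export table
        exportfs -ra

        # On Box1: mount it where the webapp expects uploads
        mount -t nfs 10.0.0.2:/srv/uploads /var/www/uploads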

    Read the article

  • Debian crashed, file system is read-only and cannot backup - How Do I find/mount a USB drive?

    - by Spiros
    We have a Debian server (a VM) here at work, and the server crashed after a power failure. I can only boot the system in maintenance mode, and the whole file system is set to read-only. I can run fsck from maintenance mode, but I would like to get a backup of some files before I do. Problem: I cannot access the net, since there is no network connectivity in maintenance mode, so I have attached a USB flash drive to the computer, but I can't find it from the console. Question: how do you find/mount a USB drive on Debian? I have tried several resources from the internet but nothing worked. Is there any other way I could get a backup of my files? I cannot start networking since the filesystem is set to read-only. Any help would be appreciated.
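
    A hedged sketch of the usual sequence (the device name /dev/sdb1 and the paths copied are assumptions; the kernel log tells you which node was actually assigned):

        # See which device node the kernel gave the flash drive (e.g. sdb)
        dmesg | tail

        # /mnt already exists, so nothing needs writing to the read-only root
        mount /dev/sdb1 /mnt
        cp -a /etc /home /mnt/
        umount /mnt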

    Read the article

  • Hiding a directory through the FAT table

    - by hennobal
    I've looked into the FAT file system, trying to find a way to make a directory hidden from the user's view. This has been done by malware previously, so it should be possible. The SpyEye trojan hid inside a directory C:\cleansweep.exe\ which was only reachable through the command line. I know deletion is possible by substituting the first character of the directory's entry in the FAT table with 0xE5, but then it will not be accessible. Any ideas on how the scenario from SpyEye can be recreated? Any filesystem is interesting, but ideally FAT or NTFS.
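
    For experimenting, here is a rough sketch that reproduces the 0xE5 deletion-marker trick the question describes, on a throwaway FAT image rather than a live volume (run as root; the directory name SECRET and all offsets are illustrative, and the offset is found by grep, not hardcoded):

        # Build a small FAT32 image and create a directory in it
        dd if=/dev/zero of=fat.img bs=1M count=64
        mkfs.vfat -F 32 fat.img
        mkdir -p mnt && mount -o loop fat.img mnt
        mkdir mnt/SECRET
        umount mnt

        # Locate the 8.3 directory entry (the name is space-padded to 11 bytes)
        grep -abo 'SECRET     ' fat.img

        # Overwrite the first name byte with 0xE5 at the offset grep printed
        printf '\xe5' | dd of=fat.img bs=1 seek=<offset> conv=notrunc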

    Read the article

  • Unable to list contents/remove directory (linux ext3)

    - by RedKrieg
    System is CentOS 5 x86_64, completely up to date. I've got a folder that can't be listed (ls just hangs, eating memory until it is killed). The directory size is nearly 500k:

        root@server [/home/user/public_html/domain.com/wp-content/uploads/2010/03]# stat .
          File: `.'
          Size: 458752      Blocks: 904        IO Block: 4096   directory
        Device: 812h/2066d  Inode: 44499071    Links: 2
        Access: (0755/drwxr-xr-x)  Uid: ( 3292/ user)   Gid: ( 3287/ user)
        Access: 2012-06-29 17:31:47.000000000 -0400
        Modify: 2012-10-23 14:41:58.000000000 -0400
        Change: 2012-10-23 14:41:58.000000000 -0400

    I can see the file names if I use ls -1f, but it just repeats the same 48 files ad infinitum, all of which have non-ASCII characters somewhere in the file name:

        La-critic\363-al-servicio-la-privacidad-300x160.jpg

    When I try to access the files (say, to copy or remove them) I get messages like the following:

        lstat("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Sebast\355an-Pi\361era-el-balc\363n-150x120.jpg", 0x7fff364c52c0) = -1 ENOENT (No such file or directory)

    I tried altering the code found on this man page, modifying it to call unlink for each file. I get the same ENOENT error from the unlink call:

        unlink("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Marca-naci\363n-Madrid-150x120.jpg") = -1 ENOENT (No such file or directory)

    I also straced a "touch", grabbed the syscalls it makes and replicated them, then tried to unlink the resulting file by name. This works fine, but the folder still contains an entry by the same name after the operation completes, and the program runs for an arbitrarily long time (strace output ended up at 20GB after 5 minutes and I stopped the process).

    I'm stumped on this one. I'd really prefer not to have to take this production machine (hundreds of customers) offline to fsck the filesystem, but I'm leaning toward that being the only option at this point. If anyone's had success using other methods for removing files (by inode number; I can get those with the getdents code), I'd love to hear them. (Yes, I've tried find . -inum <inode> -exec rm -fv {} \; and it still has the problem with unlink returning ENOENT.)

    For those interested, here's the diff between that man page's code and mine. I didn't bother with error checking on mallocs, etc. because I'm lazy and this is a one-off:

        root@server [~]# diff -u listdir-orig.c listdir.c
        --- listdir-orig.c      2012-10-23 15:10:02.000000000 -0400
        +++ listdir.c   2012-10-23 14:59:47.000000000 -0400
        @@ -6,6 +6,7 @@
         #include <stdlib.h>
         #include <sys/stat.h>
         #include <sys/syscall.h>
        +#include <string.h>

         #define handle_error(msg) \
                do { perror(msg); exit(EXIT_FAILURE); } while (0)
        @@ -17,7 +18,7 @@
             char           d_name[];
         };

        -#define BUF_SIZE 1024
        +#define BUF_SIZE 1024*1024*5

         int main(int argc, char *argv[])
         {
        @@ -26,11 +27,16 @@
             struct linux_dirent *d;
             int bpos;
             char d_type;
        +    int deleted;
        +    int file_descriptor;

             fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY);
             if (fd == -1)
                 handle_error("open");

        +    char* full_path;
        +    char* fd_path;
        +
             for ( ; ; ) {
                 nread = syscall(SYS_getdents, fd, buf, BUF_SIZE);
                 if (nread == -1)
        @@ -55,7 +61,24 @@
                            printf("%4d %10lld %s\n", d->d_reclen,
                                   (long long) d->d_off, (char *) d->d_name);
                     bpos += d->d_reclen;
        +            if ( d_type == DT_REG )
        +            {
        +                full_path = malloc(strlen((char *) d->d_name) + strlen(argv[1]) + 2); //One for the /, one for the \0
        +                strcpy(full_path, argv[1]);
        +                strcat(full_path, (char *) d->d_name);
        +
        +                //We're going to try to "touch" the file.
        +                //file_descriptor = open(full_path, O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666);
        +                //fd_path = malloc(32); //Lazy, only really needs 16
        +                //sprintf(fd_path, "/proc/self/fd/%d", file_descriptor);
        +                //utimes(fd_path, NULL);
        +                //close(file_descriptor);
        +                deleted = unlink(full_path);
        +                if ( deleted == -1 ) printf("Error unlinking file\n");
        +                break; //Break on first try
        +            }
                 }
        +        break; //Break on first try
             }
             exit(EXIT_SUCCESS);
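
    One avenue not tried above, offered only as a hedged sketch: on ext3, debugfs from e2fsprogs can remove a directory entry below the VFS layer, which may sidestep the name lookup that keeps returning ENOENT. The device name and file name below are placeholders, write mode against a mounted production filesystem is genuinely risky, and an offline fsck remains the clean fix:

        # Find the block device backing the directory
        df /home/user/public_html

        # Remove one entry by path (relative to that filesystem's root, not /)
        debugfs -w -R 'rm /user/public_html/domain.com/wp-content/uploads/2010/03/<name>' /dev/sda2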

    Read the article

  • VFS and FS i-node difference

    - by gaffcz
    What is the difference between a VFS i-node and an FS (e.g. ext) i-node? Is it possible that the ext i-node is persistent (it contains/points to data blocks), while the VFS i-node is created in the i-node cache only once the ext i-node is read/used? Or is the VFS i-node just an in-memory image of the FS i-node (i.e. the same thing), and for filesystems that don't work with i-nodes internally (e.g. FAT, NTFS), do i-nodes have to be emulated (how?) so that VFS can work with those filesystems as if they supported i-nodes?

    Read the article

  • Write once, read many (WORM) using Linux file system

    - by phil_ayres
    I have a requirement to write files to a Linux file system that cannot subsequently be overwritten, appended to, updated in any way, or deleted. Not by a sudo-er, root, or anybody. I am attempting to meet the requirements of the financial services regulations for recordkeeping, FINRA 17A-4, which basically require that electronic documents are written to WORM (write once, read many) devices. I would very much like to avoid having to use DVDs or expensive EMC Centera devices. Is there a Linux file system, or can SELinux support the requirement, for files to be made completely immutable immediately (or at least soon) after write? Or is anybody aware of a way I could enforce this on an existing file system using Linux permissions, etc.? I understand that I can set read-only permissions and the immutable attribute, but of course I expect that a root user would be able to unset those. I considered storing data on small volumes that are unmounted and then remounted read-only, but then I think root could still unmount and remount them as writable again. I'm looking for any smart ideas, and in the worst case I'm willing to do a little coding to 'enhance' an existing file system to provide this, assuming there is a file system that is a good starting point, and to put in place a carefully configured Linux server to act as this type of network storage device, doing nothing else. After all of that, encryption on the files would be useful too!
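
    For reference, a minimal sketch of the immutable attribute the question alludes to (paths are hypothetical) — and, as the question already notes, it falls short of 17A-4 precisely because root can clear it again:

        touch /archive/record-2012-10.log
        chattr +i /archive/record-2012-10.log   # immutable: no write, append, rename or delete
        lsattr /archive/record-2012-10.log

        # the weakness: root can simply undo it
        chattr -i /archive/record-2012-10.log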

    Read the article

  • Is there a program that will show a tree of the differences in two file trees?

    - by Huckle
    On Windows, I manually back up from time to time by formatting my external drive and copying the contents of my data partition over. Inevitably there is a difference in the number and size of the files copied, because of system files, etc. Is there a program that would diff two directories recursively and compile the differences into a nice GUI tree that I could peruse (and preferably filter) to ensure that everything I want made it over to the drive? It should only show files that are not in both directories. (Also, please ignore the inadequacy of my backup solution.)
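
    While the question asks for a GUI, a command-line dry run can at least enumerate what differs as a starting point (a sketch, e.g. with rsync under Cygwin; the directory names are hypothetical):

        # -n lists what would be copied or deleted, without touching anything
        rsync -rvn --delete /cygdrive/d/data/ /cygdrive/e/backup/

        # or, with GNU diffutils, report only which files differ or are missing
        diff -rq /cygdrive/d/data /cygdrive/e/backup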

    Read the article

  • Trying to move Users And Program Files Directory's to another partition

    - by Jharwood
    Currently I've followed this guide: http://lifehacker.com/5467758/move-the-users-directory-in-windows-7 and pointed my C:\Users, C:\Program Files (x86), and C:\Program Files directories to their respective counterparts on the B: drive. I used mklink /J D:\Users B:\Users (D: was the C: drive's letter in the recovery environment), but when I come to boot, all I get is that the profile can't be loaded. I have to accomplish this, and don't really mind reinstalling, as it's a fresh install anyway.
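
    A hedged sketch of the sequence that guide describes, run from the recovery console where the system volume appears as D: (the drive letters are whatever your recovery environment assigns):

        rem Copy the profiles with attributes and ACLs, skipping junction points
        robocopy D:\Users B:\Users /E /COPYALL /XJ

        rem Replace the original folder with a junction to the new location
        rmdir /S /Q D:\Users
        mklink /J D:\Users B:\Users

    One thing worth checking: a junction stores an absolute target path, so if the second drive is not lettered B: when Windows boots normally, the link dangles, which would produce exactly the "profile cannot be loaded" error.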

    Read the article

  • using one disk as cache for others

    - by HugoRune
    Hi. Given a PC with several hard drives: is it possible to use one fast disk as a giant file cache? I.e., automatically copying frequently accessed data to that one disk, and transparently redirecting reads and writes to it, so that the other drives would only have to be accessed occasionally (writes would, of course, have to be forwarded to the other disks after a while). Advantages: the other drives could be powered down most of the time, reducing power, heat and noise; the speed of the other drives would not matter much; and the cache disk could be solid state. How can I set such a system up? What OS supports these options? Is this possible at all using Windows or Linux?
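
    On Linux, bcache is one way to get this; a sketch under assumptions (/dev/sdb is the fast SSD and /dev/sdc a slow backing disk, both hypothetical; it needs a recent kernel and bcache-tools, and the cache-set UUID comes from the make-bcache output):

        # Format the slow disk as a backing device and the SSD as a cache
        make-bcache -B /dev/sdc
        make-bcache -C /dev/sdb

        # Attach the cache set to the backing device
        echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

        # Writeback mode batches writes before they hit the backing disk
        echo writeback > /sys/block/bcache0/bcache/cache_mode

        mkfs.ext4 /dev/bcache0 && mount /dev/bcache0 /mnt/data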

    Read the article

  • What is the best file system to use for a second hard drive when dual booting between WinXP and Win7

    - by Corey
    I am dual booting for legacy reasons, and I have a second internal drive that I would like to use from both XP and 7. Should I go with the standard NTFS? (Will the security features be an issue, with different SIDs for the different users?) Should I go with FAT32? Should I try out the new exFAT? Also, I currently have two of my three drives set up as "dynamic disks" with one spanned volume created on them (I did this from XP). Win7 can see them/it fine. Is this an OK thing to do?

    Read the article

  • running a web server with encrypted file system (all or part of it)

    - by Carlos
    Hi, I need a webserver (LAMP) running inside a virtual machine (#1), running as a service (#2), in headless mode (#3), with part or all of the filesystem encrypted (#4). The virtual machine will be started with no user intervention and provide access to a web application for users on the host machine. Points #1, #2 and #3 are checked and proven to work fine with Sun VirtualBox, so my question is about #4: can I encrypt the whole filesystem and still access the webserver (using a browser), or will GRUB ask me for a password? If encrypting the whole filesystem is not an option, can I encrypt only /home and /var/www? Will Apache/PHP be able to use files in /home or /var/www without asking for a password or mounting these partitions manually? Thanks
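
    One hedged possibility: leave the root filesystem unencrypted so the VM boots unattended (no GRUB prompt), and unlock a LUKS data partition holding /var/www with a key file at boot. Device names and paths below are illustrative, and note the trade-off: the key file itself sits on the unencrypted disk, which weakens the scheme:

        # One-time setup of the encrypted partition
        cryptsetup luksFormat /dev/sda3
        dd if=/dev/urandom of=/root/.wwwkey bs=512 count=4 && chmod 400 /root/.wwwkey
        cryptsetup luksAddKey /dev/sda3 /root/.wwwkey
        cryptsetup luksOpen --key-file /root/.wwwkey /dev/sda3 www
        mkfs.ext3 /dev/mapper/www

        # /etc/crypttab -- unlocked automatically at boot, no prompt
        www  /dev/sda3  /root/.wwwkey  luks

        # /etc/fstab -- Apache then sees an ordinary mounted partition
        /dev/mapper/www  /var/www  ext3  defaults  0  2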

    Read the article

  • How do I make an encrypted disk image on Debian?

    - by Blacklight Shining
    I'm basically looking for an equivalent to OS X's encrypted sparse bundles. The solution should have support for file ACLs, and should not force me to specify a size at the beginning (the image should only take up as much space as it needs) or require root access to mount and unmount. Ideally, I should be able to set two different passwords (both for the same data), but that's not too important. (I do have root access to the machine and so can install packages and such, but I would rather not have to sudo just to mount an image.)
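
    encfs is one FUSE-based candidate that fits most of these constraints: the encrypted store grows with its contents, and an ordinary user can mount it. A sketch (directory names hypothetical; the first run interactively creates the volume and sets the password, and I am not certain it preserves full ACLs, so that point would need testing):

        # Create (on first run) and mount; ciphertext lives in ~/.vault
        encfs ~/.vault ~/private

        # ... work in ~/private as a normal directory ...
        fusermount -u ~/private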

    Read the article

  • How can I access user files on a disk moved from a Windows 7 machine to an XP machine?

    - by Fantius
    I moved the hard drive from one machine (Win 7) to another (XP) and now certain folders tell me "Access denied". I am logged in as an administrator. I had a different account on the other machine. Neither account authenticated to anything besides the local machine. The old machine is apparently dead, so I can't do anything in there like change permissions, etc. How can I access these files? Edit: After changing the ownerships of all the files and folders on the drive, I am getting a different error. And it is troubling me deeply. "xxx refers to a location that is unavailable. It could be on a hard drive on this computer, or on a network. Check to make sure that the disk is properly inserted, or that you are connected to the Internet or your network, and then try again. If it still cannot be located, the information might have been moved to a different location." No change after rebooting. Any ideas? Surely the files are still there, right?
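
    A hedged thought on that second error: Windows 7 profiles contain NTFS junction points (e.g. the legacy "My Documents" link inside each profile) that XP cannot follow, which can produce exactly this "location that is unavailable" message even after ownership is fixed, while the real data still sits in the plain folders next to them. A sketch for checking, with hypothetical paths and XP-era tools:

        rem List junctions in the old profile -- these are the entries XP chokes on
        dir /AL "E:\Users\olduser"

        rem Re-grant yourself access to the real folders
        cacls "E:\Users\olduser\Documents" /T /E /G Administrator:F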

    Read the article

  • Drive system file size

    - by rezx
    When I make a new drive, it takes some space for filesystem metadata. FAT32 takes the least space, then NTFS, then ext4. My question: how can I know, before formatting, how much space the filesystem will take, whether the drive is 1 GB or 100 GB, for FAT32, NTFS and ext4? Edit: when I make a 10 MB drive with FAT32, the size shown is 9.9; when I make a 10 MB drive with ext4, the size shown is 8.1. The same thing happens at bigger sizes: some space is always used even though there are no files on the drive. So where does this space go? If it is for the filesystem, how can I calculate the space that will be taken before formatting the drive?
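
    For ext4 specifically, the overhead can be inspected (and partly reclaimed) after a trial format on a loopback file; a sketch with hypothetical names — the gap comes mostly from the inode table, the journal, and the 5% reserved-for-root blocks:

        dd if=/dev/zero of=test.img bs=1M count=10
        mkfs.ext4 -F test.img

        # Inode count, block counts and the reserved-block count
        tune2fs -l test.img | egrep 'Inode count|Block count|Reserved block'

        # Reclaim the 5% root reserve
        tune2fs -m 0 test.img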

    Read the article

  • Slower/cached Linux file system required

    - by Chopper3
    I know it sounds odd, but I need a slower or cached filesystem. I have a lot of firewalls that are syslog'ing their data to a pair of Linux VMs, which write these files to their 'local' (actually FC SAN attached) ext3-formatted disks and also forward the messages to our Splunk servers. The problem is that the syslog server is writing these messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN - which can handle this workload right now, but our FW traffic is going to grow by at least a factor of 5000% (really) in the coming months, and that'll be a pain for the SAN; I want to fix the root cause before it's a problem. So I need some help figuring out a way of getting these writes cached or held off in some way from the 'physical' disks, so that the VMs fire off larger, but less frequent, writes - there's no way of avoiding these writes, but there's no need for so many tiny ones. I've looked at the various ext3 options, setting noatime and nodiratime, but that hasn't made much of a dent in the problem. Obviously I'm investigating other file systems, but thought I'd throw this out in case others have the same problem in the future. Oh, and I can't just forward these messages to Splunk; our firewall team insist they stay in their original format for diag purposes.
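
    A hedged sketch of the knobs that trade durability for batching on ext3 — the mount point and values are illustrative, and each means up to that many seconds of logs lost on a crash:

        # /etc/fstab: commit the journal every 60s instead of 5, and don't
        # force data blocks out ahead of metadata
        /dev/sdb1  /var/log/remote  ext3  noatime,data=writeback,commit=60  0 2

        # Let dirty pages sit in the page cache longer before writeback
        sysctl -w vm.dirty_expire_centisecs=6000
        sysctl -w vm.dirty_background_ratio=20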

    Read the article

  • Does ZFS replace the need for hardware/software RAID?

    - by user53744
    I want to provide protection against data loss on my servers. Typically, I'd use hardware RAID 1 or 5, but I've been reading up on ZFS. Is it correct that ZFS itself provides RAID 1 or 5 like data protection WITHOUT needing a RAID controller card? If so, I assume a single hard drive is not enough to provide data protection since if that drive fails, all data fails, so how many hard drives do I need to be running for ZFS to provide this protection?
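
    For concreteness, a minimal sketch with hypothetical device names: a two-disk mirror gives RAID 1-style protection, and raidz needs at least three disks for RAID 5-style single parity — no controller card involved in either case:

        # RAID 1 equivalent: two disks
        zpool create tank mirror /dev/sdb /dev/sdc

        # RAID 5 equivalent (single-parity raidz): three or more disks
        zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

        zpool status tank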

    Read the article

  • Should all my files be stored on my shared partition?

    - by James
    I am setting up a triple-boot hard drive and was going to use a fourth partition to share files between the OSes. I was wondering if there is any point in having much space on each OS partition to store files, or if I should just make the shared partition big and put everything on that. Is there any difference in speed between accessing files on the shared partition vs. the native partition? Are there any other benefits/disadvantages to having files on either the native or shared partition? EDIT: The OSes in question are Windows 7, Ubuntu 12.04, and OS X 10.7.4.

    Read the article

  • Experience with MooseFS?

    - by brown.2179
    Anyone have any experience using MooseFS? I want an easy distributed storage platform to store static data archive of about 10 TB and serve it to 20-40 nodes. Also I want to be able to add storage as the archive grows without having to rebuild the filesystem. I don't care if it's a bit slow. I just want it to be simple and stable. Basically from what I can see for OS X it's between MooseFS and Gluster. Any other suggestions?

    Read the article

  • Why is an Ext4 disk check so much faster than NTFS?

    - by Brendan Long
    I had a situation today where I restarted my computer and it said I needed to check the disk for consistency. About 10 minutes later (at "1%" complete), I gave up and decided to let it run when I got home. For comparison, my home computer uses Ext4 for all of its partitions, and the disk checks (which run around once a week) only take a couple of seconds. I remember reading that fast disk checks were a priority, but I don't know how they could do that. So, how does Ext4 do disk checks so fast? Was there some huge breakthrough in doing this after NTFS came out (~10 years ago)? Note: the NTFS disk is ~300 GB and the Ext4 disk is ~500 GB. Both are about half full.

    Read the article

  • Changing filesystem types "safely"

    - by warren
    Back in Windows 95 OSR2 (I believe), there was a conversion tool that would take your extant FAT16 partition and change it to FAT32 non-destructively (most of the time). Are there any tools like that now for going from one file system type to another in situ, without destroying the data? For example, from ext3 to ext4? Or NTFS to XFS?
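
    For the ext3-to-ext4 example specifically, an in-place upgrade path exists in e2fsprogs; a sketch (the device name is hypothetical, a backup beforehand is still wise, and files written before the switch keep their old non-extent layout until rewritten):

        umount /dev/sdb1
        tune2fs -O extents,uninit_bg,dir_index /dev/sdb1
        e2fsck -fD /dev/sdb1    # required after enabling the new features
        # then change the fstab entry's type from ext3 to ext4

    For dissimilar pairs such as NTFS to XFS, the fstransform tool claims in-place conversion between several filesystem types, though I can't vouch for it; the safe route remains copy out, reformat, copy back.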

    Read the article

  • A space-efficient filesystem for grow-as-needed virtual disks ?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks. Since they only grow as needed, this makes them perfect for fast backup, overallocation and creation speed. Since file systems are usually built for physical disks, they have a tendency to use the whole available area(1) in order to increase speed(2) or reliability(3). I'm searching for a filesystem that does the exact opposite: one that tries to touch the minimum number of blocks needed, through aggressive block reuse. I would happily trade some performance for space usage. There is already a similar question, but it is rather general. I have a very specific goal: space-efficiency.

    1. Like page caching uses all the free physical memory
    2. Canonical example: online defragmentation
    3. Canonical example: snapshotting
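
    If the hypervisor exposes discard support to the guest, a modern workaround sidesteps the filesystem choice: have the guest report freed blocks to the virtual disk instead of hoarding them. A sketch with hypothetical names — whether the image file actually shrinks depends on the virtualization layer:

        # Punch out freed blocks continuously as files are deleted...
        mount -o discard /dev/vda1 /data

        # ...or reclaim them in periodic batches
        fstrim -v /data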

    Read the article
