Search Results

Search found 2515 results on 101 pages for 'distributed filesystems'.

Page 32/101 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • Creating a "mountable" File System, where to start?

    - by Mike Curry
    A friend and I are thinking about creating a simple file system for learning purposes. We're going to write it in C/C++ and try to get it to a mountable state from within Linux. We've both been coding for over 16 years (32 combined), so I suppose it's just a matter of finding some documentation and a ton of learning. My question is: where could I find out more information (documentation for creating a file system, requirements for mounting a file system in Linux, etc.)? Where do we start? Edit: I should also mention that this would not be a bootable file system, just a file system used for storage, though I am not sure whether that matters.
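
    One practical starting point on Linux is FUSE (libfuse), which lets a filesystem be written and mounted entirely from userspace before attempting an in-kernel module. Below is a minimal, read-only sketch of that route exposing a single file; the names (fs_getattr, hello_path) and the build command are illustrative, and the exact flags depend on the libfuse version installed (typically something like gcc hellofs.c `pkg-config fuse --cflags --libs` -o hellofs).

      /* hellofs.c - minimal read-only FUSE filesystem (sketch, FUSE 2.x API).
       * It exposes a single file, /hello, with fixed contents. */
      #define FUSE_USE_VERSION 26
      #include <fuse.h>
      #include <sys/stat.h>
      #include <string.h>
      #include <errno.h>

      static const char *hello_path = "/hello";
      static const char *hello_data = "hello from a toy filesystem\n";

      /* Report metadata for "/" and "/hello"; everything else is ENOENT. */
      static int fs_getattr(const char *path, struct stat *st)
      {
          memset(st, 0, sizeof(*st));
          if (strcmp(path, "/") == 0) {
              st->st_mode = S_IFDIR | 0755;
              st->st_nlink = 2;
          } else if (strcmp(path, hello_path) == 0) {
              st->st_mode = S_IFREG | 0444;
              st->st_nlink = 1;
              st->st_size = (off_t)strlen(hello_data);
          } else {
              return -ENOENT;
          }
          return 0;
      }

      /* List the root directory: ".", ".." and "hello". */
      static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                            off_t offset, struct fuse_file_info *fi)
      {
          (void)offset; (void)fi;
          if (strcmp(path, "/") != 0)
              return -ENOENT;
          filler(buf, ".", NULL, 0);
          filler(buf, "..", NULL, 0);
          filler(buf, hello_path + 1, NULL, 0);
          return 0;
      }

      /* Copy the requested byte range of the file contents into buf. */
      static int fs_read(const char *path, char *buf, size_t size, off_t offset,
                         struct fuse_file_info *fi)
      {
          (void)fi;
          if (strcmp(path, hello_path) != 0)
              return -ENOENT;
          size_t len = strlen(hello_data);
          if ((size_t)offset >= len)
              return 0;
          if (size > len - (size_t)offset)
              size = len - (size_t)offset;
          memcpy(buf, hello_data + offset, size);
          return (int)size;
      }

      static struct fuse_operations fs_ops = {
          .getattr = fs_getattr,
          .readdir = fs_readdir,
          .read    = fs_read,
      };

      int main(int argc, char *argv[])
      {
          /* e.g. ./hellofs /tmp/mnt   then: cat /tmp/mnt/hello */
          return fuse_main(argc, argv, &fs_ops, NULL);
      }

    On the documentation side, the examples shipped with libfuse and the kernel's Documentation/filesystems tree are reasonable places to read about the mounting requirements.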

    Read the article

  • Is it easier to write filesystem drivers in userspace than in kernel space?

    - by Jack
    I will use the Linux NTFS driver as an example. The Linux kernel NTFS driver has only very limited write support, and after 5 years it is still considered experimental. The same development team creates the ntfsmount userspace driver, which has almost perfect write support. Likewise, the NTFS-3G project, which is written by a different team, also has almost perfect write support. Why has the kernel driver taken so much longer? Is it much harder to develop for? Saying that a decent userspace application already exists is not a reason why the kernel driver is not complete. NOTE: Do not migrate this to superuser.com. I want a programming-heavy answer, from a programming perspective, not a practical-use answer. If the question is not appropriate for SO, please advise me as to why so I can edit it so it is.

    Read the article

  • PHP file copy to another server; Access filesystem on other server

    - by dclowd9901
    I'm trying to write a PHP script to copy files from the local machine to a server:

      $destination_directory = 'I:\path\to\file\\' . $theme_number;
      if (!@opendir($desination_directory)) {
          echo 'Sorry, the destination directory could not be found.';
          die();
      }

    I check access to the destination folder with that code, and I keep getting the error return. Does anyone know what I'm doing wrong? I pretty much have everything else in place; I just don't know how to access this other server.

    Read the article

  • How do I read and traverse inodes

    - by Eric Fossum
    I've opened the superblock and group descriptor in an ext2 filesystem, but I don't know how to read, for instance, the root directory or the files in it. Here's some of what I've got:

      fd = open("/dev/sdb2", O_RDONLY);
      lseek(fd, SuperSize, SEEK_SET);
      read(fd, &super_block, SuperSize);
      lseek(fd, 4096, SEEK_SET);
      read(fd, &groupDesc, DescriptSize);

    but this next part doesn't seem to work:

      lseek(fd, super_block.s_log_block_size * groupDesc.bg_inode_table, SEEK_SET);
      lseek(fd, InodeSize * (EXT2_ROOT_INO - 1), SEEK_CUR);
      read(fd, &root, InodeSize);
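
    A likely culprit, sketched below: s_log_block_size is a shift count (block size = 1024 << s_log_block_size), not the block size itself, and bg_inode_table is a block number, so the inode table starts at byte offset block_size * bg_inode_table; the on-disk inode size should also come from the superblock's s_inode_size rather than a compile-time constant. The sketch assumes the ext2 structures from the e2fsprogs header <ext2fs/ext2_fs.h>; adapt it if you define your own structs.

      /* Sketch: locate and read the root inode of an ext2 image. */
      #include <ext2fs/ext2_fs.h>   /* struct ext2_super_block, ext2_group_desc, ext2_inode */
      #include <fcntl.h>
      #include <unistd.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          if (argc < 2) { fprintf(stderr, "usage: %s /dev/sdXN\n", argv[0]); return 1; }
          int fd = open(argv[1], O_RDONLY);
          if (fd < 0) { perror("open"); return 1; }

          struct ext2_super_block sb;
          pread(fd, &sb, sizeof sb, 1024);             /* superblock always lives at byte 1024 */

          unsigned block_size = 1024u << sb.s_log_block_size;      /* a shift, not a size */
          unsigned inode_size = sb.s_inode_size ? sb.s_inode_size : 128;

          /* group descriptors start in the block after the superblock */
          off_t gd_off = (block_size == 1024) ? 2048 : block_size;
          struct ext2_group_desc gd;
          pread(fd, &gd, sizeof gd, gd_off);

          /* bg_inode_table is a block number; inode numbers start at 1 */
          off_t itab = (off_t)block_size * gd.bg_inode_table;
          struct ext2_inode root;
          pread(fd, &root, sizeof root, itab + (off_t)inode_size * (EXT2_ROOT_INO - 1));

          printf("root inode: size=%u links=%u\n",
                 (unsigned)root.i_size, (unsigned)root.i_links_count);
          close(fd);
          return 0;
      }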

    Read the article

  • PHP Filesystem Pagination

    - by Byron
    How does one paginate a large listing of files within a folder? I can't see any functions in the PHP documentation that mention any way to specify an 'offset'. Both glob() and scandir() simply return all the files in the folder, and I'm afraid that won't be a good idea for a huge directory. Is there any better way of doing this than simply going through all the files and chopping off the first X number of files?

    Read the article

  • Command or tool to display list of connections to a Windows file share

    - by BizTalkMama
    Is there a Windows command or tool that can tell me what users or computers are connected to a Windows file share? Here's why I'm looking for this: I've run into issues in the past where our deployment team has deployed BizTalk applications to one of our environments using the wrong bindings, leaving us with two receive locations pointing to the same file share (i.e. both the dev and test servers point to the dev receive location URI). When this occurs, the two environments in question tend to take turns processing the files received (meaning if I am attempting to debug something in one environment and the other environment has picked the file up, it looks as if my test file has disappeared into thin air). We have several different environments, plus individual developer machines, and I'd rather not have to check each individually to find the culprit. I'm looking for a quick way to detect what locations are connected to the share once I notice my test files vanishing. If I can determine the connections that are invalid, I can go directly to the person responsible for that environment and avoid the time it takes to randomly ask around. Or if the connections appear to be correct, I can go directly to troubleshooting where in the process the message gets lost. Any suggestions?

    Read the article

  • Is it a good idea to use only a key to encrypt an entire (small) filesystem?

    - by Fernando Miguélez
    This question comes as part of my doubts presented in a broader question about ideas for implementing a small encrypted filesystem on Java mobile phones (J2ME, BlackBerry, Android). Given the little feedback received and the density of that question, I decided to divide those doubts into small questions. So, to sum up, I plan to "create" an encrypted filesystem for mobile phones (with the help of BouncyCastle or a subset of JCE), providing an API that allows access to it in a transparent way. Encryption would be carried out on a file basis (not blocks). My question is this: Is it a good idea to use only one symmetric key (maybe AES-256) to encrypt all the files (they wouldn't be that many, maybe tens of them) and store this key in a keystore (protected by a PIN), or would you rather encrypt each file with an on-the-fly generated key stored alongside each file, encrypting that key with the "master" key stored in the keystore? What are the benefits/drawbacks of each approach?

    Read the article

  • SOLVED - UBIFS partition mounting at startup [closed]

    - by Bartlomiej Grzeskowiak
    [SOLVED] - add ubi.mtd=volume_name to bootargs in U-Boot. I want to mount a UBIFS partition via /etc/fstab at startup. I created the UBIFS and volume:

      # ubiformat /dev/mtdX
      # ubiattach -p /dev/mtdX
      # ubimkvol /dev/ubi0 -N volume_name -s 64MiB
      # ubiupdatevol /dev/ubi0_0 /path/to/ubifs.img
      # mount -t ubifs ubi0:volume_name /mount/point

    but after reboot this line in /etc/fstab doesn't work:

      ubi0:volume_name /mnt/user ubifs defaults 0 0

    There is no fs mounted on /mnt/user. Also, when I try to call mount -a:

      mount: mounting ubi0:volume_name on /mnt/user failed: No such device

    There are no ubi0, ubi0_0 entries in /dev. I also don't see any UBI calls in dmesg like here: UBIFS boot error

    Read the article

  • Deleting locked files with Java?

    - by Marcus
    We have to delete some directories and their contents using Java running on Windows. I'm worried about running into locked files in those directories. We could just invoke Unlocker to do the delete, or is there a more Java-centric way to handle this situation?

    Read the article

  • How to get notified when a folder is accessed?

    - by smwikipedia
    I have a shared folder on my local machine. I want to get notified every time someone tries to access it. Could someone give me a hint on this? I have checked the FileSystemWatcher class, but it only provides events for change/creation/deletion/renaming of the contents under the folder, which is not exactly what I want. I also tried to use event log auditing as shown here, but that is not exactly what I want either. Many thanks.

    Read the article

  • File size monitoring in C#

    - by manemawanna
    Hello, I work in the systems & admin team and have been given the task of creating a quota management application to encourage users to better manage their resources, as we currently have issues with disk space and don't enforce hard quotas. At the moment I'm using the code below to go through all the files in a user's home space to retrieve the overall amount of space they are using. From what I've seen elsewhere there's no other way to do this in C#; the issue is that there's quite a high overhead while it retrieves the size of each file and builds a total.

      try
      {
          long dirSize = 0;
          FileInfo[] FI = new DirectoryInfo("I:\\").GetFiles("*.*", SearchOption.AllDirectories);
          foreach (FileInfo F1 in FI)
          {
              dirSize += F1.Length;
          }
          return dirSize;
      }

    So I'm looking for a quicker way to do this, or a quick way to monitor changes in file sizes using the options available through FileSystemWatcher. At the moment the only thing I can think of is creating a hashtable containing the location and size of each file, so that when a size-changed event occurs I can compare the old size against the new one and update the total. Any suggestions would be greatly appreciated.

    Read the article

  • Optimal directory structure for filesystem

    - by Pankaj
    We have a large-scale web application with millions of customers. Each customer can have documents organized by document type, and we may have 20-30 types of documents. We are planning to use GlusterFS for storing these documents. I'm trying to find out what the limitations of Gluster are as far as the number of files/directories goes. Do we need a hierarchical directory structure? What would be the optimal directory structure? Does this make sense:

      CustomerId/
        DocumentType/
          File1
          File2

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or, in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
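
    For illustration only (not something from the question itself): one common way to build such a fan-out is to hash the file name and use a couple of hash bytes as directory names, as in the C sketch below. With two levels of 256 buckets each, three million files average out to roughly 46 per leaf directory; the FNV-1a hash and the 256x256 fan-out are arbitrary choices here.

      #include <stdio.h>
      #include <stdint.h>

      /* Tiny FNV-1a hash of the file name. */
      static uint32_t fnv1a(const char *s)
      {
          uint32_t h = 2166136261u;
          while (*s) { h ^= (unsigned char)*s++; h *= 16777619u; }
          return h;
      }

      int main(void)
      {
          const char *name = "abc.ext";
          uint32_t h = fnv1a(name);
          char path[512];
          /* two levels of 256 buckets => 65,536 leaf directories;
           * 3,000,000 files / 65,536 ~ 46 files per directory */
          snprintf(path, sizeof path, "./%02x/%02x/%s",
                   (unsigned)((h >> 8) & 0xff), (unsigned)(h & 0xff), name);
          printf("%s\n", path);   /* e.g. ./4f/a2/abc.ext, depending on the hash */
          return 0;
      }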

    Read the article

  • What's the difference between PATH_NOT_FOUND and NAME_NOT_FOUND

    - by Benjamin
    At the Win32 layer, we often meet ERROR_PATH_NOT_FOUND and ERROR_NAME_NOT_FOUND. When do Win32 APIs (e.g. CreateFileW, RemoveDirectoryW) return these values, and what's the difference? If I write a file system driver, when do I return STATUS_OBJECT_PATH_NOT_FOUND and when STATUS_OBJECT_NAME_NOT_FOUND? I'm confused. Can anyone explain this clearly, or point me to documents that explain it? I couldn't find any. Thanks in advance.
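
    The usual convention, as far as I know (worth confirming against the IFS documentation for your exact case), is: if an intermediate directory of the path does not exist, a filesystem returns STATUS_OBJECT_PATH_NOT_FOUND, which Win32 surfaces as ERROR_PATH_NOT_FOUND; if all intermediate directories exist but the final component does not, it returns STATUS_OBJECT_NAME_NOT_FOUND, surfaced as ERROR_FILE_NOT_FOUND. A small user-mode sketch to observe the difference; the paths are made up for illustration:

      #include <windows.h>
      #include <stdio.h>

      /* Try to open a path and print the Win32 error if it fails. */
      static void try_open(const wchar_t *path)
      {
          HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                 OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
          if (h == INVALID_HANDLE_VALUE)
              wprintf(L"%ls -> error %lu\n", path, GetLastError());
          else
              CloseHandle(h);
      }

      int main(void)
      {
          /* directory exists, file doesn't: 2 = ERROR_FILE_NOT_FOUND
             (driver returned STATUS_OBJECT_NAME_NOT_FOUND) */
          try_open(L"C:\\Windows\\no_such_file.txt");

          /* a directory on the way doesn't exist: 3 = ERROR_PATH_NOT_FOUND
             (driver returned STATUS_OBJECT_PATH_NOT_FOUND) */
          try_open(L"C:\\no_such_dir\\no_such_file.txt");
          return 0;
      }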

    Read the article

  • MAC : How to check if the file is still being copied in cpp?

    - by Peda Pola
    In my current project, we had a requirement to check whether a file is still being copied. We have already developed a library which gives us OS notifications like file_added, file_removed, file_modified, and file_renamed for a particular folder, along with the corresponding file path. The problem is that, let's say you add a 1 GB file: it gives multiple notifications such as file_added, file_modified, file_modified while the file is being copied. So I decided to suppress these notifications by checking whether the file is still copying; based on that I will ignore events. I have written the code below, which takes a file path as an input parameter and tells whether the file is being copied. Details: on Mac, while a file is being copied its creation date is set to some date before 1970; once the copy finishes, the date is set to the current date. I'm using this technique to decide whether the file is being copied. Problem: when we copy a file in the terminal, this fails. Please advise me on any approach.

      bool isBeingCopied(const boost::filesystem::path &filePath)
      {
          NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
          bool isBeingCopied = false;
          if ([[[[NSFileManager defaultManager] attributesOfItemAtPath:[NSString stringWithUTF8String:filePath.string().c_str()] error:nil] fileCreationDate] timeIntervalSince1970] < 0)
          {
              isBeingCopied = true;
          }
          [pool release];
          return isBeingCopied;
      }

    Read the article

  • Perl: Fastest way to get directory (and subdirs) size on unix - using stat() at the moment

    - by ivicas
    I am using the Perl stat() function to get the size of a directory and its subdirectories. I have a list of about 20 parent directories which have a few thousand recursive subdirs, and every subdir has a few hundred records. The main computing part of the script looks like this:

      sub getDirSize {
          my $dirSize = 0;
          my @dirContent = <*>;
          my $sizeOfFilesInDir = 0;
          foreach my $dirContent (@dirContent) {
              if (-f $dirContent) {
                  my $size = (stat($dirContent))[7];
                  $dirSize += $size;
              }
              elsif (-d $dirContent) {
                  $dirSize += getDirSize($dirContent);
              }
          }
          return $dirSize;
      }

    The script has been executing for more than an hour and I want to make it faster. I tried the shell du command, but the output of du (converted to bytes) is not accurate, and it is also quite time-consuming. I am working on HP-UX 11i v1.
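
    Not a fix for the Perl itself, but for comparison, here is a C sketch that walks the tree once with the standard nftw(3) routine and sums st_size: one stat per entry, no per-directory globbing. nftw is part of X/Open, so it should be available on HP-UX as well, though that is worth verifying on 11i v1.

      #define _XOPEN_SOURCE 500
      #include <ftw.h>
      #include <stdio.h>

      static long long total;

      /* Called once per entry in the tree; add regular-file sizes. */
      static int add_size(const char *path, const struct stat *sb,
                          int type, struct FTW *ftw)
      {
          (void)path; (void)ftw;
          if (type == FTW_F)
              total += sb->st_size;
          return 0;              /* keep walking */
      }

      int main(int argc, char **argv)
      {
          if (argc < 2) { fprintf(stderr, "usage: %s dir\n", argv[0]); return 1; }
          if (nftw(argv[1], add_size, 64, FTW_PHYS) != 0) { perror("nftw"); return 1; }
          printf("%lld bytes\n", total);
          return 0;
      }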

    Read the article

  • FDs not closed in FUSE filesystem

    - by cor
    Hi, I have a problem while implementing a FUSE filesystem in Python. For now I just have a proxy filesystem, exactly like a mount --bind would be. But any file created, opened, or read on my filesystem is not released (the corresponding FD is not closed). Here is an example:

      yume% ./ProxyFs.py `pwd`/test
      yume% cd test
      yume% ls
      mdr
      yume% echo test > test
      yume% ls
      mdr test
      yume% ps auxwww | grep python
      cor 22822 0.0 0.0 43596 4696 ? Ssl 12:57 0:00 python ./ProxyFs.py /home/cor/esl/proxyfs/test
      cor 22873 0.0 0.0 6352 812 pts/1 S+ 12:58 0:00 grep python
      yume% ls -l /proc/22822/fd
      total 0
      lrwx------ 1 cor cor 64 2010-05-27 12:58 0 -> /dev/null
      lrwx------ 1 cor cor 64 2010-05-27 12:58 1 -> /dev/null
      lrwx------ 1 cor cor 64 2010-05-27 12:58 2 -> /dev/null
      lrwx------ 1 cor cor 64 2010-05-27 12:58 3 -> /dev/fuse
      l-wx------ 1 cor cor 64 2010-05-27 12:58 4 -> /home/cor/test/test
      yume%

    Does anyone have a solution to actually close the fds of the files I use in my fs? I'm pretty sure it's a mistake in the implementation of the open, read, and write hooks, but I'm stuck. Let me know if you need more details! Thanks a lot, Cor
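
    For reference, this is how the pairing looks in a C FUSE passthrough: the descriptor opened in the open hook is stashed in fi->fh and closed in the release hook, which FUSE calls when the last handle on the file is dropped; a proxy fs that never implements release leaks fds exactly like the listing above. This is a sketch only (FUSE 2.x API; getattr and the path remapping are omitted), and the Python bindings expose the same hook, usually named release().

      #define FUSE_USE_VERSION 26
      #include <fuse.h>
      #include <errno.h>
      #include <fcntl.h>
      #include <stdint.h>
      #include <unistd.h>

      static int proxy_open(const char *path, struct fuse_file_info *fi)
      {
          /* a real proxy fs would map `path` onto its backing directory here */
          int fd = open(path, fi->flags);
          if (fd < 0)
              return -errno;
          fi->fh = (uint64_t)fd;          /* keep the fd for read/write/release */
          return 0;
      }

      static int proxy_read(const char *path, char *buf, size_t size,
                            off_t off, struct fuse_file_info *fi)
      {
          (void)path;
          ssize_t n = pread((int)fi->fh, buf, size, off);
          return n < 0 ? -errno : (int)n;
      }

      static int proxy_release(const char *path, struct fuse_file_info *fi)
      {
          (void)path;
          close((int)fi->fh);             /* this is the hook that was missing */
          return 0;
      }

      static struct fuse_operations proxy_ops = {
          .open    = proxy_open,
          .read    = proxy_read,
          .release = proxy_release,
      };

      int main(int argc, char *argv[])
      {
          return fuse_main(argc, argv, &proxy_ops, NULL);
      }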

    Read the article

  • Can NTFS-Search(An OSS project) scan any file on NTFS volume?

    - by Benjamin
    I want to apply NTFS-Search to our project. Our project has to find the files we specify (fast and exactly!), but I'm not sure the program (NTFS-Search) works well. What if the specified file is a system file? What if the file is being opened by a process with NO_READ_SHARE_MODE? Do you think NTFS-Search can find any file? I don't know the NTFS filesystem well, so I can't find the answer myself. Does anyone know? I tried to find the authors' email address, but I couldn't. Thanks in advance.

    Read the article

  • Understanding the concept of Inodes

    - by darkie15
    Hi all, I am referring to the link: http://www.tux4u.nl/freedocs/unix/draw/inode.pdf. I am confused about these parts:

      1. 12 direct block pointers
      2. 1 single indirect block pointer
      3. 1 double indirect block pointer
      4. 1 triple indirect block pointer

    The diagram says that each pointer is 32/64 bits. [Query]: Why and how are these values inferred? I mean, why specifically 32- or 64-bit pointers? The diagram also says there is one data block {8 KB} for each pointer {4 bytes/8 bytes}. [Query]: How does this actually work out, i.e. 8*1024 bytes / 8 bytes = 1024 bytes? What is the logic behind having an 8-byte pointer for an 8 KB block? Regards, darkie.
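
    On the arithmetic (my reading of the linked diagram, which uses 8 KB data blocks and 8-byte pointers): a block pointer is just a disk block address stored as a fixed-width integer, which is why classic designs use 32-bit pointers and larger or newer ones use 64-bit. An 8 KB indirect block therefore holds 8192 / 8 = 1024 pointers (pointers, not bytes), and each of those pointers addresses one more 8 KB data block. A small program that works the numbers through:

      #include <stdio.h>

      int main(void)
      {
          const unsigned long long block = 8ULL * 1024;    /* 8 KB data block                  */
          const unsigned long long psize = 8;              /* 8-byte (64-bit) block pointer    */
          const unsigned long long ppb   = block / psize;  /* 1024 pointers per indirect block */

          unsigned long long direct = 12 * block;              /* 12 direct pointers -> 96 KB */
          unsigned long long single = ppb * block;             /* 1 single indirect  ->  8 MB */
          unsigned long long dble   = ppb * ppb * block;       /* 1 double indirect  ->  8 GB */
          unsigned long long triple = ppb * ppb * ppb * block; /* 1 triple indirect  ->  8 TB */

          printf("direct:          %llu bytes\n", direct);
          printf("single indirect: %llu bytes\n", single);
          printf("double indirect: %llu bytes\n", dble);
          printf("triple indirect: %llu bytes\n", triple);
          printf("max file size ~  %llu bytes\n", direct + single + dble + triple);
          return 0;
      }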

    Read the article

  • PHP: Mapped Network Drives

    - by Abs
    Hello all, I have mapped a network drive to a computer in my home network. Now I am trying to access it via PHP, so I did this quick test:

      echo opendir('Z:\\');

    This gives me:

      Warning: opendir(Z:\) [function.opendir]: failed to open dir: No error in C:\wamp\www\webs\tester-function.php on line 3

    What have I done wrong here? I don't want my users typing in the UNC path, so is there a way to get the UNC path for them? Maybe that will work when I try to access it. This is possible in Microsoft languages, but I am not sure how to get PHP to do this - maybe using a cmd.exe command? Please note that the mapped drive does exist, as I can see it and access it. It also does not appear to be a permissions problem, as I assume it would have complained about that if it could access the drive... right? Thanks all for any help.

    Read the article

  • The risk of granting to IUSR* NTFS permissions on a folder on the server

    - by vtortola
    I have two web applications that must share a file on the server file system. Both apps are inside "Inetpub\wwwroot". The file must not be freely accessible from outside, so it is in a folder outside "Inetpub". I have granted full NTFS permissions on that folder to the user "IUSR_whatever" (the user IIS runs anonymous requests as). The folder contains only that file and has no other use. It works so far :) But what is the risk? What should I be afraid of? As I see it, as long as the folder is outside "Inetpub" it cannot be accessed, and as long as the apps don't have a security flaw like path traversal or server-side code injection, it should be safe enough... but I'm always keen to be proven wrong :) What do you think? Could the file, or even the server itself, get compromised because of this? Thanks.

    Read the article
