Search Results

Search found 859 results on 35 pages for 'filesystems'.


  • What are the best options for a root filesystem hosted on SSD under Linux

    - by stsquad
    I'm working on an embedded system which is going to be booting and hosting its rootfs on an SSD disk. We are currently looking at using Intel X-18M SSDs. The file system structure will have a fairly static /usr section (modulo software upgrades) and an active /var and /var/log for maintaining state and logging. Given the wear-levelling done by the underlying flash, does having separate partitions help or hinder? As modern SSDs appear as straight block devices and hide their mapping magic behind their firmware, is there any point trying to optimise the choice of file system that sits on top of the SSD? Finally, does enabling SMART monitoring make any sense in this context, or are there SSD-specific ways of determining the underlying health of the storage hardware?

    Read the article

  • Why does the rename() syscall prohibit moving a directory that I can't write to a different directory

    - by Daniel Papasian
    I am trying to understand why this design decision was made with the rename() syscall in 4.2BSD. There's nothing I'm trying to solve here, just understand the rationale for the behavior itself. 4.2BSD saw the introduction of the rename() syscall for the purpose of allowing atomic renames/moves of files. From 4.3BSD-Reno/src/sys/ufs/ufs_vnops.c:

        /*
         * If ".." must be changed (ie the directory gets a new
         * parent) then the source directory must not be in the
         * directory heirarchy above the target, as this would
         * orphan everything below the source directory. Also
         * the user must have write permission in the source so
         * as to be able to change "..". We must repeat the call
         * to namei, as the parent directory is unlocked by the
         * call to checkpath().
         */
        if (oldparent != dp->i_number)
                newparent = dp->i_number;
        if (doingdirectory && newparent) {
                VOP_LOCK(fndp->ni_vp);
                error = ufs_access(fndp->ni_vp, VWRITE, tndp->ni_cred);
                VOP_UNLOCK(fndp->ni_vp);

    So clearly this check was added intentionally. My question is - why? Is this behavior supposed to be intuitive? The effect is that one cannot atomically move a directory that one cannot write, even when it sits in a directory one can write, into another directory that one can write. You can, however, create a new directory, move the links over (assuming one has read access to the directory), and then remove one's write bit on the directory. You just can't do so atomically.

        % cd /tmp
        % mkdir stackoverflow-question
        % cd stackoverflow-question
        % mkdir directory-1
        % mkdir directory-2
        % mkdir directory-1/directory-i-cant-write
        % echo "foo" > directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write
        % mv directory-1/directory-i-cant-write directory-2
        mv: rename directory-1/directory-i-cant-write to directory-2/directory-i-cant-write: Permission denied

    We now have a directory I can't write, with contents I can't read, that I can't move atomically. I can, however, achieve the same effect non-atomically by changing permissions, making the new directory, using ln to create the new links, and changing permissions back (left as an exercise to the reader). "." and ".." are special-cased already, so I don't particularly buy that it is intuitive that if I can't write a directory I can't "change ..", which is what the source suggests. Is there any reason for this besides it being the perceived correct behavior by the author of the code? Is there anything bad that can happen if we let people atomically move directories (that they can't write) between directories that they can write?
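
    The restriction is easy to reproduce directly against the syscall; a minimal C++ sketch, using the hypothetical paths from the transcript above, which on such systems fails with EACCES:

        #include <cstdio>
        #include <cerrno>
        #include <cstring>

        int main() {
            // Attempt the same atomic move that mv performs with rename(2).
            // On 4.2BSD-derived systems (and Linux) this fails when the source
            // directory is not writable, because its ".." entry must be rewritten.
            if (std::rename("directory-1/directory-i-cant-write",
                            "directory-2/directory-i-cant-write") != 0) {
                std::fprintf(stderr, "rename: %s\n", std::strerror(errno));
                return 1;
            }
            return 0;
        }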

    Read the article

  • Monitoring folders for changes

    - by blcArmadillo
    I'm working on a project that will require an application that watches a list of directories the user specifies for changes. Also, I'd like to give the users the option of running the application as a service or on an individual basis. Since users can choose to run it on an individual basis, I don't think listening for some operating system event triggered by the addition or deletion of files (if such events exist) would be sufficient. I thought about maybe calculating a checksum for the deepest folder and then building up. I could then compare these checksums on subsequent scans to try and pinpoint where the changes have occurred. Would that be an appropriate solution; if not, what would be the best way of doing this in an efficient manner? Also, I'm not quite sure what to tag this as, so if you have any recommendations let me know and I'll add them as I see fit. EDIT: I'll need this method to work on Windows, OS X, and ideally Linux.
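
    For the polling case, one portable variant of the idea (a per-file snapshot of size and modification time compared between scans, rather than content checksums) could look like this C++17 std::filesystem sketch; it is only an illustration of the approach, not from the question:

        #include <filesystem>
        #include <map>
        #include <string>
        #include <utility>
        #include <cstdint>
        #include <iostream>

        namespace fs = std::filesystem;

        // Snapshot of a tree: path -> (size, last write time).
        using Snapshot = std::map<std::string,
                                  std::pair<std::uintmax_t, fs::file_time_type>>;

        Snapshot take_snapshot(const fs::path& root) {
            Snapshot s;
            for (const auto& entry : fs::recursive_directory_iterator(root))
                if (entry.is_regular_file())
                    s[entry.path().string()] = { entry.file_size(), entry.last_write_time() };
            return s;
        }

        // Report paths that were added, removed, or modified between two scans.
        void diff(const Snapshot& before, const Snapshot& after) {
            for (const auto& [path, meta] : after) {
                auto it = before.find(path);
                if (it == before.end())        std::cout << "added:    " << path << '\n';
                else if (it->second != meta)   std::cout << "modified: " << path << '\n';
            }
            for (const auto& kv : before)
                if (!after.count(kv.first))    std::cout << "removed:  " << kv.first << '\n';
        }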

    Read the article

  • Questions about the Linux root file system

    - by smwikipedia
    I read the manual page of the "mount" command, and it reads as below: All files accessible in a Unix system are arranged in one big tree, the file hierarchy, rooted at /. These files can be spread out over several devices. The mount command serves to attach the file system found on some device to the big file tree. My questions are: Where is this "big tree" located? Suppose I have 2 disks; if I mount them onto some point in the "big tree", does Linux place some "special marks" in the mount point to indicate that these 2 "mount directories" are indeed separate disks?
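
    Nothing special is written into the mount-point directory on disk; the kernel keeps the mount table in memory (visible through /proc/mounts). One effect you can observe from user space is that a mount point reports a different device (st_dev) than its parent directory; a small POSIX sketch, with assumed example paths:

        #include <sys/stat.h>
        #include <cstdio>

        // A directory is (very likely) a mount point if it sits on a different
        // device than its parent directory; the kernel redirects the lookup,
        // nothing is stored in the parent filesystem itself.
        bool looks_like_mount_point(const char* dir, const char* parent) {
            struct stat a{}, b{};
            if (stat(dir, &a) != 0 || stat(parent, &b) != 0) return false;
            return a.st_dev != b.st_dev;
        }

        int main() {
            std::printf("/mnt is %sa mount point\n",
                        looks_like_mount_point("/mnt", "/") ? "" : "not ");
        }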

    Read the article

  • How can we receive a volume attach notification

    - by Benjamin
    When a volume is attached to the file system on Windows, Windows Explorer detects the volume and refreshes automatically. I wonder how this is done. How does a program (or a device driver) get that notification? Of course, I don't mean polling; I want to get an event (or a message). I would like to get the notification when a network volume (like SMB) is attached. Thanks in advance.
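
    On Windows the usual user-mode mechanism is the WM_DEVICECHANGE window message, where DBT_DEVICEARRIVAL with a DBT_DEVTYP_VOLUME header signals a newly attached volume; a rough Win32 window-procedure sketch (whether a mapped SMB share produces this message depends on how it is attached):

        #include <windows.h>
        #include <dbt.h>

        // Window procedure fragment: Explorer-style volume notifications arrive
        // as WM_DEVICECHANGE; DBT_DEVICEARRIVAL with a DBT_DEVTYP_VOLUME header
        // means a new volume (drive letter) became available.
        LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
            if (msg == WM_DEVICECHANGE && wParam == DBT_DEVICEARRIVAL) {
                auto* hdr = reinterpret_cast<DEV_BROADCAST_HDR*>(lParam);
                if (hdr && hdr->dbch_devicetype == DBT_DEVTYP_VOLUME) {
                    auto* vol = reinterpret_cast<DEV_BROADCAST_VOLUME*>(hdr);
                    // vol->dbcv_unitmask has one bit per drive letter (bit 0 = A:).
                    OutputDebugStringA("volume arrived\n");
                    (void)vol;
                }
            }
            return DefWindowProc(hwnd, msg, wParam, lParam);
        }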

    Read the article

  • How can I limit the cache used by copying so there is still memory available for other cache?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSUSE. Each one is 2TB. When I do this, the system runs slow.

    My guesses: I believe it is likely due to caching. Linux decides to discard useful cache (e.g. kde4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fills all available memory (24 GB total) with stuff from the disks being copied, which will be read only once, then written and never used again. So then any time I use these apps (or kde4), the disk needs to be read again, and rereading the bloat off the disk makes things freeze/hiccup. Due to the cache being gone, and the fact that these bloated applications need lots of cache, this makes the system horribly slow. Since it is USB, the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache rather than just the motherboard going too slow, because if I stop all copying, it still runs choppy for a while until it recaches everything, and if I restart the copying, it takes a minute before it is choppy again. But also, I can limit it to around 40 MB/s, and it runs faster again (not because it has the right things cached, but because the motherboard buses have lots of extra bandwidth for the system disks). I can fully accept a performance loss from my motherboard's IO capability being completely consumed (which is 100% used, meaning 0% wasted power, which makes me happy), but I can't accept that this caching mechanism performs so terribly in this specific use case.

        # free
                     total       used       free     shared    buffers     cached
        Mem:      24731556   24531876     199680          0    8834056   12998916
        -/+ buffers/cache:    2698904   22032652
        Swap:      4194300      24764    4169536

    I also tried the same thing on Ubuntu, which causes a total system hang instead. ;) And to clarify, I am not asking how to leave memory free for the "system", but for "cache". I know that cache memory is automatically given back to the system when needed, but my problem is that it is not reserved for caching of specific things.

    Question: Is there some way to tell these copy operations to limit memory usage so some important things remain cached, and therefore any slowdowns are a result of normal disk usage and not rereading the same commonly used files? For example, is there a setting for the maximum memory per process/user/file system allowed to be used as cache/buffers?
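
    There is no per-process page-cache quota in Linux, but one commonly suggested workaround is to advise the kernel that the copied data will not be reused, using posix_fadvise(POSIX_FADV_DONTNEED); a rough sketch of a copy loop built that way (assumed chunk size, minimal error handling):

        #include <fcntl.h>
        #include <unistd.h>
        #include <vector>

        // Copy src -> dst while asking the kernel to drop the pages just used,
        // so the copy does not evict the rest of the page cache. Sketch only.
        bool copy_without_caching(const char* src, const char* dst) {
            int in  = open(src, O_RDONLY);
            int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (in < 0 || out < 0) {
                if (in >= 0) close(in);
                if (out >= 0) close(out);
                return false;
            }
            std::vector<char> buf(1 << 20);              // 1 MiB chunks (assumed size)
            off_t done = 0;
            ssize_t n;
            while ((n = read(in, buf.data(), buf.size())) > 0) {
                if (write(out, buf.data(), n) != n) break;
                fdatasync(out);                          // make pages clean so they can be dropped
                posix_fadvise(in,  done, n, POSIX_FADV_DONTNEED);   // drop what we just read
                posix_fadvise(out, done, n, POSIX_FADV_DONTNEED);   // drop what we just wrote
                done += n;
            }
            close(in);
            close(out);
            return n == 0;
        }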

    Read the article

  • Avoiding problems when overwriting files which are in use

    - by zaf
    For example on a high traffic web server. To reduce problems when switching a file I usually rename the old file out and then rename in the new file. I was told some time ago that renaming a file does not change the 'inode data' so that processes reading the file can keep doing so without glitches. And, of course, rather than copying in the new file it is faster and safer to rename a temp copy. Is this still best practice and if not what do you do?
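
    The write-temp-then-rename pattern can be sketched as follows (POSIX/C++, hypothetical file names); rename(2) is atomic within one filesystem, and readers that already have the old file open keep reading the old inode:

        #include <cstdio>
        #include <fstream>
        #include <string>

        // Write the new content to a temp file in the same directory, then
        // rename it over the live name. Processes that already opened the old
        // file continue to see the old inode until they close it.
        bool replace_file(const char* live, const char* tmp, const std::string& content) {
            {
                std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
                if (!out) return false;
                out << content;
            }   // temp file flushed and closed here
            return std::rename(tmp, live) == 0;   // atomic if tmp and live share a filesystem
        }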

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to have the application write to sub-directories such as ./a/b/c/abc.ext rather than just ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? Or, in other words, assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
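
    As a rough illustration of the fan-out arithmetic: with hex digits (16 names per level), three levels give 16^3 = 4096 leaf directories, i.e. around 730 files each for three million files; a small sketch of such a hashed ./a/b/c/name layout (an assumed scheme, not from the question):

        #include <functional>
        #include <string>

        // Map a filename to a fixed-depth sub-directory path, e.g.
        // "abc.ext" -> "a/0/f/abc.ext" (digits depend on the hash).
        // With 16 names per level, 3 levels = 4096 leaf directories.
        std::string shard_path(const std::string& name, int levels = 3) {
            static const char hex[] = "0123456789abcdef";
            std::size_t h = std::hash<std::string>{}(name);
            std::string path;
            for (int i = 0; i < levels; ++i) {
                path += hex[h & 0xF];
                path += '/';
                h >>= 4;
            }
            return path + name;
        }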

    Read the article

  • How can I get a writable path on the iPhone?

    - by Kendall Helmstetter Gelner
    I am posting this question because I had a complete answer for this written out for another post, when I found it did not apply to the original, but I thought it was too useful to waste. Thus I have also made this a community wiki, so that others may flesh out the question and answer(s). If you find the answer useful, please vote up the question - being a community wiki I should not get points for this voting, but it will help others find it. How can I get a path into which file writes are allowed on the iPhone? You can (misleadingly) write anywhere you like on the Simulator, but on the iPhone you are only allowed to write into specific locations.

    Read the article

  • How to store millions of pictures, each about 2k in size

    - by LuftMensch
    We're creating an ASP.Net MVC site that will need to store 1 million+ pictures, all around 2k-5k in size. From previous research, it looks like a file server is probably better than a db (feel free to comment otherwise). Is there anything special to consider when storing this many files? Are there any issues with Windows being able to find the photo quickly if there are so many files in one folder? Does a segmented directory structure need to be created, for example dividing them up by filename? It would be nice if the solution would scale to at least 10 million pictures for potential future expansion needs.

    Read the article

  • Check whether a string is a valid filename with Qt

    - by ereOn
    Hi, Is there a way with Qt 4.6 to check if a given QString is a valid filename (or directory name) on the current operating system? I want to check that the name is valid, not that the file exists. Examples:

        // Some valid names
        test
        under_score
        .dotted-name

        // Some specific names
        colon:name   // valid under UNIX OSes, but not on Windows
        what?        // valid under UNIX OSes, but still not on Windows

    How would I achieve this? Is there some Qt built-in function? I'd like to avoid creating an empty file, but if there is no other reliable way, I would still like to see how to do it in a "clean" way. Many thanks.
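
    There is no single built-in Qt call for this (validity depends on the OS and filesystem), so a common approach is a heuristic check against the strictest common rules, usually Windows; a Qt 4 sketch along those lines (the reserved-name list is abbreviated):

        #include <QRegExp>
        #include <QString>
        #include <QStringList>

        // Heuristic check against the Windows naming rules: no <>:"/\|?* characters,
        // no trailing dot or space, and none of the reserved device names.
        // On UNIX only '/' (and the empty name) is forbidden.
        bool isValidWindowsFileName(const QString& name) {
            if (name.isEmpty() || name.endsWith('.') || name.endsWith(' '))
                return false;
            QRegExp invalid("[<>:\"/\\\\|?*]");
            if (name.contains(invalid))
                return false;
            static const QStringList reserved = QStringList()
                << "CON" << "PRN" << "AUX" << "NUL"
                << "COM1" << "COM2" << "LPT1" << "LPT2";   // abbreviated list
            return !reserved.contains(name.section('.', 0, 0), Qt::CaseInsensitive);
        }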

    Read the article

  • FileInputStream for a generic file system

    - by Akhil
    I have a file that contains Java serialized objects like "Vector". I have stored this file on the Hadoop Distributed File System (HDFS). Now I intend to read this file (using the method readObject) in one of the map tasks. I suppose

        FileInputStream in = new FileInputStream("hdfs/path/to/file");

    won't work, as the file is stored on HDFS. So I thought of using the org.apache.hadoop.fs.FileSystem class. But unfortunately it does not have any method that returns a FileInputStream. All it has is a method that returns an FSDataInputStream, but I want an input stream that can read serialized Java objects like Vector from a file rather than just the primitive data types that FSDataInputStream would do. Please help!

    Read the article

  • Powerpoint file can be deleted without consequence

    - by John Maloney
    I am working on a license-management type application that copies a password-protected zip file to the application's root. The user clicks a button "Open Presentation", the zipped file is extracted into the root folder, and then I use the Office interop to open the file in Powerpoint. At this point, to my surprise, I am able to delete the extracted file that is currently open in the Powerpoint application. I had assumed that trying to delete the file would fail as the file is still open in Powerpoint. Why is it allowing me to delete the file? Is the file somehow copied to a temp folder and then opened in PowerPoint? Can I move forward with the application relying on this ability to delete the file as soon as it is opened in Powerpoint? This would be optimal because it helps ensure that the file cannot be copied (I am also using the xml to stop "Save As" and "Save" from appearing in Powerpoint). Thanks for the insight, John

    Read the article

  • Adding custom/new properties to any file regardless of type and extension e.g. setting 'Author' on a

    - by Vaibhav Garg
    I want the ability to add properties and tags to a file (specifically ebook files and ebook-related properties, in Windows 7, but I'm interested in going as far as possible across OSes). For example, Example.txt or Example.doc or Example.epub should all store and carry properties like 'Author', 'Publication date', 'Tags', etc. The properties should be stored with the file itself, such that if it is transferred to another system it retains the properties (even if I need to install 'my app' to support this function on the other machine). How do I make this possible using .net (preferred), and what file system concepts should I learn to understand the underlying concepts and limitations to be able to implement this feature? Any application that already does this? Thank you

    Read the article

  • Mounting an ext4 fs with a block size of 65536

    - by seaquest
    I am doing some benchmarking of EXT4 performance on Compact Flash media. I have created an ext4 fs with a block size of 65536; however, I cannot mount it on ubuntu-10.10-netbook-i386 (it already mounts ext4 filesystems with 4096-byte block sizes). According to my readings on ext4, it should allow such a big block-sized fs. I want to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group
        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done
        This filesystem will be automatically checked every 37 mounts or 180 days,
        whichever comes first. Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb 5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb 5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb 5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug 4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
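
    The mount fails because Linux cannot mount a filesystem whose block size exceeds the CPU page size, which is 4096 bytes on i386 (as the mkfs warning already hints); a tiny sketch to confirm the limit on a given machine:

        #include <unistd.h>
        #include <cstdio>

        int main() {
            // The ext4 block size usable at mount time is limited to the MMU
            // page size, so a 65536-byte-block filesystem cannot be mounted
            // on a 4 KiB-page machine.
            long page = sysconf(_SC_PAGESIZE);
            std::printf("page size: %ld bytes (max mountable ext4 block size)\n", page);
        }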

    Read the article

  • Linux - How do I know the block map of a given file and/or the free space map of the partition?

    - by Inso Reiges
    Hello, I am on Linux and need to know either of two things: 1) If I have a regular file on some file system on a partition under Linux, is there a way to know the set of physical blocks that this file occupies on the drive from user space? Or at least the set of the file system's clusters? 2) Is there a way to get the same information about the whole free space of the given file system? In both cases I understand that if there is any possible way to extract this info, it will probably be totally unsafe and racy (anything could happen to that set of blocks between the time I see them and act on them somehow). I also really don't want an implementation that has to know a lot about every filesystem.
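
    For question 1, the classic (root-only and filesystem-dependent) interface is the FIBMAP ioctl, which maps a logical block index of an open file to a physical block number; newer kernels also offer FIEMAP for extent maps. A rough sketch using FIBMAP:

        #include <fcntl.h>
        #include <sys/ioctl.h>
        #include <sys/stat.h>
        #include <unistd.h>
        #include <linux/fs.h>      // FIBMAP, FIGETBSZ
        #include <cstdio>

        int main(int argc, char** argv) {
            if (argc < 2) { std::fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
            int fd = open(argv[1], O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            int bs = 0;                                 // filesystem block size
            if (ioctl(fd, FIGETBSZ, &bs) != 0 || bs <= 0) { perror("FIGETBSZ"); return 1; }

            struct stat st;
            fstat(fd, &st);
            long nblocks = (st.st_size + bs - 1) / bs;

            // FIBMAP translates a logical block index into a physical block
            // number (0 means a hole); typically needs root / CAP_SYS_RAWIO.
            for (long i = 0; i < nblocks; ++i) {
                int blk = static_cast<int>(i);
                if (ioctl(fd, FIBMAP, &blk) == 0)
                    std::printf("logical %ld -> physical %d\n", i, blk);
            }
            close(fd);
            return 0;
        }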

    Read the article

  • Implementing Qt File Dialog with a Different File System Library (boost)

    - by knight
    Hi, I am writing an application which requires me to use different file system and file engine handlers, not Qt's default ones. Basically what I want to be able to do is to use Qt's file dialog but have an underlying file system handler of mine (for example, one built using the Boost filesystem library) handling all the file and directory operations within that dialog. I have already written a custom file engine which handles some of the operations, but I am now stuck with Qt's file system model and the file system watcher engine, as I need to have the signals transmitted for this custom file engine. Seems like I have a daunting task ahead. Am I heading in the right direction? Is there any other, simpler way that I could implement this? Can anyone give me any idea on how to proceed? I was thinking of looking into proxy models but not sure if that would work. Thanks in advance for any help.
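
    In Qt 4 the hook for this is QAbstractFileEngineHandler: constructing an instance registers it, and QFile/QDir then ask it first whether it wants to serve a given path. A skeletal sketch (BoostFileEngine is a hypothetical engine class, and, as noted above, the file system model and watcher may still bypass file engines, which is exactly the difficult part):

        #include <QAbstractFileEngineHandler>
        #include <QAbstractFileEngine>
        #include <QString>

        // Constructing a handler registers it with Qt 4; handlers are consulted
        // newest first for every path QFile/QDir touch.
        class BoostEngineHandler : public QAbstractFileEngineHandler {
        public:
            QAbstractFileEngine *create(const QString &fileName) const {
                if (fileName.startsWith(QLatin1String("boost://"))) {
                    // return new BoostFileEngine(fileName);  // hypothetical boost-backed engine
                }
                return 0;   // 0 = fall back to Qt's built-in engines
            }
        };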

    Read the article

  • Reading from a file not line-by-line

    - by MadH
    Assigning a QTextStream to a QFile and reading it line-by-line is easy and works fine, but I wonder if the performance can be increased by first storing the file in memory and then processing it line-by-line. Using FileMon from Sysinternals, I've found that the file is read in chunks of 16KB, and since the files I have to process are not that big (~2MB, but many!), loading them into memory would be a nice thing to try. Any ideas how I can do so? QFile is inherited from QIODevice, which allows me to readAll() it into a QByteArray, but how do I proceed then and divide it into lines?
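
    One way is to pull the whole file into a QByteArray with readAll() and split it into lines; a short Qt sketch (note that '\r' is kept on Windows-style line endings):

        #include <QFile>
        #include <QIODevice>
        #include <QByteArray>
        #include <QList>
        #include <QString>

        // Read the whole (small) file into memory in one go, then split it into
        // lines; avoids the chunked streaming reads at the cost of one large buffer.
        QList<QByteArray> readAllLines(const QString& path) {
            QFile f(path);
            if (!f.open(QIODevice::ReadOnly))
                return QList<QByteArray>();
            QByteArray data = f.readAll();
            return data.split('\n');
        }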

    Read the article
