Search Results

Search found 2515 results on 101 pages for 'distributed filesystems'.


  • How to detect filesystem changes in Java

    - by Alfred
    Hi all, I would like to know how to efficiently detect filesystem changes in Java. Say I have a file in a folder and I modify that file; I would like Java to notify me about this change as soon as possible, ideally without frequent polling. I know I could call java.io.File.lastModified() every few seconds, but I don't like the sound of that solution at all.

        alfred@alfred-laptop:~/testje$ java -version
        java version "1.6.0_18"
        Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
        Java HotSpot(TM) Server VM (build 16.0-b13, mixed mode)

    Many thanks, Alfred
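
    On Java 7 and later the standard, non-polling answer is java.nio.file.WatchService, which hooks into the operating system's own change notification (inotify on Linux, ReadDirectoryChangesW on Windows); on Java 1.6, as used above, you would need a third-party library such as JNotify or Apache Commons IO's FileAlterationMonitor instead. A minimal WatchService sketch:

        import java.nio.file.*;
        import static java.nio.file.StandardWatchEventKinds.*;

        public class DirWatcher {
            public static void main(String[] args) throws Exception {
                Path dir = Paths.get(args.length > 0 ? args[0] : ".");
                WatchService watcher = FileSystems.getDefault().newWatchService();
                dir.register(watcher, ENTRY_CREATE, ENTRY_MODIFY, ENTRY_DELETE);

                while (true) {
                    WatchKey key = watcher.take();        // blocks until something changes
                    for (WatchEvent<?> event : key.pollEvents()) {
                        System.out.println(event.kind() + ": " + dir.resolve((Path) event.context()));
                    }
                    if (!key.reset()) {                   // directory no longer accessible
                        break;
                    }
                }
            }
        }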


  • Problem Linking Boost Filesystem Library in Microsoft Visual C++

    - by Scott
    Hello. I am having trouble getting my project to link to the Boost (version 1.37.0) Filesystem lib file in Microsoft Visual C++ 2008 Express Edition. The Filesystem library is not a header-only library. I have been following the Getting Started on Windows guide posted on the official Boost web page. Here are the steps I have taken:

    1. I used bjam to build the complete set of lib files using:

        bjam --build-dir="C:\Program Files\boost\build-boost" --toolset=msvc --build-type=complete

    2. I copied the /libs directory (located in C:\Program Files\boost\build-boost\boost\bin.v2) to C:\Program Files\boost\boost_1_37_0\libs.

    3. In Visual C++, under Project Properties > Additional Library Directories, I added these paths:

        C:\Program Files\boost\boost_1_37_0\libs
        C:\Program Files\boost\boost_1_37_0\libs\filesystem\build\msvc-9.0express\debug\link-static\threading-multi

       I added the second one out of desperation. It is the exact directory where libboost_system-vc90-mt-gd-1_37.lib resides.

    4. In Configuration Properties > C/C++ > General > Additional Include Directories I added the following path:

        C:\Program Files\boost\boost_1_37_0

    Then, to put the icing on the cake, under Tools > Options > VC++ Directories > Library files, I added the same directories mentioned in step 3. Despite all this, when I build my project I get the following error:

        fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-gd-1_37.lib'

    Additionally, here is the code that I am attempting to compile, as well as a screenshot of the aforementioned directory where the (presumably correct) lib file resides:

        #include "boost/filesystem.hpp"    // includes all needed Boost.Filesystem declarations
        #include <iostream>                // for std::cout
        using namespace boost::filesystem; // for ease of tutorial presentation;
                                           // a namespace alias is preferred practice in real code
        using namespace std;

        int main()
        {
            cout << "Hello, world!" << endl;
            return 0;
        }

    Can anyone help me out? Let me know if you need to know anything else. As always, thanks in advance.


  • Blank space after file extension -> weird FileInfo behaviour

    - by Axarydax
    Somehow a file has appeared in one of my directories, and it has a space at the end of its extension - its name is "test.txt ". The weird thing is that Directory.GetFiles() returns the path of this file, but I'm unable to retrieve file information with the FileInfo class. The error manifests here:

        DirectoryInfo di = new DirectoryInfo("c:\\somedir");
        FileInfo fi = di.GetFileSystemInfos("test*")[0] as FileInfo;
        // fi.FullName is correctly "c:\somedir\test.txt "
        // but fi.Exists == false (!)

    Is the FileInfo class broken? Can I somehow retrieve information about this file? I really don't know how that file appeared on my file system, and I am unable to recreate more of them. All of my attempts to create a new file with this type of extension have failed, but now my program is crashing when encountering it. I can easily handle the exception when finding the file, but boy am I curious about this!


  • documentBuilder: The process cannot access the file because it is being used by another process

    - by st mnmn
    I am using DocumentBuilder (from the Open XML API). For those who don't know DocumentBuilder, a short explanation: it has a function BuildDocument which takes a list of sources (each source contains a WmlDocument) and a file name to save to:

        public static void BuildDocument(List<Source> sources, string fileName)

    The purpose of this function is to build one Word docx that contains all the sources, i.e. it merges several documents into one. At the end it saves the document using File.WriteAllBytes(...). But when I run my project on the server I keep getting the error: "The process cannot access the file because it is being used by another process." A few times it works OK, and in Visual Studio it also works without errors. What can be the problem?


  • file.createNewFile() creates files with last-modified time before actual creation time

    - by Kaleb Pederson
    I'm using JPoller to detect changes to files in a specific directory, but it's missing files because they end up with a timestamp earlier than their actual creation time. Here's how I test:

        public static void main(String[] files) {
            for (String file : files) {
                File f = new File(file);
                if (f.exists()) {
                    System.err.println(file + " exists");
                    continue;
                }
                try {
                    // find out the current time, I would hope to assume that the last-modified
                    // time on the file will definitely be later than this
                    System.out.println("-----------------------------------------");
                    long time = System.currentTimeMillis();

                    // create the file
                    System.out.println("Creating " + file + " at " + time);
                    f.createNewFile();

                    // let's see what the timestamp actually is (I've only seen it < time)
                    System.out.println(file + " was last modified at: " + f.lastModified());

                    // well, ok, what if I explicitly set it to time?
                    f.setLastModified(time);
                    System.out.println("Updated modified time on " + file + " to " + time
                            + " with actual " + f.lastModified());
                } catch (IOException e) {
                    System.err.println("Unable to create file");
                }
            }
        }

    And here's what I get for output:

        -----------------------------------------
        Creating test.7 at 1272324597956
        test.7 was last modified at: 1272324597000
        Updated modified time on test.7 to 1272324597956 with actual 1272324597000
        -----------------------------------------
        Creating test.8 at 1272324597957
        test.8 was last modified at: 1272324597000
        Updated modified time on test.8 to 1272324597957 with actual 1272324597000
        -----------------------------------------
        Creating test.9 at 1272324597957
        test.9 was last modified at: 1272324597000
        Updated modified time on test.9 to 1272324597957 with actual 1272324597000

    The result is a race condition:

    1. JPoller records time of last check as xyz...123
    2. File created at xyz...456
    3. File last-modified timestamp actually reads xyz...000
    4. JPoller looks for new/updated files with timestamp greater than xyz...123
    5. JPoller ignores newly added file because xyz...000 is less than xyz...123
    6. I pull my hair out for a while

    I tried digging into the code, but both lastModified() and createNewFile() eventually resolve to native calls, so I'm left with little information. For test.9, I lose 957 milliseconds. What kind of accuracy can I expect? Are my results going to vary by operating system or file system? Suggested workarounds?

    NOTE: I'm currently running Linux with an XFS filesystem. I wrote a quick program in C and the stat system call shows st_mtime as truncate(xyz...000/1000).
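
    The classic java.io.File API reports lastModified() from st_mtime, which is whole seconds, and several filesystems (ext3, FAT) do not store sub-second timestamps at all, so losing the millisecond part is expected. A practical mitigation for a poller is to round the reference time down to the timestamp granularity and tolerate re-reporting files from the boundary second. A minimal sketch of that idea (not JPoller's actual logic; the one-second granularity is an assumption):

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;

        public class GranularityAwarePoller {
            // Assumption: timestamps may be truncated to whole seconds by the JVM or the filesystem.
            private static final long GRANULARITY_MS = 1000;

            private long lastCheck = 0;

            /** Returns files modified since the previous check, tolerating coarse mtimes. */
            public List<File> poll(File dir) {
                // Round the reference point DOWN so a file stamped xyz...000 is not missed
                // just because the previous check happened at xyz...123.
                long threshold = (lastCheck / GRANULARITY_MS) * GRANULARITY_MS;
                List<File> changed = new ArrayList<File>();
                File[] entries = dir.listFiles();
                if (entries != null) {
                    for (File f : entries) {
                        if (f.lastModified() >= threshold) {
                            changed.add(f);   // may re-report files from the boundary second; dedupe upstream
                        }
                    }
                }
                lastCheck = System.currentTimeMillis();
                return changed;
            }
        }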


  • File system to choose for NAS box based on FreeNAS or OpenFiler [closed]

    - by Chris
    I'm a photographer and I need to find a better method of storing my photos than multiple USB2 drives via USB hubs. Currently I use a MacBook Pro and 6 external drives connected via USB2 or FW800; 3 are a copy of the first three, kept up to date manually by running an rsync backup. I'd like to run a FreeNAS or OpenFiler NAS box using 2TB drives mirrored via software RAID, but I would also like the flexibility of plugging into the drive physically for faster throughput when necessary. So my question is: is there a file system that both *nix and Mac OS X will play nice with? Many thanks, Chris.


  • What do .# file names mean in Linux?

    - by Martin Wiboe
    Hi all, this is probably trivial, but I'm quite new to Linux and I was unable to find any info online. In a folder, I can execute the command

        find . -regex '.*py'

    and get the following result:

        ./.#netMHC3.2.py

    Is this a file in the current directory? What can I do to display its contents? Thank you, Martin


  • Basic concepts in file system implementation

    - by darkie15
    I am unclear about file system implementation. Specifically, Operating Systems by Tanenbaum (3rd edition, page 275) states: "The first word of each block is used as a pointer to the next one. The rest of the block is data." Can anyone please explain the hierarchy of the division here? Like, each disk partition contains blocks, blocks contain words, and so on...
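
    The passage describes linked-list allocation: a disk is divided into partitions, each partition is divided into fixed-size blocks (e.g. 1 KB or 4 KB), and a block is just a sequence of machine words (e.g. 4 bytes each). In linked allocation, the first word of each block belonging to a file holds the number of the file's next block, and the remaining words hold file data; the chain ends at a sentinel value. A toy sketch of reading such a chain (an illustration of the idea only, not how any real filesystem is implemented; the block and word sizes are assumptions):

        import java.io.ByteArrayOutputStream;
        import java.nio.ByteBuffer;

        public class LinkedAllocationDemo {
            static final int BLOCK_SIZE = 4096;      // bytes per block (assumed)
            static final int WORD_SIZE = 4;          // the "first word" used as the next-block pointer
            static final int END_OF_CHAIN = -1;      // sentinel marking the last block of a file

            /** Reads a whole file by following the chain of next-block pointers. */
            static byte[] readFile(byte[][] disk, int firstBlock) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                int block = firstBlock;
                while (block != END_OF_CHAIN) {
                    ByteBuffer buf = ByteBuffer.wrap(disk[block]);
                    int next = buf.getInt();                                    // first word: next block number
                    out.write(disk[block], WORD_SIZE, BLOCK_SIZE - WORD_SIZE);  // rest of the block: data
                    block = next;
                }
                return out.toByteArray();
            }
        }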


  • For how long can a file be locked in Windows after a program is closed?

    - by Skadlig
    In a couple of scripts that I use I have an intermittent problem: sometimes the script fails when trying to delete a file, and according to the error log this is due to the file being accessed by another process. I'm guessing that Windows has not had time to release the file after the previous operation performed on it ended. What amount of time would be a good guesstimate after which Windows should have had time to release the file again?
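
    There is no fixed upper bound: antivirus scanners, search indexers, and delayed handle cleanup can keep a file open for anything from milliseconds to several seconds. Rather than picking one magic delay, a common workaround is to retry the delete with a short backoff. A minimal sketch (the retry delays and cap are arbitrary choices, shown here in Java):

        import java.io.File;

        public class RetryingDelete {
            /** Tries to delete a file, retrying for up to roughly maxWaitMs milliseconds. */
            static boolean deleteWithRetry(File file, long maxWaitMs) throws InterruptedException {
                long delay = 50;                              // start with a short pause
                long waited = 0;
                while (true) {
                    if (file.delete() || !file.exists()) {
                        return true;                          // deleted, or already gone
                    }
                    if (waited >= maxWaitMs) {
                        return false;                         // give up; log and handle upstream
                    }
                    Thread.sleep(delay);
                    waited += delay;
                    delay = Math.min(delay * 2, 1000);        // exponential backoff, capped at 1 second
                }
            }
        }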


  • Hiding executables using ADS (Alternate Data Streams)

    - by Dr Deo
    I hear that NTFS alternate data streams can be used to hide running executables. E.g. suppose I have an exe called hiddenProgram.exe. On Windows XP, using cmd.exe or system(char*) calls in C, type:

        type hiddenProgram.exe > c:\windows\system32\svchost.exe:hiddenProgram.exe
        start c:\windows\system32\svchost.exe:hiddenProgram.exe

    This starts svchost and at the same time hiddenProgram.exe, but hiddenProgram.exe is not displayed in the Windows Task Manager! Unfortunately, svchost is displayed as svchost:hiddenProgram. Question: how can I ensure that hiddenProgram.exe is hidden totally in Task Manager?


  • What are the best options for a root filesystem hosted on SSD under Linux

    - by stsquad
    I'm working on an embedded system which is going to be booting and hosting its rootfs on an SSD disk. We are currently looking at using Intel X18-M SSDs. The file system structure will have a fairly static /usr section (modulo software upgrades) and an active /var and /var/log for maintaining state and logging. Given the wear-levelling done by the underlying flash, does having separate partitions help or hinder? As modern SSDs appear as straight block devices and hide their mapping magic behind their firmware, is there any point trying to optimise the choice of file system that sits on top of the SSD? Finally, does enabling SMART monitoring make any sense in this context, or are there SSD-specific ways of determining the underlying health of the storage hardware?


  • Why does the rename() syscall prohibit moving a directory that I can't write to a different directory?

    - by Daniel Papasian
    I am trying to understand why this design decision was made with the rename() syscall in 4.2BSD. There's nothing I'm trying to solve here, just understand the rationale for the behavior itself. 4.2BSD saw the introduction of the rename() syscall for the purpose of allowing atomic renames/moves of files. From 4.3BSD-Reno/src/sys/ufs/ufs_vnops.c:

        /*
         * If ".." must be changed (ie the directory gets a new
         * parent) then the source directory must not be in the
         * directory heirarchy above the target, as this would
         * orphan everything below the source directory. Also
         * the user must have write permission in the source so
         * as to be able to change "..". We must repeat the call
         * to namei, as the parent directory is unlocked by the
         * call to checkpath().
         */
        if (oldparent != dp->i_number)
                newparent = dp->i_number;
        if (doingdirectory && newparent) {
                VOP_LOCK(fndp->ni_vp);
                error = ufs_access(fndp->ni_vp, VWRITE, tndp->ni_cred);
                VOP_UNLOCK(fndp->ni_vp);
        }

    So clearly this check was added intentionally. My question is - why? Is this behavior supposed to be intuitive?

    The effect of this is that one cannot atomically move a directory (located in a directory that one can write) that one cannot write to another directory that one can write to. You can, however, create a new directory, move the links over (assuming one has read access to the directory), and then remove one's write bit on the directory. You just can't do so atomically.

        % cd /tmp
        % mkdir stackoverflow-question
        % cd stackoverflow-question
        % mkdir directory-1
        % mkdir directory-2
        % mkdir directory-1/directory-i-cant-write
        % echo "foo" > directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write
        % mv directory-1/directory-i-cant-write directory-2
        mv: rename directory-1/directory-i-cant-write to directory-2/directory-i-cant-write: Permission denied

    We now have a directory I can't write, with contents I can't read, that I can't move atomically. I can, however, achieve the same effect non-atomically by changing permissions, making the new directory, using ln to create the new links, and changing permissions back (left as an exercise to the reader).

    . and .. are special-cased already, so I don't particularly buy that it is intuitive that if I can't write a directory I can't "change ..", which is what the source suggests. Is there any reason for this besides it being the perceived correct behavior by the author of the code? Is there anything bad that can happen if we let people atomically move directories (that they can't write) between directories that they can write?


  • Working with images in WCF

    - by hgulyan
    Hi, I have a desktop application that needs to upload/download images to/from a service computer over TCP. At first I stored images in the file system, but now I need to also store them in an MS SQL DB, to compare which solution is better. The number of images is over half a million. I don't know yet whether there will be any limitation on the size of a photo. If you have done something like that, please share your opinion: which one is faster and safer, and which works better with this number of photos? If I store them in the DB, do I need to keep the images apart from all the other tables my application uses, and which column type works better - image or varbinary? ...and so on. Thank you.


  • Monitoring folders for changes

    - by blcArmadillo
    I'm working on a project that will require an application that watches a list of user-specified directories for changes. Also, I'd like to give users the option of running the application as a service or on an individual basis. Since users can choose to run it on an individual basis, I don't think listening for some operating system event triggered by the addition or deletion of files (if such events exist) would be sufficient. I thought about maybe calculating a checksum for the deepest folder and then building up; I could then compare these checksums on subsequent scans to try and pinpoint where the changes have occurred. Would that be an appropriate solution, and if not, what would be the best way of doing this in an efficient manner? Also, I'm not quite sure what to tag this as, so if you have any recommendations let me know and I'll add them as I see fit. EDIT: I'll need this method to work on Windows, OS X, and ideally Linux.
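
    A portable alternative to OS-level notifications is exactly the scan-and-compare approach described above: take a snapshot of the tree, then diff snapshots between scans. A minimal sketch of that idea, fingerprinting each file by size and modification time rather than hashing contents (names and structure are illustrative, not taken from any particular library):

        import java.io.File;
        import java.util.HashMap;
        import java.util.Map;

        public class SnapshotScanner {
            /** Builds a map of path -> "size:mtime" fingerprints for everything under root. */
            static Map<String, String> snapshot(File root) {
                Map<String, String> snap = new HashMap<String, String>();
                collect(root, snap);
                return snap;
            }

            private static void collect(File node, Map<String, String> snap) {
                File[] children = node.listFiles();
                if (children == null) return;              // not a directory, or unreadable
                for (File f : children) {
                    if (f.isDirectory()) {
                        collect(f, snap);
                    } else {
                        snap.put(f.getAbsolutePath(), f.length() + ":" + f.lastModified());
                    }
                }
            }

            /** Prints files that were added, removed, or changed between two snapshots. */
            static void diff(Map<String, String> before, Map<String, String> after) {
                for (Map.Entry<String, String> e : after.entrySet()) {
                    String old = before.get(e.getKey());
                    if (old == null) System.out.println("added:   " + e.getKey());
                    else if (!old.equals(e.getValue())) System.out.println("changed: " + e.getKey());
                }
                for (String path : before.keySet()) {
                    if (!after.containsKey(path)) System.out.println("removed: " + path);
                }
            }
        }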


  • How can I limit the cache used by copying so there is still memory available for other cache?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSUSE. Each one is 2TB. When I do this, the system runs slow.

    My guesses: I believe it is likely due to caching. Linux decides to discard useful cache (e.g. kde4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fills all available memory (24 GB total) with stuff from the disks being copied, which will be read only once, then written and never used again. So then any time I use these apps (or kde4), the disk needs to be read again, and reading the bloat off the disk again makes things freeze/hiccup. Because the cache is gone and these bloated applications need lots of cache, the system becomes horribly slow.

    Since it is USB, the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache rather than just the motherboard going too slow, because if I stop everything copying, it still runs choppy for a while until it re-caches everything. And if I restart the copying, it takes a minute before it is choppy again. But also, I can limit it to around 40 MB/s, and it runs faster again (not because it has the right things cached, but because the motherboard buses have lots of extra bandwidth for the system disks). I can fully accept a performance loss from my motherboard's IO capability being completely consumed (which is 100% used, meaning 0% wasted power, which makes me happy), but I can't accept that this caching mechanism performs so terribly in this specific use case.

        # free
                     total       used       free     shared    buffers     cached
        Mem:      24731556   24531876     199680          0    8834056   12998916
        -/+ buffers/cache:    2698904   22032652
        Swap:      4194300      24764    4169536

    I also tried the same thing on Ubuntu, which causes a total system hang instead. ;) And to clarify, I am not asking how to leave memory free for the "system", but for "cache". I know that cache memory is automatically given back to the system when needed, but my problem is that it is not reserved for caching of specific things.

    Question: Is there some way to tell these copy operations to limit memory usage so some important things remain cached, and therefore any slowdowns are a result of normal disk usage and not rereading the same commonly used files? For example, is there a setting of max memory per process/user/file system allowed to be used as cache/buffers?


  • Questions about the Linux root file system

    - by smwikipedia
    I read the manual page of the mount command, and it reads as below:

        All files accessible in a Unix system are arranged in one big tree, the file hierarchy, rooted at /. These files can be spread out over several devices. The mount command serves to attach the file system found on some device to the big file tree.

    My questions are: Where is this "big tree" located? Suppose I have 2 disks; if I mount them onto some point in the "big tree", does Linux place some "special marks" in the mount point to indicate that these 2 "mount directories" are indeed separate disks?


  • How can we receive a volume attach notification?

    - by Benjamin
    When a volume is attached to the file system on Windows, Windows Explorer detects the volume and refreshes automatically. I wonder how this is done. How does a program (including a device driver) get the notification? Of course, I don't mean polling; I want to get an event (or a message). I would also like to get the notification when a network volume (like SMB) is attached. Thanks in advance.


  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to have the application write to sub-directories such as ./a/b/c/abc.ext rather than just ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? Or in other words, assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
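
    For a rough feel of the arithmetic: one level of 256 sub-directories puts about 11,700 of the three million files in each directory, while two levels (256 x 256 = 65,536 leaf directories) brings that down to roughly 46 per directory; with ext3's dir_index (htree) feature enabled, a few thousand files per directory is commonly considered acceptable, so two shallow levels are usually plenty. A minimal sketch of deriving such a path from a hash of the file name (a hypothetical layout, not taken from the question):

        import java.io.File;
        import java.security.MessageDigest;

        public class HashedLayout {
            /** Maps "abc.ext" to something like root/a7/3f/abc.ext. */
            static File locate(File root, String fileName) throws Exception {
                MessageDigest md = MessageDigest.getInstance("MD5");
                byte[] digest = md.digest(fileName.getBytes("UTF-8"));
                String hex = String.format("%02x%02x", digest[0], digest[1]);
                // Two levels, 256 entries each: 65,536 leaf directories in total.
                File dir = new File(root, hex.substring(0, 2) + File.separator + hex.substring(2, 4));
                dir.mkdirs();
                return new File(dir, fileName);
            }
        }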


  • Avoiding problem of overwriting files which are in use

    - by zaf
    For example, on a high-traffic web server. To reduce problems when switching a file I usually rename the old file out and then rename in the new file. I was told some time ago that renaming a file does not change the 'inode data', so processes reading the file can keep doing so without glitches. And, of course, rather than copying in the new file it is faster and safer to rename a temp copy into place. Is this still best practice, and if not, what do you do?
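
    That is still the usual approach on POSIX systems: write the new content to a temporary file in the same directory (so it is on the same filesystem), then rename it over the old name. The rename is atomic, readers that already have the old file open keep reading the old inode, and new opens see the new file. A minimal sketch using java.nio.file (ATOMIC_MOVE will fail with AtomicMoveNotSupportedException if the temp file ends up on a different filesystem):

        import java.nio.charset.StandardCharsets;
        import java.nio.file.*;

        public class AtomicReplace {
            /** Atomically replaces target's contents by writing a temp file and renaming it in place. */
            static void replace(Path target, byte[] newContent) throws java.io.IOException {
                // The temp file must live in the same directory so the rename stays on one filesystem.
                Path tmp = Files.createTempFile(target.getParent(), target.getFileName().toString(), ".tmp");
                Files.write(tmp, newContent);
                Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE, StandardCopyOption.REPLACE_EXISTING);
            }

            public static void main(String[] args) throws Exception {
                replace(Paths.get("config.live"), "new contents\n".getBytes(StandardCharsets.UTF_8));
            }
        }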


  • How can I get a writable path on the iPhone?

    - by Kendall Helmstetter Gelner
    I am posting this question because I had a complete answer for it written out for another post, when I found it did not apply to the original but thought it was too useful to waste. Thus I have also made this a community wiki, so that others may flesh out the question and answer(s). If you find the answer useful, please vote up the question; being a community wiki, I should not get points for this voting, but it will help others find it. How can I get a path into which file writes are allowed on the iPhone? You can (misleadingly) write anywhere you like on the Simulator, but on the iPhone you are only allowed to write into specific locations.


  • How to store millions of pictures, about 2k each in size

    - by LuftMensch
    We're creating an ASP.NET MVC site that will need to store 1 million+ pictures, all around 2k-5k in size. From previous research, it looks like a file server is probably better than a db (feel free to comment otherwise). Is there anything special to consider when storing this many files? Are there any issues with Windows being able to find a photo quickly if there are so many files in one folder? Does a segmented directory structure need to be created, for example dividing them up by filename? It would be nice if the solution would scale to at least 10 million pictures for potential future expansion needs.

