Search Results

Search found 2515 results on 101 pages for 'distributed filesystems'.

  • Truecrypt files corrupted after moving PC into another case

    - by Dygerati
    I recently bought a new PC case and transferred all of my PC hardware into it. The only hardware modification was the addition of two identical RAM modules. The entire process went smoothly, and everything worked and booted as before. The only side-effect appeared shortly thereafter, when accessing one of my file-based hidden TrueCrypt volumes: some of the files in the volume - NOT all - seemed to be entirely corrupted. The directory and file names are garbled characters, but a few of the directories in the same volume appear and function normally. Also, all files in the non-hidden TrueCrypt volume were still intact. Is this not weird? The only other real change I could think of would be that the hard drives were connected to different SATA ports on the mobo. I really don't know how the TrueCrypt encryption works well enough to know what could cause this... and the fact that not all the files were corrupted makes it more bizarre still. So, first off (and I'm not too hopeful on this point), would it be possible to restore these files? I had a backup of most, but not all, of the files involved. Other than that I'm just curious how this happened and how I can prevent it next time. Thanks!

    Read the article

  • Drive system file size

    - by rezx
    When I format a new drive, the filesystem itself takes some space: FAT32 takes the least, then NTFS, then ext4. My question is how to know how much space the filesystem will take before formatting the drive, whether it is 1 GB or 100 GB, for FAT32, NTFS and ext4. Edit: when I make a 10 MB drive with FAT32, the size shown is 9.9 MB; when I make a 10 MB drive with ext4, the size shown is 8.1 MB. The same thing happens with bigger sizes: some space is always used even though there are no files on the drive. So where does this space go? If it is used by the filesystem, how can I calculate the space that will be taken before formatting the drive?
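
    A rough way to preview this overhead before formatting the real drive is to build the filesystem in a sparse file and mount it through a loop device; a minimal sketch, assuming the mkfs tools are installed and test.img is a throwaway file of the planned size:

        truncate -s 10G test.img        # sparse file the size of the planned drive
        mkfs.ext4 -F test.img           # or: mkfs.vfat -F 32 test.img / mkfs.ntfs -F test.img
        mkdir -p mnt && sudo mount -o loop test.img mnt
        df -h mnt                       # "Size" vs "Avail" shows the filesystem's own overhead
        sudo umount mnt && rm -r mnt test.img

    Note that for ext4 part of the difference is the default 5% reserve for root, which tune2fs -m can adjust.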

    Read the article

  • What could cause a file system to spontaneously unmount or become invalid for a short time?

    - by Ichorus
    We've got DB2 LUW running on a RHEL box. We had a crash of DB2, and IBM came back and said that a file that DB2 was trying to access (through open64()) unmounted or became invalid. We have done nothing but restart the database and things seem to be running fine. Also, the file in question looks perfectly normal now:

        $ cd /db/log/TEAMS/tmsinst/NODE0000/TEAMS/T0000000/
        $ ls -l
        total 557604
        -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT
        $ file C0000000.CAT
        C0000000.CAT: data
        $ lsattr C0000000.CAT
        ------------- C0000000.CAT
        $ ls -l
        total 557604
        -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT

    With those facts in hand (please correct me if I am mis-interpreting the data at hand), what could cause a file system to 'spontaneously unmount or become invalid for a short time'? What should my next step be? This is on Dell hardware, and we ran their diagnostic tools against the hardware and it came back clean.

    Read the article

  • using one disk as a cache for others

    - by HugoRune
    Hi. Given a PC with several hard drives: is it possible to use one fast disk as a giant file cache? I.e., automatically copying frequently accessed data to that one disk, and transparently redirecting reads and writes to that disk, so that the other drives would only have to be accessed occasionally. (Writes would have to be forwarded to the other disks after a while, of course.) Advantages: the other drives could be powered down most of the time, reducing power, heat and noise; the speed of the other drives would not matter much; the cache disk could be solid state. How can I set such a system up? What OS supports these options? Is this possible at all using Windows or Linux?
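
    On Linux, one mechanism that roughly matches the setup described above is bcache, which puts a fast device in front of slower backing devices; a minimal sketch, assuming a kernel with bcache support and that /dev/sdb (slow backing disk) and /dev/sdc (fast cache disk) can both be wiped:

        sudo make-bcache -B /dev/sdb      # register the slow disk as a backing device (appears as /dev/bcache0)
        sudo make-bcache -C /dev/sdc      # register the fast disk as a cache set
        # attach the cache set (UUID from 'bcache-super-show /dev/sdc') to the backing device
        echo <cache-set-uuid> | sudo tee /sys/block/bcache0/bcache/attach
        echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode
        # then create a filesystem on /dev/bcache0 and mount it as usual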

    Read the article

  • optimal folder structure for storing 100k files on a USB drive

    - by cherouvim
    I need to store 100k files (around 40 GB) on a USB drive. Each file has a unique int id (e.g. 45000). Option one is to put all files in a single folder:

        root/
        root/1.pdf
        root/2.pdf
        root/3.pdf
        ...
        root/567.pdf
        root/568.pdf
        root/569.pdf
        ...
        root/10001.pdf
        root/10002.pdf
        root/10003.pdf
        ...
        root/99998.pdf
        root/99999.pdf
        root/100000.pdf

    Option two is to create a [1-9][0-9]* folder hierarchy based on that id:

        root/
        root/1/file.pdf
        root/2/file.pdf
        root/3/file.pdf
        ...
        root/5/6/7/file.pdf
        root/5/6/8/file.pdf
        root/5/6/9/file.pdf
        ...
        root/1/0/0/0/1/file.pdf
        root/1/0/0/0/2/file.pdf
        root/1/0/0/0/3/file.pdf
        ...
        root/9/9/9/9/8/file.pdf
        root/9/9/9/9/9/file.pdf
        root/1/0/0/0/0/0/file.pdf

    Which option will scale better? I understand that the second option will require tons of folders, but each folder will contain at most 10 folders and 1 file. Maintenance will not be an issue since everything will be controlled by an application. Note that this is a USB drive on Linux, and based on the above I'd also like to know whether I should go with FAT32 or NTFS.
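
    For illustration, a small shell sketch of the second scheme, turning a numeric id into one directory per digit (the id_to_path name is just for this example):

        # id_to_path 45000  ->  root/4/5/0/0/0/file.pdf
        id_to_path() {
            printf 'root/%s/file.pdf\n' "$(printf '%s' "$1" | sed 's|.|&/|g; s|/$||')"
        }
        id_to_path 567                               # prints root/5/6/7/file.pdf
        mkdir -p "$(dirname "$(id_to_path 567)")"    # creates the nested directories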

    Read the article

  • Trying to move the Users and Program Files directories to another partition

    - by Jharwood
    I followed this guide: http://lifehacker.com/5467758/move-the-users-directory-in-windows-7 and pointed my C:\Users, C:\Program Files (x86), and C:\Program Files directories to their respective counterparts on the B: drive. I used mklink /J D:\Users B:\Users (D: was the C: drive's letter in the recovery environment), but when I boot, all I get is a message that the profile can't be loaded. I have to accomplish this, and I don't really mind reinstalling as it's a fresh install anyway.

    Read the article

  • How to clean up the tmp folder safely on Linux

    - by Syncopated
    I use RAM for my tmpfs /tmp, 2GB, to be exact. Normally this is enough, but sometimes processes create files in there and fail to clean up after themselves. This can happen if they crash. I need to delete these orphaned tmp files, or else future processes will run out of space on /tmp. How can I safely garbage-collect /tmp? Some people do it by checking the last modification timestamp, but this approach is unsafe because there can be long-running processes that still need those files. A safer approach is to combine the last-modification-timestamp condition with the condition that no process has a file handle for the file. Is there a program/script/etc. that embodies this approach, or some other approach that is also safe? Incidentally, does Linux/Unix allow a mode of file opening with creation wherein the created file is deleted when the creating process terminates, even if it's from a crash?
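
    A sketch of exactly that combined check (old enough and not held open by any process), assuming GNU find and fuser are available and /tmp is its own mount:

        # delete regular files older than a day that no process currently has open
        find /tmp -xdev -type f -mmin +1440 ! -exec fuser -s {} \; -delete
        # sweep up empty leftover directories as well
        find /tmp -xdev -mindepth 1 -type d -empty -mtime +1 -delete

    As for the last question: the classic Unix idiom is to open a temporary file and immediately unlink it, so the data vanishes when the last descriptor is closed even after a crash; tmpfile() in C does this, and newer Linux kernels also offer open() with O_TMPFILE.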

    Read the article

  • What are 'damaged files' on external hard drive (HFS format for OS X)?

    - by dtlussier
    I have an external HD formatted with the default HFS (Mac OS Extended - Journaled), and every once in a while I get a folder called DamagedFiles in the root of the volume. The folder contains a collection of links to files on the drive. In general the files seem fine; I am, for example, able to open the images or text files without a problem. Is this serious? What can I do to fix this problem? Any advice would be great, as I couldn't find anything on here or via Google that addresses this problem in particular. Many thanks.

    Read the article

  • Slower/cached Linux file system required

    - by Chopper3
    I know it sounds odd, but I need a slower or cached filesystem. I have a lot of firewalls that are syslog'ing their data to a pair of Linux VMs, which write these files to their 'local' (actually FC SAN attached) ext3-formatted disks and also forward the messages to our Splunk servers. The problem is that the syslog server is writing these syslog messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN - which can handle this workload right now, but our FW traffic is going to grow by at least a factor of 5000% (really) in the coming months, and that will be a pain for the SAN, so I want to fix the root cause before it becomes a problem. So I need some help figuring out a way of getting these writes cached or held off in some way from the 'physical' disks, so that the VMs fire off larger, but less frequent, writes - there's no way of avoiding these writes, but there's no need for them to be so many tiny ones. I've looked at the various ext3 options, setting noatime and nodiratime, but that hasn't made much of a dent in the problem. Obviously I'm investigating other file systems, but thought I'd throw this out in case others have the same problem in the future. Oh, and I can't just forward these messages to Splunk; our firewall team insist they're kept in their original format for diag purposes.
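
    For the ext3 route already being explored, the commit interval and writeback journaling are the knobs that let the page cache absorb the tiny writes and flush them in larger batches; this is only a sketch, with device and mount point as placeholders, and it knowingly trades the last minute or so of syslog data on a crash for fewer, larger writes:

        # /etc/fstab entry for the syslog data volume (placeholders)
        /dev/sdb1  /var/log/remote  ext3  noatime,nodiratime,data=writeback,commit=60  0  2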

    Read the article

  • Does ZFS replace the need for hardware/software RAID?

    - by user53744
    I want to provide protection against data loss on my servers. Typically, I'd use hardware RAID 1 or 5, but I've been reading up on ZFS. Is it correct that ZFS itself provides RAID 1 or 5 like data protection WITHOUT needing a RAID controller card? If so, I assume a single hard drive is not enough to provide data protection since if that drive fails, all data fails, so how many hard drives do I need to be running for ZFS to provide this protection?
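
    For reference, ZFS builds the redundancy itself out of plain disks, so no RAID controller is needed: a mirror needs at least two disks and raidz (single-parity, RAID-5-like) at least three. A minimal sketch, with the pool name and device names as placeholders:

        # RAID-1-like: two-way mirror
        zpool create tank mirror /dev/sdb /dev/sdc
        # RAID-5-like: single-parity raidz across three disks
        zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
        zpool status tank    # shows the layout and disk health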

    Read the article

  • Proving file creation dates

    - by Nils Munch
    In a weird case surrounding the copyrights of a software system I have developed, I rely on the fact that I have all the source files of the system in question, created long before I joined the company that claims to own the system. The company being sued by yours truly says that I have simply manipulated the files to appear to be from that date. Is it even possible to fake or manipulate creation dates? And if so, how can I "prove" that the files really are that old? Luckily, I stored my project on GitHub, which confirmed the fact that the files are from that era, but that is beside the point. I run purely Apple OS X.
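
    As an aside, the repository history itself can be listed to show when each file appeared; a sketch, with the file path as a placeholder:

        git log --follow --date=iso --format='%ad  %h  %s' -- path/to/file
        git log --diff-filter=A --date=iso --format='%ad  %h  (added)' -- path/to/file   # the commit that first added the file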

    Read the article

  • Hiding a directory through the FAT table

    - by hennobal
    I've looked into the FAT file system, trying to find a way to make a directory hidden from view of the user. This has been done with malware previously, so it should be possible. The SpyEye trojan hid inside a directory C:\cleansweep.exe\ which was only reachable through the command line. I know deletion is possible by substituting the first character of the directory in the FAT table with 0xE5, but then it will not be accessible. Any ideas on how the scenario from SpyEye can be recreated? Any filesystem is interesting, but ideally FAT or NTFS.

    Read the article

  • Command or tool to display list of connections to a Windows file share

    - by BizTalkMama
    Is there a Windows command or tool that can tell me what users or computers are connected to a Windows fileshare? Here's why I'm looking for this: I've run into issues in the past where our deployment team has deployed BizTalk applications to one of our environments using the wrong bindings, leaving us with two receive locations pointing to the same file share (i.e. both dev and test servers point to dev receive location uri). When this occurs, the two environments in question tend to take turns processing the files received (meaning if I am attempting to debug something in one environment and the other environment has picked the file up, it looks as if my test file has disappeared into thin air). We have several different environments, plus individual developer machines, and I'd rather not have to check each individually to find the culprit. I'm looking for a quick way to detect what locations are connected to the share once I notice my test files vanishing. If I can determine the connections that are invalid, I can go directly to the person responsible for that environment and avoid the time it takes to randomly ask around. Or if the connections appear to be correct, I can go directly to troubleshooting where in the process the message gets lost. Any suggestions?

    Read the article

  • VFS and FS i-node difference

    - by gaffcz
    What is the difference between a VFS i-node and an FS (e.g. ext) i-node? Is it possible that the ext i-node is persistent (contains/points to data blocks), while the VFS i-node is created in the i-node cache only after the ext i-node is read/used? Or is the VFS i-node just an image of the FS i-node (i.e. the same thing), and do i-nodes have to be emulated (how?) for filesystems that do not use them (e.g. FAT, NTFS), so that the VFS can work with those filesystems as if they supported i-nodes?

    Read the article

  • Is there a good FAT driver for FUSE? (Lightweight, not mountlo)

    - by Vi
    The FUSE filesystem list shows FuseFat and FatFuse. Both are old: FatFuse is read-only, and FuseFat does not build and probably depends on glib. Right now I'm using mountlo for the task (mounting USB drives in a generic way without root access or suid binaries, except for fusermount itself), but it looks too big for such a task. Is there a good vfat FUSE driver?

    Read the article

  • Should I use VFAT or ext3 for a 1 TB external USB hard drive?

    - by ihuston
    I have a 1 TB external USB hard drive which I want to use to back up data from my home and office desktops (both running Linux). Should I format the drive (possibly split into a few partitions) as vfat or ext3? I don't anticipate using the drive with Windows very often, so this is not a primary concern. The main thing holding me back from just using ext3 is the problems you can have when two different users (home and work accounts) try to access each other's data. Is there any way to mount an ext3 drive with user id mapping?

    Read the article

  • How to work around Windows error 0x80070057?

    - by Chris
    So today I was trying to copy a simple .PDF file from a local folder on my machine to a network folder, and every time I tried to move the file I would get an error dialog box stating that I was passing a wrong parameter and giving me the error code 0x80070057. Does anyone know of a way to work around this error so that I can get this file copied? I have tried renaming the file with an underscore in front. I have tried copying from my local folder to my desktop and then to the remote folder. I have tried hunting down anything that might be using the file. An example of the file name is: Flowers & Trees.pdf

    Read the article

  • Trying to move Users And Program Files Directories to Another Partition

    - by Jharwood
    Currently I've followed this guide. I pointed my C:\Users, C:\Program Files (x86), and C:\Program Files directories to their respective counterparts on the B: drive. I used mklink /J D:\Users B:\Users (D: was the C: drive's letter in the recovery environment), but when the computer boots, all I get is a message that the profile can't be loaded. I have to accomplish this, and I don't really mind reinstalling as it's a fresh install anyway.

    Read the article

  • What causes high CPU usage on the server during file upload?

    - by bosiang
    When I try to upload a huge file (approx 2 GB), the server CPU usage goes really high. What should I do to fix this? I just use a standard HTML form and PHP for the file upload. I'm sorry if I posted on the wrong forum; please point me in the right direction. Here is the result of the "top" command while uploading 4 files (18 MB, 38 MB, 60 MB, 33 MB):

        1904 apache  20   0 33504 5740 1952 R 28.3  0.2  0:02.19 httpd
        1905 apache  20   0 33504 5740 1952 R 28.3  0.2  0:01.99 httpd
        1903 apache  20   0 33232 6968 3060 R 28.0  0.2  0:01.98 httpd
        1910 apache  20   0 33240 6020 2248 S 11.5  0.2  0:02.85 httpd
        2133 root    20   0  2656 1124  896 R  1.6  0.0  0:00.71 top
           1 root    20   0  2864 1404 1188 S  0.0  0.0  0:03.99 init

    This is the chunking code; even though I don't use it (just a simple file upload), the upload still causes that high CPU usage:

        function sendRequest() {
            // clean the screen
            //bars.innerHTML = '';
            var file = document.getElementById('fileToUpload');
            for (var i = 0; i < file.files.length; i++) {
                var blob = file.files[i];
                var originalFileName = blob.name;
                var filePart = 0;
                const BYTES_PER_CHUNK = 100 * 1024 * 1024; // 100 MB chunk size
                var realFileSize = blob.size;
                var start = 0;
                var end = BYTES_PER_CHUNK;
                var totalChunks = Math.ceil(realFileSize / BYTES_PER_CHUNK);
                alert(realFileSize);
                while (start < realFileSize) {
                    var chunk;
                    if (blob.webkitSlice) {            // for Google Chrome
                        chunk = blob.webkitSlice(start, end);
                    } else if (blob.mozSlice) {        // for Mozilla Firefox
                        chunk = blob.mozSlice(start, end);
                    } else {                           // standard Blob.slice fallback
                        chunk = blob.slice(start, end);
                    }
                    uploadFile(chunk, originalFileName, filePart, totalChunks, i);
                    filePart++;
                    start = end;
                    end = start + BYTES_PER_CHUNK;
                }
            }
        }

    Read the article

  • A space-efficient filesystem for grow-as-needed virtual disks?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks. Since they only grow as needed, this makes them perfect for fast backup, overallocation and creation speed. Since file systems are usually based on physical disks, they have the tendency to use the whole available area [1] in order to increase speed [2] or reliability [3]. I'm searching for a filesystem that does the exact opposite: one that tries to touch the minimum number of blocks needed, through aggressive block reuse. I would happily trade some performance for space usage. There is already a similar question, but it is rather general. I have a very specific goal: space-efficiency.

    [1] Like page caching uses all the free physical memory.
    [2] Canonical example: online defragmentation.
    [3] Canonical example: snapshotting.

    Read the article

  • What is the best file system to use for a second hard drive when dual booting between WinXP and Win7

    - by Corey
    I am dual booting for legacy reasons, and I have a 2nd internal drive that I would like to use from both XP and 7. Should I go with the standard NTFS? (Will the security features be an issue, with different SIDs for the different users?) Should I go with FAT32? Should I try out the new exFAT? Also, I currently have two of my 3 drives set up as "dynamic disks" with 1 spanned volume created on them (I did this from XP). Win7 can see them/it fine. Is this an OK thing to do?

    Read the article

  • running a web server with encrypted file system (all or part of it)

    - by Carlos
    Hi, I need a webserver (LAMP) running inside a virtual machine (#1), running as a service (#2), in headless mode (#3), with part or all of the filesystem encrypted (#4). The virtual machine will be started with no user intervention and provide access to a web application for users on the host machine. Points #1, #2 and #3 are checked and proved to be working fine with Sun VirtualBox, so my question is about #4: can I encrypt the whole filesystem and still access the webserver (using a browser), or will GRUB ask me for a password? If encrypting the whole filesystem is not an option, can I encrypt only /home and /var/www? Will Apache/PHP be able to use files in /home or /var/www without asking for a password or mounting these partitions manually? Thanks
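
    On the second variant (encrypting only specific trees), a minimal sketch of a LUKS volume for /var/www that unlocks from a key file at boot, so Apache/PHP never see a prompt; the device, key path and mapper name are placeholders, and keeping the key on the unencrypted root of course limits what the encryption protects against:

        sudo cryptsetup luksFormat /dev/vdb1                     # dedicated partition for /var/www
        sudo dd if=/dev/urandom of=/root/www.key bs=512 count=4
        sudo chmod 400 /root/www.key
        sudo cryptsetup luksAddKey /dev/vdb1 /root/www.key
        sudo cryptsetup open /dev/vdb1 www_crypt --key-file /root/www.key
        sudo mkfs.ext4 /dev/mapper/www_crypt
        # /etc/crypttab:  www_crypt  /dev/vdb1  /root/www.key  luks
        # /etc/fstab:     /dev/mapper/www_crypt  /var/www  ext4  defaults  0  2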

    Read the article
