Search Results

Search found 2834 results on 114 pages for 'filesystem corruption'.

Page 3 of 114

  • filesystem compatible with FreeNAS and Windows

    - by Daniel
    Hi all, I'm planning on using FreeNAS (I was considering Openfiler, but FreeNAS seems simpler) for my home NAS box running off ESXi. I have managed to get local SATA drives to mount in ESXi (http://serverfault.com/questions/216902/esxi-add-datastore-without-partitioning). I've had one of the drives fail on me before, and I was able to retrieve most of the data off it using Windows tools (I'm not much of a Linux guy; I know enough to be dangerous!). If I go the FreeNAS route, then in the event that something goes bad, what would be the best filesystem to use so that I could pop the drive out of the FreeNAS box (VM), put it in another PC running Windows, and run various recovery tools to get the data back? All in all it's not a major problem if I lose the data, just a bit annoying, so I'm not looking for suggestions about backups etc. I was considering keeping NTFS, which the drives are already formatted as, but it appears that while FreeNAS does support NTFS, that support is a bit buggy and not 100% reliable. I read that on a forum somewhere; does anyone know if this is still true?


  • Is an ext3 Linux filesystem byte-order independent?

    - by Lothar
    I have a good old HP C3700 workstation with a PA-RISC CPU here that I would like to use as a Subversion server for a very large repository. I just worry about what happens if the workstation dies (everybody who knows this machine knows it runs like an Abrams tank, so that's unlikely to happen in the next decade). I'm using Debian Linux on this system. If the mainboard dies, can I just plug the SCSI drive into a normal Intel Linux PC and read the files from it? Which software RAID levels would be safe?
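
    On the endianness point: the ext2/ext3 on-disk format defines all multi-byte fields as little-endian, whatever CPU wrote them, which is why a volume created on a big-endian PA-RISC machine reads fine on x86. A tiny sketch (my illustration, not from the question) that demonstrates this by reading the superblock magic 0xEF53, which sits 56 bytes into the superblock at offset 1024:

        #include <cstdio>

        // Read the ext2/ext3 superblock magic from a device or image file.
        // The two magic bytes are stored little-endian on disk regardless
        // of the host CPU's byte order.
        int main(int argc, char** argv)
        {
            if (argc != 2) {
                std::fprintf(stderr, "usage: %s <device-or-image>\n", argv[0]);
                return 1;
            }
            std::FILE* dev = std::fopen(argv[1], "rb");
            if (!dev) { std::perror("fopen"); return 1; }
            unsigned char m[2];
            // Superblock starts 1024 bytes in; s_magic is 56 bytes into it.
            if (std::fseek(dev, 1024 + 56, SEEK_SET) == 0 &&
                std::fread(m, 1, 2, dev) == 2) {
                unsigned magic = m[0] | (m[1] << 8); // assemble little-endian
                std::printf("magic = 0x%04X (%s)\n", magic,
                            magic == 0xEF53 ? "ext2/3/4" : "not ext*");
            }
            std::fclose(dev);
            return 0;
        }

    Run it (as root) against the same partition on both machines and the same bytes come back in the same order.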


  • Common filesystem for servers behind a rackspace load balancer

    - by thanos panousis
    Our PHP application consists of a single web server that receives files from clients and performs a CPU-intensive analysis on them. Right now, analysing a single user upload can take 3 seconds at 100% CPU, which caps our capacity at about 1/3 of a request per second. My team's requirement is to increase capacity without a lot of code reengineering. A possible solution would be to set up a load balancer in front of multiple servers running the same app, connecting to a common DB. The problem is that the analysis outputs files to local disk; a load balancer would increase capacity, but the files would not be shared between servers, so subsequent client requests may fail. We are hosted on Rackspace. Is there a way to configure some sort of "common" storage for all servers, without having to rewrite our file-persistence code? The current code relies on simple fopens etc. What are our options?


  • How can I "shadow" the filesystem on Linux?

    - by happy_emi
    On a Linux system I sometimes need to run a script as root which will add or modify several files on my filesystem. Basically, I'd like to know exactly which files are modified, and how, WITHOUT opening the script and trying to guess from the code. I was thinking about using something like unionfs: the main fs would be accessible read-only, and all changes would be written to a file used as a partition and "mounted" in write mode. Are there other ways to achieve the same goal (i.e. other than unionfs)?


  • Moving files within an ext4 filesystem?

    - by HT74
    I'd like to decrease the access time for some files by moving them to the beginning of the filesystem. Task 1: clear a certain block range at the beginning of the fs (moving existing files to free space elsewhere). Task 2: move the files in question into that block range (which should be able to grow a bit). How would I do that?
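
    Neither task has a direct "place this file at block N" knob in ext4, so any approach starts with seeing where a file's blocks actually sit. A sketch (my illustration, assuming the long-standing Linux FIBMAP ioctl; must run as root):

        #include <cstdio>
        #include <fcntl.h>
        #include <linux/fs.h>   // FIBMAP
        #include <sys/ioctl.h>
        #include <sys/stat.h>
        #include <unistd.h>

        // Map each logical block of a file to its physical block on the device.
        // Low physical numbers mean "near the beginning of the filesystem".
        int main(int argc, char** argv)
        {
            if (argc != 2) { std::fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
            int fd = open(argv[1], O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }
            struct stat st;
            if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }
            long nblocks = (st.st_size + st.st_blksize - 1) / st.st_blksize;
            for (long i = 0; i < nblocks; ++i) {
                int blk = (int)i;   // in: logical block, out: physical block
                if (ioctl(fd, FIBMAP, &blk) < 0) { perror("ioctl(FIBMAP)"); break; }
                std::printf("logical %ld -> physical %d\n", i, blk);
            }
            close(fd);
            return 0;
        }

    Actually relocating the data then comes down to copy-and-replace into cleared space (or tools such as e4defrag); ext4 itself decides final placement.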


  • What is the correct fstab entry for automounting a remote filesystem?

    - by user101815
    I'm running Kubuntu 13.10 and I'm trying to arrange for a remote filesystem to be automounted on startup. I have the following entry for it in fstab:

        hpmediavault:/shares/Volume1/FileShare /mnt/mediavault nfs auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0

    I can mount it perfectly well with:

        sudo mount /mnt/mediavault

    But what does it take so that it will be mounted on startup with no explicit action on my part?


  • Surprising corruption and never-ending fsck after resizing a filesystem

    - by Steve Kemp
    The system in question has Debian Lenny installed, running a 2.6.27.38 kernel. It has 16 GB of memory and 8 x 1 TB drives running behind a 3Ware RAID card. The storage is managed via LVM. Short version: we were running a KVM guest which had 1.7 TB of storage allocated to it, and the guest was reaching a full disk, so we decided to resize the disk it was running upon. We're pretty familiar with LVM and KVM, so we figured this would be a painless operation:

        1. Stop the KVM guest.
        2. Extend the size of the LVM volume: lvextend -L+500Gb ...
        3. Check the filesystem: e2fsck -f /dev/mapper/...
        4. Resize the filesystem: resize2fs /dev/mapper/...
        5. Start the guest.

    The guest booted successfully, and running df showed the extra space; however, a short time later the system decided to remount the filesystem read-only, without any explicit indication of error. Being paranoid, we shut the guest down and ran the filesystem check again. Given the new size of the filesystem we expected this to take a while, but it has now been running for 24 hours and there is no indication of how long it will take. Using strace I can see the fsck is "doing stuff"; similarly, running vmstat 1 I can see that a lot of block input/output operations are occurring. So now my question is threefold: (1) Has anybody come across a similar situation? Generally we've done this kind of resize in the past with zero issues. (2) What is the most likely cause? (The 3Ware card shows the RAID arrays of the backing stores as A-OK, the host system hasn't rebooted, and nothing in dmesg looks important or unusual.) (3) Ignoring btrfs (not mature enough to trust), should we make our larger partitions with a filesystem other than ext3 in the future, either to avoid this corruption (whatever the cause) or to reduce the fsck time? xfs seems like the obvious candidate?


  • iPhone file corruption

    - by sfider
    Is it possible (on iPhone/iPod touch) for a file written like this:

        if (FILE* file = fopen(filename, "wb")) {
            fwrite(buf, buf_size, 1, file);
            fclose(file);
        }

    to get corrupted, e.g. when the app is forced to terminate? From what I know, fwrite should be an atomic operation, so when I write the whole file with one call, no corruption should occur. I could not find any information on the net that would say otherwise.
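
    For the record, stdio gives no such guarantee: a forced termination can leave a partially flushed file behind. The usual remedy, sketched below under the assumption of POSIX rename() semantics (which iOS provides; the ".tmp" suffix and function name are illustrative), is to write to a temporary name and swap it into place in one step:

        #include <cstdio>
        #include <string>

        // Write buf to "<path>.tmp", then rename over the target. rename()
        // replaces the destination atomically, so a crash leaves either the
        // complete old file or the complete new one, never a torn write.
        bool write_file_atomic(const std::string& path, const void* buf, size_t buf_size)
        {
            const std::string tmp = path + ".tmp"; // illustrative temp name
            FILE* file = std::fopen(tmp.c_str(), "wb");
            if (!file)
                return false;
            const bool ok = std::fwrite(buf, buf_size, 1, file) == 1;
            std::fclose(file);
            if (ok && std::rename(tmp.c_str(), path.c_str()) == 0)
                return true;
            std::remove(tmp.c_str()); // clean up the partial file on failure
            return false;
        }

    For durability across power loss you would additionally fsync() the file before the rename; fclose() only flushes user-space buffers to the kernel.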


  • How does "rm" on an NTFS filesystem differ from Windows' own implementation?

    - by DavideRossi
    I have an external USB disk with an NTFS filesystem on it. If I remove a file from Windows and run one of the several "undelete" utilities (say, TestDisk), I can easily recover the file (because it's still there, just marked as deleted). If I remove the file from Linux, no utility (unless I use a deep-search, signature-based one) can recover the file. Why? How is unlink implemented in Linux's NTFS filesystem code? It looks like it does not just "mark the file as deleted" but wipes away some on-disk structure; is this the case?


  • Command line option to check which filesystem I am using?

    - by j-g-faustus
    Is there a command that will show which filesystem (ext3, ext4, FAT32, ...) the various partitions and disks are using, similar to how sudo fdisk -l lists information about disks and partitions?

    Update: Accepted the "mount" answer, as mount works without specifying the filesystem type (after commenting out the relevant entries in fstab, if any):

        $ sudo mount /dev/sdf1 /mnt/tmp
        $ mount | grep /mnt/tmp
        /dev/sdf1 on /mnt/tmp type ext3 (rw)

    Found another option in ubuntuforums, blkid:

        # system disk
        $ sudo blkid /dev/sda1
        /dev/sda1: UUID="...." TYPE="ext4"

        # USB disk
        $ sudo blkid /dev/sdf1
        /dev/sdf1: LABEL="backup" UUID="..." TYPE="ext3"

        # mdadm RAID
        $ sudo blkid /dev/md0
        /dev/md0: LABEL="raid" UUID="..." TYPE="ext4"

    Thanks for your help!


  • faking a filesystem / virtual filesystem

    - by attwad
    I have a web service to which users upload Python scripts that are then run on a server. Those scripts process files that are on the server, and I want them to be able to see only a certain hierarchy of the server's filesystem (ideally: a temporary folder into which I copy the files I want processed, together with the scripts). The server will ultimately be Linux-based, but if a solution is also possible on Windows it would be nice to know how. What I thought of is creating a user with restricted access to folders of the FS (ultimately only the folder containing the scripts and files) and launching the Python interpreter as this user. Can someone give me a better alternative? Relying only on this makes me feel insecure; I would like a real sandboxing or virtual FS feature where I could safely run untrusted code.
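
    The restricted-user idea the question already sketches can be a very small launcher. A sketch under stated assumptions: it must be started as root, the account "scriptrunner" and the paths are hypothetical, and ordinary Unix permissions only narrow what the script can see; this is not true sandboxing (no syscall filtering, no resource limits):

        #include <cstdio>
        #include <pwd.h>
        #include <unistd.h>

        // Drop privileges to a dedicated low-privilege account, move into the
        // per-job work folder, then hand control to the interpreter.
        int main()
        {
            struct passwd* pw = getpwnam("scriptrunner"); // hypothetical user
            if (!pw) { std::fprintf(stderr, "no such user\n"); return 1; }
            if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
                std::perror("drop privileges");
                return 1;
            }
            if (chdir("/srv/scratch/job1") != 0) {        // hypothetical folder
                std::perror("chdir");
                return 1;
            }
            execlp("python", "python", "user_script.py", (char*)0);
            std::perror("execlp");                        // reached only on failure
            return 1;
        }

    Note that setgid() must come before setuid(); once the process has given up root it can no longer change its group.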


  • Demo on Data Guard Protection From Lost-Write Corruption

    - by Rene Kundersma
    Today I received the news that a new demo has been made available on OTN for Data Guard protection from lost-write corruption. Since this is a typical MAA solution and a very nice demo, I decided to mention this great feature in this blog as well, even though it has been a recommended best practice for some time. A lost write occurs when an I/O subsystem acknowledges the completion of a block write even though the write did not actually reach persistent storage. On a subsequent read of that block, the primary database receives the stale version of the data block, which might then be used to update other blocks of the database, thereby corrupting it. Lost writes can occur after an OS or storage device driver failure, faulty host bus adapters, disk controller failures, or volume manager errors. The demo reproduces exactly this scenario. When a primary-database lost write is detected by a Data Guard physical standby database, Redo Apply (MRP) stops and the standby signals an ORA-752 error to explicitly indicate that a primary lost write has occurred, preventing the corruption from spreading to the standby database. Links: MOS note 1302539.1, "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration"; the demo; MAA Best Practices. Rene Kundersma


  • Why am I getting 'Heap Corruption'?

    - by fneep
    Please don't crucify me for this one. I decided it might be good to use a char* because the string I intended to build was of a known size. I am also aware that if timeinfo->tm_hour returns something other than 2 digits, things are going to go badly wrong. That said, when this function returns, Visual Studio goes ape at me about HEAP CORRUPTION. What's going wrong? (Also, should I just use a string builder?)

        void cLogger::_writelogmessage(std::string Message)
        {
            time_t rawtime;
            struct tm* timeinfo = 0;
            time(&rawtime);
            timeinfo = localtime(&rawtime);

            char* MessageBuffer = new char[Message.length()+11];
            char* msgptr = MessageBuffer;

            _itoa(timeinfo->tm_hour, msgptr, 10);
            msgptr += 2;
            strcpy(msgptr, "::");
            msgptr += 2;
            _itoa(timeinfo->tm_min, msgptr, 10);
            msgptr += 2;
            strcpy(msgptr, "::");
            msgptr += 2;
            _itoa(timeinfo->tm_sec, msgptr, 10);
            msgptr += 2;
            strcpy(msgptr, " ");
            msgptr += 1;
            strcpy(msgptr, Message.c_str());

            _file << MessageBuffer;
            delete[] MessageBuffer;
        }
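
    The buffer arithmetic itself is the likely culprit: the prefix "HH::MM::SS " is 11 characters, so the final strcpy needs Message.length()+12 bytes (11 for the prefix, the message, plus the terminating '\0'), one more than the new[] allocates; the debug heap reports exactly this kind of one-byte overrun when delete[] runs. A sketch of a version that sidesteps buffer math entirely (my rewrite, not the asker's code):

        #include <ctime>
        #include <iomanip>
        #include <sstream>
        #include <string>

        // Build "HH::MM::SS <message>" with a stream, so there is no manual
        // size calculation to get wrong and single-digit fields are padded.
        std::string timestamped(const std::string& message)
        {
            std::time_t rawtime = std::time(0);
            std::tm* timeinfo = std::localtime(&rawtime);
            std::ostringstream out;
            out << std::setfill('0')
                << std::setw(2) << timeinfo->tm_hour << "::"
                << std::setw(2) << timeinfo->tm_min  << "::"
                << std::setw(2) << timeinfo->tm_sec  << ' '
                << message;
            return out.str();
        }

    Then _file << timestamped(Message); replaces the whole buffer dance, and the single-digit-hour worry disappears as well.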


  • Boost Filesystem Library Visual C++ Compile Error

    - by John Miller
    I'm having the following issue just trying to compile/run some of the example programs that come with the Boost Filesystem Library. I'm using MS Visual C++ with Visual Studio .NET (2003). I've installed the Boost libraries, versions 1.38 and 1.39 (just in case there was a version problem), using the BoostPro installers. If I just try to include /boost/filesystem/operations.hpp I receive the following error:

        \boost_1_38\boost\system\error_code.hpp(230) : error C2039: 'type' : is not a member of 'boost::enable_if<boost::system::is_error_condition_enum<Cond,boost::detail::enable_if_default_T>'

    Any help is greatly appreciated. Thank you!


  • Finding the underlying cause of Windows 7 account corruption

    - by Carl Jokl
    I have been having trouble with my sister's computer, which I built. It is running Windows 7 Ultimate x64. The problem is that the user accounts keep becoming corrupted. The problem first manifests itself with Windows saying the profile failed to load properly and logging the user into a temporary profile. Eventually the account will not allow login at all, with an error message along the lines of the authentication service failing the login. I have found information about this problem and how to fix it: something corrupts the account profile, and backing up and recreating the accounts fixes the problem. I have been able to fix things and get logins working again, but over a period of usually about a week it happens again; bit by bit the accounts corrupt and then it is back to square one. I am frustrated because I don't know the underlying cause, i.e. what is corrupting the accounts in the first place; at the moment I am just treating the symptoms. I was hoping someone with more experience of this problem might be able to help me find the root cause. Some articles suggest that Norton Internet Security, which is installed, is a big culprit of this problem; I could try uninstalling Norton and see if it helps. The one thing which is different about this computer from any other I have built is that it has both a solid-state drive and a hard drive. The documents and settings, i.e. the Users directory, are stored on the hard drive. This was done following an article I found on the Internet about moving the user account data onto a separate drive on Windows 7. Moving the user accounts is more of a pain under Windows 7, and the solution involved creating a low-level filesystem link from the boot drive (the SSD) to the folder on the hard drive; the idea is that the computer behaves just as if it is accessing the Users folder from the boot drive, but the data is actually stored on the hard drive. This may have nothing to do with the cause of the problem, but since the problem is user-account corruption it is a possibility I have not been able to rule out. Any help would be appreciated, as I would be glad to see the back of this problem.


  • SharePoint Database security corruption

    - by H(at)Ni
    Hello, one time I faced an issue where my customer was getting an HTTP 500 internal server error while trying to access any SharePoint site. The problem appeared once he had moved back and forth between inheriting and breaking inheritance of permissions at different levels in the site collection. "Security corruption in database" sounds very tough for a customer running a production portal with a backup that would cost him around 3 weeks of valuable data. However, the solution tends not to be that hard: there's an stsadm command that helps us detect the corruption and even delete the orphaned items causing it. Follow these steps:

        a. stsadm -o databaserepair -url http://SITEURL -databasename DBNAME
           (this returned some orphaned items)

        b. stsadm -o databaserepair -url http://SITEURL -databasename DBNAME -deletecorruption
           (this removed the orphaned items)

    Cheers,


  • Corrupt file indicative of corrupt hard drive?

    - by Elipsicon
    I have noticed that two files on my (almost full) 2 TB hard drive have become corrupted. One file has 20 kB (!) corrupted, i.e. 20 consecutive kB have changed, even though the modification date of the file hasn't changed and I haven't worked with the file in over a year. This tells me that something "below" the filesystem level has messed with the data, and the only thing I can think of is hardware failure, most likely hard disk failure. I've already tested my RAM and it works flawlessly. I'm using ext4 on Linux, if that is of any help. Is this normal? Is it time to replace the hard drive before something worse happens? What can I do to prevent this from happening in the future? Is there some built-in feature of, or extension to, ext4 that includes additional error-correcting codes and/or watches files for changes that weren't caused by the OS?
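
    Ext4 can checksum its journal but not file contents, so detection of silent corruption has to happen a layer above. As an illustration of the "watch files for unexplained changes" idea (my sketch, not an ext4 feature), a periodic checksum pass can be as small as this:

        #include <cstdint>
        #include <cstdio>
        #include <fstream>
        #include <string>

        // FNV-1a, 64-bit: a tiny non-cryptographic hash, plenty to notice bit rot.
        static uint64_t fnv1a_file(const std::string& path)
        {
            std::ifstream in(path.c_str(), std::ios::binary);
            uint64_t h = 14695981039346656037ULL; // FNV offset basis
            char c;
            while (in.get(c)) {
                h ^= static_cast<unsigned char>(c);
                h *= 1099511628211ULL;            // FNV prime
            }
            return h;
        }

        int main(int argc, char** argv)
        {
            // Print "hash  path" for each argument; store the output somewhere
            // safe and diff it later to spot files whose contents changed
            // although their mtime did not.
            for (int i = 1; i < argc; ++i)
                std::printf("%016llx  %s\n",
                            (unsigned long long)fnv1a_file(argv[i]), argv[i]);
            return 0;
        }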


  • Photoshop CS6 Corrupted File recovery

    - by Ben Franchuk
    Last night I was working on a client application mock-up in Photoshop, but was going to take a break from my work, so I saved the .PSD file on my internal HDD and put my computer into stand-by mode once the file had finished saving. Unfortunately my computer crashed while it was entering stand-by and shut itself down (Photoshop was still open). I did not boot it again to make sure all my files were OK, because they had already been saved, but today once I opened the file again it was extremely corrupted and also completely un-editable (screenshot below). So what I'm asking is: is there any way to recover my work, or at least some of it? I have put a good few days' work into this project and would hate to have to restart it. The size of the file is 3070 KB, even though it reads as 712 KB in Photoshop. I don't know whether these file sizes are larger or smaller than the original non-corrupted file's size, but considering all the layers in the file I suspect it was larger before it corrupted. I'm using Windows XP Professional 32-bit SP3. Both my OS and said .PSD file are located on the same internal HDD (74.4 GB). I do have an external HDD (1.5 TB), but I primarily only use it for movies, music and TV shows; I don't know if it was plugged in at the time I last edited the document, if that means anything. I have tried many image and PSD recovery tools, but none have returned any results that might help recover my work. Edit: I tried a photo recovery program (Odboso PhotoRecovery) that actually seems to recover the corrupted file in question, judging by the size of the file, but I cannot recover it because of the licence fee. Knowing that the file is likely still on my HDD, where might it be located?


  • Run disk error check on NTFS file?

    - by paulius_l
    I have a feeling that my system hard drive is dying; a benchmark seems to confirm it. I benchmarked both my system hard drive during low system activity and my backup drive (screenshots omitted). Furthermore, there are some files which I just can't touch, because I get CRC errors and the hard drive activity spikes to 100% with operating speeds of less than 1 MB/s while working with those files. I haven't yet tried swapping the SATA cable, though I have read this might cause such problems. Anyway, I would like to run some tests on the specific clusters where the files I am interested in are stored. I don't want to run a full chkdsk because it takes a very long time. I would like either a utility that executes the disk check directly on the clusters where a file belongs, or a pair of utilities where one tells me the cluster locations and another checks just those locations. How do I check, and possibly fix, disk errors where specific files are stored? Edit: S.M.A.R.T. info (screenshot omitted).


  • filesystem mounting problem

    - by user306988
    Hello, a DAS box is attached to my Linux box using an LSI SCSI HBA. The volume is properly detected on the Linux box, and a filesystem is created with:

        mkfs.ext2 /dev/sdc    # no partition table

    I cannot mount the volume using:

        mount /dev/sdc /mnt/temp -t ext3

    but I can mount it using:

        mount /dev/sdc /mnt/temp -t ext3 -o loop

    Can anybody please tell me what the "-o loop" option does internally? Has anybody come across this before? Thanks in advance, prashant


  • How to save svg canvas to local filesystem

    - by dr jerry
    Is there a way to allow a user, after he has created a vector graphic on a JavaScript SVG canvas in the browser, to download this file to their local filesystem? SVG is a totally new field for me, so please be patient if my wording is not accurate. Kind regards, Jeroen.


  • The volume "filesystem root" has only 0 bytes disk space remaining?

    - by radek
    I installed 11.10 about two weeks ago and have run into some strange trouble recently. The installation was on a brand-new laptop with a clean 128 GB SSD. I opted for an encrypted home directory; apart from that I accepted the defaults during installation. There is no other OS on my laptop. I had circa 40 GB in use when, for the third time, I got to see the very unpleasant warning from the title (screenshot omitted): the volume "filesystem root" has only 0 bytes disk space remaining. Twice the situation was pretty bad and the whole system slowed down considerably. After a reboot I could not log in to the graphical interface (an error message reported insufficient space) and had to remove some files from the command line first. The third time I still managed to quickly delete some files, and that helped. My laptop is mainly a work environment: no torrents, no games, just two movies. The only media filling space are ~20 GB of pictures and a bunch of PDFs. I have been working mostly with PostgreSQL & PostGIS, GeoServer and QGIS recently. Although I have had lots of opportunities to test and practice my backups, I would be extremely grateful if somebody could point me to any potential solutions to this problem. My laptop was bought just before I installed Ubuntu, and it came without an OS. Could this be a hardware issue? Or is the encrypted home causing my headaches? Thanks for help! Update: As suggested by @maniat1k, here is the current output of fdisk -l:

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            1   312581807   156290903+  ee  GPT


  • Where can I find the application executables in the filesystem?

    - by richzilla
    Where are the executables for programs stored in Ubuntu? An application (Komodo Edit) is asking me to identify the application to be used as a web browser. I've become used to just entering the application name as a command in situations like these, but this scenario got me thinking. I know that in Windows it would just be the relevant application folder inside 'Program Files', but I'm assuming things are a bit different on Linux. I thought somewhere like /bin would be logical, but that appears to hold standard Linux/Unix utilities. Where would I find the binary executables for applications installed on my system?


  • How to navigate to connected iPhone? Filesystem path to `afc://`?

    - by Torben Gundtofte-Bruun
    When I connect my iPhone to Ubuntu 11.10, it gets automounted as afc://ca60751cc4b1ebb427c2f9da324914b0643a21f8/, and I can see that there are photos stored under afc://ca60751cc4b1ebb427c2f9da324914b0643a21f8/DCIM/103APPLE. I would like to use programs to access these photo files, but they don't accept the afc:// protocol; they accept only normal filesystem paths. Is there a "normal filesystem equivalent" to afc:// paths?

