Search Results

Search found 2834 results on 114 pages for 'filesystem corruption'.

Page 4/114

  • Compose path (with boost::filesystem)

    - by ypnos
    I have a file that describes input data, which is split into several other files. In my descriptor file, I first give a path A that tells where all the other files are found. The originator may set either a relative path (relative to the location of the descriptor file) or an absolute path. When my program is called, the user gives the name of the descriptor file. It may not be in the current working directory, so the filename B given may also contain directories. For my program to always find the input files in the right places, I need to combine this information. If path A is absolute, I need to use just that one. If it is relative, I need to concatenate it to path B (i.e. the directory portion of the filename). I thought boost::filesystem::complete might do the job for me. Unfortunately, it seems it does not. I also did not understand how to test whether a given path is absolute or not. Any ideas?
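
    A minimal sketch of how the composition could look with boost::filesystem (the function and its names are illustrative, not from the question; is_absolute() is the Boost.Filesystem v3 spelling, v2 calls it is_complete()):

        #include <boost/filesystem.hpp>
        namespace fs = boost::filesystem;

        // Resolve the data directory A named inside a descriptor file against
        // the descriptor's own location B (illustrative names).
        fs::path resolve_input_dir(const fs::path& descriptor_file, const fs::path& a)
        {
            if (a.is_absolute())       // Boost v2: a.is_complete()
                return a;              // absolute: use it as-is
            // relative: prepend the directory portion of the descriptor filename
            return descriptor_file.parent_path() / a;
        }

    In Boost.Filesystem v2, fs::complete(a, descriptor_file.branch_path()) should produce the same result for this case.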

    Read the article

  • How to make a Linux software RAID1 detect disc corruption?

    - by Paul
    This is one of the nightmare days: A virtualized server running on a Linux SW-RAID1 runs a VM that exhibits random segfaults in seemingly random chunks of code. While debugging, I find that a file gives a different md5sum on each and every run. Digging deeper, I find this: the raw disc partitions that make up the RAID1 mirror contain 2 bit-differences, and roughly 9 sectors are completely empty on one disc and filled with data on the other. Obviously Linux returns a sector from a nondeterministically chosen disc of the mirror set, so sometimes the same sector is returned OK and sometimes the corrupted one is given back. The docs say: RAID cannot and is not supposed to guard against data corruption on the media. Therefore, it doesn't make any sense either, to purposely corrupt data (using dd for example) on a disk to see how the RAID system will handle that. It is most likely (unless you corrupt the RAID superblock) that the RAID layer will never find out about the corruption, but your filesystem on the RAID device will be corrupted. Thanks. That will help me sleep. :-/ Is there a way to have Linux at least detect this corruption by using sector checksumming or something like that? Would this be detected in a RAID5 setup? Is this the moment I wish I used ZFS or btrfs (once it becomes usable without uber-admin capabilities)?
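
    For the "at least detect" part: the md layer does have a scrub mechanism in sysfs; writing "check" to sync_action makes it read both halves of the mirror and count differing regions in mismatch_cnt (detection only - md cannot tell which copy is the good one). A minimal sketch, assuming the array is md0 and the process runs as root:

        #include <fstream>
        #include <iostream>
        #include <string>

        int main() {
            {
                // Start a read-only scrub of the RAID1 pair.
                std::ofstream action("/sys/block/md0/md/sync_action");
                action << "check" << std::flush;
            }
            // ... poll until sync_action reads "idle" again, then:
            std::ifstream in("/sys/block/md0/md/mismatch_cnt");
            std::string count;
            in >> count;
            std::cout << "mismatch_cnt: " << count << std::endl;
            return 0;
        }

    A RAID5 check pass behaves similarly: the parity lets md notice an inconsistent stripe, but with a single parity block it still cannot say which device is wrong. ZFS and btrfs close that gap with per-block checksums.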

    Read the article

  • trouble with boost::filesystem::wrecursive_directory_iterator

    - by Dogmatixed
    I'm trying to write a program to help me manage my iTunes library, including removing duplicates and cataloging certain things. At this point I'm still just trying to get it to walk through all the folders, and have run into a problem: I have a small amount of Japanese music, where the artist and/or album is written in Japanese characters. Because of how iTunes arranges things in its library, the directories contain these characters. "Shouldn't be a problem, though," I thought, because the boost::filesystem library has a wide-character version of its recursive iterator. But when I actually try to use it, it seems to stop completely when it hits the first Japanese character - a complete stop, as in it doesn't even finish printing the line; no carriage return or anything. Now, I'm still pretty new to programming, so I'm assuming it's my mistake. Does anyone know why this is happening? Here's what I think is the relevant code:

        fs::wrecursive_directory_iterator end_it;
        int i;
        try {
            for(fs::wrecursive_directory_iterator rec_it(full_path); rec_it != end_it; ++rec_it) {
                for(i = 0; i < rec_it.level(); i++) {
                    out << "\t";
                }
                out << rec_it->string() << std::endl;
            }
        } catch(std::exception e) {
            out << "something went wrong: " << e.what();
        }

    And from my output file, minus some of the path:

        /Test Libs/Combine
        /Test Libs/Lib1
        /Test Libs/Lib1/02 Too Long.m4a
        /Test Libs/Lib1/03 Like a Hitman, Like a Dancer.mp3
        /Test Libs/Lib1/A Certain Ratio
        /Test Libs/Lib1/A Certain Ratio/Beyond Punk!
        /Test Libs/Lib1/A Certain Ratio/Unknown Album
        /Test Libs/Lib1/A Certain Ratio/Unknown Album/Do The Du.mp3
        /Test Libs/Lib1/A Certain Ratio/Unknown Album/Shack Up.mp3
        /Test Libs/Lib1/

    Finally, what I expect:

        /Test Libs/Combine
        /Test Libs/Lib1
        /Test Libs/Lib1/02 Too Long.m4a
        /Test Libs/Lib1/03 Like a Hitman, Like a Dancer.mp3
        /Test Libs/Lib1/A Certain Ratio
        /Test Libs/Lib1/A Certain Ratio/Beyond Punk!
        /Test Libs/Lib1/A Certain Ratio/Unknown Album
        /Test Libs/Lib1/A Certain Ratio/Unknown Album/Do The Du.mp3
        /Test Libs/Lib1/A Certain Ratio/Unknown Album/Shack Up.mp3
        /Test Libs/Lib1/???
        /Test Libs/Lib1/Bring it on
        /Test Libs/Lib1/04 Bring it on.mp3

    Any thoughts? Thanks.
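
    One guess at the cause (an assumption, not a confirmed diagnosis): the default "C" locale cannot convert the Japanese characters when the wide string is narrowed for output, so the stream fails mid-line and all further output is swallowed. Imbuing a native locale before touching the paths is a cheap experiment:

        #include <locale>

        // Use the environment's locale (e.g. a UTF-8 one) for wide/narrow
        // conversions instead of the default "C" locale.
        std::locale::global(std::locale(""));

    Separately, catch(std::exception e) catches by value and slices the exception; catch(const std::exception& e) would preserve the boost::filesystem error details.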

    Read the article

  • What's up with OCFS2?

    - by wcoekaer
    On Linux there are many filesystem choices, and even from Oracle we provide a number of filesystems, all with their own advantages and use cases. Customers often confuse ACFS with OCFS or OCFS2, which then leads to assumptions such as one replacing the other. I thought it would be good to write up a summary of how OCFS2 got to where it is, what we're still up to, how it differs from the other options, and how this really is a cool native Linux cluster filesystem that we worked on for many years and that is still widely used.

    Work on a cluster filesystem at Oracle started many years ago, in the early 2000s, when the Oracle Database Cluster development team wrote a cluster filesystem for Windows that was primarily focused on providing an alternative to raw disk devices and helping customers with the deployment of Oracle Real Application Clusters (RAC). Oracle RAC is a cluster technology that lets us make a cluster of Oracle Database servers look like one big database. The RDBMS runs on many nodes, and they all work on the same data. It's a shared-disk database design. There are many advantages to doing this, but I will not go into detail, as that is not the purpose of my write-up. Suffice it to say that Oracle RAC expects all the database data to be visible in a consistent, coherent way across all the nodes in the cluster. To do that, there were/are a few options:

    1) use raw disk devices that are shared, through SCSI, FC, or iSCSI
    2) use a network filesystem (NFS)
    3) use a cluster filesystem (CFS), which basically gives you a filesystem that's coherent across all nodes using shared disks; it is sort of (but not quite) a combination of options 1 and 2, except that you don't do network access to the files - they are effectively locally visible, as if it were a local filesystem

    So OCFS (Oracle Cluster FileSystem) on Windows was born. Since Linux was becoming a very important and popular platform, we decided that we would also make this available on Linux, and thus the porting of OCFS/Windows started. The first version of OCFS was primarily focused on replacing the use of raw devices with a simple filesystem that lets you create files and provides direct IO to these files, to get basically native raw disk performance. The filesystem was not designed to be fully POSIX compliant, and it did not have anywhere near decent performance for regular file create/delete/access operations. Cache coherency was easy, since it was basically always direct IO down to the disk device: any time one issued a write() it would go directly down to the disk, and not return until the write() was completed; likewise, any read from a datafile was a read() operation that went all the way to disk and back. We did not cache any data when it came down to Oracle data files. So while OCFS worked well for that, it did not have much of a normal filesystem feel, and it was not something that could be submitted to the kernel mailing list for inclusion into Linux as another native Linux filesystem (setting aside the Windows porting code ...). It did its job well, it was very easy to configure, node membership was simple, locking was disk based (so very slow, but it existed), and you could create regular files and do regular filesystem operations to a certain extent, but anything that was not related to database data files was just not very useful in general. Logfiles were OK; standard filesystem use, not so much. Up to this point, all the work was done, at Oracle, by Oracle developers.
    Once OCFS (1) had been out for a while and seen a lot of use in the database RAC world, many customers wanted to do more and were asking for the features you'd expect of a normal native filesystem - a real "general purpose cluster filesystem". So the team sat down and basically started from scratch to implement what's now known as OCFS2 (Oracle Cluster FileSystem release 2). Some basic criteria were:

    - Design it with a real Distributed Lock Manager and use the network for lock negotiation instead of the disk
    - Make it a native Linux filesystem instead of a native shim layer over a portable core
    - Support standard POSIX compliance and be fully cache coherent for all operations
    - Support all the filesystem features Linux offers (ACLs, extended attributes, quotas, sparse files, ...)
    - Be modern: support large files, 32/64-bit, journaling, ordered-data journaling, endian neutrality (mounts across endianness and architecture), ...

    Needless to say, this was a huge development effort that took many years to complete, and a few big milestones happened along the way. OCFS2 was developed in the open: we did not have a private tree that we worked on without external code review from the Linux filesystem maintainers. Great folks like Christoph Hellwig reviewed the code regularly to make sure we were not doing anything out of line, and we submitted the code for review on lkml a number of times to see whether we were getting close to having it included in the mainline kernel. Using this development model is standard practice for anyone who wants to write code that goes into the kernel and have any chance of doing so without a complete rewrite or, shall I say, flamefest when submitted. It saved us a tremendous amount of time by not having to re-fit the code into a Linus-acceptable state. Some other filesystems that tried to get into the kernel without following an open development model had a much harder time and faced much harsher criticism.

    In March 2006, when Linus released 2.6.16, OCFS2 officially became part of the mainline Linux kernel tree as one of its many filesystems (it had been accepted a little earlier, in the release candidates). It was the first cluster filesystem to make it into the kernel tree. Our hope was that it would then get picked up by the distribution vendors, making it easy for everyone to have access to a CFS. Today the source code for OCFS2 is approximately 85,000 lines of code.

    We made OCFS2 production-quality, with full support for customers running the Oracle database on Linux - no extra or separate support contract needed. OCFS2 1.0.0 started being built for RHEL4 for x86, x86-64, ppc, s390x and ia64; RHEL5 builds started with OCFS2 1.2. SuSE was very interested in high availability and clustering, decided to build and include OCFS2 with SLES9 for their customers, and was, next to Oracle, the main contributor to the filesystem for both new features and bug fixes. Source code was always available, even prior to the inclusion into mainline, and as of 2.6.16 the source is simply part of a Linux kernel download from kernel.org, which it still is today. So the latest OCFS2 code is always in the upstream mainline Linux kernel. OCFS2 is the cluster filesystem used in Oracle VM 2 and Oracle VM 3 as the virtual disk repository filesystem.
    Since the filesystem is in the Linux kernel, it's released under the GPL v2. The release model has always been that new feature development happens in the mainline kernel, and we then build consistent, well-tested snapshots that carry versions: 1.2, 1.4, 1.6, 1.8. These releases are effectively just snapshots in time, tested for stability and release quality.

    OCFS2 is very easy to use: there's a simple text file that contains the node information (hostname, node number, cluster name - a sketch of this file appears below) and a file that contains the cluster heartbeat timeouts. It is very small, and very efficient. As Sunil Mushran wrote in the manual: OCFS2 is an efficient, easily configured, quickly installed, fully integrated and compatible, feature-rich, architecture and endian neutral, cache coherent, ordered data journaling, POSIX-compliant, shared disk cluster file system. Here is a list of some of the important features that are included:

    - Variable block and cluster sizes: supports block sizes from 512 bytes to 4 KB and cluster sizes from 4 KB to 1 MB (in power-of-2 increments).
    - Extent-based allocations: tracks allocated space in ranges of clusters, making it especially efficient for storing very large files.
    - Optimized allocations: supports sparse files, inline data, unwritten extents, hole punching and allocation reservation for higher performance and efficient storage.
    - File cloning/snapshots: REFLINK introduces copy-on-write clones of files in a cluster-coherent way.
    - Indexed directories: allows efficient access to millions of objects in a directory.
    - Metadata checksums: detects silent corruption in inodes and directories.
    - Extended attributes: supports attaching an unlimited number of name:value pairs to filesystem objects such as regular files, directories and symbolic links.
    - Advanced security: supports POSIX ACLs and SELinux in addition to the traditional file access permission model.
    - Quotas: supports user and group quotas.
    - Journaling: supports both ordered and writeback data journaling modes, to provide filesystem consistency in the event of a power failure or system crash.
    - Endian and architecture neutral: supports a cluster of nodes with mixed architectures; allows concurrent mounts on nodes running 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) kernels.
    - In-built cluster stack with DLM: includes an easy-to-configure, in-kernel cluster stack with a distributed lock manager.
    - Buffered, direct, asynchronous, splice and memory-mapped I/O: supports all modes of I/O for maximum flexibility and performance.
    - Comprehensive tools support: provides a familiar EXT3-style tool-set that uses similar parameters for ease of use.

    The filesystem was distributed for Linux distributions in separate RPM form, and this had to be built for every single kernel errata release or every updated kernel provided by the vendor. We provided builds from Oracle for Oracle Linux and all kernels released by Oracle, and for Red Hat Enterprise Linux. SuSE provided the modules directly for every kernel they shipped. With the introduction of the Unbreakable Enterprise Kernel for Oracle Linux, and our interest in reducing the overhead of building filesystem modules for every minor release, we decided to make OCFS2 available as part of UEK. There was no more need for separate kernel modules; everything was built in, and a kernel upgrade automatically updated the filesystem, as it should.
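
    For reference, a sketch of what that node-information file can look like (the path /etc/ocfs2/cluster.conf and the stanza layout follow the OCFS2 tools as I remember them; the names and addresses here are made up):

        # /etc/ocfs2/cluster.conf - the cluster stanza plus one stanza per node
        cluster:
                node_count = 2
                name = mycluster

        node:
                ip_port = 7777
                ip_address = 192.168.1.101
                number = 0
                name = node1
                cluster = mycluster

        node:
                ip_port = 7777
                ip_address = 192.168.1.102
                number = 1
                name = node2
                cluster = mycluster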
    UEK also meant we did not have to backport new upstream filesystem code into an older kernel version; backporting features into older versions introduces risk and requires extra testing, because the code is basically partially rewritten. The UEK model works really well for continuing to provide OCFS2 without that extra overhead. Because the RHEL kernel does not ship OCFS2 as a kernel module (it is in the source tree, but the vendor does not build it in kernel-module form), we stopped adding the extra packages to Oracle Linux with its RHEL-compatible kernel, and for RHEL. Oracle Linux customers/users get OCFS2 included as part of the Unbreakable Enterprise Kernel, SuSE customers get it distributed by SuSE with SLES, and Red Hat can decide to distribute OCFS2 to their customers if they choose; it's just a matter of compiling the module and making it available.

    OCFS2 today, in the mainline kernel, is pretty much feature complete in terms of integration with every filesystem feature Linux offers, and it is still actively maintained, with Joel Becker as the primary maintainer. Since we use OCFS2 as part of Oracle VM, we continue to look at interesting new functionality to add - REFLINK was a good example - and as such we continue to enhance the filesystem where it makes sense. Bug fixes, and any code that goes into the mainline Linux kernel and affects filesystems, automatically modify OCFS2 as well. So it's in-kernel and actively maintained, but there's not a lot of new development happening at this time. We continue to fully support OCFS2 as part of Oracle Linux and the Unbreakable Enterprise Kernel, and other vendors make their own support decisions, as it's really a Linux cluster filesystem now more than something that we provide to customers. It really is just part of Linux, like EXT3 or BTRFS; the OS distribution vendors decide.

    Do not confuse OCFS2 with ACFS (ASM Cluster FileSystem), also known as Oracle Cloud FileSystem. ACFS is a filesystem provided by Oracle on various OS platforms that integrates tightly with Oracle ASM (Automatic Storage Management). It's a very powerful cluster filesystem, but it's not distributed as part of the operating system; it's distributed with the Oracle Database product, and installs with and lives inside Oracle ASM. ACFS is, obviously, fully supported on Linux (Oracle Linux, Red Hat Enterprise Linux), but OCFS2, independently, as a native Linux filesystem, is also and continues to be supported. ACFS is very much tied into the Oracle RDBMS; OCFS2 is just a standard native Linux filesystem with no ties into Oracle products. Customers running the Oracle database and ASM really should consider using ACFS, as it also provides storage/clustered volume management. Customers wanting a simple, easy-to-use, generic Linux cluster filesystem should consider using OCFS2.

    To learn more about OCFS2 in detail, you can find good documentation on http://oss.oracle.com/projects/ocfs2 in the Documentation area, or get the latest mainline kernel from http://kernel.org and read the source.

    One final, unrelated note: since I am not always able to publicly answer or respond to comments, I do not want to selectively publish comments from readers. Sometimes I forget to publish comments, sometimes I publish them, and sometimes I would publish them but cannot comment on them publicly for some reason, and then it becomes a very one-sided stream. So for now I am not going to publish comments from anyone, to be fair to all sides.
    You are always welcome to email me, and I will do my best to respond to technical questions; questions about strategy or direction are sometimes not possible to answer, for obvious reasons.

    Read the article

  • Does the FAT filesystem have a signature?

    - by DxCK
    Given the following BPB: The "MSWIN4.1" string is just the "OEM ID" field and, per Microsoft's documentation, should not be used to identify FAT volumes. The "FAT32 " string is the BS_FilSysType field and, per the same documentation, should not be used to identify FAT volumes either. So how do I identify that the volume is formatted as FAT? Is there any reliable signature I can rely on?
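
    For what it's worth, Microsoft's FAT specification derives the FAT type from the count of clusters rather than from any signature string. A sketch of that calculation, with the inputs being the usual BPB fields (TotSec comes from BPB_TotSec16 if nonzero, else BPB_TotSec32; FATSz likewise from BPB_FATSz16 or BPB_FATSz32):

        #include <cstdint>

        enum FatType { FAT12, FAT16, FAT32 };

        // FAT type determination per the Microsoft FAT specification:
        // the type follows solely from the count of data clusters.
        FatType fat_type(std::uint32_t BytsPerSec, std::uint32_t SecPerClus,
                         std::uint32_t RsvdSecCnt, std::uint32_t NumFATs,
                         std::uint32_t RootEntCnt, std::uint32_t TotSec,
                         std::uint32_t FATSz)
        {
            // Sectors occupied by the FAT12/16 root directory (0 on FAT32).
            std::uint32_t rootDirSectors = (RootEntCnt * 32 + BytsPerSec - 1) / BytsPerSec;
            std::uint32_t dataSec = TotSec - (RsvdSecCnt + NumFATs * FATSz + rootDirSectors);
            std::uint32_t countOfClusters = dataSec / SecPerClus;

            if (countOfClusters < 4085)  return FAT12;
            if (countOfClusters < 65525) return FAT16;
            return FAT32;
        }

    Combined with the 0x55 0xAA marker at offset 510 and basic sanity checks on the BPB fields, this is about as close to a "signature" as FAT offers.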

    Read the article

  • Filesystem synchronization library?

    - by IsaacB
    Hi, I've got 10 GB of files to back up daily to another site. The client is way out in the country, so bandwidth is an issue. Does anyone know of any existing software or libraries that help keep a folder and its files synchronized across a slow link - that is, something that only sends files across if they have changed? Some kind of hash checking would be nice too, to at least confirm the two sides are the same. I don't mind paying some money for it, seeing as it might take me several weeks to a month to implement something decent on my own; I just don't want to reinvent the wheel here. BTW, it is a Windows shop (they have an in-house Windows IT guy), so Windows is preferred. I also have 10 GB of SQL Server 2000 databases to send across. Is the SQL Server replication mode reliable? Thanks!

    Read the article

  • iPhone filesystem permissions POSIX-compliant?

    - by Seva Alekseyev
    Hi all, I'm trying to pass some files from one app to another. I communicate the path (via a custom URL). The target application cannot read the file, citing errno 13 (permission denied). I've checked the permissions on the file - they're 0644 (O+R) - and the permissions on the directories all the way up to the root are 755 (O+RX). From a POSIX perspective, the file should be readable by any process and any user. Yet it's not. Any ideas, please? I can think of some workarounds. I could use a web service (upload, get a cookie, communicate the cookie to the other app, the other app downloads). I could also pass the actual file data in the URL - inelegant, and probably subject to length limitations. The clipboard is not supported on iPhone OS 2, IIRC.
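
    A small diagnostic sketch (not a fix) to see exactly what the kernel reports from inside the receiving app, for whatever path arrived in the URL:

        #include <unistd.h>
        #include <cerrno>
        #include <cstdio>
        #include <cstring>

        // Report whether this process may read the path, and if not, why.
        void report_readability(const char* path)
        {
            if (access(path, R_OK) == 0)
                std::printf("%s: readable\n", path);
            else
                std::printf("%s: not readable: %s\n", path, std::strerror(errno));
        }

    If this still prints EACCES despite the permission bits above, something beyond the classic POSIX bits is denying access; on the iPhone, the per-application sandbox that confines each app to its own container regardless of mode bits is the usual suspect.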

    Read the article

  • Oracle Unbreakable Enterprise Kernel and Emulex HBA Eliminate Silent Data Corruption

    - by sergio.leunissen
    Yesterday, Emulex announced that it has added support for T10 Protection Information (T10-PI), formerly called T10-DIF, to a number of its HBAs. When used with Oracle's Unbreakable Enterprise Kernel, this will prevent silent data corruption and help ensure the integrity and regulatory compliance of user data as it is transferred from the application to the SAN. From the press release: Traditionally, protecting the integrity of customers' data has been done with multiple discrete solutions, including Error Correcting Code (ECC) and Cyclic Redundancy Check (CRC), but there have been coverage gaps across the I/O path from the operating system to the storage. The implementation of the T10-PI standard via Emulex's BlockGuard feature, in conjunction with other industry players' implementations, ensures that data is validated as it moves through the data path, from the application, to the HBA, to storage, enabling seamless end-to-end integrity. Read the white paper, and don't miss the live webcast on eliminating silent data corruption on December 16th!

    Read the article

  • Ask filesystem if it is mounted

    - by Brian
    How can I see if an (ext3) filesystem is mounted by asking the filesystem directly (i.e. the same way the system does when it boots and sees that it was not unmounted cleanly)? Checking the output of mount is no good, because the filesystem might be mounted by a virtual machine. I know I can run fsck and it will abort if the filesystem is mounted, but I don't need to actually check the filesystem.
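
    The boot-time check reads the state field of the ext2/3 superblock: the kernel clears the "clean" bit when the filesystem is mounted read-write and sets it again on a clean unmount, which is also what dumpe2fs -h prints as "Filesystem state". A minimal sketch of reading it directly (assumes a little-endian machine; the offsets follow the ext2 on-disk superblock layout):

        #include <cstdio>
        #include <cstdint>

        int main(int argc, char** argv)
        {
            if (argc < 2) { std::fprintf(stderr, "usage: %s <device>\n", argv[0]); return 2; }
            std::FILE* f = std::fopen(argv[1], "rb");      // e.g. /dev/sdb1
            if (!f) { std::perror("open"); return 2; }

            // The superblock starts at byte 1024; within it, s_magic sits at
            // offset 56 and s_state at offset 58 (both little-endian u16).
            std::uint16_t magic = 0, state = 0;
            std::fseek(f, 1024 + 56, SEEK_SET);
            std::fread(&magic, sizeof magic, 1, f);
            std::fread(&state, sizeof state, 1, f);
            std::fclose(f);

            if (magic != 0xEF53) { std::fprintf(stderr, "no ext superblock found\n"); return 2; }
            // Bit 0 is EXT2_VALID_FS: set = cleanly unmounted.
            std::printf(state & 1 ? "clean: not mounted read-write\n"
                                  : "dirty: mounted somewhere, or not cleanly unmounted\n");
            return 0;
        }

    The caveat is built into the question: a live mount elsewhere and a crash look identical here, which is exactly the ambiguity fsck faces.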

    Read the article

  • boost::filesystem - how to create a boost path from a windows path string on posix plattforms?

    - by VolkA
    I'm reading path names from a database which are stored as relative paths in Windows format, and I try to create a boost::filesystem::path from them on a Unix system. What happens is that the constructor call interprets the whole string as the filename. I need the path to be converted to a correct POSIX path, as it will be used locally. I didn't find any conversion functions in the boost::filesystem reference, nor through Google. Am I just blind, or is there an obvious solution? If not, how would you do this? Example:

        std::string win_path("foo\\bar\\asdf.xml");
        std::string posix_path("foo/bar/asdf.xml");

        // loops just once, as part is the whole win_path interpreted as a filename
        boost::filesystem::path boost_path(win_path);
        BOOST_FOREACH(boost::filesystem::path part, boost_path) {
            std::cout << part << std::endl;
        }

        // prints each path component separately
        boost::filesystem::path boost_path_posix(posix_path);
        BOOST_FOREACH(boost::filesystem::path part, boost_path_posix) {
            std::cout << part << std::endl;
        }
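
    Since the strings are known to be Windows-style relative paths, one simple approach is to normalize the separators before constructing the path (a sketch; as far as I can tell boost::filesystem has no built-in conversion for this case):

        #include <algorithm>
        #include <string>
        #include <boost/filesystem.hpp>

        // Turn "foo\\bar\\asdf.xml" into "foo/bar/asdf.xml" and build the
        // path from that. Safe here because a backslash cannot be part of a
        // legitimate component of these database entries.
        boost::filesystem::path from_windows_relative(std::string s)
        {
            std::replace(s.begin(), s.end(), '\\', '/');
            return boost::filesystem::path(s);
        }

    Drive letters and UNC prefixes ("C:\\...", "\\\\server\\share") would need extra handling, but plain relative paths like the example are covered.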

    Read the article

  • How can I mount an AFS filesystem?

    - by Ben
    My current method is to mount the filesystem via SSH using Nautilus's graphical interface, but I would much prefer to be able to use some tool that mounts the AFS filesystem and gives me access to AFS-specific features (permissions, etc.). I've tried installing OpenAFS via apt-get, but so far the kernel module has refused to compile. Also, assuming I get OpenAFS installed, I'm not quite sure how to actually mount the remote filesystem to, say, /media/afs or some directory. I'm running Maverick with the 2.6.36-020636-generic kernel from http://kernel.ubuntu.com/~kernel-ppa/mainline/ Thanks for the help!

    Read the article

  • Graphical corruption during Linux startup and shutdown - should I be worried?

    - by Macha
    For the last month, when starting up or shutting down my laptop under Linux, I get graphical corruption. Startup shows a colour-inverted, grainy rendition of what should be displayed, while shutdown has a red background with all the text replaced by grey rectangles. At the very least this affects Fedora, Ubuntu and Xubuntu; Windows is not affected. Outside of startup/shutdown the system is fine. Should I be worried about this?

    Read the article

  • Amazon Elastic MapReduce: Exception from FileSystem

    - by S.N
    Hi, I run my application using the Ruby client:

        ruby elastic-mapreduce -j j-20PEKMT9BRSUC --jar s3n://sakae55/lib/edu.cit.som.jar --main-class edu.cit.som.hadoop.SOMDriver --arg s3n://sakae55/repository/input/ecoli/ --arg s3n://sakae55/repository/output/ecoli/pl/ --arg s3n://sakae55/repository/data/ecoli/som.txt

    Then I am seeing the following error:

        java.lang.IllegalArgumentException: This file system object (file:///) does not support access to the request path
        'hdfs://ip-10-195-207-230.ec2.internal:9000/mnt/var/lib/hadoop/tmp/mapred/system/job_201004221221_0017/job.jar'
        You possibly called FileSystem.get(conf) when you should of called FileSystem.get(uri, conf) to obtain a file
        system supporting your path.
        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
        at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:52)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:416)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:259)
        at org.apache.hadoop.fs.FileSystem.isDirectory(FileSystem.java:676)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:200)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1184)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1160)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1132)
        at org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:662)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:729)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1026)
        at edu.cit.som.hadoop.SOMDriver.runIteration(SOMDriver.java:106)
        at edu.cit.som.hadoop.SOMDriver.train(SOMDriver.java:69)
        at edu.cit.som.hadoop.SOMDriver.run(SOMDriver.java:52)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at edu.cit.som.hadoop.SOMDriver.main(SOMDriver.java:36)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
        at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)

    I am not sure why the error refers to "file:///", even though none of the arguments I pass use that scheme.

    Read the article

  • Random memory corruption going undetected by memtest86

    - by sds
    Thinkpad T520; Ubuntu 12.04.1 LTS; 3.2.0-33-generic; 16 GB of RAM.

    - Memtest86+ ran for 26 hours, 9 passes, no errors.
    - Booted into "recovery mode"; ran fsck on all filesystems - no errors; "check all packages" - no errors.

    Apparent random memory corruption: perl/R/chrome segfault every now and then, seemingly at random; sort(1) produces corrupt, unsorted files. What could possibly be wrong, and how do I debug it?

    Read the article

  • CVE-2012-0444 Memory corruption vulnerability in Ogg Vorbis

    - by chandan
    CVE:                      CVE-2012-0444 (memory corruption vulnerability)
    CVSSv2 Base Score:        10.0
    Component:                libvorbis
    Product and Resolution:   Solaris 11 - 11/11 SRU 8.5
                              Solaris 10 - SPARC: 148006-01, X86: 148007-01

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Moving files with batch files from one pc to a server, to a another pc - worried about disk corruption

    - by AnchientAnt
    I use scheduled tasks that call a batch file, which calls more batch files, to move about three files from a PC, to a server, then to multiple other PCs. It all happens very quickly, as they are small files. Are there any pitfalls in how fast these transfers happen? I'm just mildly concerned about somehow causing disk corruption. I use logic like:

    1. Call MapToPc: if the files exist, move them to a folder on the server. Disconnect.
    2. Call SendtoPCs: if the files exist (the files just moved to the server), then MapToPCs, move all files, disconnect.

    All of this happens in about 2 seconds or less. Edit: this is on Windows 7, Server 2003 and XP, respectively.

    Read the article

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one LVM physical volume on top of that; then home and root in ext4 logical volumes on top of that. When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both the home and root filesystems show similar problems. Checking a backup superblock doesn't help:

        e2fsck 1.41.12 (17-May-2010)
        lithe_root was not cleanly unmounted, check forced.
        Resize inode not valid.  Recreate? no
        Pass 1: Checking inodes, blocks, and sizes
        Root inode is not a directory.  Clear? no
        Root inode has dtime set (probably due to old mke2fs).  Fix? no
        Inode 2 is in use, but has dtime set.  Fix? no
        Inode 2 has a extra size (4730) which is invalid  Fix? no
        Inode 2 has compression flag set on filesystem without compression support.  Clear? no
        Inode 2 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        HTREE directory inode 2 has an invalid root node.  Clear HTree index? no
        Inode 2, i_size is 9581392125871137995, should be 0.  Fix? no
        Inode 2, i_blocks is 40456527802719, should be 0.  Fix? no
        Reserved inode 3 (<The ACL index inode>) has invalid mode.  Clear? no
        Inode 3 has compression flag set on filesystem without compression support.  Clear? no
        Inode 3 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        ....

    Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, and neither does the kernel log. Running a write-mode badblocks across the swap LV doesn't find problems either. So the disk may be failing, but not in an obvious way. At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug. I don't recall this machine crashing the last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode of the SSD. What do you think would have caused this? Is there anything else you'd try?

    Read the article

  • Why is my root filesystem always scanned at boot?

    - by luri
    I always have a pause at boot saying my filesystems are being checked (with a "press C to cancel" note, too). Actually (from boot.log) I think it's the / filesystem, which is located at /dev/sdb5. Several questions together here (hope this does not break any rule):

    1. Is this normal? Can I (or even should I) prevent it somehow?
    2. According to boot.log (below), the filesystem does not seem to be 'clean', or at least it's in a state or condition that makes fsck always scan it for errors for a while (just a few seconds). How can I fix that?

    Edit: This is my boot.log:

        fsck desde util-linux-ng 2.17.2
        udevd[515]: can not read '/etc/udev/rules.d/z80_user.rules'
        /dev/sdb5: 249045/32841728 ficheros (0.3% no contiguos), 20488485/131338752 bloques
        init: ureadahead-other main process (1111) terminated with status 4
        init: ureadahead-other main process (1116) terminated with status 4
        Password:
         * Starting AppArmor profiles [160G Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox [154G[ OK ]
         * Setting sensors limits [160G [154G[ OK ]

    And these are the dumpe2fs results for the filesystem being checked (well, the relevant part of the log):

        Filesystem volume name:   <none>
        Last mounted on:          /
        Filesystem UUID:          42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32841728
        Block count:              131338752
        Reserved block count:     6566937
        Free blocks:              110850356
        Free inodes:              32592701
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        Flex block group size:    16
        Filesystem created:       Fri Dec 10 19:44:15 2010
        Last mount time:          Mon Feb 14 17:00:02 2011
        Last write time:          Mon Feb 14 16:59:45 2011
        Mount count:              1
        Maximum mount count:      33
        Last checked:             Mon Feb 14 16:59:45 2011
        Check interval:           15552000 (6 months)
        Next check after:         Sat Aug 13 17:59:45 2011
        Lifetime writes:          331 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        First orphan inode:       28049496
        Default directory hash:   half_md4
        Directory Hash Seed:      d3d24459-514b-4413-b840-e970b766095b
        Journal backup:           inode blocks
        Journal features:         journal_incompat_revoke
        Journal size:             128M
        Journal length:           32768
        Journal sequence:         0x0005e0c4
        Journal start:            1

    This is the relevant (at least I think this is the filesystem being checked) line in fstab:

        #Entry for /dev/sdb5 :
        UUID=42509bf9-f3e6-460a-8947-ec0f5c1fbcc8 / ext4 errors=remount-ro 0 1

    Read the article

  • Can not mount /dev/loop0 (/cdrom/casper/filesystem.squashfs)

    - by simpleton
    I downloaded 12.04.1, md5-checked it, and everything is good. Made a live USB and booted up... It just gets to where it's about to start, with the purple Ubuntu loading screen, then goes back to text and gives this message:

        BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs) mount: mounting /dev/loop0 on ///filesystem.squashfs failed: Input/output error
        Can not mount /dev/loop0 (/cdrom/casper/filesystem.squashfs) on //filesystem.squashfs

    This happens with the live option or persistent; even the file-checking one doesn't work. I've also tried a few different F6 options, to no avail. I used 'LiLi USB creator', 'unetbootin' and 'Universal USB Installer', all with the same results. I've also tried using a VM, and it showed the same. That is when I figured I had a corrupt .iso, so I downloaded it again and checked the MD5: e235b63c02644e219b7bf3668f479c9e. Only I'm having the same problem. I'm just about ready to give up on 12.04.1 and go back to 10.04 until the next LTS comes out. I've got a Dell Mini 10, BTW. Thanks for your time.

    Read the article

  • Screen Corruption in half the screen only

    - by Guy DAmico
    About 50% of my Natty desktop screen is corrupted. Once that happens, I can reboot as many times as I want and the problem continues. If I log out and use Windows for a day, I may then succeed in booting Ubuntu with a good screen. The desktop is formatted correctly and there's no pixelation; rather, there is a fine-grained white crosshatch pattern covering the entire screen. If I open any application, the screen corruption worsens, eventually to the point where I can no longer make anything out. I ran a RAM memory test without any errors. I have no display issues when running Windows 7. Any ideas? My computer is a dual-boot stock Dell 5150 with 3 GB of RAM and on-board video.

    Read the article

  • The DBA Team tackles data corruption

    Paul Randal joins the team in this instalment of the DBA Team saga. In this episode, Monte Bank is trying to cover up insider trading - using data corruption to eliminate the evidence, and a patsy DBA to take the blame. It's a great story with useful advice on how to perform thorough data recovery tasks.

    Read the article

  • URGENT: Patches Needed to Prevent Data Corruption in Oracle Payments

    - by LuciaC
    Development are seeing a number of datafix bugs being logged related to PPR committing data in Payments (IBY) with the corresponding payments missing in Payables. These bugs have been investigated and fixed; however, customers need to proactively apply the fixes to prevent data corruption. There are two root-cause patches available for this case of partial data commit. It is critical that all R12/12.1 Payments customers apply the following two patches ASAP:

    a) Patch 11699958: R12: Error during PPR Leads to Incomplete Data Commit and Inconsistent Status (Doc ID 1338425.1)
    b) Patch 15867522: Confirmed PPR Batches Show Payment Initiated - Data Exist Only in IBY Tables (Doc ID 1506611.1)

    Read the article

  • How can I track down the cause of ext3 filesystem corruption?

    - by Jon Buys
    We have a VMware vSphere 5 environment running CentOS 5.8 virtual machines. In the past two weeks we have had five incidents of virtual machines having a filesystem become corrupt, requiring an fsck to repair. Here is what we see in the logs:

        Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2392098: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
        Nov 14 14:39:28 hostname kernel: Aborting journal on device dm-2.
        Nov 14 14:39:28 hostname kernel: __journal_remove_journal_head: freeing b_committed_data
        Nov 14 14:39:28 hostname last message repeated 4 times
        Nov 14 14:39:28 hostname kernel: ext3_abort called.
        Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal
        Nov 14 14:39:28 hostname kernel: Remounting filesystem read-only
        Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2392099: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
        Nov 14 14:31:17 hostname ntpd[3041]: synchronized to 194.238.48.2, stratum 2
        Nov 14 15:00:40 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2162743: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
        Nov 14 15:13:17 hostname kernel: __journal_remove_journal_head: freeing b_committed_data

    The problem seems to happen while we are rsyncing application data from another server. So far we have been unable to reproduce the problem or identify a root cause. After a few servers had this problem, we assumed there was an issue with the template, so we scrapped all VMs cloned from the template, destroyed the template, and built a new template from scratch, installed from a newly downloaded CentOS ISO. We use HP EVA SANs for datastores, and moved from a 4400 to a 6300 after the first problem. Since the move and rebuilding new virtual machines, we have seen the issue twice. On one VM we shut down the server, removed two virtual CPUs, and booted it back up again; the problem presented itself almost immediately. On the other VM we rebooted it, and the problem happened half an hour later. Any tips or pointers in the right direction would be appreciated.

    Read the article

  • Running resize2fs on /

    - by Paul Steckler
    I'm trying to resize an ext4 filesystem on a Fedora 11 box. Using fdisk and LVM, I was able to grow the partition and the logical volume containing the filesystem. When I try to run resize2fs on the device containing the filesystem (/dev/sda2 in this case), I get:

        Device or resource busy while trying to open /dev/sda2
        Couldn't find valid filesystem superblock.

    I've tried this from a rescue disk that doesn't have the filesystem mounted; no joy. Maybe resize2fs doesn't know about ext4?

    Read the article
