Search Results

Search found 859 results on 35 pages for 'filesystems'.

Page 30/35 | < Previous Page | 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Custom kernel with NFS client support

    - by Vaibhav
    I'm trying to build a custom Linux kernel using this link. I have successfully built the kernel and booted into it. Now I want to mount an NFS share on it. I have enabled NFS client support in menuconfig. Update: I'm trying to mount an NFS share from the newly built kernel, having added NFS client support to it. The following command (run from the newly built kernel) shows:

        # cat /proc/filesystems
        nodev   nfs
        nodev   usbfs
                ext3
                vfat
        ....

    This shows that the kernel supports the NFS filesystem, but the mount command still fails to mount the NFS share, which mounts fine on other machines. Help will be appreciated.
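
    A rough sketch of how one might narrow this down; the share address and mount options below are hypothetical examples, not taken from the question:

        # Confirm the NFS client options made it into the kernel config (run in the kernel source tree):
        grep -E '^CONFIG_NFS_FS|^CONFIG_NFS_V3|^CONFIG_SUNRPC' .config

        # Try the mount with an explicit type, version, and no locking, since a minimal
        # userland often lacks the lock/stat daemons:
        mount -t nfs -o nfsvers=3,nolock 192.168.1.10:/export/share /mnt

        # The kernel log usually explains a failed NFS mount:
        dmesg | tail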

    Read the article

  • In *nix, how to determine which filesystem a particular file is on?

    - by smokris
    In a generic, modern unix environment (say, GNU/Linux, GNU/Solaris, or Mac OS X), is there a good way to determine which mountpoint and filesystem-type a particular absolute file path is on? I suppose I could execute the mount command and manually parse the output of that and string-compare it with my file path, but before I do that I'm wondering if there's a more elegant way. I'm developing a BASH script that makes use of extended attributes, and want to make it Do The Right Thing (to the small extent that it is possible) for a variety of filesystems and host environments.
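
    As a rough sketch of the non-parsing route (GNU userland assumed; option spellings differ on OS X and Solaris, and the path is a placeholder):

        f=/some/path/of/interest

        # Mount point and backing device, in POSIX output format:
        df -P "$f" | awk 'NR==2 {print $6, $1}'

        # Filesystem type via GNU coreutils stat:
        stat --file-system --format=%T "$f"

        # GNU df can also report the type directly:
        df -T "$f" | awk 'NR==2 {print $2}'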

    Read the article

  • Solaris to Linux conversion: Use VxFS or GFS?

    - by w00t
    We're a Solaris shop looking at RedHat Enterprise Linux and one of the things we're wondering is if we should keep Veritas Volume Manager + FileSystem or go with LVM+ext3 or RedHat's preferred cluster filesystem solution, GFS. One of the things we like about Veritas is that it can use Veritas Volume Replicator to have a remote copy of important filesystems. This functionality seems to be missing from RedHat, DRBD doesn't seem to be packaged in RHEL... So my questions are: Does anybody use VxFS/VxVM/VVR on Linux? Thoughts, experiences? Comparison with LVM+ext3? Anybody using GFS? Thoughts, experiences? Do you do remote replication for disaster recovery, and if so, how? Is there a standard RedHat way?

    Read the article

  • Boot time virus scan from USB drive

    - by Tomas Sedovic
    I want to check for viruses on a computer that I suspect may be infected with malware. Its users are running an antivirus, but there's always the risk that something slips past, and the way I see it, once the system is infected the antivirus is useless because the malware can hide itself from the AV. I think the best way to go (besides a clean reinstall of the OS) would be to have an antivirus running at boot time from a CD or a USB key. That way, the malware is just lying on the disk and cannot do any of its hide-and-seek stuff (provided the AV comes from an uninfected PC and all that). So, I'm looking for something that:
    - Runs at boot time (off a USB key or CD-ROM)
    - Does not touch or require the local OS
    - Discovers malware fairly well (like Avast, AVG, Norton, whatever -- I think they're all the same anyway)
    - Can handle Windows filesystems (FAT32, NTFS, WinFS ;-) )
    - Comes from some sort of trusted source (no Windows Antivirus 2009)
    I know that this is no silver bullet (nothing is, really), but I do have a feeling it's more likely to help than doing the scan within the infected system.

    Read the article

  • distributed, fault-tolerant network block device

    - by gucki
    I'm looking for a distributed, fault-tolerant network storage system which exposes block devices (not filesystems) on the clients.
    - A client's block device should write simultaneously to several storage nodes.
    - A client's block device should not fail as long as not all storage nodes backing it have gone down.
    - The master should automatically redistribute the storage nodes' data when a storage node fails or gets added/removed.
    - A single master (which is for metadata only) is fine.
    So ideally the architecture would be very similar to moosefs (http://www.moosefs.org/), but instead of exposing a real filesystem mounted using a FUSE client it'd expose block devices on the clients. I know of iSCSI and DRBD, but neither seems to offer what I'm looking for. Or am I missing something?

    Read the article

  • Trigger ZFS dedup one-off scan/rededup

    - by Jake Wharton
    I have a ZFS filesystem which has been running for some time, and I recently had the opportunity to upgrade it (finally!) to the latest ZFS version. Our data doesn't scream dedup, but I firmly believe, based on small tests, that we could gain anywhere from 5-10% of our space back for free by utilizing it. I have enabled dedup on the filesystem and new files are slowly being deduplicated, but the majority (95%+) of our data already exists on the filesystem. Short of moving the data off-pool and then recopying it back, is there any way to trigger a dedup scan of existing data? It doesn't have to be asynchronous or live. (And FYI there isn't enough room on the pool to copy the entire filesystem to another and then just switch the mounts.)
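
    For context, ZFS dedup only applies to blocks as they are written, so existing data has to be rewritten to benefit. A rough sketch of rewriting files in place, in manageable chunks (dataset and directory names are hypothetical):

        zfs set dedup=on tank/data      # make sure dedup is enabled on the dataset

        # Rewrite one directory at a time; each copy goes through the dedup write path.
        cd /tank/data/somedir
        for f in *; do
            cp -p "$f" "$f.tmp" && mv "$f.tmp" "$f"
        done

        # Check the pool-wide dedup ratio afterwards:
        zpool get dedupratio tank

    Note that rewriting files this way breaks block sharing with any existing snapshots, so used space can temporarily go up before it goes down.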

    Read the article

  • VMware ESXi 4 On-Disk Data Deduplication - possible and supported?

    - by hurikhan77
    Environment: We are running multiple web, database, and application servers which usually share a pretty common installation (Gentoo Linux) and similar configuration in VMware ESXi 4. The differences are usually only some installed features or differing component versions. To create a new server, I usually choose the most similar (by features) running server, rsync a copy of it into freshly mounted filesystems, run grub, reconfigure and reboot. Problem: Over time this duplicates many on-disk data blocks, which probably adds up to several tens of gigabytes. I suppose if I could use a base system as a template, with the actual machines based on top of that and only writing changed blocks to some sort of "diff image", performance should improve (increased cache hit rate) and storage efficiency should increase (deduplicated storage space). This would be similar to what ESXi already supports for RAM deduplication (page sharing). Question: Is there any way to easily do this on ESXi 4? I already share the portage tree via NFS, but this would not work for the rootfs.

    Read the article

  • Live resize of a GPT partition on Linux

    - by cyberz
    On Linux I used to resize MBR partitions using fdisk, even on live filesystems, and then issue a resize2fs/pvresize/... (depending on fs type) to get the new space allocated. Lately I've been using Xen and GPT partitions, and I've noticed that unfortunately parted doesn't seem to allow on-the-fly resizing of a mounted partition; in fact it complains: Error: Partition XXX is being used. You must unmount it before you modify it with Parted. I've tried both the resize command and the rm + mkpart combination, but both complain about the partition being mounted. How can I do this?
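
    One possible approach, sketched under the assumption that the partition is being grown in place and keeps its start sector (device names are examples only):

        # Newer parted releases can grow a partition entry even while it is mounted:
        parted /dev/xvdb resizepart 1 100%

        # Make the kernel re-read the new size, then grow the ext filesystem online:
        partprobe /dev/xvdb            # or: partx -u /dev/xvdb
        resize2fs /dev/xvdb1

    With older tools, the equivalent trick is to delete and recreate the GPT entry with the same start sector (for example with gdisk/sgdisk), re-read the partition table, and then run resize2fs.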

    Read the article

  • How to configure autofs5 timeout on per-filesystem basis?

    - by Norman Ramsey
    Because of a show-stopping bug in Debian autofs 4, I just upgraded to autofs5. It is not honoring the timeout option in my auto.master file:

        /var/autofs/removable /etc/auto.removable --timeout=2

    I use this map for thumb drives and so on; I don't want a general default timeout of 2 seconds. I did some digging, and although the --timeout option worked in autofs 4, and it appears in some examples on the Web, it is not actually sanctioned (or even mentioned) in the documentation for the auto.master file. So I don't feel I can report the problem as a bug. How can I get autofs5 to time out after 2 seconds only on designated filesystems? Update: I am using a Debian-packaged autofs5, version 5.0.4-3.2.

    Read the article

  • How can I view updatedb database content, and then exclude certain files/paths?

    - by rubo77
    The updatedb database on my Debian server is quite slow. Where is the database located, and how can I view its content to find out whether there are paths full of useless stuff that I could add to the prune paths? My /etc/updatedb.conf looks like this:

        ...
        # filesystems which are pruned from updatedb database
        PRUNEFS="NFS nfs nfs4 afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre_lite tmpfs usbfs udf"
        export PRUNEFS
        # paths which are pruned from updatedb database
        PRUNEPATHS="/tmp /usr/tmp /var/tmp /afs /amd /alex /var/spool /sfs /media /var/backups/rsnapshot /var/mod_pagespeed/"
        ...

    And how can I prune all paths that contain */.git/* and */.svn/*?
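
    A short sketch, assuming the mlocate implementation that is common on Debian (paths differ with the older findutils locate):

        # The database file itself:
        ls -lh /var/lib/mlocate/mlocate.db

        # Dump its entire contents to look for bulky subtrees:
        locate '*' | less

        # Summary statistics (file/directory counts, database size):
        locate -S

        # Directory names such as .git and .svn can be pruned wholesale with
        # PRUNENAMES in /etc/updatedb.conf:
        #   PRUNENAMES=".git .svn"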

    Read the article

  • UDF filesystem -> Maximum number of files

    - by user978122
    I am considering partitioning a rather large hard drive with the UDF filesystem for an experiment, and would like to ask if anyone knows the maximum number of files, either per directory or as a whole, that the UDF filesystem can handle. For some background, I looked at the JFS and XFS filesystems (NTFS has a limit on the number of files per volume); however, since I run Windows, those are kind of out. UDF, on the other hand, does not appear to have these limitations, but then, I cannot really find any information on just how many files per volume the UDF filesystem supports.

    Read the article

  • Proper upstart script for hamachi?

    - by ALQ
    I've been looking for a script to supervise hamachi and mostly got it to work, except for the part that daemonizes hamachid. The following script works but is not perfect, and I'm not familiar enough with upstart internals to debug this further.

        description "Hamachi VPN"
        author "Alexis Le-Quoc <[email protected]>"

        start on (net-device-up and local-filesystems and runlevel [2345])
        stop on runlevel [016]

        respawn
        oom never

        env DAEMON=/opt/logmein-hamachi/bin/hamachid

        pre-start script
            [ -x "$DAEMON" ]
        end script

        # should really be:
        # expect daemon
        # exec $DAEMON
        exec $DAEMON debug > /dev/null

    Read the article

  • Why does cpio say "WARNING! These file names were not selected" when copying a large number of files

    - by mmm bacon
    For over 10 years, I've been using this strategy to copy a large number of files between UNIX filesystems:

        cd source_directory
        find . -depth -print | cpio -pdm /path/to/destination_directory

    It works like a champ. However, I'm now getting this error from cpio:

        cpio: WARNING! These file names were not selected: (long list of files here...)

    The source directory is on OS X 10.5, and the destination directory is an NFS filesystem from an OpenSolaris server. Copying over NFS has never been a problem in the past. There's nothing strange about the filenames, meaning there aren't special characters or anything like that. Any ideas?

    Read the article

  • Can a power failure or forceful shutdown damage hardware?

    - by Vilx-
    Can computer hardware suffer damage from forceful shutdowns (holding the power button for five (5) seconds) or power failures? I believe that normal PC hardware does not suffer from this - after all, it's not much different from what it experiences under a standard shutdown. But elsewhere I've read that another person thought it could do physical harm to the hard drive and possibly other components as well. He also said that the journaling features of filesystems are useless in the face of power failures and were intended to help mitigate damage from system crashes. I think this is nonsense, but then again I lack the experience and knowledge to say so with certainty.

    Read the article

  • MacBook Pro 2.2GHz 2011 (OS X 10.6.7) problem with NTFS-3G

    - by James
    I installed NTFS-3G but now get the following error message when I try to plug in my external drive. I also get it on startup about my Windows partition. Uninstalling and reinstalling does not help.

        NTFS-3G could not mount /dev/disk1s1 at /volumes/freeagent GoFlex Drive because the following error occurred:
        /library/filesystems/fuse.fs/support/fusefs.kext failed to load - (libkern/kext) link error; check the system/kernel logs for errors or try kextutil(8).
        The MacFUSE file system is not available (71)

    Any help would be great. I'd like to avoid reinstalling OS X if possible!

    Read the article

  • changing filesystem format from jfs to ext4 without losing data

    - by A.Rashad
    I have a fresh Lucid Lynx (Ubuntu 10.04) install running on a laptop, where I defined the filesystems as:

        mount point /      on ext4 (46 GB)
        mount point /home  on jfs  (63 GB)
        swap               3 GB

    I left the machine overnight to do some task, without the AC power supply. The next morning I found it in standby, the task completed, but the filesystem was not reachable; it gave me I/O errors. It seems there is a problem with jfs and standby. Anyway, to avoid any hassle, I want to move this mount point from the jfs format to ext4. Can I do this without losing data and without the need to place the data in a temporary location until the conversion is done? Sorry to mention it, but I recall back in the Windows days we would change FAT16 to FAT32, or FAT32 to NTFS, without having to lose the data. I hope something similar is available on Linux.
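
    As far as I know there is no in-place JFS-to-ext4 converter, so the usual route is copy out, reformat, copy back. A rough sketch from a live CD, with /home not in use (device names and the backup target are examples only):

        mkdir /mnt/home-old /mnt/backup
        mount /dev/sdb1 /mnt/backup                      # external backup disk
        mount -o ro /dev/sda3 /mnt/home-old              # the JFS /home partition
        rsync -aHAX /mnt/home-old/ /mnt/backup/home/

        umount /mnt/home-old
        mkfs.ext4 /dev/sda3
        mount /dev/sda3 /mnt/home-old
        rsync -aHAX /mnt/backup/home/ /mnt/home-old/

        # Finally change the /home entry in /etc/fstab from jfs to ext4.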

    Read the article

  • Windows 7 installer doesn't recognize NTFS partition.

    - by ifesdjeen
    Hi, I'm trying to install Windows 7 on my MacBook. I've created an NTFS partition, but when I start the Windows 7 installation it says that I can't install Windows on this partition, since the drive already contains the maximum number of partitions with this filesystem type. I haven't heard of any such limits on filesystems, but still I can't even format this drive from the Win7 installer. I've found access to a command line from the Win7 installation CD, but I can't find fdisk there to format with. Any idea how to deal with this?
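
    One thing worth knowing: the installer's command prompt has no fdisk, but it does ship diskpart, which can list and format partitions. A sketch (the disk and partition numbers are examples; check the list output before selecting anything):

        diskpart
        list disk
        select disk 0
        list partition
        select partition 3
        format fs=ntfs quick
        exit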

    Read the article

  • What is OpenSVC?

    - by sh-beta
    OpenSVC was just ported to the FreeBSD platform. The little blurb in that announcement intrigued me so I went to the OpenSVC website and found this: OpenSVC is a 'service' manager, as in clustered service manager, designed for real-world heterogeneous datacenters and large-scale operations orchestrator (disaster recovery, for example). Services are collections of resources (virtual machine, ip, disk groups, filesystems, file synchronizations, and application launchers). Services can be started, stopped and queried for status, providing a consistent command set for wildly different service integration types. Service configurations, status and logs are pushed to a central database coupled to a web front-end (collector). Services can be administered using the stand-alone GPLv2 software stack deployed on the nodes (nodeware), or through the web-front end. Plus some UML-type graphics. Which is all neat, but I still don't understand: what does it do? Am I just being dense? What's the use case for this system?

    Read the article

  • Turning a running Linux system into a KVM instance on another machine

    - by Charles
    I have two physical machines that I wish to virtualize. I cannot (physically) plug the hard drives from either machine into the new machine that will act as their VM host, so I think that copying the entire structure of the system over using dd is out of the question. How can I best go about migrating these machines from their hardware to the KVM environment? I've set up empty, unformatted LVM logical volumes to host their filesystems, with the understanding that giving the VMs a real partition to work with achieves higher performance than sticking an image on the filesystem. Would I be better off creating new OS installs and rsyncing the differences over? FWIW, the two machines to be VM'd are running CentOS 5, and the host machine is running Ubuntu Server 10.04 for no particularly important reason. I doubt this matters too much, as it's still going to be KVM and libvirt that matter.
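
    A rough sketch of copying a live source machine over the network into a prepared logical volume (names are examples; the fiddly part afterwards is fixing fstab, the initrd and the bootloader inside the copy):

        mkfs.ext3 /dev/vg0/centos5-root
        mount /dev/vg0/centos5-root /mnt/guest

        rsync -aHAXx --numeric-ids \
            --exclude=/proc --exclude=/sys --exclude=/dev \
            root@source-host:/ /mnt/guest/

        # Recreate the pruned mount points before adjusting the copy and booting it.
        mkdir -p /mnt/guest/proc /mnt/guest/sys /mnt/guest/dev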

    Read the article

  • Pushing image changes to multiple servers

    - by gms8994
    I need the ability to push images out to multiple servers whenever they're updated. I've looked at network filesystems, but they're all but worthless due to their speed. Images can be uploaded to any one of 3 servers, and would then need to be copied to the other 2. Any suggestions? I'm open to trying just about anything. EDIT: Graphics data (jpg, gif, png, etc.), Linux only. We're currently using rsync, but having it work back and forth is getting cumbersome. It's all on the local network.
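
    One low-tech pattern that fits "copy new files to the other two as they arrive" is an inotify watch feeding rsync. A sketch, assuming inotify-tools is installed (host names and the image directory are examples):

        #!/bin/bash
        WATCH_DIR=/var/www/images
        PEERS="web2 web3"

        # Emit the full path of every file that finishes writing or is moved in,
        # and push it to the peers as it appears.
        inotifywait -m -r -e close_write -e moved_to --format '%w%f' "$WATCH_DIR" |
        while read -r file; do
            for peer in $PEERS; do
                rsync -aR "$file" "$peer":/
            done
        done

    lsyncd packages essentially this idea (inotify plus rsync) as a daemon, if rolling your own loop feels too fragile.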

    Read the article

  • Command-line tool to search for file names on offline backup drives

    - by halloleo
    I am looking for an open-source (command-line) tool to register and search all my (backup) drives on a file-name level. I want to search for file and folder names, preferably written as regular expressions or file glob patterns. The external drives contain just normal HFS and NTFS filesystems, and the backups are done via direct file copy. The requirement is that the tool compiles on OS X and works without the drives attached, pointing me to the right drive whenever one of them contains a file matching the pattern I searched for. At the moment I use a hand-knit solution of locate databases, one for each external backup drive, but this is rather cumbersome, because locate itself accesses only one database at a time and has no management system for all the indices/databases. Are there any other tools out there for this?
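
    For comparison, a sketch of the same idea with GNU findutils (the BSD locate bundled with OS X behaves differently); database file names and mount points are examples:

        # Build one database per backup drive while it is attached:
        updatedb --localpaths=/Volumes/Backup1 --output="$HOME/.locate/backup1.db"
        updatedb --localpaths=/Volumes/Backup2 --output="$HOME/.locate/backup2.db"

        # GNU locate takes a colon-separated list of databases, so every registered
        # drive is searched at once; the matching path shows which drive to attach:
        locate --database="$HOME/.locate/backup1.db:$HOME/.locate/backup2.db" '*.dmg'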

    Read the article

  • Ubuntu upstart hangs on interactive start & stop

    - by danorton
    How do I get Ubuntu upstart to not hang on interactive start & stop? I have created many upstart scripts that work fine during init, but often hang when I enter them at the console. If I CTRL+C out, all that happens is that the job changes state. The script is never run. I’m running Ubuntu Lucid on a Xen virtual server with a Linux 2.6.39 kernel. Below is merely a representative example of many scripts that behave this way:

        description "apache2"

        start on local-filesystems \
            and (net-device-up IFACE=lo) \
            and (runlevel [2345])
        stop on runlevel [016]

        respawn
        respawn limit 10 5

        expect daemon

        script
            . /etc/apache2/envvars
            /usr/sbin/apache2ctl start
        end script

    Read the article

  • Ubuntu 12.04 on Amazon EC2: /dev/xvda1 will be checked for errors at next reboot?

    - by cwd
    I'm running the latest Ubuntu 12.04 AMI (ami-a29943cb) from Canonical on Amazon EC2, and quite often when I log in I get the message:

        *** /dev/xvda1 will be checked for errors at next reboot ***

    I have read a bunch of documentation on this and seem to understand that every so many reboots (around 37; see Mount count / Maximum mount count below) Ubuntu wants to check the disk for errors. I can see that by using dumpe2fs -h /dev/xvda1 (reference) to get information such as:

        Last mounted on:          /
        Filesystem UUID:          1ad27d06-4ecf-493d-bb19-4710c3caf924
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              524288
        Block count:              2097152
        Reserved block count:     104857
        Free blocks:              1778055
        Free inodes:              482659
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      511
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        Flex block group size:    16
        Filesystem created:       Tue Apr 24 03:07:48 2012
        Last mount time:          Thu Nov 8 03:17:58 2012
        Last write time:          Tue Apr 24 03:08:52 2012
        Mount count:              3
        Maximum mount count:      37
        Last checked:             Tue Apr 24 03:07:48 2012
        Check interval:           15552000 (6 months)
        Next check after:         Sun Oct 21 03:07:48 2012
        Lifetime writes:          2454 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      0a25e04c-6169-4d68-bfa6-a1acd8e39632
        Journal backup:           inode blocks
        Journal features:         journal_incompat_revoke
        Journal size:             128M
        Journal length:           32768
        Journal sequence:         0x0000158b
        Journal start:            1

    I've tried these things to get rid of the message, and usually the badblocks run is what does it for me:
    - Run this command and reboot: sudo touch /forcefsck
    - Run badblocks to check the disk: badblocks /dev/sda1
    - Edit /etc/fstab and change the last "0", which is the fs_passno column, accordingly and then reboot: the root filesystem should be specified with a fs_passno of 1, and other filesystems should have a fs_passno of 2.
    What I don't understand:
    - If this is a virtual drive, shouldn't it be less prone to errors?
    - Was the image created with one of the flags set? If not, what is triggering it?
    - Why is fs_passno set to 0 on Amazon EC2 Ubuntu images? This is not the first one that is like this.
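
    For what it's worth, the counters that trigger this message live in the ext superblock and can be adjusted with tune2fs; whether relaxing them is a good idea on your instances is a policy question this sketch does not answer (device name as in the question):

        # Show the current counters (same data as dumpe2fs -h):
        sudo tune2fs -l /dev/xvda1 | grep -iE 'mount count|check'

        # Disable both the every-N-mounts check and the 6-month interval check:
        sudo tune2fs -c 0 -i 0 /dev/xvda1

        # Or just raise them, e.g. check every 100 mounts or every 12 months:
        sudo tune2fs -c 100 -i 12m /dev/xvda1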

    Read the article
