Search Results

Search found 2282 results on 92 pages for 'filesystem'.

Page 44/92 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • How exactly are Distributed File Systems used in cloud environment?

    - by vaab
    How exactly are Distributed File Systems used in a cloud environment? More precisely: are live VM images (or their filesystems) usually located on the DFS? Are VMs usually used to run the backbone (the actual code) of the DFS itself? A precise example citing a DFS (Ceph, Gluster, GFS, GPFS, Lustre) or a cloud environment (OpenStack, CloudStack, ...) would be appreciated, even if I'm most interested in Ceph on OpenStack for now.

    Read the article

  • How to access git:// protocol from GitPython

    - by Owais Lone
    I am writing an app to manage git repos using the GitPython module. It works fine for my local repos, but I can't get it to work with the git:// protocol: it treats my git://address-to-repo as a directory on my filesystem. Is there a way to initiate a connection with a remote git repo?

    Read the article

  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    Basically I am searching to see if there exists a tool or script that can detect moved or renamed files, so that I can get a list of renamed/moved files and apply the same operations on the other end of the network to conserve bandwidth. Disk storage is cheap but bandwidth isn't, and the problem is that the files often get reorganized or moved into a better directory structure. When rsync does the backup, it won't notice that a file was renamed or moved and will re-transmit it over the network all over again, despite the same data already being on the other end. So I am wondering if there exists a script or tool that can record where all the files are and their names, then just prior to a backup rescan and detect moved or renamed files, so that I can take that list and re-apply the move/rename operations on the other side. Here's a list of the "general" features of the files:
    - Large, unchanging files
    - They can be renamed or moved around
    [Edit:] These are all good answers, and what I ended up doing was looking at all of the answers; I will be writing some code to deal with this. Basically what I am thinking/working on now is:
    - Use something like AIDE for the "initial" scan, which lets me keep checksums on the files; because they are supposed to never change, this would also help detect corruption.
    - Create an inotify daemon that monitors these files/directories and records any changes relating to renames and moves to a log file.
    - There are some edge cases where inotify might fail to record that something happened on the filesystem, so there is a final step of using find to search for files with a change time later than the last backup.
    This has several benefits:
    - Checksums etc. from AIDE make it possible to verify that some media did not get corrupted
    - inotify keeps resource usage low, with no need to re-scan the filesystem over and over
    - No need to patch rsync; if I have to patch things I can, but I would prefer to avoid it to keep the maintenance burden lower (i.e. no need to re-patch every time there is an update).
    I've used Unison before and it's really nice, but I could've sworn that Unison keeps copies around on the filesystem and that its "archive" files can grow rather large?
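
    A minimal sketch of the inode-tracking idea described above, assuming the data lives on a single filesystem (so inode numbers survive a plain mv) and using made-up data and manifest paths:

        #!/bin/bash
        # Sketch: detect renames/moves between backup runs by tracking inode numbers.
        # DATA_DIR and the manifest paths are placeholders, not from the original question.
        set -euo pipefail
        DATA_DIR=/srv/data
        OLD=/var/backup/manifest.prev
        NEW=/var/backup/manifest.new

        # Record "inode<TAB>path" for every regular file, sorted on the inode field.
        find "$DATA_DIR" -type f -printf '%i\t%p\n' | sort -t $'\t' -k1,1 > "$NEW"

        # Join old and new manifests on the inode number; where the paths differ,
        # the file was renamed or moved, so emit an mv command to replay remotely.
        if [ -f "$OLD" ]; then
            join -t $'\t' -j 1 "$OLD" "$NEW" \
              | awk -F '\t' '$2 != $3 { printf "mv -- \"%s\" \"%s\"\n", $2, $3 }'
        fi
        mv "$NEW" "$OLD"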

    Read the article

  • Gparted doesn't detect any partitions

    - by radi
    I am trying to install Ubuntu 10.04 LTS on my laptop. First I installed it inside Windows, and when it booted for the first time I got the error message "couldn't find root filesystem, try partition table to fix the problem". When I then try to install it normally onto a single partition, the installer asks me to choose a partition but doesn't show any partitions at all (only the entire disk). I have 2 primary partitions and 3 logical partitions. How can I proceed with the install?

    Read the article

  • Mounting /var /tmp /var/log to separate partition

    - by William MacDonald
    Per DISA hardening requirements for RHEL, I'm supposed to make sure a number of locations on the filesystem are mounted on separate partitions. A few of the locations they specify are /var, /tmp, /var/log, etc. Is it possible to do this on a live machine (without booting a separate OS), and how would I go about it? I've backed up the OS, so if I do screw something up I can recover. Thanks!
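
    Not an authoritative procedure, but a rough sketch of how this is often approached on a running system, using /var/log as the easiest of the three; the device name and temporary mount point below are made up, and relocating /var itself is usually safer from single-user mode:

        # Sketch only: relocate /var/log to its own partition on a live machine.
        # /dev/sdb1 and /mnt/newlog are placeholders.
        mkfs.ext4 /dev/sdb1
        mkdir /mnt/newlog
        mount /dev/sdb1 /mnt/newlog

        # Copy the data, preserving ownership, ACLs, and SELinux contexts (xattrs).
        rsync -aAXS /var/log/ /mnt/newlog/

        # Make it permanent, then mount the new partition over the original path.
        echo '/dev/sdb1  /var/log  ext4  defaults,nodev,nosuid,noexec  0 2' >> /etc/fstab
        umount /mnt/newlog
        mount /var/log

        # The old files remain hidden underneath the mount point; reclaim that
        # space later from rescue or single-user mode.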

    Read the article

  • Mac OS X missing disk

    - by leo
    In Boot Camp the partition size is reported as only 149 GB, while Disk Utility shows a single partition of 320 GB. Why do diskutil and df give me different sizes, and how can I fix it? Thanks.

    df -h
    Filesystem      Size   Used   Avail  Capacity  Mounted on
    /dev/disk0s2    149Gi  20Gi   129Gi  14%       /
    devfs           110Ki  110Ki  0Bi    100%      /dev
    map -hosts      0Bi    0Bi    0Bi    100%      /net
    map auto_home   0Bi    0Bi    0Bi    100%      /home

    diskutil list
    /dev/disk0
       #:                       TYPE NAME          SIZE        IDENTIFIER
       0:      GUID_partition_scheme               *320.1 GB   disk0
       1:                        EFI                209.7 MB   disk0s1
       2:                  Apple_HFS Mac HD         319.6 GB   disk0s2

    Read the article

  • ZFS recordsize for VirtualBox and other virtual disks

    - by JOTN
    Has anyone run across any good benchmarks or other research on tuning the ZFS recordsize when putting virtual disk files on it for a guest OS? I'm using VirtualBox at the moment. I have noticed a significant performance improvement when working with a DBMS by setting the ZFS recordsize to the same value as the DB block size, so I'm guessing that matching the block size of the guest filesystem would also be a good idea.
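
    For what it's worth, a sketch of the kind of per-dataset tuning implied above, assuming a pool named "tank" and a guest filesystem with 4K blocks (the pool and dataset names are made up):

        # Sketch: give VM disk images their own dataset and match recordsize
        # to the guest filesystem's block/cluster size (4K here as an example).
        zfs create tank/vmimages
        zfs set recordsize=4K tank/vmimages
        zfs get recordsize tank/vmimages

        # A database dataset would instead match the DB block size, e.g. 16K:
        zfs set recordsize=16K tank/db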

    Read the article

  • How to do client side NFS failover in Linux?

    - by Doug
    I have a CentOS 6.3 client that needs to access NFS storage. There are two NFS servers that serve up the same content stored on a SAN with a clustered filesystem. How do I set up CentOS to failover to the backup NFS server if needed? When I Google, I keep reading that Linux does not support this, but that would be strange since there is plenty of information out there on how to set up a clustered Linux NFS server farm...
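
    For context, one commonly cited approach is autofs-style replicated mounts, which only really help for read-only use; read-write failover usually ends up behind a cluster-managed floating IP instead. A sketch with made-up server names, IP and paths:

        # /etc/auto.master (sketch)
        /mnt/nfs  /etc/auto.nfs

        # /etc/auto.nfs (sketch): autofs picks a reachable server from the list
        data  -ro,soft  nfs1.example.com,nfs2.example.com:/export/data

        # Read-write access via a floating IP owned by whichever server is active:
        mount -t nfs -o rw,hard,intr 10.0.0.50:/export/data /mnt/data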

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now used only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list:
    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run DiskWarrior on the copy and repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index in between each addition to watch for issues (interestingly, no issues occurred except with the Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble. It appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files
    Between each of the above steps I stopped Spotlight (searched for anything beginning with md in Activity Monitor - All Processes - and quit it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it again. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck - it keeps predicting 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject it without Force Eject. Once I disconnect the drive after the "4 hours remaining" situation, if I reattach it, Spotlight estimates the remaining time forever and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? Thanks, M

    Read the article

  • Backing up oracle to TAPE

    - by andreas
    Hi folks, our Oracle database has grown very large of late (roughly 400-500 GB), and backing it up to the filesystem no longer scales for us. We are looking at using RMAN to back up to tape (directly, not to the filesystem and then to tape). Can anyone shed some light on this, please?
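
    As a very rough sketch of what an RMAN-to-tape run can look like (the channel name and the SBT_LIBRARY path are placeholders; the actual SBT/media-management library comes from the tape backup vendor):

        # Sketch: contents of an RMAN command file, e.g. backup_to_tape.rman
        # (placeholder SBT_LIBRARY path; the real library is supplied by the MML vendor)
        RUN {
          ALLOCATE CHANNEL t1 DEVICE TYPE sbt PARMS 'SBT_LIBRARY=/path/to/vendor/libobk.so';
          BACKUP DATABASE PLUS ARCHIVELOG;
          RELEASE CHANNEL t1;
        }

        # Run it against the target database:
        rman target / @backup_to_tape.rman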

    Read the article

  • How can I get an SFTP server running on Windows 2008?

    - by Saul
    I have a remote Windows 2008 machine, and the task at hand is to share out parts of its filesystem via SFTP for a single user. Were commercial software an option things would be easy, but I want freeware. After trying out several candidates such as Core FTP Mini SFTP Server, SilverShield and freeFTPd, none of them really qualified - either connection issues, zero configurability or bugs. Is there a free and stable SFTP server for Windows 2008 which works out of the box?

    Read the article

  • Autoscaling EC2 with NFS mounts

    - by Jamie Taylor
    I'm trying to set up a shared filesystem on EC2 and I've read tutorials such as this: http://blog.ronaldmccollam.com/2012/07/configuring-nfs-on-ubuntu-in-amazon-ec2.html In step 2 it talks about configuring the exports; for this I need an IP range, but when I'm auto-scaling I can't predict what the IPs will be before the group scales. Is there any other way of doing this while still staying secure? Thanks Edit: Just tried s3fs; it didn't seem to work properly.
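
    One common workaround is to export to the whole private subnet or VPC CIDR block that the auto-scaled instances launch into, rather than to individual addresses, and let a security group restrict the NFS ports. A sketch with a made-up CIDR and export path:

        # /etc/exports sketch: allow the whole private subnet the scaling group uses.
        /export/shared  10.0.0.0/16(rw,sync,no_subtree_check)

        # Re-read the exports after editing.
        exportfs -ra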

    Read the article

  • How to start Cygwin's NFS server in read-write mode?

    - by Vi
    Installed the Cygwin NFS server. It works, but I can't make it allow writing to the filesystem. Why does it fail?

    Server:
    $ cat /etc/exports
    #/ 10.99.98.2(rw,no_root_squash)
    /cygdrive/c/foranevia *(rw,no_squash_root,anon_uid=0,anon_gid=0,no_subtree_check)

    Client:
    root@vi-notebook:/mnt# mount wpc:/cygdrive/c/foranevia nfs
    root@vi-notebook:/mnt# mkdir nfs/qqq
    mkdir: cannot create directory `nfs/qqq': Read-only file system
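
    For reference, the documented spelling of that export option is no_root_squash rather than no_squash_root (as in the commented-out line above); a sketch of the export line with the standard option name, in case the unrecognized option is what makes the export fall back to read-only:

        /cygdrive/c/foranevia *(rw,no_root_squash,anon_uid=0,anon_gid=0,no_subtree_check)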

    Read the article

  • Backup linux to ftp server

    - by Alakdae
    What do you use for backups to an FTP server? I've tried a setup with Amanda and virtual tapes on the FTP server mounted with curlftpfs, and I'm not satisfied with it; I just don't feel confident about Amanda. I also cannot use anything based on rsync on the FTP-mounted filesystem, because it only creates the directories and doesn't create the files, as it cannot execute "mkstemp". I've been thinking about Bacula, but I can't find any good HOWTO for it.

    Read the article

  • What is the best vfat driver for FUSE?

    - by Vi
    The FUSE filesystem list shows FuseFat and FatFuse. One is a 404, the other is old, not buildable and probably depends on glib. Right now I'm using mountlo for the task (mounting USB drives in a generic way without root access or suid tricks, except for fusermount itself), but it seems too heavyweight for such a task. Is there a good vfat FUSE driver?

    Read the article

  • "No space left on device" with FreeBSD

    - by why
    When I log in as root and run "mkdir .ssh", the system says "No space left on device". But if I log in as another user, it works fine.

    [/root]df -h
    Filesystem     Size    Used   Avail  Capacity  Mounted on
    /dev/da0s1a    496M    411M   45M    90%       /
    devfs          1.0K    1.0K   0B     100%      /dev
    /dev/da0s1e    496M    12K    456M   0%        /tmp
    /dev/da0s1f    57G     878M   51G    2%        /usr
    /dev/da0s1d    4.3G    215M   3.8G   5%        /var

    [/root]mkdir .ssh
    /: create/symlink failed, no inodes free
    mkdir: .ssh: No space left on device
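
    The "no inodes free" line already points at inode exhaustion on / rather than a lack of blocks; a quick way to confirm and locate it (a sketch, not from the original post):

        # Show inode usage per filesystem; / will show its inodes at or near 100% here.
        df -i

        # Rough way to spot directories holding huge numbers of small files on /:
        find / -xdev -type d -exec sh -c 'echo "$(ls -A "$1" | wc -l) $1"' _ {} \; | sort -rn | head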

    Read the article

  • Grub Setup(hd0) Error Cannot mount selected partition

    - by MA1
    I have created an NTFS partition (/dev/sda3) and copied the grub files into it under the following path: /dev/sda3/boot/grub/. Then I tried to install grub using the following commands:

    grub
    grub> root (hd0,2)
    Filesystem unknown, partition type 0x7
    grub> setup (hd0)
    Error: cannot mount selected partition

    The partition is present and I created it with GParted. I also tried the following command:

    grub> find (hd0,2)/boot/grub/stage1
    Error 15: File not found

    All the files are there, as I copied them myself. So where is the problem, and what am I doing wrong?

    Read the article

  • Windows XP slow directory move

    - by maaartinus
    When I move a directory containing 900 MB in 4k files to another directory on the same filesystem, it takes nearly 1 minute and I hear the disk working. It's NTFS on Windows XP, the disk is quite fast (ST3100015 28AS) and works fine according to CrystalMark. I switched the antivirus off, and there's nothing else running (there are a lot of processes, but none of them are doing any work). WTF is it doing instead of changing two directory entries?

    Read the article

  • Is there good FAT driver for FUSE? (Lightweight, not mountlo)

    - by Vi
    The FUSE filesystem list shows FuseFat and FatFuse. Both are old, FatFuse is read-only, and FuseFat doesn't build and probably depends on glib. Right now I'm using mountlo for the task (mounting USB drives in a generic way without root access or suid tricks, except for fusermount itself), but it seems too heavyweight for such a task. Is there a good vfat FUSE driver?

    Read the article

  • What's the correct SELinux type for a directory?

    - by unthar
    If I create a new filesystem/directory off of / and set the Linux permissions to 770, I expect the group to be able to read and write files in that directory. SELinux was preventing this until I changed the SELinux type on the directory to public_content_rw_t. If this is just a directory in which users in that group will share files, is this an acceptable SELinux type, or should I be using another one? Writing a custom policy seems like overkill for this purpose. Thanks
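
    Whatever type is chosen, a sketch of how the label is usually made persistent (so it survives a relabel) rather than relying on a one-off chcon; the /shared path below is a placeholder:

        # Record a file-context rule, then apply it to the existing tree.
        semanage fcontext -a -t public_content_rw_t "/shared(/.*)?"
        restorecon -Rv /shared

        # public_content_rw_t only matters to services (ftpd, httpd, smbd, ...) if the
        # matching boolean is enabled, e.g.:
        setsebool -P allow_ftpd_anon_write on   # boolean name varies by policy version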

    Read the article

  • What is the best vfat driver for FUSE? (Lightweight, not mountlo)

    - by Vi
    The FUSE filesystem list shows FuseFat and FatFuse. Both are old, FatFuse is read-only, and FuseFat doesn't build and probably depends on glib. Right now I'm using mountlo for the task (mounting USB drives in a generic way without root access or suid tricks, except for fusermount itself), but it seems too heavyweight for such a task. Is there a good vfat FUSE driver?

    Read the article

  • How to clone a USB flash drive using dd?

    - by MentalBlister
    Using 'dd' to clone a USB drive:
    - in cfdisk, resized the destination partition to the same size
    - made the partition bootable
    - same 'type', ext3
    - ran 'mkfs.ext3' after exiting cfdisk
    - then: dd if=/dev/sda1 of=/dev/sdb1
    Result when booting: "Missing operating system". The source USB device boots on multiple laptops, and the destination USB filesystem looks the same... Any ideas?
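
    For comparison, a sketch of cloning the whole device (partition table and boot code included) rather than a single partition; sdX and sdY are placeholders and must be double-checked before running anything like this:

        # Sketch: clone the entire USB stick, MBR and boot code included.
        # /dev/sdX = source stick, /dev/sdY = destination stick (placeholders!).
        dd if=/dev/sdX of=/dev/sdY bs=4M conv=fsync
        sync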

    Read the article

  • Drive system file size

    - by rezx
    When I make a new drive, some space is taken up by filesystem metadata: FAT32 takes the least space, then NTFS, then ext4. My question is: how can I know how much space the filesystem will take before I create it, whether the drive is 1 GB or 100 GB, for FAT32, NTFS and ext4? Edit: when I make a 10 MB drive with FAT32, the size shown is 9.9 MB; when I make a 10 MB drive with ext4, the size shown is 8.1 MB. The same thing happens with bigger sizes: there is always some space used even though there are no files on the drive. So where does this space go? If it is used by the filesystem, how can I calculate the space that will be taken before formatting the drive?
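
    For ext4, one practical way to preview the overhead before committing is a dry run plus a look at the reserved-blocks setting; a sketch with a placeholder device name:

        # Preview what mke2fs would allocate without actually writing anything.
        mke2fs -n -t ext4 /dev/sdX1

        # After formatting, the reserved-for-root percentage (5% by default) accounts
        # for much of the "missing" space and can be inspected or reduced:
        tune2fs -l /dev/sdX1 | grep -i 'reserved block count'
        tune2fs -m 1 /dev/sdX1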

    Read the article

  • File size limit exceeded in bash

    - by yboren
    I have tried this shell script on a SUSE 10 server, kernel 2.6.16.60, ext3 filesystem. The script has a problem like this:

    cat file | awk '{print $1" "$2" "$3}' | sort -n > result

    The file's size is about 3.2 GB, and I get this error message: "File size limit exceeded". In this shell, ulimit -f is unlimited. After I change the script into this:

    cat file | awk '{print $1" "$2" "$3}' > tmp
    sort -n tmp > result

    the problem is gone. I don't know why; can anyone help me with an explanation?
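
    A few things worth checking when chasing this kind of error (a sketch, not from the original post) - the value printed by ulimit only reflects the shell you ran it in, while each stage of the pipeline and sort's temporary files are subject to the limits of their own process:

        # Soft and hard file-size limits for the current shell:
        ulimit -Sf
        ulimit -Hf

        # On newer kernels, the limits actually applied to a running process (PID is a placeholder):
        cat /proc/12345/limits | grep -i 'max file size'

        # System-wide caps configured per user/group:
        grep fsize /etc/security/limits.conf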

    Read the article
