Search Results

Search found 859 results on 35 pages for 'filesystems'.

Page 15 of 35

  • Why do disk images hosted on a read-only HFS+ partition behave differently?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is or if there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Boot Camp driver package includes file system drivers that enable Windows to access the Mac OS HFS+ partition. It's read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that lead to all sorts of non-solutions on Google, one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine. My question now is, why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well since I was wondering: Does the file system of a network share matter? Does the client system need to understand the file system of the share host or is that abstracted away in SMB?

    Read the article

  • How to delete old pagefile.sys and hiberfil.sys on a secondary disk (old Windows install)

    - by Silvermist
    A while ago I swapped my main hard disk for an SSD. The old disk is now used as a secondary drive, and my OS is a fresh Windows install on the SSD. Nevertheless, there are still huge pagefile.sys and hiberfil.sys files on that secondary drive. They are not the ones used by the current Windows, as those exist on C:. I tried to attrib -s -h them, but it refused with "Access denied". Any idea how to delete those old, unused system files and reclaim the space?

    Read the article

  • Why doesn't SSHFS let me look into a mounted directory?

    - by Jan
    I use SSHFS to mount a directory on a remote server. There is a user xxx on both the client and the server, with identical UID and GID on both boxes. I mount the remote directory with:

        sshfs -o kernel_cache -o auto_cache -o reconnect -o compression=no \
              -o cache_timeout=600 -o ServerAliveInterval=15 \
              [email protected]:/mnt/content /home/xxx/path_to/content

    When I log in as xxx on the client I have no problems: I can cd into /home/xxx/path_to/content. But when I log in on the client as another user zzz and run ls -l /home/xxx/path_to, I get:

        d????????? ? ? ? ? ? content

    and ls -l /home/xxx/path_to/content gives:

        ls: cannot access content: Permission denied

    On the remote server, ls -l /mnt shows:

        drwxr-xr-x 6 xxx xxx 4096 2011-07-25 12:51 content

    What am I doing wrong? The permissions look correct to me. Am I wrong?
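
    One common explanation is that FUSE, by default, only lets the mounting user through the mount point, which is why zzz sees d?????????. A minimal sketch of the usual fix (the remote host below is a placeholder; allow_other must first be enabled system-wide):

        # allow non-root users to pass allow_other to FUSE (run once, as root)
        echo "user_allow_other" >> /etc/fuse.conf

        # remount with allow_other so other local users can traverse the mount
        fusermount -u /home/xxx/path_to/content
        sshfs -o allow_other -o reconnect -o compression=no \
              xxx@remote-host:/mnt/content /home/xxx/path_to/content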

    Read the article

  • Mount Docker container contents in host file system

    - by dflemstr
    I want to be able to inspect the contents of a Docker container (read-only). An elegant way of doing this would be to mount the container's contents in a directory. I'm talking about mounting the contents of a container on the host, not about mounting a folder on the host inside a container. I can see that there are two storage drivers in Docker right now: aufs and btrfs. My own Docker install uses btrfs, and browsing to /var/lib/docker/btrfs/subvolumes shows me one directory per Docker container on the system. This is however an implementation detail of Docker and it feels wrong to mount --bind these directories somewhere else. Is there a proper way of doing this, or do I need to patch Docker to support these kinds of mounts?
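
    One workaround that avoids relying on storage-driver internals is to export the container's filesystem as a tar stream; a sketch, with "mycontainer" as a placeholder name:

        # unpack the container's filesystem into a directory for read-only inspection
        mkdir -p /tmp/mycontainer-rootfs
        docker export mycontainer | tar -xf - -C /tmp/mycontainer-rootfs

        # or just list the contents without unpacking
        docker export mycontainer | tar -tvf - | less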

    Read the article

  • What are 'damaged files' on external hard drive (HFS format for OS X)?

    - by dtlussier
    I have an external HD formatted with the default HFS (Mac OS Extended, Journaled) and every once in a while I get a folder called DamagedFiles in the root of the volume. The folder contains a collection of links to files on the drive. In general the files seem fine; for example, I can open the images or text files without a problem. Is this serious? What can I do to fix it? Any advice would be great, as I couldn't find anything here or via Google that addressed this problem in particular. Many thanks.

    Read the article

  • Java Development in Linux

    - by Zac
    I'm a developer and am brand new to Linux (Ubuntu): I'm wondering what best practices dictate about which FHS directories to install various tools to. Things I'll be installing: Eclipse and plugins, GlassFish, SVN, etc. I see that /opt is for holding additional ("optional") software packages, but I also see /usr described as a place for utilities and apps. In another post a user recommended I create an entire partition for /srv alone and do my staging there (I assume he meant that /srv is where GlassFish and other servers should go?). So basically: which FHS directories do Linux developers use for which types of tools? Thanks for any input.
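
    One common convention (a sketch of one layout, not something the FHS mandates; archive names and versions are placeholders) is to unpack vendor-supplied, self-contained tools under /opt and leave /usr to the package manager, with symlinks for the PATH:

        # unpack a vendor tarball under /opt and put the launcher on the PATH
        sudo tar -xzf eclipse-jee-linux-gtk-x86_64.tar.gz -C /opt
        sudo ln -s /opt/eclipse/eclipse /usr/local/bin/eclipse

        # same idea for an application server
        sudo unzip glassfish.zip -d /opt

    /srv, by contrast, is intended for data the system serves (site content, repositories), not for the server binaries themselves.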

    Read the article

  • DVD RW: Are they still relevant for backups?

    - by Harry
    Hello, With the availability of compact USB memory sticks with much, MUCH higher storage capacities, is there still any use case for taking periodic, incremental backups on DVD-RWs? The DVD-RW has the additional annoyance that you cannot drag and drop files to it as easily as you can with a USB memory stick. So, if I have a 4.7 GB DVD-RW, I must re-burn the whole image every time I back up new stuff, with a possibly rearranged file/folder structure. Secondly, why, in this day and age, can you not install a file system (like ext3 or FAT32) on a DVD-RW, and likewise on CD-RWs, as you can on a USB memory stick? Many thanks, /HS
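
    For what it's worth, packet writing with UDF does allow a DVD-RW to be used somewhat like a USB stick on Linux; a rough sketch, assuming the dvd+rw-tools and udftools packages and a drive at /dev/sr0 (depending on the drive and disc, the packet-writing device set up via pktsetup may be needed instead of mounting /dev/sr0 directly):

        # format the disc and create a UDF filesystem
        dvd+rw-format -force /dev/sr0
        mkudffs --media-type=dvdrw /dev/sr0

        # mount it read-write and copy files onto it
        mount -t udf -o rw /dev/sr0 /mnt/dvd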

    Read the article

  • Windows XP slow directory move

    - by maaartinus
    When I move a directory containing 900 MB in 4k files to another directory on the same filesystem, it takes nearly a minute and I hear the disk working. It's NTFS on Windows XP; the disk is quite fast (ST3100015 28AS) and works fine according to CrystalMark. I switched the antivirus off, and there's nothing else running (there are a lot of processes, but none doing any work). WTF is it doing instead of changing two directory entries?

    Read the article

  • Apache and MySQL not working well after extending filesystem

    - by xtrimsky
    I had 4 GB on my /var filesystem (/dev/mapper/vg00-var) and I wanted to extend it to 160 GB. I did it following this tutorial: http://faq.1and1.com/dedicated_servers/root_server/linux_admin_help/7.html Now I have the space:

        Filesystem             Size  Used Avail Use% Mounted on
        /dev/md1               4.0G  424M  3.6G  11% /
        /dev/mapper/vg00-usr   4.3G  1.4G  3.0G  32% /usr
        /dev/mapper/vg00-var   198G  6.5G  192G   4% /var
        /dev/mapper/vg00-home  4.3G  4.4M  4.3G   1% /home
        none                   1.1G     0  1.1G   0% /tmp

    But now I have a problem: for Apache to work, I need to restart it after every reboot with "apachectl -k restart", which is already terrible. I think this is because /var contains the htdocs. The worst part is that MySQL is not starting at all; MySQL also has some files in /var. What have I done wrong? :( Thank you

    EDIT: Attaching /var/log/mysqld.log:

        120602 11:17:44 InnoDB: Waiting for the background threads to start
        120602 11:17:45 InnoDB: 1.1.8 started; log sequence number 8354009
        120602 11:17:45 [ERROR] /usr/libexec/mysqld: unknown variable 'set-variable=local-infile=0'
        120602 11:17:45 [ERROR] Aborting
        120602 11:17:45 InnoDB: Starting shutdown...
        120602 11:17:46 InnoDB: Shutdown completed; log sequence number 8354009
        120602 11:17:46 [Note] /usr/libexec/mysqld: Shutdown complete
        120602 11:17:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
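
    The mysqld.log excerpt points at a configuration problem rather than the resize itself: the old "set-variable=" prefix is no longer accepted by MySQL 5.5 (the InnoDB 1.1.8 in the log), so a leftover line like this in the config aborts startup. A sketch of the fix, assuming the default CentOS/RHEL config location:

        # /etc/my.cnf -- change
        #     set-variable=local-infile=0
        # to
        #     local-infile=0
        sed -i 's/^set-variable=local-infile=0$/local-infile=0/' /etc/my.cnf
        service mysqld restart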

    Read the article

  • How to remove a bad disk from LVM2 with the least data loss on the other PVs?

    - by Walkman
    I had an LVM2 volume group with two disks. The larger disk became corrupt, so I can't pvmove. What is the best way to remove it from the group and save the most data on the other disk? Here is my pvdisplay output:

        Couldn't find device with uuid WWeM0m-MLX2-o0da-tf7q-fJJu-eiGl-e7UmM3.
        --- Physical volume ---
        PV Name               unknown device
        VG Name               media
        PV Size               1,82 TiB / not usable 1,05 MiB
        Allocatable           yes (but full)
        PE Size               4,00 MiB
        Total PE              476932
        Free PE               0
        Allocated PE          476932
        PV UUID               WWeM0m-MLX2-o0da-tf7q-fJJu-eiGl-e7UmM3

        --- Physical volume ---
        PV Name               /dev/sdb1
        VG Name               media
        PV Size               931,51 GiB / not usable 3,19 MiB
        Allocatable           yes (but full)
        PE Size               4,00 MiB
        Total PE              238466
        Free PE               0
        Allocated PE          238466
        PV UUID               oUhOcR-uYjc-rNTv-LNBm-Z9VY-TJJ5-SYezce

    So I want to remove the unknown device (not present in the system). Is it possible to do this without a new disk? The filesystem is ext4.
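
    A sketch of the usual recovery path when one PV is gone for good (this discards any logical-volume extents that lived on the missing disk, so some data loss is unavoidable; the LV name is a placeholder):

        # drop the missing PV from the volume group, forcing removal of affected LV segments
        vgreduce --removemissing --force media

        # activate what is left and check the filesystem before mounting it
        vgchange -ay media
        e2fsck -f /dev/media/<lv-name>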

    Read the article

  • Is ext4 more expensive than NTFS?

    - by ???
    I have just converted an NTFS partition to ext4, but the total space seems to have dropped from 421G to 415G. Where did the 6G go? And the reserved space has grown to 199M on ext4, much larger than the 78M on NTFS; why? The partition is mainly used for movies/music, so most files are very large (10M each). I want to use the ext4 file system; is there any suggestion?

        mkfs.ntfs: /dev/sdb4  421G   78M  421G  1%  /mnt/mmedia
        mkfs.ext4: /dev/sdb4  415G  199M  393G  1%  /mnt/mmedia

    It's also weird that the remaining space on ext4 is 393G; shouldn't it be 415G or 414G? What happened to the missing 22G? Compared to NTFS, ext4 seems to have eaten 28G in total.
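
    Part of the gap is the ext4 reserved-block percentage: by default 5% of the filesystem (roughly 21G here) is set aside for root and subtracted from the space reported as available. A sketch of shrinking it on a media-only volume:

        # reduce the root-reserved space from the default 5% to 1%
        tune2fs -m 1 /dev/sdb4

        # or reserve a fixed, small number of blocks instead
        tune2fs -r 65536 /dev/sdb4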

    Read the article

  • Is there an encrypted write-only file system for Linux?

    - by Grumbel
    I am searching for an encrypted file system for Linux that can be mounted in a write-only mode, by that I mean you should be able to write/append files, but not be able to read the files you have written. Access to the files should only be given when the filesystem is mounted via a password. The purpose of this is to write log files and such, without having the log files themselves be accessible. Does such a thing exist on Linux? Or if not, what would be the best alternative to create encrypted log files? My current workaround consists of simply piping the data through gpg --encrypt, which works, but is very cumbersome, as you can't get easy access to the file system as a whole, you have to pipe each file through gpg --decrypt manually.
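
    A sketch of the usual asymmetric-encryption workaround: log lines are encrypted on the fly with a public key, so nothing on the logging machine can read them back; the key ID and paths are placeholders:

        # encrypt a log stream as it is written; only the offline private key can decrypt it
        some_daemon 2>&1 | gpg --encrypt --recipient logs@example.org > /var/log/app.log.gpg

        # later, on a trusted machine that holds the private key:
        gpg --decrypt /var/log/app.log.gpg | less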

    Read the article

  • MS Windows issue - "Filename or extension is too long"

    - by Daniel
    I run Microsoft Windows on a few of my machines. I don't know if many people know about this issue in the OS, but you can't have very long filenames; from what I know, Linux allows longer names, and I have never run into this issue on my Linux machines. Anyway, I run into issues whenever I copy folders and files to backup drives. I back up my data manually, finding and changing the names of files, which is very tedious. Is there a software tool to shorten folder or file names that are found to be too long on Windows? I have drive-image duplication software which does the job, but in a way that I don't like, and moving files can become a hassle at times if the names are too long to copy.

    Read the article

  • How to allow users to transfer files to other users on Linux

    - by Jon Bringhurst
    We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files (sometimes 1PB) controlled by traditional unix permissions (ACLs usually aren't available or practical due to the specialized nature of the filesystem). We currently have a program called "give", which is a suid-root program that allows a user to "give" a file to another user when group permissions are insufficient. So, a user would type something like the following to give a file to another user:

        > give username-to-give-to filename-to-give ...

    The receiving user can then use a command called "take" (part of the give program) to receive the file:

        > take filename-to-receive

    The permissions of the file are then effectively transferred over to the receiving user. This program has been around for years and we'd like to revisit things from a security and functional point of view. Our current plan of action is to remove the bit rot in our current implementation of "give" and package it up as an open source app before we redeploy it into production. Does anyone have another method they use to transfer extremely large files between users when only traditional unix permissions are available?

    Read the article

  • Command or tool to display list of connections to a Windows file share

    - by BizTalkMama
    Is there a Windows command or tool that can tell me what users or computers are connected to a Windows fileshare? Here's why I'm looking for this: I've run into issues in the past where our deployment team has deployed BizTalk applications to one of our environments using the wrong bindings, leaving us with two receive locations pointing to the same file share (i.e. both dev and test servers point to dev receive location uri). When this occurs, the two environments in question tend to take turns processing the files received (meaning if I am attempting to debug something in one environment and the other environment has picked the file up, it looks as if my test file has disappeared into thin air). We have several different environments, plus individual developer machines, and I'd rather not have to check each individually to find the culprit. I'm looking for a quick way to detect what locations are connected to the share once I notice my test files vanishing. If I can determine the connections that are invalid, I can go directly to the person responsible for that environment and avoid the time it takes to randomly ask around. Or if the connections appear to be correct, I can go directly to troubleshooting where in the process the message gets lost. Any suggestions?

    Read the article

  • Why can I not access any file or directory created by PHP from my FTP client?

    - by user43053
    Hello there, If I create a directory with mkdir(), or create a file with fopen(), file_put_contents() or SimpleXMLElement::asXML(), I am unable to access the file with my FTP client or the cPanel File Manager. If I try to delete or edit them, I get errors. Dreamweaver suggests it is a permission problem or a network or filesystem fault (but I've set the permissions with chmod() to 0777, and when I check in cPanel, it confirms chmod 777). I also tried fileowner(), and the function returns int(99), the same owner as the files that I can access with my FTP client. It seems files and directories created with PHP can only be modified or deleted with PHP. I thought this must be a server-setup-related issue, so I am writing it here. I am on a shared server, and I have no idea about setting up servers. Thank you for your time. Kind regards, Marius

    Read the article

  • Proving file creation dates

    - by Nils Munch
    In a weird case surrounding the copyright of a software system I developed, I rely on the fact that I have all the source files of the system in question, created long before I joined the company that claims to own it. The company being sued by yours truly says that I have simply manipulated the files to appear to be from that date. Is it even possible to fake or manipulate creation dates? And if so, how can I "prove" that the files really are that old? Luckily, I stored my project on GitHub, which confirms that the files are from that era, but that is beside the point. I run purely Apple OS X.

    Read the article

  • Should I use VFAT or ext3 for a 1 TB external USB hard drive?

    - by ihuston
    I have a 1 TB USB external hard drive which I want to use to back up data from my home and office desktops (both running Linux). Should I format the drive (possibly split into a few partitions) as vfat or ext3? I don't anticipate using the drive with Windows very often, so that is not a primary concern. The main thing holding me back from just using ext3 is the problem you can have when two different users (home and work accounts) try to access each other's data. Is there any way to mount an ext3 drive with user ID mapping?
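
    ext3 itself has no uid/gid mount options, but a FUSE overlay such as bindfs can re-present the mounted drive with its files owned by the current account; a sketch, with device and mount points as placeholders:

        # mount the ext3 partition normally, then expose it again mapped to the local user
        sudo mount /dev/sdb1 /mnt/backup-raw
        sudo bindfs -u "$(id -u)" -g "$(id -g)" /mnt/backup-raw /mnt/backup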

    Read the article

  • Is there a good FAT driver for FUSE? (Lightweight, not mountlo)

    - by Vi
    The FUSE filesystem list shows FuseFat and FatFuse. Both are old: FatFuse is read-only, and FuseFat doesn't build and probably depends on glib. Right now I'm using mountlo for the task (mounting USB drives in a generic way without root access or suid things, except for fusermount itself), but it looks too big for such a task. Is there a good vfat FUSE driver?
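
    If the underlying goal is just letting an unprivileged user mount a vfat USB stick, rather than finding a FUSE driver as such, one alternative sketch (assuming the udisks2 daemon and a polkit policy that permits it) is to go through udisks:

        # mount and unmount via the udisks daemon; no suid helper beyond udisks itself
        udisksctl mount -b /dev/sdb1
        udisksctl unmount -b /dev/sdb1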

    Read the article

  • Cross-platform file system

    - by Console
    I would like my external drives to be readable and writable from Linux, Mac OS X and Windows. FAT32 works, but the 4 GB file size limit is a showstopper these days. Are there any alternatives?
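
    A sketch of the two usual candidates, with /dev/sdX1 as a placeholder device: NTFS is writable on Linux via ntfs-3g, native on Windows, and at least readable on OS X; exFAT drops the 4 GB limit and is supported on all three given the right packages:

        # option 1: NTFS (quick format with a label)
        mkfs.ntfs -Q -L backup /dev/sdX1

        # option 2: exFAT (requires the exfat userspace packages on Linux)
        mkfs.exfat /dev/sdX1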

    Read the article

  • Force ID of user created by apt-get

    - by Bart van Heukelom
    Context: I'm automatically installing postgresql-9.1 on an Ubuntu server with apt-get. This creates the required postgres user. The Postgres data is on an external volume that survives reinstalls. This data is obviously owned by the postgres user. The problem I'm having is that the ownership is not recorded under the name postgres, but under the UID that postgres had at creation time. When the server is reinstalled, postgres sometimes gets a different UID, and no longer owns the data directory, and thus does not work. Question: Can I force the UID of the user postgres created by apt-get to something fixed? Or is there another way to solve my problem? (As you may have deduced, this is on Amazon EC2 with the data on an EBS volume)
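
    One way to pin the UID (a sketch; 1100 is an arbitrary example ID): create the postgres group and user with fixed IDs before running apt-get, so the package's postinst script reuses the existing account instead of allocating the next free UID:

        # run before "apt-get install postgresql-9.1"
        groupadd --system --gid 1100 postgres
        useradd --system --uid 1100 --gid postgres \
                -d /var/lib/postgresql -s /bin/bash postgres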

    Read the article

  • Copy a single file from the main directory recursively into all directories within

    - by chris
    I'm on a dedicated server running CentOS, and on this server I have 5000+ directories inside one main directory. In the main directory I have an index.php. I would like to copy this index.php into all 5000+ directories, but the only way I know how is to do it manually. Is there a command-line way, using something like cp, to make it work from the main directory? I'd like it copied all the way down through all the directories and their subdirectories within the main directory I am starting out in.
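
    A sketch of a one-liner that does this from the main directory (run the echo variant first as a dry run):

        # copy index.php into every directory below the current one
        find . -mindepth 1 -type d -exec cp index.php {} \;

        # dry run: print the copy commands instead of executing them
        find . -mindepth 1 -type d -exec echo cp index.php {} \;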

    Read the article

  • Speeding up deletion of a large number of files on NTFS volumes

    - by sharptooth
    Every now and then I need to delete a folder containing something like 500k files from an NTFS volume. I do this with Windows Explorer. Since NTFS journals all the metadata changes, each deletion is carried out serially, so deleting all 500k files takes ages. I remember that when I did the same on FAT32 it ran incomparably faster. Is there any way to speed up the deletion of a large number of files on NTFS volumes?

    Read the article

  • Users get kicked out of a network drive (DFS)

    - by user71563
    Hi, In early January 2011 we switched completely to Windows Server 2008 R2 and Windows 7. On our domain controller we set up a DFS share that is displayed to users as the Z: drive. The DFS was set up the same way back when we ran Windows Server 2003 R2 and Windows XP, and at that time it always worked without problems. Since moving to Windows 7, it sometimes happens that when a user accesses the Z: drive, Explorer jumps back to the Computer view and the user can do nothing. After two or three tries Explorer stays in the network drive and the user can work. This phenomenon occurs irregularly and we cannot pin down exactly why. No obvious entries are logged in the event log at the time. Does anyone recognize this problem or have similar experiences? I am grateful for any help. Greetings, sY!v3Rs

    Read the article
