Search Results

Search found 2515 results on 101 pages for 'distributed filesystems'.

Page 19 of 101

  • Converting NTFS to ZFS (or other)

    - by NumberFour
    Are there any benefits to converting HDDs that are running NTFS on a Linux machine to ZFS? Is there a way to do such a conversion in Linux without losing the data? What about the stability of ZFS on Linux: does FUSE really work well in this case? People say that the only way to get real, full ZFS support is to install Solaris. I understand that the best choice for Linux would be ext4, but I really haven't found a way to convert from NTFS to ext4 without sacrificing all the data. On the other hand, I have doubts whether changing from NTFS to ZFS while using Linux is really wise. Thanks for any tips.
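
    There is no in-place NTFS-to-ZFS conversion that I'm aware of; the usual route is to build a pool on a spare disk and copy the data across. A minimal sketch, assuming native ZFS (or zfs-fuse) and ntfs-3g are installed and that /dev/sdb is an empty spare disk (both device names are placeholders):

      zpool create tank /dev/sdb                        # new single-disk pool, mounted at /tank
      zfs create tank/data                              # dataset to receive the files
      mkdir -p /mnt/ntfs && mount -t ntfs-3g -o ro /dev/sda1 /mnt/ntfs
      rsync -aH --progress /mnt/ntfs/ /tank/data/       # copy, preserving what NTFS can express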

    Read the article

  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning for a large website that includes many static assets (JS, CSS, images and thumbnails) in the generated pages. That website will use TYPO3 as CMS (it is a customer requirement). I guess I could seriously improve performance / page load times by using a two-server setup: one server where the main application (PHP) runs, and another one where the static files sit, served by a trimmed-down version of Apache or something like lighttpd. Including e.g. JS or CSS files from the file server is of course no big deal: just use an absolute URL such as http://static.example.com/js/main.js and be done with it. But that website will have pages with MANY thumbnails of e.g. product images on them, so I see two problems when the main application tries to create a thumbnail of some image: the original image like products/some.jpg is uploaded to the static file server and is therefore not on the same server as the PHP application which tries to create the thumbnail; and TYPO3 writes created thumbnails to a temp directory which is expected to be on the same server. Therefore, hundreds of thumbnails will be written to and served from that temp directory, which is on the same server as the main application - the static file server is in that case basically useless; all thumbnails will be requested from the server of the main application. So, my question is: how do I overcome these shortcomings? Is it possible to "symlink" some directories to another server? So, for example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), could the products folder actually "point" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible at the file system level?
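
    One common way to get that effect (not TYPO3-specific) is to export the static server's directory over NFS and mount it on the application server, so PHP sees the remote files under a local path; the hostnames and paths below are made up:

      # on static.example.com, export the directory (/etc/exports):
      #   /var/www/static   app.example.com(rw,sync,no_subtree_check)
      # on the application server:
      mount -t nfs static.example.com:/var/www/static /mnt/static
      ln -s /mnt/static/products /var/www/site/products   # "products/..." now resolves to the remote share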

    Read the article

  • A space-efficient guest filesystem for grow-as-needed virtual disks?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks. Since they only grow as needed, they are perfect for fast backup, overallocation and creation speed. Since file systems are usually designed for physical disks, they have a tendency to use the whole available area [1] in order to increase speed [2] or reliability [3]. I'm searching for a filesystem that does the exact opposite: one that tries to touch the minimum number of blocks, through aggressive block reuse. I would happily trade some performance for space usage. There is already a similar question, but it is rather general; I have a very specific goal: space efficiency.
    [1] Like page caching uses all the free physical memory.
    [2] Canonical example: online defragmentation.
    [3] Canonical example: snapshotting.
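
    A complementary approach, rather than a special filesystem, is to have the guest hand freed blocks back to the thin-provisioned image via discard/TRIM, assuming the hypervisor and virtual disk controller pass it through (the device name below is a placeholder):

      mount -o discard /dev/vda1 /data    # ext4 with online discard on every delete
      fstrim -v /data                     # or trim in one batch from cron instead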

    Read the article

  • Windows 7 Sub-Folders hidden in "Program Files" directory

    - by ron tornambe
    I have Google searched for an hour now and I am confounded. I am using InnoSetup to install a .NET Winforms application that creates directories and folders on the fly. (I have set the folder options to display hidden files, folders...) Although the files that are added to "created" folders appear within the application, they do not show when using Windows Explorer or even when issuing a Dir from a command prompt. I have also modified the application to display (and delete) the contents of these (seemingly imaginary) folders, so I am sure they exist. What am I missing?

    Read the article

  • Folder default ACLs not inherited when new file is created

    - by Flavien
    I'm a bit of a beginner with Unix systems, but I'm running Cygwin on my Windows Server, and I am trying to figure out something related to extended ACLs. I have a directory to which I set the following ACLs:
      Administrator@MyServer ~ $ setfacl -m d:u:Someuser:r-- somedir
      Administrator@MyServer ~ $ getfacl somedir/
      # file: somedir/
      # owner: Administrator
      # group: None
      user::rwx
      group::r-x
      mask:rwx
      other:r-x
      default:user::rwx
      default:user:Someuser:r--
      default:group::r-x
      default:mask:rwx
      default:other:r-x
    As you can see, most of the default ACLs have the x bit. Then when I create a file in it, it doesn't inherit the ACLs it is supposed to:
      Administrator@MyServer ~ $ touch somedir/somefile
      Administrator@MyServer ~ $ getfacl somedir/somefile
      # file: somedir/somefile
      # owner: Administrator
      # group: None
      user::rw-
      user:Someuser:r--
      group::r--
      mask:rwx
      other:r--
    It's basically missing the x bit everywhere. Any idea why?
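
    A quick sanity check, under the usual POSIX rule that a newly created regular file is requested with mode 0666 (so execute bits can never appear on it, whatever the default ACL says), while a new directory is requested with 0777 and does inherit them:

      touch somedir/plainfile && getfacl somedir/plainfile   # no x bits expected anywhere
      mkdir somedir/subdir    && getfacl somedir/subdir      # x bits show up via inheritance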

    Read the article

  • SQL Server plus small files

    - by user1467163
    I have an MSSQL server with 3 volumes that runs some processes that seem to take way too long. One of these processes reads in a zip file, then writes to a database based on what's in the zip file, record by record. I have 2 volumes in use and am creating the third, so I am trying to plan how to lay this out. The OS has to remain on vol. 1. The TLogs should probably go on the new volume and the MDFs on the existing vol. 2. Do I put the file store on the volume with the MDFs so it doesn't interfere with the TLog writes, or with the TLogs so it doesn't interfere with the TLog flush to the MDFs? I know it's best to have more servers / volumes, but I have to make do with what's on hand for now. I appreciate any suggestions.

    Read the article

  • Linux: don't use file system cache under a directory

    - by GetFree
    For a PHP website I'm monitoring, I need to see what files are being used each time the browser makes a request. I thought of using find . -type f -amin 1. With that I get all files which were read in the last minute (it's a development server, so only I am using the website). I took care of removing the noatime attribute from the mount point. However, there must be something else that's preventing the kernel from reading the actual files on disk, because the access time is not being updated when I read a file. I guess it must be the file system cache which is serving the files from memory. Is there a way to disable file caching under a specific directory (public_html in my case)? Also, I read somewhere that there is a nobh mount attribute which apparently disables file caching under that mount point, but I'm not sure.
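
    An alternative that sidesteps atime semantics and the page cache entirely is to watch the accesses directly with inotify; a sketch assuming the inotify-tools package is installed:

      inotifywait -m -r -e open,access --timefmt '%T' --format '%T %w%f %e' public_html/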

    Read the article

  • linux: accessing thousands of files in hash of directories

    - by 130490868091234
    I would like to know the most efficient way of concurrently accessing thousands of files of a similar size on a modern Linux cluster of computers. I am carrying out an indexing operation on each of these files, so the 4 index files, about 5-10x smaller than the data file, are produced next to the file being indexed. Right now I am using a hierarchy of directories from ./00/00/00 to ./99/99/99 and I place one file at the end of each directory, like ./00/00/00/file000000.ext through ./99/99/99/file999999.ext. It seems to work better than having thousands of files in the same directory, but I would like to know if there is a better way of laying out the files to improve access.
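
    As a comparison point, here is a sketch (not the scheme above) that derives the nested path from a hash of the file name, so files spread evenly across the buckets without keeping a counter; the helper name is made up:

      place() {                                   # hypothetical helper
          local f=$1
          local h=$(basename "$f" | md5sum | cut -c1-6)
          local d=${h:0:2}/${h:2:2}/${h:4:2}      # e.g. ab/3f/91
          mkdir -p "$d" && mv "$f" "$d/"
      }
      place file000000.ext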

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know why a) the file system is modified at all and b) why this seems to happen every time I check and not only in case of an error (like bad blocks)? Here's the output:
      linux-box# fsck.ext3 -c /dev/sdx1
      e2fsck 1.40.2 (12-Jul-2007)
      Checking for bad blocks (read-only test): done
      Pass 1: Checking inodes, blocks, and sizes
      Pass 2: Checking directory structure
      Pass 3: Checking directory connectivity
      Pass 4: Checking reference counts
      Pass 5: Checking group summary information
      Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
      Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks
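
    One likely explanation is that -c (re)writes the bad-blocks inode even when the scan finds nothing, which by itself counts as a modification. Two checks along those lines, kept read-only:

      dumpe2fs -b /dev/sdx1        # list the blocks currently recorded as bad (likely none)
      fsck.ext3 -f -n /dev/sdx1    # forced read-only pass; should come back clean, no "MODIFIED"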

    Read the article

  • Does ZFS cache Compressed or Uncompressed data in a ZFS file-system with compression turned on?

    - by George Bailey
    ZFS supports file-system compression and it also caches frequently or recently accessed data. If a system has lots of CPU but the underlying data storage system is slow, it is possible that ZFS would perform better with compression turned on. This can easily be tested when writing files by measuring CPU and disk usage and throughput (of course latency may exist, but this would not be an issue for large files). But what about the cache? If data has to be decompressed every time it is read, then this is probably less of a good idea. Is the cached data compressed? Does anybody have some information on this?
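
    Short of digging into ARC internals, a crude empirical probe is to time repeated reads of a large, well-compressed file on the dataset in question (the pool/dataset and file names below are placeholders):

      zfs get compression,compressratio tank/data
      time cat /tank/data/bigfile > /dev/null   # cold read from disk
      time cat /tank/data/bigfile > /dev/null   # repeat: served from cache

    If the second read is near-instant and CPU-light, decompress-on-read is clearly not hurting this workload, whichever form the cache stores.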

    Read the article

  • How to create a filesystem mountable by windows in linux?

    - by wcoenen
    I have attached an external USB disk to my Debian GNU/Linux system. The disk showed up as device /dev/sdc, and I prepared it like this: created a single partition with fdisk /dev/sdc (and some more commands in the interactive session that follows), then formatted the partition with mkfs.msdos /dev/sdc1. If I then attach the USB disk to a Windows XP or Vista system, no new drive becomes available. The disk and its partition show up fine in the disk management tool under "Computer Management", but apparently the file system in the partition is not recognized. How do I create a FAT32 file system which can actually be used in Windows? Edit: I've given up on this and went with an NTFS file system created by Windows. In Debian lenny this can be mounted read-write, but apparently it requires you to install the "ntfs-3g" package and explicitly pass the -t ntfs-3g option to the mount command.
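
    For what it's worth, the usual recipe Windows accepts is an explicit FAT32 filesystem on a partition whose MBR type is set to FAT32 LBA; a sketch assuming the disk is still /dev/sdc:

      # in fdisk /dev/sdc: command 't', type 'c' (W95 FAT32 LBA), then 'w'
      mkfs.vfat -F 32 -n USBDISK /dev/sdc1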

    Read the article

  • How can I audit a Linux filesystem for files which have been changed or added within a specific time

    - by Bcos
    We are a website design/hosting company running several sites on a Linux server using Joomla 1.5.14, and recently someone was able to exploit a vulnerability in the RW Cards component to write arbitrary files / modify existing files on our filesystem, enabling them to do some nasty things to our customers' sites. We have removed the vulnerable modules from all sites but are still seeing some problems. We suspect that they still have some scripts installed and need a way to audit anything that has been changed or added in the last 10 days. Is there a command or script we can run to do this?
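
    A starting point with plain GNU find, assuming the sites live under /var/www (adjust the path); -ctime also catches permission and ownership changes that -mtime misses:

      find /var/www -type f \( -mtime -10 -o -ctime -10 \) -ls
      find /var/www -type f -mtime -10 -name '*.php' -ls    # narrow to recently touched PHP files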

    Read the article

  • Which file system to choose from when formatting 1.5TB hard drive (hdd)

    - by MaxiWheat
    I plan to buy a 1.5TB hard drive soon. I would like to know which file system to choose when I format it. With FAT32, there is a limitation on the maximum file size (4GB) that bugs me, since I might save large files such as DVD images which are over 4GB. On the other hand, NTFS allows me to save larger files, but seems less compatible with OSes other than Windows and is also proprietary to Microsoft. Are there other alternatives? Can you give me your advice?

    Read the article

  • Optimal dir structure for keeping millions of files on an ext4 system

    - by Alex Flo
    I need to keep millions of files on an ext4 system. I understand that having a structure with multiple subdirectories is the general accepted solution. I wonder what would be the optimal approach in terms of number of dirs/subdirs. For example I tried a structure like 16/16/16/16 (that is, (sub)directories from 1 to 16) and I found that I am able to move 100K files to this structure in 2m50s. When trying to move 100K files to a 8/8/8/8/8/8 structure it took 11 minutes. So the 16/16/16/16 approach seems to be better but I was wondering if anyone has some empirical experience with an even better dir/subdir distribution.
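
    Whichever fan-out wins, it is worth confirming that hashed directory indexes are enabled on the filesystem before benchmarking, since they dominate lookup cost in large directories; the device name below is a placeholder:

      tune2fs -l /dev/sda1 | grep -i features    # look for dir_index in the feature list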

    Read the article

  • Repairing a corrupt exFAT file system

    - by Wandyer
    Long story short: I messed up my GPT and went on to try to fix it without asking anyone, just searching around. It didn't turn out too well. Right now all I'm concerned about is a 500GB partition that I formatted as exFAT, with some important files on it. On my journey to fix things, I may have used the 'fdisk' command on a GParted Live CD I have (I couldn't boot into any OS) and switched it to ext2. Now I can't get access to it; it doesn't show up on Windows or Mac, only in the partition table as ext2. I have got access to most of my files through recovery software, but it cannot recover the original directory or file names, which would be a pain to fix. I want to know if there is a way to change the file system back to exFAT without having to format it. Thanks in advance. EDIT: This is how my partitions look right now.
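
    If the partition was merely re-typed in fdisk and never actually reformatted, the exFAT data should be untouched and switching the type ID back may be all that is needed; this is a hedged sketch only (the device name is a placeholder), best tried against a ddrescue image or a spare copy first:

      fdisk /dev/sdX
      #   t  -> change the partition type
      #   7  -> HPFS/NTFS/exFAT on an MBR table (on GPT, choose "Microsoft basic data")
      #   w  -> write the table, then re-plug the drive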

    Read the article

  • Linux disk usage analyser that acts like symlinks are real files

    - by Rory
    I am using git-annex, an extension to the DVCS git, which is designed for handling large files. It makes heavy use of symlinks: the actual large files are moved to the .git/annex directory and the original files are symlinked to there. I am running out of disk space, need to clear up, and want to see what's using all my space. Usually I'd use a disk usage tool like ncdu, Baobab or Filelight. However, they treat the symlink as essentially empty, and only count the file that it is pointing to as using any space. This means that when I use git-annex, they show no space used in the main directories and lots of space used in the .git/annex directory, which is not helpful. Is there any (graphical or ncurses-based) disk usage programme for Linux (apt-get installable would be easiest) that is capable (through options or not) of counting a symlink as using up the space that the original file uses up? Many have options for different behaviour for hard links, so it makes sense that some should handle symlinks similarly. (I know counting symlinks as using space has flaws, like counting the same space twice, broken symlinks, etc., but that's OK for my purposes.)
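
    Not graphical, but plain du can already do the accounting described here by dereferencing symlinks and charging the target's size to the directory holding the link:

      du -Lsh -- */ | sort -h    # per-directory totals, following symlinks (GNU du/sort)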

    Read the article

  • Can two Linux installations share the same /home partition?

    - by huahsin68
    I am currently using OpenSuse 11.4 and Windows XP on my laptop. I was planning to remove Windows and install Kubuntu in its place. My current situation is that I have my root (/) and /home partitions separated in OpenSuse. Can I share the /home partition between OpenSuse and Kubuntu? How do I configure Kubuntu to use the existing /home partition during the installation? BTW, the most recent Kubuntu is using the ext3 file system, and my OpenSuse is also using ext3. Will this matter when I install Kubuntu? Are there any other issues I need to take care of?
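
    A sketch of the relevant /etc/fstab line on the Kubuntu side (the UUID and device are placeholders; get the real ones with blkid). The other thing that commonly bites: the user on Kubuntu must end up with the same UID/GID as on OpenSuse, or the permissions in /home will not line up.

      blkid /dev/sda3    # hypothetical partition currently holding /home
      # /etc/fstab entry (do NOT mark the partition for formatting in the installer):
      # UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext3  defaults  0  2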

    Read the article

  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on LUKS over software RAID5. The filesystem was operating "just fine" for several years when I began to run out of space. I had a 9T volume on 6x2T drives. I began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the LUKS container, and then when I unmounted and tried to resize2fs I was given the message that the filesystem was dirty and needed e2fsck. Without thinking I just did e2fsck -y /dev/mapper/candybox and it began spewing all kinds of "inode being removed" type messages (can't remember exactly). I killed e2fsck and tried to remount the filesystem to back up the data I was concerned about. When trying to mount at this point I get:
      # mount /dev/mapper/candybox /candybox
      mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox,
             missing codepage or helper program, or other error
             In some cases useful info is found in syslog - try
             dmesg | tail or so
    Looking back at my older logs I noticed the filesystem was giving this error each time the machine booted:
      kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended
    So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another) and each attempt left this in my log:
      EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440)
      EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729)
      EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845)
      ...
      EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098)
      BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939]
    Attempts to restart e2fsck result in:
      # e2fsck /dev/mapper/candybox
      e2fsck 1.41.14 (22-Dec-2010)
      e2fsck: Group descriptors look bad... trying backup blocks...
      candy: recovering journal
      e2fsck: unable to set superblock flags on candy
    At this point, I decided it best to order some more drives and make an image using ddrescue. Now, two weeks later, I have an image of the LUKS partition in a .img file:
      # ls -lh
      total 14T
      -rw-r--r-- 1 root root 14T Oct 25 01:57 candybox.img
      -rw-r--r-- 1 root root 271 Oct 20 14:32 candybox.logfile
    After numerous attempts using everything I could find online, I could not coerce e2fsck to do anything on the image, so I used mkfs.ext4 -L candy candybox.img -m 0 -S and I was able to mount the dirty filesystem read-only without the journal and recover 960G of data. It gave all kinds of errors about various directories not existing and so forth, but I was able to get some stuff, which gave me some hope! I then ran e2fsck again and it had to recreate the root inode and gave a massive list of corrections to group counts; I accepted the root inode creation and said no to everything else, leaving a completely empty filesystem. I re-ran it and said yes to all questions, with the same result, but now a "clean" yet empty filesystem. extundelete gives me 0 recoverable inodes found. And now I'm stuck again; I can't come up with any other methods besides dropping to something like photorec, which will give me an absolute mess given how large the filesystem was. I'm willing to re-copy the image from the original array and start over, if I can get any suggestions or ideas on a way to get more of my files back.
    I wish I could give more detailed logs of the commands that have been run, but the output has long scrolled past except for what gets logged to syslog, and my memory is not as detailed due to the timeframe this has occurred over. Any help is greatly appreciated!

    Read the article

  • How to protect files/folders from being copied/moved/deleted/cut on windows

    - by Sean Lee
    I need to share data on an external drive that will be handed over to someone else, and I would like to achieve the following: (1) protect all the files and folders from being copied/moved/deleted/cut on a Windows system; (2) files are browsable and media is playable, but everything stays inside the drive; (3) the same behavior if the drive is plugged into a Linux system (or having it not accessible there at all is fine too). How can I do this without using paid software?

    Read the article

  • Files being rolled back on server 2008 R2

    - by Gary
    I've got a weird situation occurring on my dev server. Randomly, and for no reason that I can see, files are being rolled back to an earlier version! This has happened twice now - the first time I assumed I'd done something wrong somewhere, restored the file I was after from a backup and gave it no further thought. The second time, just now, it happened to a folder containing just a few files that I was working on - suddenly all the changes I'd made over the last day or two were gone! (I know, commit more often, ay?). Thankfully I have a daily backup and so have recovered my files, but I'm very concerned about this and need to understand how and why it's happened. The only change made between file states is that I enabled sharing on a completely unrelated folder. I'm developing an app on Railo, which is running on Tomcat. The code was all fine and in c:\websites\appname. I shared the 'Railo' folder, which is c:\railo in order to allow my IDE access to the logs generated by the app (contained in c:\railo\tomcat\logs) and when I reloaded the app, the code was reverted to a few days ago! I'm at a complete loss here. Can anyone point me in the right direction? Thanks.

    Read the article

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario: our main file server is a Linux machine. On the LAN, users simply access the files using SMB. Each user has an account on the file server and his/her own access rights. User accounts are simple passwd/group security accounts, not NIS/LDAP.
    The problem: We want to give users (or at least some of them, say if they belong to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution; maybe something that allows the user to access a mapped drive would be ideal. A web-oriented solution is also good, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must of course, and users would be expected to log in. The connection to the server should also be encrypted. Does anyone have pointers to neat solutions? Any experiences?
    Edit: The client machines are Windows only.

    Read the article

  • Transparently cache files from a network drive in Linux

    - by Vadim
    We have a Linux server that reads files from a network drive and processes them. In a common scenario, a user will log in and access the same files over and over again. The size of the files varies, but the larger ones can be around 50+ MB. The files seldom change. I was wondering if it's somehow possible to transparently cache the files. I don't want to (and can't) change the program that reads the files, nor do I control the protocol by which the files are accessed. I just want something to detect that I access a certain path, copy the file locally (if needed) and then read the file from the local drive. I've read about bcache but can't figure out if it's what I need. Do you have any suggestions? Thanks, Vadim.
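
    If the share happens to be NFS, the kernel's FS-Cache layer together with the cachefilesd daemon does roughly this transparently; a sketch for a Debian-ish system (package names, paths and the export below are assumptions):

      apt-get install cachefilesd
      # set RUN=yes in /etc/default/cachefilesd and start the daemon, then:
      mount -t nfs -o fsc fileserver:/export /mnt/share   # fsc enables persistent local caching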

    Read the article

  • a brand new FS based on a database without using fuse

    - by Devrim
    Hi all, to serve millions of files out of a single directory, to be able to connect to a drive from hundreds of endpoints, and for some other reasons (to avoid gluster/NFS/all FS-based networking solutions), I want to evaluate the possibility of making a filesystem that's based on MongoDB (or any other database). Basically, it works like fusefs: every single file is kept in Mongo GridFS. In theory, I do mount mongodbfs /mountPoint mongodb://localhost, and then when I say touch /mountPoint/test.txt, this file is inserted into MongoDB. This FS will also store uid/gid and perms with the file, we can throw hundreds of servers at it, and no useradd will be necessary. I'm not planning to include all the features of a FS, just the ones we need. My question is: how do I start my quest in finding resources, books, links, people, and developers who'd help me implement this, at least as a proof of concept? Is it feasible? What should I expect as a timeline for such an undertaking? Please only think about a gazillion small files and folders.

    Read the article
