Search Results

Search found 2515 results on 101 pages for 'distributed filesystems'.

  • Requiring multiple group memberships in order to access a folder

    - by David
    How would I go about creating a file or folder that requires a user to be a member of two or more different groups in order to read/write to it? For example, say I run an auto repair shop, I have a folder called "Repair History", and I only want people to access it if they are members of BOTH the "Mechanics" and "Cashiers" groups. This would be an AND requirement instead of an OR requirement, which seems to be the norm. I know we could create a separate group just for accessing the folder, but this is more of an academic question, since it pertains to a different security structure that we are creating. I'm not sure if Windows security handles it, but I'm wondering how it would be done either way. (A sketch of one approach follows this entry.)

    Read the article
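
    One way this is often emulated on Linux, as a minimal sketch with hypothetical paths and group names: nest the folder inside a parent directory that only the first group may traverse, so reaching the inner folder requires membership in both groups.

        # hedged sketch, Linux: nesting turns two group checks into an AND --
        # "mechanics" is needed to traverse the outer dir, "cashiers" to use the inner one
        mkdir -p /srv/repair/history
        chgrp mechanics /srv/repair
        chmod 750 /srv/repair                  # only root and group "mechanics" may enter
        chgrp cashiers /srv/repair/history
        chmod 770 /srv/repair/history          # and only group "cashiers" may read/write here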

  • Allowing access to company files across the Internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario. Our main file server is a Linux machine; on the LAN, users simply access the files using SMB. Each user has an account on the file server and his/her own access rights. User accounts are simple passwd/group security accounts, not NIS/LDAP.
    The problem: we want to give users (or at least some of them, say those who belong to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution; something that lets the user access a mapped drive would be ideal. A web-oriented solution is also good, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must, of course: users would be expected to log in, and the connection to the server should be encrypted. Does anyone have pointers to neat solutions (one possible setup is sketched after this entry)? Any experiences?
    Edit: the client machines are Windows only.

    Read the article
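
    For Windows-only clients, one commonly suggested approach is WebDAV over HTTPS, which Windows can map as a drive letter. A minimal sketch, assuming Apache with mod_dav/mod_dav_fs and mod_ssl enabled; the paths and the auth file are illustrative:

        # Apache config fragment (hedged sketch)
        DavLockDB /var/lock/apache2/DavLock
        Alias /files /srv/files
        <Location /files>
            DAV On
            SSLRequireSSL
            AuthType Basic
            AuthName "Company Files"
            AuthUserFile /etc/apache2/webdav.passwd
            Require valid-user
        </Location>

    Clients could then map it with something like net use Z: https://server.example.com/files (the Windows WebClient service must be running).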

  • Moving Exchange .EDB and .STM files to another partition

    - by Jorge Fernandez
    I'm trying to move my Exchange mailbox store to a new partition, and I keep running into an error message saying: "cannot copy insufficient system resources exist to complete the requested service." The server is a Dell PowerEdge 2850 with dual Xeon processors @ 3.00 GHz and 4 GB of RAM, running Windows Server 2003 R2 SP2 with Exchange 2003 Standard. The store is around 55 GB. Any ideas? I want to get Exchange onto its own partition, since I need to free up some space on the partition it's currently on.

    Read the article

  • JFFS2 poor mount performance

    - by Marcin Polkowski
    I run multiple ARM boards with Debian Linux installed. Each board is equipped with 512 MB of NAND flash. I've observed that after ~3 months of continuous running, boot time increased significantly: it now takes over 3 minutes to mount the filesystem (JFFS2). The system was using about 35% of the available storage, so I removed unnecessary files (getting down to ~18%), but this didn't change anything. Then I realized that my software produces directories that are left empty, so I removed ~500 empty, unnecessary dirs. This didn't help either. After the system starts, I see the JFFS2 garbage collector (jffs2_gcd_mtd4) running and occupying over 90% of the CPU. Now my question: is there a way to "optimize" a JFFS2 filesystem for better performance and faster booting (my system has a limited time to boot up)? It would be great if this optimization could be done remotely, as I have no physical access to the boards. (One mount-time option is sketched after this entry.)

    Read the article
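
    One JFFS2-specific lever aimed squarely at slow mounts is erase-block summaries: with summary support enabled in the kernel, the mount-time scan reads a small per-block summary node instead of scanning every block. A hedged sketch of rebuilding an image with summaries using mtd-utils; sizes and names are illustrative, and the image still has to be reflashed:

        # build a JFFS2 image from a root tree, then append summary nodes
        mkfs.jffs2 -r rootfs/ -e 128KiB -o rootfs.jffs2
        sumtool -i rootfs.jffs2 -o rootfs-summed.jffs2 -e 128KiB
        # the running kernel must have CONFIG_JFFS2_SUMMARY=y for this to help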

  • All NTFS hard links damaged: where are hard links stored, and how can they be recovered?

    - by String Xu
    This is Windows 7 x64 SP1 on an NTFS file system. All hard links within the C:\Windows\System32 folder disappeared, and Windows can't boot, because even the OS loader, C:\Windows\System32\boot\Winload.exe, is gone. Nevertheless, the original files are still located in the corresponding C:\Windows\winsxs folders. After booting into the Recovery Environment and copying a Winload.exe (x64) from another folder, Windows gave an error stating that "ntoskrnl.exe is corrupted or missing...its file digital signature cannot be verified". When trying to boot into Safe Mode, the message above was shown after a screen prompting "Loaded \Windows\system32\config\system". Because smss.exe is not yet loaded at this early boot stage, there are no dumps or logs. Based on my study, ntoskrnl.exe depends on the following files: C:\Windows\System32\PSHED.DLL, C:\Windows\System32\hal.dll, C:\Windows\System32\kdcom.dll, C:\Windows\System32\clfs.sys, and C:\Windows\System32\ci.dll. All of those files were copied from their corresponding folders, and their MD5 hashes match a well-operating Windows 7 x64 SP1, but the boot error is still the same: "ntoskrnl.exe is corrupted or missing..." Background: before the reboot, a Windows update was in progress. Then something unknown happened, and almost all processes failed to run, including the Windows Task Manager, taskmgr.exe. After mounting the hard disk in another computer, it appears that all hard links within the C:\Windows\System32 folder are gone. I tried several data recovery tools, but none of them could find the disappeared NTFS hard links. So the question is: where is the information about those hard links stored, and how can it be recovered? Does it depend on some Windows service, or is it stored in the registry? (A manual-repair sketch follows this entry.)

    Read the article
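
    If the winsxs payloads really are intact, the missing links can in principle be recreated by hand from the Recovery Environment command prompt. A hedged sketch; the winsxs component folder name varies per system and is left as a placeholder, and drive letters inside WinRE may differ:

        :: fsutil hardlink create <new link> <existing file>
        fsutil hardlink create C:\Windows\System32\boot\winload.exe C:\Windows\winsxs\<component-folder>\winload.exe

    As far as I know, hard links are not service or registry state: each one is a directory entry pointing at the same NTFS MFT file record, which is why recovery tools that hunt for deleted file data don't find them.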

  • Linux web server shared hosting file errors

    - by dfilkovi
    I'm using a shared hosting service for my website and have problems with files from time to time. First, one of my PHP files was missing a part of its code (nothing to do with hackers; just a random piece of code was gone). Then, after some time, a value inside a MySQL table was also missing a part; then a whole table column disappeared; after that, a whole file on my site disappeared; and lastly, some code disappeared from a file again. My hosting service says it has nothing to do with them, but that seems absurd. How can this happen? No hacker attack would do such a thing; I believe it's some kind of disk corruption or bad backup. Anyone have any ideas? (A way to gather evidence is sketched after this entry.)

    Read the article
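
    Whatever the cause turns out to be, a checksum baseline turns the next incident from an argument into evidence. A minimal sketch, assuming shell access and hypothetical paths:

        # record a checksum for every file now...
        find ~/public_html -type f -exec sha256sum {} \; > ~/baseline.sha256
        # ...and later report anything that changed
        sha256sum -c ~/baseline.sha256 2>/dev/null | grep -v ': OK$'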

  • How do you create virtual folders from saved searches?

    - by Jérôme Radix
    I would like to have, on Unix-like platforms, the same functionality as the Windows 7 Library folders (aka virtual folders) you see in Windows Explorer. GNOME Nautilus does that kind of virtual folder through saved searches, but I want a system-wide solution, not a GNOME-wide one. Is there a tool that creates virtual folders from the concatenation of multiple search queries (the result of multiple find commands)? The solution should index files for better performance, and you should be able to define the default folder for copy operations. I assume a solution to this kind of problem would use FUSE, but I can't see a complete solution to this task among FUSE applications.

    Read the article

  • File locked / read-only

    - by oshirowanen
    On a networked computer, I have a file which comes up as read-only because someone else supposedly has it open. This is not true: the file is stored locally on the computer, and it is not being used by anyone else. I can log in to the same computer as a different user and the file opens fine; I only get the issue with one particular user account. Other than deleting that account/profile and creating it again, how can I unlock this file? Double-clicking on the file gives me a message saying "The file is locked for editing by another user, or the file (or the folder in which it is located) is marked as read-only, or you specified that you wanted to open this file read-only." I don't think the folder is locked, because I can use other files in that folder fine; it's just this one file causing the issue. I know that only one user is using this file, as the file is on his C: drive and the same file works fine if he logs off to let another user log in. (One thing to rule out is sketched after this entry.)

    Read the article
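
    One mundane cause worth ruling out: if this is an Office document, a stale hidden "owner file" (a ~$-prefixed file left beside the document after a crash) produces exactly this message. A hedged check from a command prompt, with a hypothetical path:

        :: list hidden Office lock files next to the document, then delete the stale one
        dir /a:h "C:\path\to\folder\~$*"
        del /a:h "C:\path\to\folder\~$lockedfile.xlsx"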

  • Zabbix not getting data for one filesystem

    - by Dennis Williamson
    I have Zabbix monitoring disk space for several volumes on several servers. It works fine on all of them except for one volume on one server, which always reports 0. However, when I run ./zabbix_get -s localhost -p 10050 -k 'vfs.fs.size[/home, free]' locally on the machine in question, it gives me the correct, non-zero size, which matches the output of df. How can I go about troubleshooting and correcting this problem? (Two checks are sketched after this entry.)

    Read the article
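
    Two things worth checking, sketched below on hedged assumptions: a passive agent only answers hosts listed on the Server= line of zabbix_agentd.conf, and the item key configured in the Zabbix frontend should match what you test exactly (whitespace inside the key's parameters can matter, so compare the frontend key against 'vfs.fs.size[/home, free]' character by character):

        # run this from the Zabbix *server*, not from the monitored host itself
        zabbix_get -s monitored-host -p 10050 -k 'vfs.fs.size[/home,free]'
        # on the agent, confirm the server's address is allowed
        grep '^Server=' /etc/zabbix/zabbix_agentd.conf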

  • Recursively apply ACL permissions on Mac OS X (Server)?

    - by mralexgray
    For years I've used this strong-armed duo: sudo chmod +a "localadmin allow read,write,append,execute,delete,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown" and sudo chmod +a "localadmin allow list,search,add_file,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown" for what I figured was a recursive, all-encompassing, whole-volume go-ahead granting a user (localadmin) each and every privilege available. Nice when I, localadmin, want to "do something" without a lot of whining about permissions, etc. The beauty is, this method obviates the need to change ownership, group membership, or the executable bit on anything. But is it recursive? I am beginning to think it's not. If so, how do I do THAT? And how can one check something like this? Adding this single user to the ACL doesn't show up in the Finder, so... Alright, cheers. (A sketch follows this entry.)

    Read the article
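
    As far as I can tell, chmod +a applies only to the paths given on the command line; recursion needs -R, and ls -le shows whether the ACL entries actually landed. A hedged sketch with a hypothetical volume path:

        # -R walks the whole tree; ls -le prints each entry's ACL for verification
        sudo chmod -R +a "localadmin allow read,write,append,execute,delete,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown" /Volumes/Data
        ls -le /Volumes/Data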

  • Log all files saved on an XP system

    - by Jason Taylor
    I have a user who frequently saves items to places he then forgets (or he even forgets to save at all). Usually a simple search finds them, but not always. Is there any way to log/track the most recently saved files? It would be great to see the last files actually saved, since the recent-documents feature is unreliable when he constantly opens documents while searching for the file he just saved. Alternatively, any ideas on how to get this situation under control?

    Read the article

  • What is the best vfat driver for FUSE?

    - by Vi
    The FUSE filesystem list shows a FuseFat and a FatFuse. One is a 404, and the other is old, not buildable, and probably depends on glib. Right now I'm using mountlo for the task (mounting USB drives in a generic way, without root access or suid tricks, apart from fusermount itself), but it looks too big for such a task. Is there a good vfat FUSE driver? (A non-FUSE fallback is sketched after this entry.)

    Read the article
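
    Not a FUSE answer, but if a one-time root edit is acceptable, plain mount already allows unprivileged vfat mounts via the user option. A hedged fstab sketch; the device, mount point, and ids are illustrative:

        # /etc/fstab -- "user" lets any user run: mount /media/usb
        /dev/sdb1  /media/usb  vfat  user,noauto,uid=1000,gid=1000  0  0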

  • Does moving a file outside NTFS lose data in alternate data streams?

    - by jay
    I have a lot of files on a machine running Windows Server 2008 which I want to move to a Fedora machine. How can I keep the attributes stored in, for example, media files (date taken, rating, length, etc.) when transferring them outside the realm of NTFS alternate data streams? I'm aware that similar metadata exists in other file systems, but what happens when you move these files, and what's the best way to retain the metadata on other file systems? (A quick way to see what is at stake is sketched after this entry.)

    Read the article
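
    For what it's worth, date taken, rating, and length for media files usually live inside the file itself (EXIF, ID3, and the like) rather than in alternate data streams, so they survive any copy. To see what genuinely sits in ADS before migrating, a hedged check with an illustrative path:

        :: /r lists the alternate data streams attached to each file (Vista/2008 and later)
        dir /r "D:\media"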

  • How can I increase space on a Linux filesystem?

    - by xtrimsky
    I am renting a dedicated server with Parallels Plesk on it (which I hate, so I try to use the command line). I have a filesystem that is full; "df -H" prints this:
    Filesystem             Size  Used  Avail  Use%  Mounted on
    /dev/md1               4.0G  4.0G   361k  100%  /
    /dev/mapper/vg00-usr   4.3G  1.4G   3.0G   32%  /usr
    /dev/mapper/vg00-var   4.3G  2.8G   1.6G   64%  /var
    /dev/mapper/vg00-home  4.3G  4.4M   4.3G    1%  /home
    none                   1.1G   24M   1.1G    3%  /tmp
    tmpfs                  1.1G     0   1.1G    0%  /usr/local/psa/handlers/before-local
    tmpfs                  1.1G     0   1.1G    0%  /usr/local/psa/handlers/before-queue
    tmpfs                  1.1G     0   1.1G    0%  /usr/local/psa/handlers/before-remote
    tmpfs                  1.1G     0   1.1G    0%  /usr/local/psa/handlers/info
    tmpfs                  1.1G     0   1.1G    0%  /usr/local/psa/handlers/spool
    The server I'm renting has a 1 TB hard drive. Why are these volumes so small, and how can I increase my storage? I'm a beginner with Linux. Thank you. (An LVM sketch follows this entry.)

    Read the article
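
    The vg00-* volumes are LVM logical volumes, so if the volume group still has free extents they can be grown online; the full / filesystem, however, is /dev/md1 (software RAID, not LVM), so the quicker win there is moving data off / into /var or /home. A hedged sketch for the LVM side; the size is illustrative:

        vgdisplay vg00                          # check "Free  PE / Size" for spare room
        lvextend -L +50G /dev/mapper/vg00-var   # grow the logical volume
        resize2fs /dev/mapper/vg00-var          # grow the filesystem to match, online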

  • ext3: maximum recommended partition size / handling large partitions

    - by Hansi
    Hi! I would like to do an encrypted install of Ubuntu on a 2-terabyte drive (i.e., using LUKS/dm-crypt). To avoid having to type in passwords too often, the partitioning scheme will be 50 GB for / and about 1 TB for /home (and the rest for Windows 7), just for clarity. Even though LVM is by now regarded as stable, I don't want to introduce unnecessary layers of complexity and more room for error. For both Ubuntu partitions I want encrypted ext3 with ext3's default block size (4k?). Thoughts: when I look at most partition schemes here on this site or elsewhere, I usually see at most about 400 or 500 GB partitions (maybe I haven't seen enough). There may be different reasons for this, but is reliability an issue here? Are larger ext3 partitions, say about 1 TB, harder to handle for the OS, the filesystem driver, or some other layer? If I make the partition too large, will it be harder to repair in case of corruption? Are there default ext3 settings that I should change for 1 TB partitions (a couple of candidates are sketched after this entry)? Question: what maximum partition size for ext3 do you recommend, and why? Thanks!

    Read the article
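
    For what it's worth, a couple of knobs commonly adjusted on large ext3 volumes, sketched with a hypothetical dm-crypt device name; whether they are appropriate depends on the workload:

        # reserve 1% for root instead of the default 5% (saves roughly 40 GB on 1 TB);
        # dir_index speeds up lookups in large directories
        mkfs.ext3 -b 4096 -m 1 -O dir_index /dev/mapper/home_crypt
        # optionally disable count/interval-based periodic fsck -- weigh the risk
        tune2fs -c 0 -i 0 /dev/mapper/home_crypt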

  • Copy a single file from the main directory recursively into all directories within

    - by chris
    I'm on a dedicated server running CentOS, and on this server I have 5000+ directories inside one main directory. In the main directory I have an index.php. I would like to copy this index.php into all 5000+ directories, but the only way I know how is doing it manually. Is there a way through the command line, using something like cp, to make it work from the main directory, copying the file all the way down through all the directories and their subdirectories? (A sketch follows this entry.)

    Read the article
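
    A hedged sketch of the usual find-based answer, run from inside the main directory; the -n flag keeps any existing index.php from being overwritten (drop it to overwrite):

        # copy index.php into every directory below the current one, however deep
        find . -mindepth 1 -type d -exec cp -n index.php {} \;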

  • How to delete old pagefile.sys and hiberfil.sys on a secondary disk (old Windows install)

    - by Silvermist
    A while ago I swapped my main hard disk for an SSD. Now the old disk is used as a secondary hard disk, and my OS is a fresh Windows install on the main SSD. Nevertheless, there are still a huge pagefile.sys and hiberfil.sys on that secondary hard drive. Those are not the ones used by the current Windows, as those exist on C:. I tried to attrib -s -h them, but it failed with "Access denied". Any idea how to delete those old, unused system files and reclaim the space? (A sketch follows this entry.)

    Read the article
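
    The usual sequence is to take ownership, grant yourself full control, and then delete. A hedged sketch from an elevated command prompt, assuming the old disk is D::

        takeown /f D:\pagefile.sys
        icacls D:\pagefile.sys /grant Administrators:F
        del /a D:\pagefile.sys
        :: repeat the same three steps for D:\hiberfil.sys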

  • Mount Docker container contents in host file system

    - by dflemstr
    I want to be able to inspect the contents of a Docker container (read-only). An elegant way of doing this would be to mount the container's contents in a directory. To be clear, I'm talking about mounting the contents of a container on the host, not about mounting a host folder inside a container. I can see that there are two storage drivers in Docker right now: aufs and btrfs. My own Docker install uses btrfs, and browsing to /var/lib/docker/btrfs/subvolumes shows me one directory per Docker container on the system. This is, however, an implementation detail of Docker, and it feels wrong to mount --bind these directories somewhere else. Is there a proper way of doing this, or do I need to patch Docker to support these kinds of mounts? (An export-based alternative is sketched after this entry.)

    Read the article
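
    Not a mount, but docker export provides a supported, storage-driver-agnostic read-only view of a container's filesystem. A hedged sketch:

        # stream the container's filesystem as a tar: list it, or unpack a snapshot
        docker export <container-id> | tar -tf - | less
        mkdir /tmp/rootfs && docker export <container-id> | tar -xf - -C /tmp/rootfs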

  • How to allow users to transfer files to other users on linux

    - by Jon Bringhurst
    We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files (sometimes 1 PB) controlled by traditional Unix permissions (ACLs usually aren't available or practical due to the specialized nature of the filesystem). We currently have a program called "give", a suid-root program that allows a user to "give" a file to another user when group permissions are insufficient. So, a user would type something like the following to give a file to another user: > give username-to-give-to filename-to-give ... The receiving user can then use a command called "take" (part of the give program) to receive the file: > take filename-to-receive The permissions of the file are then effectively transferred over to the receiving user. This program has been around for years, and we'd like to revisit things from a security and functional point of view. Our current plan of action is to remove the bit rot in our current implementation of "give" and package it up as an open source app before we redeploy it into production. Does anyone have another method they use to transfer extremely large files between users when only traditional Unix permissions are available? (One root-free pattern is sketched after this entry.)

    Read the article
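
    One root-free pattern sometimes used alongside tools like give is a non-listable drop directory plus a shared group, sketched below with hypothetical names. The trade-offs are real: there is no ownership transfer without root, and the receiver ends up copying the data, which may matter at these sizes:

        # one-time setup: writable by all, listable by none
        mkdir /scratch/dropbox && chmod 733 /scratch/dropbox
        # giver: restrict the file to the shared group, then move it in (a rename, no copy)
        chgrp projectx bigfile && chmod 640 bigfile
        mv bigfile /scratch/dropbox/bigfile-for-bob
        # receiver (a member of "projectx"): fetch it under the agreed name
        cp /scratch/dropbox/bigfile-for-bob ~/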

  • MS Windows issue - "Filename or extension is too long"

    - by Daniel
    I run Microsoft Windows on a few of my machines. I don't know if many people know about this issue in the OS, but you can't have very long file paths; from what I know, Linux allows longer ones, and I have never run into this issue on my Linux machines. Anyway, I run into problems whenever I copy folders and files to backup drives. I back up my data manually, finding and renaming files whose names are too long, which is very tedious. Is there a software tool that shortens folder or file names that are found to be too long on Windows? I have drive-image duplication software which does the job, but in a way that I don't like; plus, moving files can become a hassle when the names are too long to copy. (A copy-side workaround is sketched after this entry.)

    Read the article
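
    For the backup-copy side specifically, robocopy generally copes with paths beyond the classic MAX_PATH limit, so it may sidestep the renaming entirely. A hedged sketch with illustrative paths:

        :: /E copies subdirectories including empty ones; run with /L first as a dry run
        robocopy C:\data E:\backup /E /L
        robocopy C:\data E:\backup /E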

  • Repartition hard drive using Mac OS X, keep existing data

    - by Jonny
    I got a 1 TB disk a year or so ago and loaded it with some hundreds of GB of data. I somehow neglected to check the file system, which turns out to be FAT32 and thus unable to hold files bigger than 4 GB. So now I want to change it without deleting the data. I thought I'd just make a new partition in the so-far-unused space, copy/move the data into the new partition, delete the old FAT32 partition, and then make the new partition bigger again... or just make a few more partitions. The critical step: can I create that new partition without ruining the data? The data should have been written fairly sequentially from the start of the disk, but what do I know... that's why I'm asking. Can I safely use Disk Utility for this? Any recommended file system?

    Read the article

  • optimal folder structure for storing 100k files on a USB drive

    - by cherouvim
    I need to store 100k files (around 40 GB) on a USB drive. Each file has a unique int id (e.g. 45000). Option one is to put all files in a single folder:
    root/1.pdf
    root/2.pdf
    root/3.pdf
    ...
    root/567.pdf
    root/568.pdf
    root/569.pdf
    ...
    root/99998.pdf
    root/99999.pdf
    root/100000.pdf
    Option two is to create a [1-9][0-9]* folder hierarchy based on that id:
    root/1/file.pdf
    root/2/file.pdf
    root/3/file.pdf
    ...
    root/5/6/7/file.pdf
    root/5/6/8/file.pdf
    root/5/6/9/file.pdf
    ...
    root/1/0/0/0/1/file.pdf
    root/1/0/0/0/2/file.pdf
    ...
    root/9/9/9/9/9/file.pdf
    root/1/0/0/0/0/0/file.pdf
    Which option will scale better? I understand that the second option requires tons of folders, but each folder will contain at most 10 folders and 1 file. Maintenance will not be an issue, since everything will be controlled by an application. Note that this is a USB drive on Linux; based on the above, I'd also like to know whether I should go with FAT32 or NTFS. (A path-derivation sketch follows this entry.)

    Read the article
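
    For what it's worth, the per-digit layout of option two is easy to derive mechanically; a minimal bash sketch (names illustrative) that turns an id into its nested path:

        id=45000
        nested=$(echo "$id" | sed 's|.|&/|g')   # "4/5/0/0/0/"
        mkdir -p "root/$nested"
        cp "$id.pdf" "root/${nested}file.pdf"   # lands at root/4/5/0/0/0/file.pdf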
