Search Results

Search found 70915 results on 2837 pages for 'file permissions'.

  • A network share folder is invisible to users

    - by Myrddin Emrys
    I have a network share folder whose permissions I was recently cleaning up. I removed the four individual names from the folder's access permissions, added a new (Universal) security group with standard Read/Write permissions on that folder, and then added those four people to the group. However... now nobody can see the folder. The users can see the other 9 folders in that shared drive, but the 10th is missing. I cannot find any security permission on the parent folder or on the folder itself that would make it invisible to anyone, regardless of whether they have permission to open it or edit files within.

  • Anonymous file sharing without login window, from Windows 7 server to XP clients

    - by Niten
    I'm trying to provide machines on a small LAN with read-only, anonymous access to files shared from a Windows 7 workstation (let's call it WIN7SVR). In particular, I don't want clients to have to deal with a login window when they navigate to, e.g., \\WIN7SVR in Windows Explorer, but we do not have a domain, and synchronizing accounts between the server and clients would be intractable. There are both Windows 7 and Windows XP clients that need access to these shares. I got this working for Windows 7 clients by just enabling the Guest account on WIN7SVR and setting appropriate share permissions. Other Windows 7 machines automatically try logging in as Guest, it seems, so their users don't have to deal with the login window. The problem is with the XP clients: they can access the server if the user enters "Guest" in the login window, but I don't want users to have to do that. From what I gather, in my limited understanding of Windows file sharing, this boils down to granting null sessions access to file shares on WIN7SVR, but I've had no success so far on that front. I've tried all of the following in the local group policy editor on the Windows 7 server:

    - Set "Network access: Let Everyone permissions apply to anonymous users" to Enabled
    - Set "Network access: Restrict anonymous access to Named Pipes and Shares" to Disabled
    - Added the names of the corresponding shares to "Network access: Shares that can be accessed anonymously"
    - Added "ANONYMOUS LOGON" to "Access this computer from the network" under User Rights Assignment

    Any advice would be highly appreciated; I'm mostly a Unix guy, so I feel somewhat out of my league with Windows file sharing. I do understand that anonymous access to file shares isn't generally ideal from a security standpoint, but it's the most practical solution for us in this case, and access to our network is well enough controlled that share-level security isn't a concern.

  • fstab and cifs mounting, possible to store authentication information outside of fstab?

    - by tj111
    I am currently using cifs in /etc/fstab to mount some network shares that require authentication. It works well, but I would like to move the authentication details (username/password) out of fstab and into a separate file that I can chmod 600 (fstab itself can cause problems if I change its permissions). This is a many-user system, and I don't want these credentials to be readable by all users. Is it possible to go from:

        //server/foo/bar /mnt/bar cifs username=user,password=pass,r 0 0

    to:

        //server/foo/bar /mnt/bar cifs <link to credentials>,r 0 0

    (or something analogous to this)? Thanks.
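
    For reference, mount.cifs supports exactly this through its credentials= option, which points at a root-readable file holding the username and password. A minimal sketch, with /etc/cifs-creds as an assumed path:

        # /etc/fstab -- reference the credentials file instead of inlining them
        //server/foo/bar /mnt/bar cifs credentials=/etc/cifs-creds,ro 0 0

        # /etc/cifs-creds
        username=user
        password=pass

    Then lock the file down so only root can read it: chown root:root /etc/cifs-creds && chmod 600 /etc/cifs-creds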

  • SugarCRM CE Won't Install on Ubuntu 10.10

    - by Trenton Scott
    I have a fresh copy of Ubuntu 10.10 server with a working LAMP installation. I downloaded SugarCRM and browsed to its directory to open the installer (via Firefox). The installer appears fine, I accept the license agreement, and it proceeds to check file permissions. It advises that several directories need looser permissions (chmod 766), and I adjust them accordingly. After making the changes, I click "recheck" and the page just reloads as blank (white). There are no visible errors, nothing in the server logs (Apache/PHP), and installation cannot continue. I'm able to get back to the installation tool by setting the permissions back to my defaults (0755 for directories, 0644 for files). All files/folders are owned by root and the www-data group. Any idea what's wrong?
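
    Not part of the original question, but a common first debugging step on a stock Ubuntu LAMP setup: a blank white page almost always means a PHP fatal error with display_errors switched off, so it helps to surface the error before adjusting permissions again:

        # watch Apache's error log while clicking "recheck"
        tail -f /var/log/apache2/error.log

        # or temporarily, in php.ini (restart Apache afterwards):
        display_errors = On
        error_reporting = E_ALL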

  • Monitor or log directory permission changes?

    - by Myles
    I'm having an issue on a cPanel shared server running CentOS 5 where a few directories under the public_html folder keep getting changed from 755 to 777. The customer says they are not changing them, and I'm wondering if there is a way to monitor these specific directories to find out who or what is changing the permissions. I have looked into using auditctl, but after testing it and changing the permissions myself I don't see anything in the logs, so I'm not sure if I'm doing it right or if it's even possible. Does anybody have any suggestions or ideas on how I could figure out what is changing the permissions? Thanks!
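
    For what it's worth, a minimal auditctl sketch for this exact case, assuming auditd is installed and running (path and key name are placeholders):

        # watch for attribute changes (chmod/chown) on the directory
        auditctl -w /home/someuser/public_html/somedir -p a -k permwatch

        # after the permissions flip again, see which user/process did it
        ausearch -k permwatch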

  • Rsync: execute permission required

    - by user651488
    I'm using rsync between two servers to transfer files. The problem is that some files are not transferred. I get this error: rsync: readlink "/var/www/index.html" failed: Permission denied (13) So I checked permissions on the server and, after running some tests, noticed that a file is transferred only if its permissions are R-W; if the file's permissions are R-- (read-only), rsync can't download it!? Command: /usr/bin/rsync -avzr -e "/usr/bin/ssh -i /home/replication/thishost-rsync-key" [email protected]:/var/www/index.html ./ Is this a bug in rsync? I can't find any information about this problem. Thanks for your help. Environment: Debian Etch, kernel 2.6.30, rsync 2.6.9 (protocol version 29)
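
    This is likely not a bug: the remote rsync process runs with the privileges of the SSH login account ("replication" here), so it can only transfer files that account is allowed to read. A quick hedged check, with the server name as a placeholder:

        # confirm what the rsync login account can actually see
        ssh -i /home/replication/thishost-rsync-key replication@server \
            'id; ls -l /var/www/index.html'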

  • afp/smb transfers cap at 2 megabytes/sec over wireless-N

    - by RD.
    I want to transfer files between two Mac computers. The network is wireless-N, and both computers have wireless-N modules in them. The problem is that when I transfer files between them via file sharing (AFP), the network speed caps at 2 megabytes/sec. Just downloading files from the internet I can get faster speeds, so this isn't a constraint of my Wi-Fi bandwidth; it appears to be a constraint of the protocol being used. My Wi-Fi is set to 130 Mbit/s, so I should see real-world transfer speeds of around 12-16 megabytes/sec. I ran this command on both computers: sudo sysctl -w net.inet.tcp.delayed_ack=0 which is supposed to lower TCP overhead, but it did not help. How can I get the speed I am expecting?

  • Prevent the "System" process from locking my files in a shared folder.

    - by Kamarey
    I have an application that creates files to be processed by SQL bulk insert. The files are created in a shared folder on another server and then picked up from there by SQL. The problem is that sometimes SQL returns an error saying the file is locked by another process and can't be accessed. The process that locks these files is the "System" process. It looks like the files get locked because they are in a shared folder, but I'm not sure. Using software to unlock the files manually is not an option, as the whole bulk process is automatic. The question is: why does the "System" process lock these files, and is there a way to prevent it?

  • Only allow root to change filesystem

    - by Uejji
    The VPS I manage uses a simple hard-link rsync archive for daily backups, saved to a loop file. This is great, because each backup only takes up as much space as what has changed each day, and all user/group permissions are kept. I would like to give users direct access to their home directories in each backup, but I'm worried about intentional or accidental destruction of backup data: as it stands now, users can actually change, destroy or add to backed-up data they originally owned. I've been looking for a way to mount this filesystem similar to an ro mount option, but something that would still allow rw access to root, and I've had absolutely no luck. In other words, I want users to be able to view and copy their backed-up data without actually being able to change it, and have that data keep its original permissions. I've got no real preference as far as filesystem goes, as long as it's a standard Unix filesystem that can preserve permissions, support hard links, and deny write access to users without actually stripping the w permission from everything.
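
    One standard trick that seems to fit, sketched under the assumption that the backup image is mounted read-write somewhere only root can reach: expose a second, read-only bind mount to the users. Writes through the read-only path are denied for everyone (root included), but root and the backup job keep writing through the original mount point, and permissions and hard links are untouched. Paths are placeholders:

        # root / the backup job uses the original, writable mount
        mount -o loop /srv/backup.img /srv/backups

        # users browse a read-only view of the same tree
        mount --bind /srv/backups /backups-ro
        mount -o remount,ro,bind /backups-ro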

  • How to extract hhp file from a chm file

    - by Sam
    Hi, I have an A.chm file for my Windows application, and it works as expected. When I decompile it using HTML Help Workshop I get a set of HTML files, a .hhc file and a .hhk file. I then compile another file, B.chm, from these extracted files without changing any of them. (I want to add more HTML content to this file, but it looks like I am losing some information in the decompile step.) The output file I get is 72K, whereas the original file was 75K. B.chm's contents look fine when viewed in the CHM viewer, but the behavior is lost when it is used with the application. After reading around, I found that if the .hhp can be extracted from a .chm file, then the file can be reconstructed as-is without losing any mappings or aliases. Is that true? How can I extract the .hhp file from a .chm file? Thanks, Sam

  • Using the Windows Explorer Context Menu to reset Umbraco Directory Permissions

    - by Vizioz Limited
    Hi All, As Umbraco matures I assume that needing to reset directory permissions will become a thing of the past, but at the moment it is still something I often find myself doing when I copy sites between machines. As it's 4:30am I thought: there must be a better way than having to open up a DOS prompt, navigate to a directory and then run a batch file, passing in the IIS root folder location. Well... there is :) I googled for how to add a command to the context menu within Windows Explorer. I found a way of doing this for XP, but it seems the functionality was removed from Windows 7. However, I found a very neat freeware application called File Menu Tools which works perfectly! I have now added a command to my context menu that enables me to right-click an IIS site root folder and call my batch script, automatically passing in the directory. This will save me a bunch of time :)

  • VirtualBox Share permissions

    - by Eduardo Oliveira
    At the moment I'm running Ubuntu Server 12.04 on top of a VirtualBox running as a service on Windows 7 x64. I've configured the share in fstab with: E /media/E vboxsf auto,rw,uid=1000,gid=1000 0 0 The mount is OK: drwxrwxrwx 1 knoker knoker 16384 Sep 1 20:02 E but I can't change the permissions inside /media/E; chmod runs but doesn't change anything. In one specific case, /media/E/Docs/Downloads, I have: dr-xr-xr-x 1 knoker knoker 163840 Sep 1 19:30 Downloads Since I can't seem to be able to change it from Linux, I tried from Windows, and all listed users have full control... Any ideas on what is happening? Best Regards, Knoker
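
    For what it's worth, vboxsf ignores chmod from inside the guest; the effective modes are fixed when the share is mounted. The usual workaround is to set them in the fstab entry via the dmode/fmode mount options; a hedged sketch with example values:

        E /media/E vboxsf auto,rw,uid=1000,gid=1000,dmode=0775,fmode=0664 0 0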

  • How to manage permissions on a shared volume for OSX and ubuntu

    - by gentmatt
    On my Mac I'm using an unjournaled HFS+ partition to share files between OS X 10.8 and Ubuntu 12.04. It seemed like a nice idea at first, because Time Machine will automatically back up the volume in OS X, but I soon noticed that OS X and Ubuntu mess with the permissions in a way that makes things messy for me. So, in order to fully view and change files, I keep using chmod to apply permissions that allow me to fully use a document, but I don't understand why I have to keep applying the changes over and over. Is it possible to set some kind of permission permanently, so that both operating systems will respect it? I guess 777 would work, but I thought that is not a smart thing to do. And as long as 'others' does not get full access (the third seven), I see a lock icon on the file in Ubuntu.
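
    One hedged explanation and fix: the two systems fight because the numeric UIDs differ (the first OS X user is 501, while the default Ubuntu user is 1000), so each OS treats the other's files as belonging to a foreign user. Aligning the Ubuntu account's UID with the OS X one avoids the constant chmod dance; the username below is a placeholder:

        # on Ubuntu (run from another account, since the user must be logged out)
        sudo usermod -u 501 yourusername
        # fix up any files still owned by the old uid outside the home directory
        sudo find / -xdev -user 1000 -exec chown 501 {} \;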

  • Photo Gallery Software - reads from a local directory - watches folder - user and group permissions

    - by Darkflare
    Use case:

    - Photos are organised in a folder structure by date (by software such as Lightroom or Picasa).
    - I want to run a local web server to host a web gallery from it (I already know how to run LAMP etc.).
    - I want to be able to attach metadata to the photos (probably through a database not residing in the photos folder) such that they can be tagged, categorised and put into albums without affecting the original photos.
    - I want to be able to assign permissions on different albums to specific users.
    - I want the software to watch the photo source folder for changes, so that new photos are indexed, ready for applying metadata and albums.
    - I'd like the software to handle rendering of numerous file types (photo formats) as well as video formats.

    I am language agnostic (PHP, Python, heck even C#); I just want software that fulfills the requirements. The main reason I am asking here is that I am unsure what this kind of software would even be called, so Google searching is quite difficult! Thanks for reading.

  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
    I am at my wit’s end with file server clustering under Hyper-V. I am hoping that someone might be able to help me figure out this Gordian knot of a technology, which seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a solution. My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will act as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file system VM. Please note, these are each the secondary nodes of a cluster; the main nodes will be on the most powerful first machine. My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-jeebies, too bad. It has a 6TB (usable) RAID-10 array, and will (in the end) hold the primary nodes of both aforementioned clusters, but is right now holding all the VMs. That is, right now: DC01, DC02, SQL01, SQL02, FS01 & FS02. Eventually, I will be adding additional VMs to handle Exchange, SharePoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL & FS VMs are the most critical for the business). If anyone is now saying, “wait, what about a SAN or a NAS for the file servers?”, well, too bad. What exists on the main machine is what I have to deal with. I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM host of the main machine, and attached one to each file server VM. The end result is that I want FS01 to sit on the heavy lifter, along with its iSCSI “drive”, and FS02 to sit on the secondary machine with its own iSCSI “drive” there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. As such, the clustered FS will fully duplicate the contents of the iSCSI drives between each other, so that if one physical machine (or its FS VM) goes toes-up, the other has a full copy of the data on its own iSCSI drive. My problem occurs when I try to apply the file server role within Failover Cluster Manager. Actually, it is even before that -- it occurs when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), this forces each VM to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the process fails for a random disk of the pair. That is, one will come online, but the other will remain offline because it does not have the correct “owner node”. I mean, really -- WTF? Of course it doesn’t have the right owner node: both drives are showing the same node name! I cannot seem to get one drive to show up with one node name as owner, and the other drive to show up with the other node name as owner. And because both drives are not “online”, I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place!
    I’ve got more to add, but my workplace is closing for the day and I have to wrap things up; I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, with its storage on that same machine, but transparent failover in case one physical machine fails. Essentially, a failover file server that doesn’t care which machine fails: the storage contents are replicated equally on each machine. Am I even heading in the right direction?

  • SSIS DTS Package flat file error - "The file name specified in the connection was not valid"

    - by MisterZimbu
    I have a pretty basic SSIS package that attempts to read a file hosted on a share and import its contents into a database table. The package runs fine when I run it manually within SSIS. However, when I set up a SQL Agent job and attempt to execute it, I get the following error:

        Executed as user: DOMAIN\UserName. Microsoft (R) SQL Server Execute Package Utility Version 9.00.3042.00 for 64-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 10:14:17 AM
        Error: 2010-05-03 10:14:17.75 Code: 0xC001401E Source: DataImport Connection manager "Data File Local" Description: The file name "\\10.1.1.159\llpf\datafile.dat" specified in the connection was not valid. End Error
        Error: 2010-05-03 10:14:17.75 Code: 0xC001401D Source: DataAnimalImport Description: Connection "Data File Local" failed validation. End Error
        DTExec: The package execution returned DTSER_FAILURE (1). Started: 10:14:17 AM Finished: 10:14:17 AM Elapsed: 0.594 seconds. The package execution failed. The step failed.

    This leads me to believe it's a permissions issue, but every attempt I've made to fix it has failed. What I've tried so far:

    - Ran as the SQL Agent account (DOMAIN\SqlAgent): yields the same error. DOMAIN\SqlAgent has "Full Control" permissions on both the share and the uploaded file.
    - Set up a proxy account with a different account's credentials (DOMAIN\Account): yields the same error. As above, "Full Control" permissions were given over the share to that account.
    - Gave "Everyone" full control permissions over the share (temporarily!): yielded the same error.
    - Manually copied the file to a local path and tested with the SQL Agent account: worked properly.
    - Added an ActiveX script task that would first copy the remotely hosted file to a local path, and had the DTS package reference the local file: gave a completely nondescriptive (even by SSIS standards) error when trying to run the script.
    - Set up a proxy account using my own personal account's credentials: worked correctly. However, this is not an acceptable solution, as there are password policies in place on my account, and setting things up this way is bad practice in general.

    Any ideas? I'm still convinced it's a permissions issue. However, everything I've read from various searches says that giving the executing account permissions on the share should work, and that is not the case here (unless I'm missing something obscure when setting up permissions on the share).

  • Problem with squid log files

    - by Gatura
    I am using SARG to get a report on the Squid log files, and I get this result:

        /usr/local/Sarg/bin/sarg -l /usr/local/squid/var/logs/access.log
        SARG: Records in file: 0, reading: 0.00%
        [the line above is repeated several dozen times]
        sort: open failed: +6.5nr: No such file or directory
        SARG: (index) Cannot open file: /Applications/Sarg/reports/index.sort
        SARG: Records in file: 0, reading: 0.00%

    What could be the problem?
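
    Two separate symptoms are visible, noted here as hints: "Records in file: 0" means SARG parsed nothing from access.log (worth checking that Squid is writing the default native log format), and the sort failure comes from SARG invoking the obsolete "sort +6.5nr" key syntax, which modern GNU coreutils rejects. A hedged workaround for the latter is to allow the obsolete syntax via the _POSIX2_VERSION environment variable:

        _POSIX2_VERSION=199209 /usr/local/Sarg/bin/sarg -l /usr/local/squid/var/logs/access.log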

  • Expanding iSCSI LUNs (NTFS)

    - by Fatih
    I have a 4TB iSCSI LUN formatted as NTFS in Windows 2008, and I've shared this volume as a folder over SMB. When the capacity of this volume is no longer enough, I will have to add more iSCSI LUNs, but the end users must still see only the folder that I shared before. So, when I expand the NTFS volume that is currently 4TB with more iSCSI LUNs (for example, two more 4TB LUNs), will all of my data in the folder be lost if one of the LUNs fails or goes missing? I imagine that expanding an NTFS volume works like RAID 0 (striped); if it is like RAID 0, then all my data will be lost when one of the LUNs fails or goes missing. In brief, there are two questions here: 1- What happens if one of the LUNs goes missing in an expanded NTFS volume? 2- Is there another way to merge all of the iSCSI LUNs into a single folder, so that users don't see any extra folders even if I add extra iSCSI LUNs to the file server? (I'm not talking about DFS.) Regards.

  • nginx: rewrite a non-existent php-file to another php-file with all arguments

    - by at0m33
    I really need help here; I've been sitting on this for some time now and haven't figured it out. I want to accomplish a very simple task: rewrite a non-existent PHP file to another, existing PHP file, with all arguments, like this: http://example.com/nonexistent.php?url=google.com should go to http://example.com/existent.php?url=google.com I tried something like this: rewrite ^/nonexistent.php /existent.php; which doesn't work ("File not found"). But redirecting a non-existent HTML file to a PHP file like this: rewrite ^/nonexistent.html /existent.php; works. I don't want to rewrite an HTML file, but this is still confusing behaviour. I therefore also tried something like this (and some variations): rewrite ^/nonexistent.php?url=^(.*)$ /existent.php?url=$1; which is also not working (maybe the syntax is bad). Any help here? It would be very nice!
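
    Two nginx details may explain the behaviour, offered as hints rather than a verified fix: the rewrite regex is matched against the URI path only, never against the query string, and the query string is carried over to the rewrite target automatically. So the last attempt can never match, and matching ?url=... is unnecessary in the first place. A minimal sketch for the server block:

        # query args carry over automatically; "last" restarts location matching,
        # so the rewritten request is picked up by the PHP location block
        rewrite ^/nonexistent\.php$ /existent.php last;

    If "File not found" persists with this in place, it is worth checking that the PHP location's root/SCRIPT_FILENAME points at the directory that actually contains existent.php.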

  • How to create a text file from column and FTP that text file to server

    - by addi
    I have a workbook with two sheets (Sheet1 and Sheet2). On Sheet1 the user will enter the data, which populates column B; column C on Sheet2 then holds the values derived from columns A and B. On a click of a button I need to create a text file from the values in column C on Sheet2, and then upload (FTP) that file to a server. So Sheet1 will have two buttons: Button1 will save the Excel file and create the text file in the Windows temp directory, e.g. text.xls and text.prop (a text file which has all the values in column C on Sheet2). Button2 will upload (FTP) the text file (.prop) to a server. Can anyone please send me the steps and VB code to achieve the above tasks? Thanks in advance, Addi
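
    Not the VB side of the answer, but for the upload half one hedged option is to have the macro shell out to the stock Windows command-line FTP client with a command script; host, credentials and paths below are all placeholders:

        :: build a command script for ftp.exe, then run it unattended
        echo user myuser mypass> %TEMP%\ftpcmds.txt
        echo put %TEMP%\text.prop>> %TEMP%\ftpcmds.txt
        echo quit>> %TEMP%\ftpcmds.txt
        ftp -n -s:%TEMP%\ftpcmds.txt ftp.example.com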

  • iPhone file corruption

    - by sfider
    Is it possible (on iPhone/iPod Touch) for a file written like this:

        if (FILE* file = fopen(filename, "wb")) {
            fwrite(buf, buf_size, 1, file);
            fclose(file);
        }

    to get corrupted, e.g. when the app is forced to terminate? From what I know, fwrite should be an atomic operation, so when I write the whole file with one call, no corruption should occur. I could not find any information on the net that would say otherwise.
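
    For the record, fwrite makes no atomicity guarantee; a forced termination mid-write can leave a truncated file. A common crash-safer pattern is to write to a temporary file on the same volume and then rename() it over the destination, since rename is atomic on POSIX filesystems. A sketch, with the temp-path convention as an assumption:

        #include <stdio.h>

        /* Write buf to a temp file, then atomically rename() it over path.
           On any failure the original file at path is left untouched. */
        int save_file_atomic(const char *path, const char *tmp,
                             const void *buf, size_t size)
        {
            FILE *f = fopen(tmp, "wb");
            if (f == NULL)
                return -1;
            if (fwrite(buf, size, 1, f) != 1) {
                fclose(f);
                remove(tmp);
                return -1;
            }
            if (fclose(f) != 0) {  /* buffered-write errors surface here */
                remove(tmp);
                return -1;
            }
            return rename(tmp, path);  /* atomic replace on POSIX filesystems */
        }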

  • Is O_NONBLOCK being set a property of the file descriptor or underlying file?

    - by Daniel Trebbien
    From what I have been reading on The Open Group website about fcntl, open, read, and write, I get the impression that whether O_NONBLOCK is set on a file descriptor, and hence whether non-blocking I/O is used with the descriptor, should be a property of that file descriptor rather than the underlying file. Being a property of the file descriptor means, for example, that if I duplicate a file descriptor or open another descriptor to the same file, then I can use blocking I/O with one and non-blocking I/O with the other. Experimenting with a FIFO, however, it appears that it is not possible to have a blocking I/O descriptor and a non-blocking I/O descriptor to the FIFO simultaneously (so whether O_NONBLOCK is set seems to be a property of the underlying file, i.e. the FIFO):

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            int fds[2];
            if (pipe(fds) == -1) {
                fprintf(stderr, "`pipe` failed.\n");
                return EXIT_FAILURE;
            }

            int fd0_dup = dup(fds[0]);
            if (fd0_dup <= STDERR_FILENO) {
                fprintf(stderr, "Failed to duplicate the read end\n");
                return EXIT_FAILURE;
            }
            if (fds[0] == fd0_dup) {
                fprintf(stderr, "`fds[0]` should not equal `fd0_dup`.\n");
                return EXIT_FAILURE;
            }

            if ((fcntl(fds[0], F_GETFL) & O_NONBLOCK)) {
                fprintf(stderr, "`fds[0]` should not have `O_NONBLOCK` set.\n");
                return EXIT_FAILURE;
            }
            if (fcntl(fd0_dup, F_SETFL, fcntl(fd0_dup, F_GETFL) | O_NONBLOCK) == -1) {
                fprintf(stderr, "Failed to set `O_NONBLOCK` on `fd0_dup`\n");
                return EXIT_FAILURE;
            }
            if ((fcntl(fds[0], F_GETFL) & O_NONBLOCK)) {
                fprintf(stderr, "`fds[0]` should still have `O_NONBLOCK` unset.\n");
                return EXIT_FAILURE; // RETURNS HERE
            }

            char buf[1];
            if (read(fd0_dup, buf, 1) != -1) {
                fprintf(stderr, "Expected `read` on `fd0_dup` to fail immediately\n");
                return EXIT_FAILURE;
            } else if (errno != EAGAIN) {
                fprintf(stderr, "Expected `errno` to be `EAGAIN`\n");
                return EXIT_FAILURE;
            }

            return EXIT_SUCCESS;
        }

    This leaves me thinking: is it ever possible to have a non-blocking I/O descriptor and a blocking I/O descriptor to the same file, and if so, does it depend on the type of file (regular file, FIFO, block special file, character special file, socket, etc.)?
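
    A note on interpreting the test above, offered as an aside: POSIX attaches file status flags such as O_NONBLOCK to the open file description, not to the descriptor and not to the underlying file. dup() (as used in the test) makes both descriptors share one description, which is why the flag shows up on fds[0] as well. Two independent open() calls on the same file create independent descriptions with independent flags. A minimal sketch of the difference, with error handling omitted and a hypothetical FIFO path:

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            const char *path = "/tmp/flagtest.fifo";  /* hypothetical path */
            mkfifo(path, 0600);

            /* two open() calls: two independent open file descriptions */
            int a = open(path, O_RDONLY | O_NONBLOCK);
            int b = open(path, O_RDONLY | O_NONBLOCK);

            /* clearing O_NONBLOCK on b does not affect a */
            fcntl(b, F_SETFL, fcntl(b, F_GETFL) & ~O_NONBLOCK);
            printf("a: %d, b: %d\n",
                   (fcntl(a, F_GETFL) & O_NONBLOCK) != 0,   /* prints 1 */
                   (fcntl(b, F_GETFL) & O_NONBLOCK) != 0);  /* prints 0 */

            /* a dup() of a shares its description, so the flag follows a */
            int a2 = dup(a);
            printf("a2: %d\n", (fcntl(a2, F_GETFL) & O_NONBLOCK) != 0);  /* prints 1 */

            close(a); close(a2); close(b);
            unlink(path);
            return 0;
        }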

  • File system query

    - by Balaji
    Is there an easy way to query data stored in a file system? We are storing data in the file system (instead of a database), in XML format. Since the data is growing day by day, we are finding it difficult to query the content of the files. Can anyone suggest a tool or method to query the data in the existing file system?
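
    As a starting point rather than a full answer: for ad-hoc queries over a tree of XML files, command-line XPath may be enough, while a dedicated XML database with XQuery support is the usual next step as the data grows. A hedged example with placeholder paths and element names:

        # find matching records across every XML file in the tree
        find /data -name '*.xml' -exec \
            xmllint --xpath '//record[status="active"]' {} \; 2>/dev/null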
