Search Results

Search found 682 results on 28 pages for 'ownership'.

  • ColdFusion 9 server not restarting - “Permission denied” errors

    - by Xevi Pujol
    I had to restart my ColdFusion 9 server on CentOS because of a memory performance issue, but now the server won't restart. Looking at cfserver.log, I can see "Permission denied" errors all along. The ColdFusion application folder (/opt/coldfusion9/) is owned by nobody:root, as that fixed a similar problem we had a few weeks ago. Also, the last time this CF server was running correctly, the JRE user being used was nobody. Maybe CF is trying to restart as another user (presumably apache), and that creates permission issues? However, I'm not sure how to check this hypothesis. Where's the config file that tells CF which JRE user to use? If I can change that, I could try specifying nobody there. Any other ideas also welcome.

    UPDATE: The runtime user that ColdFusion uses is defined in /etc/init.d/coldfusion_9. I fixed the problem by being consistent with the users: I needed to revert the ownership of /opt/coldfusion9/ back to apache:root, which matches the init file.
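
    A minimal sketch of the fix described in the UPDATE (paths and user names are taken from the post itself):

        # see which runtime user the init script defines
        grep -i user /etc/init.d/coldfusion_9
        # make the install's ownership match that user
        chown -R apache:root /opt/coldfusion9/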

  • pgpoolAdmin keeps going straight back to its login page

    - by user705142
    I'm trying to run pgpoolAdmin through nginx - it seems to be working properly, at least initially. I've gone through the initial set-up, which works fine, but now after logging in, every link takes me straight back to the login page. It also shows Japanese text instead of English, despite my picking English during installation. It looks as if it is unable to save any user data, session information, etc. I have JavaScript/cookies enabled, so it's not that. The ownership of the folder is nginx, and so too is pgmgt.conf.php, so it shouldn't be a problem with permissions. One potential issue is that I can't see any confirmation that PHP PostgreSQL support is enabled on the phpinfo screen, despite the correct package being installed and in the config line. Any ideas as to what's happening here? The nginx rules are pretty standard:

        server {
            # pg-pool admin
            listen 997;
            server_name localhost;
            root /opt/pgpooladmin;
            index index.php;
            location ~ .php$ {
                fastcgi_pass_header Set-Cookie;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }
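
    One hedged guess, given the login loop: PHP sessions are not being persisted. It may be worth checking where PHP writes session files and whether the php-fpm/nginx user can write there (the session path below is an assumption - it varies by distro):

        php -i | grep session.save_path
        ls -ld /var/lib/php/session       # assumed path; use whatever the line above reports
        chown nginx /var/lib/php/session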

  • (13) Permission denied on Apache CGI attempt

    - by ncv
    I have recently upgraded my Apache2 server, and am now unable to run a CGI app. My logs are showing:

        (13) Permission denied: unable to connect to cgi daemon after multiple tries

    I understand the error message means Apache is being denied some permission to some file, and I'm stumped as to how to track down and solve the problem. Is the file mentioned in the error message truly the blocked file? Or might the problem be caused by some other needed file? The .cgi file is right where it has always been, under /usr/share. The file ownership (root) and permissions (world readable/executable) are the same as they have always been for the file and its ancestors. The SELinux file labels are unchanged. The SELinux audit log shows no denial associated with Apache or the CGI program. In case of a dontaudit condition, I enabled auditing, but still saw nothing. I switched SELinux into permissive mode briefly, to no avail. I even tried restarting Apache while in permissive mode. This did not solve the problem. Any suggestions on how to solve this? I'm tempted to just revert to the older Apache.
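
    The quoted message comes from mod_cgid, which talks to its CGI daemon over a Unix socket, so the blocked file may be the socket rather than the .cgi itself. A hedged check (the config and socket paths below are assumptions):

        grep -ri scriptsock /etc/apache2 /etc/httpd 2>/dev/null   # find the ScriptSock setting
        ls -ldZ /var/run/apache2 /var/run/httpd 2>/dev/null       # modes and SELinux labels on the socket dir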

  • Unix Permissions: Enable access to files no matter the user?

    - by TK Kocheran
    I've been using Linux for a long time, and I am still completely in the dark about how file permissions really work. With that in mind, does anyone have any books or thorough guides I could read to really understand things completely? I've done my fair share of sysadminning, so I know the easy stuff like making directories readable and writable, making files executable, and changing the owner of a file, but on sharing files across users, I'm lost.

    Here's my main problem. I have a number of machines across which I intend to synchronize my music library. I've been using Unison for a while now and it's a great choice, as I can easily run it over SSH on the local network I just set up. Win-win. Up until this point, I've been synchronizing computers using a 2TB external hard drive (computer 1 unisons to HD, computer 2 unisons to HD, etc.). This is tedious at best, especially since I encrypted the drive, making it a huge hassle to hook it up to all of my machines and sync it. Anyway, the drive is running ext4 (in TrueCrypt), so it maintains all Unix filesystem info like owners and groups.

    I just set up a new machine and Unison'd it to get the music on it, and I realized that now all of my permissions are fubar. I had to run Unison as root, since that was the only way I could get the files to come off of the external drive. Apparently, since I'm using a different user name on this machine than my usual "rfkrocktk" across all machines, this essentially throws a huge wrench in the gears.

    Here's my use case. This laptop has two effective users, "leandra" and "rfkrocktk". I want to share music between these two users, so I symlinked /home/rfkrocktk/Music to point to /home/leandra/Music. How do I (a) allow both users access to read/write/delete files in this folder, and (b) keep everything nicely in sync without messing up file ownership?
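
    One conventional way to handle (a), sketched with the names from the post (the "music" group itself is invented for illustration):

        sudo groupadd music
        sudo usermod -aG music leandra
        sudo usermod -aG music rfkrocktk
        sudo chgrp -R music /home/leandra/Music
        # setgid on directories keeps newly created files in the shared group
        sudo find /home/leandra/Music -type d -exec chmod 2775 {} \;
        sudo find /home/leandra/Music -type f -exec chmod 664 {} \;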

  • Error creating ODBC connection to SQL Server 2008 Express

    - by DavidB
    When creating a System DSN, I get the error:

        Connection failed: SQLState: '08001' SQL Server Error: 2
        [Microsoft][SQL Server Native Client 10.0]Named Pipes Provider: Could not open a connection to SQL Server [2].
        Connection failed: SQLState: 'HYT00' SQL Server Error: 0
        [Microsoft][SQL Server Native Client 10.0]Login timeout expired

    I'm running Vista Home Premium 64-bit SP2, and installed SQL Server 2008 Express Advanced without errors. I'll be using the database locally for an app installed on the same PC. I'm able to successfully connect with SQL Server Management Studio using Windows Authentication (my Windows account is a member of local Administrators), and I can successfully create a database with default ownership (defaults to my Windows account).

    SQL Server Configuration Manager shows that Shared Memory, TCP/IP, and Named Pipes are enabled for SQL Native Client 10.0 Configuration, SQL Native Client 10.0 Configuration (32bit), and SQL Server Network Configuration (SQLEXPRESS). The SQL Server (SQLEXPRESS) and SQL Server Reporting Services (SQLEXPRESS) services are running.

    When I create a System DSN, my driver choices are SQL Server (sqlsrv32.dll, 4-10-09), which gives a generic wizard, and SQL Server Native Client 10.0 (sqlncli10.dll, 7-10-08), which gives the SQL Server 2008 wizard. I choose the latter. I enter name and description, and have tried both MyPCName and 127.0.0.1 for the server name (browsing turns up nothing). After clicking Next, I leave it at Integrated Windows authentication, and leave "Connect to server for additional options" checked. After clicking Next, I get the error above. I know it's probably a simple answer (permission issue?), and I'm a SQL noob, so I appreciate anything that would point me in the right direction. Thanks!
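
    A common gotcha worth ruling out, offered as a guess rather than a diagnosis: Express installs as a named instance, so the DSN's server field usually needs HOST\SQLEXPRESS rather than the host name alone. A quick test from a command prompt:

        sqlcmd -S MyPCName\SQLEXPRESS -E -Q "SELECT @@SERVERNAME"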

  • dovecot/postfix: can send & receive via webmin, however SquirrelMail and Outlook fail to connect

    - by Jonathan
    I finished setting up Dovecot and Postfix on my server (CentOS 5.5/Apache) earlier today. So far I've been able to get email working through webmin (can send/receive to and from external domains). However, attempting to telnet xxx.xxx.xx.xxx 110 returns the following:

        Connected to xxx.xxx.xx.xxx.
        Escape character is '^]'.
        +OK Dovecot ready.
        USER mailtest
        +OK
        PASS *********
        +OK Logged in.
        -ERR [IN-USE] Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 22:55:48]
        Connection closed by foreign host.

    Which further logs the following errors:

        dovecot: Feb 11 21:32:48 Info: pop3-login: Login: user=, method=PLAIN, rip=::ffff:xxx.xxx.xx.xxx, lip=::ffff:xxx.xxx.xx.xxx, TLS
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
        dovecot: Feb 11 21:32:48 Error: POP3(mailtest): Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 21:32:48]
        dovecot: Feb 11 21:32:48 Info: POP3(mailtest): Couldn't open INBOX top=0/0, retr=0/0, del=0/0, size=0

    Also, when attempting to log in to SquirrelMail or access the account via Thunderbird/Live Mail etc., it fails with a similar issue. Any suggestions or outside thinking on this would be a massive help! I've pretty much exhausted every resource, and tried every suggestion for my dovecot.conf file, but so far nothing seems to work. I feel like it may be a permissions/ownership issue, but I'm lost as to specifics.
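
    The stat() failures point at modes/ownership on the Maildir itself; a hedged first step (user and path taken from the logs; the mail_location line is an assumption about the config):

        chown -R mailtest:mailtest /home/mailtest/MailDir
        chmod -R u+rwX /home/mailtest/MailDir
        # and confirm dovecot.conf uses the same spelling/case, e.g.
        # mail_location = maildir:~/MailDir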

  • eAccelerator Issue - Cache Directory Empty.

    - by Tom
    Hi all, hoping someone can give me a hand with this. I've recently installed eAccelerator 0.9.6.1 on a CentOS LAMP server. I had it working fine, using /tmp/accelerator as the cache directory. php.ini setup:

        zend_extension="/usr/local/lib/php/extensions/no-debug-non-zts-20060613/eaccelerator.so"
        eaccelerator.shm_size="200"
        eaccelerator.cache_dir="/var/cache/eaccelerator"
        eaccelerator.enable="1"
        eaccelerator.optimizer="1"
        eaccelerator.check_mtime="1"
        eaccelerator.debug="0"
        eaccelerator.filter=""
        eaccelerator.shm_max="0"
        eaccelerator.shm_ttl="3600"
        eaccelerator.shm_prune_period="180"
        eaccelerator.shm_only="1"
        eaccelerator.compress="1"
        eaccelerator.compress_level="9"

    php -v output:

        PHP 5.2.12 (cli) (built: Feb 3 2010 00:34:28)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies
            with eAccelerator v0.9.6.1, Copyright (c) 2004-2010 eAccelerator, by eAccelerator
            with the ionCube PHP Loader v3.3.20, Copyright (c) 2002-2010, by ionCube Ltd.

    I had to remove the cache directory as I was testing something. I remade it, re-set permissions, and found that eAccelerator was no longer creating cache files within the folder. I thought it might be down to ownership rights on the folder, so I chown'd it apache.apache, and this made no difference. I recreated the directory in /var/cache instead and edited php.ini to point to the new cache dir location, chmod'd, chown'd, etc., and still eAccelerator is not creating any cache files in the directory (it stays empty). Could someone suggest what I might be doing incorrectly here? I've read through numerous pages trying to troubleshoot the issue to no avail. Any help appreciated.
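
    One tentative observation from the quoted config, not a confirmed answer: eaccelerator.shm_only="1" tells eAccelerator to cache compiled scripts in shared memory only, in which case an empty disk cache_dir would be expected behaviour. To get on-disk cache files back:

        eaccelerator.shm_only="0"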

  • Windows Photo Viewer can't open this picture because you don't have the correct permissions to access the file location

    - by Software Monkey
    My system is Windows 7, fully up to date with all patches and options (except for Microsoft Silverlight, which I refuse to install). I get this error whenever I try to open an image using Windows Photo Viewer, such as when previewing from Explorer or when opening an image attachment to an email. I have already verified correct permissions to the file and all folders in the path.

    The strange thing is that every other program I have seems to open the images fine, including "Slideshow" from Windows Explorer. Even more strange, in WPV there is an "Open" menu that lists the other programs for images, including GIMP and MS Paint, and they open the very file WPV is complaining about just fine. That should eliminate permissions as the problem, especially since (logically at least) they are read/write while WPV is read-only. I have even edited and saved the images that WPV does not open.

    I am out of ideas, and searching for an answer on the web has resulted only in the same tired repetition of some flavor of "take ownership and reset permissions for the entire drive", which I have already done. And which is counter-indicated by the fact that only Windows Photo Viewer seems to have a problem. The one thing which is slightly unusual is that the normal files are all on a second HDD mounted into C:; however, for email attachments the temporary folder is C:\Temp\, which is directly on that drive.
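
    A cheap comparison, offered as a suggestion rather than a known fix: dump the effective ACLs of a failing image and a working one and diff the output (both paths are placeholders):

        icacls "D:\Pictures\failing.jpg"
        icacls "C:\Temp\working.jpg"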

  • Permissions nightmare - tried all I know

    - by Ben
    Working on a new client's dev site, which is a WordPress install on a Plesk box. I have SSH root access, and FTP access through a separate account.

    What I've done so far: initially I couldn't make any changes to any files at all. The permissions on all the template files looked a little screwy (644), so I figured I'd change them to allow group access and add myself to the group. I ran a recursive chmod on the theme folder to set everything to 664, quickly realised I'd broken it, and set the folders back to 755, keeping files at 664. Ownership on all files is a mixture of root:root and 500:500 (there is no user or group with the ID of 500 on the server). I added myself to the group 'root' so I could modify the files too.

    The problem: this worked OK in terms of being able to edit the existing files, so I began working. However, I can't upload to the directory, even having run chown -R root:root templatefolder/ and being in the root group. I feel like I must be missing something obvious, and it's doing my head in.

    Questions: files in the install are owned by 500 with group 500 - I've looked in /etc/group and /etc/passwd and there is no user or group with this ID. Is that left over from another developer's setup or the previous server (they moved recently)? Is being in the 'root' group enough, or do I need to own the theme folder as 'myftpuser' in order to upload and create new files? Like I say, I have edit access, so I got myself this far. I'm now questioning what to do next!
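
    A sketch of the usual Plesk-style cleanup, assuming uploads should go through the separate FTP account ('myftpuser' as named in the post; psacln is the web-users group on many Plesk installs, an assumption here):

        chown -R myftpuser:psacln templatefolder/
        find templatefolder/ -type d -exec chmod 755 {} \;
        find templatefolder/ -type f -exec chmod 644 {} \;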

  • Permission issue for Apache

    - by Aamir Adnan
    Environment details: Amazon EC2, Ubuntu 12.04, Django + mod_wsgi + Python 2.6, web server: Apache2.

    I have mounted a 10GB EBS volume to an instance at /mnt/ebs1/. After mounting the volume and formatting, I placed all my project files in /mnt/ebs1/project. The wsgi file is /mnt/ebs1/project/apache/django.wsgi, with the following content:

        import os, sys
        sys.path.insert(0, '/mnt/ebs1/project')
        sys.path.insert(1, '/mnt/ebs1')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.configs.common.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    My httpd.conf file looks like:

        LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so
        WSGIPythonHome /usr/bin/python2.6
        WSGIScriptAlias / /mnt/ebs1/project/apache/django.wsgi

        <Directory /mnt/ebs1/project>
            Order allow,deny
            Allow from all
        </Directory>

        <Directory /mnt/ebs1/project/apache>
            Order allow,deny
            Allow from all
        </Directory>

        Alias /static/ /mnt/ebs1/project/static/

        <Directory /mnt/ebs1/project/static>
            Order deny,allow
            Allow from all
        </Directory>

    The above configuration gives me: Forbidden: You don't have permission to access / on this server. I tried to find the user running Apache using ps aux, which is www-data with group www-data. I have tried to change the ownership of /mnt/ebs1 and its subdirectories using chown -R www-data:www-data /mnt/ebs1, but that still does not solve the problem. Can anyone tell me what I am doing wrong or have missed?
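
    Apache needs execute (search) permission on every directory component of the path, not just ownership of the tree, and a freshly formatted mount point is easy to overlook. A quick way to audit the whole chain (a suggestion, not a confirmed cause):

        namei -mo /mnt/ebs1/project/apache/django.wsgi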

  • Limiting an SSH user account to access only his home directory

    - by EBAGHAKI
    By reading some tutorials online, I used these commands:

    1. Make a local group: net localgroup CopsshUsers /ADD
    2. Deny access for this group at top level: cacls c:\ /c /e /t /d CopsshUsers
    3. Open access to the CopSSH installation directory: cacls copssh-inst-dir /c /e /t /r CopsshUsers
    4. Add the CopSSH user to the group above: net localgroup CopsshUsers mysshuser /add

    Simply put, these commands try to create a user group that has no permissions on the computer and has access only to the CopSSH installation directory. This is not what actually happens, since you cannot change the permissions on your Windows directory: the third command won't remove access to the Windows folder (it says access denied in its log). Somehow I achieved that by taking ownership of the Windows folder, after which I executed the third command, so CopsshUsers now has no permissions on the Windows folder.

    Now I tried to SSH to the server and it simply can't log in! This is kind of funny, because with permission on the Windows directory you can log in, and without it you can't! So if you CAN SSH to the server, you know that you have access to the Windows directory! (Is this really true??)

    Simple task: limit an SSH user account to access only his home directory on Windows and nothing else! Guys, please help!
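
    An alternative that avoids the ACL surgery entirely, assuming the installed CopSSH bundles OpenSSH 4.9 or later: chroot the group in sshd_config. Note the chroot target must be root-owned and not group/world-writable, and ForceCommand internal-sftp limits the accounts to SFTP.

        Match Group CopsshUsers
            ChrootDirectory %h
            ForceCommand internal-sftp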

  • How to set up a .ssh directory inside an encrypted volume on Mac OS X and still have public key logins?

    - by Vitaly Kushner
    I have my .ssh directory inside an encrypted sparse image, i.e. ~/.ssh is a symlink to /Volumes/VolumeName/.ssh. The problem is that when I try to ssh into that machine using a public key, I see the following error message in /var/log/secure.log:

        Authentication refused: bad ownership or modes for directory /Volumes

    Any way to solve this in a clean way?

    Update: the permissions on ~/.ssh and authorized_keys are right:

        > ls -ld ~
        drwxr-xr-x+ 77 vitaly staff 2618 Mar 16 08:22 /Users/vitaly/
        > ls -l ~/.ssh
        lrwxr-xr-x 1 vitaly staff 22 Mar 15 23:48 /Users/vitaly/.ssh@ -> /Volumes/Astrails/.ssh
        > ls -ld /Volumes/Astrails/.ssh
        drwx------ 3 vitaly staff 646 Mar 15 23:46 /Volumes/Astrails/.ssh/
        > ls -ld /Volumes/Astrails/
        drwx--x--x@ 18 vitaly staff 1360 Jan 12 22:05 /Volumes/Astrails//
        > ls -ld /Volumes/
        drwxrwxrwt@ 5 root admin 170 Mar 15 20:38 /Volumes//

    The error message says the problem is with /Volumes, but I don't see the problem. Yes, it is o+w, but it is also +t, which should be OK but apparently isn't. The problem is I can't (or rather shouldn't) change /Volumes permissions, but I do want public key login to work.

    First I thought of mounting the image somewhere other than /Volumes, but it is automounted on login by standard OS X mounting. I asked about it here: How to change disk image's default mount directory on osx. The only answer I got was "you can't" ;) I could hack my way around by writing a shell script that manually mounts the volume at a non-standard location, but it would be a gross hack. I'm still looking for a cleaner way to do what I need.
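
    One workaround to consider (an assumption, not something from the post): point sshd at a key file outside the image, so the /Volumes modes never enter the StrictModes check. In sshd's config:

        AuthorizedKeysFile /etc/ssh/authorized_keys/%u

    with the public key copied into a root-owned, world-readable file at that path.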

  • ssh_exchange_identification: Connection closed by remote host

    - by rick
    Firstly, I know that this question has been asked a million times, and I have read everything I can find and still cannot fix the problem. I am encountering this issue when ssh'ing in from my Mac to my Ubuntu server on a fresh install of Ubuntu (I reinstalled because of this issue). I have SSH port-mapped to 7070 because my ISP is blocking 22. On the client:

        bash: ssh -p 7070 -v [email protected]
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to address.org port 7070.
        debug1: Connection established.
        debug1: identity file /home/me/.ssh/identity type -1
        debug1: identity file /home/me/.ssh/id_rsa type 1
        debug1: identity file /home/me/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host

    Here's what I have done to try to resolve the issue:

    - Made sure my MaxStartups is OK: grep MaxStartups /etc/ssh/sshd_config gives #MaxStartups 10:30:60
    - Made sure hosts.deny is clear of denials
    - Made sure hosts.allow has my client IP
    - Cleared out known_hosts on the client
    - Changed ownership of /var/run to root
    - Made sure etc/run/ssh is
    - Made sure /var/empty exists
    - Reinstalled openssh-server
    - Reinstalled Ubuntu

    When I run telnet localhost, I get:

        telnet localhost
        Trying ::1...
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

    When I run /usr/sbin/sshd -t:

        Could not load host key: /etc/ssh/ssh_host_rsa_key
        Could not load host key: /etc/ssh/ssh_host_dsa_key

    When I regenerate the keys with

        ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
        ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key

    I get the same error. I am pretty sure this is the issue. Can anyone help?
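
    "Could not load host key" even after regenerating often means sshd simply can't read the files; a cheap check before anything more drastic (a guess, not a confirmed fix):

        ls -l /etc/ssh/ssh_host_*
        sudo chown root:root /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_dsa_key
        sudo chmod 600 /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_dsa_key
        sudo /usr/sbin/sshd -t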

  • Windows Server - share files without access for administrator

    - by Pawel
    We have a Microsoft Windows Server 2008 R2 based server that is administrated by our IT department. We would like to achieve two things simultaneously:

    1. A folder on the server, containing several thousand files (new files added frequently), that is accessible to some Active Directory users (e.g. the board of directors) but is not accessible by IT department employees.
    2. IT department employees still maintain the rights to administrate the server, including installing new software and services.

    We already checked some solutions:

    - Using NTFS access rights. Unfortunately IT (members of the "Administrators" group) can set themselves as new owners of the files and change the permissions so that they gain access to the files.
    - Enabling EFS. Unfortunately, even if you do not allow IT to access files, they can still disable EFS completely because they have administrative rights. Moreover, as far as I know, you have to manually add permissions for all users but the owner for each new file - very inconvenient.
    - Creating a new role for the IT department that has all the privileges apart from taking ownership of files. Unfortunately, if you're not a member of the Administrators group, you cannot install new software, no matter what privileges you add to the role.
    - TrueCrypt - nice free encryption software, but with poor sharing capabilities. You can either mount an encryption container on the server (and then IT has access to its contents) or mount it locally, but then only one user can mount it for writing.
    - AxCrypt - free encryption software that enables file-by-file encryption on the server. There are some disadvantages, though: you have to manually encrypt each new file added, the files have their extensions changed, and you can only set one password for all files (so all users have to know that one password).

    Any other ideas? Our budget is limited, so enterprise-class software from Symantec or PGP would probably not be an option.

  • Set user group and permissions of FTP folder back to default

    - by OrangeTux
    I tried to create a new FTP user via the command line, but I did something wrong, and now I can access the server via FTP but can't see any files. It doesn't matter which user I'm using. ls -la gives:

        drwxr-xr-x 13 root ftp 4096 2012-03-30 09:47 .
        drwxr-xr-x  7 web6 ftp 4096 2012-03-26 09:28 ..
        drwxr-xr-x  4 web6 ftp 4096 2012-03-26 13:31 actions
        drwxr-xr-x  2 web6 ftp 4096 2012-03-26 11:46 bin
        -rwxr-xr-x  1 web6 ftp 1520 2012-03-24 23:32 changelog.txt
        drwxr-xr-x  2 web6 ftp 4096 2012-03-26 13:30 css
        drwxr-xr-x  8 web6 ftp 4096 2012-03-24 22:43 external
        -rwxr-xr-x  1 web6 ftp  333 2012-03-26 15:12 .htaccess
        drwxr-xr-x  3 web6 ftp 4096 2012-02-27 15:07 images
        -rwxr-xr-x  1 web6 ftp 1606 2012-03-26 21:25 index.php
        drwxr-xr-x  2 web6 ftp 4096 2012-02-18 13:20 js
        drwxr-xr-x  2 web6 ftp 4096 2012-02-03 00:34 layout
        drwxr-xr-x  2 web6 ftp 4096 2012-03-29 23:35 library
        drwxr-xr-x  2 web6 ftp 4096 2012-03-30 09:47 log
        -rwxr-xr-x  1 web6 ftp  396 2012-03-24 15:04 menu.php
        drwxr-xr-x  2 web6 ftp 4096 2012-03-30 12:01 python
        drwxr-xr-x  2 web6 ftp 4096 2012-03-23 10:51 todo

    I can't see any directories or files over FTP because I changed the group owner, or the group owner's rights, on the FTP dir. How can I set the ownership of the files back to default so I can access them via FTP again?
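
    Judging by the listing, the top-level directory is owned root:ftp while everything inside it is web6:ftp; a hedged fix is to hand the directory itself back to the site user (run from the FTP root shown above):

        chown web6:ftp .
        chmod 755 .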

  • Multiple *NIX Accounts with Identical UID

    - by Tim
    I am curious whether there is a standard expected behavior, and whether it is considered bad practice, when creating more than one account on Linux/Unix that has the same UID. I've done some testing on RHEL5 with this and it behaved as I expected, but I don't know if I'm tempting fate using this trick. As an example, let's say I have two accounts with the same IDs:

        a1:$1$4zIl1:5000:5000::/home/a1:/bin/bash
        a2:$1$bmh92:5000:5000::/home/a2:/bin/bash

    What this means is:

    - I can log in to each account using its own password.
    - Files I create will have the same UID.
    - Tools such as "ls -l" will list the UID as the first entry in the file (a1 in this case).
    - I avoid any permissions or ownership problems between the two accounts because they are really the same user.
    - I get login auditing for each account, so I have better granularity when tracking what is happening on the system.

    So my questions are:

    - Is this ability designed, or is it just the way it happens to work?
    - Is this going to be consistent across *nix variants?
    - Is this accepted practice?
    - Are there unintended consequences to this practice?

    Note, the idea here is to use this for system accounts, not normal user accounts.
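
    For reference, a sketch of how such an account is created deliberately (values taken from the passwd lines above): useradd normally refuses duplicate UIDs, and -o/--non-unique overrides that check.

        useradd -o -u 5000 -g 5000 -d /home/a2 -s /bin/bash a2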

  • Deleting "undeletable" files in Vista

    - by Nik Reiman
    I recently upgraded my workstation from XP SP3 to Vista Business, and during the upgrade Windows moved my old C:\Windows directory to C:\Windows.old. I got all of the stuff I needed out of that folder, but there are six "undeletable" files there, so I cannot remove it. They are:

        Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-H
        Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-V
        Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelper.dll
        Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll
        Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroPDF.dll
        Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\pdfshell.dll

    Whenever I try to delete the files, either through Explorer or a command line, I get a permission denied error. I have tried to grant myself full permission on the files, but again, permission denied. I don't even have Acrobat installed on my Vista machine, and I uninstalled Adobe Updater. However, I still can't manage to get rid of these files. How do I nuke them for good?

    Edit: I was able to take ownership of the files, but I still can't delete them. Renaming them did not work, as I was denied permission to do that as well. I'll try booting up in safe mode and getting rid of them there.

    Edit II: Booting up into safe mode did not allow me to delete the files. Bummer.
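
    A sequence worth trying from an elevated command prompt, offered tentatively since ownership has already been taken once:

        takeown /F C:\Windows.old /R /D Y
        icacls C:\Windows.old /grant Administrators:F /T
        rd /s /q C:\Windows.old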

  • Cannot delete flash9.ocx

    - by Kara Marfia
    Some kinda voodoo, indeed. I bought a new boot drive, and it's time to use the old drive for data. I thought I'd save some time by just wiping the unneeded system folders instead of backing up, formatting, and restoring. Wups! I have a single Adobe file that absolutely will not be deleted: G:\Windows\SysWOW64\Macromed\Flash\flash9.ocx - though it may have started with a different file name. I'm able to rename it, oddly enough.

    To be clear, the drive is currently plugged in externally. So I can boot the computer, plug this drive in afterward, and immediately attempt to delete. The "file in use" box reads "the action can't be completed because the file is open in another program". I'd format at this point, but it's my white whale, and I have to know if Adobe has inserted some nasty little registry hack - or whatever it is - making this impossible.

    Since I'm sure it'll come up: I've taken ownership of the file - and this was the trick preventing me from deleting anything else on the drive - I have full rights in the file permissions, you name it, I've fiddled with the file itself. I'm about to try uninstalling Flash from the system drive, in case that aligns the planets properly. Sometimes I wish I were less stubborn and could just format already.

  • Set up multiple websites on a local web server

    - by mickburkejnr
    I have spent the last few days setting up a CentOS 6 server on my local network so that I can host multiple projects I'm currently working on. Everything has been set up so that I access the server by typing 192.168.1.10 and the Apache test page comes up. What I'm aiming for is to access different projects by typing 192.168.1.10/project, and then view each project as if it were on its own standalone server. I have thought about just sticking these sites inside folders on the server and accessing them that way, but a lot of my projects use CakePHP, so this isn't feasible. So what I need to do is create VirtualHosts in Apache to allow me to do this, but without using a domain name. I want to stick to using the IP address of the machine (which is static). Any ideas?

    EDIT: I've followed Peter's suggestion, but now I have a new problem. In the httpd.conf file I have entered the following:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /www/html/project1
            ServerName local.project1.com
            ErrorLog logs/local.project1.com-error_log
            CustomLog logs/local.project1.com-access_log common
        </VirtualHost>

    And now Apache is saying:

        Starting httpd: Warning: DocumentRoot [/www/html/project1] does not exist

    When it clearly does exist. I've disabled SELinux and can confirm it isn't turned on. I've also checked the ownership of the folder, and it's owned by root. I can also save files to these folders using a guest FTP account (which isn't associated with root), so the folders are being listed and can be written to. But when I try the folder in a web browser it doesn't seem to work either. I've also rebooted the server and the problem persists. What should I change in order to resolve this?
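
    One tentative reading of that warning: the vhost points at /www/html/project1, while the stock CentOS document root lives under /var/www/html, so the path may simply be missing its /var prefix:

        DocumentRoot /var/www/html/project1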

  • Windows File Access Denied

    - by Tom
    I seem to have a general problem with "access denied" on Windows. It manifests itself every time, e.g.:

    - my bat file calls a compiler that creates a file on disk
    - my bat file renames a file

    But I also have files downloaded (Firefox) to the Windows desktop where Windows gives me "access denied" if I try to delete the file. I tried disabling AVG and making an exception in the AVG Resident Shield. (I have checked with Task Manager and Winternals Process Explorer that it is not a still-running process causing the locks.)

    Windows 7. My user account is an administrator. All files are created by the same user account. The problem is recent, but some things I first noticed yesterday (when I started calling .bat files again, which I have used for many years).

    I have tried:

    - starting e.g. Windows Explorer with "run as administrator", but that makes no difference
    - right-click - Properties - Security and changing permissions/ownership (I also get "access denied" when trying this, so it does not help)

    Here is a screenshot of the security dialog for a "locked" file (the locking occurs continuously, every time the file is created). If I click on it, it states I am not the owner, which baffles me, as I just created it. (Yes, through a .bat file calling executables that create the file, but all running under my administrator user account. Interestingly, after having this dialog open, the file somehow sometimes suddenly seems to allow me to delete it.)
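
    Since Process Explorer is already in play, Sysinternals' handle.exe is a natural next step to name whatever holds the lock (the path is a placeholder):

        handle.exe C:\Users\Me\Desktop\lockedfile.ext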

  • SFTP, ChrootDirectory and multiple users

    - by mdo
    I need a setup where I can put the contents of several user folders on a DMZ server, from which external clients can download them; protocol SFTP, Linux, OpenSSH. To ease administration, we want to use one single user for the upload. What does work is to define ChrootDirectory /home/sftp/ in sshd_config, set the according ownership and modes, and define a home dir in passwd so that the working directory of each user fits. This is my structure:

        /home/sftp/uploader/user1/file1.txt
        /home/sftp/uploader/user2/file2.txt

    The uploader user can write file1.txt and file2.txt to the corresponding folders, and by having the user folders (user1, user2) set to the users' primary group plus setting SETGID on the folders, the users are even able to delete the files (which is necessary).

    Only problem: because /home/sftp/ is the chroot base dir, the users can change up a directory and see other users' folders, though they cannot change into them because of access rights.

    Requirement: we want to prevent users from changing to /home/sftp/uploader/ and seeing other users' folders. My requirements are to use SFTP, have one upload user, and every user must have write access to his home dir. Obviously it's not an option to use something like ChrootDirectory %h, because every path component of the chroot path needs to have limited access rights, so as far as I understand this does not work.
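
    One pattern that meets the "no peeking updir" requirement, sketched under the assumption that moving files one level deeper is acceptable: chroot each user into his own root-owned folder and keep the writable drop area in a subdirectory, which can carry the setgid group trick already described.

        Match User user1
            ChrootDirectory /home/sftp/uploader/user1
            ForceCommand internal-sftp

    with /home/sftp/uploader/user1 owned root:root, mode 755, and a user1-writable subdirectory such as /home/sftp/uploader/user1/files for the uploads.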

  • Replacing the HD in a Mac OS X 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have two Mac OS X servers available for file sharing (quad Xeons, 2GB RAM, both 10.6.8). No. 1 is an Open Directory master with 50+ user accounts; No. 2 has only two local accounts (/local/Default) and looks to the OD master for all user accounts (/LDAPv3/10.x.x.20/). Both servers have three internal HDs: the boot volume with only the server OS and minimal apps, a 'DataShare' HD (500GB), and a backup drive (500GB).

    After upgrading the DataShare HD in server No. 2 from a small internal HD (500GB) to a larger-capacity (2TB) drive, users are unable to connect to shares on server No. 2. Users get the error "There are no shares available or you are not allowed to access them on the server". The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (it keeps all ownership data, UIDs, permissions, and last edit dates and times). Everything booted up OK, with no indication there were any issues (paths to the sharepoints look good).

    Notes from troubleshooting:

    - Server 1 is operating perfectly; all users can access shares and authenticate, etc.
    - I've checked that the SACL (Server Access Control List) settings are OK.
    - On server 2, in the Server Admin app, I can see all the shares listed OK. The paths seem valid; I can disable/re-enable the shares with no errors.
    - On server 2, Workgroup Manager lists all the accounts from the OD master in the LDAP directory view. All seems fine from here.

    Basically everything looks normal, but no file shares on server 2 can be accessed by regular users.
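
    A tentative check from the new volume (the record name is an assumption based on the share name in the post): list the sharepoints to confirm they still reference the replaced disk, and re-create one if not.

        sudo sharing -l
        sudo sharing -r DataShare
        sudo sharing -a /Volumes/DataShare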

  • RPC Server Unavailable on Hyper-V cluster when moving resources after the host adapter has failed

    - by Doug Luxem
    On a Windows 2008 R2 SP1 cluster running Hyper-V, the primary host interface lost network connectivity. The interface was rapidly flapping up and down, which was later determined to be caused by a faulty switch port. As this was a clustered server, the host interface was not fault tolerant (seeing as how the whole server was fault tolerant), so connectivity to the host was going up and down. The Hyper-V guests were completely unaffected by the network outage, as they used a dedicated trunk on the server, separate from the host interface. Additionally, the dedicated interfaces for the cluster and live migration networks were fine.

    In order to diagnose the server, I tried to move all resources (Hyper-V guests) to other nodes through Failover Cluster Manager. These moves failed with the error "RPC Server Unavailable". The only way to move resources was by shutting down the guests, stopping the cluster service on node A, allowing other nodes to take ownership of the resources, and restarting the guests.

    A few other notes:

    - All nodes have Client for MS Networks and File & Printer Sharing enabled on the cluster and LM networks.
    - Node A was accessible over the cluster and LM networks from other nodes (these are private, cluster-only networks); pingable, CIFS, etc.
    - Accessing \\NODEA is done over the host adapters, as you would expect in this case, and is the reason for the "RPC Server Unavailable" error with that adapter being down.

    My questions here are:

    - Is there a way to still use Live Migration in a failure scenario such as this, to avoid shutting down the Hyper-V guests?
    - How can the network be reconfigured in the future so that the cluster service attempts to use the cluster and/or live migration networks to issue the RPC requests?

  • Linux NFS create mask and force user equivalent

    - by Mike
    I have two Linux servers:

    - fileserver: Debian 5.0.3 (2.6.26-2-686), Samba version 3.4.2
    - apache: Ubuntu 10.04 LTS (2.6.32-23-generic), Apache 2.2.14

    I have a number of Samba shares on fileserver so that I can access files from Windows PCs. I am also exporting /data/www-data to the apache server, where I have it mounted as /var/www. The setup is OK, except for when I come to create files on the NFS mount: I end up with files that cannot be read by Apache, or which cannot be modified by other users on my system.

    With Samba, I can specify force user, force group, create mask, and directory mask, and this ensures that all files are created with suitable permissions for my Apache web server. I can't find a way to do this with NFS. Is there a way to force permissions and ownership with NFS - am I missing something obvious? Although I've spent quite a bit of time with Linux, and am weaning myself off Windows, I still haven't quite got to grips with Linux permissions... If this is not the right way to do things, I am open to alternative suggestions.
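
    NFS itself has no create mask, but exports(5) can squash every client access to a single owner, which gets close to Samba's force user. A sketch for /etc/exports on fileserver (the client hostname is a placeholder; 33 is www-data's UID/GID on Debian):

        /data/www-data  apache-host(rw,all_squash,anonuid=33,anongid=33)

    followed by exportfs -ra to apply.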
