Search Results

Search found 20029 results on 802 pages for 'directory permissions'.

  • Apache HTTPd FollowSymLinks path permission

    - by apast
    Hi, I'm configuring my development environment with a basic Apache HTTPd configuration. But, to avoid a common problem, I want to map my test URL to my development folder. I'm using Ubuntu. My development path is located under the following example path: /home/myusername/myworkspace/hptargetpath/src/pages

    Consider the following symbolic link mapping:

        # ls -l /opt/share/www/mydevelopmentrootpath
        lrwxrwxrwx 1 root root 77 2011-02-13 18:53 /opt/share/www/mydevelopmentrootpath -> /home/myusername/myworkspace/hptargetpath/src/pages

    With this folder mapping, I configured Apache HTTPd as follows:

        <VirtualHost *:*>
            ServerName local.server.com
            ServerAdmin [email protected]
            DirectoryIndex index.html
            DocumentRoot /opt/share/www/mydevelopmentrootpath
            <Directory /opt/share/www/mydevelopmentrootpath/>
                Options +Indexes
                Options +FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    But I receive a 403 Forbidden error when I try to access index.html at http://local.server.com/index.html:

        403 Forbidden
        You don't have permission to access /index.html on this server.

    In the httpd debug log, I found the following message:

        [Sun Feb 13 19:34:47 2011] [error] [client 127.0.1.1] Symbolic link not allowed or link target not accessible: /opt/share/www/mydevelopmentrootpath

    I suspect this problem is being caused by a path permission: not a permission on the directory itself, but on some intermediate directory in the path. There is a directive in httpd core Options, SymLinksIfOwnerMatch ("The server will only follow symbolic links for which the target file or directory is owned by the same user id as the link"), but I tested it without effect. Can somebody help me? I think this is a trivial configuration for a development environment. Best regards, And Past
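    A common cause, sketched here as an assumption rather than a confirmed diagnosis: the Apache worker user (e.g. www-data) needs execute (search) permission on every directory along the link target's path, including /home/myusername. Two commands help check and fix that:

        # show which component of the target path blocks traversal
        namei -m /home/myusername/myworkspace/hptargetpath/src/pages

        # grant search permission on each intermediate directory
        chmod o+x /home/myusername \
                  /home/myusername/myworkspace \
                  /home/myusername/myworkspace/hptargetpath \
                  /home/myusername/myworkspace/hptargetpath/src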

  • How to exclude a specific URL from basic authentication in Apache?

    - by ripper234
    Two scenarios:

    Directory: I want my entire server to be password-protected, so I included this directory config in my sites-enabled/000-default:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            AuthType Basic
            AuthName "Restricted Files"
            AuthUserFile /etc/apache2/passwords
            Require user someuser
        </Directory>

    The question is: how can I exclude a specific URL from this?

    Proxy: I found that the above password protection doesn't apply to mod_proxy, so I added this to my proxy.conf:

        <Proxy *>
            Order deny,allow
            Allow from all
            AuthType Basic
            AuthName "Restricted Files"
            AuthUserFile /etc/apache2/passwords
            Require user someuser
        </Proxy>

    How do I exclude a specific proxied URL from the password protection? I tried adding a new segment:

        <Proxy http://myspecific.url/>
            AuthType None
        </Proxy>

    but that didn't quite do the trick.
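    One approach that may work for the directory case (a sketch for Apache 2.2; /public stands in for the URL to expose) is to override the authentication requirement with Satisfy Any in a more specific block; the same directives can be tried inside a <Proxy> block for the proxied URL:

        <Location /public>
            # the host-based rule passes, so Basic auth is skipped
            Order allow,deny
            Allow from all
            Satisfy Any
        </Location>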

  • Declaring multiple ports for the same VirtualHosts

    - by user65567
    I want to declare multiple ports for the same VirtualHost:

        SSLStrictSNIVHostCheck off

        # Apache setup which will listen for and accept SSL connections on port 443.
        Listen 443

        # Listen for virtual host requests on all IP addresses
        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
            <Directory "/Users/<my_user_name>/Sites/domain/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

    How can I declare a new port (Listen, ServerName, ...) for domain.localhost? If I add the following code, Apache answers (too eagerly) for every other subdomain of domain.localhost as well (subdomain1.domain.localhost, subdomain2.domain.localhost, ...):

        <VirtualHost *:80>
            ServerName pjtmain.localhost:80
            DocumentRoot "/Users/Toto85/Sites/pjtmain/public"
            RackEnv development
            <Directory "/Users/Toto85/Sites/pjtmain/public">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
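    A sketch of one way to do this, assuming 8080 is the port to add and treating names and paths as placeholders: declare a second Listen plus a matching NameVirtualHost/VirtualHost pair, and put a catch-all vhost first, since Apache routes unmatched names to the first vhost for that address:port:

        Listen 8080
        NameVirtualHost *:8080

        # first vhost = default; soaks up subdomain1.domain.localhost etc.
        <VirtualHost *:8080>
            ServerName catchall.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/empty"
        </VirtualHost>

        <VirtualHost *:8080>
            ServerName domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
        </VirtualHost>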

  • rm command and regular expressions via Linux BASH shell

    - by PeanutsMonkey
    I am attempting to use regular expressions to remove a set of files, but the bash shell returns these messages:

        rm: cannot remove `[0-99]+ -': No such file or directory
        rm: cannot remove `[a-zA-Z': No such file or directory
        rm: cannot remove `]+.[a-z]+': No such file or directory

    The pattern I passed to rm is:

        [0-99]+\ - [a-zA-Z ]+\.[a-z]+

    Questions: Can I use regular expressions? If yes, how do I use them with commands such as rm, mkdir, etc.?
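    The shell expands glob patterns, not regular expressions, so rm sees the pattern above as literal file names. GNU find does speak regular expressions; a hedged sketch (assuming names like "12 - Some File.txt", and note that [0-99] means the same as [0-9], since a character class holds single characters):

        # preview what would match (-regex tests the whole path, hence the ./ prefix)
        find . -maxdepth 1 -regextype posix-extended \
             -regex '\./[0-9]+ - [a-zA-Z ]+\.[a-z]+' -print

        # same match, but delete
        find . -maxdepth 1 -regextype posix-extended \
             -regex '\./[0-9]+ - [a-zA-Z ]+\.[a-z]+' -delete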

  • Can you mount a sysprep image using DISM?

    - by Tester123
    I created a script to mount a sysprepped Windows 7 image to a directory so I can edit a specific file in the image and then unmount it. The script seems to work just fine; however, each time I try, I get some sort of error about the image, such as: the image is supposedly damaged or corrupted; or the image mounts but nothing appears in the directory. So I guess the overall question is: is it possible to mount a sysprepped Windows 7 WIM image into a directory?
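    In principle a sysprepped WIM mounts like any other. For reference, the standard Windows 7 era invocation looks like this (paths are placeholders); the symptoms described often come from a stale previous mount or a WIM copied while in use, which /Cleanup-Wim and a fresh copy respectively rule out:

        :: mount image index 1 read/write
        Dism /Mount-Wim /WimFile:D:\images\install.wim /Index:1 /MountDir:C:\mount

        :: ...edit files under C:\mount...

        :: commit the edit and unmount
        Dism /Unmount-Wim /MountDir:C:\mount /Commit

        :: recover from an earlier failed or abandoned mount
        Dism /Cleanup-Wim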

  • Copy images using a single DOS command

    - by Haroon
    Hi guys, I'm wondering if it's possible to copy only image files from a directory. For example, if the source directory has: a.jpg, b.gif, c.png, d.txt. I want to copy only the images (using one command), to get this in the destination directory: a.jpg, b.gif, c.png.
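    A sketch using a plain cmd.exe loop over the three extensions shown (source and destination paths are placeholders):

        for %e in (jpg gif png) do copy "C:\source\*.%e" "C:\dest"

    Inside a batch file the loop variable must be doubled (%%e). robocopy also accepts several masks in a single call: robocopy C:\source C:\dest *.jpg *.gif *.png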

  • Reset locale in Debian Squeeze

    - by si2w
    I have problems with locales in Debian. I have tried many things, but nothing works for me:

        # locale -a
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        C
        POSIX
        en_US.utf8

    I tried to set en_US.utf8, without success, with dpkg-reconfigure locales -plow:

        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
            LANGUAGE = "en_US",
            LC_ALL = (unset),
            LC_CTYPE = "UTF-8",
            LANG = (unset)
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        /usr/bin/locale: Cannot set LC_CTYPE to default locale: No such file or directory
        /usr/bin/locale: Cannot set LC_ALL to default locale: No such file or directory
        Generating locales (this might take a while)...
          en_US.UTF-8... done
        Generation complete.

    (The same perl warning is printed twice more.) After a reboot, when I try to run a Perl script:

        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
            LANGUAGE = "en_US",
            LC_ALL = (unset),
            LC_CTYPE = "UTF-8",
            LANG = "en_US.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").

    Here is my /etc/default/locale config file:

        # cat /etc/default/locale
        LANG=en_US.UTF-8
        LANGUAGE=en_US

    Any idea how to solve this (stupid) problem? Thanks
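    For reference, the standard Debian repair sequence is below (a sketch; note the "LC_CTYPE = UTF-8" in the warnings, which often comes from an SSH client forwarding a bogus locale and can survive everything done on the server):

        # enable and build the locale
        sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
        locale-gen

        # set the system-wide default
        update-locale LANG=en_US.UTF-8

        # if the warnings only appear over SSH, stop the client's locale from
        # being forwarded: comment out "AcceptEnv LANG LC_*" in /etc/ssh/sshd_config
        # (or "SendEnv LANG LC_*" on the client) and restart sshd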

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and reinstalled my iMac; I am running 10.6.5. Prior to the format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on. The default root directory for my Apache web server points to an external hard drive. In my httpd.conf:

        DocumentRoot "/Storage/Sites"

    A few lines beneath that:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    And then beneath that:

        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>

    At the end of this file, I have commented out the user dir include:

        #Include /private/etc/apache2/extra/httpd-userdir.conf

    and uncommented the virtual hosts conf file:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem is that, despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they do the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
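    Before changing anything, it may help to see how Apache actually parsed the vhosts; a request whose Host header matches no vhost is served by the first one listed, which looks exactly like the symptom described:

        # dump the parsed virtual host table and the default vhost
        apachectl -S

        # verify which config files are loaded and that the syntax is clean
        apachectl configtest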

  • Apache2 SSL And Passenger Configuration Issue

    - by Aditya Manohar
    I have the following virtual host configuration blocks:

        <VirtualHost *:80>
            DocumentRoot /var/www/html/TestApp/public/
            <Directory /var/www/html/TestApp/public/>
                Allow from all
                Options -MultiViews
            </Directory>
        </VirtualHost>

        NameVirtualHost *:443
        <VirtualHost *:443>
            DocumentRoot /var/www/html/TestApp/public/
            <Directory /var/www/html/TestApp/public/>
                Allow from all
                Options -MultiViews
            </Directory>
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/server.crt
            SSLCertificateKeyFile /etc/pki/tls/private/server.key
        </VirtualHost>

    I'm trying to serve a Rails app off Passenger on Apache. The problem: TestApp works fine with Apache and Passenger when not using SSL, but when I use https://, I see the contents of /var/www/html. The path to TestApp is /var/www/html/TestApp. Any help will be much appreciated.
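    A plausible explanation, offered as an assumption since the rest of the config isn't shown: the distribution's stock SSL vhost (e.g. in ssl.conf) is defined before this one, so https requests fall through to it and its /var/www/html DocumentRoot. Listing the parsed vhosts per port usually settles it:

        # the first entry under *:443 wins for unmatched (or non-SNI) requests;
        # if it's the stock vhost, disable it or move this one ahead of it
        httpd -S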

  • Disabling LDAP Signing on Windows PDC in Local Policy

    - by Golmaal
    I just tripped over my own feet, it seems. Playing around on a Windows 2008 R2 server (set up as a domain controller), I was intrigued by a certain warning event (event ID 2886) which says: "To enhance the security of directory servers, you can configure both Active Directory Domain Services (AD DS) and Active Directory Lightweight Directory Services (AD LDS) to require signed Lightweight Directory Access Protocol (LDAP) binds." So I thoughtlessly did some Googling and set the relevant policies that enforce LDAP signing. I don't remember, but I may have done that in Local Policy. Now I have set up a pfSense box which must authenticate AD users via LDAP. While the firewall can communicate over a secure channel, it is difficult to arrange the same for other packages such as Squid and SquidGuard. So now I have to disable, i.e. undo, those policy changes. The problem is that they are greyed out! The policies in question are LDAP server signing and LDAP client signing. I don't remember what I did, but when I access these policies in the Local Policy editor on the server, they are set to "Require Signing" and are greyed out. The same policies can still be set via the Default Domain Controller option in the Group Policy editor. So how can I reset these greyed-out policies? Thanks
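    These policies end up as registry values, so one way to reset them, sketched here with the usual caveats (back up the key first, and note this reopens the unsigned-bind exposure the DC was warning about): LDAPServerIntegrity under NTDS\Parameters is 2 when signing is required and 1 when it is not:

        :: on the domain controller, stop requiring signed LDAP binds
        reg add HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters ^
            /v LDAPServerIntegrity /t REG_DWORD /d 1 /f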

  • Descending list ordered by file modification time

    - by LanceBaynes
    How can I generate a list of files in a directory [for example, "/mnt/hdd/PUB/"] ordered by the files' modification time? [In descending order: the oldest modified file is at the list's end.] ls -A -lRt would be great: https://pastebin.com/raw.php?i=AzuSVmrJ But if a file is changed in a directory, it lists the full directory, so the pastebined link isn't good [I don't want a list ordered by "directories", I need a "per file" ordered list]. OS: OpenWrt [no Perl - not enough space for it :( + no "stat" or "file" command].
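    A BusyBox-only sketch that works within those constraints, with one caveat: if the file list is too long for a single batch, xargs runs ls more than once and the global ordering breaks (file names with spaces also break the pipe):

        # newest first, oldest at the end, one line per file
        find /mnt/hdd/PUB/ -type f | xargs ls -ldt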

  • How do I set up a virtual host? (It's not working, and I've done everything right)

    - by piratepartypumpkin
    My router redirects port 80 to port 8080. The router works fine and my domain name is routed properly. This is my virtual hosts file:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot /home/admins/lampstack-5.3.16-0/apps/wordpress
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>

    I can access my website by entering "mywebsite.com:8080", but I cannot access it by entering "mywebsite.com". For further information, this is part of my httpd.conf:

        Listen 8080
        ServerName localhost:8080
        DocumentRoot "/home/admins/lampstack-5.3.16-0/apache2/htdocs"
        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>
        <Directory "/home/admins/lampstack-5.3.16-0/apache2/htdocs">
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
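    Since Apache itself listens on 8080 (the router only rewrites 80 to 8080 on the way in), the name-based vhost has to be declared on 8080 as well or it never matches; a sketch:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            DocumentRoot /home/admins/lampstack-5.3.16-0/apps/wordpress
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>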

  • Giving permission to write and read from /var/www

    - by mako
    I need that directory because I want to put my sites there, so that Apache can run them. It is my virtual directory path, and I am new to Linux. I just want to read and write in that directory. How do I enable creating/saving/reading files and folders in that directory? What command do I give? I tried a few, but I think I need to be a superuser to make the folder writable/readable. Note that I don't care about security.
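    One common recipe, assuming Apache runs as www-data and your login is "mako" (swap in your own user name):

        # hand /var/www to the www-data group and let group members write
        sudo chown -R www-data:www-data /var/www
        sudo chmod -R 775 /var/www

        # join that group yourself, then log out and back in
        sudo usermod -aG www-data mako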

  • Configuring ASP.NET web site in IIS 6.0

    - by Paul Knopf
    I have installed IIS and .NET 4.0 on Windows Server 2003. I have a web-ready website that targets .NET 4.0, and I have updated the default website's home directory to map to the website's directory. When I visit the website in a web browser (localhost, localhost:80), I get a 404 error (File or directory not found). Here is the IP address so you can see for yourself: http://72.45.244.92/ How do I get my ASP.NET 4.0 website to run?
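    Two IIS 6 steps are easy to miss and produce exactly this 404 (offered as the likely causes, not a certain diagnosis): ASP.NET 4.0 must be registered with IIS, and its web service extension, Prohibited by default, must be allowed:

        :: register ASP.NET 4.0 with IIS
        %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i

        :: then in IIS Manager -> Web Service Extensions, set
        :: "ASP.NET v4.0.30319" to Allowed, and pick 4.0.30319 on
        :: the web site's ASP.NET tab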

  • Quick and dirty user management service for Linux VMs?

    - by quack quixote
    Background: I have a home server running Debian, and a workstation that runs various VirtualBox VMs (mostly Linuxen but some Windows). At the moment, I'm creating my main user account anew for every new Linux VM. I'd like to use a centralized user-management scheme instead, so I can just configure new VMs for the directory technology and let them handle user lookups automatically. The last time I worked with anything like this, NIS+ was still in fashion. I have a vague notion of what LDAP and Active Directory are, but no knowledge of how to configure them for what I want.

    Question: What user-management/network-directory technology should I use for providing user accounts to my network?

    - The server must run on Debian Lenny.
    - Client configuration should be simple point-at-server-and-go.
    - I need an example configuration for one sample user account (see the sketch below).
    - (nice-to-have) I may want to mount the user's home directory from the server.
    - (nice-to-have) The same configuration works with Windows clients.
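    For the sample-account requirement, a minimal LDIF for an OpenLDAP posixAccount might look like this (every name and number below is a placeholder, not a recommendation of specific values):

        dn: uid=jdoe,ou=people,dc=example,dc=lan
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: shadowAccount
        uid: jdoe
        cn: John Doe
        sn: Doe
        uidNumber: 10001
        gidNumber: 10001
        homeDirectory: /home/jdoe
        loginShell: /bin/bash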

  • Debian 6: setting up FTP just for website editing

    - by David Oliver
    I have a VPS running Debian 6.0. Currently, SSH is set to accept only key-based logins, not passwords. A person who needs to work on one particular website (a vhost) wishes to use FTP. He doesn't need or want SSH. How can I set up FTP access for him, giving him write permission for all files in the relevant directory, and only the relevant directory? The directory is /srv/www/domainname.com/public_html. Currently, all directories and files there belong to www-data:www-data and are 644/755. I've installed vsftpd and have been reading some guides, but they all seem to deal with allowing multiple users to have their own user-named directories, which isn't what I'm after. I can't work out how to simply define one FTP user, with a password, that has access to one directory of my choosing. This is my first experience setting up an FTP server. Thanks. Edit: I have also found this - maybe I should be using ProFTPD, or can vsftpd also do what I want?
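    A sketch of one way to wire this up with vsftpd (the user name "siteftp" is invented for the example; vsftpd may also require the nologin shell to be listed in /etc/shells):

        # a system user whose home is the web root, with no shell access
        sudo useradd -d /srv/www/domainname.com/public_html -s /usr/sbin/nologin siteftp
        sudo passwd siteftp
        # write access via the www-data group
        sudo usermod -aG www-data siteftp
        sudo chmod -R g+w /srv/www/domainname.com/public_html

        # /etc/vsftpd.conf: local users only, jailed to their home directory,
        # and only users named in the list file may log in
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES
        userlist_enable=YES
        userlist_deny=NO
        userlist_file=/etc/vsftpd.userlist   # contains the single line: siteftp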

  • GreenGeeks Drupal install: ImageMagick "path /usr/bin/convert does not exist" error

    - by letapjar
    I just signed up with GreenGeeks. I have a Drupal install (6.19) in my public_html directory. The ImageMagick toolkit can't find the binary; the error I get is "the path '/usr/bin/convert' does not exist". When I use a terminal and run 'which convert', it shows /usr/bin/convert. Also, I have a second Drupal install in an addon domain; its home directory is above the public_html directory (in a directory called '/home/myusername/addons/seconddomain'). The Drupal install in the addon domain finds the ImageMagick binary just fine. I am at a total loss as to why the original install cannot find the binary. The tech support guys at GreenGeeks have no clue either. Any ideas of things to try?
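    One thing worth ruling out (an assumption, not a confirmed diagnosis): shared hosts often set open_basedir or disable exec-type functions per vhost, so the affected site's PHP may be blocked from a binary the shell can see. A throwaway script in the failing site shows what its PHP can reach:

        <?php
        // run from the affected vhost, then delete
        var_dump(file_exists('/usr/bin/convert'));      // blocked by open_basedir?
        var_dump(is_executable('/usr/bin/convert'));
        var_dump(ini_get('open_basedir'));
        var_dump(ini_get('disable_functions'));         // exec/shell_exec disabled?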

  • Cygwin: connect by SSH using RSA key; ssh.exe couldn't create /home/user/.ssh

    - by Kirzilla
    I'm using Windows XP and I'm trying to connect by SSH to a remote host using an RSA key. I've established that Cygwin recognizes the Documents and Settings dir as the home directory:

        Z:\app\cwRsync\bin>cygpath -H
        /cygdrive/c/Documents and Settings

    I've created a .ssh directory at Documents and Settings/user/.ssh and moved known_hosts, id_rsa, and id_rsa.pub there. Now I'm trying to connect to the remote host via ssh.exe:

        Z:\app\cwRsync\bin>ssh -p 22 [email protected]
        Could not create directory '/home/user/.ssh'.
        The authenticity of host '[remotehost.com]:22 ([remotehost.com]:22)' can't be established.
        RSA key fingerprint is f7:f4:2c:e0:c6:7e:d2:a4:45:70:63:df:bf:f2:84:46.
        Are you sure you want to continue connecting (yes/no)?

    What am I doing wrong? Why couldn't ssh.exe create the directory /home/user/.ssh? Thank you.
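    ssh.exe usually derives the home directory from the HOME environment variable, so pointing HOME at the directory that actually holds .ssh, or naming the files explicitly, is the common fix (a sketch using the paths and host from the post):

        set HOME=C:\Documents and Settings\user
        ssh -p 22 user@remotehost.com

        :: or bypass the home lookup entirely
        ssh -p 22 -i "C:\Documents and Settings\user\.ssh\id_rsa" ^
            -o UserKnownHostsFile="C:\Documents and Settings\user\.ssh\known_hosts" ^
            user@remotehost.com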

  • Why not install Msvcr71.dll into system32?

    - by hillu
    While looking for an authoritative source for the missing Msvcr71.dll that is needed by a few old applications, I stumbled across the MSDN article "Redistribution of the shared C runtime component in Visual C++". The advice given to developers is to drop the DLL into the application's directory instead of system32, since DLLs in this directory are considered before the system paths. What can/will go wrong if I (as an administrator, not a developer) take the lazy path and install Msvcr71.dll (and Msvcp71.dll while I'm at it) into the system32 directory (of 32-bit Windows XP or Windows 7 systems) instead of putting a copy in each application's directory? Is there another good solution to provide the applications with the needed DLLs that doesn't involve copying stuff to the application directories? Added after first answers: I understand that incompatible API changes may have been made to the mentioned DLLs, but pretty much every mention of incompatibilities I have found using Google had to do with games or video codecs. Right now, I expect the risk of breakage to be pretty small. Am I missing something?

  • Too Many Files In Debian Linux Folder?

    - by Dave Potts
    I've been using an external USB drive on a Debian server for backups. The drive is formatted as NTFS and mounted with ntfsmount. This was working fine, but I was filling up a directory with lots of files. Eventually the backup failed, and when I then tried to look at the directory using ls, it reported:

        ls: reading directory .: Numerical result out of range

    Looking in syslog, I also saw this:

        Sep 23 07:35:31 tosh ntfsmount[28040]: Failed to read index block: Numerical result out of range.

    Is this simply that I've reached the upper limit on the number of files in a directory? If so, is there any way to raise the number of allowed files?
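    NTFS itself allows millions of files per directory, so the ceiling here is more plausibly in the userspace ntfsmount driver (an assumption, not a verified limit). A cheap test is to remount with the ntfs-3g driver instead (device and mount point below are placeholders):

        umount /mnt/backup
        mount -t ntfs-3g /dev/sdb1 /mnt/backup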

  • RewriteRule in htaccess in subdirectory

    - by Jay
    Windows server, running Apache. In my Apache conf, I have AllowOverride None for the root of the site, and a subdirectory set to AllowOverride All:

        <Directory />
            AllowOverride None
        </Directory>
        <Directory "/safe/">
            AllowOverride All
        </Directory>

    However, when I try to set up a rewrite rule in the subdirectory's .htaccess file, nothing happens; I just get a 404 Page Not Found error. Example:

        RewriteEngine On
        RewriteRule (.*) /blah?test=$1 [R=302,NC,NE,L]

    Rewriting URLs works fine from the root via the Apache conf. I don't understand why the rule is ignored. I don't want to do the URL rewriting in the conf, because in this case I may need to change the redirects constantly and don't want to reload the server every time a change is made. I also don't want to affect server performance by enabling .htaccess files site-wide, just in the subdirectory where I need it.
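    Two frequent culprits in this setup, offered as guesses since only fragments are shown: per-directory rewrites require FollowSymLinks on that directory, and the per-directory engine strips the /safe/ prefix from the rule input, so a RewriteBase keeps substitutions anchored:

        # /safe/.htaccess
        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /safe
        RewriteRule (.*) /blah?test=$1 [R=302,NC,NE,L]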

  • 550 operation not permitted using FTP

    - by monkey_boys
    I'm using FTP to manage some files on a site I run but keep seeing this (truncated) error log:

        Command:  DELE calendarpermission.php
        Response: 550 calendarpermission.php: Operation not permitted
        [...]
        Command:  DELE button_down.gif
        Response: 550 button_down.gif: Operation not permitted
        Command:  CWD /domains/example.com/public_html/admincp
        Response: 250 CWD command successful
        Command:  PWD
        Response: 257 "/domains/example.com/public_html/admincp" is the current directory
        Command:  RMD control_examples
        Response: 550 control_examples: Operation not permitted
        Command:  CWD /domains/example.com/public_html
        Response: 250 CWD command successful
        Command:  PWD
        Response: 257 "/domains/example.com/public_html" is the current directory
        Command:  RMD admincp
        Response: 550 admincp: Operation not permitted
        Status:   Retrieving directory listing...
        Command:  PASV
        Response: 227 Entering Passive Mode (122,155,5,50,138,244).
        Command:  MLSD
        Response: 150 Opening ASCII mode data connection for MLSD
        Response: 226 Transfer complete
        Status:   Directory listing successful
        Status:   Set permissions of '/domains/example.com/public_html/admincp' to '777'
        Command:  SITE CHMOD 777 admincp
        Response: 550 CHMOD 777 admincp: Operation not permitted

    What do I do to solve this?
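    A 550 on DELE/RMD/SITE CHMOD usually means the FTP account doesn't own those files (for example, they were created by the web server or another account) or they carry the immutable attribute; both are assumptions, since the server side isn't shown. With shell or support access, this is quick to check:

        # who owns them, and is anything flagged immutable?
        ls -ld  /domains/example.com/public_html/admincp
        lsattr  /domains/example.com/public_html/admincp

        # if another user owns them, re-own to the FTP account (name is a placeholder)
        chown -R ftpuser:ftpuser /domains/example.com/public_html/admincp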

  • Is there a command to "manually automount" an attached disk?

    - by cheshirekow
    I have an extra hard drive which I use for backups. The label on its one and only partition is "backup". When I open Nautilus and click on "backup", it mounts the drive at /media/backup, and a little eject button appears next to its icon in Nautilus. If I instead mount the drive manually, by creating a directory and running "sudo mount /dev/sdx /some/dir", the eject icon still shows up in Nautilus, but when I press it I get an error, because the device was not mounted via whatever it is that mounts it the other way. What I would like is to be able to do this "mount to /media/backup and enable the eject button" from the command line. The goal is to have the device mounted by a script which needs the drive, and then leave it mounted until I manually eject it... if I want to. P.S. I'm aware that I can have the drive auto-mounted at startup, but that's not what I'm looking for here; I'd like to know if this is possible. Clarification: I'm looking for a command to "mount the drive the way Nautilus would". It should create the directory /media/backup and mount the device there; then, when I press the eject button in Nautilus, it should unmount the device and delete the directory.
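    On GNOME of this vintage, the command-line face of what Nautilus does is udisks (version 1): it mounts under /media using the filesystem label, and the eject button then behaves normally (device names below are placeholders):

        # mount the way the desktop would; creates and uses /media/backup
        udisks --mount /dev/sdb1

        # the command-line equivalent of the eject button
        udisks --unmount /dev/sdb1
        udisks --detach /dev/sdb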
