Search Results

Search found 16602 results on 665 pages for 'directory'.


  • Does Visual SourceSafe take .cvsignore as configuration?

    - by superuser
    An easy way to tell CVS to ignore these directories is to create a file named .cvsignore (note the leading period) in your top-level source directory. Has anyone verified this with VSS? Also, does VSS have command lines similar to these?
        * To refresh the state of your source code to that stored in the source repository, go to your project source directory and execute cvs update -dP.
        * When you create a new subdirectory in the source code hierarchy, register it in CVS with a command like cvs add {subdirname}.
        * When you first create a new source code file, navigate to the directory that contains it and register the new file with a command like cvs add {filename}.
        * If you no longer need a particular source code file, navigate to the containing directory and remove the file. Then deregister it in CVS with a command like cvs remove {filename}.
        * While you are creating, modifying, and deleting source files, changes are not yet reflected in the server repository. To save your changes in their current state, go to the project source directory and execute cvs commit. You will be asked to write a brief description of the changes you have just completed, which will be stored with the new version of any updated source file.
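    For comparison, VSS does ship a command-line client (ss.exe). A rough, unverified sketch of the closest counterparts to the CVS commands above (syntax from memory; the project path $/MyProject and file names are placeholders, so check ss /? before relying on this):

        rem point ss.exe at the project first
        ss Cp $/MyProject
        rem refresh the working copy      (roughly: cvs update)
        ss Get *.* -R
        rem register a new file           (roughly: cvs add)
        ss Add newfile.c
        rem drop a file                   (roughly: cvs remove)
        ss Delete oldfile.c
        rem send changes to the server    (roughly: cvs commit)
        ss Checkin changedfile.c

    As far as I know there is no direct .cvsignore equivalent in VSS; files are only versioned once they are explicitly added.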

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a reinstall a try, based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM I get errors which I believe are related to kernel modules not being loaded in the VM from the /vz/XXX.conf template model. I've been testing with a few posts I found, but I was stuck on this error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild module dependencies on the virtual machine with depmod -a, but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read about creating the directory empty and running "depmod -a" again. I no longer get the dependencies error, but I now get this and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM and that perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that the CentOS template has no issues with this, so I understand it is to do with the VM config. Any help would be greatly appreciated.
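    For what it's worth, in an OpenVZ container the modules cannot be loaded from inside the VM at all (there is no /lib/modules there); they have to be loaded on the hardware node and then granted to the container. A rough, untested sketch of that approach, assuming the container ID is 101 and only a handful of modules are needed:

        # on the hardware node, not inside the container
        modprobe -a ip_tables iptable_filter iptable_mangle ipt_state iptable_nat
        vzctl set 101 --iptables "ip_tables iptable_filter iptable_mangle ipt_state iptable_nat" --save
        vzctl restart 101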

    Read the article

  • How to bind an old user's SID to a new user to retain NTFS file ownership and permissions after a fresh reinstall of Windows?

    - by LiuYan
    Each time we reinstall Windows, it creates a new SID for a user even if the username is the same as before.

        // example (not real SID format, just to show the problem)
        user        SID
        --------------------
        liuyan      S-old-501   // old SID before reinstall
        liuyan      S-new-501   // new SID after reinstall

    The annoying problem after a reinstall is that NTFS file ownership and permissions on the hard drive are still associated with the old user's SID. I want to keep the ownership and permission settings of the NTFS files and let the new user take over the old user's SID, so that I can access files as before without permission problems. The cacls command-line tool can't be used in this situation, because the files don't belong to the new user, so it fails with an "Access is denied" error, and it can't change ownership. Even if I change the ownership via the SubInACL tool, cacls can't remove the old user's permissions because the old user no longer exists on the new installation, and it can't copy the old user's permissions to the new user. So, can we simply bind the old user's SID to the new user on a freshly installed Windows?

    Sample test batch:

        @echo off
        REM Additional tools used in this script
        REM PsGetSid  http://technet.microsoft.com/en-us/sysinternals/bb897417
        REM SubInACL  http://www.microsoft.com/en-us/download/details.aspx?id=23510
        REM
        REM make sure these tools are added into PATH
        set account=MyUserAccount
        set password=long-password
        set dir=test
        set file=test.txt

        echo Creating user [%account%] with password [%password%]...
        pause
        net user %account% %password% /add
        psgetsid %account%
        echo Done !

        echo Making directory [%dir%] ...
        pause
        mkdir %dir%
        dir %dir%* /q
        echo Done !

        echo Changing permissions of directory [%dir%]: only [%account%] and [%UserDomain%\%UserName%] has full access permission...
        pause
        cacls %dir% /G %account%:F
        cacls %dir% /E /G %UserDomain%\%UserName%:F
        dir %dir%* /q
        cacls %dir%
        echo Done !

        echo Changing ownership of directory [%dir%] to [%account%]...
        pause
        subinacl /file %dir% /setowner=%account%
        dir %dir%* /q
        echo Done !

        echo RunAs [%account%] user to write a file [%file%] in directory [%dir%]...
        pause
        runas /noprofile /env /user:%account% "cmd /k echo some text %DATE% %TIME% > %dir%\%file%"
        dir %dir% /q
        echo Done !

        echo Deleting and Recreating user [%account%] (reinstall simulation) ...
        pause
        net user %account% /delete
        net user %account% %password% /add
        psgetsid %account%
        echo Done ! %account% is recreated, it has a new SID now

        echo Now, use this "same" account [%account%] to access [%dir%], it will fail with "Access is denied"
        pause
        runas /noprofile /env /user:%account% "cmd /k cacls %dir%"
        REM runas /noprofile /env /user:%account% "cmd /k type %dir%\%file%"
        echo Done !

        echo Changing ownership of directory [%dir%] to NEW [%account%]...
        pause
        subinacl /file %dir% /setowner=%account%
        dir %dir%* /q
        cacls %dir%
        echo Done ! As you can see, "Account Domain not found" is actually the OLD [%account%] user

        echo Deleting user [%account%] ...
        pause
        net user %account% /delete
        echo Done !

        echo Deleting directory [%dir%]...
        pause
        rmdir %dir% /s /q
        echo Done !

    Read the article

  • Tag-structured filesystems

    - by A.Rashad
    I hope this is the correct site; I lose my way between the four sister sites :) Let me ask the question this way: all file systems I have seen so far are hierarchical. That means a root directory, with some branched directories, and so on, until we have files residing in these directories. The exception is the AS/400 file structure, which has the concept of a Library that serves somewhat like a directory, but only one level deep. Why not have directory-less filesystems, where files are placed in a single location and the file identifiers are referenced by a database of tag/file relationships? This way there would be no need for symbolic links, and one file could have relations to multiple subjects, not just a single parent directory containing it. I hope the idea is clear.

    Read the article

  • Using mod_rewrite to mask /cgi-bin/abc as /def

    - by Alois Mahdal
    I have a seemingly easy task, but somehow I just can't get it to work. Some interesting lines from my httpd.conf:

        ...
        DocumentRoot "D:/opt/apache/htdocs"
        ...
        ScriptAlias /cgi-bin/ "D:/opt/apache/cgi-bin/"
        ...
        <Directory "D:/opt/apache/htdocs">
            Options Indexes FollowSymLinks ExecCGI
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
        <Directory "D:/opt/apache/cgi-bin/">
            AllowOverride None
            Options ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    (I know it's dumb but it's only a testing machine :D.) Now, I have d:\opt\apache\cgi-bin\expired.pl and I expect GET /licensecheck.php?code=123456. I wish to fool the client into thinking it is speaking with /licensecheck.php, but actually return data from \expired.pl. What I tried was setting the following at the end of httpd.conf:

        RewriteEngine on
        RewriteRule ^/licensecheck.php$ /cgi-bin/expired.pl [T=application/x-httpd-cgi,L]

    ...but it keeps 404-ing me, looking for a cgi-bin directory (not cgi-bin\expired.pl) in my DocumentRoot!

        [error] [client 127.0.0.1] script not found or unable to stat: D:/opt/apache/htdocs/cgi-bin

    /cgi-bin/expired.pl and all other scripts in /cgi-bin/ work as expected. The only way I could make it work was actually putting \expired.pl in the DocumentRoot, but I don't want this; I want my cgi-bin neatly separated :)
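    A possible fix, untested: when the RewriteRule lives in the main server config, the rewritten path is not passed back through Alias/ScriptAlias processing unless the PT (passthrough) flag asks for it, so something along these lines may behave better:

        RewriteEngine on
        RewriteRule ^/licensecheck\.php$ /cgi-bin/expired.pl [PT,L]

    With PT, /cgi-bin/expired.pl should be mapped by the existing ScriptAlias to D:/opt/apache/cgi-bin/expired.pl instead of being looked up under the DocumentRoot.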

    Read the article

  • Running phpmyadmin and suphp

    - by thor
    I have a Debian Lenny web server. It is running apache2 with libapache2-mod-suphp. Unfortunately, suphp makes it impossible to use phpmyadmin, as phpmyadmin is installed in /usr/share/phpmyadmin and owned by root, and suphp disables its engine in this directory:

        $ cat /etc/apache2/mods-enabled/suphp.conf
        <IfModule mod_suphp.c>
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            suPHP_AddHandler application/x-httpd-php
            <Directory />
                suPHP_Engine on
            </Directory>
            # By default, disable suPHP for debian packaged web applications as files
            # are owned by root and cannot be executed by suPHP because of min_uid.
            <Directory /usr/share>
                suPHP_Engine off
            </Directory>
        </IfModule>

    Is there a possibility to enable the system phpmyadmin (maybe through the standard libapache2-mod-php5) while using suphp? How?
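    One possible arrangement, untested: keep suphp for user content and let mod_php handle only the root-owned Debian packages. Assuming libapache2-mod-php5 is enabled alongside mod_suphp, a sketch along these lines (placed e.g. in the php5 module or site config) might do it:

        <IfModule mod_php5.c>
            # keep mod_php switched off wherever suPHP is in charge
            <Directory />
                php_admin_flag engine off
            </Directory>
            # ...and switch it back on for the packaged apps under /usr/share
            <Directory /usr/share>
                php_admin_flag engine on
            </Directory>
        </IfModule>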

    Read the article

  • Virtualmin deactivating PHP on new virtual servers

    - by Josh
    This is related to my other question... but the situation is much worse now. After updating to the most recent version of Virtualmin, when I create new accounts, Virtualmin sets up their VirtualHost entries as follows:

        <Directory /home/username/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            FCGIWrapper /home/username/fcgi-bin/php.fcgi .php
        </Directory>
        <Directory /home/username/cgi-bin>
            allow from all
        </Directory>
        [...]
        RemoveHandler .php

    Now, not only is it specifically inserting AddHandler fcgid-script and FCGIWrapper... which I do not want because I am using mod_fastcgi, but it's also setting up PHP in such a way that it will never work! It's adding a RemoveHandler .php after setting up the handler for PHP! Where is this behavior configured and how can I stop it? Better yet, how can I make Virtualmin not include any PHP commands at all in the VirtualHost section?

    Read the article

  • httpd, vsftpd and the annoying selinux

    - by Christian
    I have CentOS 6.3 installed with httpd and vsftpd running, but I am unable to balance permissions between users being able to upload over FTP and their websites working. What I do: I create a user with their home directory as /home/username, I create a subfolder called html for their website, and then:

        chown -R username:apache /home/username
        chmod -R 750 /home/username
        chcon -R -t httpd_sys_rw_content_t /home/username

    With that, their website loads fine but they are unable to FTP. If I instead do the following, they can FTP but their website doesn't load:

        chcon -R -t user_home_dir_t /home/username

    If I disable SELinux, the user can FTP and the website loads. So what is the answer that lets me keep SELinux enabled?
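    A direction worth trying, untested: leave the home directory with its normal context so vsftpd is happy, give only the html subfolder the httpd context (persistently, via semanage rather than a one-off chcon), and enable the booleans that let both daemons into home directories:

        # semanage comes from policycoreutils-python
        semanage fcontext -a -t httpd_sys_rw_content_t "/home/username/html(/.*)?"
        restorecon -Rv /home/username
        setsebool -P httpd_enable_homedirs on
        setsebool -P ftp_home_dir on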

    Read the article

  • Apache AliasMatch and DirectoryMatch not working?

    - by Alex
    I have the following config. Please notice the Alias and Directory equivalents: uncommented, they work as expected, but the dynamic/regex-based versions don't. Any ideas?

        <VirtualHost *:80>
            ServerName temp.dev.local
            ServerAlias temp.dev.local
            DocumentRoot "C:\wamp\www\temp\public"
            <Directory "C:\wamp\www\temp\public">
                AllowOverride all
                Order Allow,Deny
                Allow from all
            </Directory>

            # Alias /private/application/core/page/assets/images/ "C:/wamp/www/temp/private/application/core/page/assets/images/"
            # <Directory "C:/wamp/www/temp/private/application/core/page/assets/images/">
            AliasMatch ^/private/application/(.*)/(.*)/assets/images/ /private/application/$1/$2/assets/images/
            <DirectoryMatch "^/private/application/(.*)/(.*)/assets/images/">
                Options Indexes FollowSymlinks MultiViews Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </DirectoryMatch>
        </VirtualHost>
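    Two things that may explain it, sketched and untested: the second argument of AliasMatch has to be a filesystem path (the commented-out Alias points at C:/wamp/..., while the AliasMatch does not), and DirectoryMatch matches filesystem directories, not URLs. So something closer to this might work:

        AliasMatch ^/private/application/([^/]+)/([^/]+)/assets/images/(.*)$ "C:/wamp/www/temp/private/application/$1/$2/assets/images/$3"
        <DirectoryMatch "^C:/wamp/www/temp/private/application/[^/]+/[^/]+/assets/images/">
            Options Indexes FollowSymlinks MultiViews Includes
            AllowOverride None
            Order allow,deny
            Allow from all
        </DirectoryMatch>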

    Read the article

  • Linux and NTFS permissions

    - by VGE IT
    I'm trying to restrict a folder within a directory created on a Linux filesystem. I have changed the permissions to: root rwx, a special Active Directory group rwx, and all others r. Despite this, people who are not in the special AD group can access the directory and modify files. When a user modifies documents within the directory, the group changes to "Domain Users", and I have to manually change the documents' group back to my AD group. I have tried to create another AD group and modify permissions to deny write access. When doing so through Windows Explorer, the settings seem to take effect until I go back in and look at the permissions for the restricted group; no permissions show when I view them the second time. Please assist. Samba share properties:

        [MyShare]
            comment = "blah blah blah"
            browseable = yes
            guest ok = no
            read only = no
            path = /xxx/xxxxx/
            create mask = 0640
            directory mask = 0750
            admin users = @"domain\Domain Admins", @"domain\group A", @"domain\group B"
            valid users = @"domain\Domain Admins", @"domain\group A", @"domain\group B"
            nt acl support = Yes
            inherit acls = yes
            inherit owner = yes
            inherit permissions = yes
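    One Samba-side knob that may address the "group flips to Domain Users" part, untested: force group makes every file created through the share belong to a given group regardless of the user's primary group. A sketch, assuming "group A" is the group that should own the files:

        [MyShare]
            ...
            force group = domain\group A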

    Read the article

  • How to fix a failed boot/mount that drops to the initramfs prompt in Ubuntu 12.04?

    - by msPeachy
    My Ubuntu partition does not boot. This started after a power interruption during system boot. The next time I booted, I encountered the following error messages:

        mount: mounting /dev/disk/by-uuid/3f7f5cd9d-6ea3-4da7-b5ec-**** on /root failed: Invalid argument
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /dev on /root/dev failed: No such file or directory
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /proc on /root/proc failed: No such file or directory
        Target file system doesn't have /sbin/init.
        No init found. Try passing init= bootarg.

        Busybox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs) _

    I've searched for similar posts here, and most of the recommended solutions are to reboot into the Ubuntu LiveCD. That's another problem, because I cannot boot from a live USB either; this is the error message I get when booting from one:

        Busybox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs) mount: mounting /dev/sda2 on /isodevice failed: Invalid argument
        Could not find the ISO /ubuntu-12.04-desktop-i386.iso.
        This could also happen if the file system is not clean because of an operating system crash, an interrupted boot process, an improper shutdown, or unplugging of a removable device without first unmounting or ejecting it. To fix this, simply reboot into Windows, let it fully start, log in, run 'chkdsk /r', then gracefully shut down and reboot back into Windows. After this you should be able to reboot again and resume the installation.

    I cannot boot into Windows because I don't have a Windows partition. Do I have to install Windows to fix this problem? Is there a way to fix this at the (initramfs) prompt? Please help. Thank you!
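    If fsck happens to be available in the initramfs, one thing worth trying from that prompt, as an untested sketch (substitute the real root partition for /dev/sda1):

        (initramfs) fsck -y /dev/sda1
        (initramfs) exit

    A clean filesystem check followed by exit sometimes lets the boot continue.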

    Read the article

  • File access with hostname or ip only - no domain?

    - by Jonathon
    It seems likely that this is an obvious question, but I'm having trouble tracking down any useful information. Normally, when accessing files in a particular directory on a server, I'm able to create a virtual host and assign a domain, root directory location, etc. However, I'm in a situation where I have server space and need to access files with only a hostname. Is this possible? For example, let's say the hostname is 123hostname.com, and the file I want access to is in /home/sub-directory/filename.php. How do I get at it via a browser? I've tried:

        http://123hostname.com/home/sub-directory/filename.php

    ...and some other variations on that theme (which I can't post because new users are restricted to one link per message), but I'm generally stuck. Any help, even if it's just to let me know that this isn't possible without some additional configuration, would be great. Thank you!
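    Without extra configuration the web server only exposes what sits under its document root, so a URL can't reach into /home directly. A minimal sketch of one way to expose that directory, assuming you can edit the Apache config and want it reachable under /files/:

        Alias /files/ "/home/sub-directory/"
        <Directory "/home/sub-directory">
            Order allow,deny
            Allow from all
        </Directory>

    After a reload, the file would then be reachable as http://123hostname.com/files/filename.php.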

    Read the article

  • Apache ScriptAlias and access error?

    - by Parhs
    First of all, after much pain I figured out how to make this work in Apache 2.4 on Windows. Here is my configuration, which seems to work successfully for git clone and push and everything.

    Problem: my configuration works, and there is a "Require all denied" on the / directory because I want only git functionality and nothing else. Example request from a git client:

        192.168.100.252 - - [07/Oct/2012:04:44:51 +0300] "GET /git/simple/info/refs?service=git-upload-pack HTTP/1.1" 200 264

    Error caused by that request:

        [Sun Oct 07 04:44:51.903334 2012] [authz_core:error] [pid 6988:tid 956] [client 192.168.100.252:13493] AH01630: client denied by server configuration: C:/git-server/web/simple

    There isn't any error on the git client; everything works fine, but I get this in the error log. Is there any solution to stop this error from appearing? I worry about log size.

        <VirtualHost *:80>
            DocumentRoot "C:\git-server\web"
            ServerName git.****censored****
            DirectoryIndex index.php
            SetEnv GIT_PROJECT_ROOT c:/git-server/repositories
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            ScriptAlias /git/ "C:/Program Files (x86)/Git/libexec/git-core/git-http-backend.exe/"
            <LocationMatch "^/.*/git-receive-pack$">
                Options +ExecCGI
                AuthType Basic
                AuthName intranet
                AuthUserFile "C:/git-server/config/users"
                Require valid-user
            </LocationMatch>
            <Directory />
                Options All
                Require all denied
            </Directory>
            <Directory "C:\Program Files (x86)\Git\libexec\git-core">
                Options +ExecCGI
                Options All
                Require all granted
            </Directory>
        </VirtualHost>
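    If the denial itself is intentional and only the log noise is the problem, Apache 2.4 allows the log level to be lowered per module; an untested sketch that would silence routine authz_core denials while keeping everything else at warn:

        LogLevel warn authz_core:crit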

    Read the article

  • Apache 2.2.14: SSLCARevocation location

    - by Doc
    I am installing a .crl in my Apache config. It looks like this:

        <VirtualHost default>
            DocumentRoot "web"
            ServerName example.com
            SSLEngine on
            SSLCertificateFile "cert.crt"
            SSLCertificateKeyFile "key.key"
            SSLCertificateChainFile "cert.ca-bundle"
            SSLProtocol -all +SSLv3
            SSLCipherSuite SSLv3:+HIGH:+MEDIUM
            <Directory>
                Order deny,allow
                Allow from all
                SSLCACertificateFile "ClientRootCert.crt"
                SSLVerifyClient require
                SSLVerifyDepth 3
                SSLCARevocationFile "CRLList.crl"
            </Directory>
        </VirtualHost>

    When Apache is started, I get the error:

        SSLCARevocationFile not allowed here

    When I place SSLCARevocationFile above the Directory tag, Apache starts, but all client certs are rejected with the message ssl_error_expired_cert_alert (both revoked and active certs). How to solve this?
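    Per the mod_ssl documentation, SSLCARevocationFile (like SSLCACertificateFile) is only valid in server config or virtual-host context, which is why it is rejected inside the Directory block; keeping the client-cert directives at the vhost level is the expected layout. As for every certificate then being rejected, one untested thing worth checking is whether the CRL itself has expired, since an out-of-date CRL can make verification fail for all client certs:

        openssl crl -in CRLList.crl -noout -lastupdate -nextupdate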

    Read the article

  • debian/ubuntu locales and language settings

    - by AndreasT
    This self-answered question solves the following issues:

        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory

    and some other locale-related problems.
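    The usual Debian/Ubuntu fix, as an untested sketch (substitute the locale you actually want):

        sudo locale-gen en_US.UTF-8
        sudo update-locale LANG=en_US.UTF-8
        # or pick locales interactively:
        sudo dpkg-reconfigure locales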

    Read the article

  • OpenLDAP RHEL 6

    - by AndyM
    Hi all. I've been configuring OpenLDAP on RHEL 6, and it seems you have to run the following to rebuild the config directories. I'm OK with that, but my issue is: say I want to change the server password; do I have to go through the whole process every time I change the config? Is there a way of changing the slapd config after it's been built, using the RHEL 6 method? Below is the advice I've found on the net, from http://www.linuxtopia.org/online_books/rhel6/rhel_6_migration_guide/rhel_6_migration_ch07s03.html

    This example assumes that the file to convert from the old slapd configuration is located at /etc/openldap/slapd.conf and the new directory for OpenLDAP configuration is located at /etc/openldap/slapd.d/.

        # Remove the contents of the new /etc/openldap/slapd.d/ directory:
        rm -rf /etc/openldap/slapd.d/*
        # Run slaptest to check the validity of the configuration file and specify the new configuration directory:
        slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d
        # Configure permissions on the new directory:
        chown -R ldap:ldap /etc/openldap/slapd.d
        chmod -R 000 /etc/openldap/slapd.d
        chmod -R u+rwX /etc/openldap/slapd.d
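    For what it's worth, the cn=config layout is designed to be edited online without rebuilding, so a password change can be done with ldapmodify while slapd is running. A rough, untested sketch (the database DN may differ on your system, so check it with an ldapsearch against cn=config, and generate the hash with slappasswd first):

        ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
        dn: olcDatabase={2}bdb,cn=config
        changetype: modify
        replace: olcRootPW
        olcRootPW: {SSHA}paste-slappasswd-output-here
        EOF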

    Read the article

  • How can I call a URL as a cron job in Webmin?

    - by EmmyS
    (Possibly this belongs on Stack Overflow, although it's not really a programming issue since the code works when run directly. If it needs to be moved, though, no problem.) I have a PHP file (which consumes a National Weather Service web service via SOAP, if it matters) that I need to run on a scheduled basis. I'm trying to set up a cron job in Webmin. If I use an absolute path to the file in the Command field, when I run it I get some strange errors:

        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 1: ?php: No such file or directory
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 2: //: is a directory
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 3: //DOCUMENTATION: No such file or directory
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 4: //: is a directory
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 5: syntax error near unexpected token `"running client code",'
        /var/www/html/mysite.com/test/ndfdXMLclient.php: line 5: `error_log("running client code", 1, "[email protected]");'

    The actual code in my file for those 5 lines looks like this:

        <?php
        // ***************************************************************************
        //DOCUMENTATION FROM WEATHER.GOV ALL STORED IN xmlClientComments.txt
        // ***************************************************************************
        error_log("running client code", 1, "[email protected]");

    The code runs perfectly fine when I run it directly in my browser, so why doesn't Webmin recognize it as code? (The same thing happens if I enter the actual URL in the Command field: http://mysite.com/test/ndfdXMLclient.php.) I've never worked with Webmin before; most of our hosts' cron control panels allow cron jobs to run PHP files like this with no issue. Is there some trick to getting Webmin to read PHP as actual PHP?
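    The errors suggest the file is being handed to the shell rather than to PHP (a shell sees "?php" and "//" as commands). Two untested sketches of cron commands that avoid that:

        # run the file through the PHP command-line interpreter
        /usr/bin/php /var/www/html/mysite.com/test/ndfdXMLclient.php
        # or fetch the URL so the web server's PHP runs it
        wget -q -O /dev/null http://mysite.com/test/ndfdXMLclient.php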

    Read the article

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic a typical shared hosting setup as closely as I can. That is, I'd like my server to have multiple FTP user accounts, each of which links to a directory containing the webroot of a site, and I'd like Apache to be able to easily see and manipulate these files. I'll admit I'm not as familiar with Fedora as I'd like; I run Ubuntu on my home box, and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue this route, and perhaps mount web directories in user home folders so that FTP could still be used to access them, even though Apache sees them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this? I'm using vsftpd right now to host web directories, which is why I like the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access and such, to another post. Right now I'm interested in just getting FTP and Apache to play nice in a multi-user environment.)

    PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user that the FTP account saves files as, there are permission errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP default to giving this group the read/write access Apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
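    A rough, untested sketch of the shared-group idea from the PS, assuming the Apache user is apache and the web content lives in each user's public_html (the names webdev and someftpuser are placeholders):

        groupadd webdev
        usermod -aG webdev apache
        usermod -aG webdev someftpuser
        chgrp -R webdev /home/someftpuser/public_html
        # setgid bit so new files keep the webdev group
        chmod -R 2775 /home/someftpuser/public_html
        # in /etc/vsftpd/vsftpd.conf, make uploads group-writable
        local_umask=002

    SELinux contexts would still need to be handled separately, e.g. with semanage/restorecon as in the usual httpd home-directory setups.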

    Read the article

  • How to download all file content from a folder using wget and http

    - by user1526912
    I am trying to use wget over HTTP to download all the contents of folderAA below into the directory /root/sstest:

        wget -r --directory-prefix="/root/sstest" -o /root/sstest2.log http://site.com/folder1/folder2/folderAA/

    When I submit the above command, nothing is downloaded. If I submit a wget request for a specific file from folderAA, the file is actually downloaded to /root/sstest:

        wget -r --directory-prefix="/root/sstest" -o /root/sstest2.log http://site.com/folder1/folder2/folderAA/file.txt

    Can someone tell me why I cannot download all the file content from folderAA at once using the first wget request?
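    Recursive HTTP retrieval only works if the server returns a directory listing (index page) for that URL for wget to crawl, and wget also honours robots.txt by default. If a listing does exist, an untested variation worth trying:

        wget -r --no-parent -e robots=off --directory-prefix="/root/sstest" \
             -o /root/sstest2.log http://site.com/folder1/folder2/folderAA/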

    Read the article

  • apache php access rights configuration

    - by AndreasT
    Hi, I am a complete Apache (and co.) newb. Currently the server serves only the default page, and on the default page the user cannot list the directory or files. But when I create a directory, say /var/www/foobar, and place files in it, the user can see the contents of that directory by going to www.mydomain.org/foobar. I run pretty much the default configuration: on Directory "/" I have FollowSymlinks and AllowOverride None, and on what DocumentRoot points to I have Indexes FollowSymlinks MultiViews and "allow from all" set. My questions are:

        * Can I stop people from listing subdirectories?
        * Can people, if I do not change the configuration, in some way read the PHP files in there? (I mean not the rendered page, but the .php page source.)

    Pointers to good resources about this would also be nice. Thx in advance.
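    A minimal sketch for the first question, untested: directory listings come from the Indexes option, so removing it from the DocumentRoot's Directory block (or from a .htaccess, if AllowOverride permits it) turns the listings off:

        <Directory /var/www>
            Options -Indexes FollowSymLinks MultiViews
        </Directory>

    For the second question, as long as mod_php stays enabled, requests for .php files are executed and only their output is sent, so the source is not served; it would only leak if PHP handling were disabled or misconfigured.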

    Read the article

  • Are my Linux symbolic links acting correctly?

    - by Andy Castles
    Hi all. I've been using Linux on and off for the last 15 years, and today I came across something in bash that surprised me. Set up the following directory structure:

        $ cd /tmp
        $ mkdir /tmp/symlinktest
        $ mkdir /tmp/symlinktest/dir
        $ mkdir /tmp/symlinktarget

    Now create two symlinks in symlinktest pointing to symlinktarget:

        $ cd /tmp/symlinktest
        $ ln -s ../symlinktarget Asym
        $ ln -s ../symlinktarget Bsym

    Now, in bash, the following tab completion does strange things. Type the following:

        $ cd dir
        $ cd ../A[TAB]

    Pressing the tab key above completes the line to:

        $ cd ../Asym/

    as I expected. Now press enter to change into Asym and type:

        $ cd ../B[TAB]

    This time pressing the tab key completes the line to:

        $ cd ../Bsym[space]

    Note that there is now a space after Bsym and there is no trailing slash. My question is: why, when changing from the physical directory "dir" to Asym, does it recognise that Asym is a link to a directory, but when changing from one symlink to another, it doesn't recognise that it's a link to a directory? In addition, if I try to create a new file within Asym, I get an error message:

        $ cd /tmp/symlinktest/Asym
        $ cat hello > ../Bsym/file.txt
        -bash: ../Bsym/file.txt: No such file or directory

    I always thought that symlinks were mostly transparent except to programs that need to manipulate them. Is this normal behaviour? Many thanks, Andy
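    A small demonstration of what is going on, untested but standard bash behaviour: the shell tracks a logical working directory while the kernel resolves paths against the physical one, so ".." means different things to each:

        $ cd /tmp/symlinktest/Asym
        $ pwd -L    # /tmp/symlinktest/Asym   (bash's logical view)
        $ pwd -P    # /tmp/symlinktarget      (the kernel's physical view)

    "cd ../Bsym" works because cd is a shell builtin that uses the logical path, but a redirection such as "> ../Bsym/file.txt" is resolved by the kernel against /tmp/symlinktarget, where ../Bsym means /tmp/Bsym and does not exist.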

    Read the article

  • Apache HTTPd FollowSymLinks path permission

    - by apast
    Hi, I'm configuring my development environment with a basic Apache httpd configuration, but to avoid a common problem I want to map my test URL to my development folder. I'm using Ubuntu. My development path is located under the following example path:

        /home/myusername/myworkspace/hptargetpath/src/pages

    Consider the following symbolic link mapping:

        # ls -l /opt/share/www/mydevelopmentrootpath
        lrwxrwxrwx 1 root root 77 2011-02-13 18:53 /opt/share/www/mydevelopmentrootpath -> /home/myusername/myworkspace/hptargetpath/src/pages

    With this folder mapping, I configured Apache httpd with the following configuration:

        <VirtualHost *:*>
            ServerName local.server.com
            ServerAdmin [email protected]
            DirectoryIndex index.html
            DocumentRoot /opt/share/www/mydevelopmentrootpath
            <Directory /opt/share/www/mydevelopmentrootpath/ >
                Options +Indexes
                Options +FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    But I'm receiving a 403 Forbidden error when I try to access index.html at http://local.server.com/index.html:

        403 Forbidden
        You don't have permission to access /index.html on this server.

    In the httpd debug log, I see the following message:

        [Sun Feb 13 19:34:47 2011] [error] [client 127.0.1.1] Symbolic link not allowed or link target not accessible: /opt/share/www/mydevelopmentrootpath

    I'm thinking that this problem is generated by some path permission: not a direct permission on the directory, but on some intermediate directory in the path. There's a directive in the httpd core Options, SymLinksIfOwnerMatch ("The server will only follow symbolic links for which the target file or directory is owned by the same user id as the link"), but I tested it without effect. Can somebody help me? I think it's a trivial configuration for a development environment. Best regards, And Past
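    A quick way to confirm the intermediate-directory theory, as an untested sketch: the user Apache runs as (www-data on Ubuntu) needs execute/search permission on every directory along the link target's path, and namei shows each component's mode at a glance:

        namei -m /home/myusername/myworkspace/hptargetpath/src/pages
        # if components under /home/myusername lack o+x, something like:
        chmod o+x /home/myusername \
                  /home/myusername/myworkspace \
                  /home/myusername/myworkspace/hptargetpath \
                  /home/myusername/myworkspace/hptargetpath/src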

    Read the article

  • How to exclude a specific URL from basic authentication in Apache?

    - by ripper234
    Two scenarios:

    Directory: I want my entire server to be password-protected, so I included this directory config in my sites-enabled/000-default:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            AuthType Basic
            AuthName "Restricted Files"
            AuthUserFile /etc/apache2/passwords
            Require user someuser
        </Directory>

    The question is: how can I exclude a specific URL from this?

    Proxy: I found that the above password protection doesn't apply to mod_proxy, so I added this to my proxy.conf:

        <Proxy *>
            Order deny,allow
            Allow from all
            AuthType Basic
            AuthName "Restricted Files"
            AuthUserFile /etc/apache2/passwords
            Require user someuser
        </Proxy>

    How do I exclude a specific proxied URL from the password protection? I tried adding a new segment:

        <Proxy http://myspecific.url/>
            AuthType None
        </Proxy>

    but that didn't quite do the trick.
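    On Apache 2.2 the usual escape hatch is Satisfy Any, which lets a host-based Allow satisfy the request instead of the Basic auth. An untested sketch for the non-proxied case (/public/path is a placeholder for the URL to exclude; a similar block can be tried inside the matching <Proxy> section for the proxied URL):

        <Location /public/path>
            Order allow,deny
            Allow from all
            Satisfy Any
        </Location>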

    Read the article

  • Declaring multiple ports for the same VirtualHost

    - by user65567
    I want to declare multiple ports for the same VirtualHost. Here is what I have:

        SSLStrictSNIVHostCheck off
        # Apache setup which will listen for and accept SSL connections on port 443.
        Listen 443
        # Listen for virtual host requests on all IP addresses
        NameVirtualHost *:443
        <VirtualHost *:443>
            ServerName domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
            <Directory "/Users/<my_user_name>/Sites/domain/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

    How can I declare a new port ('Listen', ServerName, ...) for domain.localhost? If I add the following code, Apache also answers (too eagerly) for all other subdomains of domain.localhost (subdomain1.domain.localhost, subdomain2.domain.localhost, ...):

        <VirtualHost *:80>
            ServerName pjtmain.localhost:80
            DocumentRoot "/Users/Toto85/Sites/pjtmain/public"
            RackEnv development
            <Directory "/Users/Toto85/Sites/pjtmain/public">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
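    An untested sketch of what often fixes the "catches every subdomain" part: with name-based hosting on port 80 there needs to be a NameVirtualHost *:80 (and a Listen 80), and the first *:80 vhost listed acts as the default for any unmatched Host header, so declaring a catch-all default first keeps pjtmain.localhost from receiving everything (the catch-all ServerName and DocumentRoot below are hypothetical placeholders):

        Listen 80
        NameVirtualHost *:80
        # hypothetical default/catch-all vhost, declared before the real ones
        <VirtualHost *:80>
            ServerName default.localhost
            DocumentRoot "/Users/Toto85/Sites/default/public"
        </VirtualHost>
        <VirtualHost *:80>
            ServerName pjtmain.localhost
            DocumentRoot "/Users/Toto85/Sites/pjtmain/public"
        </VirtualHost>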

    Read the article

  • Why not install Msvcr71.dll into system32?

    - by hillu
    While looking for an authoritative source for the missing Msvcr71.dll that is needed by a few old applications, I stumbled across the MSDN article "Redistribution of the shared C runtime component in Visual C++". The advice given to developers is to drop the DLL into the application's directory instead of system32, since DLLs in the application's directory are found before the system paths. What can or will go wrong if I (as an administrator, not a developer) decide to take the lazy path and install Msvcr71.dll (and Msvcp71.dll while I'm at it) into the system32 directory of 32-bit Windows XP or Windows 7 systems, instead of putting a copy in each application's directory? Is there another good solution for providing the applications with the needed DLLs that doesn't involve copying them to the application directories?

    Read the article
