Search Results

Search found 17278 results on 692 pages for 'directory conventions'.

  • IIS7 doesn't monitor changes across symlinks

    - by Matt Hensley
    I've used the mklink utility to create a symlink to a directory of web content. IIS7 doesn't "see" changes to any classic ASP files in this linked directory without issuing an iisreset. I've disabled caching, and file changes are picked up for other, static files (such as .html), but .asp files are ignored.
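
    One workaround worth trying, as a hedged sketch (verify the attribute name on your build with "appcmd list config -section:system.webServer/asp"): disable the classic ASP script file cache so every request re-reads the .asp source from disk, taking the symlinked path out of the change-notification equation:

      rem set the ASP template cache to zero entries, then restart IIS
      %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/asp /cache.scriptFileCacheSize:"0"
      iisreset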

  • Slash after domain in URL missing for Rails site

    - by joshee
    After redirecting users in a Rails app, for some reason the slash after the domain is missing. Generated URLs are invalid and I'm forced to manually correct them. The problem only occurs on a subdomain; on a different primary domain (same server) everything works OK. For example, after logging out, the site directs to https://www.sub.domain.comlogin/ rather than https://www.sub.domain.com/login. I suspect the issue has something to do with the vhost setup, but I'm not sure. Here are the broken and working vhosts:

    BROKEN SUBDOMAIN

      <VirtualHost *:80>
        ServerName www.sub.domain.com
        ServerAlias sub.domain.com
        Redirect permanent / https://www.sub.domain.com
      </VirtualHost>

      <VirtualHost *:443>
        ServerAdmin [email protected]
        ServerName www.sub.domain.com
        ServerAlias sub.domain.com
        RailsEnv production
        # SSL Engine Switch
        SSLEngine on
        # SSL Cipher Suite:
        SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
        # Server Certificate
        SSLCertificateFile /path/to/server.crt
        # Server Private Key
        SSLCertificateKeyFile /path/to/server.key
        # Set header to identify https requests for Mongrel
        RequestHeader set X_FORWARDED_PROTO "https"
        BrowserMatch ".*MSIE.*" \
          nokeepalive ssl-unclean-shutdown \
          downgrade-1.0 force-response-1.0
        DocumentRoot /home/usr/www/www.sub.domain.com/current/public/
        <Directory "/home/usr/www/www.sub.domain.com/current/public">
          AllowOverride all
          Allow from all
          Options -MultiViews
        </Directory>

    WORKING PRIMARY DOMAIN

      <VirtualHost *:80>
        ServerName www.diffdomain.com
        ServerAlias diffdomain.com
        Redirect permanent / https://www.diffdomain.com
      </VirtualHost>

      <VirtualHost *:443>
        ServerAdmin [email protected]
        ServerName www.diffdomain.com
        ServerAlias diffdomain.com
        ServerAlias *.diffdomain.com
        RailsEnv production
        # SSL Engine Switch
        SSLEngine on
        # SSL Cipher Suite:
        SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
        # Server Certificate
        SSLCertificateFile /path/to/server.crt
        # Server Private Key
        SSLCertificateKeyFile /path/to/server.key
        # Set header to identify https requests for Mongrel
        RequestHeader set X_FORWARDED_PROTO "https"
        BrowserMatch ".*MSIE.*" \
          nokeepalive ssl-unclean-shutdown \
          downgrade-1.0 force-response-1.0
        DocumentRoot /home/usr/www/www.diffdomain.com/current/public/
        <Directory "/home/usr/www/www.diffdomain.com/current/public">
          AllowOverride all
          Allow from all
          Options -MultiViews
        </Directory>
      </VirtualHost>

    Please let me know if there's anything else I could provide that would help determine what's wrong here.

    UPDATE: tried adding a trailing slash to the redirect command, but still no luck.
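
    The symptom matches how Apache's Redirect directive builds the target: the part of the request after the matched prefix (here login/) is appended directly to the target URL, so a target without a trailing slash yields ...comlogin/. A hedged sketch of the usual fix (note that browsers cache permanent redirects aggressively, so a previously issued 301 can mask the correction until the browser cache is cleared, or until a temporary redirect is used while testing):

      # target ends with "/" so the request suffix is appended cleanly
      Redirect permanent / https://www.sub.domain.com/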

  • rsync not writing files

    - by Cyrcle
    I'm trying to set up rsync to back up a remote directory to my local drive. I cd to the directory that I want to pull the files to, then I enter:

      rsync -vrtW [email protected]:~/public_html

    I enter the password and it starts running. I get all the files listed, but none of them actually transfer. What am I missing? Thanks
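
    A hedged guess: rsync invoked with a single remote argument and no destination merely lists the source rather than copying it (documented behavior, and it matches "all the files listed, but none transfer"). Adding a destination, for instance the current directory, turns it into a copy:

      rsync -vrtW [email protected]:~/public_html .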

  • Mac OS X Lion and KeepassX

    - by Hallsie11
    I installed KeepassX on my new Mac, which I am still learning the ropes on. I downloaded the plugin that allows autofill in Chrome (KeePassHttp.plgx). It says it needs to be put into the KeepassX directory, but I can't drag it into the application file, and I'm not sure yet how to use Terminal. Is Terminal what I need to use, or is there another way to put it in the directory? At this point it shows up in the Chrome interface, but says it can't detect or make a connection with KeepassX.
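
    On macOS an application "file" is actually a folder (a bundle): right-clicking KeePassX.app in Finder and choosing "Show Package Contents" lets you browse inside without Terminal. As a hedged Terminal sketch (the destination path inside the bundle is an assumption, and it's worth verifying that KeepassX can load .plgx plugins at all, since .plgx is the plugin format of KeePass 2.x on Windows):

      # hypothetical plugin location inside the app bundle
      cp ~/Downloads/KeePassHttp.plgx /Applications/KeePassX.app/Contents/MacOS/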

  • cd Command Linux and Mystery Flags

    - by Jason R. Mick
    Platform: CentOS 6.2. Shell: tcsh. I'm playing around with cd for a bash script and noticed the wondrous cd - option, but was left with many questions...

    Why cd -? Isn't it redundant with cd ..? EDIT: as FatalError points out, these two commands don't do the same thing, so the answer is "no".

    Can you delve farther back into your history with the - flag, as in a browser? E.g. when I type cd -, it takes me to my previous directory, but entering the command again takes me back to the directory I just came from, creating a sort of loop. Is a shorthand for going back multiple levels supported? EDIT: I realize I can go back with cd .., but was hoping this could be a gateway to a less verbose deep back, e.g. cd -3 vs. cd ../../../. Hopefully that clarifies what I'm asking. EDIT 2: as to the current feedback, while .. is a special directory, I don't see why the shell's built-in cd couldn't offer a shorthand for ../../../, e.g. cd ..5, or keep a history (a la auto pushd/popd) that could be turned on and used like cd -3. I get that this could be somewhat of a security/privacy risk, but I don't see how it's any worse than storing a command history, which most shells/terminals do.

    The manpage for cd, accessible via man cd and help cd (it's the same for either command), only lists the -L and -P flags. However, when I type cd --help it outputs: Usage: cd [-plvn][-|<dir>].. Am I right in assuming the other flags and the - (back) option are nonstandard? What are the -n and -v flags for? Both seem to take me back to my home directory; that's all I've been able to figure out by experimentation. A quick read of web resources [1][2] offered just the same sort of info that the man page did and didn't answer my questions.

    Note: the second Linux-centric resource above claimed cd only had two options (obviously not true in current CentOS), hence my assumption that this functionality could be non-standard.
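
    There is no standard cd -3, but a tiny wrapper gets close. A minimal sketch for bash (the name "up" is arbitrary):

      # up 3  is equivalent to  cd ../../..
      up() {
        local n=${1:-1} path=""
        while (( n-- > 0 )); do path+="../"; done
        cd "$path" || return
      }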

  • Mass Checksumming tool for Windows?

    - by Daniel Magliola
    Hi, I'm looking for a command-line tool for Windows that will walk a directory tree recursively and output a list of all the files in it, with a checksum for each file (CRC, MD5, whatever). Essentially, I want to compare two big directory trees on two machines: I plan to run the tool on both, then diff the outputs to make sure they're identical. I appreciate any ideas.
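
    If PowerShell counts as a command-line tool, a minimal sketch (Get-FileHash ships with PowerShell 4.0 and later; the root prefix is stripped so the outputs from two machines diff cleanly):

      $root = "C:\tree"
      Get-ChildItem -Path $root -Recurse -File |
        Get-FileHash -Algorithm MD5 |
        ForEach-Object { "{0}  {1}" -f $_.Hash, $_.Path.Substring($root.Length) } |
        Sort-Object |
        Set-Content hashes.txt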

  • Accidental Extract Location - How to Clean Up?

    - by Gordon
    Sometimes I will run a command such as

      unzip tons_of_files.zip

    and I will forget to pass -d to point to a subdirectory. This fills the current folder with tons of files that are intermixed with the existing ones. What is the best way to remove all these new files and/or move them to a new directory? I want to avoid having to manually examine the directory and determine whether each file was part of the archive or was already present.
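
    A hedged sketch: the archive itself knows which names it contains, so its listing can drive the cleanup (assumes Info-ZIP's unzip; note it will also remove a pre-existing file that happens to share a name with an archive entry, and plain rm skips the archive's directories):

      # -Z1 prints just the entry names, one per line
      unzip -Z1 tons_of_files.zip | while IFS= read -r f; do
        rm -v -- "$f"
      done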

  • How to make newly created files have group rw permission

    - by Master
    I have CentOS 5.4. In a Joomla website, when I install a new PHP script it creates its own folder, like images/scriptname. That folder has read and write permission only for one user (the Apache user, I think). I want all files created under the /home/ directory to have group write permission by default. I don't want this for a specific folder or a specific user, but for all users' folders inside /home. Is it OK to do this, or is there another solution?
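
    A hedged sketch of the usual ingredients: a setgid bit on a directory makes new files inherit its group, and a default ACL grants the group write access regardless of the creating process's umask (assumes the filesystem is mounted with ACL support; "someuser" and the group "users" are placeholders):

      chgrp -R users /home/someuser
      chmod g+s /home/someuser                  # new entries inherit the group
      setfacl -R -d -m g::rwx /home/someuser    # default ACL: group write on new files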

  • How to use rsync when filenames contain double quotes?

    - by wfoolhill
    I am trying to synchronize the content of the directory my_dir/ from /home to /backup. This directory contains a file whose name has a double quote in it, such as to"to. Here is my rsync command:

      rsync -Cazh /home/my_dir/ /backup/my_dir/

    And I get the following message:

      rsync: mkstemp "/backup/my_dir/.to"to.d93PZr" failed: Invalid argument (22)

    For info, rsync works fine when the synchronized filenames contain single quotes, parentheses and spaces. So why does it choke on a double quote? Thanks for any help.
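
    A hedged guess: by the time mkstemp runs, the quote is just a byte in a filename, so EINVAL usually points at the destination filesystem rather than at rsync's quoting; filesystems such as NTFS or SMB/CIFS mounts reject " in names. A quick test on the destination:

      # if this fails too, the destination filesystem is the culprit
      touch '/backup/my_dir/to"to'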

  • Restarting or stopping apache results in waiting forever

    - by steko
    I have two simple WSGI apps running on top of mod_wsgi and apache2 on a test development server. There is no mod_python on this machine. The WSGI configuration is as follows:

      WSGIDaemonProcess tops stack-size=524288 maximum-requests=5
      WSGIScriptAlias /tops /home/ubuntu/tops-cloud/tops.wsgi
      <Directory /home/ubuntu/tops-cloud>
        WSGIProcessGroup tops
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
      </Directory>

      WSGIDaemonProcess flaskal maximum-requests=5
      WSGIScriptAlias /c14 /home/ubuntu/c14/flaskal/flaskal.wsgi
      <Directory /home/ubuntu/c14/flaskal>
        WSGIProcessGroup flaskal
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
      </Directory>

    If I make changes to the app, I need to restart the web server, so I would expect that a simple sudo service apache2 restart does what I need. Same goes for any changes to the config (e.g. number of maximum requests, etc.). Instead, it never ends "waiting", like this:

      $ sudo service apache2 restart
       * Restarting web server apache2 ... waiting ..................................................

    until I just do CTRL-C. At that point, the only way to resume a working server is to kill the process and restart it, not very convenient. The same happens with the stop command. The error logs at the "debug" level show the following lines after a failed restart:

      [Wed Nov 14 21:55:19 2012] [notice] caught SIGTERM, shutting down
      [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Shutdown requested 'tops'.
      [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Stopping process 'tops'.
      [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Destroying interpreters.
      [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Cleanup interpreter ''.
      [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Terminating Python.
      [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Shutdown requested 'flaskal'.
      [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Stopping process 'flaskal'.
      [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Destroying interpreters.
      [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Cleanup interpreter ''.
      [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Terminating Python.
      [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Python has shutdown.
      [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Python has shutdown.

    If I then try to restart again (with the process still running), I get the following error:

       * Restarting web server apache2
      (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
      no listening sockets available, shutting down
      Unable to open logs
      Action 'start' failed.
      The Apache error log may have more information.

    Unfortunately the Apache error log doesn't have anything. When apache2 is running properly, both apps work without any problem.
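
    A hedged diagnostic for the next hang: "Address already in use" means the old parent process never released port 80, so checking what still holds the port, and which apache or wsgi processes survive the stop, narrows down whether the mod_wsgi daemon processes are what's blocking shutdown:

      # who still owns port 80, and what apache/wsgi processes remain
      sudo netstat -tlnp | grep ':80 '
      ps aux | grep -E 'apache2|wsgi' | grep -v grep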

  • How to ensure precedence of files over directories with Apache?

    - by janeden
    My httpd.conf uses the MultiViews option to serve HTML files for URLs like http://server/blog. This works fine, unless there is a directory with the same name, in which case Apache will serve the directory instead. Is there any way to ensure precedence of blog.html over blog/, or rather: can I make Apache run MultiViews content negotiation even though a matching entity (the directory) is present? In nginx, I can do this explicitly:

      try_files $uri $uri.html $uri/ =404;
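
    One alternative that sidesteps MultiViews entirely, offered as a hedged sketch: a mod_rewrite rule that checks for the .html file first, so the file wins even when a directory of the same name exists:

      RewriteEngine On
      # if the requested path plus ".html" is a real file, serve that
      RewriteCond %{REQUEST_FILENAME}.html -f
      RewriteRule ^(.*)$ $1.html [L]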

  • Grab latest version of PHP by URL? [closed]

    - by luckytaxi
    I'm unable to find a directory listing of all PHP packages. I'm basically looking for the latest stable version of 5.3, using some sort of script that checks PHP's website. If I can get a directory listing (can't seem to find one), I can do the rest. I figured it out, but seeing that someone else posted something similar for Apache and got blasted for putting it on Stack Overflow, I figured I would save myself the trouble and post it here.
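
    A hedged sketch of the scraping approach (fragile by design: it assumes the php.net downloads page links release tarballs named php-X.Y.Z.tar.gz, so verify the pattern before trusting it in a script):

      # print the highest 5.3.x release currently linked on php.net
      curl -s https://www.php.net/downloads.php \
        | grep -o 'php-5\.3\.[0-9]*\.tar\.gz' \
        | sort -t. -k3 -n | tail -1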

  • Should UNIX users have the same group

    - by jason
    I have a web server (Ubuntu 12.04 LTS, if needed) that multiple people use, with Apache, PHP5, and MySQL installed. All users have access to SSH. All users' home directories are /home/USER. I was wondering:

    1. What user group should users be in, or should they each have their own group?
    2. What user and group should Apache run under?
    3. What file permissions should the users' /home/USER/public_html and /home/USER directories have, as well as the files in them (including PHP files with sensitive information such as DB passwords)?

    Thanks :)
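
    A hedged sketch of one common arrangement (it assumes Debian/Ubuntu's www-data Apache user and mod_php, so PHP runs as www-data; "alice" is a placeholder):

      chgrp www-data /home/alice
      chmod 710 /home/alice                      # web server may traverse, others get nothing
      chgrp -R www-data /home/alice/public_html
      chmod 2750 /home/alice/public_html         # setgid keeps new files in group www-data
      find /home/alice/public_html -type f -exec chmod 640 {} +   # config.php unreadable to other users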

  • IIS, SSL, and Virtual Directories

    - by yodie
    I'm running a web server on Windows Server 2003 with IIS 6.0. Some of the content is on that server, but most is in a virtual directory linked to another server. Everything works (almost) fine when no SSL is used. However, when using SSL, I cannot access the files in the virtual directory; instead I get a generic error 500. Any advice?

  • How do I change a character embedded in filenames to another character?

    - by PilotMom
    In a directory with multiple subdirectories, I need to change filenames that contain the "_" character to use another character, ".". Example: current filename ABC12345_DEF should become ABC12345.DEF. I need to do this recursively through a directory tree, and the last three characters of the filename are not always the same. Using rename with wildcards on either side of the "_" or "." doesn't work (plus I need to do this through several directories).
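
    A hedged sketch in plain shell (assumes bash, that only the last "_" in each name needs swapping, and filenames without embedded newlines; test on a copy first):

      # -name '*_*' matches the file name only, so the last "_" in the
      # whole path is guaranteed to sit in the name part
      find . -type f -name '*_*' | while IFS= read -r f; do
        mv -- "$f" "${f%_*}.${f##*_}"
      done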

  • Zipping only files using powershell

    - by SteB
    I'm trying to zip all the files in a single directory to a different folder as part of a simple backup routine. The code runs OK but doesn't produce a zip file:

      $srcdir = "H:\Backup"
      $filename = "test.zip"
      $destpath = "K:\"

      $zip_file = (new-object -com shell.application).namespace($destpath + "\" + $filename)
      $destination = (new-object -com shell.application).namespace($destpath)

      $files = Get-ChildItem -Path $srcdir

      foreach ($file in $files) {
        $file.FullName;
        if ($file.Attributes -cne "Directory") {
          $destination.CopyHere($file, 0x14);
        }
      }

    Any ideas where I'm going wrong?
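
    Two hedged observations: Shell.Application's NameSpace() returns $null for a zip file that does not exist yet, and the loop copies into $destination (the K:\ folder) rather than into the zip, so the files land next to it. A sketch of the usual workaround: write an empty zip stub first, then copy into its namespace (CopyHere is asynchronous, hence the crude wait):

      $zipPath = Join-Path $destpath $filename
      # 22-byte empty zip: "PK" + end-of-central-directory marker
      Set-Content $zipPath ("PK" + [char]5 + [char]6 + ("$([char]0)" * 18))
      $zip = (New-Object -ComObject Shell.Application).NameSpace($zipPath)
      foreach ($file in (Get-ChildItem -Path $srcdir | Where-Object { -not $_.PSIsContainer })) {
        $zip.CopyHere($file.FullName)
        Start-Sleep -Milliseconds 500   # CopyHere returns before the copy finishes
      }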

  • Get Forbidden running under admin

    - by Overdose
    I'm running on localhost under an admin login, and I get Forbidden every time:

      [Mon Jun 07 19:17:40 2010] [error] [client 127.0.0.1] client denied by server configuration: C:/WebPages/

      <Directory "C:/WebPages/*">
        Order allow,deny
        Allow from all
        AddHandler wsgi-script .wsgi
        Options ExecCGI
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ /cms66.wsgi/$1 [QSA,PT,L]
      </Directory>
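
    A hedged guess based on the error path being C:/WebPages/ itself: a wildcard section such as <Directory "C:/WebPages/*"> applies to the subdirectories of C:/WebPages but not to the directory itself, so the Allow never reaches requests mapped to the root of the tree. Dropping the wildcard is worth a try:

      <Directory "C:/WebPages">
        Order allow,deny
        Allow from all
      </Directory>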

  • Is it possible to view the contents of an underlying NFS mount without unmounting the NFS content?

    - by Brent
    I have a shared directory on a server, let's call it /home/shared, which is mounted with content from another server via NFS. When it is unmounted, /home/shared is supposed to be empty; however, running du -x on the directory indicates that it is not. I cannot unmount the NFS content to inspect the mount point, since it is in use by others. Is there any way that I can view/edit the contents of the actual mount point (not the NFS content) while leaving the NFS content mounted for others to use?
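
    A minimal sketch of the usual Linux trick: bind-mount the parent filesystem somewhere else, which exposes the underlying directory while the NFS mount stays up for everyone:

      mkdir -p /mnt/rootview
      mount --bind / /mnt/rootview
      ls -la /mnt/rootview/home/shared    # the files hidden underneath the NFS mount
      umount /mnt/rootview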

  • Redirect to folder IIS 6

    - by Matthias
    I have an ASP.NET webpage in IIS 6. There are a lot of URLs already indexed by Google, and links set in web catalogs, that look like this: www.mypage.com/directory1/page.aspx. Now I have changed things so that the URL looks like this: www.mypage.com/page.aspx. I want the URLs with the directory in the path to redirect to the URLs without the directory, so that the links that are already set and indexed can stay as they are. How can I achieve this with IIS 6?
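
    A hedged sketch using IIS 6's redirect variables: in IIS Manager, on the directory1 folder, choose "A redirection to a URL", tick "The exact URL entered above", and use $S (the remaining suffix of the request) plus $Q (the query string) in the target:

      http://www.mypage.com$S$Q

    With that, /directory1/page.aspx should redirect to http://www.mypage.com/page.aspx.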

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: we need a rock-solid, reliable file-mover process. We have source directories, often being written to, that we need to move files from. The files come in pairs: a big binary and a small XML index. We get a CTL file that defines these file bundles. A process operates on the files once they are in the destination directory, and it gets rid of them when it's done. Would rsync do the best job, or do we need something more complex?

    Long story: we have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now, the SFTP servers, that we cannot presently run our own scripts on. For security reasons, we can't run the copy jobs on our AIX servers; they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines, and then deletes the files from the source. However, we keep finding bugs and poorly handled error checking, and none of us are Java experts, so fixing/improving this may be difficult.

    Concerns for us: with a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync will be very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?

    Additional info: we are concerned about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are still copying them; it waits until the small XML index file is present, and our current copy jobs are supposed to copy the XML file last. Sometimes the network has problems, sometimes the SFTP source servers crap out on us, sometimes we typo the config files and a destination directory doesn't exist. We never want to lose a file to this sort of error, and we need good logs.

    If you were presented with this, would you just script up some rsync? Or would you build or buy a tool, and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
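
    For the XML-last requirement specifically, a hedged sketch of how plain rsync could honor it in two passes (assumes a reasonably recent rsync; --remove-source-files deletes a source file only after it has been fully transferred, but verify that against your version and your "file still being written" concern before trusting it):

      # pass 1: big binaries first; indexes and control files excluded
      rsync -av --exclude='*.xml' --exclude='*.ctl' src/ dest/
      # pass 2: everything, so each .xml index arrives after its binary;
      # sources are removed only once confirmed at the destination
      rsync -av --remove-source-files src/ dest/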

  • Automatically copy files to USB drive when connected

    - by Daphna
    I am looking for a solution for copying all the files from a specific directory on the hard drive to a specific directory on a USB memory device, once the device is connected. I have a program that downloads podcast episodes for me, and I would like these files to be automatically moved (or at least copied) to my mp3 player when I connect it to the computer. I have both Windows XP and Linux machines, so a solution for either will work for me.
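
    On the Linux side, a hedged udev sketch (the MP3PLAYER filesystem label and the script path are placeholders; find your device's label with "udevadm info", and note that long copies should not run directly inside a RUN handler, so the script would typically detach itself before mounting and rsyncing):

      # /etc/udev/rules.d/99-podcast-sync.rules
      ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="MP3PLAYER", RUN+="/usr/local/bin/podcast-sync.sh"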

  • extracting files from tar

    - by shantanuo
    I am using the following tar option to remove the leading directories, and it works as expected:

      tar -xvf company_raw_2012-03-16.tgz --directory=/root/test --strip-components=4

      --strip-components NUMBER
             strip NUMBER of leading components from file names before extraction

    But it works only when I know that there are going to be 4 subdirectories. I have tar files where I don't know whether there will be 2, 3 or 4 folders inside. How do I strip the entire path and extract the files directly into the given --directory path?
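
    A hedged sketch with GNU tar's --transform, which rewrites member names through a sed expression and so can drop however many leading directories there happen to be (preview with -t and --show-transformed-names first; note that files with the same base name in different subdirectories will collide):

      # strip everything up to the last "/" in every member name
      tar -xvf company_raw_2012-03-16.tgz --directory=/root/test \
          --transform='s|.*/||' --show-transformed-names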
