Search Results

Search found 45466 results on 1819 pages for 'config files'.

Page 501/1819 | < Previous Page | 497 498 499 500 501 502 503 504 505 506 507 508  | Next Page >

  • RewriteRule applies its pattern even though one of the RewriteConds failed

    - by BHare
        #www. domain . tld
        RewriteCond %{HTTP_HOST} (?:.*\.)?([^.]+)\.(?:[^.]+)$
        RewriteCond /home/%1/ -d
        RewriteRule ^(.+) %{HTTP_HOST}$1
        RewriteRule (?:.*\.)?([^.]+)\.(?:[^.]+)/media/(.*)$ /home/$1/client/media/$2 [L]
        RewriteRule (?:.*\.)?([^.]+)\.(?:[^.]+)/(.*)$ /home/$1/www/$2 [L]

    Here is the rewritelog output:

        #(4) RewriteCond: input='tfnoo.mydomain.org' pattern='(?:.*\.)?([^.]+)\.(?:[^.]+)$' [NC] => matched
        #(4) RewriteCond: input='/home/mydomain/' pattern='-d' => not-matched
        #(3) applying pattern '(?:.*\.)?([^.]+)\.(?:[^.]+)/media/(.*)$' to uri 'http://www.mydomain.org/files/images/logo.png'
        #(3) applying pattern '(?:.*\.)?([^.]+)\.(?:[^.]+)/(.*)$' to uri 'http://www.mydomain.org/files/images/logo.png'
        #(2) rewrite 'http://www.mydomain.org/files/images/logo.png' -> '/home/mydomain/www/logo.png'

    Note on the second (4) line that the -d (directory exists) test failed, which is correct: there is no /home/mydomain/ directory. Therefore it should never rewrite, at least according to my understanding that all RewriteRules are subject to the RewriteConds as logical ANDs.
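
    For reference, mod_rewrite applies a block of RewriteCond directives only to the single RewriteRule that immediately follows it, so the two [L] rules above run unguarded; that matches the log, where only the first rule was skipped. A sketch of repeating the guard before each dependent rule (patterns copied from the question, untested):

        # A RewriteCond block guards only the next RewriteRule, so the
        # directory check must be restated before every dependent rule
        RewriteCond %{HTTP_HOST} (?:.*\.)?([^.]+)\.(?:[^.]+)$
        RewriteCond /home/%1/ -d
        RewriteRule (?:.*\.)?([^.]+)\.(?:[^.]+)/media/(.*)$ /home/$1/client/media/$2 [L]

        RewriteCond %{HTTP_HOST} (?:.*\.)?([^.]+)\.(?:[^.]+)$
        RewriteCond /home/%1/ -d
        RewriteRule (?:.*\.)?([^.]+)\.(?:[^.]+)/(.*)$ /home/$1/www/$2 [L]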

    Read the article

  • MacOS X 10.6 Portable Home Directory sync fails due to FileSync agent crashing

    - by tegbains
    On one of our cleanly installed MacPro machines running MacOS X 10.6.6, connected to our MacOS X 10.6.6 Server, syncing data using Portable Home Directories fails. It seems to be due to the FileSync agent crashing during the home sync. We get -41 and -8062 errors, which we suspect indicate that there is too much data or that the FileSync agent can't read the files. The user is the owner of the files and can read/write all of them.

        < Logout 0:: [11/02/04 13:10:42.751] Error -41 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2. (source = NO)
        < Logout 0:: [11/02/04 13:10:42.758] Error -8062 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2/[email protected]. (source = NO)
        < Logout 1:: [11/02/04 13:10:42.758] -[DeepCopyContext deepCopyError:sourceError:sourceRef:]: error = -8062, wasSource = NO: return shouldContinue = NO

    Read the article

  • What does it mean for MalwareBytes to find malicious registry keys but nothing else?

    - by EndangeringSpecies
    I have a machine that is obviously infected, and when I ran MalwareBytes it told me that it found some "malicious" registry keys (surprisingly enough, these contained file paths to currently non-existent JavaScript files). But that's it: a full scan did not uncover any malicious files, or malicious hidden processes in memory. For example, maybe the (hidden?) process that for whatever reason periodically injects keystrokes (hotkeys?) into whatever window is currently open. Then on another, not obviously infected, machine it found a "malware.trace" registry key, but again no files or processes, etc. How does this jibe with people's experience with MalwareBytes? Does it usually find registry-key symptoms of an infection but nothing else? Or is it common to have no infection but some malicious registry keys in place anyway?

    Read the article

  • How to determine main movie DVD track before ripping via mencoder

    - by Ampp3
    Maybe there's a simple answer to this, but when looking at the files on a DVD (IFOs, VOBs, etc.), is there a way to easily determine the longest/main track? I'm trying to automate the process of finding the main movie track on a DVD and am running into issues. I thought this could be done by finding the BIGGEST track: look through the VTS_XX_N.VOB files, where XX is the track number, and find the track with the largest total file size (summing the sizes of the VOB files for that track). But apparently that isn't correct: one DVD had track 7 as the largest track (by my method), but mencoder didn't produce the correct output with this track; it worked with track 9 instead. Am I missing something? EDIT: I've heard of the utility 'lsdvd' for getting track information, but I was hoping to avoid compiling it and use a basic method instead (i.e., what I tried above). Does anyone have any idea WHY my idea didn't work?
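
    For what it's worth, mencoder's dvd://N counts DVD titles from the IFO tables, and several titles can share one VTS title set, so the VTS file number and the title number need not line up; that may be why track 9 worked where the biggest title set was 7 (lsdvd reads exactly those tables). A rough shell version of the size heuristic, mount point assumed:

        # Sum the VOB sizes per VTS title set (menu VTS_XX_0.VOB included);
        # the largest set is usually, but not always, the main feature
        for first in /media/dvd/VIDEO_TS/VTS_*_1.VOB; do
            n=$(basename "$first" | cut -d_ -f2)
            du -ch /media/dvd/VIDEO_TS/VTS_"$n"_*.VOB | tail -n1 | \
                sed "s/total/title set $n/"
        done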

    Read the article

  • Renaming a Debian package

    - by Tabiko
    I'm trying to build a customized version of the nginx package for Debian/Ubuntu with a different set of modules than the default build. What would be the fastest way to modify the debian/ structure (and which files) if I want to rename the package from 'nginx' to 'my-nginx', for example? I've got the source deb package unpacked; which files in the nginx-1.4.5/debian/ directory (holding the control, rules, etc. files) would I need to modify to have buildpackage generate a my-nginx-1.4.5.deb package instead of nginx-1.4.5.deb? I appreciate your help!
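
    A sketch of the usual renaming steps (package name and version are taken from the question; the sed patterns, version suffix, and helper-file renames are assumptions to check against the actual debian/ contents):

        cd nginx-1.4.5
        # Rename the source and binary package stanzas in debian/control
        sed -i 's/^Source: nginx/Source: my-nginx/' debian/control
        sed -i 's/^Package: nginx/Package: my-nginx/' debian/control
        # Helper files are looked up by package name, so rename them too
        # (debian/nginx.install -> debian/my-nginx.install, and so on)
        # Record the rename in the changelog, then rebuild
        dch --package my-nginx --newversion 1.4.5-1custom1 -b "Custom module set"
        dpkg-buildpackage -b -us -uc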

    Read the article

  • What's the difference between WSGI <app> and <module>?

    - by Leftium
    I followed these instructions to serve Python (Web2Py) via uWSGI. However, the web server returned an error ("uWSGI Error: Python application not found") until I modified the config.xml config file from:

        <uwsgi>
            <pythonpath>/var/web2py/</pythonpath>
            <app mountpoint="/">
                <script>wsgihandler</script>
            </app>
        </uwsgi>

    to:

        <uwsgi>
            <pythonpath>/var/web2py/</pythonpath>
            <module>wsgihandler</module>
        </uwsgi>

    What's the difference between <app> and <module>? Why did <module> work, but not <app>?

    Read the article

  • How to set default permissions for automounted FAT drives in Ubuntu

    - by piman
    I've got many FAT32 drives that I'd like to mount in Ubuntu such that they have permission mode 700 for directories and 600 for all other files. By default, they get 755 for all files, which is not particularly useful, since almost no non-directories should be executable, and it screws up version-control repos hosted on the drives. "Back in the day" I would have listed the drives in /etc/fstab with the umask/dmask I wanted, and there was no such thing as a default. These days, drives automount under their volume names, which is great, except now I have no idea how to set the default. I have tried changing the /system/storage/default_options/vfat/mount_options gconf key with no apparent effect. It was 077 initially, but the mounted drive reflected a default of 022; changing it and re-inserting the drives resulted in the files still having permission bits of 755.
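
    For reference, the old-style /etc/fstab line would look like the sketch below (device and mount point are assumptions); fmask=0177 gives files mode 600 and dmask=0077 gives directories 700:

        # /etc/fstab entry for a FAT32 drive (example device and mount point)
        /dev/sdb1  /media/usbdrive  vfat  user,fmask=0177,dmask=0077  0  0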

    Read the article

  • Upload a directory recursively to an FTP server

    - by Nicolas Raoul
    I am writing a Linux shell script to copy a local directory to a remote server (removing any existing files). Local server: ftp and lftp commands are available, no ncftp or any graphical tools. Remote server: only accessible via FTP; no rsync, SSH, or FXP. I am thinking about listing the local and remote files to generate an lftp script and then running it. Is there a better way? Note: uploading only modified files would be a plus, but is not required.
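
    For what it's worth, lftp's built-in mirror mode already covers this, including the modified-files-only wish; a sketch with assumed credentials and paths:

        # -R mirrors in reverse (upload); --delete removes remote files
        # that are gone locally; --only-newer skips unchanged files
        lftp -u user,password ftp://ftp.example.com \
             -e "mirror -R --delete --only-newer /local/dir /remote/dir; quit"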

    Read the article

  • How to access USB stick content from VMWare running Ubuntu 10.10?

    - by JVerstry
    Hi, I am running Ubuntu 10.10 via VMWare under Windows 7. I have followed the procedure to attach the USB stick; it is now connected to the host. However, I don't know how to access the content of the stick. My Google research indicates that this may be a mounting issue. I read somewhere that I should check /proc/bus/usb, but the usb directory does not exist in /proc/bus. Unfortunately, I am not a Linux expert. The ultimate issue I am trying to solve is the one described here. I am trying to use vi to create ~/.vmware/config, but it is virtually impossible to use vi, since I don't have access to the arrow keys (a chicken & egg problem). I have created the config file on my USB stick and want to copy it to where it should be. Thanks!
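
    If the stick is attached to the VM but not auto-mounted, a manual mount is usually enough; a sketch (the device name is an assumption: check the end of dmesg for the actual /dev/sdX):

        dmesg | tail                      # see which device the stick became
        sudo mkdir -p /mnt/usb
        sudo mount /dev/sdb1 /mnt/usb     # assuming the stick is sdb1
        mkdir -p ~/.vmware
        cp /mnt/usb/config ~/.vmware/config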

    Read the article

  • Linux: Alternative to rsync? (ie, scp with resume)

    - by Joernsn
    I've been using rsync to automatically send files from one box to another, which is great compared to scp, since it supports resuming. However, when resuming a very large file (10 GB), rsync has to read both files and compare them, which is very slow. I don't need fancy error handling, just "scp with resume", so here's my question: is there an alternative to rsync/scp that supports resuming without having to read both source and destination files? I've read the manuals without finding anything I can use; please let me know if I've missed something. This is the rsync line I've been using:

        rsync -av --partial --progress --inplace SRC DST
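
    One option short of replacing rsync (a sketch using stock rsync >= 3.0 flags): --append resumes by appending to the shorter destination file instead of running the delta algorithm over both copies, and --append-verify additionally re-checksums the pre-existing data in a single pass:

        rsync -av --partial --progress --append-verify SRC DST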

    Read the article

  • How to download a URL as a file?

    - by Michelle
    A website has "hidden" some mp3 files by embedding them as Shockwave files, as follows:

        <span class="caption"><!-- Odeo player --><embed
            src="http://odeo.com/flash/audio_player_tiny_gray.swf"
            quality="high" name="audio_player_tiny_gray" align="middle"
            allowScriptAccess="always" wmode="transparent"
            type="application/x-shockwave-flash"
            flashvars="valid_sample_rate=true external_url=http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3"
            pluginspage="http://www.macromedia.com/go/getflashplayer"></embed></span>

    How can I download the files for off-line listening? I've found two methods:

    1. The StackOverflow method: create a new local HTML file with just the links, e.g.

        <a href="http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3">Sunday Edition 25Nov2008</a>

       then open the file in the browser, right-click the link, and choose Save Link As.

    2. The SuperUser method: install the Firefox add-in iGet (be sure to use the right version for your Firefox version), then choose Tools > Downloads and enter the URL in the field.

    Are there any other ways?
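
    For the command line: once the URL has been pulled out of the flashvars attribute, any downloader can fetch it directly (a sketch, assuming wget or curl is installed):

        wget http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3
        # or equivalently:
        curl -O http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3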

    Read the article

  • Panic Transmit file upload

    - by 1ndivisible
    I've ditched Coda and bought Transmit. I'm a little confused by the file uploading. I have exactly the same folder structure remotely and locally, but if I right-click a file and choose Upload "SomeFileName.html", the file is always uploaded into the root of the remote site, even if the file is in a folder. If I choose to upload a file at assets/images/some_image.png, I would expect it to be uploaded to the same folder on the remote server, not the root. Coda dealt with this perfectly and also told me which files had been modified and needed uploading. Transmit doesn't seem to do either of these things. So my questions are:

    1. How can I upload a file to the same path on the remote server without having to drag and drop?
    2. Is there any way to have Transmit mark edited files, or upload only edited files?

    [There is no tag for Transmit, so if someone with more rep could make and add one, that would be grand]

    Read the article

  • How to get filename of job in cups?

    - by Grook
    I have printed a couple of files and lpstat shows that they are completed, but the output is something like this:

        # lpstat -W completed -l
        Canon-1     root    1086464  Sat May 21 22:47:03 2011
            Alerts: job-canceled-by-user
            queued for Canon
        Canon-2     root     337920  Mon May 23 20:18:02 2011
            Alerts: job-canceled-by-user
            queued for Canon
        CanonWin-3  root      17408  Mon May 23 20:29:40 2011
            Alerts: job-completed-successfully
            queued for CanonWin

    How can I get the names of the files that have been printed? P.S. Is there any bash script that would let me get the names of all files that have been printed?
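
    One avenue to explore (a sketch, and only on the assumption that job history is being kept, i.e. PreserveJobHistory is on): completed jobs leave c00001-style control files in the CUPS spool, which are binary IPP data, but the job name attributes are usually readable with strings:

        # List whatever printable attributes survive in each control file
        for f in /var/spool/cups/c*; do
            echo "== $f =="
            strings "$f" | head -n 20
        done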

    Read the article

  • apache not starting in vagrant vm

    - by jimmyjambles
    I used Puphpet.com to create a Vagrant VM to be used for web development. The problem I am having is that the VM cannot start Apache on boot:

        $ sudo /etc/init.d/apache2 start
         * Starting web server apache2
         * The apache2 configtest failed.
        Output of config test was:
        apache2: Syntax error on line 36 of /etc/apache2/apache2.conf:
        Syntax error on line 1 of /etc/apache2/mods-enabled/authz_default.load:
        Cannot load /usr/lib/apache2/modules/mod_authz_default.so into server:
        /usr/lib/apache2/modules/mod_authz_default.so: cannot open shared object file: No such file or directory
        Action 'configtest' failed.
        The Apache error log may have more information.

    The system is Ubuntu 12; I'm not sure what modifications I have to make to the Puppet config to fix the problem.
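
    If the module file is genuinely absent rather than misplaced (mod_authz_default was removed upstream in Apache 2.4, and upgrades commonly leave a stale mods-enabled entry behind), the usual fix is to disable the stale module; a sketch:

        sudo a2dismod authz_default     # remove the dangling LoadModule entry
        sudo apachectl configtest       # re-run the syntax check
        sudo service apache2 start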

    Read the article

  • Is Software RAID1 Using mdadm with a Local Hard Disk and GNBD Possible?

    - by Travis
    I have multiple webservers which use many small files to create dynamic web pages. Caching the web pages isn't an option. The webserver also performs writes, so I need a synchronous filesystem. I'm looking to maximise performance, as it's my understanding that small files are the weakness (to varying degrees) of a cluster filesystem over ethernet. Currently I'm using CentOS 5.5, 64 bit. Since it's only about 300MB of data, I'm looking at mdadm using RAID-1 with the GNBD and a local hard disk, using the "--write-mostly" option so the reads are done using the local hard disk. Is this possible? If so, is there any advantage to making it a tmpfs disk instead of a local hard disk? Or will the files on the local hard disk just get cached in RAM anyway, so I won't see a performance gain by using tmpfs, assuming there's enough RAM available?
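
    In principle, yes: mdadm accepts --write-mostly on a per-device basis, so reads prefer the local disk. A minimal sketch (both device names are assumptions):

        # Devices listed after --write-mostly are only read when necessary,
        # so reads are served from the local partition, writes go to both
        mdadm --create /dev/md0 --level=1 --raid-devices=2 \
              /dev/sdb1 --write-mostly /dev/gnbd0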

    Read the article

  • nginx automatic failover load balancing

    - by robinmag
    Hi, I'm using nginx and NginxHttpUpstreamModule for load balancing. My config is very simple:

        upstream lb {
            server 127.0.0.1:8081;
            server 127.0.0.1:8082;
        }

        server {
            listen 89;
            server_name localhost;
            location / {
                proxy_pass http://lb;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    But with this config, when one of the 2 backend servers is down, nginx still routes requests to it, which results in a timeout half of the time :( Is there any solution to make nginx automatically route requests to the other server when it detects a downed server? Thank you.
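
    For reference, a sketch of the stock nginx knobs that usually handle this (the directives are standard; the thresholds are assumptions to tune):

        upstream lb {
            # take a backend out of rotation for 30s after 3 failures
            server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
            server 127.0.0.1:8082 max_fails=3 fail_timeout=30s;
        }
        # inside the location block: retry the other backend on error/timeout
        proxy_next_upstream error timeout;
        proxy_connect_timeout 2s;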

    Read the article

  • How can I transfer a Zabbix item between hosts and keep its statistics?

    - by Stepchik
    There are two servers (srv1 and srv2), each running a MySQL server. MySQL on srv1 contains a database (db1). The Zabbix server gets its statistics through a configured agent user parameter (https://www.zabbix.com/documentation/2.0/manual/config/items/userparameters). Yesterday I copied database db1 from MySQL on srv1 to MySQL on srv2. I can clone the Zabbix server item (https://www.zabbix.com/documentation/2.0/manual/config/items) to srv2, but that would lose all of the srv1 db1 statistics. Can you advise how to keep them?

    Read the article

  • IE8/IE7/IE6/IE5 on WinXP Use The Wrong Certificate

    - by Marco Calì
    For some reason IE8/IE7/IE6/IE5 on Windows XP, instead of using the certificate that is listed in the nginx website config, uses another certificate that is used by other websites. Checking the nginx config file for the website, everything is fine. Confirming this, all the other browsers (Chrome/Firefox/Safari/IE9) use the correct certificate. This is the nginx configuration for the app:

        server {
            listen 80;
            listen 443 ssl;
            server_name mydomain.com;
            ssl_certificate     /root/certs/mydomain.com/mydomain.bundle.crt;
            ssl_certificate_key /root/certs/mydomain.com/mydomain.key;
            access_log /opt/webapps/cs_at/logs/access.log;
            location / {
                add_header P3P 'CP="CAO PSA OUR"';
                proxy_pass http://127.0.0.1:20004;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    Read the article

  • Apache redirect multiple domain names from https

    - by Cyril N.
    My server serves two main websites, say www.google.com & www.facebook.com (yeah, I know :p). I want them to be served via https. Using Apache, I defined a vhost file in sites-available/enabled containing this:

        <VirtualHost *:80>
            ServerName google.com
            Redirect / https://www.google.com/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName facebook.com
            Redirect / https://www.facebook.com/
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot /srv/www/google/www/
            ServerName www.google.com
            ServerAlias www.facebook.com
            <Directory ... />
            # Google & Facebook point to the same directory (crazy, right?)
            # Rest of the config
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine On
            SSLCertificateFile /path/to/google.crt
            SSLCertificateKeyFile /path/to/google.key
            DocumentRoot "/srv/www/google/www/"
            ServerName www.google.com
            <Directory .../>
            # Rest of the config
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine On
            SSLCertificateFile /path/to/facebook.crt
            SSLCertificateKeyFile /path/to/facebook.key
            DocumentRoot "/srv/www/google/www/"
            ServerName www.facebook.com
            <Directory .../>
            # Rest of the config
        </VirtualHost>

    If I access https://www.google.com, https works correctly. If I access https://www.facebook.com, https works correctly. If I access http://www.google.com, http works correctly (without https). If I access http://www.facebook.com, http works correctly (without https). BUT: if I access https://facebook.com, it fails, saying that the SSL certificate is not what was expected: google.com instead of facebook.com. Based on my configuration file I understand why, so I tried to add:

        <VirtualHost *:443>
            SSLEngine On
            ServerName facebook.com
            Redirect / https://www.facebook.com/
        </VirtualHost>

    But then I can't access facebook.com or www.facebook.com at all, via http or https. So my question is quite simple: how can I redirect all https access to facebook.com (and eventually all sub-facebooks: facebook.fr, www.facebook.fr, etc.) to www.facebook.com in HTTPS? Thanks for your help! :)
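
    One detail in the attempted fix stands out: the extra *:443 vhost carries no certificate, and an HTTPS vhost cannot issue its redirect until the TLS handshake has already completed. A sketch of the usual shape (assuming the facebook.com certificate also covers the bare domain and any extra domains, e.g. via SAN or wildcard entries):

        <VirtualHost *:443>
            SSLEngine On
            # The handshake happens before any HTTP redirect, so this
            # vhost still needs a certificate valid for the bare domain
            SSLCertificateFile    /path/to/facebook.crt
            SSLCertificateKeyFile /path/to/facebook.key
            ServerName facebook.com
            ServerAlias facebook.fr www.facebook.fr
            Redirect permanent / https://www.facebook.com/
        </VirtualHost>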

    Read the article

  • Compressed disk image on Linux

    - by Aaron Digulla
    I just got my new computer with a much bigger hard disk. I think I copied all the important files over, but just to be sure, I'd like to keep a disk image of my old disk. To save space, I'd like to compress it, but I didn't find an option to mount a compressed image. My goals:

    - The result must be easy to access
    - No need to decompress the whole thing before I can access anything
    - Files should be quick to locate (no TAR/CPIO archive)
    - The necessary space should be less than just copying the files over

    So ideally, I'm looking for a read-only, compressed file system which I can create in a file and which grows automatically.
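
    SquashFS checks most of these boxes (compressed, read-only, mountable in place, random access); a sketch, assuming the old disk's filesystem is mounted at /mnt/olddisk:

        # Build a compressed, read-only image of the old disk's contents
        mksquashfs /mnt/olddisk /backup/olddisk.squashfs

        # Later: mount the image via loopback and browse it like any directory
        sudo mkdir -p /mnt/image
        sudo mount -o loop -t squashfs /backup/olddisk.squashfs /mnt/image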

    Read the article

  • Problem with MySQL installation: template configuration file cannot be found

    - by user35389
    Trying to install MySQL onto a Windows XP machine. While going through the installation steps (in the "MySQL Server Instance Config Wizard"), I get to a point where the window reads:

        MySQL Server Instance Configuration (bold header)
        Choose the configuration for the server instance.
        Ready to execute...
          o Prepare configuration
          o Write configuration file
          o Start service
          o Apply security settings (this line is greyed out)
        Please press [Execute] to start the configuration.
        [ Back ] [ Execute ] [ Cancel ]

    So I press Execute, and then a red X appears next to the second step, "Write configuration file", and at the bottom, where it originally said "Please press [Execute] to start the configuration.", it now says:

        The template configuration file cannot be found at
        C:\Program Files\MySQL\MySQL Server 5.0\bin\my-template.cnf

    I'm unsure what it means, but I cancelled the config wizard and looked in the directory that had been created (C:\Program Files\MySQL\MySQL Server 5.0). There are some configuration settings files, and there are 4 folders: bin, data, Docs, share.

    Read the article

  • cygwin rsync over ssh very slow

    - by Waleed Hamra
    I have 2 machines running Windows XP SP3. I have Cygwin 1.7 installed on both, with rsync and ssh installed and configured using default settings, as per the provided ssh-host-config and ssh-user-config programs. I moved the public keys into their respective locations, and basically ssh is working fine. I began an rsync operation, using:

        rsync -av --delete --hard-links local_dir username@other_machine:/some_dir

    Well... on both machines the processor is near idle, no heavy usage. I checked IO using Process Explorer on both machines, and that too is at normal levels (1~2 MB/s), so I can't see where the bottleneck is, because network performance is awful. I'm not getting over 1 MB/s, when a normal file copy using Windows sharing achieves some ~10 MB/s. What could be wrong?
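
    One common experiment when a Cygwin ssh transfer is slow despite idle CPU and disk (a sketch; arcfour must be allowed by the server's sshd, which is an assumption):

        # Try a cheaper cipher and skip ssh-level compression to rule out
        # crypto overhead in the Cygwin ssh as the bottleneck
        rsync -av --delete --hard-links -e "ssh -c arcfour -o Compression=no" \
            local_dir username@other_machine:/some_dir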

    Read the article

  • Setting up a CentOS centralized audit and rsyslog server

    - by Warron.French
    Attempting to use these links: Sending audit logs to SYSLOG server or http://wiki.rsyslog.com/index.php/Centralizing_the_audit_log, I have been unable to get centralized audit logging to work in my all-CentOS network environment. I have 6 workstations, dt1...dt6; the log files are not generated at all, and I cannot tell whether the messages are being sent from the workstations dt1..dt6 over to the server (srv1). I have configured rsyslog.conf on the workstations as shown in the first link, and added the additional touches for writing the log files into a separate directory per YEAR/MONTH/DAY (using the proper syntax) and into separate <HOSTNAME>_audit.log files. Note: RSYSLOG messaging does appear to work from the workstations to the server, but the audit-logging portion is not working. I am running CentOS 6.5 with RPMs audit-2.2-4.el6_5.x86_64, audit-libs-2.2-4.el6_5.x86_64, and rsyslog-5.8.10-8.el6.x86_64. I have gotten zero responses from wiki.rsyslog.com and really need this to work. If needed I can send files from one of my workstations and the server to aid in the process. Thanks, Warron
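
    For comparison, the piece that hands audit events to syslog on each workstation is the audispd syslog plugin; a sketch of the two config fragments involved (LOG_LOCAL6 is an assumption: any unused local facility works, as long as the rsyslog rule matches it, and auditd must be restarted afterwards):

        # /etc/audisp/plugins.d/syslog.conf on dt1..dt6
        active = yes
        direction = out
        path = builtin_syslog
        type = builtin
        args = LOG_INFO LOG_LOCAL6
        format = string

        # /etc/rsyslog.conf on dt1..dt6: forward that facility to srv1 over TCP
        local6.* @@srv1:514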

    Read the article

  • Is it possible to create an SFTP drop box?

    - by Jordan Reiter
    I have a Windows server with folders accessible via SFTP (server is running OpenSSH). scp is blocked. I would like to copy files from a Linux server to the Windows server. SFTP seems like a good option. Ideally I'd like something similar to an FTP drop box, so that the Linux box could just copy files directly over to the Windows box. I'm also open to any solutions to this that would allow me to copy the files while offering the least amount of hassle. The language I'd be using on the Linux box is python; not sure if that factors in or not.
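
    Since OpenSSH is already running on the Windows side, a non-interactive sftp batch from the Linux box may be all the drop box needs; a sketch with assumed host, paths, and key-based auth (in Python, the paramiko library covers the same ground over SFTP):

        # Drop a file onto the Windows box non-interactively; key-based
        # auth is assumed so no password prompt blocks the script
        echo "put /data/outgoing/report.csv /dropbox/report.csv" | \
            sftp -b - user@winserver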

    Read the article

  • rsync doesn't use delta transfer on first run

    - by ockzon
    I'm trying to synchronize a large local directory (with a batch file using rsync 3.0.7 on Cygwin, Windows 7 x64; 30k files, 200 GB) to a remote server (Debian x64 with kernel 2.6, rsyncd 3.0.7) over a slow internet connection (90 kbyte/s upload). I know almost all the files are identical, and I verified that using md5sum locally and remotely. However, when executing rsync from my local machine, every file gets transferred completely the first time. When I terminate the batch file after a few transfers and run it again, the already-transferred files are skipped. But as soon as it gets to a file not yet transferred, it uploads the file as a whole again, instead of noticing that the checksum is the same locally and remotely. The batch file calling rsync looks like this (backslashes and line breaks added here for readability):

        c:\cygwin\bin\rsync.exe --verbose --human-readable --progress --stats \
            --recursive --ignore-times --password-file pwd.txt \
            /cygdrive/d/ftp/data/ \
            rsync://[email protected]:33400/data/ | \
            c:\cygwin\bin\tee.exe --append rsync.log

    I experimented using the following parameters in varying combinations, but that didn't help either: --checksum, --partial, --partial-dir=/tmp/.rsync-partial, --compress.

    Read the article
