Search Results

Search found 22829 results on 914 pages for 'nautilus script'.

Page 428/914

  • mod_rewrite REQUEST_FILENAME doesn't contain absolute path

    - by Paul Dixon
    I have a problem with a file test operation in a mod_rewrite RewriteCond entry which is testing whether %{REQUEST_FILENAME} exists. It seems that rather than %{REQUEST_FILENAME} being an absolute path, I'm getting a path which is rooted at the DocumentRoot instead.

    Configuration

    I have this inside a <VirtualHost> block in my Apache 2.2.9 configuration:

        RewriteEngine on
        RewriteLog /tmp/rewrite.log
        RewriteLogLevel 5
        # push virtually everything through our dispatcher script
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]

    Diagnostics attempted

    That rule is a common enough idiom for routing requests for non-existent files or directories through a script. Trouble is, it's firing even if a file does exist. If I remove the rule, I can request normal files just fine. But with the rule in place, these requests get directed to dispatch.php.

    Rewrite log trace

    Here's what I see in rewrite.log:

        init rewrite engine with requested uri /test.txt
        applying pattern '^/([^/]*)/?([^/]*)' to uri '/test.txt'
        RewriteCond: input='/test.txt' pattern='!-f' => matched
        RewriteCond: input='/test.txt' pattern='!-d' => matched
        rewrite '/test.txt' -> '/dispatch.php?_c=test.txt&_m='
        split uri=/dispatch.php?_c=test.txt&_m= -> uri=/dispatch.php, args=_c=test.txt&_m=
        local path result: /dispatch.php
        prefixed with document_root to /path/to/my/public_html/dispatch.php
        go-ahead with /path/to/my/public_html/dispatch.php [OK]

    So, it looks to me like REQUEST_FILENAME is being presented as a path from the document root, rather than the file system root, which is presumably why the file test operator fails. Any pointers for resolving this gratefully received...
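
    A known mod_rewrite quirk fits this trace: inside a <VirtualHost>, the rewrite rules run before the URI has been mapped to the filesystem, so %{REQUEST_FILENAME} still holds the URI path (/test.txt) and the -f/-d tests fail. A commonly suggested workaround, sketched here against the configuration above rather than verified on it, is to prepend %{DOCUMENT_ROOT} in the conditions:

        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]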


  • Exchange DiskShadow/Robocopy backup does not purge log files

    - by Robert Allan Hennigan Leahy
    I have a series of scripts set up to back up my Exchange. The following command is executed to start the process:

        diskshadow /s C:\Backup_Scripts\exchangeserverbackupscript1.dsh

    This is exchangeserverbackupscript1.dsh:

        #DiskShadow script file
        set verbose on
        #delete shadows all
        set context persistent
        writer verify {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
        set metadata C:\Backup_Scripts\shadowmetadata.cab
        begin backup
        add volume C: alias SH1
        create
        expose %SH1% P:
        exec C:\Backup_Scripts\exchangeserverbackupscript1.cmd
        end backup
        delete shadows exposed P:
        exit
        #End of script

    And this is exchangeserverbackupscript1.cmd:

        robocopy "P:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group" "\\leahyfs\J$\E-Mail Backups\Day 1" /MIR /R:0 /W:0 /COPY:DT /B

    This is not causing Exchange to purge its log files. The edb file is 4.7 gigabytes, but the First Storage Group folder itself is 50+ gigabytes due to many, many log files for each day going back to 2009. Is there any way -- I've Googled and haven't found anything -- to notify Exchange when I've completed a full backup, and have it purge its log files? According to this and this, end backup should cause Exchange to "flush the transaction logs for that storage group", but only "if a successful backup of a storage group occurred", which leaves my question as: what constitutes a "successful backup", and why is what I'm doing not it?


  • Problems using ssh from cron

    - by Travis
    I am attempting to automate a script that executes commands on remote machines via ssh. I have public key authentication set up between the machines using ssh-agent. The script runs fine when executed from the command prompt. I suspect my problem is that cron isn't starting the ssh-agent, due to its minimal environment. Here is the output when I add the -v flag to ssh:

        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: gssapi-with-mic
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: publickey
        debug1: Offering public key: /home/<user>/.ssh/id_rsa
        debug1: Server accepts key: pkalg ssh-rsa blen 149
        debug1: PEM_read_PrivateKey failed
        debug1: read PEM private key done: type <unknown>
        debug1: Trying private key: /home/<user>/.ssh/id_dsa
        debug1: Next authentication method: password
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        Permission denied, please try again.
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        Permission denied, please try again.
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: No more authentication methods to try.
        Permission denied (publickey,gssapi-with-mic,password).

    How can I make this work? Thanks!
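
    For what it's worth, the PEM_read_PrivateKey failure in that trace is what ssh prints when the key is passphrase-protected and no agent is available to decrypt it, which supports the cron-environment theory. A minimal sketch of one common workaround (assuming an interactive login is around to seed the agent): save the agent's environment to a file at login, then source that file from the cron job.

        # run once per login, e.g. from ~/.bash_profile
        ssh-agent -s > "$HOME/.ssh/agent.env"
        . "$HOME/.ssh/agent.env" > /dev/null
        ssh-add

        # at the top of the cron'd script
        . "$HOME/.ssh/agent.env" > /dev/null   # exports SSH_AUTH_SOCK / SSH_AGENT_PID
        ssh remotehost uptime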


  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution but pretty much the best/easiest one we could come up with - we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well, it's quick and almost 100% complete - however it's acting pretty strange with a few files (note: company name has been changed in paths to protect the innocent):

        @ECHO OFF
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        SET prefix="E:\backup_log-"
        SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER"
        SET dest_dir="E:\dest"
        SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log
        SET what_to_copy=/COPY:DAT /MIR
        SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL
        ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options%
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive."
        :END

    Normally the source would just be M:\Company name data\ but I altered the script a bit to test the problem. The following files in the source are not copied to the dest:

        Someclient\SONICP~1.DOC
        Someclient\SONICP~2.DOC
        Someclient\SONICP~3.DOC

    However, files in the same directory named:

        TIMESH~1.XLS
        TIMESH~2.XLS

    are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't opened when I ran robocopy, so it's not a locking issue. Robocopy is running as administrator so it's not a permissions issue. There's no trace these files were even attempted to be copied, as there are no errors being output in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk?

    Cheers, John.
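
    One thing worth checking before suspecting the disk: the script's own /NFL and /NDL switches suppress file and directory names in the log, so silently-skipped files would leave no trace by design. A hedged diagnostic run using documented verbosity switches (same variables as the script above):

        ROBOCOPY %source_dir% %dest_dir% /COPY:DAT /MIR /R:0 /W:0 /V /FP /TS /X /LOG+:%log_fname%

    /V logs skipped files, /FP logs full paths, /TS logs timestamps, and /X reports all extra files; together they should show whether robocopy considers the SONICP~* files at all.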


  • Immediately tell which output was sent to stderr

    - by Clinton Blackmore
    When automating a task, it is sensible to test it first manually. It would be helpful, though, if any data going to stderr was immediately recognizable as such, and distinguishable from the data going to stdout, and to have all the output together so it is obvious what the sequence of events is. One last touch that would be nice is if, at program exit, it printed its return code. All of these things would aid in automating. Yes, I can echo the return code when a program finishes, and yes, I can redirect stdout and stderr; what I'd really like is some shell, script, or easy-to-use redirector that shows stdout in black, shows stderr interleaved with it in red, and prints the exit code at the end. Is there such a beast? [If it matters, I'm using Bash 3.2 on Mac OS X.]

    Update: Sorry it has been months since I've looked at this. I've come up with a simple test script:

        #!/usr/bin/env python
        import sys
        print "this is stdout"
        print >> sys.stderr, "this is stderr"
        print "this is stdout again"

    In my testing (and probably due to the way things are buffered), rse and hilite display everything from stdout and then everything from stderr. The fifo method gets the order right but appears to colourize everything following the stderr line. ind complained about my stdout and stderr lines, and then put the output from stderr last. Most of these solutions are workable, as it is not atypical for only the last output to go to stderr, but still, it'd be nice to have something that worked slightly better.
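
    A minimal bash sketch of the requested behaviour (stderr wrapped in red via process substitution, exit status echoed afterwards); it shares the buffering caveat from the update above, so strict interleaving is still not guaranteed:

        run() {
            # wrap each stderr line in ANSI red, leave stdout untouched
            "$@" 2> >(while IFS= read -r line; do printf '\033[31m%s\033[0m\n' "$line" >&2; done)
            local rc=$?
            printf 'exit status: %d\n' "$rc"
            return "$rc"
        }

        run ./test.py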


  • Disable XP disk check when using FAT32

    - by mike xie
    Right now I'm using Windows XP and Macintosh on my MacBook Pro via Boot Camp. Sometimes my XP crashes, and when I restart it, it has to go through a disk check; although it says I can skip it by pressing a key, that has never worked for me. I did a bit of research online on how to disable the disk check and found:

        chkntfs /x c:

    but when I tried this out in my cmd it said the disk is in FAT32 format. I tried to convert my C: drive from FAT32 to NTFS by using:

        convert c: /FS:NTFS

    but when I tried this it told me to locate my C: drive. I tried to type C: and Bootcamp but couldn't really get past it. I later saw someone suggest using this (save it as .reg and execute it):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager]
        "AutoChkTimeOut"=dword:0000000

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager]
        "BootExecute"=hex(7):61,00,75,00,74,00,6f,00,63,00,68,00,65,00,63,00,6b,00,20,\
          00,61,00,75,00,74,00,6f,00,63,00,68,00,6b,00,20,00,2a,00,00,00,00,00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
        "SFCScan"=dword:00000000

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\cleanuppath]
        @=hex(2):25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,\
          00,5c,00,73,00,79,00,73,00,74,00,65,00,6d,00,33,00,32,00,5c,00,63,00,6c,00,\
          65,00,61,00,6e,00,6d,00,67,00,72,00,2e,00,65,00,78,00,65,00,20,00,2f,00,44,\
          00,20,00,25,00,63,00,00,00

    I have just tried running it but am not really sure if it did anything (my laptop hasn't crashed yet :) ). Firstly, I am wondering if someone can tell me how to check whether that script worked. Secondly, if it didn't, does anyone have a solution for these problems? Is there another way to disable the disk check, or another way for me to change my FAT32 to NTFS?
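
    On the conversion question: convert.exe is not asking where the drive is; it prompts for the current volume label of the drive, which has to be typed exactly (or left blank if the volume has none). The label can be read with vol first. A hedged sketch of the sequence, run from cmd:

        vol C:
        convert C: /FS:NTFS
        REM enter the label exactly as vol printed it; conversion of the
        REM system drive is then scheduled to run at the next reboot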


  • bond0 and xen = crash

    - by Rajat
    Bonding with Xen:

    1 - Stop all guests. Reboot dom0 after running "chkconfig xend off" and "chkconfig xendomains off".

    2 - Configure bond0 by enslaving eth0 and eth1 to it. I added the below two entries to /etc/modprobe.conf:

        alias bond0 bonding
        options bond0 mode=6,miimon=100

    Content of /etc/sysconfig/network-scripts/ifcfg-eth0:

        DEVICE=eth0
        USERCTL=no
        ONBOOT=yes
        MASTER=bond0
        SLAVE=yes
        BOOTPROTO=none

    Content of /etc/sysconfig/network-scripts/ifcfg-eth1:

        DEVICE=eth1
        USERCTL=no
        ONBOOT=yes
        MASTER=bond0
        SLAVE=yes
        BOOTPROTO=none

    Content of /etc/sysconfig/network-scripts/ifcfg-bond0:

        DEVICE=bond0
        IPADDR=
        NETMASK=
        ONBOOT=yes
        BOOTPROTO=static
        USERCTL=no

    Did "modprobe bond0" and "service network restart" after that.

    3 - Edit /etc/xen/xend-config.sxp, changing

        (network-script network-bridge)

    to

        (network-script 'network-bridge netdev=bond0')

    4 - Start xend: "service xend start".

    5 - chkconfig xend on.

    6 - modprobe bond0

    7 - more /proc/net/bonding/bond0

    8 - Create guest images as usual and bridge them to xenbr0.

    That is the configuration I applied on my Xen kernel (RHEL 5.3). After I reboot the host server, bond0 is replaced by pbond0 and the host drops off the network; I can only ping my VMs from the host server. Does anyone have any idea why Xen is treating bond0 like this, or what the solution is to get from pbond0 back to bond0?
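
    For context: Xen's network-bridge script renames the physical device to p<name> (hence pbond0) and moves its addresses onto the bridge, so seeing pbond0 after a reboot is expected behaviour; the lost connectivity is the actual fault, and balance-alb bonding (mode=6) is known to interact badly with bridges. A hedged alternative is to build the bridge statically and tell xend to leave networking alone; device names below match the question, but the switch to active-backup mode is an assumption:

        # /etc/sysconfig/network-scripts/ifcfg-bond0
        DEVICE=bond0
        ONBOOT=yes
        BOOTPROTO=none
        BRIDGE=xenbr0

        # /etc/sysconfig/network-scripts/ifcfg-xenbr0
        DEVICE=xenbr0
        TYPE=Bridge
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=...
        NETMASK=...

        # /etc/modprobe.conf -- active-backup instead of balance-alb
        options bond0 mode=1 miimon=100

        # /etc/xen/xend-config.sxp -- do not let xend rewire the network
        (network-script /bin/true)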


  • Upstart: cannot run as root

    - by Ronni Egeriis
    I have made this upstart script, which starts a Node.js service. But all of a sudden the service has stopped, and upstart has failed to restart it. Now that I am trying to start it manually, it fails to recognize my service:

        start: Unknown job: queue

    The script is properly placed in /etc/init, and should have the correct rights:

        -rw-r--r-- 1 root root 200 Aug  7 13:30 queue.conf

    When I check the config file with init-checkconf, however, it says that it is not able to run as root:

        root@production1:~# init-checkconf /etc/init/queue.conf
        ERROR: cannot run as root

    What causes this error and how do I solve it?

    Debug info: Ubuntu 12.04.3 LTS

        root@production1:~# service --version
        service ver. 0.91-ubuntu1

    Edit

    Here's queue.conf:

        description "Echo.it command queue"
        author "Ronni Egeriis Persson <[email protected]>"

        stop on shutdown

        respawn
        respawn 20 5

        exec sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1

    The command sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1 works fine when run manually.
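
    Two hedged observations: "ERROR: cannot run as root" is init-checkconf's own refusal - the tool has to run as an ordinary user because it spins up a private test instance of init - so it says nothing about queue.conf itself; and "Unknown job" often just means upstart hasn't re-read /etc/init since the file changed. A sketch of both checks (the user name is a placeholder):

        sudo -u someuser init-checkconf /etc/init/queue.conf
        initctl reload-configuration
        start queue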


  • VBScript Capture StdOut from ShellExecute

    - by Joe
    I am trying to run the following code snippet as part of a tool to gather and log some pertinent system diagnostics. The purpose of this snippet is to gather the result of running the command:

        vssadmin list writers

    The snippet is as follows:

        ' Set WshShell = CreateObject("WScript.Shell")
        ' WScript.Echo sCurPath & "\vsswritercheck.bat"
        ' Set WshShellExec = WshShell.Exec("elevate.cmd cmd.exe /c " & sCurPath & "\vsswritercheck.bat")

        Set oShell = CreateObject("Shell.Application")
        oShell.ShellExecute "cmd.exe", sCurPath & "\vsswritercheck.bat", , "runas", 1

        vsswriter = VSSWriterCheck

        Select Case oShell.Status
            Case WshFinished
                strOutput = oShell.StdOut.ReadAll
            Case WshFailed
                strOutput = oShell.StdErr.ReadAll
        End Select

        WScript.Echo strOutput
        vsswriter = strOutput

    With the first code snippet (commented out) I can run the command and capture stdout from the batch file. With the second code snippet, I cannot capture stdout. I need to be able to run the batch script with elevated permissions, so I am looking for a compromise between the functionality of the two. I cannot run the entire calling script in elevated mode due to restrictions from other pieces of functionality. I am looking for any ideas on how to add this output to my log, as I am running out of options that are within the scope of basic scripts.
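
    One hedged workaround, given that Shell.Application's ShellExecute returns no process object (the Status/StdOut members used above belong to WshShell.Exec, not to the Shell.Application object): let the elevated cmd.exe redirect its own output to a temp file, then read the file back. A sketch with an invented file name and a deliberately crude completion wait; the quoting assumes no spaces in sCurPath:

        Set oShell = CreateObject("Shell.Application")
        Set fso = CreateObject("Scripting.FileSystemObject")
        outFile = sCurPath & "\vsswritercheck.out"
        If fso.FileExists(outFile) Then fso.DeleteFile outFile

        ' /c runs the batch elevated and redirects both streams to outFile
        oShell.ShellExecute "cmd.exe", "/c " & sCurPath & "\vsswritercheck.bat > " & outFile & " 2>&1", , "runas", 0

        Do While Not fso.FileExists(outFile)   ' wait for the elevated process to start writing
            WScript.Sleep 500
        Loop
        WScript.Sleep 1000                     ' allow the redirect to finish flushing

        strOutput = fso.OpenTextFile(outFile, 1).ReadAll
        WScript.Echo strOutput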


  • Postfix unable to find local server

    - by Andrew
    I'm working with Postfix on Fedora 9, attempting to make some changes to a system set up by my predecessor. Currently the Postfix server on [mail.ourdomain.com] is set up to forward mail sent to two addresses to another server for processing. The other server [www01.ourdomain.com] receives the email and hands it to a PHP script to be processed; that PHP script then generates and sends a response to the user who sent the original email.

    We're adding more web servers to the system, and as a result we've decided to move these processing scripts to our admin server [admin.ourdomain.com] to make them easier to keep track of. I've already set up and tested the processing scripts on [admin.ourdomain.com]. On the mail server doing the forwarding [mail.ourdomain.com], I added [admin.ourdomain.com] to /etc/hosts and also added another entry to /etc/postfix/transport for [admin.ourdomain.com], alongside the one for [www01.ourdomain.com]. I restarted Postfix as well.

    I've tested the communication from [mail.ourdomain.com] to [admin.ourdomain.com] using telnet and the [admin.ourdomain.com] domain, and everything runs correctly. But as soon as I change the forward address and attempt to send an email to the mail server, I get a bounce message stating "Host or domain name not found. Name service error for name=admin.ourdomain.com type=A: Host not found". If I change the forward settings back to [www01.ourdomain.com] then everything works fine.

    Is there some setting I'm missing in Postfix? The server itself and telnet work fine; it just seems to be Postfix that's not able to discover the location of [admin.ourdomain.com].
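
    One Postfix default matches this symptom exactly: the SMTP client resolves destinations via DNS only and ignores /etc/hosts (telnet, by contrast, does use /etc/hosts, which would explain why the manual test works). A hedged fix is to allow native resolver lookups in main.cf, then reload:

        # /etc/postfix/main.cf
        smtp_host_lookup = dns, native

        # then:
        postfix reload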


  • Website memory problem

    - by Toktik
    I have CentOS 5 installed on my server (a VPS). I have a site with a constant online presence of ~150 users. At first glance the site looks OK, but when I click through links I sometimes receive an "Out of memory" PHP error. It looks like this:

        Fatal error: Out of memory (allocated 36962304) (tried to allocate 7680 bytes) in /home/host/public_html/sites/all/modules/cck/modules/fieldgroup/fieldgroup.install on line 100

    And the amount that failed to allocate is always very small. On average I have 30% CPU load and 25% RAM load, so I don't think this is a physical memory problem. My PHP memory limit is set to 1500MB. My Apache error log looks like this:

        [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Out of memory, referer: http://www.host.com/17402
        [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Premature end of script headers: index.php, referer: http://www.host.com/17402
        [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Out of memory, referer: http://www.host.com/17402
        [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Premature end of script headers: index.php, referer: http://www.host.com/17402
        [Thu Sep 30 17:49:00 2010] [error] [client 91.204.190.5] File does not exist: /home/host/public_html/favicon.ico

    I have not seen this on the server before; the problem appeared by itself. Besides this, I'm receiving some server errors by mail:

        cpsrvd failed @ Fri Sep 24 16:45:20 2010. A restart was attempted automagically.
        Service Check Method: [tcp connect]
        Failure Reason: Unable to connect to port 2086

    Same for tailwatchd. Support tried, and can't help me...
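
    A hedged reading of "Out of memory (allocated ...)" - as opposed to PHP's "Allowed memory size ... exhausted" - is that the operating system, not memory_limit, refused the allocation; on a Virtuozzo/OpenVZ VPS the container's bean counters are the usual suspect. Worth checking (a nonzero failcnt means a container limit is being hit regardless of what free reports):

        cat /proc/user_beancounters
        # watch the failcnt column for privvmpages / kmemsize / oomguarpages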


  • Apache crashes every 5min

    - by Simon
    I'm relatively new to server issues; a site of mine that I started early in the year has grown beyond my capabilities of managing it, and I need help. I recently moved out of my shared hosting environment onto a dedicated virtual server from Media Temple. Each week, I run a script that fetches data from my DB, fetches data from last.fm's API, and then tweets information to Twitter. My server uses Virtuozzo, and when the script runs, Apache crashes every 5 minutes. I checked and saw that the 'kmemsize' parameter reaches its cap (it's 13MB). I realise my problem: the MySQL process stays open for a long while, while Apache needs to handle lots of incoming links (about 200,000 pageviews for that day according to my previous host's AWSTATS). Yes, I'm quite inexperienced in this, and I'm clearly killing the server with too many incoming links while it has to manage the updating of the DB.

    So with that background, I have a few questions:

    1) Why did my shared hosting environment not crash Apache every 5 minutes? It ran fine; the site only slowed a lot. Clearly, it must be the virtual container and the kmemsize limit?

    2) Where do I go from here? Would a physical server (not a virtual container) encounter the same problems?

    I sent a support request to Media Temple as well. I need all the help I can get.
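
    Not an answer to the hosting question, but a hedged stopgap while the weekly script runs: keeping Apache's process count down lowers the kernel-memory footprint that counts against kmemsize. Illustrative prefork values only, not tuned for this box:

        <IfModule prefork.c>
            StartServers           2
            MinSpareServers        2
            MaxSpareServers        5
            MaxClients            30
            MaxRequestsPerChild  500
        </IfModule>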


  • apt-get : Size mismatch

    - by Cédric Girard
    I created a private deb repository to distribute a piece of software and its updates to 600 Ubuntu netbooks. Each time the network is connected, my script tries to do an apt-get update. But sometimes (quite often, in fact), I get this:

        Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch

    The server is an Apache 2.2, HTTPS only. There are no errors in its logs. Here is the script:

        apt-get update
        apt-get dist-upgrade --force-yes --yes

    Here is the complete output of apt-get:

        Ign https://myserver maverick Release.gpg
        Ign https://myserver/ubuntu/ maverick/main Translation-en
        Ign https://myserver maverick Release
        Ign https://myserver maverick/main i386 Packages/DiffIndex
        Ign https://myserver maverick/main i386 Packages
        Ign https://myserver maverick/main i386 Packages
        Hit https://myserver maverick/main i386 Packages
        Reading package lists...
        Reading package lists...
        Building dependency tree...
        Reading state information...
        The following packages will be upgraded:
          majdb utilitaires voosicomat
        3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 6207kB/6273kB of archives.
        After this operation, 0B of additional disk space will be used.
        WARNING: The following packages cannot be authenticated!
          utilitaires voosicomat majdb
        Get:1 https://myserver/ubuntu/ maverick/main voosicomat all 2.0.1 [4755kB]
        Get:2 https://myserver/ubuntu/ maverick/main majdb all 1.0.17 [1452kB]
        Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch
        Fetched 7091kB in 21s (324kB/s)
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Regards
    Cédric
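
    A hedged first step on an affected netbook: a size mismatch that recurs on retry is often a stale partial download or cached index on the client, so clearing apt's local state before updating rules that out:

        apt-get clean
        rm -f /var/lib/apt/lists/partial/*
        apt-get update
        apt-get dist-upgrade --force-yes --yes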


  • nginx: SSI working on Apache backend, but not on gunicorn backend

    - by j0nes
    I have nginx in front of an Apache server and a gunicorn server for different parts of my website. I am using the SSI module in nginx to display a snippet on every page; the pages pull it in with an SSI include directive.

    For static pages served by nginx everything is working fine, and the same goes for the Apache-generated pages - the SSI include is evaluated and the snippet is filled in. However, for requests to my gunicorn backend running a Python app in Django, the SSI include does not get evaluated. Here is the relevant part of the nginx config:

        location /cgi-bin/script.pl {
            ssi on;
            proxy_pass http://default_backend/cgi-bin/script.pl;
            include sites-available/aspects/proxy-default.conf;
        }

        location /directory/ {
            ssi on;
            limit_req zone=directory nodelay burst=3;
            proxy_pass http://django_backend/directory/;
            include sites-available/aspects/proxy-default.conf;
        }

    Backends:

        upstream django_backend {
            server dynamic.mydomain.com:8000 max_fails=5 fail_timeout=10s;
        }

        upstream default_backend {
            server dynamic.mydomain.com:80;
            server dynamic2.mydomain.com:80;
        }

    proxy-default.conf:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    What is the cause of this behaviour? How can I get SSI includes working for my pages generated on gunicorn? How can I debug this further?
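
    One known gotcha that fits this symptom: nginx's SSI filter cannot process compressed upstream responses, and a Django app with GZipMiddleware enabled will compress whenever the client sends Accept-Encoding (the Apache backend evidently does not). A hedged check is to strip the header on the gunicorn proxy block so the upstream always answers in plain text:

        location /directory/ {
            ssi on;
            limit_req zone=directory nodelay burst=3;
            proxy_set_header Accept-Encoding "";
            proxy_pass http://django_backend/directory/;
            include sites-available/aspects/proxy-default.conf;
        }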


  • PHP Output buffer flush issue on Apache/Linux

    - by Iiro Vaahtojärvi
    Hi, I'm running into issues with the PHP output buffer flushing on my Linux web server. The output buffer is maintained correctly and all the right data is pushed to it in my code, but the usual flushing mechanisms won't flush it to the browser. I have tried everything posted at http://php.net/manual/en/function.flush.php but no success so far. I got a small script from php.net to test it:

        <?php
        ob_start();
        for ($i = 0; $i < 70; $i++) {
            echo 'printing...<br />';
            ob_get_flush();
            flush();
            usleep(300000);
        }
        ?>

    This should print "printing..." to the browser 70 times, one line roughly every 0.3 seconds. This works fine in my other testing environment, which is Windows-based (still Apache, the XAMPP package), but on my Linux server it doesn't: it waits for the script to finish before giving anything to the browser, basically ignoring the whole flush command. If anyone has experienced this before or knows of anything that could help (be it server configuration or an adjustment to the code), it would be greatly appreciated!
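
    A hedged checklist for the Linux box, since output can be re-buffered both by PHP and by Apache (mod_deflate in particular buffers while compressing): disable compression and implicit buffering for the script before producing output:

        <?php
        @ini_set('zlib.output_compression', '0');
        if (function_exists('apache_setenv')) {
            apache_setenv('no-gzip', '1');       // keep mod_deflate from buffering this response
        }
        while (ob_get_level() > 0) {             // drop any buffers opened by output_buffering
            ob_end_flush();
        }
        ob_implicit_flush(true);                 // flush after every output call
        ?>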


  • How can I override mod-php5's .php mapping to php4-cgi per VirtualHost or Directory?

    - by geocoo
    I am running Debian Linux with apache2 and libapache2-mod-php5 5.3.3-7. I have one VirtualHost which requires PHP 4, so I researched and compiled php4-cgi. However, I cannot seem to:

    1. Override mod-php5's mapping of .php in that vhost (or even globally, without disabling PHP completely).
    2. Even find where that mapping is made, in the hope of disabling it and enabling mod-php5 or php4-cgi per vhost.

    This is my php4-cgi mapping (inside the one PHP 4 vhost):

        ScriptAlias /php4 /usr/local/php4/bin

        <Directory /usr/local/php4/bin>
            Options +ExecCGI +FollowSymLinks
        </Directory>

        <Directory /www/test>
            AddHandler php4-cgi-script .php
            Action php4-cgi-script /php4/php
            Options +ExecCGI
        </Directory>

    This does not work; mod-php5 still runs all .php files in that vhost/directory. If I change the file extension in the AddHandler above from .php to .php4, then .php4 files do run php4-cgi as expected, but I can't change all the files in the app to .php4. I thought maybe I could disable mod-php5's mapping in my vhost or directory, then do my CGI config (as above), but many combinations of these in different contexts did not work:

        RemoveHandler .php
        RemoveType .php
        php_flag engine off

    (The last seems to even disable my php4-cgi, so that won't work.) The only other place I can find any mapping is in /etc/mime.types, but commenting out the relevant lines and restarting apache2 does not affect mod-php5's .php mapping. I have searched as much as I can; it is now a mystery to me. Any help or direction would be greatly appreciated.
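
    One avenue that may be worth a try (a sketch, not tested against this exact module mix): on Debian, mod_php typically claims .php via a type/handler mapping in its conf.d snippet, and a per-directory SetHandler generally wins over type-based mappings merged from parent configs, so forcing the handler inside the directory might beat the global mapping where RemoveHandler/RemoveType did not:

        <Directory /www/test>
            <FilesMatch "\.php$">
                SetHandler php4-cgi-script
            </FilesMatch>
            Action php4-cgi-script /php4/php
            Options +ExecCGI
        </Directory>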


  • Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that crashes our nginx server running Magento down to the following point.

    Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the nginx error_log level to info, we get the following lines (line breaks inserted for better readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection,
        so upstream connection is closed too while sending request to upstream,
        client: XXXXXXXXX, server: test.local,
        request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1",
        upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType()):

        // $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        // $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

    The requested filename is a URL pointing back to the same server that is running the script: a link to a static .gif that does not exist. Sample URL:

        http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif

    When the above line is executed, any subsequent request to the nginx server does not respond any more. After waiting for around 10 minutes, the nginx server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL - but this does not crash. It simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
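
    Not a root-cause fix, but a hedged way to keep the hang from taking the whole site down while debugging: bound PHP's outbound stream calls so a stuck loopback request fails fast instead of pinning a FastCGI worker (the roughly 10-minute recovery is consistent with blocked workers timing out one by one):

        <?php
        // php.ini equivalent: default_socket_timeout = 5
        // applies to http:// stream wrappers, which getimagesize() uses for URLs
        ini_set('default_socket_timeout', '5');
        $info = @getimagesize($fileName);   // returns false quickly on a dead URL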


  • Problem restoring from tar backup: why are there /dev/disk/by-id/ symlinks and how can I avoid them?

    - by SK.
    Hello, I'm trying to make a bare-bones backup system with the most basic tools available on openSUSE 11.3 (in this case: bash, fdisk, tar & grub legacy).

    Here's the workflow for my scripts:

    backup.sh (run from an external system, e.g. a LiveCD):

    1. make an fdisk script ($fscript) from fdisk -l's output [works]
    2. mount the partitions from the system's fstab [works]
    3. tar the crucial stuff in file.tgz [works]

    restore.sh (run from an external system, e.g. a LiveCD):

    1. run fdisk $dest < $fscript to restore partitioning [works]
    2. format and mount partitions from the system's fstab [fails]
    3. extract from file.tgz [works when mounting manually]
    4. restore grub [fails]

    I have recently noticed that openSUSE (though I'm sure it has nothing to do with the distro) has different output in /etc/fstab and /boot/grub/menu.lst; more precisely, the partition name is for example "/dev/disk/by-id/numbers-brandname-morenumbers-part2" instead of "/dev/sda2" - but it basically is a simple symlink. My questions about this:

    1. What is the point of such symlinks, especially if we're restoring on a different disk?
    2. Is there a way to cleanly prevent the creation of those symlinks and use the "true" /dev/sdx everywhere instead?
    3. If the answer to the previous is no, do you know a way to replace those symlinks on the fly in a text file? I tried this script, but it only works if the line starts with the symlink path (the case for fstab, not menu.lst):

        ### search and replace /dev/disk/by-id/... to /dev/sdx
        while read oldVolume rest; do # get first element, ignore rest of line
            if [[ "$oldVolume" =~ ^/dev/disk/by-id/.*(-part[0-9]*$)? ]]; then
                newVolume=$(readlink $oldVolume) # replace pointer by pointee, returns "../../sdx"
                echo /dev/${newVolume##*/} $rest >> TMP # format to "/dev/sdx", write line
            else
                echo $oldVolume $rest >> TMP # nothing to do
            fi
        done < $file
        mv -f TMP $file # save changes

    I've had trouble finding a solution to this on Google, so I was hoping some of the members here could help me. Thank you.
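
    On the third point, a hedged variant of the loop that finds a by-id path anywhere in the line (menu.lst carries them after root= and resume=, so a first-token test misses them); readlink -f collapses the ../../sdx indirection in one step:

        #!/bin/bash
        # replace /dev/disk/by-id/... with the device it points at, wherever it appears
        while IFS= read -r line; do
            while [[ $line =~ (/dev/disk/by-id/[^[:space:],]+) ]]; do
                link=${BASH_REMATCH[1]}
                dev=$(readlink -f "$link")
                [[ -n $dev && $dev != "$link" ]] || break   # avoid looping on dangling links
                line=${line//"$link"/$dev}
            done
            printf '%s\n' "$line"
        done < "$file" > TMP && mv -f TMP "$file"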


  • Munin graphing by CGI

    - by Vaughn Hawk
    I have Munin working just fine, but any time I try to do CGI graphing it just stops graphing... no errors in the log, nothing. I've followed the instructions here: http://munin-monitoring.org/wiki/CgiHowto - and it should be working. Here's my munin.conf setup, at least the parts that matter:

        dbdir   /var/lib/munin
        htmldir /var/www/munin
        logdir  /var/log/munin
        rundir  /var/run/munin
        tmpldir /etc/munin/templates

        graph_strategy cgi
        cgiurl /usr/lib/cgi-bin
        cgiurl_graph /cgi-bin/munin-cgi-graph

    And then the host info, yada yada. In my live munin.conf, graph_strategy cgi and cgiurl are commented out - that's because if I uncomment them, graphing stops working. Again, I get no errors in logs, just blank images where the graphs used to be. Comment out cgi? As soon as munin html runs again, everything is back to normal.

    I'm running the latest version of munin and munin-node. I've tried fastcgi and regular cgi; permissions for all of the directories involved are munin:www-data, and my httpd.conf file looks like this:

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory /usr/lib/cgi-bin/>
            AllowOverride None
            SetHandler fastcgi-script
            Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>

        <Location /cgi-bin/munin-cgi-graph>
            SetHandler fastcgi-script
        </Location>

    Does anyone have any ideas? Without this working, at least from what I understand, Munin just graphs stuff even if no one is looking at the graphs - you add 100 servers to graph, and this starts to become a problem. Hope someone has run into this and can help me out. Thanks!
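
    A hedged debugging step, since the graphs go blank without logging anything: run the grapher by hand as the web server's user, which surfaces permission problems on munin's log and cache directories immediately (the PATH_INFO value is only an example; substitute a real group/host/graph from this installation, and note the assumption that munin-cgi-graph takes its target from PATH_INFO as it would under Apache):

        sudo -u www-data env PATH_INFO=/localdomain/localhost.localdomain/cpu-day.png \
            /usr/lib/cgi-bin/munin-cgi-graph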


  • django : nginx : jquery css not being served

    - by PlanetUnknown
    I'm using apache+mod_wsgi for Django, and all css/js/images are served through nginx. For some odd reason, when others (friends/colleagues) try accessing the site, jquery/css is not getting loaded for them, so the page looks jumbled up. My HTML files use code like this:

        <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/css/custom.css"/>
        <script type="text/javascript" src="http://x.x.x.x:8000/js/custom.js"></script>

    My nginx configuration in sites-available is like this:

        server {
            listen 8000;
            server_name localhost;

            access_log /var/log/nginx/aa8000.access.log;
            error_log /var/log/nginx/aa8000.error.log;

            location / {
                index index.html index.htm;
            }

            location /static/ {
                autoindex on;
                root /opt/aa/webroot/;
            }
        }

    There is a directory /opt/aa/webroot/static/ which has the corresponding css & js directories. The odd thing is that the pages show fine when I access them: I have cleared my cache etc., and the page loads fine for me from various browsers. Also, I don't see any 404s or other errors in the nginx log files. Actually, the logs for nginx are not getting refreshed at all. I restarted the nginx server as root - is that incorrect? There is a user www-data defined in the nginx configuration file. Any pointers would be great.
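
    Two hedged observations: the stylesheet URLs point at /css/... while the only static location nginx defines is /static/ (with root /opt/aa/webroot/, the file must live at /opt/aa/webroot/static/...), and stale access logs plus working-only-for-you usually means other people's browsers never reach that IP:port at all (private address, firewall, or similar). Assuming nginx stays the static host, URLs of this shape would at least match the location block:

        <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/static/css/custom.css"/>
        <script type="text/javascript" src="http://x.x.x.x:8000/static/js/custom.js"></script>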


  • Why won't my service start, and why doesn't upstart output any errors?

    - by Alex Waters
    I am trying to 'start gunicorn' as a service via upstart, as user ale. I'm using gunicorn/flask on Ubuntu 12.04 with init (upstart 1.5). Here is my /etc/init/gunicorn.conf:

        setuid btw
        setgid flask

        script
            export HOME=/home/btw
            export WORKON_HOME=$HOME/.virtualenvs
            . $HOME/.virtualenvs/default/bin/activate
            cd $HOME/flask
            workon default
            gunicorn -c gunicorn.py bw:app
        end script

    It doesn't output anything other than "gunicorn start/running, process 12992". If I then do 'status gunicorn' I get stop/waiting. Any ideas on how to debug this? I tried following http://upstart.ubuntu.com/wiki/Debugging but it didn't help.

    If I do the following as user ale in the app's directory, Gunicorn runs fine:

        workon default
        gunicorn -c gunicorn.py bw:app

    Here is ~/flask/gunicorn.py:

        bind = "0.0.0.0:8080"
        workers = 3
        backlog = 2048
        worker_class = "gevent"
        debug = True
        daemon = False
        pidfile = "/tmp/gunicorn.pid"
        log_level = "debug"
        accesslog = "/var/log/gunicorn/access.log"
        errorlog = "/var/log/gunicorn/error.log"
        user = "btw"
        group = "flask"

    Also, /var/log/error.log doesn't show anything new when I try to start the Gunicorn service. If I start it manually, it shows that the workers have been loaded, etc. Thanks for any help / suggestions!
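
    A hedged debugging addition: upstart on 12.04 can capture the job's own stdout/stderr, which is where failures from the script stanza would land - note that workon is a virtualenvwrapper shell function, which likely does not exist in upstart's /bin/sh environment even though activate has been sourced. One stanza puts any error output in /var/log/upstart/gunicorn.log on the next start attempt:

        # add to /etc/init/gunicorn.conf
        console log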


  • Rsync push files from Linux to Windows - ssh issue, connection refused

    - by piyush c
    For some reason I want to run a script to move files from a Linux machine to Windows. I have installed cwRsync on my Windows machine and am able to connect to the Linux machine. When I execute the following command:

        rsync -e "ssh -l piyush" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"

    where 10.0.0.60 is my Windows machine, and I am running the above command on Linux (CentOS 5.5), I get the following error message:

        ssh: connect to host 10.0.0.60 port 22: Connection refused
        rsync: connection unexpectedly closed (0 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(463) [sender=2.6.8]

        [root@localhost sync]# ssh [email protected]
        ssh: connect to host 10.0.0.60 port 22: Connection refused

    I have modified my firewall settings on Windows to allow all ports. I think this issue is due to an SSH daemon not being present on my Windows machine, so I tried installing OpenSSH and running ssh-agent, but that didn't help. I tried a similar command on my Windows machine to pull files from Linux, and it works fine. I want the command to run on the Linux machine so that I can embed it in a shell script. Can you suggest what I am missing?
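
    One hedged note: ssh-agent only holds keys; it is not an SSH server, so the Windows box still needs an sshd listening on port 22 (cwRsync's companion server package provides one) for the command above to work. An alternative sketch that avoids ssh entirely - run cwRsync as an rsync daemon on the Windows side and push over port 873; the module name "temp" is hypothetical and must match the Windows rsyncd.conf:

        rsync -Wgovz --timeout 120 --delay-updates --remove-sent-files \
            /usr/local/src/piyush/sync/ rsync://[email protected]/temp/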


  • Mac OS X Disk Encryption - Automation

    - by jfm429
    I want to set up a Mac Mini server with an external drive that is encrypted. In Finder, I can use the full-disk encryption option. However, for multiple users, this could become tricky. What I want to do is encrypt the external volume, then set things up so that when the machine boots, the disk is unlocked so that all users can access it. Of course permissions need to be maintained, but that goes without saying.

    What I'm thinking of doing is setting up a root-level launchd script that runs once on boot and unlocks the disk. The encryption keys would probably be stored in root's keychain. So here's my list of concerns:

    1. If I store the encryption keys in the system keychain, then the file in /private/var/db/SystemKey could be used to unlock the keychain if an attacker ever gained physical access to the server. This is bad.
    2. If I store the encryption keys in my user keychain, I have to manually run the command with my password. This is undesirable.
    3. If I run a launchd script with my user credentials, it will run under my user account but won't have access to the keychain, defeating the purpose.
    4. If root has a keychain (does it?), then how would it be decrypted? Would it remain locked until the password was entered (like the user keychain), or would it have the same problem as the system keychain, with keys stored on the drive and accessible with physical access?

    Assuming all of the above works, I've found diskutil coreStorage unlockVolume, which seems to be the appropriate command, but the details of where to store the encryption key are the biggest problem. If the system keychain is not secure enough, and user keychains require a password, what's the best option?
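
    For what it's worth, a hedged sketch of the boot-time unlock itself, with the passphrase kept in the System keychain - this has exactly the SystemKey exposure described in concern 1, so it trades convenience against physical-access risk; the service name and UUID are placeholders:

        #!/bin/bash
        # unlock-external.sh - run once at boot by a root LaunchDaemon
        UUID="00000000-0000-0000-0000-000000000000"   # lvUUID of the encrypted volume (placeholder)
        PASS=$(security find-generic-password -s external-disk-passphrase -w \
               /Library/Keychains/System.keychain)
        diskutil coreStorage unlockVolume "$UUID" -passphrase "$PASS"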


  • How can Django/WSGI and PHP share / on Apache?

    - by Mark Snidovich
    I have a server running an established PHP site, as well as some Django apps. Currently, a VirtualHost set up for PHP listens on port 80, and requests to certain directories are proxied to a VirtualHost set up for Django with WSGI. I'd like to change it so Django handles anything that doesn't exist as a PHP script or static file. For example:

        /                  - parsed by PHP as index.php
        /page.php          - parsed as PHP normally
        /images/border.jpg - served as a static file
        /johnfreep         - handled by Django (interpreted by urls.py)
        /pages/john        - handled by Django
        /(anything else)   - handled by Django

    I have a few ideas. It seems the options are 'PHP first' or 'WSGI first':

    1. Set up Django on port 80, and set Apache to skip all the known PHP, CSS or image files (maybe using SetHandler?). Anything else goes to Django to be parsed by urls.py.
    2. Set up a script referring everything to Django as a 404 handler on PHP. So, if a file is not found for a name, the request path is sent to a VirtualHost running Django to be parsed.
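
    A hedged sketch of the 'PHP first' option in Apache terms (mod_wsgi's documentation describes this rewrite-into-WSGI pattern; the mount point and paths here are illustrative): anything that is not a real file or directory is rewritten onto the WSGI mount, so existing .php scripts and static files keep their current handlers:

        WSGIScriptAlias /django /var/www/app/django.wsgi

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^/(.*)$ /django/$1 [PT,L]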


  • calculate AUC (GAM) in R [migrated]

    - by ahmad
    I used the following script to calculate AUC in R:

        library(mgcv)
        library(ROCR)
        library(AUC)

        data1 <- read.table("d:\\2005.txt", header=T)
        GAM <- gam(tuna ~ s(chla) + s(sst) + s(ssha), family=binomial, data=data1)
        gampred <- predict(GAM, type="response")
        rp <- prediction(gampred, data1$tuna)
        auc <- performance(rp, "auc")@y.values[[1]]
        auc
        roc <- performance(rp, "tpr", "fpr")
        plot(roc)

    But when I ran the script, the result was:

        rp <- prediction(gampred, data1$tuna)
        Error in prediction(gampred, data1$tuna) :
          Format of predictions is invalid.

        auc <- performance(rp, "auc")@y.values[[1]]
        Error in performance(rp, "auc") : object 'rp' not found

        auc
        function (x, min = 0, max = 1)
        {
            ... (the source of the AUC package's auc() function, printed in full) ...
        }
        <bytecode: 0x03012f10>
        <environment: namespace:AUC>

        roc <- performance(rp, "tpr", "fpr")
        Error in performance(rp, "tpr", "fpr") : object 'rp' not found

        plot(roc)
        Error in levels(labels) : argument "labels" is missing, with no default

    Can anybody help me solve this problem? Thank you in advance.
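
    A hedged diagnosis of the first error, which then cascades into the rest (rp is never created, and the bare auc call merely prints the AUC package's function source): ROCR's prediction() wants a plain numeric vector or matrix, while predict.gam() returns a named array. Coercing both arguments usually clears "Format of predictions is invalid"; renaming the result also avoids clobbering the auc() function:

        gampred <- predict(GAM, type = "response")
        rp <- prediction(as.numeric(gampred), as.numeric(data1$tuna))
        auc_value <- performance(rp, "auc")@y.values[[1]]
        auc_value
        roc <- performance(rp, "tpr", "fpr")
        plot(roc)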

