Search Results

Search found 27819 results on 1113 pages for 'linux intel'.


  • HP D2D 4312 Bacula configuration

    - by krisdigitx
    I have configured 5 libraries on the HP D2D system, but discovery on the Bacula server shows only the last library, not all of them. Why?

      [root@server bacula]# iscsiadm --mode discovery --type sendtargets --portal 10.66.59.114
      10.66.59.114:3260,1 iqn.1986-03.com.hp:storage.d2dbs.czj2020vvy.50014380075dca5e.library12.drive1
      10.66.59.114:3260,1 iqn.1986-03.com.hp:storage.d2dbs.czj2020vvy.50014380075dcaf2.library12.robotics

    I can query it fine using:

      [root@server bacula]# mtx -f /dev/sg2 inquiry
      Product Type: Tape Drive
      Vendor ID: 'HP '
      Product ID: 'Ultrium 5-SCSI '
      Revision: 'ED51'
      Attached Changer API: No

      [root@bray bacula]# mtx -f /dev/sg3 inquiry
      Product Type: Medium Changer
      Vendor ID: 'HP '
      Product ID: 'MSL G3 Series '
      Revision: 'EL41'
      Attached Changer API: No

      [root@server bacula]# mtx -f /dev/sg3 status
      Storage Changer /dev/sg3:1 Drives, 97 Slots ( 1 Import/Export )
      Data Transfer Element 0:Empty
      Storage Element 1:Full :VolumeTag=50507F82
      Storage Element 2:Full :VolumeTag=50507F83
      Storage Element 3:Full :VolumeTag=50507F84
      Storage Element 4:Full :VolumeTag=50507F85
      Storage Element 5:Full :VolumeTag=50507F86
      Storage Element 6:Full :VolumeTag=50507F87
      Storage Element 7:Full :VolumeTag=50507F88

    Does anyone have any good documentation for implementing Bacula with an HP D2D tape drive for server backups, and how to allocate libraries?
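
    For the allocation half of the question, two hedged pointers. First, iSCSI discovery only lists targets; each target still needs an explicit login before its /dev/sg* node appears, so it is worth checking whether the other libraries' targets are simply not logged in (the IQN below is the robotics target from the discovery output):

      iscsiadm --mode node \
          --targetname iqn.1986-03.com.hp:storage.d2dbs.czj2020vvy.50014380075dcaf2.library12.robotics \
          --portal 10.66.59.114:3260 --login

    Second, a minimal bacula-sd.conf sketch for one library, assuming the changer stays at /dev/sg3 and the drive's tape node is /dev/nst0 (resource names, paths, and the media type are illustrative, not taken from this system):

      Autochanger {
        Name = "Library12"
        Device = "Library12-Drive1"
        Changer Device = /dev/sg3
        Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
      }
      Device {
        Name = "Library12-Drive1"
        Media Type = LTO-5
        Archive Device = /dev/nst0
        Autochanger = yes
        AutomaticMount = yes
        RemovableMedia = yes
      }

    Repeating the pair per library, with distinct names and device nodes, is how multiple libraries are normally allocated to one storage daemon.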


  • Rsync from godaddy to OS X

    - by Ola
    I would like to use rsync to back up my website to my local computer (OS X). I started off with this guide and got pretty far. I use the following rsync line:

      rsync -PzrlptgD --del --delete-excluded -r --rsync-path=~/bin/rsync user@server:~/ /local/backup/folder/

    I wanted to use the -a option (same as -rlptgoD), but it crashes as soon as I add the -o flag:

      receiving file list ...
      rsync: connection unexpectedly closed (8 bytes received so far) [receiver]
      rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9]

    If I skip the --owner flag it copies the files, but I'm not really sure what difference that makes (I've tried to read up on it but found nothing). Should I just skip the --owner flag, or have I made some other mistake? Thanks in advance //OL
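
    One relevant detail from the rsync man page: --owner only takes effect when the receiving rsync runs as the super-user, so for a personal backup it can usually be skipped without losing anything except ownership metadata. A hedged sketch of both variants (paths as in the question):

      # Without ownership; -rlptgD is -a minus -o, runs as a normal user:
      rsync -Pz -rlptgD --del --delete-excluded --rsync-path=~/bin/rsync \
          user@server:~/ /local/backup/folder/

      # With ownership preserved; the local (receiving) side must be root:
      sudo rsync -Pz -a --del --delete-excluded --rsync-path=~/bin/rsync \
          user@server:~/ /local/backup/folder/

    If the sudo variant still dies with code 255, the very old rsync 2.6.9 that Apple shipped is a plausible culprit, and a newer locally installed rsync would be the next thing to try.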


  • SATA drives or chipset throwing DRDY ERR and ICRC ABRT

    - by Matt
    I have an SD-VIA-1A2S PCI card with 2 SATA ports (and one ATA-133 port that isn't used). Two new Western Digital Caviar Green drives (WD10EARS, 1 TB) throw repeated errors in kern.log (date/time/host info removed for brevity):

      [ 7.376475] ata2.00: exception Emask 0x12 SAct 0x0 SErr 0x1000500 action 0x6
      [ 7.376480] ata2.00: BMDMA stat 0x5
      [ 7.376483] ata2: SError: { UnrecovData Proto TrStaTrns }
      [ 7.376489] ata2.00: cmd c8/00:40:20:00:00/00:00:00:00:00/e0 tag 0 dma 32768 in
      [ 7.376490]          res 51/84:2f:20:00:00/00:00:00:00:00/e0 Emask 0x12 (ATA bus error)
      [ 7.376493] ata2.00: status: { DRDY ERR }
      [ 7.376495] ata2.00: error: { ICRC ABRT }
      [ 7.376504] ata2: hard resetting link

    I'm using Ubuntu 9.04 (2.6.28-18-generic), though I have tried live CDs of Ubuntu 9.10, Fedora 12 and openSUSE 11.2 (all running various 2.6.31 kernels) and all produced the same error. Based on testing these drives and this card in two other machines, in combinations of connecting the drives directly to the motherboard or to the add-in card, I'm relatively convinced the VIA chipset is the problem: another computer with an onboard VIA SATA chipset (like the add-in card) produces the same errors with the drives on its motherboard ports. I have verified that the drives themselves are good, and I have tried everything I can think of: swapping cables, checking that the PSU isn't overloaded, and so on. The error happens once or twice at boot, once or twice after using fdisk on a drive, and constantly when syncing a new mdadm RAID 1 array created on the two drives. Any thoughts on where to go from here, driver- or kernel-wise? I'm completely open to buying a new PCI add-in card if someone can recommend one with 2 internal SATA ports that works well in Debian/Ubuntu. Thanks!
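
    Driver-wise, one knob that sometimes tames ICRC (link-level CRC) errors on a flaky controller is forcing the affected port down to 1.5 Gbps with the libata.force kernel parameter. A hedged example, untested on this particular VIA card, appended to the kernel line in /boot/grub/menu.lst (GRUB legacy on Ubuntu 9.04); the "2:" prefix targets ata2 from the log above:

      libata.force=2:1.5Gbps
      # or, additionally disabling NCQ on the same port:
      libata.force=2:1.5Gbps,2:noncq

    If the errors vanish at the lower speed, that points firmly at the chipset and signal path rather than at the drives or the kernel.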


  • Mono through FastCGI on nginx

    - by Stijn
    I'm going through http://www.mono-project.com/FastCGI_Nginx and can't get it to work. The FastCGI server seems to be running. The following is from the error log:

      upstream sent unexpected FastCGI record: 3 while reading response header from upstream,
      client: 192.168.1.125, server: arch, request: "GET /Default.aspx HTTP/1.1",
      upstream: "fastcgi://127.0.0.1:9000", host: "arch"

    Command used to start the server (I've tried server2 and server4, using a simple .NET 2.0 or .NET 4.0 project):

      fastcgi-mono-server2 /applications=arch:/:/var/www/test/public/ /socket=tcp:127.0.0.1:9000 /stopable=True

    nginx config:

      server {
          listen 80;
          server_name arch;
          access_log /var/www/test/log/access.log;
          error_log /var/www/test/log/error.log;

          location / {
              root /var/www/test/public;
              index index.html index.htm default.aspx Default.aspx;
              fastcgi_index Default.aspx;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_param PATH_INFO "";
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          }
      }

    Using xsp4 works fine; I can browse the site. I've enabled FastCGI logging, and this is the output:

      [2012-04-15 23:51:18Z] Debug Accepting an incoming connection.
      [2012-04-15 23:51:18Z] Notice Beginning to receive records on connection.
      [2012-04-15 23:51:18Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8)
      [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 386)
      [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 0)
      [2012-04-15 23:51:18Z] Debug Read parameter. (PATH_INFO = )
      [2012-04-15 23:51:18Z] Debug Read parameter. (SCRIPT_FILENAME = /var/www/test/public/Home)
      [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_HOST = arch)
      [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0)
      [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8)
      [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-gb,en;q=0.5)
      [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip, deflate)
      [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive)
      [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_COOKIE = ASP.NET_SessionId=2C3D702C9B0F23F69B80820B)
      [2012-04-15 23:51:18Z] Error Failed to process connection. Reason: Argument cannot be null. Parameter name: s
      [2012-04-15 23:51:18Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8)
      [2012-04-15 23:51:18Z] Debug The FastCGI connection has been closed.
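
    One observation from that log, offered as a lead rather than a confirmed fix: the backend only ever receives PATH_INFO, SCRIPT_FILENAME, and the HTTP_* headers. The standard variables nginx normally passes (REQUEST_METHOD, SERVER_PROTOCOL, SERVER_NAME, and so on) are missing because the location block never includes them, and fastcgi-mono-server derives several values from that set, which could plausibly end in a null-argument error. A sketch of the location with the stock parameter file added (its path is the Debian/Ubuntu default and may differ):

      location / {
          root /var/www/test/public;
          index index.html index.htm default.aspx Default.aspx;
          include /etc/nginx/fastcgi_params;    # adds REQUEST_METHOD, SERVER_PROTOCOL, etc.
          fastcgi_index Default.aspx;
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_param PATH_INFO "";
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      }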


  • Is anyone using KVM in production?

    - by Andy Shellam
    I've been trying to set up a pair of servers utilising KVM on Ubuntu 9.10, to host 8 virtual machines between them, and ended up with various issues ranging from VMs freezing to VMs not powering on. I had one virtual server set up and running, and was setting up a second, when any operation involving OpenSSL would cause the VM to lock up in a weird way: all network traffic would cease and it wouldn't process logins on the console, yet it wasn't taking any CPU time off the host. The first virtual server was identical and worked perfectly. Another VM I tried to set up installed Ubuntu fine, then refused to reboot, throwing kernel exceptions to do with XFS. I've now installed Citrix XenServer 5.5 on both hosts, and am setting up my third VM with absolutely no issues. I had much the same experience when I tried VMware, but I preferred Xen as it appears to offer more features on the free license. My question is: am I just unlucky with KVM, or is KVM as unstable as it appears? Are you using, or planning on using, KVM in production, and how successful have you been?


  • How to Install vaio-control-center from source?

    - by KasiyA
    I have a problem turning off the keyboard backlight, which I asked about here with no useful answers. After searching the internet I found a package for the VAIO control center and downloaded it from here, but I don't know how to install it. This is the output of trying one solution:

      USER@XXXXpc:~/vaio-control-center-0.1$ ls
      compile  Makefile             run               vaio-control-center      vcc
      COPYING  moc_main_window.cpp  ui_main_window.h  vaio-control-center.pro
      USER@XXXXpc:~/vaio-control-center-0.1$ ./compile
      make: *** No rule to make target `/usr/share/qt/mkspecs/linux-g++-64/qmake.conf', needed by `Makefile'.  Stop.
      USER@XXXXpc:~/vaio-control-center-0.1$ ./run
      ./run: 3: ./run: ./vaio-control-center: not found

    Update: I also tried @pandya's suggestion from here. The output is as follows:

      root@user-pc:/# cd /home/user/vaio-f11-linux.control-center
      root@user-pc:/home/user/vaio-f11-linux.control-center# ls
      compile  COPYING  resource.qrc  run  sony-acpid  vaio-control-center.pro  vcc
      root@user-pc:/home/user/vaio-f11-linux.control-center# ./compile
      -su: ./compile: Permission denied
      root@user-pc:/home/user/vaio-f11-linux.control-center# sudo ./compile
      sudo: ./compile: command not found
      root@user-pc:/home/user/vaio-f11-linux.control-center# gksudo ./compile
      root@user-pc:/home/user/vaio-f11-linux.control-center#      <----- nothing happened here
      root@user-pc:/home/user/vaio-f11-linux.control-center# ./run
      -su: ./run: Permission denied
      root@user-pc:/home/user/vaio-f11-linux.control-center# sudo ./run
      sudo: ./run: command not found
      root@user-pc:/home/user/vaio-f11-linux.control-center# gksudo ./run
      root@user-pc:/home/user/vaio-f11-linux.control-center#      <----- nothing happened here

    After running all of that I saw no effect on the keyboard backlight.
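
    Two separate problems are visible in those transcripts: the shipped Makefile hard-codes a qmake spec path from the packager's machine (hence the "No rule to make target .../qmake.conf" error), and the helper scripts have lost their execute bits (hence "Permission denied", with sudo and gksudo failing for the same underlying reason). A hedged sequence that regenerates the Makefile against the local Qt; package names assume Ubuntu with Qt 4, and the qmake binary may be called qmake-qt4 on some releases:

      sudo apt-get install build-essential qt4-qmake libqt4-dev
      cd ~/vaio-control-center-0.1
      chmod +x compile run            # restore the lost execute bits
      qmake vaio-control-center.pro   # regenerate Makefile for the local mkspecs
      make
      sudo ./vaio-control-center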


  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook on a central git repository (set up with gitolite) that triggers a git pull on a staging server. It seems to work properly, but throws a "command not found" error when it runs. I am trying to track down the source of the error but have not had any luck; running the same commands manually does not produce an error. The error changes depending on what was done in the commit being pushed. For instance, if a 'git rm' was committed and pushed, the error message is "remote: hooks/post-receive: line 16: Removed: command not found", and if a 'git add' was committed and pushed, it is "remote: hooks/post-receive: line 16: Merge: command not found". In either case the git pull on the staging server works correctly despite the error message. Here is the post-receive script:

      #!/bin/bash
      #
      # This script is triggered by a push to the local git repository. It will
      # ssh into a remote server and perform a git pull.
      #
      # The SSH_USER must be able to log into the remote server with a
      # passphrase-less SSH key *AND* be able to do a git pull without a passphrase.
      #
      # The command to actually perform the pull request on the remote server comes
      # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered
      # by the ssh login.

      SSH_USER="remoteuser"
      REMOTE_HOST="staging.server.com"

      `ssh $SSH_USER@$REMOTE_HOST` # This is line 16

      echo "Done!"

    The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file:

      command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key)

    This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo:

      ben@tamarack:~/thejibe/testing/web$ git rm ./testing
      rm 'testing'
      ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file"
      [master bb96e13] Remove testing file
       1 files changed, 0 insertions(+), 5 deletions(-)
       delete mode 100644 testing
      ben@tamarack:~/thejibe/testing/web$ git push
      Counting objects: 3, done.
      Delta compression using up to 2 threads.
      Compressing objects: 100% (2/2), done.
      Writing objects: 100% (2/2), 221 bytes, done.
      Total 2 (delta 1), reused 0 (delta 0)
      remote: From [email protected]:testing
      remote:    aa72ad9..bb96e13  master -> origin/master
      remote: hooks/post-receive: line 16: Removed: command not found   # The error msg
      remote: Done!
      To [email protected]:testing
         aa72ad9..bb96e13  master -> master
      ben@tamarack:~/thejibe/testing/web$

    As you can see, the post-receive script gets to the echo "Done!" line, and when I look on the staging server the git pull has run successfully, but there's still that nagging error message. Any suggestions on where to look for the source of the error would be greatly appreciated. I'm tempted to redirect stderr to /dev/null but would prefer to know what the problem is.
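
    Line 16 itself explains the message: the backticks run ssh, capture the remote git pull's output, and then the shell tries to execute that captured output as a command, so a pull whose output starts with "Removed ..." or "Merge ..." produces exactly these "command not found" errors while the pull itself still succeeds. A sketch of the fix:

      # Before (line 16): the remote output is substituted back in and re-executed
      `ssh $SSH_USER@$REMOTE_HOST`

      # After: run ssh directly; its output reaches the pusher as "remote:" lines
      ssh "$SSH_USER@$REMOTE_HOST"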


  • grep is inconsistently defaulting to grep -P?

    - by Sammitch
    I have a script that does some housekeeping and works perfectly well when invoked from an interactive shell, but did nothing when invoked by cron. To troubleshoot this I started a shell with a 'blank' environment:

      env -i /bin/bash --noprofile --norc

    Using this blank environment I've dug into my script and found that the following grep will not match any files:

      grep -il "^ws_status\s*=\s*[\"']remove[\"']$"

    However, when run from an interactive shell the command returns the filenames of the matching files. For reference, the expression is meant to match lines like:

      WS_STATUS = "remove"

    Through trial and error I discovered that adding -P (Perl regex) to the options makes the command work normally in the 'blank' shell. However, I have no idea why my login shell appears to default to grep -P:

      - There is only one grep binary, /bin/grep
      - There are no aliases defined for grep=pgrep or grep="grep -P"
      - There is no GREP_OPTIONS environment variable defined

    What's the deal here? Note: the OS is RHEL 5.10, bash is v3.2.25, grep is v2.5.1.
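
    A likely explanation, worth verifying rather than taking on faith: grep 2.5.1 does not understand the Perl-ism \s in its default (BRE) mode, so the blank environment is showing the true behaviour, and something in the login environment is altering grep, be it an alias or shell function defined by an rc file, a GREP_OPTIONS export from a profile script, or a different grep earlier in $PATH. A few checks to run in the login shell, plus a rewrite that needs no -P:

      type -a grep            # reveals aliases, functions, and every grep on $PATH
      alias -p | grep grep    # any alias definitions currently in force
      printenv GREP_OPTIONS   # options injected via the environment, if set

      # Portable version using POSIX character classes instead of \s:
      grep -il "^ws_status[[:space:]]*=[[:space:]]*[\"']remove[\"']$" *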


  • reset file permissions?

    - by acidzombie24
    In my /var/www folder I have permission 2750, with the owner being root (unless I change it by hand) and the group being www-data. When I mv a folder into /var/www, I'd like to reset its permissions so everything is 2750 and the group is www-data. Is it possible to do this in one command, or do I need multiple commands? (It's two commands, or three if I also want the same owner, but it would be nice to do it with one for this folder.)
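
    A hedged sketch (the path is illustrative): find can apply ownership and mode in a single pass, and chown sets owner and group together, so one command covers all three:

      find /var/www/mynewfolder -exec chown root:www-data {} + -exec chmod 2750 {} +

    Or, as the two-command version, which some find easier to read:

      chown -R root:www-data /var/www/mynewfolder && chmod -R 2750 /var/www/mynewfolder

    Note that this applies the same 2750 mode to regular files as well as directories, which is what was asked for here but is unusual for ordinary content files.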


  • Ubuntu Server - Power failure leads to boot failure

    - by Ali Nadalizadeh
    I have installed Ubuntu Server 10.04.1 LTS on an ext4 partition. Whenever the system loses power suddenly, it doesn't boot into the normal procedure that fixes the problems automatically, but drops into the BusyBox shell (where it says "Kernel panic: No init found"). So I guess the kernel is refusing to mount the filesystem when it is not clean: when I boot from a live CD and fsck it, the system then boots up correctly. How can I get the unclean filesystem handled at boot, so that the automated fsck on system startup fixes the problems? (Or is it a GRUB problem?) Kernel: 2.6.32-26-generic-pae #48-Ubuntu SMP.
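
    Forcing the check, rather than forcing the mount, is the safer direction: the panic follows from the root filesystem being unusable until its journal is replayed and checked. Two standard knobs on 10.04, hedged because neither has been tested against this exact machine (the UUID below is a placeholder):

      # 1. The sixth field (fs_passno) in /etc/fstab must be non-zero for /,
      #    otherwise the boot-time fsck never runs on it:
      UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  errors=remount-ro  0  1

      # 2. A one-off full check on the next boot:
      sudo touch /forcefsck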


  • Change EXT3 stride and stripe-width settings post-install on CentOS 5.3

    - by Justin Ellison
    Is there a way to change the stride and stripe-width options on an ext3 file system under CentOS/RHEL 5.3? I saw no way to specify them via anaconda during installation, and while the -E option to tune2fs is available under Ubuntu, it isn't in the manpage on CentOS; when I try the -E flag on CentOS, tune2fs rejects it as unknown. Does anyone have a way to do this short of reinstallation?
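
    A hedged workaround, on the assumption that CentOS 5.3's e2fsprogs (1.39) simply predates -E support in tune2fs: the stride and stripe-width hints live in the superblock, so a newer userspace tune2fs installed under /usr/local can set them without touching the distro packages or the kernel. Version, URL, and the RAID values below are illustrative:

      wget http://downloads.sourceforge.net/e2fsprogs/e2fsprogs-1.41.14.tar.gz
      tar xzf e2fsprogs-1.41.14.tar.gz && cd e2fsprogs-1.41.14
      ./configure --prefix=/usr/local && make && sudo make install
      sudo /usr/local/sbin/tune2fs -E stride=16,stripe-width=64 /dev/sdXN

    The exact spelling of the extended options is documented in the man page of the version actually installed, which is worth checking before running it.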


  • alias gcc='gcc -fpermissive' or modifying ./configure script

    - by robo
    I am compiling a quite big project from source. The compilation always ends with:

      error: invalid conversion from ‘const char*’ to ‘char*’ [-fpermissive]

    I already compiled this project a year ago, so I know a solution; actually, I found several:

      1. Adding a typecast to the appropriate line of C++ code. (This led to an endless number of changes in each file, so I looked for the next solution.)
      2. Modifying the makefile to compile that file with the -fpermissive option. (I had to modify a lot of lines in each makefile, so I looked for an even better solution.)
      3. "g++" or "gcc" was stored in a variable, so I added -fpermissive to those variables. This is the best solution I have: it is sufficient to add the option to each makefile once.

    Unfortunately this software has a big number of subdirectories, so I need to modify more than 100 makefiles; it took me a whole day a year ago. Is there a faster way? What about this:

      alias gcc='gcc -fpermissive'

    I am not familiar with aliases, but it should be easy to try. Is the syntax correct? And is this one correct as well?

      alias g++='g++ -fpermissive'

    Do I need to export the alias somehow? Will the make program respect the alias? Or should I change the ./configure script, or ./configure.in, or some other file?
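
    The alias route is a dead end regardless of syntax: make runs each recipe in a non-interactive /bin/sh, and non-interactive shells do not expand aliases, so neither alias would ever be seen by the build. The conventional single-point fix is to pass the flag through the compiler and flags variables, assuming this project's makefiles use the standard autoconf names:

      # At configure time; recorded once in every generated Makefile:
      ./configure CXXFLAGS="-fpermissive" CFLAGS="-fpermissive"

      # Or per build, without touching configure or any Makefile:
      make CXXFLAGS="-fpermissive"

      # Heavier hammer: swap in a wrapped compiler for this invocation only:
      make CXX="g++ -fpermissive"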


  • mono 3.0.2 + xsp + lighttpd delivers empty page

    - by Nefal Warnets
    I needed MVC 4 (and basic .NET 4.5) support, so I downloaded Mono 3.0.2 and deployed it on a lighttpd 1.4.28 installation, together with xsp-2.10.2 (the latest I could find). After going through the config tutorials I managed to get the FastCGI server to spawn, but all pages are served empty: even for nonexistent URLs or direct .aspx files I get an empty HTTP 200 response. The log file on Debug shows nothing suspicious. Here is the log:

      [2012-12-12 15:15:38Z] Debug Accepting an incoming connection.
      [2012-12-12 15:15:38Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8)
      [2012-12-12 15:15:38Z] Debug Record received. (Type: Params, ID: 1, Length: 801)
      [2012-12-12 15:15:38Z] Debug Record received. (Type: Params, ID: 1, Length: 0)
      [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_SOFTWARE = lighttpd/1.4.28)
      [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_NAME = xxxx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (GATEWAY_INTERFACE = CGI/1.1)
      [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_PORT = 80)
      [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_ADDR = xxxx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (REMOTE_PORT = xxx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (REMOTE_ADDR = xxxx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (SCRIPT_NAME = /ViewPage1.aspx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (PATH_INFO = )
      [2012-12-12 15:15:38Z] Debug Read parameter. (SCRIPT_FILENAME = /data/htdocs/ViewPage1.aspx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (DOCUMENT_ROOT = /data/htdocs)
      [2012-12-12 15:15:38Z] Debug Read parameter. (REQUEST_URI = /ViewPage1.aspx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (QUERY_STRING = )
      [2012-12-12 15:15:38Z] Debug Read parameter. (REQUEST_METHOD = GET)
      [2012-12-12 15:15:38Z] Debug Read parameter. (REDIRECT_STATUS = 200)
      [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_PROTOCOL = HTTP/1.1)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_HOST = xxxxx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_CACHE_CONTROL = max-age=0)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip,deflate,sdch)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-US,en;q=0.8)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.3)
      [2012-12-12 15:15:38Z] Debug Record received. (Type: StandardInput, ID: 1, Length: 0)
      [2012-12-12 15:15:38Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8)

    lighttpd config:

      server.modules += ( "mod_fastcgi" )
      include "conf.d/mono.conf"

      $HTTP["host"] !~ "^vdn\." {
          $HTTP["url"] !~ "\.(jpg|gif|png|js|css|swf|ico|jpeg|mp4|flv|zip|7z|rar|psd|pdf|html|htm)$" {
              fastcgi.server += ( "" => ((
                  "socket" => mono_shared_dir + "fastcgi-mono-server",
                  "bin-path" => mono_fastcgi_server,
                  "bin-environment" => (
                      "PATH" => mono_dir + "bin:/bin:/usr/bin:",
                      "LD_LIBRARY_PATH" => mono_dir + "lib:",
                      "MONO_SHARED_DIR" => mono_shared_dir,
                      "MONO_FCGI_LOGLEVELS" => "Debug",
                      "MONO_FCGI_LOGFILE" => mono_shared_dir + "fastcgi.log",
                      "MONO_FCGI_ROOT" => mono_fcgi_root,
                      "MONO_FCGI_APPLICATIONS" => mono_fcgi_applications
                  ),
                  "max-procs" => 1,
                  "check-local" => "disable"
              )) )
          }
      }

    The referenced mono.conf:

      index-file.names += ( "index.aspx", "default.aspx" )
      var.mono_dir = "/usr/"
      var.mono_shared_dir = "/tmp/"
      var.mono_fastcgi_server = mono_dir + "bin/" + "fastcgi-mono-server4"
      var.mono_fcgi_root = server.document-root
      var.mono_fcgi_applications = "/:."

    The document root for this server is /data/htdocs, and the ASP.NET files reside there. The lighttpd error logs show nothing. Every bit of help is greatly appreciated!
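
    One observation, offered as a lead: the log shows the request ending with an EndRequest record and no Stdout records at all, meaning the mono server accepted the request and returned an empty response, which is consistent with the application mapping never matching. A hedged way to isolate it is to take lighttpd's spawning out of the equation: run the server by hand against an absolute application path and watch its console (these fastcgi-mono-server4 options exist; the absolute path is the assumption being tested):

      fastcgi-mono-server4 /applications=/:/data/htdocs \
          /socket=tcp:127.0.0.1:9000 /loglevels=All /printlog=True

    While testing this way, the fastcgi.server entry would point at the TCP socket ("host" => "127.0.0.1", "port" => 9000, "check-local" => "disable") instead of the "socket"/"bin-path" pair, so lighttpd stops spawning its own copy.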


  • Apache error with suEXEC only

    - by michaelc
    When I enable suEXEC by following the tutorial here, I am able to get PHP to run over Apache in CGI mode, but when I start trying to use suEXEC I get a 403, and the following error appears in the error log: "client denied by server configuration". The suEXEC log is empty. How can I get this working? My ultimate goal is to run FastCGI with suEXEC, and this error has stopped me at every turn. The relevant portion of httpd.conf:

      ScriptAlias /php5-cgi /usr/bin/php-cgi
      Action php5-cgi /php5-cgi
      AddHandler php5-cgi .php

      <Directory /usr/bin>
          Order allow,deny
          Allow from all
      </Directory>

      <VirtualHost *:80>
          ServerName skylords.com
          ServerAlias www.skylords.com en.skylords.com lt.skylords.com nl.skylords.com
          DocumentRoot /srv/http/htdocs
          SuexecUserGroup skylords skylords
          AddHandler php5-cgi .php
          ScriptAlias /php5-cgi /var/http/htdocs/cgi-bin/php-cgi
          ErrorDocument 404 /srv/http/htdocs
          ErrorLog /srv/http/logs/apache_error.log
          <Directory "/srv/http/htdocs">
              AllowOverride All
              Order allow,deny
              Allow from all
              Options Indexes +FollowSymLinks +ExecCGI
          </Directory>
      </VirtualHost>
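
    Two details stand out, offered as hypotheses rather than a confirmed diagnosis. The vhost's ScriptAlias points at /var/http/htdocs/cgi-bin while everything else lives under /srv/http, so /var looks like a typo; and no <Directory> block grants access to that cgi-bin directory, and "client denied by server configuration" is precisely Apache refusing a path that nothing allows. A sketch of the missing pieces, assuming the wrapper really lives under the vhost docroot:

      ScriptAlias /php5-cgi /srv/http/htdocs/cgi-bin/php-cgi

      <Directory "/srv/http/htdocs/cgi-bin">
          Order allow,deny
          Allow from all
          Options +ExecCGI
      </Directory>

    Separately, suEXEC insists that the wrapper live under its compiled-in document root and be owned by the SuexecUserGroup user; suexec -V (run as root) prints the compiled-in settings, and a wrapper at /usr/bin/php-cgi can never satisfy them.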


  • how to get Geo::Coder::Many with cpan?

    - by mnemonic
    Ubuntu is installed for development of a Perl project, and I need Geo::Coder::Many. Aptitude only finds the Google backend:

      aptitude search Geo-Coder
      i   libgeo-coder-googlev3-perl - Perl module providing access to Google Map

    Aptitude has no package for Geo::Coder::Many, and cpan cannot build it:

      sudo cpan Geo::Coder::Many

    Output:

      CPAN: Storable loaded ok (v2.27)
      Going to read '/home/jh/.cpan/Metadata'
      Database was generated on Wed, 16 Oct 2013 06:17:04 GMT
      Running install for module 'Geo::Coder::Many'
      Running make for K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz
      CPAN: Digest::SHA loaded ok (v5.61)
      CPAN: Compress::Zlib loaded ok (v2.033)
      Checksum for /home/jh/.cpan/sources/authors/id/K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz ok
      CPAN: File::Temp loaded ok (v0.22)
      CPAN: Parse::CPAN::Meta loaded ok (v1.4401)
      CPAN: CPAN::Meta loaded ok (v2.110440)
      CPAN: Module::CoreList loaded ok (v2.49_02)
      CPAN: Module::Build loaded ok (v0.38)
      CPAN.pm: Going to build K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz

      Can't locate Geo/Coder/Many/Google.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27.
      Can't locate Geo/Coder/Many/Google in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27.
      BEGIN failed--compilation aborted at Build.PL line 54.
      Warning: No success on command[/usr/bin/perl Build.PL --installdirs site]
      CPAN: YAML loaded ok (v0.77)
      KAORU/Geo-Coder-Many-0.42.tar.gz
      /usr/bin/perl Build.PL --installdirs site -- NOT OK
      Running Build test
      Make had some problems, won't test
      Running Build install
      Make had some problems, won't install
      Could not read metadata file. Falling back to other methods to determine prerequisites

    Any suggestions on how to resolve this issue?
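
    A guess at a workaround, labelled as such: Build.PL (line 54) appears to load the distribution's own plugin Geo::Coder::Many::Google before ./lib is on @INC, and that plugin in turn wraps an external geocoder module. Installing the wrapped backend first, then re-running the build with ./lib made visible, may get past it:

      sudo cpan Geo::Coder::Google              # backend the Google plugin wraps
      cd ~/.cpan/build/Geo-Coder-Many-0.42-*    # build dir left by the failed run
      perl -Ilib Build.PL --installdirs site
      ./Build && ./Build test && sudo ./Build install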


  • 'skb rides the rocket' on Xen VM

    - by Kye
    I've just set up Ubuntu 13.10 Server as a VM on my Ubuntu/Xen server, and I'm getting these weird lines in my syslog:

      Nov 12 10:26:32 human kernel: [130782.315333] xennet: skb rides the rocket: 19 slots
      Nov 12 10:26:32 human kernel: [130782.362405] xennet: skb rides the rocket: 20 slots
      Nov 12 10:26:32 human kernel: [130782.408458] xennet: skb rides the rocket: 19 slots
      Nov 12 10:26:32 human kernel: [130782.490260] xennet: skb rides the rocket: 20 slots
      Nov 12 10:26:32 human kernel: [130782.541931] xennet: skb rides the rocket: 19 slots
      Nov 12 10:26:35 human kernel: [130785.226635] xennet: skb rides the rocket: 19 slots
      Nov 12 10:26:35 human kernel: [130785.261026] xennet: skb rides the rocket: 21 slots
      Nov 12 10:26:35 human kernel: [130785.469306] xennet: skb rides the rocket: 19 slots
      Nov 12 10:26:36 human kernel: [130786.552730] xennet: skb rides the rocket: 21 slots
      Nov 12 10:26:38 human kernel: [130788.212747] xennet: skb rides the rocket: 20 slots
      Nov 12 10:26:38 human kernel: [130788.257544] xennet: skb rides the rocket: 19 slots
      Nov 12 10:26:38 human kernel: [130788.903841] xennet: skb rides the rocket: 19 slots

    I'm unsure what they mean, and Google turns up nothing meaningful. Any help is appreciated.
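
    For what it's worth: this message comes from the Xen netfront driver when an outgoing packet would need more transmit-ring slots than the backend accepts (typically a large TSO/GSO packet), and the packet is dropped. The commonly reported workaround is to disable those offloads on the guest's interface, at the cost of some extra CPU; the interface name is assumed to be eth0:

      sudo ethtool -K eth0 sg off tso off gso off

      # To persist across reboots on Ubuntu, e.g. in /etc/network/interfaces:
      #   post-up ethtool -K eth0 sg off tso off gso off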


  • CentOS RPM database trashed, "rpm --rebuilddb" won't fix it; can I recover using /var/lib/rpm/ from a 2nd identical server?

    - by user18330
    My RPM database is shot; neither rpm nor yum works. Supposedly "rpm --rebuilddb" will fix it, but it doesn't in my case. This server has three sister servers that are basically identical and have working RPM databases. I tried copying /var/lib/rpm/ from a working server to the sick one, but that didn't fix it. Any ideas on how I can use a good server's RPM database to fix the sick one?
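
    Before transplanting anything, the usual first-aid sequence is worth one more try, because --rebuilddb operates inside a Berkeley DB environment whose stale region files are themselves often the corrupt part and are not removed by the rebuild. A hedged sketch (as root):

      cp -a /var/lib/rpm /var/lib/rpm.bak   # preserve the broken state first
      rm -f /var/lib/rpm/__db.*             # stale BDB environment/lock files
      rpm --rebuilddb
      rpm -qa | wc -l                       # sanity check: expect a plausible count

    Copying /var/lib/rpm from a sister server produces a database describing that server's packages, so it only "works" to the degree the installed sets are truly identical; any drift leaves rpm lying about what is on disk. If the Packages file itself is damaged, db_dump and db_load from the db4-utils package can sometimes salvage it.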


  • Running out of LowMem with Ubuntu PAE Kernel and 32GB of RAM

    - by magneticMonster
    I'm running a Java data import process on a 32-bit Ubuntu 10 PAE-kernel machine. After the process runs for a while, the oom-killer zaps my Java process. After some Googling and digging through docs, it looks like the system is running out of LowMem. I started the process for the third time and am watching free -lm report "Low: 464 386 77", with the free value (77 MB) slowly decreasing. Why am I running out of lowmem, and how do I increase it? Some details:

      $ cat /proc/sys/vm/lowmem_reserve_ratio
      256     256     32

      $ free -lm
                   total       used       free     shared    buffers     cached
      Mem:         32086      24611       7475          0          0      24012
      Low:           464        407         57
      High:        31621      24204       7417
      -/+ buffers/cache:        598      31487
      Swap:         2047          0       2047
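
    For background: on 32-bit x86 the kernel permanently maps only about 896 MB (ZONE_NORMAL, the "Low" line above). PAE lets it address the other 31 GB as highmem, but page tables, many slab caches, and other kernel allocations must still fit in that fixed window, so lowmem cannot really be "increased"; the durable fix is a 64-bit kernel. What can be tuned is how jealously the kernel reserves lowmem against allocations that could have used highmem. A hedged stopgap, with values that are a starting point for experiment rather than a recommendation (a smaller ratio means a larger reserve; the current values are 256 256 32):

      sudo sysctl -w vm.lowmem_reserve_ratio="256 256 16"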


  • How to configure postfix for per-sender SASL authentication

    - by Marwan
    I have two Gmail accounts, and I want to configure my local postfix server as a client that does SASL authentication with smtp.gmail.com:587 using credentials that depend on the sender address. So, let's say my Gmail accounts are [email protected] and [email protected] (addresses redacted). If I send mail with the first account in the From header field, postfix should authenticate to the Gmail SMTP server with that account's credentials, and likewise for the second. Sounds fairly simple. I followed the official postfix documentation at http://www.postfix.org/SASL_README.html and ended up with the following relevant configuration:

      /etc/postfix/main.cf:

      smtp_sasl_auth_enable = yes
      smtp_sasl_security_options = noanonymous
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sender_dependent_authentication = yes
      sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
      smtp_tls_security_level = secure
      smtp_tls_CAfile = /etc/ssl/certs/Equifax_Secure_CA.pem
      smtp_tls_CApath = /etc/ssl/certs
      smtp_tls_session_cache_database = btree:/etc/postfix/smtp_scache
      smtp_tls_session_cache_timeout = 3600s
      smtp_tls_loglevel = 1
      tls_random_source = dev:/dev/urandom
      relayhost = smtp.gmail.com:587

      /etc/postfix/sasl_passwd:

      [email protected]         [email protected]:passwd1
      [email protected]         [email protected]:passwd2
      smtp.gmail.com:587      [email protected]:passwd1

      /etc/postfix/sender_relay:

      [email protected]         smtp.gmail.com:587
      [email protected]         smtp.gmail.com:587

    After I was done with the configuration I ran:

      $ postmap /etc/postfix/sasl_passwd
      $ postmap /etc/postfix/sender_relay
      $ /etc/init.d/postfix restart

    The problem is that when I send mail from the second account, the message arrives at the destination with the first account as the sender address, which means postfix always ignores the per-sender configuration and sends the mail using the default credentials (the third line in /etc/postfix/sasl_passwd above). I have checked the configuration multiple times and compared it to various blog posts addressing the same issue, and it is more or less the same as theirs. Can anyone point me in the right direction, in case I'm missing something? Many thanks.
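
    Two things worth checking, neither visible in the configuration above. First, the per-sender lookups key on the envelope sender, not the From header, so they can be tested directly, and the mail log shows which relay and credentials were actually chosen (addresses as redacted in the post):

      postmap -q "[email protected]" hash:/etc/postfix/sasl_passwd
      postmap -q "[email protected]" hash:/etc/postfix/sender_relay
      tail -f /var/log/mail.log    # watch the relay= field at delivery time

    Second, even when per-sender SASL works, Gmail rewrites the visible sender to the authenticated account unless the other address is registered as a verified "Send mail as" alias in that Gmail account, which would produce exactly the symptom described here.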

