Search Results

Search found 13518 results on 541 pages for 'daniel root'.


  • How to set up a virtual host in Ubuntu?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to serve /var/www/myapp when I type apps.mydomain.com/myapp - how do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because this is now in production and it's a little different. Here's my virtual host config:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName apps.mydomain.com/myapp
            DocumentRoot /var/www/myapp/public
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Any idea why I still see the files instead of being pointed to the document root? Just in case someone asks, the app is based on the Laravel 4 framework. It's really bad right now because anyone can access the files from the browser.
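
    A possible direction (a sketch, not a verified fix): ServerName only accepts a hostname, so the /myapp path in it is ignored. One way to get apps.mydomain.com/myapp served from the app's public folder is to keep just the hostname in ServerName and map the path with Alias:

        <VirtualHost *:80>
            ServerName apps.mydomain.com
            DocumentRoot /var/www/myapp/public

            # map the /myapp URL path onto the Laravel public folder
            Alias /myapp /var/www/myapp/public

            <Directory /var/www/myapp/public>
                Options FollowSymLinks
                AllowOverride All        # lets Laravel's .htaccess rewrite to index.php
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Pointing both DocumentRoot and the Alias at public/ (rather than at /var/www/myapp itself) is also what keeps the rest of the Laravel tree out of the browser's reach.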

    Read the article

  • Apache 2.2: present RSS HTTP 410 pages with an application/rss+xml content type

    - by Mark Bakker
    I have a problem sending HTTP 410 for very old RSS feeds. Functionally this can happen when:

    1. Very old RSS feeds where the content is not updated anymore / the subject could not move to another feed
    2. Migration from a third-party site to our site where the RSS feed is no longer supported

    I tried several things in my site config, see below:

        <VirtualHost *:80>
            DocumentRoot /opt/tomcat/webapps/ROOT/
            ErrorDocument 500 /error/static/error-500.html
            ErrorDocument 503 /error/static/error-500.html
            ErrorDocument 404 /error/static/rss/error-404.html
            ErrorDocument 410 /error/static/rss/error-410.html
            # When error pages need to be served by apache,
            # exclude the files to serve as below (in comment)
            SetEnvIf Request_URI "/error/static/*" no-jk
            # force all files to be image/gif:
            <Location *.rss>
            #<Location *>
            #ForceType application/rss+xml
            </Location>
            #AddType application/rss+xml .rss
            #AddType application/rss+xml .xml
            #AddType application/rss+xml .html
            JkMount /* rss;use_server_errors=402
            # JkMount /* rss
            RewriteEngine on
            JkMount /news.rss rss
            JkMount /documenten-en-publicaties.rss rss
            RewriteEngine on
            RewriteRule ^/news.rss$ - [NC,T=application/rss+xml,G,L]
            RewriteRule ^/documenten-en-publicaties.rss$ - [NC,G,L]
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            ErrorLog "|/usr/bin/logger -s -p local3.err -t 'Apache'"
            CustomLog "|/usr/bin/logger -s -p local2.info -t 'Apache'" combined
            ServerSignature Off
        </VirtualHost>

    The desired end result: on /news.rss and /documenten-en-publicaties.rss, a 410 response whose error page has content and is served with the content type 'application/rss+xml'.
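
    One direction worth trying (a sketch, not tested against this setup): let mod_rewrite mark the two feeds as Gone, keep the error pages excluded from mod_jk, and force the content type on the 410 error document itself rather than on the original URL:

        # mark the retired feeds as 410 Gone
        RewriteEngine on
        RewriteRule ^/news\.rss$ - [NC,G,L]
        RewriteRule ^/documenten-en-publicaties\.rss$ - [NC,G,L]

        # serve the Apache error page, not Tomcat, for these paths
        SetEnvIf Request_URI "^/error/static/" no-jk

        # the body sent with the 410, and its content type
        ErrorDocument 410 /error/static/rss/error-410.html
        <Location "/error/static/rss/error-410.html">
            ForceType application/rss+xml
        </Location>

    The T= flag on the rewrite rule only affects the original request; the header the client actually sees comes from the error document, which is why the ForceType is attached to that file.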

    Read the article

  • Apache + mod_fcgid + perl = error 500

    - by f-aminov
    Hi guys! I'm trying to set up Apache 2.2 with mod_fcgid and libapache2-mod-perl2 with no luck. I've created a fcgi-bin directory in the root directory of my website and put a test.fcgi file there with the following content:

        #!/usr/bin/perl
        use CGI;
        print "This is test.fcgi!\n";

    While trying to access it via http://www.website.dom/fcgi-bin/test.fcgi I get error 500 (Internal Server Error). Here is my vhost config:

        <VirtualHost 95.131.29.226:8080>
            ServerName website.com
            DocumentRoot /var/www/data/website.com
            SuexecUserGroup user group
            ServerAlias www.website.com
            AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
            <Directory "/var/www/data/website.com/fcgi-bin/">
                Options +ExecCGI
                Allow from all
                Order allow,deny
                AddHandler fcgid-script .fcgi
            </Directory>
        </VirtualHost>

    fcgid.conf:

        <IfModule mod_fcgid.c>
            AddHandler fcgid-script .fcgi
            SocketPath /var/lib/apache2/fcgid/sock
            IdleTimeout 3600
            ProcessLifeTime 7200
            MaxProcessCount 8
            DefaultMaxClassProcessCount 2
            IPCConnectTimeout 8
            IPCCommTimeout 60
        </IfModule>

    SuExec log:

        [2010-04-06 03:02:47]: uid: (500/equ) gid: (502/equ) cmd: test.fcgi

    Apache error log:

        test!
        test!
        [Tue Apr 06 03:02:51 2010] [notice] mod_fcgid: process /var/www/data/website.com/fcgi-bin/test.fcgi(26267) exit(communication error), terminated by calling exit(), return code: 0
        [Tue Apr 06 03:02:53 2010] [notice] mod_fcgid: process /var/www/data/website.com/fcgi-bin/test.fcgi(26261) exit(server exited), terminated by calling exit(), return code: 0

    I have no clue why I'm getting error 500, but when I try to access this file from the console ($ perl /var/www/data/website.com/fcgi-bin/test.fcgi) everything works fine without any errors... Any suggestions on how to solve this problem would be greatly appreciated. Thank you!
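
    Two things jump out as worth checking, offered as hedged guesses: the script never prints a CGI header, and a script run under mod_fcgid normally speaks the FastCGI protocol by looping on accept calls (the exit(communication error) entries would fit either problem). A minimal sketch of a test script that does both, assuming the Perl FCGI module is installed:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use FCGI;

        # keep the process alive and answer each request with a valid header
        my $request = FCGI::Request();
        while ($request->Accept() >= 0) {
            print "Content-Type: text/plain\r\n\r\n";
            print "This is test.fcgi!\n";
        }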

    Read the article

  • MySQL binlogs seem incomplete?

    - by warl0ck
    I created a database, a table and inserted some data, and found this binlog.000001 in my log folder, but when I run mysqlbinlog binlog.000001, it only shows the output below and seems incomplete. (There are only two files in the log dir: binlog.000001 and binlog.index.)

        /*!40019 SET @@session.max_insert_delayed_threads=0*/;
        /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
        DELIMITER /*!*/;
        # at 4
        #120924 21:12:56 server id 1  end_log_pos 107  Start: binlog v 4, server v 5.5.24-0ubuntu0.12.04.1-log created 120924 21:12:56 at startup
        # Warning: this binlog is either in use or was not closed properly.
        ROLLBACK/*!*/;
        BINLOG '
        GAVhUA8BAAAAZwAAAGsAAAABAAQANS41LjI0LTB1YnVudHUwLjEyLjA0LjEtbG9nAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAYBWFQEzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA==
        '/*!*/;
        DELIMITER ;
        # End of log file
        ROLLBACK /* added by mysqlbinlog */;
        /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;

    If this warning is the cause - "Warning: this binlog is either in use or was not closed properly." - how do I force-close the log?

    EDIT: After a flush logs command I see "0 rows affected" and a few new files: binlog.000001 binlog.000002 binlog.000003 binlog.000004 binlog.index; their contents are nearly the same as binlog.000001. Now I dropped the database and tried to restore it with mysqlbinlog binlog.0* | mysql -u root -p, but the database wasn't recovered.

    EDIT 2: the [mysqld] section of my MySQL config:

        [mysqld]
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        lc-messages-dir = /usr/share/mysql
        skip-external-locking
        log-bin         = /var/log/mysql/binlog
        binlog-do-db    = mydb
        bind-address    = 127.0.0.1
        key_buffer         = 16M
        max_allowed_packet = 16M
        thread_stack       = 192K
        thread_cache_size  = 8
        myisam-recover     = BACKUP
        query_cache_limit  = 1M
        query_cache_size   = 16M
        expire_logs_days   = 10
        max_binlog_size    = 100M

    P.S. /var/log/mysql{.err,.log} are both empty.
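
    One hedged guess at why the binlog stays nearly empty: with binlog-do-db=mydb and statement-based logging, MySQL only logs statements issued while mydb is the default database (i.e. after USE mydb), so statements run with another default database never reach the log. A quick way to test and to replay what is there (paths taken from the config above):

        # statements must run with mydb selected to be logged; rotate the log afterwards
        mysql -u root -p -e "USE mydb; SHOW TABLES; FLUSH LOGS;"

        # replay only that database's events from all binlogs after a drop
        mysqlbinlog --database=mydb /var/log/mysql/binlog.[0-9]* | mysql -u root -p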

    Read the article

  • PHP Startup: Unable to load dynamic library 'C:"\php\php_mysql.dll' - The specified module could not be loaded

    - by Tiny
    I'm trying to upgrade to PHP 5.4.14 from PHP 5.4.3 in WampServer 2.2e. I have downloaded php-5.4.14-Win32-VC9-x86 (thread safe) and extracted it under C:\wamp\bin\php. I copied wampserver.conf from C:\wamp\bin\php\php5.4.3 to C:\wamp\bin\php\php5.4.14 and renamed php.ini-development to phpForApache.ini. The port number of the WAMP server has been changed in the httpd.conf file to 8087 from its default 80. This is mentioned here, though it is about upgrading from PHP 5.3.5 to PHP 5.4.0. After this, restarting the WAMP server and services all over again has been done, and both versions appear in the php-versions menu (which opens when the server's tray icon is clicked). But when I attempt to enable a library like php_mysql or php_mysqli, a warning message box appears:

        PHP Startup: Unable to load dynamic library 'C:"\php\php_mysql.dll' - The specified module could not be loaded.

    I have also tried removing the semicolon before them in the php.ini file, but to no avail. I'm running Microsoft Windows XP Professional Version 2002, Service Pack 3. Where might the problem be?

    EDIT: I have changed extension_dir from C:\php to c:\wamp\bin\php\php5.4.14\ext\ in php.ini as the answer below indicates, and the library is now loaded correctly, but it now says: 1045 - Access denied for user 'root'@'localhost' (using password: YES), though the user name and the password are the same as they are in MySQL in the config.inc.php file under phpMyAdmin. I have also tried to restart the MySQL56 service from Control Panel > Services (Local), but it keeps giving the same error. Does someone know why this happens?

    Read the article

  • Need help with some IIS7 web.config compression settings.

    - by Pure.Krome
    Hi folks, I'm trying to configure my IIS7 compression settings in my web.config file. I'm trying to enable gzip for HTTP 1.0 requests. MSDN has all the info about it here. Is it possible to have this config info in my own website's web.config file, or do I need to set it at an application level? Currently, I have this code in my web.config:

        <system.webServer>
            <urlCompression doDynamicCompression="true"
                            dynamicCompressionBeforeCache="true" />
            <httpCompression cacheControlHeader="max-age=86400"
                             noCompressionForHttp10="False"
                             noCompressionForProxies="False"
                             sendCacheHeaders="true" />
            ... other stuff snipped ...
        </system.webServer>

    It's not working :( HTTP 1.1 requests are getting compressed, just not 1.0. The MSDN page above says that it can be used in:

    - Machine.config
    - ApplicationHost.config
    - Root application Web.config
    - Application Web.config
    - Directory Web.config

    So, can we set these settings on a per-website basis, programmatically in a web.config file? (This is an Application Web.config file...) What have I done wrong? Cheers :)

    EDIT: I was asked how I know HTTP 1.0 is not getting compressed. I'm using the Failed Request Tracing rules, which report back:

        DYNAMIC_COMPRESSION_START
        DYNAMIC_COMPRESSION_NOT_SUCESS
            Reason: 3
            Reason: NO_COMPRESSION_10
        DYNAMIC_COMPRESSION_END
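
    One hedged observation: in a default IIS7 install the system.webServer/httpCompression section is locked at the applicationHost.config level (overrideModeDefault="Deny"), so values like noCompressionForHttp10 set in a site or application web.config are silently ignored, while urlCompression is delegated and does take effect - which would match the symptom above. A sketch of setting it at the server level instead, via appcmd:

        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /noCompressionForHttp10:"False" /commit:apphost

    If per-site delegation is really wanted, the section's overrideMode can be switched to Allow in applicationHost.config, after which the web.config values should start being honoured.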

    Read the article

  • System hangs while rebooting on Debian...

    - by Usman
    Hi, I have Debian (kernel 2.6.26-2-686) installed on two computers. On one of them it reboots just fine, but I am having the following problem with rebooting Debian on my second computer. When I type reboot at the Linux prompt, the following messages appear and the system hangs after saying "Restarting System":

        Broadcast message from root@myname (tty1) (Sun Jan 17 11:23:26 2010)
        The system is going down for reboot NOW!
        INIT: Switching to runlevel: 6
        INIT: Sending processes the TERM signal
        Saving system clock
        Stopping enhanced syslog: rsyslogd.
        Asking all remaining processes to terminate...done.
        Deconfiguring network interfaces...done.
        Cleaning up ifupdown....
        Deactivating swap...done.
        [   31.789103] Restarting System.
        _

    Normally when the system is busy the "_" cursor blinks, but the "_" at the last line above does not blink, which shows the system has hung. I tried all keys but the screen is still frozen at the same point. The difference I noted between my two computers is that I don't have ACPI support in the BIOS of the system that gives me this error, whereas the BIOS of my first computer does have ACPI support, and on it Debian does not have this restart-hanging problem. I have also disabled the acpid script by running update-rc.d -f acpid remove, but the problem still persists on the second computer. Any ideas to solve or get around this problem?
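
    A possible workaround (a sketch, untested on this board, and the root= device below is a placeholder): the kernel's reboot= parameter switches the method used to reset the machine, which sometimes helps on boards without usable ACPI. Values are tried one at a time on the bootloader's kernel line:

        # e.g. in /boot/grub/menu.lst, appended to the kernel line
        kernel /vmlinuz-2.6.26-2-686 root=/dev/sda1 ro quiet reboot=bios
        # other values worth trying: reboot=kbd, reboot=triple, reboot=acpi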

    Read the article

  • CentOS 6 debuginfo repository does not have an httpd debug version available

    - by Zippy Zeppoli
    I am trying to get the debug version of httpd so I can use it in conjunction with gdb. I am having a hard time getting it, and it doesn't seem to be in the standard epel-debuginfo repository. What should I do?

        [root@buildbox-rhel6 ~]# debuginfo-install httpd
        Loaded plugins: fastestmirror, presto
        enabling epel-debuginfo
        Loading mirror speeds from cached hostfile
        epel-debuginfo/metalink                                  | 8.3 kB     00:00
         * base: mirrors.cicku.me
         * epel: mirrors.kernel.org
         * epel-debuginfo: mirrors.kernel.org
         * extras: mirrors.arpnetworks.com
         * updates: linux.mirrors.es.net
        epel-debuginfo                                           | 3.1 kB     00:00
        epel-debuginfo/primary_db                                | 487 kB     00:01
        Checking for new repos for mirrors
        Could not find debuginfo for main pkg: httpd-2.2.15-15.el6.centos.1.x86_64
        Could not find debuginfo pkg for dependency package apr-1.3.9-5.el6_2.x86_64
        Could not find debuginfo pkg for dependency package apr-util-1.3.9-3.el6_0.1.x86_64
        Could not find debuginfo pkg for dependency package glibc-2.12-1.80.el6_3.6.x86_64
          (the glibc line above repeats for each glibc dependency)
        Could not find debuginfo pkg for dependency package db4-4.7.25-17.el6.x86_64
        Could not find debuginfo pkg for dependency package expat-2.0.1-11.el6_2.x86_64
        Could not find debuginfo pkg for dependency package openldap-2.4.23-26.el6_3.2.x86_64
        Could not find debuginfo pkg for dependency package pcre-7.8-4.el6.x86_64
        Could not find debuginfo pkg for dependency package libselinux-2.0.94-5.3.el6.x86_64
        Could not find debuginfo pkg for dependency package zlib-1.2.3-27.el6.x86_64
        No debuginfo packages available to install
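
    A hedged pointer: the debuginfo packages for CentOS base packages (httpd, glibc, apr, ...) live in the separate CentOS debuginfo repository at debuginfo.centos.org, not in EPEL's debuginfo repo, which only covers EPEL packages. A sketch of a repo file to add before re-running debuginfo-install (the gpgkey path assumes the stock CentOS 6 key location):

        # /etc/yum.repos.d/CentOS-Debuginfo.repo
        [debug]
        name=CentOS-6 - Debuginfo
        baseurl=http://debuginfo.centos.org/6/$basearch/
        gpgcheck=1
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Debug-6
        enabled=1

    With that repository enabled, debuginfo-install httpd should be able to pull httpd-debuginfo plus the dependency debuginfo packages it was complaining about.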

    Read the article

  • Whitelist IP from google-authenticator in sshd pam

    - by spudwaffle
    My Ubuntu 12.04 server uses the google-authenticator PAM module to provide two-step authentication for ssh. I need to make it so that a certain IP does not need to type the verification code. The /etc/pam.d/sshd file is below:

        # PAM configuration for the Secure Shell service

        # Read environment variables from /etc/environment and
        # /etc/security/pam_env.conf.
        auth       required     pam_env.so # [1]
        # In Debian 4.0 (etch), locale-related environment variables were moved to
        # /etc/default/locale, so read that as well.
        auth       required     pam_env.so envfile=/etc/default/locale

        # Standard Un*x authentication.
        @include common-auth

        # Disallow non-root logins when /etc/nologin exists.
        account    required     pam_nologin.so

        # Uncomment and edit /etc/security/access.conf if you need to set complex
        # access limits that are hard to express in sshd_config.
        # account  required     pam_access.so

        # Standard Un*x authorization.
        @include common-account

        # Standard Un*x session setup and teardown.
        @include common-session

        # Print the message of the day upon successful login.
        session    optional     pam_motd.so # [1]

        # Print the status of the user's mailbox upon successful login.
        session    optional     pam_mail.so standard noenv # [1]

        # Set up user limits from /etc/security/limits.conf.
        session    required     pam_limits.so

        # Set up SELinux capabilities (need modified pam)
        # session  required     pam_selinux.so multiple

        # Standard Un*x password updating.
        @include common-password
        auth       required     pam_google_authenticator.so

    I've already tried adding an "auth sufficient pam_exec.so /etc/pam.d/ip.sh" line above the google-authenticator line, but I can't work out how to check an IP address in the bash script.
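
    One approach worth sketching (untested here, and the whitelisted address below is a placeholder): let pam_access decide whether to skip the authenticator line. If the client matches the access file, the [success=1 ...] control skips the next module; otherwise it falls through and the verification code is still required:

        # /etc/pam.d/sshd - replace the last auth line above with:
        auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/access-local.conf
        auth required                   pam_google_authenticator.so

        # /etc/security/access-local.conf
        # "allow" (i.e. skip the code) only for the trusted address, no match for anyone else
        + : ALL : 192.168.1.50
        - : ALL : ALL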

    Read the article

  • Favorite tricks with linux kernel boot parameters?

    - by ~drpaulbrewer
    Most Linux bootloaders let you edit the kernel boot command line before booting. There are often lots of parameters available - Knoppix, for instance, has a list on their Knoppix Cheat Codes page - but most are applicable only to compatibility and special situations. A few are hidden gems. Common uses of these codes are to boot to single-user mode, alter the screen mode or drivers, or specify an alternative root directory. Other more exotic uses are possible. Some Linux distributions let you copy the boot CD into RAM. Others (e.g., Ubuntu) let you use preseed files to clone installs when setting up multiple systems - useful when installing a lab full of computers without having to babysit each install. What other tricks have you found useful in system installs, repairs, backups, restores, establishing temporary servers, or other tasks? To add your favorite trick to the list: as much of the code for these options goes on either in initrd, or in a service handler that detects the kernel parameters, please list (1) the kernel boot line parameter, (2) what it does, and (3) the Linux distribution and any required packages to activate the feature. Thanks.
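
    For instance, the two tricks mentioned above look roughly like this on the boot line (a sketch; the preseed URL is a placeholder):

        # Knoppix: copy the whole live CD into RAM so the drive can be ejected
        knoppix toram

        # Debian/Ubuntu installer: fetch a preseed file to clone an install
        auto url=http://192.168.1.5/preseed.cfg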

    Read the article

  • NIS: which mechanism hides shadow.byname from unprivileged users?

    - by Mark Salzer
    On a Linux box (SLES 11.1) which is a NIS client I can run, as root:

        ypcat shadow.byname

    and get output, i.e. some lines with the encrypted passwords, among other information. On the same Linux box, if I run the same command as an unprivileged user, I get:

        No such map shadow.byname. Reason: No such map in server's domain

    Now I am surprised. My good old knowledge says that shadow passwords in NIS are absurd, because there is no access control or authentication in the protocol, and thus every (unprivileged) user can access the shadow map and thereby obtain the encrypted passwords. Obviously we have a different picture here. Unfortunately I don't have access to the NIS server to figure out what is happening. My only guess is that the NIS master gives the map only to clients connecting from a privileged port (< 1024), but this is only an uneducated guess. What mechanisms are there in current NIS implementations that lead to behavior like the above? How "secure" are they? Can they be circumvented easily? Or are shadow passwords in NIS as secure as the good old shadow files?
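
    For reference, a hedged guess at the mechanism: ypserv supports per-map access rules in /etc/ypserv.conf, and its "port" security level hands a map only to requests arriving from a privileged source port (< 1024), which matches the root-versus-user behaviour above. A sketch of what such rules look like on the server:

        # /etc/ypserv.conf on the NIS master (illustrative)
        # host : domain : map                   : security
        *      : *      : shadow.byname         : port
        *      : *      : passwd.adjunct.byname : port

    Since the check is only "does the query come from a port below 1024", anyone with root on any host allowed to reach the server can still dump the map, so it is a speed bump rather than real protection.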

    Read the article

  • One nginx rule for lots of subdomains

    - by komase
    I have lots of subdomains on a server. Every subdomain has its own Drupal Boost rules, as in the code below:

        server {
            server_name subdomain1.website.com;
            location / {
                root /var/www/html/subdomain/subdomain1.website.com;
                index index.php;
                set $boost "";
                set $boost_query "_";
                if ( $request_method = GET ) { set $boost G; }
                if ($http_cookie !~ "DRUPAL_UID") { set $boost "${boost}D"; }
                if ($query_string = "") { set $boost "${boost}Q"; }
                if ( -f $document_root/cache/normal/$host$request_uri$boost_query.html ) { set $boost "${boost}F"; }
                if ($boost = GDQF) { rewrite ^.*$ /cache/normal/$host/$request_uri$boost_query.html break; }
                if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?q=$1 last; break; }
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/html/subdomain/subdomain1.website.com$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I keep adding subdomain rules manually from time to time, and the size of nginx.conf has become too big. So I need one nginx rule which does:

        subdomain1.website.com -> /var/www/html/subdomain/subdomain1.website.com
        subdomain2.website.com -> /var/www/html/subdomain/subdomain2.website.com
        subdomain3.website.com -> /var/www/html/subdomain/subdomain3.website.com
        ...and so on

    so that I never need to add rules for new *.website.com subdomains in the future.
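
    A sketch of one way to collapse these into a single server block (assuming an nginx new enough for named regex captures, 0.8.25+ with PCRE 7): capture the host with a regex server_name and build the root from it, so the Boost rules are written only once:

        server {
            # matches any subdomain and captures it as $sub
            server_name ~^(?<sub>[^.]+)\.website\.com$;
            root /var/www/html/subdomain/$sub.website.com;
            index index.php;

            location / {
                # ... the existing Boost rules, unchanged (they already use $document_root and $host) ...
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }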

    Read the article

  • Black screen appears when booting new install of Ubuntu 11.10 on my desktop, cannot access Grub menu to fix

    - by izn
    I installed 11.10 on my desktop PC but get a black screen after the BIOS screen when I try to boot it. I was able to run 10.04.04 on my hard drive before installing 11.10, and I am also able to use 11.10 on my USB pendrive and CD-ROM. I've tried unplugging all USB devices before booting and also upgrading from 11.10 to 11.10. Holding the Shift key from the BIOS screen doesn't allow me to access the GRUB menu to try:

        Highlight the first entry, press "e" to edit it.
        Navigate to the words "quiet splash", delete them and type "nomodeset" in their place (without quotes).
        Press Ctrl + X to continue boot.
        Once on the desktop, go to System > Administration > Additional Drivers and activate the recommended drivers.

    So, running 11.10 from my pendrive, I tried editing /etc/default/grub, commenting out the GRUB_HIDDEN_TIMEOUT setting by putting a '#' in front of it to display the GRUB menu, and setting GRUB_TIMEOUT to a value greater than or equal to 1, e.g. GRUB_TIMEOUT=10. However, when I run sudo update-grub, I get:

        /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?)

    I get the same error with update-grub after:

        sudo mount /dev/sda1 /mnt

    and after:

        sudo grub-install --root-directory=/mnt /dev/sda
        reboot
        sudo update-grub

    Other suggestions to fix the update-grub problem: open Synaptic, purge all the related GRUB packages and reinstall grub-pc, then finally run sudo update-grub. Or use Grub Customizer (http://ubuntuforums.org/showthread.php?t=1195275). What would be the best way to approach this? I'm concerned about purging "all the related grub installed packages", but if it's true that some files are corrupted this would seem necessary. Also, was I executing the correct commands, i.e. with mount and grub-install, before running update-grub?
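
    For what it's worth, a sketch of the usual way to run update-grub for an installed system from a live pendrive (assuming /dev/sda1 is the installed root partition and there is no separate /boot): GRUB's tools need to run inside the installed system, so the virtual filesystems are bind-mounted and a chroot is used. Running update-grub directly in the live session is what produces the "cannot find a device for /" error.

        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        grub-install /dev/sda
        update-grub
        exit
        sudo umount /mnt/sys /mnt/proc /mnt/dev /mnt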

    Read the article

  • Why is my RapidSSL certificate chain not trusted on Ubuntu?

    - by olouv
    I have a website that works perfectly with Chrome and other browsers, but I get some errors with PHP in CLI mode, so I'm investigating it by running:

        openssl s_client -showcerts -verify 32 -connect dev.carlipa-online.com:443

    Quite surprisingly, my HTTPS appears untrusted, with "Verify return code: 27 (certificate not trusted)". Here is the raw output:

        verify depth is 32
        CONNECTED(00000003)
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify error:num=20:unable to get local issuer certificate
        verify return:1
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify error:num=27:certificate not trusted
        verify return:1
        depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA
        verify return:1
        depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com
        verify return:1

    So GeoTrust Global CA appears to be untrusted on the system (Ubuntu 11.10). I added Equifax_Secure_CA to try to solve this... but in that case I get "Verify return code: 19 (self signed certificate in certificate chain)"! Raw output:

        verify depth is 32
        CONNECTED(00000003)
        depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority
        verify error:num=19:self signed certificate in certificate chain
        verify return:1
        depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority
        verify return:1
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify return:1
        depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA
        verify return:1
        depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com
        verify return:1

    Edit: It looks like my server does not trust/provide the Equifax Root CA; however, I do have the file in /usr/share/ca-certificates/mozilla/Equifax...
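
    A hedged note on reading those results: the openssl s_client builds of that era only trust the CAs they are explicitly pointed at, so on their own they often report codes 20/27 even when browsers are happy. Pointing s_client at the system bundle separates "s_client wasn't told about the CA store" from "the store really is missing the root" (paths are the Ubuntu defaults):

        # use the hashed directory of trusted CAs
        openssl s_client -CApath /etc/ssl/certs -connect dev.carlipa-online.com:443

        # or the single bundled file
        openssl s_client -CAfile /etc/ssl/certs/ca-certificates.crt -connect dev.carlipa-online.com:443

    Code 19 with the Equifax root usually just means the server is now sending the self-signed root in the chain while s_client still has no trust anchor configured, so the same flags apply to that test as well.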

    Read the article

  • Running gdb on Ubuntu 9.10 Apache2 Install

    - by AJ
    Hi all, I am trying to run gdb to debug my Ubuntu 9.10 Apache2 install and am having a couple of problems:

    1. It seems the package installed by Ubuntu for Apache2 does not include debugging symbols; is there a different version of the package I should be using for developing/debugging?
    2. When I try to run gdb, I get an error that looks to be caused by a missing environment variable. Are there additional options I should pass to "run" to get this to work?

    Here is the output of the debugger session:

        root@aj-ubuntu:/usr/sbin# gdb apache2
        GNU gdb (GDB) 7.0-ubuntu
        Copyright (C) 2009 Free Software Foundation, Inc.
        License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
        This is free software: you are free to change and redistribute it.
        There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
        and "show warranty" for details.
        This GDB was configured as "x86_64-linux-gnu".
        For bug reporting instructions, please see:
        <http://www.gnu.org/software/gdb/bugs/>...
        Reading symbols from /usr/sbin/apache2...(no debugging symbols found)...done.
        (gdb) run -X
        Starting program: /usr/sbin/apache2 -X
        [Thread debugging using libthread_db enabled]
        apache2: bad user name ${APACHE_RUN_USER}
        Program exited with code 01.
        (gdb)

    Thanks in advance,
    -aj
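
    A sketch of what usually addresses both points on Debian/Ubuntu (hedged, since exact package names for 9.10 may differ): debug symbols come from the apache2-dbg package, and the "bad user name ${APACHE_RUN_USER}" error goes away once the variables from /etc/apache2/envvars are in the environment that gdb inherits:

        # debugging symbols for the apache2 binary and modules
        sudo apt-get install apache2-dbg

        # load APACHE_RUN_USER, APACHE_RUN_GROUP, etc. into this shell, then debug
        source /etc/apache2/envvars
        gdb /usr/sbin/apache2
        (gdb) run -X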

    Read the article

  • Rsync push files from Linux to Windows: ssh issue - connection refused

    - by piyush c
    For some reason I want to run a script to move files from a Linux machine to Windows. I have installed cwRsync on my Windows machine and am able to connect to the Linux machine. When I execute the following command:

        rsync -e "ssh -l "piyush"" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"

    where 10.0.0.60 is my Windows machine and I am running the above command on Linux (CentOS 5.5), I get the following error message:

        ssh: connect to host 10.0.0.60 port 22: Connection refused
        rsync: connection unexpectedly closed (0 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(463) [sender=2.6.8]
        [root@localhost sync]# ssh [email protected]
        ssh: connect to host 10.0.0.60 port 22: Connection refused

    I have modified my firewall settings on Windows to allow all ports. I think this issue is due to an SSH daemon not being present on my Windows machine, so I tried installing OpenSSH and running ssh-agent, but that didn't help. I tried a similar command on my Windows machine to pull files from Linux and it works fine. I want the command for the Linux machine so that I can embed it in a shell script. Can you suggest what I am missing? I already have cwRsync installed on Windows, running in daemon mode using the --daemon option, and I am able to log in over ssh from the Windows machine to the Linux machine. When I issue the command below, it just blocks for 120 seconds (the timeout I specified) and then exits saying there is a timeout.

        rsync -e "ssh -l piyush" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"

    After starting rsync on Windows, I checked that rsync is running. The Windows firewall settings are set to minimal, and on the Linux machine I stopped the iptables service so that port 873 (the default rsync port) is not blocked. What can be the possible reason that the Linux machine is not able to connect to the rsync daemon on the Windows machine?
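
    A hedged sketch of the daemon-style invocation, since the Windows side is running cwRsync as a daemon rather than an SSH server: the rsync:// (double-colon) syntax talks straight to port 873 and needs no -e ssh at all, but it requires a module to be defined in the Windows rsyncd.conf. The module name "temp" and the user below are placeholders:

        # on Windows, in cwRsync's rsyncd.conf:
        #   [temp]
        #       path = /cygdrive/d/temp
        #       read only = false

        # on the Linux side, push via the rsync daemon instead of ssh
        rsync -Wgovz --timeout 120 --delay-updates --remove-sent-files \
            /usr/local/src/piyush/sync/ rsync://user@10.0.0.60/temp/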

    Read the article

  • Monit unable to start sidekiq on Opsworks server

    - by webdevtom
    I have used AWS OpsWorks to create some servers. I have Sidekiq running as part of my Rails application, and when I deploy, Sidekiq restarts nicely. I am configuring Monit to watch the pid and start and stop Sidekiq if there are any issues. However, when Monit tries to start Sidekiq, I see that the wrong Ruby appears to be used:

        Oct 17 13:52:43 daitengu sidekiq: /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/definition.rb:361:in `validate_ruby!': Your Ruby version is 1.8.7, but your Gemfile specified 1.9.3 (Bundler::RubyVersionMismatch)
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler.rb:116:in `setup'
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/setup.rb:17

    When I run the command from the CLI, Sidekiq launches correctly:

        $ cd /srv/www/myapp/current && RAILS_ENV=production nohup /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml >> /srv/www/myapp/shared/log/sidekiq.log 2>&1 &
        $ ps -aef | grep sidekiq
        root      1236  1235  8 20:54 pts/0    00:00:50 sidekiq 2.11.0 myapp [0 of 25 busy]

    My sidekiq.monitrc file:

        check process unicorn with pidfile /srv/www/myapp/shared/pids/unicorn.pid
          start program = "/bin/bash -c 'cd /srv/www/myapp/current && /usr/local/bin/bundle exec unicorn_rails --env production --daemonize -c /srv/www/myapp/shared/config/unicorn.conf'"
          stop program = "/bin/bash -c 'kill -QUIT `cat /srv/www/myapp/shared/pids/unicorn.pid`'"
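
    A hedged sketch of a Sidekiq-specific monitrc (the pidfile path below is an assumption): Monit runs its start programs with a stripped-down environment and PATH, which is a common way to end up on the system Ruby 1.8.7. Wrapping the command in a login shell (-l) and setting the environment explicitly, plus telling Sidekiq to daemonize and write the pidfile Monit watches, is one way around it:

        check process sidekiq with pidfile /srv/www/myapp/shared/pids/sidekiq.pid
          start program = "/bin/bash -lc 'cd /srv/www/myapp/current && RAILS_ENV=production /usr/local/bin/bundle exec sidekiq -d -C config/sidekiq.yml -P /srv/www/myapp/shared/pids/sidekiq.pid >> /srv/www/myapp/shared/log/sidekiq.log 2>&1'"
          stop program  = "/bin/bash -lc 'kill -TERM `cat /srv/www/myapp/shared/pids/sidekiq.pid`'"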

    Read the article

  • Kickstart installation from USB -- Kickstart location

    - by dooffas
    After managing to get a Fedora ISO to rebuild successfully (for a USB stick) after adding a kickstart file (http://serverfault.com/questions/548405/), I now have an issue with locating the kickstart file on the USB media. When this is done from a CD-ROM you can simply kickstart by adding this parameter to boot:

        linux ks=cdrom

    This will kickstart (providing the kickstart file is named ks.cfg and is in the root of the disc). Now, obviously this will be different for the USB drive, so from my research I assumed this line would do the job:

        linux ks=hd:sdb1:/ks.cfg

    Evidently this does not work; I get an error informing me the drive is already mounted and cannot be remounted.

    EDIT: Actual error message:

        mount: /dev/sdb1 is already mounted or /run/install/tmpmnt0 busy
        Warning: Can't get kickstart from /dev/sdb1:/ks.cfg

    To test that the syntax was correct, I placed the kickstart file on another USB stick and used the same command to grab ks.cfg from the new location:

        linux ks=hd:sdc1:/ks.cfg

    This does work (providing the USB sticks are enumerated in order: boot - sdb1, kickstart - sdc1). The install will kickstart and complete with no issue. Obviously having to use two pen drives is somewhat frustrating and unreliable. Is there a way around this?
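
    One hedged idea, not verified against this Fedora release: refer to the boot stick by filesystem label (or UUID) instead of by device name, which is the documented form for pointing ks= at a specific partition and also survives the devices enumerating in a different order. The label below is a placeholder and has to match whatever the stick is actually labelled:

        # give the stick's filesystem a label once, from any Linux box:
        #   dosfslabel /dev/sdb1 KSBOOT    (FAT)    or    e2label /dev/sdb1 KSBOOT    (ext)

        # then boot with
        linux ks=hd:LABEL=KSBOOT:/ks.cfg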

    Read the article

  • Set up site folders on Apache and PHP

    - by Cobus Kruger
    I'm trying to set up my first Apache server on my Windows PC at home, and I have real trouble finding out which configuration settings go where. I downloaded and installed XAMPP, which seemed to get everything nicely set up, and I can see a working website on http://localhost. So far so good.

    The point of this is to develop a website, of course, and to make my life easier (irony?) I wanted to let the web site root point to my Eclipse project folder. So I opened httpd-vhosts.conf, uncommented a VirtualHost block and changed its DocumentRoot to my local path. Now when I try to load http://localhost I get a 403 (Access denied) error. So where do I configure permissions for my folder? And is that all I need to let my site run from the folder specified, or am I going to have to clear another hurdle?

    Update: I tried to simplify things a little, so I reinstalled XAMPP and got back to a working http://localhost. Then I confirmed that httpd-vhosts.conf is included in httpd.conf, uncommented the line NameVirtualHost *:80, and added the virtual host shown below. I restarted Apache and saw the expected page on http://localhost:

        <VirtualHost *:80>
            DocumentRoot "C:/xampp/htdocs/"
            ServerName localhost
            ErrorLog "logs/dummy-host2.localhost-error.log"
            CustomLog "logs/dummy-host2.localhost-access.log" combined
        </VirtualHost>

    I then created a new folder named C:\testweb, added an index.html file and changed the DocumentRoot line shown above. For all intents and purposes I would expect the two configurations to be equivalent, but this setup gives me an error 403. Even though the C:\testweb folder already had the same permissions as the C:\xampp\htdocs folder, I went further and gave the Everyone group full control of C:\testweb, and got exactly the same problem. So what did I miss?
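
    A hedged guess at the missing piece: Apache's own access rules, not Windows ACLs, are what produce that 403. XAMPP's httpd.conf grants access to C:/xampp/htdocs via a <Directory> block, and a new folder outside it gets the restrictive server-wide default, so the vhost needs its own block - roughly like this (Apache 2.2 syntax; on 2.4 the last two lines become "Require all granted"):

        <VirtualHost *:80>
            DocumentRoot "C:/testweb"
            ServerName localhost
            <Directory "C:/testweb">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>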

    Read the article

  • Limiting bandwidth on internal interface on Linux gateway

    - by Jack Scott
    I am responsible for a Linux-based (it runs Debian) branch office router that takes a single high-speed Internet connection (eth2) and turns it into about 20 internal networks, each with a separate subnet (192.168.1.0/24 to 192.168.20.0/24) and a separate VLAN (eth0.101 to eth0.120). I am trying to restrict bandwidth on one of the internal subnets that is consistently chewing up more bandwidth than it should. What is the best way to do this?

    My first try was wondershaper, which I heard about on SuperUser here. Unfortunately, it is useful for exactly the opposite situation to mine: it works on the client side, not on the Internet side. My second attempt was the script found at http://www.topwebhosts.org/tools/traffic-control.php, which I modified so the active part is:

        tc qdisc add dev eth0.113 root handle 13: htb default 100
        tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbps
        tc class add dev eth0.113 parent 13: classid 13:2 htb rate 3mbps
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip dst 192.168.13.0/24 flowid 13:1
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip src 192.168.13.0/24 flowid 13:2

    What I want this to do is restrict the bandwidth on VLAN 113 (subnet 192.168.13.0/24) to 3 Mbit up and 3 Mbit down. Unfortunately, it seems to have no effect at all! I'm very inexperienced with the tc command, so any help getting this working would be appreciated.
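
    Two hedged observations and a sketch. First, in tc the unit "mbps" means megabytes per second; megabits are written "mbit", so 3mbps is actually 24 Mbit/s, eight times the intended limit. Second, an egress qdisc on eth0.113 only shapes traffic leaving the router toward the VLAN (the clients' downloads); their uploads leave via eth2, so the upload limit has to sit there. Roughly:

        # downloads to the VLAN: shape what leaves eth0.113
        tc qdisc add dev eth0.113 root handle 1: htb
        tc class add dev eth0.113 parent 1: classid 1:10 htb rate 3mbit ceil 3mbit
        tc filter add dev eth0.113 protocol ip parent 1:0 prio 1 u32 \
            match ip dst 192.168.13.0/24 flowid 1:10

        # uploads from the VLAN: shape what leaves eth2, matched by source subnet
        tc qdisc add dev eth2 root handle 2: htb
        tc class add dev eth2 parent 2: classid 2:10 htb rate 3mbit ceil 3mbit
        tc filter add dev eth2 protocol ip parent 2:0 prio 1 u32 \
            match ip src 192.168.13.0/24 flowid 2:10

    If the router NATs outbound traffic, the source match on eth2 will not see the internal addresses any more (the egress qdisc runs after NAT); in that case the packets are usually marked with iptables on the way in and the tc filter matches on the fwmark instead.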

    Read the article

  • nginx starts up before apache

    - by paullb
    I've been fumbling through setting up Redmine on an Ubuntu (12.04) box, and somewhere along the line nginx got set up; now Apache no longer loads because nginx has already grabbed the port. I tried removing nginx with the command below, but that didn't seem to make any difference: when I restarted the server and pointed my web browser at it, I still got the "Welcome to nginx" message.

        sudo apt-get purge nginx

    I have confirmed that nginx is gone, because when I run the above now I get as output:

        Package nginx is not installed, so not removed

    Yet every time I start the machine it is running again. I noticed the following in the running processes (if that is helpful):

        root       923  0.0  0.0  76784  1280 ?   Ss   03:00   0:00 nginx: master process /usr/sbin/nginx
        www-data   925  0.0  0.0  77092  1704 ?   S    03:00   0:00 nginx: worker process
        www-data   926  0.0  0.1  77092  2204 ?   S    03:00   0:00 nginx: worker process
        www-data   927  0.0  0.0  77092  1704 ?   S    03:00   0:00 nginx: worker process
        www-data   928  0.0  0.0  77092  1704 ?   S    03:00   0:00 nginx: worker process

    Any advice for bringing back Apache2 as the "default" (for lack of a better term) web server?
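
    A hedged checklist: on 12.04 "nginx" is only a metapackage and the actual binary ships in nginx-full, nginx-light or nginx-extras (or it may have been compiled from source by whatever set up Redmine), so purging "nginx" alone can leave the real package and its init script behind. A few commands to find it, stop it, and hand the port back to Apache:

        # who owns the running binary, if anyone?
        dpkg -S /usr/sbin/nginx
        dpkg -l | grep nginx

        # stop it now and keep its init script from starting at boot
        sudo service nginx stop
        sudo update-rc.d -f nginx remove

        # purge whichever nginx packages really are installed, then start Apache
        sudo apt-get purge nginx-full nginx-light nginx-extras nginx-common
        sudo service apache2 start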

    Read the article

  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering by running the "dspamd" daemon under Ubuntu 9.10 and then setting up a Postfix rule that says:

        smtpd_recipient_restrictions = ... check_client_access pcre:/etc/postfix/dspam_everything ...

    where that PCRE map looks like this:

        /./   FILTER lmtp:[127.0.0.1]:11124

    This works well, and means that all users on my system get all of their email, whether "dspam" thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them. The problem comes when I want to train dspam using my email archives. After reading about the "dspam" command, I tried this on the files in my Inbox and spam boxes (which date from when I was using another filtering solution):

        for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done
        for file in Mail/spam/*;  do cat $file | dspam --class=spam --source=corpus; done

    The symptom I noticed after doing all of this was that dspam was horrible at classifying spam - it couldn't find any! The problem, when I tracked it down, was that I was training the user "brandon" with the above commands, but the incoming email was instead compared against the username "brandon@mydomain", so it was running against a completely empty training database!

    So, what can I do to make the above commands actually train my fully-qualified email address rather than my bare username? I would like to avoid having to run "dspam" as root with a "--user" option. I would have expected the "dspam" configuration files to have an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing. When I used to use the Berkeley DB backend to "dspam", I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL back-end and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)
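
    One hedged workaround, short of a true append-domain setting: dspam accepts --user from non-root accounts that are listed as trusted in dspam.conf, so the retraining loop can name the fully-qualified user without a root shell. A sketch (the config path may differ per packaging):

        # in /etc/dspam/dspam.conf, trust the account that runs the training:
        #   Trust brandon

        for file in Mail/Inbox/*; do
            dspam --user brandon@mydomain --class=innocent --source=corpus < "$file"
        done
        for file in Mail/spam/*; do
            dspam --user brandon@mydomain --class=spam --source=corpus < "$file"
        done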

    Read the article

  • A separate user for each task?

    - by Mark Tomlin
    I just got a VPS server the other day. I'm new to server administration, but not that new to Ubuntu (11.04): I use it in my living room as the HTPC, and I had a previous VPS that I used on and off for a TeamSpeak server. This one I'm setting up for long-term use, so I would like to know the best practice when it comes to websites and tasks that I have the server performing. I understand that it could be beneficial to separate each website into its own user group or under its own username. I would set up nginx so that it could read all of the users' directories (and thus each website) but could not touch anything else. The same with TeamSpeak: should I make a user for TeamSpeak so that it operates within its own confined area, or is this overkill? I do have access to root on the server, and my current plan is to run about 4 websites and a TeamSpeak server. My stack is Linux (Ubuntu 11.04 LTS), nginx, and PHP 5.4.3 (using the built-in PDO SQLite 3 driver for the database). Should PHP have its own user group, or is it OK to place it in with nginx?

    Read the article

  • Write access from a Windows client via a ZFS SMB, to a file created on the host in OpenIndiana

    - by Gerald Kaszuba
    I've got an OpenIndiana server running ZFS that is shared using a nobody user and group. I don't fully understand Solaris ACL permissions, but I do know Linux-style permissions. The client is Windows 8 and the server is OpenIndiana oi_148. I'm failing to work out how to make write permission work correctly for the Windows client. It is able to make new files, but cannot modify files created by the shell in OpenIndiana. When a file ("local file") is created locally as the user nobody in bash, and another file ("smb file") is created remotely via SMB (as nobody also), they end up with quite different permissions:

        # ls -V
        -rw-r--r--   1 nobody   nobody         0 Dec  2 12:24 local file
                    owner@:rw-p--aARWcCos:-------:allow
                    group@:r-----a-R-c--s:-------:allow
                 everyone@:r-----a-R-c--s:-------:allow
        -rwx------+  1 nobody   nobody         0 Dec  2 12:24 smb file
               user:nobody:rwxpdDaARWcCos:-------:allow
          group:2147483648:rwxpdDaARWcCos:-------:allow

    In bash I'm able to write to "smb file", but, vice versa, the Windows client is not able to write to "local file". This is confusing to me, because it appears that it should allow the SMB client to write to "local file": nobody is the owner and it has a w in the ACL. The sharesmb setting is fairly boring, although I'm hoping there is something to set here similar to a umask:

        sharesmb name=shared,guestok=true

    How can I make these two work together and have a symmetrical permission system, where both SMB and the local user produce the same permissions? Is there some sort of ACL that can be set at the root of the file system so that all files are created in a similar manner?

    Read the article

  • How do I make my Boot Camp partition bootable again?

    - by KJFMusic
    I'm having a similar problem to everyone else in this posting. I have 5 partitions, 3 of which I created for my Mac OS Lion installation, my Windows 7 installation, and a third for storage. Everything was running fine for quite some time, until recently my Windows 7 installation suddenly stopped booting. Instead of a start-up screen I get:

        Windows failed to start. A recent hardware or software change might be the cause.
        File: \BOOT\BCD
        Status: 0xc000000d
        Info: An error occurred while attempting to read the boot configuration data

    Mac OS Lion starts up fine. I'm unable to mount my "Bootcamp" partition or the "Storage" partition. On top of that, "Storage" has been renamed to "disk0s5". When I installed Windows 7 it didn't recognize the "Storage" partition that was created in Lion, so it merged what it thought was free disk space (I'm assuming the same space that Mac OS recognized as Storage) into the root drive of Windows 7 (Bootcamp). Are you able to assist?

    Read the article
