Search Results

Search found 4214 results on 169 pages for 'binary serializer'.


  • Why is there a /etc/init.d/mysql file on this Slackware machine? How could it have gotten there?

    - by jasonspiro
    A client of my IT-consulting service owns a web-development shop. He's been having problems with a Slackware 12.0 server running MySQL 5.0.67. The machine was set up by the client's sysadmin, who left on bad terms. My client no longer employs a sysadmin. As far as I can tell, the only copy of MySQL that's installed is the one described in /var/log/packages/mysql-5.0.67-i486-1: PACKAGE NAME: mysql-5.0.67-i486-1 COMPRESSED PACKAGE SIZE: 16828 K UNCOMPRESSED PACKAGE SIZE: 33840 K PACKAGE LOCATION: /var/slapt-get/archives/./slackware/ap/mysql-5.0.67-i486-1.tgz PACKAGE DESCRIPTION: mysql: mysql (SQL-based relational database server) mysql: mysql: MySQL is a fast, multi-threaded, multi-user, and robust SQL mysql: (Structured Query Language) database server. It comes with a nice API mysql: which makes it easy to integrate into other applications. mysql: mysql: The home page for MySQL is http://www.mysql.com/ mysql: mysql: mysql: mysql: FILE LIST: ./ var/ var/lib/ var/lib/mysql/ var/run/ var/run/mysql/ install/ install/doinst.sh install/slack-desc usr/ usr/include/ usr/include/mysql/ usr/include/mysql/my_alloc.h usr/include/mysql/sql_common.h usr/include/mysql/my_dbug.h usr/include/mysql/errmsg.h usr/include/mysql/my_pthread.h usr/include/mysql/my_list.h usr/include/mysql/mysql.h usr/include/mysql/sslopt-vars.h usr/include/mysql/my_config.h usr/include/mysql/mysql_com.h usr/include/mysql/m_string.h usr/include/mysql/sslopt-case.h usr/include/mysql/my_xml.h usr/include/mysql/sql_state.h usr/include/mysql/my_global.h usr/include/mysql/my_sys.h usr/include/mysql/mysqld_ername.h usr/include/mysql/mysqld_error.h usr/include/mysql/sslopt-longopts.h usr/include/mysql/keycache.h usr/include/mysql/my_net.h usr/include/mysql/mysql_version.h usr/include/mysql/my_no_pthread.h usr/include/mysql/decimal.h usr/include/mysql/readline.h usr/include/mysql/my_attribute.h usr/include/mysql/typelib.h usr/include/mysql/my_dir.h usr/include/mysql/raid.h usr/include/mysql/m_ctype.h usr/include/mysql/mysql_embed.h usr/include/mysql/mysql_time.h usr/include/mysql/my_getopt.h usr/lib/ usr/lib/mysql/ usr/lib/mysql/libmysqlclient_r.so.15.0.0 usr/lib/mysql/libmysqlclient_r.la usr/lib/mysql/libmyisammrg.a usr/lib/mysql/libmystrings.a usr/lib/mysql/libmyisam.a usr/lib/mysql/libmysqlclient.so.15.0.0 usr/lib/mysql/libmysqlclient_r.a usr/lib/mysql/libmysqlclient.a usr/lib/mysql/libheap.a usr/lib/mysql/libvio.a usr/lib/mysql/libmysqlclient.la usr/lib/mysql/libmysys.a usr/lib/mysql/libdbug.a usr/bin/ usr/bin/comp_err usr/bin/my_print_defaults usr/bin/resolve_stack_dump usr/bin/msql2mysql usr/bin/mysqltestmanager-pwgen usr/bin/myisampack usr/bin/replace usr/bin/mysqld_multi usr/bin/mysqlaccess usr/bin/mysql_install_db usr/bin/innochecksum usr/bin/myisam_ftdump usr/bin/mysqlcheck usr/bin/mysqltest usr/bin/mysql_upgrade_shell usr/bin/mysql_secure_installation usr/bin/mysql_fix_extensions usr/bin/mysqld_safe usr/bin/mysql_explain_log usr/bin/mysqlimport usr/bin/myisamlog usr/bin/mysql_tzinfo_to_sql usr/bin/mysql_upgrade usr/bin/mysqltestmanager usr/bin/mysql_fix_privilege_tables usr/bin/mysql_find_rows usr/bin/mysql_convert_table_format usr/bin/mysqltestmanagerc usr/bin/mysqlhotcopy usr/bin/mysqldump usr/bin/mysqlshow usr/bin/mysqlbug usr/bin/mysql_config usr/bin/mysqldumpslow usr/bin/mysql_waitpid usr/bin/mysqlbinlog usr/bin/mysql_client_test usr/bin/perror usr/bin/mysql usr/bin/myisamchk usr/bin/mysql_setpermission usr/bin/mysqladmin usr/bin/mysql_zap usr/bin/mysql_tableinfo usr/bin/resolveip usr/share/ usr/share/mysql/ 
usr/share/mysql/errmsg.txt usr/share/mysql/swedish/ usr/share/mysql/swedish/errmsg.sys usr/share/mysql/mysql_system_tables_data.sql usr/share/mysql/mysql.server usr/share/mysql/hungarian/ usr/share/mysql/hungarian/errmsg.sys usr/share/mysql/norwegian/ usr/share/mysql/norwegian/errmsg.sys usr/share/mysql/slovak/ usr/share/mysql/slovak/errmsg.sys usr/share/mysql/spanish/ usr/share/mysql/spanish/errmsg.sys usr/share/mysql/polish/ usr/share/mysql/polish/errmsg.sys usr/share/mysql/ukrainian/ usr/share/mysql/ukrainian/errmsg.sys usr/share/mysql/danish/ usr/share/mysql/danish/errmsg.sys usr/share/mysql/romanian/ usr/share/mysql/romanian/errmsg.sys usr/share/mysql/english/ usr/share/mysql/english/errmsg.sys usr/share/mysql/charsets/ usr/share/mysql/charsets/latin2.xml usr/share/mysql/charsets/greek.xml usr/share/mysql/charsets/koi8r.xml usr/share/mysql/charsets/latin1.xml usr/share/mysql/charsets/cp866.xml usr/share/mysql/charsets/geostd8.xml usr/share/mysql/charsets/cp1250.xml usr/share/mysql/charsets/koi8u.xml usr/share/mysql/charsets/cp852.xml usr/share/mysql/charsets/hebrew.xml usr/share/mysql/charsets/latin7.xml usr/share/mysql/charsets/README usr/share/mysql/charsets/ascii.xml usr/share/mysql/charsets/cp1251.xml usr/share/mysql/charsets/macce.xml usr/share/mysql/charsets/latin5.xml usr/share/mysql/charsets/Index.xml usr/share/mysql/charsets/macroman.xml usr/share/mysql/charsets/cp1256.xml usr/share/mysql/charsets/keybcs2.xml usr/share/mysql/charsets/swe7.xml usr/share/mysql/charsets/armscii8.xml usr/share/mysql/charsets/dec8.xml usr/share/mysql/charsets/cp1257.xml usr/share/mysql/charsets/hp8.xml usr/share/mysql/charsets/cp850.xml usr/share/mysql/korean/ usr/share/mysql/korean/errmsg.sys usr/share/mysql/german/ usr/share/mysql/german/errmsg.sys usr/share/mysql/mi_test_all.res usr/share/mysql/greek/ usr/share/mysql/greek/errmsg.sys usr/share/mysql/french/ usr/share/mysql/french/errmsg.sys usr/share/mysql/mysql_fix_privilege_tables.sql usr/share/mysql/dutch/ usr/share/mysql/dutch/errmsg.sys usr/share/mysql/serbian/ usr/share/mysql/serbian/errmsg.sys usr/share/mysql/mysql_system_tables.sql usr/share/mysql/my-huge.cnf usr/share/mysql/portuguese/ usr/share/mysql/portuguese/errmsg.sys usr/share/mysql/japanese/ usr/share/mysql/japanese/errmsg.sys usr/share/mysql/mysql_test_data_timezone.sql usr/share/mysql/russian/ usr/share/mysql/russian/errmsg.sys usr/share/mysql/czech/ usr/share/mysql/czech/errmsg.sys usr/share/mysql/fill_help_tables.sql usr/share/mysql/estonian/ usr/share/mysql/estonian/errmsg.sys usr/share/mysql/my-medium.cnf usr/share/mysql/norwegian-ny/ usr/share/mysql/norwegian-ny/errmsg.sys usr/share/mysql/my-small.cnf usr/share/mysql/mysql-log-rotate usr/share/mysql/italian/ usr/share/mysql/italian/errmsg.sys usr/share/mysql/my-large.cnf usr/share/mysql/ndb-config-2-node.ini usr/share/mysql/binary-configure usr/share/mysql/mi_test_all usr/share/mysql/mysqld_multi.server usr/share/mysql/my-innodb-heavy-4G.cnf usr/doc/ usr/doc/mysql-5.0.67/ usr/doc/mysql-5.0.67/README usr/doc/mysql-5.0.67/Docs/ usr/doc/mysql-5.0.67/Docs/INSTALL-BINARY usr/doc/mysql-5.0.67/COPYING usr/info/ usr/info/mysql.info.gz usr/libexec/ usr/libexec/mysqld usr/libexec/mysqlmanager usr/man/ usr/man/man8/ usr/man/man8/mysqlmanager.8.gz usr/man/man8/mysqld.8.gz usr/man/man1/ usr/man/man1/mysql_zap.1.gz usr/man/man1/mysql_setpermission.1.gz usr/man/man1/mysql_tzinfo_to_sql.1.gz usr/man/man1/msql2mysql.1.gz usr/man/man1/mysql_tableinfo.1.gz usr/man/man1/mysql_explain_log.1.gz usr/man/man1/mysqlcheck.1.gz 
usr/man/man1/comp_err.1.gz usr/man/man1/my_print_defaults.1.gz usr/man/man1/mysqlbinlog.1.gz usr/man/man1/myisam_ftdump.1.gz usr/man/man1/mysql_upgrade.1.gz usr/man/man1/mysql.1.gz usr/man/man1/mysql_client_test.1.gz usr/man/man1/resolve_stack_dump.1.gz usr/man/man1/mysql_fix_extensions.1.gz usr/man/man1/mysqlmanagerc.1.gz usr/man/man1/mysql_config.1.gz usr/man/man1/mysqlshow.1.gz usr/man/man1/myisamlog.1.gz usr/man/man1/replace.1.gz usr/man/man1/mysqlmanager-pwgen.1.gz usr/man/man1/mysqltest.1.gz usr/man/man1/innochecksum.1.gz usr/man/man1/mysqladmin.1.gz usr/man/man1/perror.1.gz usr/man/man1/mysql_waitpid.1.gz usr/man/man1/mysql_convert_table_format.1.gz usr/man/man1/mysqlman.1.gz usr/man/man1/mysqlimport.1.gz usr/man/man1/mysqlbug.1.gz usr/man/man1/mysql_find_rows.1.gz usr/man/man1/myisampack.1.gz usr/man/man1/myisamchk.1.gz usr/man/man1/mysql_fix_privilege_tables.1.gz usr/man/man1/mysql-stress-test.pl.1.gz usr/man/man1/resolveip.1.gz usr/man/man1/make_win_bin_dist.1.gz usr/man/man1/mysqlhotcopy.1.gz usr/man/man1/mysqld_multi.1.gz usr/man/man1/safe_mysqld.1.gz usr/man/man1/mysql_secure_installation.1.gz usr/man/man1/mysql_install_db.1.gz usr/man/man1/mysqldump.1.gz usr/man/man1/mysql-test-run.pl.1.gz usr/man/man1/mysqld_safe.1.gz usr/man/man1/mysqlaccess.1.gz usr/man/man1/mysql.server.1.gz usr/man/man1/make_win_src_distribution.1.gz etc/ etc/rc.d/ etc/rc.d/rc.mysqld.new etc/my-huge.cnf etc/my-medium.cnf etc/my-small.cnf etc/my-large.cnf /etc/rc.d/rc.mysqld is an ordinary Slackware-type start/stop script: #!/bin/sh # Start/stop/restart mysqld. # # Copyright 2003 Patrick J. Volkerding, Concord, CA # Copyright 2003 Slackware Linux, Inc., Concord, CA # # This program comes with NO WARRANTY, to the extent permitted by law. # You may redistribute copies of this program under the terms of the # GNU General Public License. # To start MySQL automatically at boot, be sure this script is executable: # chmod 755 /etc/rc.d/rc.mysqld # Before you can run MySQL, you must have a database. To install an initial # database, do this as root: # # su - mysql # mysql_install_db # # Note that step one is becoming the mysql user. It's important to do this # before making any changes to the database, or mysqld won't be able to write # to it later (this can be fixed with 'chown -R mysql.mysql /var/lib/mysql'). # To allow outside connections to the database comment out the next line. # If you don't need incoming network connections, then leave the line # uncommented to improve system security. #SKIP="--skip-networking" # Start mysqld: mysqld_start() { if [ -x /usr/bin/mysqld_safe ]; then # If there is an old PID file (no mysqld running), clean it up: if [ -r /var/run/mysql/mysql.pid ]; then if ! ps axc | grep mysqld 1> /dev/null 2> /dev/null ; then echo "Cleaning up old /var/run/mysql/mysql.pid." rm -f /var/run/mysql/mysql.pid fi fi /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/run/mysql/mysql.pid $SKIP & fi } # Stop mysqld: mysqld_stop() { # If there is no PID file, ignore this request... if [ -r /var/run/mysql/mysql.pid ]; then killall mysqld # Wait at least one minute for it to exit, as we don't know how big the DB is... for second in 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 \ 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 60 ; do if [ ! -r /var/run/mysql/mysql.pid ]; then break; fi sleep 1 done if [ "$second" = "60" ]; then echo "WARNING: Gave up waiting for mysqld to exit!" 
sleep 15 fi fi } # Restart mysqld: mysqld_restart() { mysqld_stop mysqld_start } case "$1" in 'start') mysqld_start ;; 'stop') mysqld_stop ;; 'restart') mysqld_restart ;; *) echo "usage $0 start|stop|restart" esac But there's also an unexpected init script on the machine, named /etc/init.d/mysql: #!/bin/sh # Copyright Abandoned 1996 TCX DataKonsult AB & Monty Program KB & Detron HB # This file is public domain and comes with NO WARRANTY of any kind # MySQL daemon start/stop script. # Usually this is put in /etc/init.d (at least on machines SYSV R4 based # systems) and linked to /etc/rc3.d/S99mysql and /etc/rc0.d/K01mysql. # When this is done the mysql server will be started when the machine is # started and shut down when the systems goes down. # Comments to support chkconfig on RedHat Linux # chkconfig: 2345 64 36 # description: A very fast and reliable SQL database engine. # Comments to support LSB init script conventions ### BEGIN INIT INFO # Provides: mysql # Required-Start: $local_fs $network $remote_fs # Should-Start: ypbind nscd ldap ntpd xntpd # Required-Stop: $local_fs $network $remote_fs # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: start and stop MySQL # Description: MySQL is a very fast and reliable SQL database engine. ### END INIT INFO # If you install MySQL on some other places than /usr, then you # have to do one of the following things for this script to work: # # - Run this script from within the MySQL installation directory # - Create a /etc/my.cnf file with the following information: # [mysqld] # basedir=<path-to-mysql-installation-directory> # - Add the above to any other configuration file (for example ~/.my.ini) # and copy my_print_defaults to /usr/bin # - Add the path to the mysql-installation-directory to the basedir variable # below. # # If you want to affect other MySQL variables, you should make your changes # in the /etc/my.cnf, ~/.my.cnf or other MySQL configuration files. # If you change base dir, you must also change datadir. These may get # overwritten by settings in the MySQL configuration files. #basedir= #datadir= # Default value, in seconds, afterwhich the script should timeout waiting # for server start. # Value here is overriden by value in my.cnf. # 0 means don't wait at all # Negative numbers mean to wait indefinitely service_startup_timeout=900 # The following variables are only set for letting mysql.server find things. # Set some defaults pid_file=/var/run/mysql/mysql.pid server_pid_file=/var/run/mysql/mysql.pid use_mysqld_safe=1 user=mysql if test -z "$basedir" then basedir=/usr bindir=/usr/bin if test -z "$datadir" then datadir=/var/lib/mysql fi sbindir=/usr/sbin libexecdir=/usr/libexec else bindir="$basedir/bin" if test -z "$datadir" then datadir="$basedir/data" fi sbindir="$basedir/sbin" libexecdir="$basedir/libexec" fi # datadir_set is used to determine if datadir was set (and so should be # *not* set inside of the --basedir= handler.) datadir_set= # # Use LSB init script functions for printing messages, if possible # lsb_functions="/lib/lsb/init-functions" if test -f $lsb_functions ; then . $lsb_functions else log_success_msg() { echo " SUCCESS! $@" } log_failure_msg() { echo " ERROR! 
$@" } fi PATH=/sbin:/usr/sbin:/bin:/usr/bin:$basedir/bin export PATH mode=$1 # start or stop shift other_args="$*" # uncommon, but needed when called from an RPM upgrade action # Expected: "--skip-networking --skip-grant-tables" # They are not checked here, intentionally, as it is the resposibility # of the "spec" file author to give correct arguments only. case `echo "testing\c"`,`echo -n testing` in *c*,-n*) echo_n= echo_c= ;; *c*,*) echo_n=-n echo_c= ;; *) echo_n= echo_c='\c' ;; esac parse_server_arguments() { for arg do case "$arg" in --basedir=*) basedir=`echo "$arg" | sed -e 's/^[^=]*=//'` bindir="$basedir/bin" if test -z "$datadir_set"; then datadir="$basedir/data" fi sbindir="$basedir/sbin" libexecdir="$basedir/libexec" ;; --datadir=*) datadir=`echo "$arg" | sed -e 's/^[^=]*=//'` datadir_set=1 ;; --user=*) user=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; --pid-file=*) server_pid_file=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; --service-startup-timeout=*) service_startup_timeout=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; --use-mysqld_safe) use_mysqld_safe=1;; --use-manager) use_mysqld_safe=0;; esac done } parse_manager_arguments() { for arg do case "$arg" in --pid-file=*) pid_file=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; --user=*) user=`echo "$arg" | sed -e 's/^[^=]*=//'` ;; esac done } wait_for_pid () { verb="$1" manager_pid="$2" # process ID of the program operating on the pid-file i=0 avoid_race_condition="by checking again" while test $i -ne $service_startup_timeout ; do case "$verb" in 'created') # wait for a PID-file to pop into existence. test -s $pid_file && i='' && break ;; 'removed') # wait for this PID-file to disappear test ! -s $pid_file && i='' && break ;; *) echo "wait_for_pid () usage: wait_for_pid created|removed manager_pid" exit 1 ;; esac # if manager isn't running, then pid-file will never be updated if test -n "$manager_pid"; then if kill -0 "$manager_pid" 2>/dev/null; then : # the manager still runs else # The manager may have exited between the last pid-file check and now. if test -n "$avoid_race_condition"; then avoid_race_condition="" continue # Check again. fi # there's nothing that will affect the file. log_failure_msg "Manager of pid-file quit without updating file." return 1 # not waiting any more. fi fi echo $echo_n ".$echo_c" i=`expr $i + 1` sleep 1 done if test -z "$i" ; then log_success_msg return 0 else log_failure_msg return 1 fi } # Get arguments from the my.cnf file, # the only group, which is read from now on is [mysqld] if test -x ./bin/my_print_defaults then print_defaults="./bin/my_print_defaults" elif test -x $bindir/my_print_defaults then print_defaults="$bindir/my_print_defaults" elif test -x $bindir/mysql_print_defaults then print_defaults="$bindir/mysql_print_defaults" else # Try to find basedir in /etc/my.cnf conf=/etc/my.cnf print_defaults= if test -r $conf then subpat='^[^=]*basedir[^=]*=\(.*\)$' dirs=`sed -e "/$subpat/!d" -e 's//\1/' $conf` for d in $dirs do d=`echo $d | sed -e 's/[ ]//g'` if test -x "$d/bin/my_print_defaults" then print_defaults="$d/bin/my_print_defaults" break fi if test -x "$d/bin/mysql_print_defaults" then print_defaults="$d/bin/mysql_print_defaults" break fi done fi # Hope it's in the PATH ... but I doubt it test -z "$print_defaults" && print_defaults="my_print_defaults" fi # # Read defaults file from 'basedir'. 
If there is no defaults file there # check if it's in the old (depricated) place (datadir) and read it from there # extra_args="" if test -r "$basedir/my.cnf" then extra_args="-e $basedir/my.cnf" else if test -r "$datadir/my.cnf" then extra_args="-e $datadir/my.cnf" fi fi parse_server_arguments `$print_defaults $extra_args mysqld server mysql_server mysql.server` # Look for the pidfile parse_manager_arguments `$print_defaults $extra_args manager` # # Set pid file if not given # if test -z "$pid_file" then pid_file=$datadir/mysqlmanager-`/bin/hostname`.pid else case "$pid_file" in /* ) ;; * ) pid_file="$datadir/$pid_file" ;; esac fi if test -z "$server_pid_file" then server_pid_file=$datadir/`/bin/hostname`.pid else case "$server_pid_file" in /* ) ;; * ) server_pid_file="$datadir/$server_pid_file" ;; esac fi case "$mode" in 'start') # Start daemon # Safeguard (relative paths, core dumps..) cd $basedir manager=$bindir/mysqlmanager if test -x $libexecdir/mysqlmanager then manager=$libexecdir/mysqlmanager elif test -x $sbindir/mysqlmanager then manager=$sbindir/mysqlmanager fi echo $echo_n "Starting MySQL" if test -x $manager -a "$use_mysqld_safe" = "0" then if test -n "$other_args" then log_failure_msg "MySQL manager does not support options '$other_args'" exit 1 fi # Give extra arguments to mysqld with the my.cnf file. This script may # be overwritten at next upgrade. $manager --user=$user --pid-file=$pid_file >/dev/null 2>&1 & wait_for_pid created $!; return_value=$? # Make lock for RedHat / SuSE if test -w /var/lock/subsys then touch /var/lock/subsys/mysqlmanager fi exit $return_value elif test -x $bindir/mysqld_safe then # Give extra arguments to mysqld with the my.cnf file. This script # may be overwritten at next upgrade. pid_file=$server_pid_file $bindir/mysqld_safe --datadir=$datadir --pid-file=$server_pid_file $other_args >/dev/null 2>&1 & wait_for_pid created $!; return_value=$? # Make lock for RedHat / SuSE if test -w /var/lock/subsys then touch /var/lock/subsys/mysql fi exit $return_value else log_failure_msg "Couldn't find MySQL manager ($manager) or server ($bindir/mysqld_safe)" fi ;; 'stop') # Stop daemon. We use a signal here to avoid having to know the # root password. # The RedHat / SuSE lock directory to remove lock_dir=/var/lock/subsys/mysqlmanager # If the manager pid_file doesn't exist, try the server's if test ! -s "$pid_file" then pid_file=$server_pid_file lock_dir=/var/lock/subsys/mysql fi if test -s "$pid_file" then mysqlmanager_pid=`cat $pid_file` echo $echo_n "Shutting down MySQL" kill $mysqlmanager_pid # mysqlmanager should remove the pid_file when it exits, so wait for it. wait_for_pid removed "$mysqlmanager_pid"; return_value=$? # delete lock for RedHat / SuSE if test -f $lock_dir then rm -f $lock_dir fi exit $return_value else log_failure_msg "MySQL manager or server PID file could not be found!" fi ;; 'restart') # Stop the service and regardless of whether it was # running or not, start it again. if $0 stop $other_args; then $0 start $other_args else log_failure_msg "Failed to stop running server, so refusing to try to start." exit 1 fi ;; 'reload'|'force-reload') if test -s "$server_pid_file" ; then read mysqld_pid < $server_pid_file kill -HUP $mysqld_pid && log_success_msg "Reloading service MySQL" touch $server_pid_file else log_failure_msg "MySQL PID file could not be found!" 
exit 1 fi ;; 'status') # First, check to see if pid file exists if test -s "$server_pid_file" ; then read mysqld_pid < $server_pid_file if kill -0 $mysqld_pid 2>/dev/null ; then log_success_msg "MySQL running ($mysqld_pid)" exit 0 else log_failure_msg "MySQL is not running, but PID file exists" exit 1 fi else # Try to find appropriate mysqld process mysqld_pid=`pidof $sbindir/mysqld` if test -z $mysqld_pid ; then if test "$use_mysqld_safe" = "0" ; then lockfile=/var/lock/subsys/mysqlmanager else lockfile=/var/lock/subsys/mysql fi if test -f $lockfile ; then log_failure_msg "MySQL is not running, but lock exists" exit 2 fi log_failure_msg "MySQL is not running" exit 3 else log_failure_msg "MySQL is running but PID file could not be found" exit 4 fi fi ;; *) # usage echo "Usage: $0 {start|stop|restart|reload|force-reload|status} [ MySQL server options ]" exit 1 ;; esac exit 0 An unimportant aside: The previous users of the machine kept a messy home directory. Their home directory was /root. I've pasted a copy at http://www.pastebin.ca/2167496. My question: Why is there a /etc/init.d/mysql file on this Slackware machine? How could it have gotten there? P.S. This question is far from perfect. Please feel free to edit it.
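
    A quick way to tackle the "how could it have gotten there" part from the box itself is to check whether any Slackware package claims the file, and whether it is simply a copy of the stock mysql.server script that the MySQL package ships (the file list above includes usr/share/mysql/mysql.server). A hedged sketch - the comparison against mysql.server is a guess, not a known fact about this machine:

        # Does any package's file list or install script mention an init.d mysql script?
        grep -l 'etc/init.d/mysql' /var/log/packages/* /var/log/scripts/* 2>/dev/null

        # Is /etc/init.d/mysql just the bundled mysql.server script copied into place?
        diff -u /usr/share/mysql/mysql.server /etc/init.d/mysql

        # Owner and mtime are weak clues, but cheap to check
        ls -l /etc/init.d/mysql

    If the grep finds nothing and the diff is empty or nearly so, the likeliest explanation is that the departed sysadmin copied MySQL's bundled SysV script into /etc/init.d by hand, since the Slackware package itself only installs etc/rc.d/rc.mysqld.new.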

    Read the article

  • Uwsgi starts from root but not as a service

    - by vittore
    I have an nginx + uWSGI setup for a Flask website. This is my nginx config:

    server { listen 80; server_name _; location /static/ { alias /var/www/site/app/static/; } location / { uwsgi_pass 127.0.0.1:5080; include uwsgi_params; } }

    And here is my uwsgi config.xml:

    <uwsgi> <socket>127.0.0.1:5080</socket> <autoload/> <daemonize>/var/log/uwsgi_webapp.log</daemonize> <pythonpath>/var/www/site/</pythonpath> <module>run:app</module> <plugins>python27</plugins> <virtualenv>/var/www/venv/</virtualenv> <processes>1</processes> <enable-threads/> <master /> <harakiri>60</harakiri> <max-requests>2000</max-requests> <limit-as>512</limit-as> <reload-on-as>256</reload-on-as> <reload-on-rss>192</reload-on-rss> <no-orphans/> <vacuum/> </uwsgi>

    When I try to start the uwsgi service (service uwsgi start) it says OK, but there is no uwsgi process and I see the following in the log:

    *** Starting uWSGI 1.0.3-debian (64bit) on [Fri Oct 25 00:43:13 2013] *** compiled with version: 4.6.3 on 17 July 2012 02:26:54 current working directory: / writing pidfile to /run/uwsgi/app/gsk/pid detected binary path: /usr/bin/uwsgi-core setgid() to 33 setuid() to 33 limiting address space of processes... your process address space limit is 536870912 bytes (512 MB) your memory page size is 4096 bytes *** WARNING: you have enabled harakiri without post buffering. Slow upload could be rejected on post-unbuffered webservers *** uwsgi socket 0 bound to TCP address 127.0.0.1:5080 fd 6 bind(): Permission denied [socket.c line 107]

    However, when I start uwsgi as root with uwsgi --socket 127.0.0.1:5080 --module run --callab app --harakiri 15 --harakiri-verbose --logto2 tmp/uwsgi.log it starts just fine, and after restarting nginx I can access the website. What could the issue be?
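
    The log shows uWSGI dropping privileges (setgid()/setuid() to 33, which is www-data on Debian/Ubuntu) just before the bind() fails, while the manual run stays root. A hedged way to narrow it down is to reproduce the bind as that user and to check whether something else already holds the port; the socket address and options below are taken from the question, nothing else is known about the box:

        # Is anything else already bound to 127.0.0.1:5080?
        ss -tlnp | grep 5080        # or: netstat -tlnp | grep 5080

        # Re-run the command that works as root, but as the service user
        sudo -u www-data uwsgi --socket 127.0.0.1:5080 --module run --harakiri 15 --harakiri-verbose

    If the bind fails here too, the problem is the user or environment the init script runs under rather than the uWSGI options themselves.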

    Read the article

  • IIS 7.5 FTPS external access - 534 Policy requires SSL

    - by markmnl
    I have set up an FTP site that requires SSL, but when I try to connect to it externally I get the error: 220 Microsoft FTP Service 534 Policy requires SSL. I know - I set it so! Why doesn't it fetch the SSL cert from the site and allow me to log on?! (Incidentally, beware of all the tutorials that Allow but do not Require SSL - while that makes the error go away, it is only because SSL is not being used!) I suspect I need a client that supports FTPS (FTP over SSL), and Windows Explorer just uses IE, which does not. But trying FileZilla and WinSCP I get a little further, and then it hangs on TLS/SSL negotiation expecting a response from the server...

    UPDATE: I have tried the following (from http://learn.iis.net/page.aspx/309/configuring-ftp-firewall-settings/): configure the passive port range for the FTP service; configure the external IPv4 address for the specific FTP site; configure the firewall to allow the FTP service to listen on all ports that it opens; disable stateful FTP filtering so that Windows Firewall will not block FTP traffic. And still I get (in FileZilla, trying both Active and Passive): Status: Connecting to 203.x.x.x:21... Status: Connection established, waiting for welcome message... Response: 220 Microsoft FTP Service Command: AUTH TLS Response: 234 AUTH command ok. Expecting TLS Negotiation. Status: Initializing TLS... Error: Connection timed out Error: Could not connect to server The Windows Firewall logs unhelpfully have nothing to say...

    UPDATE2: Turning the firewall off does not resolve the problem. I cannot believe how difficult it is to get something so simple to work, even when following the documentation.

    UPDATE3: Running FileZilla locally and connecting through the loopback works in Active mode; in Passive mode I get up to: Command: LIST Response: 150 Opening BINARY mode data connection. Error: GnuTLS error -53: Error in the push function. With the firewall turned off at both ends I still cannot connect the client, and I get the same error as above.
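
    When the control-channel TLS handshake stalls, it can help to test the AUTH TLS upgrade from outside with something other than an FTP client. A hedged sketch using OpenSSL's s_client from any Linux or Mac machine (replace SERVER_IP with the site's public address):

        # Ask the server to upgrade the FTP control connection to TLS (explicit FTPS)
        openssl s_client -connect SERVER_IP:21 -starttls ftp

    If the certificate and handshake complete here but FileZilla still times out, the next suspects are the passive data ports, which also have to be reachable and are also wrapped in TLS.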

    Read the article

  • VMware Workstation reboot 32-bit host when starting 32/64-bit guest using VT-x

    - by Powerman
    I'm trying to start 64-bit guests (Mac OS X and Windows 7) on a 32-bit host (Hardened Gentoo Linux, kernel 2.6.28-hardened-r9) using VMware Workstation (6.5.3.185404 and 7.0.1.227600). If VT-x is disabled in the BIOS, VMware refuses to start a 64-bit guest (as expected). If VT-x is enabled in the BIOS, VMware starts the guest without complaining, but then, after about a second (I suppose as soon as the guest tries to switch to 64-bit), my host reboots (actually, it's more like a reset - the normal reboot procedure is skipped and BIOS POST starts immediately). My hardware is a Core 2 Duo 6600 on an ASUS P5B-Deluxe with the latest stable BIOS 1101. I power-cycled the system, then enabled Vanderpool in the BIOS. My CPU doesn't support Trusted Execution Technology, and there is no way to disable it in the BIOS. I've rebooted several times after that, sometimes with a power cycle, and made sure Vanderpool is enabled in the BIOS. I've also run the VMware-guest64check-5.5.0-18463 tool, and it reports "This host is capable of running a 64-bit guest operating system under this VMware product.". About a year ago I tried disabling hardened in the kernel to make sure this isn't caused by PaX/GrSecurity, but that didn't help. I have not checked 32-bit guests with VT-x enabled yet, but without VT-x they work OK. ASUS provides "beta" BIOS updates, but according to their descriptions these updates don't fix this issue, so I'm not sure it's a good idea to try them. My best guess now is a motherboard/BIOS bug. Any ideas?

    Update 1: I tried booting the vt.iso provided at http://communities.vmware.com/docs/DOC-8978 and here is its report: CPU 0: VT is enabled on this core CPU 1: VT is enabled on this core

    Update 2: I've just tried to boot 32-bit guests (Windows 7, Ubuntu 9.04 and Gentoo) using all possible virtualization modes. In Automatic, Automatic with Replay and Binary translation everything works; in Intel VT-x/EPT or AMD-V/RVI I get the message "This host does not support EPT. Using software virtualization with a software MMU." and everything works. BUT in Intel VT-x or AMD-V mode all 32-bit guests reset the host just like the 64-bit guests! So this issue is not specific to 64-bit guests. One more thing: using Intel VT-x or AMD-V mode for both 32- and 64-bit guests, my host resets right after starting the VM, i.e. before the VM BIOS POST and before the guest even starts booting. But using Intel VT-x/EPT or AMD-V/RVI the VM BIOS runs OK, then the 64-bit guest starts booting (Windows 7 completed the "Loading files" progress bar), and only after that does the host reset.
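
    For completeness, a couple of host-side checks can confirm what the CPU and kernel expose before concluding it is a board or BIOS fault. A hedged sketch for the Gentoo host (standard /proc and dmesg checks, nothing VMware-specific):

        # How many cores advertise VT-x (vmx) to the running kernel?
        grep -c -w vmx /proc/cpuinfo

        # Any machine-check or virtualization-related messages before the reset?
        dmesg | grep -i -E 'mce|machine check|vmx'

    Since a hard reset rarely leaves anything in local logs, capturing the kernel log over netconsole or a serial console is about the only way to see whether anything is printed in the instant before the machine resets.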

    Read the article

  • OSX 10.6.6 SSH md5 break-in check

    - by Alex
    Information

    Recently one of the Linux servers that I access was compromised to steal passwords and ssh keys using a modified ssh binary. This led me to question whether the attacker had compromised my OS X laptop, which had ssh access turned on. A Sophos virus scan turned up nothing, and I did not have rkhunter installed before the attack, so I could not compare hashes of the system binaries to be sure. However, because OS X is relatively standard for each of its major releases, I asked friends for md5 hashes (md5 /usr/bin/ssh and md5 /usr/sbin/sshd) as a basic first check to see if there was anything different about my machine. A few emails later I have found the following data:

    Version (Arch) [N]       MD5 (/usr/bin/ssh)                MD5 (/usr/sbin/sshd)
    OSX 10.5.8 (PPC) [3]     1e9fd483eef23464ec61c815f7984d61  9d32a36294565368728c18de466e69f1
    OSX 10.5.8 (intel) [5]   1e9fd483eef23464ec61c815f7984d61  9d32a36294565368728c18de466e69f1
    OSX 10.6.x (intel) [7]   591fbe723011c17b6ce41c537353b059  e781fad4fc86cf652f6df22106e0bf0e
    OSX 10.6.x (intel) [4]   58be068ad5e575c303ec348a1c71d48b  33dafd419194b04a558c8404b484f650
    Mine 10.6.6 (intel)      df344cc00a294c91230c65e8b7332a79  b5094ccf4cd074aaf573d4f5df75906a

    where N is the number of machines with that MD5, and the last row is my laptop. The sample is relatively heterogeneous, spanning a few years of different makes and models of Apples, and different versions of 10.6.x. The different hash for my system made me worry that these binaries might have been compromised. So I made sure that my backup for the week was good, and dived into formatting my system and reinstalling OS X. After reinstalling OS X from the manufacturer DVD, I found that the MD5 hash did not change for either ssh or sshd.

    Goal

    Make sure that my system does not have any malicious software. Should I be worried that this base install of OS X (with no other software installed) has been compromised? I have also updated my system to 10.6.6 and found no change either.

    Other Information

    I am not sure if this is helpful information, but my laptop is an i7 15-inch MacBook Pro bought in Nov 2010, and here is some output from system_profiler: System Software Overview: System Version: Mac OS X 10.6.6 (10J567) Kernel Version: Darwin 10.6.0 64-bit Kernel and Extensions: No Time since boot: 1:37 Hardware: Hardware Overview: Model Name: MacBook Model Identifier: MacBook6,2 Processor Name: Intel Core i7 Processor Speed: 2.66 GHz Number Of Processors: 1 Total Number Of Cores: 2 L2 Cache (per core): 256 KB L3 Cache: 4 MB Memory: 4 GB Processor Interconnect Speed: 4.8 GT/s Boot ROM Version: MBP61.0057.B0C SMC Version (system): 1.58f16 Sudden Motion Sensor: State: Enabled

    On the laptop, I find: $ codesign -vvv /usr/bin/ssh /usr/bin/ssh: valid on disk /usr/bin/ssh: satisfies its Designated Requirement $ codesign -vvv /usr/sbin/sshd /usr/sbin/sshd: valid on disk /usr/sbin/sshd: satisfies its Designated Requirement $ ls -la /usr/bin/ssh -rwxr-xr-x 1 root wheel 1001520 Feb 11 2010 /usr/bin/ssh $ ls -la /usr/sbin/sshd -rwxr-xr-x 1 root wheel 1304800 Feb 11 2010 /usr/sbin/sshd $ ls -la /sbin/md5 -r-xr-xr-x 1 root wheel 65232 May 18 2009 /sbin/md5

    Update

    So far I have not gotten an answer to this question, but if you could help by increasing the number of hashes that I can compare against, that would be great. To get the hashes and version numbers, run the following on OS X: md5 /usr/bin/ssh md5 /usr/sbin/sshd ssh -V sw_vers
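
    For anyone willing to contribute data points, here is the collection step from the Update bundled into one copy-pasteable block; adding shasum alongside md5 is my own suggestion, since two different digests make an accidental match even less plausible:

        # Report OS version, ssh version, and digests of both binaries in one go
        sw_vers
        ssh -V
        for f in /usr/bin/ssh /usr/sbin/sshd; do
            md5 "$f"
            shasum "$f"    # SHA-1; ships with OS X 10.5 and later
        done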

    Read the article

  • Error 80073701 when installing Windows 7 Service Pack 1

    - by Wagner Maestrelli
    I tried to install the Windows 7 Service Pack 1 using Windows Update and I got an error (code 80073701 - unknown error). I tried it again, same thing. Rebooted and tried again, same error. Before I tried to install the SP1 I had installed all the previous updates. I have Windows 7 Ultimate 32-bits. Has anyone gone through the same problem? Any ideas of what might be happening? Thanks! UPDATE: I installed the System Update Readiness Tool. Then, I tried to install the SP1 again, but the installation failed again with the same error. As I thought I was running out of options, I downloaded the SP1 package (500+ MB) and tried to install manually. Before that, I reinstalled the SUR Update. Well, the manual installation of the SP1 failed again. Then I learned about the c:\Windows\Logs\CBS\CheckSUR.log file (thanks Patches!). I checked it out. As I installed the SUR Update multiple times, the older logs are kept in the c:\Windows\Logs\CBS\CheckSUR.persist.log file. In the first time the SUR update was installed there was an error, which is said to have been fixed. In the subsequent logs, no errors were detected. The log with the error: ================================= Checking System Update Readiness. Binary Version 6.1.7600.20593 Package Version 7.0 2010-03-19 09:57 Checking Windows Servicing Packages Checking Package Manifests and Catalogs (f) CBS MUM Corrupt 0x800B0100 servicing\Packages\Microsoft-Windows-Client-LanguagePack-Package~31bf3856ad364e35~x86~pt-BR~6.1.7600.16385.mum servicing\Packages\Microsoft-Windows-Client-LanguagePack-Package~31bf3856ad364e35~x86~pt-BR~6.1.7600.16385.cat Package manifest cannot be validated by the corresponding catalog (fix) CBS MUM Corrupt CBS File Replaced Microsoft-Windows-Client-LanguagePack-Package~31bf3856ad364e35~x86~pt-BR~6.1.7600.16385.mum from Cabinet: C:\Windows\CheckSur\v1.0\windows6.1-rtm-client-cab3-x86.cab. (fix) CBS Paired File CBS File also Replaced Microsoft-Windows-Client-LanguagePack-Package~31bf3856ad364e35~x86~pt-BR~6.1.7600.16385.cat from Cabinet: C:\Windows\CheckSur\v1.0\windows6.1-rtm-client-cab3-x86.cab. Checking Package Watchlist Checking Component Watchlist Checking Packages Checking Component Store Summary: Seconds executed: 224 Found 1 errors Fixed 1 errors CBS MUM Corrupt Total count: 1 Fixed: CBS MUM Corrupt. Total count: 1 Fixed: CBS Paired File. Total count: 1 It seems it has something to do with the Brazilian Portuguese Language Pack, which happens to be my native language. Problem is I can't uninstall the language pack since it is my system default language. And I haven't found any place to download it so I could reinstall it manually. Well, any ideas? Thanks!

    Read the article

  • Veewee, Vagrant, Puppet, Erlang and RabbitMQ

    - by Tobias
    I am kinda stuck with a problem I am trying to wrap my head around for days now. Here is what I am doing: By using Veewee, I am creating a VirtualBox image and then I create a Vagrant box from it. See here, here Finally I run puppet from Vagrant to install RabbitMQ, see here. Veewee, Vagrant and VirtualBox all run on MacOS X 10.7.4. The vagrant box itself is CentOS 6.2. This worked fine for quite some time until I was recreating the VirtualBox image a couple of days ago. During installation of the rabbitmq-plugins during my puppet run I now get the following error: /Stage[main]/Rabbitmq/Exec[rabbitmq-plugins]/returns: erlexec: HOME must be set My RabbitMQ puppet configuration can be found on my GitHub repo for that project, but here is the most important part: $version = "2.8.7" $url = "http://www.rabbitmq.com/releases/rabbitmq-server/v${version}/rabbitmq-server-${version}-1.noarch.rpm" package{"erlang": ensure => "present", } package{"rabbitmq-server": provider => "rpm", source => $url, require => Package["erlang"] } exec{"rabbitmq-plugins": path => "/usr/bin:/usr/sbin:/bin", command => "rabbitmq-plugins enable rabbitmq_management", require => Package["rabbitmq-server"] } My additional repositories, e.g. epel, are defined in veewees postinstall.sh right at the top of the file. Finally, this is what I get when I do '/etc/init.d/rabbitmq-server status' [{pid,2834}, {running_applications,[{rabbit,"RabbitMQ","2.8.7"}, {ssl,"Erlang/OTP SSL application","4.1.6"}, {public_key,"Public key infrastructure","0.13"}, {crypto,"CRYPTO version 2","2.0.4"}, {mnesia,"MNESIA CXC 138 12","4.5"}, {os_mon,"CPO CXC 138 46","2.2.7"}, {sasl,"SASL CXC 138 11","2.1.10"}, {stdlib,"ERTS CXC 138 10","1.17.5"}, {kernel,"ERTS CXC 138 10","2.14.5"}]}, {os,{unix,linux}}, {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:30] [kernel-poll:true]\n"}, {memory,[{total,24993120}, {processes,10328496}, {processes_used,10321296}, {system,14664624}, {atom,1175905}, {atom_used,1143841}, {binary,17192}, {code,11416020}, {ets,766168}]}, {vm_memory_high_watermark,0.4}, {vm_memory_limit,205851852}, {disk_free_limit,1000000000}, {disk_free,7089795072}, {file_descriptors,[{total_limit,924}, {total_used,4}, {sockets_limit,829}, {sockets_used,2}]}, {processes,[{limit,1048576},{used,131}]}, {run_queue,0}, {uptime,6}] Sources in the web suggest, that I have to set HOME. Of course I was logging into the box if HOME was set, for user vagrant it was '/home/vagrant' and for root it was 'root'. As always, any hints/ideas/suggestions/assumptions are more than welcome. Thanks a lot! Cheers, Tobi
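
    The "erlexec: HOME must be set" message points at the environment Puppet's exec runs in rather than at RabbitMQ itself, so a quick check from a shell inside the Vagrant box is to run the same command with and without HOME. A hedged sketch; the PATH matches the exec resource above and the plugin name comes from the manifest:

        # Reproduce: strip the environment down to roughly what a bare exec sees
        sudo env -i PATH=/usr/bin:/usr/sbin:/bin rabbitmq-plugins enable rabbitmq_management

        # Verify: the same command with HOME set should succeed
        sudo env -i PATH=/usr/bin:/usr/sbin:/bin HOME=/root rabbitmq-plugins enable rabbitmq_management

    If that is the difference, the Puppet-side fix is to give the exec an explicit HOME via its environment parameter (or wrap the command so HOME is set) rather than changing anything in RabbitMQ.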

    Read the article

  • Elasticsearch won't start anymore

    - by Oleander
    I restarted my elasticsearch instance 5 days ago and I haven't manage to start it since then. I get no output in the log file /var/log/elasticsearch/ nor does the elasticsearch binary print any information when running at using elasticsearch -f. I once manage to get this output. [2012-11-15 22:51:18,427][INFO ][node ] [Piper] {0.19.11}[29584]: initializing ... [2012-11-15 22:51:18,433][INFO ][plugins ] [Piper] loaded [], sites [] Running curl http://localhost:9200 resulted in curl: (7) couldn't connect to host. I've tried increasing the memory from 3gb to 10gb, but that didn't make any diffrence. Running /etc/init.d/elasticsearch start takes 30 seconds. ps aux | grep elasticsearch results in this output. /usr/local/share/elasticsearch/bin/service/exec/elasticsearch-linux-x86-64 /usr/local/share/elasticsearch/bin/service/elasticsearch.conf wrapper.syslog.ident=elasticsearch wrapper.pidfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.pid wrapper.name=elasticsearch wrapper.displayname=ElasticSearch wrapper.daemonize=TRUE wrapper.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.status wrapper.java.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.java.status wrapper.script.version=3.5.14 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -Delasticsearch-service -Des.path.home=/usr/local/share/elasticsearch -Xss256k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.awt.headless=true -Xms1024m -Xmx1024m -Djava.library.path=/usr/local/share/elasticsearch/bin/service/lib -classpath /usr/local/share/elasticsearch/bin/service/lib/wrapper.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/jna-3.3.0.jar:/usr/local/share/elasticsearch/lib/log4j-1.2.17.jar:/usr/local/share/elasticsearch/lib/lucene-analyzers-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-core-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-highlighter-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-memory-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-queries-3.6.1.jar:/usr/local/share/elasticsearch/lib/snappy-java-1.0.4.1.jar:/usr/local/share/elasticsearch/lib/sigar/sigar-1.6.4.jar -Dwrapper.key=k7r81VpK3_Bb3N_5 -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.disable_console_input=TRUE -Dwrapper.pid=23888 -Dwrapper.version=3.5.14 -Dwrapper.native_library=wrapper -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1 org.tanukisoftware.wrapper.WrapperSimpleApp org.elasticsearch.bootstrap.ElasticSearchF My current system: ElasticSearch Version: 0.19.11, JVM: 23.2-b09 Ubuntu 12.04 LTS I've tried re-install elasticsearch, removing old directories. Why can't I get it to start?
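
    Because the Tanuki service wrapper swallows startup errors, it often helps to bypass it and run the node in the foreground, and to rule out a port conflict first. A hedged diagnostic sketch; the paths are taken from the ps output above, and the config location assumes the default layout under that directory:

        # Is anything already bound to the HTTP or transport ports?
        netstat -tlnp | grep -E ':9200|:9300'

        # Bypass the wrapper and run in the foreground so errors reach the terminal
        /usr/local/share/elasticsearch/bin/elasticsearch -f

        # If -f stays silent, check (and raise) the root log level
        grep -n rootLogger /usr/local/share/elasticsearch/config/logging.yml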

    Read the article

  • fink hangs while compiling Octave on OS X

    - by Mark Bennett
    Disclaimer: I'm totally new to Fink. I'm trying to install Octave (the open-source MATLAB clone) on Mountain Lion using Fink, following the instructions at http://wiki.octave.org/Octave_for_MacOS_X It's a new installation of Fink, and I've also installed X11 per the instructions. I'm using this command (which I believe is correct since everything's 64-bit now): sudo fink install octave-atlas It hangs after a while, showing this as its last output: ... Setting up xft2-dev (2.2.0-2) ... Clearing dependency_libs of .la files being installed Reading buildlock packages... All buildlocks accounted for. /sw/bin/dpkg-lockwait -i /sw/fink/dists/stable/main/binary-darwin-x86_64/x11/xinitrc_1.5-1_darwin-x86_64.deb (Reading database ... 14871 files and directories currently installed.) Preparing to replace xinitrc 1.5-1 (using .../xinitrc_1.5-1_darwin-x86_64.deb) ... Unpacking replacement xinitrc ... Setting up xinitrc (1.5-1) ... I did notice the process name on the terminal's tab was "sort", so the second time I hit this I tried Control-D (end-of-file), and that did seem to unstick it. I'm wondering if there's some malformed command and sort was trying to read from stdin?

    Questions:
    1. Has anybody else seen this? Google wasn't helpful.
    2. Fink outputs a LOT of warnings and errors... is that normal?
    3. Has anybody got Fink to compile Octave on Mountain Lion specifically? And did they use just "octave" or "octave-atlas"? Or did you get it working with MacPorts or Homebrew?
    4. Later Fink failed with "Failed: phase compiling: gnuplot-minimal-4.6.1-1 failed". I haven't started googling that yet... but has anybody seen that? I also tried MacPorts but got other errors with Octave. And reading online it looks like Homebrew also has issues with Octave on Mountain Lion.
    5. Generally, I'm looking for anybody who's got Octave running on Mountain Lion with any of the package managers. I've been at this for a couple of days ;-)
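
    On the hang itself (question 1 in the list above), the next time the terminal shows a stuck "sort" it is cheap to look at what it is actually blocked on before resorting to Control-D. A hedged sketch; the PID is a placeholder to replace with the real one:

        # Find the stuck sort and its parent (fink's build scripts spawn it)
        ps -ax -o pid,ppid,stat,command | grep -E '[s]ort|[f]ink'

        # Show which files and pipes the stuck process has open
        sudo lsof -p 12345

    If sort's stdin turns out to be the terminal rather than a pipe, that would support the "malformed command reading from stdin" theory.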

    Read the article

  • radvd is not assigning prefix

    - by Samik
    I'm currently trying to setup IPv6 address auto-configuration with router advertisement daemon (radvd) on a virtual machine running CentOS 6.5. But the eth0 interface is not obtaining that prefix. I've obtained the ULA prefix from here. Contents of /etc/sysctl.conf # Kernel sysctl configuration file for Red Hat Linux # # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and # sysctl.conf(5) for more details. # Controls IP packet forwarding net.ipv4.ip_forward = 0 net.ipv6.conf.all.forwarding = 1 # Controls source route verification net.ipv4.conf.default.rp_filter = 1 # Do not accept source routing net.ipv4.conf.default.accept_source_route = 0 # Controls the System Request debugging functionality of the kernel kernel.sysrq = 0 # Controls whether core dumps will append the PID to the core filename. # Useful for debugging multi-threaded applications. kernel.core_uses_pid = 1 # Controls the use of TCP syncookies net.ipv4.tcp_syncookies = 1 # Disable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-arptables = 0 # Controls the default maxmimum size of a mesage queue kernel.msgmnb = 65536 # Controls the maximum size of a message, in bytes kernel.msgmax = 65536 # Controls the maximum shared segment size, in bytes kernel.shmmax = 68719476736 # Controls the maximum number of shared memory segments, in pages kernel.shmall = 4294967296 Contents of /etc/radvd.conf # NOTE: there is no such thing as a working "by-default" configuration file. # At least the prefix needs to be specified. Please consult the radvd.conf(5) # man page and/or /usr/share/doc/radvd-*/radvd.conf.example for help. # # interface eth0 { AdvSendAdvert on; MinRtrAdvInterval 3; MaxRtrAdvInterval 10; AdvDefaultPreference low; AdvHomeAgentFlag off; prefix fd8a:8d9d:808f:1::/64 { AdvOnLink on; AdvAutonomous on; AdvRouterAddr on; }; }; Contents of /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 HWADDR=52:54:00:74:d7:46 TYPE=Ethernet UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1 ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=dhcp IPV6INIT=yes IPV6_AUTOCONF=yes I've also enabled radvd at startup through chkconfig. Though I noticed that radvd is starting after interfaces are brought up. I've tried restarting the network service afterwards but still I get the following link-local address only #ip -6 addr show 1: lo: mtu 16436 inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 1500 qlen 1000 inet6 fe80::5054:ff:fe74:d746/64 scope link valid_lft forever preferred_lft forever Edit: Based on the answer given by Sander Steffann I still need clarification on some points but I'm posting here what worked. Contents of /etc/sysconfig/network NETWORKING=yes HOSTNAME=syslog-ng-server NETWORKING_IPV6=yes IPV6FORWARDING=yes Contents of /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 HWADDR=52:54:00:74:d7:46 TYPE=Ethernet UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1 ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=dhcp IPV6INIT=yes IPV6_AUTOCONF=yes IPV6FORWARDING=no Removed following line from /etc/sysctl.conf net.ipv6.conf.all.forwarding = 1 Contents of /etc/radvd.conf is as previous.
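
    One detail worth checking on this box: once forwarding is enabled on an interface, the Linux kernel ignores incoming router advertisements unless accept_ra is set to 2, and radvdump (shipped with radvd) shows whether advertisements are on the wire at all. A hedged diagnostic sketch for eth0:

        # Are RAs visible on the link? (run radvdump here or on another host on the segment)
        radvdump

        # How is the receiving side configured? accept_ra must be 2 if forwarding is 1
        sysctl net.ipv6.conf.eth0.forwarding net.ipv6.conf.eth0.accept_ra net.ipv6.conf.eth0.autoconf

        # Accept RAs even while forwarding (test only; persist it in sysctl.conf if it helps)
        sysctl -w net.ipv6.conf.eth0.accept_ra=2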

    Read the article

  • Having trouble Getting "RTSP over HTTP"

    - by Muhammad Adeel Zahid
    There is an axis camera that is connected to our site (camba.tv) through axis one click connection component (which acts as proxy). We can communicate with this camera only through http by setting the proxy to our OCCC server's address. If we want to get RTSP streams (h.264) we are only left with "RTSP over HTTP" option. For this I have followed axis VAPIX 3 documentation section 3.3. I issue requests through fiddler but don't get any response. But when i put the URL (axrtsphttp://1.00408CBEA38B/axis-media/media.amp) in windows media player (with proxy set to OCCC server 212.78.237.156:3128) the player is able to get RTSP stream over HTTP after logging in. I have created a request trace of communication between camera and windows media player through wireshark and the request that brings the stream looks like http://1.00408cbea38b/axis-media/media.amp HTTP/1.1 x-sessioncookie: 619 User-Agent: Axis AMC Host: 1.00408CBEA38B Proxy-Connection: Keep-Alive Pragma: no-cache Authorization: Digest username="root",realm="AXIS_00408CBEA38B",nonce="000a8b40Y0100409c13ac7e6cceb069289041d8feb1691",uri="/axis-media/media.amp",cnonce="9946e2582bd590418c9b70e1b17956c7",nc=00000001,response="f3cab86fc84bfe33719675848e7fdc0a",qop="auth" HTTP/1.0 200 OK Content-Type: application/x-rtsp-tunnelled Date: Tue, 02 Nov 2010 11:45:23 GMT RTSP/1.0 200 OK CSeq: 1 Content-Type: application/sdp Content-Base: rtsp://1.00408CBEA38B/axis-media/media.amp/ Date: Tue, 02 Nov 2010 11:45:23 GMT Content-Length: 410 v=0 o=- 1288698323798001 1288698323798001 IN IP4 1.00408CBEA38B s=Media Presentation e=NONE c=IN IP4 0.0.0.0 b=AS:50000 t=0 0 a=control:* a=range:npt=0.000000- m=video 0 RTP/AVP 96 b=AS:50000 a=framerate:30.0 a=transform:1,0,0;0,1,0;0,0,1 a=control:trackID=1 a=rtpmap:96 H264/90000 a=fmtp:96 packetization-mode=1; profile-level-id=420029; sprop-parameter-sets=Z0IAKeNQFAe2AtwEBAaQeJEV,aM48gA== RTSP/1.0 200 OK CSeq: 2 Session: 3F4763D8; timeout=60 Transport: RTP/AVP/TCP;unicast;interleaved=0-1;ssrc=060922C6;mode="PLAY" Date: Tue, 02 Nov 2010 11:45:24 GMT RTSP/1.0 200 OK CSeq: 3 Session: 3F4763D8 Range: npt=0- RTP-Info: url=rtsp://1.00408CBEA38B/axis-media/media.amp/trackID=1;seq=7392;rtptime=4190934902 Date: Tue, 02 Nov 2010 11:45:24 GMT [Binary Stream Content] But when i copy this request to fiddler, I only get 200 status code with content-type set to application/x-rtsp-tunneled and there is no stream data. The only thing i do different with stream is to use Basic in authorization header instead of Digest and I do not get 401 (Un authorized) status code. Can anyone explain what's happening here? How can I write request sequences to get stream in fiddler? If it is needed, I can upload the wireshark request dump somewhere.
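
    Part of what makes this hard to reproduce in Fiddler is that this "RTSP over HTTP" tunnelling scheme uses two long-lived requests tied together by the x-sessioncookie header: a GET that the server answers with application/x-rtsp-tunnelled and then keeps open for the stream, and a separate POST on which the client sends base64-encoded RTSP commands. A single request/response therefore shows exactly what you saw - a 200 with the tunnel content type and no body. A hedged sketch of the GET half using curl; the credentials are placeholders and the proxy address is the one from the question:

        # Open the server-to-client half of the tunnel and dump whatever arrives.
        # Without the companion POST carrying base64 RTSP, no media will flow, but a
        # 200 with Content-Type: application/x-rtsp-tunnelled confirms proxying and auth.
        curl -v --digest -u root:PASSWORD \
             -H 'x-sessioncookie: 619' \
             -x http://212.78.237.156:3128 \
             --output tunnel.raw \
             http://1.00408CBEA38B/axis-media/media.amp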

    Read the article

  • How do I protect a low budget network from rogue DHCP servers?

    - by Kenned
    I am helping a friend manage a shared internet connection in an apartment building with 80 apartments - 8 stairways with 10 apartments in each. The network is laid out with the internet router at one end of the building, connected to a cheap non-managed 16-port switch in the first stairway, where the first 10 apartments are also connected. One port is connected to another 16-port cheapo switch in the next stairway, where those 10 apartments are connected, and so forth. Sort of a daisy chain of switches, with 10 apartments as spokes on each "daisy". The building is a U-shape, approximately 50 x 50 meters and 20 meters high - so from the router to the farthest apartment it’s probably around 200 meters including up-and-down stairways.

    We have a fair number of problems with people hooking up wifi routers the wrong way, creating rogue DHCP servers which interrupt large groups of the users, and we wish to solve this problem by making the network smarter (instead of doing a physical unplugging binary search). With my limited networking skills, I see two ways - DHCP snooping, or splitting the entire network into separate VLANs, one per apartment. Separate VLANs give each apartment its own private connection to the router, while DHCP snooping will still allow LAN gaming and file sharing.

    Will DHCP snooping work with this kind of network topology, or does it rely on the network being in a proper hub-and-spoke configuration? I am not sure if there are different levels of DHCP snooping - say, expensive Cisco switches will do anything, but inexpensive ones like TP-Link, D-Link or Netgear will only do it in certain topologies? And will basic VLAN support be good enough for this topology? I guess even cheap managed switches can tag traffic from each port with its own VLAN tag, but when the next switch in the daisy chain receives the packet on its “downlink” port, wouldn’t it strip or replace the VLAN tag with its own trunk tag (or whatever the name is for the backbone traffic)?

    Money is tight, and I don’t think we can afford professional-grade Cisco (I have been campaigning for this for years), so I’d love some advice on which solution has the best support on low-end network equipment, and whether there are specific models that are recommended - for instance low-end HP switches, or even budget brands like TP-Link, D-Link etc. If I have overlooked another way to solve this problem, it is due to my lack of knowledge. :)
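
    Whichever hardware route you take, it is cheap to detect a rogue server the moment it appears instead of binary-searching cables. A hedged sketch from any Linux machine on the LAN; the interface name is a placeholder:

        # Broadcast a DHCPDISCOVER and list every server that answers;
        # anything other than the router is a rogue.
        sudo nmap --script broadcast-dhcp-discover

        # Or watch DHCP traffic continuously on the uplink interface
        sudo tcpdump -ni eth0 -v 'udp and (port 67 or port 68)'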

    Read the article

  • Disable .htaccess from apache allowoverride none, still reads .htaccess files

    - by John Magnolia
    I have moved all of our .htaccess config into <Directory> blocks and set AllowOverride None in the default and default-ssl. Although after restarting apache it is still reading the .htaccess files. How can I completely turn off reading these files? Update of all files with "AllowOverride" /etc/apache2/mods-available/userdir.conf <IfModule mod_userdir.c> UserDir public_html UserDir disabled root <Directory /home/*/public_html> AllowOverride FileInfo AuthConfig Limit Indexes Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory> </IfModule> /etc/apache2/mods-available/alias.conf <IfModule alias_module> # # Aliases: Add here as many aliases as you need (with no limit). The format is # Alias fakename realname # # Note that if you include a trailing / on fakename then the server will # require it to be present in the URL. So "/icons" isn't aliased in this # example, only "/icons/". If the fakename is slash-terminated, then the # realname must also be slash terminated, and if the fakename omits the # trailing slash, the realname must also omit it. # # We include the /icons/ alias for FancyIndexed directory listings. If # you do not use FancyIndexing, you may comment this out. # Alias /icons/ "/usr/share/apache2/icons/" <Directory "/usr/share/apache2/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> </IfModule> /etc/apache2/httpd.conf # # Directives to allow use of AWStats as a CGI # Alias /awstatsclasses "/usr/share/doc/awstats/examples/wwwroot/classes/" Alias /awstatscss "/usr/share/doc/awstats/examples/wwwroot/css/" Alias /awstatsicons "/usr/share/doc/awstats/examples/wwwroot/icon/" ScriptAlias /awstats/ "/usr/share/doc/awstats/examples/wwwroot/cgi-bin/" # # This is to permit URL access to scripts/files in AWStats directory. # <Directory "/usr/share/doc/awstats/examples/wwwroot"> Options None AllowOverride None Order allow,deny Allow from all </Directory> Alias /awstats-icon/ /usr/share/awstats/icon/ <Directory /usr/share/awstats/icon> Options None AllowOverride None Order allow,deny Allow from all </Directory> /etc/apache2/sites-available/default-ssl <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown </VirtualHost> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> Alias /delboy /usr/share/phpmyadmin <Directory /usr/share/phpmyadmin> # Restrict phpmyadmin access Order Deny,Allow Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> /etc/apache2/conf.d/security # # Disable access to the entire file system except for the directories that # are explicitly allowed later. # # This currently breaks the configurations that come with some web application # Debian packages. 
# #<Directory /> # AllowOverride None # Order Deny,Allow # Deny from all #</Directory> # Changing the following options will not really affect the security of the # server, but might make attacks slightly more difficult in some cases. # # ServerTokens # This directive configures what you return as the Server HTTP response # Header. The default is 'Full' which sends information about the OS-Type # and compiled in modules. # Set to one of: Full | OS | Minimal | Minor | Major | Prod # where Full conveys the most information, and Prod the least. # #ServerTokens Minimal ServerTokens OS #ServerTokens Full # # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # #ServerSignature Off ServerSignature On # # Allow TRACE method # # Set to "extended" to also reflect the request body (only for testing and # diagnostic purposes). # # Set to one of: On | Off | extended # TraceEnable Off #TraceEnable On /etc/apache2/apache2.conf # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "foo.log" # with ServerRoot set to "/etc/apache2" will be interpreted by the # server as "/etc/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # #ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. 
# LockFile ${APACHE_LOCK_DIR}/accept.lock # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 4 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 500 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. 
If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include all the user configurations: Include httpd.conf # Include ports listing Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/
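    For the AllowOverride question above, one way to confirm what Apache is actually doing is to check every loaded config for AllowOverride/AccessFileName directives and then do a full stop/start rather than a graceful reload. A minimal sketch, assuming the stock Debian/Ubuntu layout under /etc/apache2 (the /var/www path is just a placeholder for one of the served directories):
        # List every AllowOverride in the configs Apache actually loads
        grep -Rni "AllowOverride" /etc/apache2/
        # Show which vhosts and config files are in effect
        apache2ctl -S
        # A full stop/start picks up changes a reload can sometimes miss
        /etc/init.d/apache2 stop && /etc/init.d/apache2 start
        # Optional test: put deliberate garbage in a .htaccess file;
        # with overrides truly off, Apache never parses it and the site keeps working
        echo "ThisIsNotAValidDirective" > /var/www/.htaccess
    If the garbage file still produces a 500 error, some included config or a vhost in sites-enabled is still allowing overrides for that directory.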

    Read the article

  • Does VMware_ThinApp_4.0.3_169725.msi contain Trojan.Win32.Vapsup in it? [closed]

    - by Joe
    Today I ran a full system scan using Online Armor++. It detected Trojans in the installer. I have had this installer on the computer for many months and I do not remember if I ever installed it on this PC or not. For some reason I unpacked the installer with 7zip though. I was probably going to attempt to make it portable. Anyway so I had the installer in a folder, and another folder next to it with all of the installers files unpacked. The VMwareVS.cab file that was extracted from the installer, also had its files extracted into another folder. This was all done many months ago. OA++ did not detect the installer itself as as Trojan VMwareVS.cab, but it did detect 4 of the files that I had unpacked as Trojans. Here are the details of what the scan detected on my PC today. Note: I uploaded these files to VirusTotal....the Ikarus and A-squared engines(the engines from Online Armor++) are not detecting anything. But some of the other engines are detecting the same Trojan that OA++ detected(Trojan.Win32.Vapsup). C:\Downloads\VMware_ThinApp_4.0.3_169725.msi [This file was not detected by the Virus Scan as infected] CRC-32: 50189335 MD5: 9e32e3272d2637fb6e0759a604879e6f SHA-1: 19ef5a6d586ddcc5b9222ba57b0f14159655f3f8 C:\Downloads\VMware_ThinApp_4.0.3_169725\VMwareVS.cab [This file was not detected by the Virus Scan as infected] CRC-32: d3a9694a MD5: ddc278a8fe0a25486277d9800e6af85a SHA-1: 456b731c8b6fdb7a1d7bcff3d1fbe9df58ccc73a Online Armor++ Virus Scan Results: Detected Trojan.Win32.Vapsup.vee!A2 C:\Downloads\VMware_ThinApp_4.0.3_169725\Binary.ThinstallProcess CRC-32: 4888b13c MD5: 4884cb4622278c0835b9a5dcd2ae0473 SHA-1: ed879ae65147805dd69e1355c17df814b9d434ce Detected Trojan.Win32.Vapsup.vef!A2 C:\Downloads\VMware_ThinApp_4.0.3_169725\VMwareVS\AppSync.exe CRC-32: fd20b378 MD5: cbdcdd590f7ffc52b6ce68fa11f2bda4 SHA-1: aebf685e02d6693df9eaa92c67dc5746792b5ecf Detected Trojan.Win32.Vapsup.veg!A2 C:\Downloads\VMware_ThinApp_4.0.3_169725\VMwareVS\logging.dll CRC-32: 8adee5d5 MD5: 56ff9b83f58ba8eacb6e939aa4759bf0 SHA-1: b52fa38765a25fe6a2c4f60d76545a4dd64904eb Detected Trojan.Win32.Vapsup.vek!A2 C:\Downloads\VMware_ThinApp_4.0.3_169725\VMwareVS\thinreg.exe CRC-32: 423c5652 MD5: c436feff8d9096e7475c84a6bca6096c SHA-1: 685b84af796132ce144aacd6ff23379e17ddf1a7 Are these files indeed infected by this Trojan, or is it just a false positive? Does anybody have the same version of the original installer, who could find out if the Checksums of the installer and unpacked files match? Should I be worried about whether this Trojan has spread and infected my machine? Thanks in advance for any help!
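    For comparing checksums with another copy of the installer, any environment with coreutils (a Linux box or Cygwin) can reproduce the values listed above; a rough sketch, with the filenames simply mirroring the paths from the scan report:
        md5sum VMware_ThinApp_4.0.3_169725.msi
        sha1sum VMware_ThinApp_4.0.3_169725.msi
        sha1sum VMwareVS/AppSync.exe VMwareVS/logging.dll VMwareVS/thinreg.exe
    Hashes that match a copy downloaded directly from VMware would point strongly toward a false positive.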

    Read the article

  • l2tp server always 'sent [CCP ResetReq id=0x3]' when got compressed data request

    - by wilbur
    I have built a xl2tpd/ipsec server on my ubuntu 12.04.3, and I managed to make a l2tp vpn connection to the xl2tpd server from my android phone. The xl2tpd log said xl2tpd[10828]: Enabling IPsec SAref processing for L2TP transport mode SAs xl2tpd[10828]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes xl2tpd[10828]: setsockopt recvref[22]: Protocol not available xl2tpd[10828]: This binary does not support kernel L2TP. xl2tpd[10828]: xl2tpd version xl2tpd-1.2.8 started on atime.me PID:10828 xl2tpd[10828]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc. xl2tpd[10828]: Forked by Scott Balmos and David Stipp, (C) 2001 xl2tpd[10828]: Inherited by Jeff McAdams, (C) 2002 xl2tpd[10828]: Forked again by Xelerance (www.xelerance.com) (C) 2006 xl2tpd[10828]: Listening on IP address 0.0.0.0, port 1701 xl2tpd[10828]: control_finish: Peer requested tunnel 39154 twice, ignoring second one. xl2tpd[10828]: Connection established to 117.136.8.59, 43149. Local: 25339, Remote: 39154 (ref=0/0). LNS session is 'default' However I cannot access the web in my browser. The pppd log said rcvd [Compressed data] 00 1d 82 c4 7c 04 d8 09 ... sent [CCP ResetReq id=0x7] I have googled a lot and found that this was mostly caused by a mppe decompression error. I have disabled BSD-Compress compression with nobsdcomp in /etc/ppp/xl2tpd-options but it did not work. I used openswan-2.6.33 and xl2tpd-1.2.8 which were built from source. And my configurations: /etc/ipsec.conf version 2.0 config setup nat_traversal=yes virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12 oe=off protostack=netkey conn L2TP-PSK-NAT rightsubnet=vhost:%priv also=L2TP-PSK-noNAT conn L2TP-PSK-noNAT authby=secret pfs=no auto=add keyingtries=3 rekey=no ikelifetime=8h keylife=1h type=transport left=106.186.121.214 leftprotoport=17/1701 right=%any rightprotoport=17/%any /etc/xl2tpd/xl2tpd.conf [global] ipsec saref = yes [lns default] local ip = 10.10.11.1 ip range = 10.10.11.2-10.10.11.245 refuse chap = yes refuse pap = yes require authentication = yes ppp debug = yes pppoptfile = /etc/ppp/xl2tpd-options length bit = yes /etc/ppp/xl2tpd-options require-mschap-v2 ms-dns 8.8.8.8 ms-dns 8.8.4.4 asyncmap 0 auth crtscts lock hide-password modem name l2tpd proxyarp lcp-echo-interval 30 lcp-echo-failure 4 debug nobsdcomp Any suggestions? Thanks in advance.
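    If the CCP ResetReq messages really come from compression/MPPE negotiation, one blunt test is to stop pppd from negotiating CCP at all and see whether browsing starts working. A sketch only; the restart assumes xl2tpd was started by hand from the source build:
        # Disable CCP (compression control) entirely, not just BSD-Compress
        echo "noccp" >> /etc/ppp/xl2tpd-options
        # Restart xl2tpd so the new ppp options are picked up
        killall xl2tpd
        xl2tpd -c /etc/xl2tpd/xl2tpd.conf
    If that helps, the underlying mismatch is between the phone's compression/MPPE settings and pppd's, and re-enabling CCP with explicit, matching MPPE options on both ends is the less drastic fix.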

    Read the article

  • nginx 502 bad gateway - fastcgi not listening? (Debian 5)

    - by Sean
    I have experience with nginx but it's always been pre-installed for me (via VPS.net pre-configured image). I really like what it does for me, and now I'm trying to install it on my own server with apt-get. This is a fairly fresh Debian 5 install. I have few extra packages installed but they're all .deb's, no manual compiling or anything crazy going on. Apache is already installed but I disabled it. I did apt-get install nginx and that worked fine. Changed the config around a bit for my needs, although the same problem I'm about to describe happens even with the default config. It took me a while to figure out that the default debian package for nginx doesn't spawn fastcgi processes automatically. That's pretty lame, but I figured out how to do that with this script, which I found posted on many different web sites: #!/bin/bash ## ABSOLUTE path to the PHP binary PHPFCGI="/usr/bin/php5-cgi" ## tcp-port to bind on FCGIPORT="9000" ## IP to bind on FCGIADDR="127.0.0.1" ## number of PHP children to spawn PHP_FCGI_CHILDREN=10 ## number of request before php-process will be restarted PHP_FCGI_MAX_REQUESTS=1000 # allowed environment variables sperated by spaces ALLOWED_ENV="ORACLE_HOME PATH USER" ## if this script is run as root switch to the following user USERID=www-data ################## no config below this line if test x$PHP_FCGI_CHILDREN = x; then PHP_FCGI_CHILDREN=5 fi ALLOWED_ENV="$ALLOWED_ENV PHP_FCGI_CHILDREN" ALLOWED_ENV="$ALLOWED_ENV PHP_FCGI_MAX_REQUESTS" ALLOWED_ENV="$ALLOWED_ENV FCGI_WEB_SERVER_ADDRS" if test x$UID = x0; then EX="/bin/su -m -c \"$PHPFCGI -q -b $FCGIADDR:$FCGIPORT\" $USERID" else EX="$PHPFCGI -b $FCGIADDR:$FCGIPORT" fi echo $EX # copy the allowed environment variables E= for i in $ALLOWED_ENV; do E="$E $i=${!i}" done # clean environment and set up a new one nohup env - $E sh -c "$EX" &> /dev/null & When I do a "ps -A | grep php5-cgi", I see the 10 processes running, that should be ready to listen. But when I try to view a web page via nginx, I just get a 502 bad gateway error. After futzing around a bit, I tried telneting to 127.0.0.1 9000 (fastcgi is listening on port 9000, and nginx is configured to talk to that port), but it just immediately closes the connection. This makes me think the problem is with fastcgi, but I'm not sure what I can do to test it. It may just be closing the connection because it's not getting fed any data to process, but it closes immediately so that makes me think otherwise. So... any advice? I can't figure it out. It doesn't help that it's 1AM, but I'm going crazy here!
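    Before blaming nginx, it is worth confirming that php5-cgi is really bound to 127.0.0.1:9000 and starts cleanly; a minimal check, assuming the Debian package names mentioned above (the foreground run is only for testing):
        # Is anything actually listening on port 9000?
        netstat -lnp | grep :9000
        # Run one FastCGI process in the foreground to see any startup errors
        /usr/bin/php5-cgi -b 127.0.0.1:9000
    If the foreground process starts without complaint and netstat shows the listener, the next suspect is the fastcgi_pass address/port (and SCRIPT_FILENAME parameter) in the nginx server block.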

    Read the article

  • Anyone else experiencing high rates of Linux server crashes during a leap second day?

    - by Bron Gondwana
    POSTMORTEM Anticlimax: only thing that died was my VPN (openvpn) link to the cluster, so there was an exciting few seconds while it re-established. Everything else was fine. Starting back ntp everywhere. If you look at Marco's blog at http://my.opera.com/marcomarongiu/blog/2012/06/01/an-humble-attempt-to-work-around-the-leap-second - he has a solution for phasing the time change over 24 hours using ntpd -x to avoid the 1 second skip. Give that a go if it matters to you. For the systems I run, the jump isn't a problem. Just today, Sat June 30th - starting soon after the start of the day GMT. We've had a handful of blades in different datacentres as managed by different teams all go dark - not responding to pings, screen blank. They're all running Debian Squeeze - with everything from stock kernel to custom 3.2.21 builds. Most are Dell M610 blades, but I've also just lost a Dell R510 and other departments have lost machines from other vendors too. There was also an older IBM x3550 which crashed and which I thought might be unrelated, but now I'm wondering. The one crash which I did get a screen dump from said: [3161000.864001] BUG: spinlock lockup on CPU#1, ntpd/3358 [3161000.864001] lock: ffff88083fc0d740, .magic: dead4ead, .owner: imapd/24737, .owner_cpu: 0 Unfortunately the blades all supposedly had kdump configured, but they died so hard that kdump didn't trigger - and they had console blanking turned on. I've disabled console blanking now, so fingers crossed I'll have more information after the next crash. Just want to know if it's a common thread or "just us". It's really odd that they're different units in different datacentres bought at different times and run by different admins (I run the FastMail.FM ones)... and now even different vendor hardware. Most of the machines which crashed had been up for weeks/months and were running 3.1 or 3.2 series kernels. The most recent crash was a machine which had only been up about 6 hours running 3.2.21. THE WORKAROUND Ok people, here's how I worked around it. disabled ntp: /etc/init.d/ntp stop created http://linux.brong.fastmail.fm/2012-06-30/fixtime.pl (code stolen from Marco, see blog posts in comments) ran fixtime.pl without an argument to see that there was a leap second set ran fixtime.pl with an argument to remove the leap second NOTE: depends on adjtimex. I've put a copy of the squeeze adjtimex binary at http://linux.brong.fastmail.fm/2012-06-30/adjtimex - it will run without dependencies on a squeeze 64 bit system. If you put it in the same directory as fixtime.pl, it will be used if the system one isn't present. Obviously if you don't have squeeze 64 bit... find your own. I'm going to start ntp again tomorrow. As an anonymous user suggested - an alternative to running adjtimex is to just set the time yourself, which will presumably also clear the leapsecond counter.
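    The "just set the time yourself" alternative mentioned at the end can be done with nothing beyond coreutils; a sketch, assuming ntpd is stopped first and that a settimeofday() call does clear the pending leap flag as the post suggests:
        /etc/init.d/ntp stop
        # Re-setting the clock to (almost) its current value clears the pending leap second
        date -s "$(date)"
        # ...and bring ntp back the following day
        /etc/init.d/ntp start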

    Read the article

  • Can't connect to svnserve on localhost - connection actively refused

    - by RMorrisey
    When I try to connect using Tortoise to my SVN server using: svn://localhost/ Tortoise tells me: "Can't connect to host 'localhost'. No connection could be made because the target machine actively refused it." How can I fix this? I am trying to set up a subversion server on my local PC for personal use. I am running Windows Vista, with SlikSVN and TortoiseSVN installed. I previously had everything working correctly, but I found that I couldn't merge(!), apparently due to a version mismatch between the SVN client and server. Anyway... I now have the following setup: I created a repository using svnadmin create; it resides at C:\svnGrove C:\svnGrove\conf\svnserve.conf (# comments omitted): [general] anon-access=read auth-access=write password-db=passwd #authz-db=authz realm=svnGrove C:\svnGrove\conf\passwd: [users] myname=mypass My Subversion Server service is pointed to: C:\Program Files\SlikSvn\bin\svnserve.exe --service -r C:\svnGrove It shows the TCP/IP service as a dependency. I have also tried running svnserve from the command line, with similar results. The below is provided by the 'about' option in TortoiseSVN: TortoiseSVN 1.6.10, Build 19898 - 32 Bit , 2010/07/16 15:46:08 Subversion 1.6.12, apr 1.3.8 apr-utils 1.3.9 neon 0.29.3 OpenSSL 0.9.8o 01 Jun 2010 zlib 1.2.3 The following is from svn --version on the command line (not sure why it says CollabNet, CollabNet was the previous SVN binary that I had set up. The uninstaller failed to remove everything gracefully): svn, version 1.6.12 (SlikSvn/1.6.12) WIN32 compiled Jun 22 2010, 20:45:29 Copyright (C) 2000-2009 CollabNet. Subversion is open source software, see http://subversion.tigris.org/ This product includes software developed by CollabNet (http://www.Collab.Net/). The following repository access (RA) modules are available: * ra_neon : Module for accessing a repository via WebDAV protocol using Neon. - handles 'http' scheme - handles 'https' scheme * ra_svn : Module for accessing a repository using the svn network protocol. - with Cyrus SASL authentication - handles 'svn' scheme * ra_local : Module for accessing a repository on local disk. - handles 'file' scheme * ra_serf : Module for accessing a repository via WebDAV protocol using serf. - handles 'http' scheme - handles 'https' scheme I disabled my Windows Firewall and CA Internet Security, without success in resolving the issue. Edit The old version of svnserve was still set up as a service after the uninstall, pointed to this path: C:\Program Files\Subversion\svn-win32-1.4.6\bin I edited the registry key for the service to point to the new path (shown above). Whether I run svnserve as a service, or using -d, I do not see an entry for that port number in the listing generated by netstat -anp tcp.
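    A quick way to separate service problems from network problems is to start svnserve by hand and then check whether anything is listening on the default svn port (3690); a sketch using the repository path from the question, with each line run in its own command prompt:
        svnserve -d -r C:\svnGrove
        netstat -an | findstr 3690
    If the manual run accepts connections but the Windows service still refuses them, the service's registry entry is the likely culprit (for example, still pointing at the old CollabNet binary or missing the -r argument).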

    Read the article

  • Should use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.) Here's the setup: a brand new dedicated server (8GB RAM, some 140+ GB disk, RAID 1 via HW controller, 15000 RPM). It's a production web server (with MySQL on it, too, not just serving web requests), not a personal desktop computer or similar, running Ubuntu Server 64bit 10.04 LTS. We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3 via AWS' console. We are now migrating to the dedicated server and I want to be able to back up our data to Amazon's S3. The main reason is the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of: do a "simple" file-based backup with rsync, dumping the database and other files, and uploading to Amazon via S3 API commands, or to an EC2 instance, or something; or do a file-system "freeze" (using XFS), use the usual EBS/EC2 snapshot tooling on that part of the file system to take a snapshot, and upload it to Amazon. Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4, or should I use something else? Would it then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do, anyway? Any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume. My intention is to do it on a "real"/bare-metal dedicated server. Update: I just saw this on Wikipedia: XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager. I had always assumed that I could choose 2 ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize these 2 options are more like it: With XFS: 1) do xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze XFS. With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze XFS; 4) somehow back up the LVM snapshot. Thanks a lot in advance. Let me know if I need to clarify something.
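    For the "XFS without LVM" option above, the freeze/copy/unfreeze cycle is only safe on a mount point that nothing critical needs to write to during the copy. A rough sketch, assuming the data lives on a dedicated XFS mount at /srv (a placeholder) and the staging directory is not on the frozen filesystem:
        xfs_freeze -f /srv          # suspend writes and flush the filesystem
        rsync -a /srv/ /backup/srv-staging/
        xfs_freeze -u /srv          # resume writes
        # ...then upload /backup/srv-staging to S3 with whatever client you prefer
    Freezing the root filesystem (or the one MySQL writes to) will stall those writers for the whole duration of the rsync, which is why the LVM-snapshot variant is usually preferred on a single-volume layout.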

    Read the article

  • FTP not listing files behind firewall (setsockopt (ignored): Permission denied)

    - by KennyDs
    We are developing a Magento application that has a module that works with FTP. Today we deployed this on the testing environment which is setup in the following way: Gateway server which has the following iptables rules: # iptables -L -n -v Chain INPUT (policy ACCEPT 2 packets, 130 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 165 13720 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED Chain FORWARD (policy ACCEPT 7 packets, 606 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- eth1 eth0 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 15 965 ACCEPT all -- eth0 eth1 0.0.0.0/0 0.0.0.0/0 0 0 REJECT all -- eth1 eth1 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable Chain OUTPUT (policy ACCEPT 126 packets, 31690 bytes) pkts bytes target prot opt in out source destination These are set at runtime via the following bash script: #!/bin/sh PATH=/usr/sbin:/sbin:/bin:/usr/bin # # delete all existing rules. # iptables -F iptables -t nat -F iptables -t mangle -F iptables -X # Always accept loopback traffic iptables -A INPUT -i lo -j ACCEPT # Allow established connections, and those not coming from the outside iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT # Allow outgoing connections from the LAN side. iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT # Masquerade. iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE # Don't forward from the outside to the inside. iptables -A FORWARD -i eth1 -o eth1 -j REJECT # Enable routing. echo 1 > /proc/sys/net/ipv4/ip_forward The gateway server is connected to the WAN via eth1 and is connected to the internal network via eth0. One of the servers from eth1 has the following problem when trying to list files over ftp: $ ftp -vd myftpserver.com Connected to myftpserver.com 220 Welcome to MY FTP Server ftp: setsockopt: Bad file descriptor Name (myftpserver.com:magento): XXXXXXXX ---> USER XXXXXXXX 331 User XXXXXXXX, password please Password: ---> PASS XXXX 230 Password Ok, User logged in ---> SYST 215 UNIX Type: L8 Remote system type is UNIX. Using binary mode to transfer files. ftp> ls ftp: setsockopt (ignored): Permission denied ---> PORT 192,168,19,15,135,75 421 Service not available, remote server has closed connection When I try listing the files in passive mode, same result. When I run the same command on the gateway server, everything works fine so I believe that the issue is happening because of the iptables rules not forwarding properly. Does anyone have an idea which rule I need to add to make this work?
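    One detail not shown in the ruleset above is the FTP connection-tracking helper: without it, the kernel cannot classify the separate FTP data connection as RELATED, so directory listings hang even though the control channel on port 21 works. A sketch of what to try on the gateway, assuming a reasonably recent kernel (older kernels call the module ip_conntrack_ftp):
        # Load the FTP helper so data connections are matched by the RELATED rules
        modprobe nf_conntrack_ftp
        # Make it persistent across reboots (Debian-style systems)
        echo nf_conntrack_ftp >> /etc/modules
    The "setsockopt (ignored): Permission denied" message itself is usually a harmless client-side warning; the real symptom is the data connection being dropped.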

    Read the article

  • Has anyone achieved true differential sync with rsync in ESXi?

    - by Julius
    Berate me later on the fact that I'm using the service console to do anything in ESXi... I've got a working rsync binary (v3.0.4) that I can use in ESXi 4.1U1. I tend to use rsync over cp when copying VMs or backups from one local datastore to another local datastore. I've used rsync to copy data from one ESXi box to another, but that was just for small files. I'm now trying to do true differential syncs of backups taken via ghettoVCB between my primary ESXi machine and a secondary one. But even when I do this locally (one datastore to another datastore on the same ESXi machine), rsync appears to copy the files in their entirety. I've got two VMDKs totalling 80GB in size, and rsync still takes anywhere between 1 and 2 hours, but the VMDKs aren't growing that much daily. Below is the rsync command I'm executing. I am copying locally because ultimately these files will get copied onto a datastore created from a LUN on a remote system. It's not an rsync that'll be serviced by an rsync daemon on a remote system. rsync -avPSI VMBACKUP_2011-06-10_02-27-56/* VMBACKUP_2011-06-01_06-37-11/ --stats --itemize-changes --existing --modify-window=2 --no-whole-file sending incremental file list >f..t...... VM-flat.vmdk 42949672960 100% 15.06MB/s 0:45:20 (xfer#1, to-check=5/6) >f..t...... VM.vmdk 556 100% 4.24kB/s 0:00:00 (xfer#2, to-check=4/6) >f..t...... VM.vmx 3327 100% 25.19kB/s 0:00:00 (xfer#3, to-check=3/6) >f..t...... VM_1-flat.vmdk 42949672960 100% 12.19MB/s 0:56:01 (xfer#4, to-check=2/6) >f..t...... VM_1.vmdk 558 100% 2.51kB/s 0:00:00 (xfer#5, to-check=1/6) >f..t...... STATUS.ok 30 100% 0.02kB/s 0:00:01 (xfer#6, to-check=0/6) Number of files: 6 Number of files transferred: 6 Total file size: 85899350391 bytes Total transferred file size: 85899350391 bytes Literal data: 2429682778 bytes Matched data: 83469667613 bytes File list size: 129 File list generation time: 0.001 seconds File list transfer time: 0.000 seconds Total bytes sent: 2432530094 Total bytes received: 5243054 sent 2432530094 bytes received 5243054 bytes 295648.92 bytes/sec total size is 85899350391 speedup is 35.24 Is this because ESXi is itself making so many changes to the VMDKs that, as far as rsync is concerned, the entire file has to be retransmitted? Has anyone actually achieved true differential sync with ESXi?
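    One thing that can make local runs look like full copies is rsync's update strategy: without --inplace, the destination file is rebuilt in its entirety even when only a few blocks differ, so the elapsed time tracks the full 40 GB per VMDK regardless of how little changed. A hedged sketch of an alternative invocation (with the 3.0.x binary mentioned above, --inplace cannot be combined with -S/--sparse, so -S is dropped here):
        # Trailing slash copies the directory contents, matching the original SRC/* intent
        rsync -avI --inplace --no-whole-file --existing --modify-window=2 --progress --stats \
            VMBACKUP_2011-06-10_02-27-56/ VMBACKUP_2011-06-01_06-37-11/
    Even then, both source and destination still have to be read end-to-end for the block checksums, so an hour-plus of wall-clock time for 80 GB on the same spindles isn't unusual; the savings are mostly in writes, not reads.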

    Read the article

  • Windows Server 2003 Standard R2 CD 1 cannot boot: freeze at No Emulation

    - by TGP1994
    Hi everyone. I've been interested in the Windows Server line of OSes, so since I applied for DreamSpark, I thought I'd download it and try it. I just so happened to have an old desktop that I was using a while ago for Windows XP, so I imaged the drive in preparation for it to be overwritten with the new OS. (This system has an Asus A7V8X-X motherboard, an AMD Athlon XP 2800+ processor, and 1GB of RAM.) I tried burning the first disk image on my newer desktop computer, running Windows XP, but the CD burner consistently failed at a particular track area from CD to CD, so it seemed like the burner was toast. Fortunately, I had a laptop, so I transferred the images over to that and burned the first disc there. The first time around went great, and the burning program reported no errors. I then took the CD over to the computer that I was intending to install Server onto, set the BIOS to boot from the CD drive, and booted it up. As normal, after the POST, it printed "Boot from ATAPI CD-Rom: No Emulation", which I was used to seeing with bootable CDs. I waited for the "Press any key to continue..." message that I had become so familiar with on Windows discs, but I saw none. The computer sat there for about 5 seconds with the CD spinning, then it spun down as if it was done reading it. Nothing else happened. No response from the keyboard. I tried again, same result. I then downloaded IMGBurn and put the burned CD into the laptop that burned it originally. I also downloaded a fresh image from the DreamSpark site. I ran a verify session, and everything checked out. I later tried various DOS startup discs, then tried booting the winnt binary, which supposedly initiates the installation process. Either the shells reported that not enough memory was available (since they would be running in low-memory mode), or FreeDOS in particular would report illegal instructions right away. Is the image corrupt at DreamSpark, or am I doing something wrong?

    Read the article

  • PHP 5.2 to 5.3 not upgrading, no errors

    - by Webnet
    I'm following this guide: http://atik97.wordpress.com/2010/06/12/how-to-upgrade-to-php-5-3-in-ubuntu-9-10/ I've done all the steps, but it's still showing php 5.2.6 - any ideas? I have also tried -cgi instead of -cli, neither have any effect. update I've tried rebooting the server to see if that would have any effect and unfortunately it didn't update Output of dpkg -l *php*: Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad) ||/ Name Version Description +++-=============================================-=============================================-========================================================================================================== un libapache2-mod-php4 <none> (no description available) ii libapache2-mod-php5 5.2.6.dfsg.1-3ubuntu4.6 server-side, HTML-embedded scripting language (Apache 2 module) un libapache2-mod-php5filter <none> (no description available) ii php-pear 5.2.6.dfsg.1-3ubuntu4.6 PEAR - PHP Extension and Application Repository un php4-cli <none> (no description available) un php4-dev <none> (no description available) un php4-mysql <none> (no description available) un php4-pear <none> (no description available) ii php5 5.2.6.dfsg.1-3ubuntu4.6 server-side, HTML-embedded scripting language (metapackage) ii php5-cgi 5.2.6.dfsg.1-3ubuntu4.6 server-side, HTML-embedded scripting language (CGI binary) ii php5-cli 5.2.6.dfsg.1-3ubuntu4.6 command-line interpreter for the php5 scripting language ii php5-common 5.2.6.dfsg.1-3ubuntu4.6 Common files for packages built from the php5 source ii php5-curl 5.2.6.dfsg.1-3ubuntu4.6 CURL module for php5 un php5-dev <none> (no description available) ii php5-gd 5.2.6.dfsg.1-3ubuntu4.6 GD module for php5 ii php5-imap 5.2.6-0ubuntu5.1 IMAP module for php5 un php5-json <none> (no description available) ii php5-mcrypt 5.2.6-0ubuntu2 MCrypt module for php5 ii php5-mysql 5.2.6.dfsg.1-3ubuntu4.6 MySQL module for php5 un php5-mysqli <none> (no description available) ii php5-xsl 5.2.6.dfsg.1-3ubuntu4.6 XSL module for php5 un phpapi-20060613+lfs <none> (no description available) ii phpmyadmin 4:3.1.2-1ubuntu0.2 MySQL web administration tool update The following commands and their outputs: grep php53 /etc/apt/sources.list deb http://php53.dotdeb.org stable all deb-src http://php53.dotdeb.org stable all apt-cache search -f "libapache2-mod-php5" http://pastebin.com/XNXdsXYC update I've updated the question with more details on installed packages.
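    With the dotdeb lines already in sources.list, the usual remaining steps are importing the repository key, refreshing the package index, and checking which candidate version apt actually sees. A sketch; the key URL is shown as it was commonly documented at the time and should be treated as an assumption to verify:
        wget -O- http://www.dotdeb.org/dotdeb.gpg | sudo apt-key add -
        sudo apt-get update
        # Which version does apt consider the candidate now?
        apt-cache policy php5 php5-cli libapache2-mod-php5
        # If the candidate is 5.3.x, pull it in explicitly
        sudo apt-get install php5 php5-cli php5-cgi libapache2-mod-php5
    If apt-cache policy still shows only 5.2.6 as the candidate, the dotdeb "stable" suite (built for Debian) probably isn't being matched on this Ubuntu release, and that, rather than the installed packages, is the real blocker.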

    Read the article

  • PHP `virtual()` with Apache MultiViews not working after upgrade to Ubuntu 12.04

    - by Izzy
    I use PHP's virtual() directive quite a lot on one of my sites, including central elements. This worked fine for the last ~10 years -- but after upgrading (or rather moving, as it is on a new machine) to Ubuntu 12.04 it somehow got broken. Example setup (simplified) To make it easier to understand, I simplify some things (contents). So say I need a HTML fragment like <P>For further instructions, please look <A HREF='foobar'>here</P> in multiple pages. 10 years ago, I used SSI for that, so it is put into a file in a central place -- so if e.g. the targeted URL changes, I only need to update it in one place. To serve multiple languages, I have Apache's MultiViews enabled -- and at $DOCUMENT_ROOT/central/ there are the files: foobar.html (English variant, and the default) foobar.html.de (German variant). Now in the PHP code, I simply placed: <? virtual("/central/foobar"); ?> and let Apache take care to deliver the correct language variant. The problem As said, this worked fine for about 10 years: German visitors got the German variant, all others the English (depending on their preferred language). But after upgrading to Ubuntu 12.04, it no longer worked: Either nothing was delivered from the virtual() command, or (in connection with framesets) it even ended up in binary gibberish. Trying to figure out what happens, I played with a lot of things. I first thought MultiViews was (somehow) not available anymore -- but calling http://<server>/central/foobar showed the right variant, depending on the configured language preferences. This also proved there was nothing wrong with file permissions. The error.log gave no clues either (no error message thrown). Finally, just as a "last ressort", I changed the PHP command to <? virtual("central/foobar.html"); ?> -- and that very same file was in fact included. So PHP's virtual() function basically worked -- but the language dependend stuff obviously did no longer work together with it as it did before. Of course I tried to find some change (most likely in PHP's virtual() command), using Google a lot, and also searching the questions here -- unfortunately to no avail. Finally: The question Putting "design questions" aside (surely today I would design things differently -- but at least currently I miss the time to change that for a quite huge amount of pages): What can be done to make it work again? I surely missed something -- but I cannot figure out what...

    Read the article

  • turn off / disable the performance cache

    - by jessie
    OK, I run a streaming website, and my CMS gives me the error "Failed To Find Flength File" when uploading videos, so I did some research. The answer I got from the coder is below. I did all of that, but the one thing I could not do is turn off what he refers to as the performance cache, covered at the end of his reply... I am on CentOS. Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file do not actually hit the disk until the upload is complete, making it worthless for reporting on progress during the upload. This may be fixed using a .htaccess file, assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files. I strongly recommend giving it a quick read before attempting to install your own .htaccess file. 1. A mod_security module for Apache. To fix it, just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts reside, or in some directory above it (like your server's DOCUMENT_ROOT, i.e. the top level of your webspace). htaccess files must be uploaded in ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 or (RW-R--R--). # Turn off mod_security filtering. SecFilterEngine Off # The below probably isn't needed, # but better safe than sorry. SecFilterScanPOST Off If the above method does not work, try putting the following lines into the file: SetEnvIfNoCase Content-Type \ "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads" mod_gzip_on No 2. "Performance Cache" enabled on OS X Server. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching." Apparently if ANY of your hosted sites are using performance caching, then by default, all sites (domains) will attempt to. The fix then is to disable the performance cache on all hosted sites.
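    On a CentOS box, the only piece of the quoted advice that can actually be applied is the .htaccess override for mod_security/mod_gzip; the "performance cache" step is specific to OS X Server per the quote itself. A sketch of creating the file next to the uploader's CGI scripts, with the path being a placeholder for the real script directory:
        cat > /var/www/html/cgi-bin/uber/.htaccess <<'EOF'
        # Turn off mod_security filtering for the upload scripts
        SecFilterEngine Off
        SecFilterScanPOST Off
        EOF
        chmod 644 /var/www/html/cgi-bin/uber/.htaccess
    If the flength error persists after that, the remaining suspects on a Linux host are output buffering or gzip filters in front of Apache rather than anything Mac-specific.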

    Read the article

< Previous Page | 146 147 148 149 150 151 152 153 154 155 156 157  | Next Page >