Search Results

Search found 3478 results on 140 pages for 'daniel sh'.


  • Can't make nodejs mingw32: pkg-config can't find gnutls

    - by valya
    I'm trying to compile nodejs using MSYS/mingw32 on Windows 7 64-bit:

        Valentin Golev@VALYASNOTEBOOK /home/Valentin_Golev/nodejs
        $ ./configure
        Checking for program CL : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\x86_amd64\CL.exe
        Checking for program CL : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\CL.exe
        Checking for program CL : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\CL.exe
        Checking for program CL : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
        Checking for program CL : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
        Checking for program CL : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\x86_amd64\CL.exe
        Checking for program CL : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe
        Checking for program CL : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\CL.exe
        Checking for program CL : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\CL.exe
        Checking for program LINK : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\LINK.exe
        Checking for program LIB : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\LIB.exe
        Checking for program MT : ok C:\Program Files\\Microsoft SDKs\Windows\v6.0A\bin\x64\MT.exe
        Checking for program RC : ok C:\Program Files\\Microsoft SDKs\Windows\v6.0A\bin\x64\RC.exe
        Checking for msvc : ok
        Checking for msvc : ok
        Checking for library dl : not found
        Checking for library execinfo : not found
        Checking for gnutls >= 2.5.0 : fail
        --- libeio ---
        Checking for library pthread : not found
        Checking for function pthread_create : not found
        error: the configuration failed (see 'C:\\msys\\1.0\\home\\Valentin_Golev\\nodejs\\build\\config.log')

    I have gnutls built and installed! I've checked config.log, and there was a command:

        pkg-config --errors-to-stdout --print-errors --atleast-version=2.5.0 gnutls

    I typed it into the console:

        Valentin Golev@VALYASNOTEBOOK /home/Valentin_Golev/nodejs
        $ pkg-config --errors-to-stdout --print-errors --atleast-version=2.5.0 gnutls
        Package gnutls was not found in the pkg-config search path.
        Perhaps you should add the directory containing `gnutls.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'gnutls' found

    But:

        Valentin Golev@VALYASNOTEBOOK ~
        $ $PKG_CONFIG_PATH
        sh: c:/msys/1.0/local/lib/pkgconfig: is a directory
        Valentin Golev@VALYASNOTEBOOK ~
        $ cd $PKG_CONFIG_PATH
        Valentin Golev@VALYASNOTEBOOK /local/lib/pkgconfig
        $ ls
        gnutls-extra.pc  gnutls.pc

    What am I doing wrong?
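    The session above shows $PKG_CONFIG_PATH expanding in the interactive shell, but never shows it being exported, so a hedged first check is whether pkg-config's own process actually inherits it (the directory is the one from the session; --modversion is just a convenient probe):

        export PKG_CONFIG_PATH=/local/lib/pkgconfig
        pkg-config --modversion gnutls

    If that prints a version, the variable simply wasn't reaching the configure run.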


  • Bash script doesn't open in terminal on reboot

    - by twigg
    Quick overview: I have created a script that reboots the laptop after x amount of time and for x number of cycles. I have added the script to the start-up applications, and the script does seem to be running in the background, but it never opens a terminal window. Am I missing something? Adding the code (this is saved in a file called countdown.sh):

        #!/bin/bash
        # check if passed.txt exists; if it does, send to the soak test
        if [ -f passed.txt ]; then
            echo reboot has passed $nol cycles
            sleep 5
            echo Starting soak tests
            sleep 5
            rm testlog.txt
            rm passed.txt
            phoronix-test-suite run quick-test
            exit 0
        fi

        # check if testlog.txt exists; if not, create it
        if [ ! -f testlog.txt ]; then
            echo >> testlog.txt
        fi

        # read the reboot file to see how many loops have been completed
        exec < testlog.txt
        nol=0
        while read line
        do
            nol=`expr $nol + 1`
        done

        # start the countdown, x is the time limit
        let x=10
        while [ $x -gt 0 ]; do
            clear
            figlet "Rebooting in..."
            figlet $x
            let x-=1
            sleep 1
        done

        echo reboot success $nol >> testlog.txt
        shutdown -r now

        # set how many times the script should shut down the laptop
        reboot_count=1

        # if the number of reboots matches nol, stop the script and
        # create a new text file called passed.txt
        if [ "$nol" == "$reboot_count" ]; then
            echo reboot passed $nol cycles >> passed.txt
        fi
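    Start-up applications run without any terminal attached, so a script whose output is clear/figlet has nowhere to draw. A hedged fix, assuming a GNOME desktop, is to make the start-up entry spawn a terminal that runs the script (the path is illustrative):

        gnome-terminal -x /bin/bash /home/twigg/countdown.sh

    (-x passes the rest of the line as the command to run; newer gnome-terminal releases spell this --.)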


  • ssh connection operation timed out using rsync

    - by Mark Molina
    I use rsync to back up my remote server to my local device, but when I combine it with a cron job my ssh connection times out. Just to be clear: the data is stored on a remote server and I want it stored on my local server, so the backup request must be sent from my local server to the remote server. The command for backing up the data works when I just type it in a terminal, like this:

        rsync -chavzP --stats USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    but when I combine it with a cron job, like this:

        10 11 * * * rsync -chavzP --stats USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    the ssh connection times out. When the cron job executes, it sends a mail to the root user with output like this:

        From local.xx.xx.xx Tue Jul 2 11:20:17 2013
        X-Original-To: username
        Delivered-To: [email protected]
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <username@server> rsync -chavzP --stats USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=username>
        X-Cron-Env: <USER=username>
        X-Cron-Env: <HOME=/Users/username>
        Date: Tue, 2 Jul 2013 11:20:17 +0200 (CEST)

        ssh: connect to host IP_ADDRESS port XX: Operation timed out
        rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
        rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9]

    So the rsync command works when typed in a terminal, but not when run by a cron job. Can anybody explain this?
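    The mail headers already show the usual difference: cron runs with a stripped-down environment (SHELL=/bin/sh, PATH=/usr/bin:/bin, no ssh-agent). A hedged way to narrow it down is to have the cron job itself run ssh verbosely and log everything, then compare that transcript with an interactive run:

        10 11 * * * rsync -chavzP --stats -e "ssh -vv" USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP >> /tmp/rsync-cron.log 2>&1

    If the key is passphrase-protected or loaded only into an agent, pointing ssh at the key file explicitly (ssh -i /path/to/key) is the other obvious experiment; both the log path and key path here are illustrative.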


  • Need help with custom init script

    - by churnd
    I'm trying to set up an init script for a process on Red Hat Linux:

        #!/bin/sh
        #
        # Startup script for Conquest
        #
        # chkconfig: 345 85 15   (start/stop order within the boot process)
        # description: Conquest DICOM Server
        # processname: conquest
        # pidfile: /var/run/conquest.pid

        # Source function library. This creates the operating environment
        # for the process to be started.
        . /etc/rc.d/init.d/functions

        CONQ_DIR=/usr/local/conquest

        case "$1" in
          start)
                echo -n "Starting Conquest DICOM server: "
                cd $CONQ_DIR && daemon --user mruser ./dgate -v   # starts only one process of a given name
                echo
                touch /var/lock/subsys/conquest
                ;;
          stop)
                echo -n "Shutting down Conquest DICOM server: "
                killproc conquest
                echo
                rm -f /var/lock/subsys/conquest
                rm -f /var/run/conquest.pid   # only if the process generates this file
                ;;
          status)
                status conquest
                ;;
          restart)
                $0 stop
                $0 start
                ;;
          reload)
                echo -n "Reloading process-name: "
                killproc conquest -HUP
                echo
                ;;
          *)
                echo "Usage: $0 {start|stop|restart|reload|status}"
                exit 1
        esac
        exit 0

    However, the cd $CONQ_DIR is getting ignored, because the script errors out:

        # ./conquest start
        Starting Conquest DICOM server: -bash: ./dgate: No such file or directory
                                                                   [FAILED]

    For some reason, I have to run dgate as ./dgate; I cannot specify the full path /usr/local/conquest/dgate. The software came with an init script for a Debian system, which uses start-stop-daemon with the --chdir option to change to the directory where dgate lives, but I haven't found a way to do this with the Red Hat daemon function.
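    A hedged sketch of one workaround: the Red Hat daemon function with --user switches users via runuser/su, which can start a fresh shell and drop the working directory set by the earlier cd. Folding the directory change into the command that daemon itself runs sidesteps that (this is an assumption about how daemon spawns the process, not Conquest's documented method):

        start)
            echo -n "Starting Conquest DICOM server: "
            daemon --user mruser /bin/sh -c "cd /usr/local/conquest && exec ./dgate -v"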


  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application that is planned to be hosted in an Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk lets the application scale automatically by starting additional web server instances based on predefined rules. The shared database runs on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files).

    We're using git as our SCM, and with it we deploy new versions by executing $ git aws.push. In the background, Elastic Beanstalk deletes ($ rm -rf) the current codebase from all servers running in the environment and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment and NFS in the production environment. We've managed to set up the environment to the point where the shared files folder is properly mounted after a reboot. But...

    The problem is that, in this setup, deploying new versions to running instances fails because $ rm -rf can't remove the mounted directory. As a result, the entire environment goes down and we need to restart it, which isn't really an elegant solution.

    Question #1: What would be the proper way to manage shared files in this kind of deployment? Are you running such an environment? How did you solve the problem?

    Looking at the Elastic Beanstalk Hostmanager code (Ruby), there seems to be a way to hook our functionality (unmount if mounted in pre-deploy, mount in post-deploy) into Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb), but the scripts defined in that file (i.e. /tmp/php_post_deploy_app.sh) don't seem to be working. That might be because our Ruby skills are non-existent.

    Question #2: Did you manage to hook your functionality into Hostmanager in a portable way (i.e. without changing the core Hostmanager files)?
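    For reference, the pre/post-deploy hooks described above only need a couple of shell commands each; a minimal, hypothetical sketch (mount point and NFS export invented for illustration) would be:

        # pre-deploy hook (sketch): unmount so rm -rf can clear the codebase
        MOUNT_POINT=/var/app/current/sites/default/files
        mount | grep -q "$MOUNT_POINT" && umount "$MOUNT_POINT"

        # post-deploy hook (sketch): recreate the path and remount the shared folder
        mkdir -p "$MOUNT_POINT"
        mount -t nfs fileserver:/export/files "$MOUNT_POINT"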


  • how to define a service's start order within a runlevel?

    - by DmitrySemenov
    I have set up bind-dlz and need mysql to start before named when the system boots. Here is what I have:

        [root@semenov]# ./test.sh
        mysql  0:off 1:off 2:on  3:on 4:on 5:on 6:off
        named  0:off 1:off 2:off 3:on 4:on 5:on 6:off
        lrwxrwxrwx. 1 root root 15 Apr 15 18:57 /etc/rc3.d/S93mysql -> ../init.d/mysql
        lrwxrwxrwx. 1 root root 15 Apr 15 18:57 /etc/rc3.d/S90named -> ../init.d/named

    Here is what I have in the mysql init script:

        # Comments to support chkconfig on RedHat Linux
        # chkconfig: 2345 84 16
        # description: A very fast and reliable SQL database engine.

        # Comments to support LSB init script conventions
        ### BEGIN INIT INFO
        # Provides: mysql
        # Required-Start: $local_fs $network $remote_fs
        # Should-Start: ypbind nscd ldap ntpd xntpd
        # Required-Stop: $local_fs $network $remote_fs
        # Default-Start: 2 3 4 5
        # Default-Stop: 0 1 6
        # Short-Description: start and stop MySQL
        # Description: MySQL is a very fast and reliable SQL database engine.
        ### END INIT INFO

    So when I remove named from chkconfig and have just mysql there, mysql starts with order number 84:

        /etc/rc3.d/S84mysql -> ../init.d/mysql

    but when I add named to chkconfig, mysql's order changes to 93:

        /etc/rc3.d/S93mysql -> ../init.d/mysql

    As a result, mysql starts after named, and named fails (no SQL available). Any ideas what I'm doing wrong? Here is what I have in the named init script:

        # chkconfig: 345 90 16
        # description: named (BIND) is a Domain Name Server (DNS) \
        #              that is used to resolve host names to IP addresses.
        # probe: true

        ### BEGIN INIT INFO
        # Provides: $named
        # Required-Start: $local_fs $network $syslog
        # Required-Stop: $local_fs $network $syslog
        # Default-Start: 2 3 4
        # Default-Stop: 0 1 2 3 4 5 6
        # Short-Description: start|stop|status|restart|try-restart|reload|force-reload DNS server
        # Description: control ISC BIND implementation of DNS server
        ### END INIT INFO

    Thanks, Dmitry
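    A hedged explanation that fits the symptom: on RHEL 6, chkconfig recalculates start numbers from the LSB dependency headers, which can override the plain "# chkconfig:" priorities, and that would account for mysql jumping from S84 to S93 when named is added. A sketch of the dependency-based fix is to declare mysql as a requirement of named, then re-register the service so the ordering is derived from the dependency rather than hand-picked numbers:

        ### BEGIN INIT INFO
        # Provides: named
        # Required-Start: $local_fs $network $syslog mysql
        # ... rest of the header unchanged ...
        ### END INIT INFO

        chkconfig --del named && chkconfig --add named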


  • install zenoss on ubuntu raises 'No valid ZENHOME' error

    - by bxshi
    I've added a user named zenoss and set export ZENHOME=/usr/local/zenoss in ~/.bashrc under /home/zenoss; echo $ZENHOME shows /usr/local/zenoss. To install zenoss, I switched to the zenoss user and ran install.sh under zenoss-4.2.0/inst. When it tries to run the tests, this error occurred:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running org.zenoss.utils.ZenPacksTest
        Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 0.045 sec <<< FAILURE!
        Running org.zenoss.utils.ZenossTest
        Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.71 sec

        Results :

        Tests in error:
          testGetZenPack(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetPackPath(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetAllPacks(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.

        Tests run: 6, Failures: 0, Errors: 3, Skipped: 0

        [INFO] ------------------------------------------------------------------------
        [INFO] Reactor Summary:
        [INFO]
        [INFO] Zenoss Core ....................................... SUCCESS [27.643s]
        [INFO] Zenoss Core Utilities ............................. FAILURE [12.742s]
        [INFO] Zenoss Jython Distribution ........................ SKIPPED
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 40.586s
        [INFO] Finished at: Wed Sep 26 15:39:24 CST 2012
        [INFO] Final Memory: 16M/60M
        [INFO] ------------------------------------------------------------------------
        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.8:test (default-test) on project utils: There are test failures.
        [ERROR]
        [ERROR] Please refer to /home/zenoss/zenoss-4.2.0/inst/build/java/java/zenoss-utils/target/surefire-reports for the individual test results.
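    One hedged thing to rule out: ~/.bashrc is only read by interactive shells, so a variable exported there may never reach install.sh or the Maven test JVMs it spawns if they run in a non-interactive or sanitized context. Setting it explicitly in the same session removes that doubt (a sketch, not the documented install procedure):

        export ZENHOME=/usr/local/zenoss
        cd ~/zenoss-4.2.0/inst && ZENHOME=$ZENHOME ./install.sh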


  • nmap installation issue

    - by daasf
    Vanilla CentOS with the latest updates; I installed gcc, and after ./configure:

        ....
        Configuration complete.
        Type make (or gmake on some *BSD machines) to compile.

        [root@winxp nmap-5.51]# make
        Makefile:375: makefile.dep: No such file or directory
        g++ -MM -I./liblua -I./libdnet-stripped/include -I./libpcre -I./libpcap -I./nbase -I./nsock/include -DHAVE_CONFIG_H -DNMAP_NAME=\"Nmap\" -DNMAP_URL=\"http://nmap.org\" -DNMAP_PLATFORM=\"x86_64-unknown-linux-gnu\" -DNMAPDATADIR=\"/usr/local/share/nmap\" -D_FORTIFY_SOURCE=2 main.cc nmap.cc targets.cc tcpip.cc nmap_error.cc utils.cc idle_scan.cc osscan.cc osscan2.cc output.cc payload.cc scan_engine.cc timing.cc charpool.cc services.cc protocols.cc nmap_rpc.cc portlist.cc NmapOps.cc TargetGroup.cc Target.cc FingerPrintResults.cc service_scan.cc NmapOutputTable.cc MACLookup.cc nmap_tty.cc nmap_dns.cc traceroute.cc portreasons.cc xml.cc nse_main.cc nse_utility.cc nse_nsock.cc nse_dnet.cc nse_fs.cc nse_nmaplib.cc nse_debug.cc nse_pcrelib.cc nse_binlib.cc nse_bit.cc > makefile.dep
        /bin/sh: g++: command not found
        make: *** [makefile.dep] Error 127

        [root@winxp nmap-5.51]# yum install g++ -y
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirror.ash.fastserv.com
         * base: centos.mirror.choopa.net
         * extras: mirror.trouble-free.net
         * updates: mirror.nexcess.net
        Setting up Install Process
        No package g++ available.
        Nothing to do
        [root@winxp nmap-5.51]#
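    On CentOS/RHEL the C++ compiler is not packaged under the name "g++"; it ships in the gcc-c++ package. So the fix should be:

        yum install -y gcc-c++
        make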


  • Move 53,800+ files into 54 separate folders with ~1000 files each?

    - by ane
    I'm trying to import 53,800+ individual files (messages) using Gmail's POP fetcher. Gmail understandably refuses, giving the error: "Too many messages to download. There are too many messages on the other server." The folder in question looks similar to:

        /usr/home/customer/Maildir/cur/1203672790.V57I586f04M867101.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203676329.V57I586f22M520117.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203677194.V57I586f26M688004.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203679158.V57I586f2bM182864.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203680493.V57I586f33M740378.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203685837.V57I586f0bM835200.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203687920.V57I586f65M995884.mail.net:2,S
        ...

    Using the shell (tcsh, sh, etc. on FreeBSD), what one-line command can I type to split this directory full of files into separate folders so Gmail only sees 1000 messages at a time? Something with find or ls | xargs mv, maybe; whatever is fastest. The desired output layout would look something like:

        /usr/home/customer/Maildir/cur/1203672790.V57I586f04M867101.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203676329.V57I586f22M520117.mail.net:2,S
        ...
        /usr/home/customer/set1/ (contains messages 1-1000)
        /usr/home/customer/set2/ (contains messages 1001-2000)
        /usr/home/customer/set3/ (etc.)

    Ideally, cron could run another command to automatically reverse the process in 1000-message increments every hour, so Gmail only sees and downloads 1000 at a time.
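    A hedged sketch in plain sh (not a one-liner, but cron-friendly and POSIX, so it should run on FreeBSD's /bin/sh) that moves the files into set1, set2, ... in batches of 1000:

        #!/bin/sh
        # split Maildir/cur into numbered set directories, 1000 files per set
        i=0
        for f in /usr/home/customer/Maildir/cur/*; do
            d="/usr/home/customer/set$((i / 1000 + 1))"
            mkdir -p "$d"
            mv "$f" "$d"/
            i=$((i + 1))
        done

    The hourly reverse job would simply mv the contents of the lowest-numbered remaining set directory back into Maildir/cur.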


  • mysql.sock problem on Mac OS X, all Zend products

    - by Michael Stelly
    Hi folks. I posted this on the Zend forum, but I'm hoping I can get a speedier reply here. I've tried every solution provided on that forum with no luck. When I restart mysql, everything appears OK:

        sudo /usr/local/zend/bin/zendctl.sh restart
        Password:
        /usr/local/zend/bin/apachectl stop [OK]
        /usr/local/zend/bin/apachectl start [OK]
        Stopping Zend Server GUI [Lighttpd] [OK]
        spawn-fcgi: child spawned successfully: PID: 7943
        Starting Zend Server GUI [Lighttpd] [OK]
        Stopping Java bridge [OK]
        Starting Java bridge [OK]
        Shutting down MySQL . SUCCESS!
        Starting MySQL . SUCCESS!

    Pinging localhost is also OK, and DNS resolves the name to an IP:

        ping localhost
        PING localhost (127.0.0.1): 56 data bytes
        64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.048 ms
        64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms
        64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.066 ms
        64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.076 ms
        64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.064 ms

    But when I attempt to access the local URL for my app, I get the dreaded:

        Message: SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

    This is a show-stopper for me. I appreciate any assistance. Thank you.
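    A hedged line of attack: the error only says PHP is looking for the socket at /tmp/mysql.sock, while Zend Server's bundled MySQL may create it somewhere else. Finding the real socket and symlinking it to the expected path is a common workaround (the Zend path below is a guess for illustration, not Zend's documented layout):

        # where does mysqld actually put its socket? check its command line:
        ps ax | grep [m]ysqld
        # if the socket lives elsewhere, link it to where PDO looks:
        sudo ln -s /usr/local/zend/mysql/tmp/mysql.sock /tmp/mysql.sock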


  • Problems with bash script, mysql inserts and launchd

    - by Armands
    I am developing an automated system which consists of three parts: mysql, bash and launchd. The bash script takes folders of work-related stuff, zips and archives them, and puts info about them into a database on a local MAMP server. Everything works as expected when I run the script from a terminal. But when I use launchd to run the script automatically, it finishes without errors yet does not put the values into the database. I've tried logging the returned messages, but the logs end up empty, as if the command had run the way it was supposed to. Any help would be appreciated!

    .plist contents:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.adevo.ari.zip</string>
            <key>ProgramArguments</key>
            <array>
                <string>/Volumes/Archive-Plus/B-ARCHIVE-PLUS/ZZ_UTILITY_FOLDER/Compress.sh</string>
            </array>
            <key>Nice</key>
            <integer>1</integer>
            <key>StartInterval</key>
            <integer>120</integer>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>

    I made this .plist file just by searching the web.
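    Jobs run by launchd get a minimal environment (no user PATH), so a script that calls mysql by its bare name can fail with "command not found" while the failure disappears if stderr isn't captured. A hedged check is to call MAMP's mysql by absolute path and log stderr somewhere readable; the database name, table, and credentials below are placeholders (the port and root/root login are MAMP's usual defaults):

        #!/bin/bash
        MYSQL=/Applications/MAMP/Library/bin/mysql
        "$MYSQL" -h 127.0.0.1 -P 8889 -u root -proot mydb \
            -e "INSERT INTO archive_log (name) VALUES ('test');" \
            >> /tmp/compress.log 2>&1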


  • L2TP over IPSec VPN with OpenSwan and XL2TPD can't connect, timeout on Centos 6

    - by Disco
    I'm setting up L2TP over IPSec on a fresh CentOS 6.3 install. I have iptables flushed, permit all. Whenever I try to connect, I get a 'no reply from vpn' and nothi... Here's my ipsec.conf file (the server is 1.2.3.4):

        config setup
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            protostack=netkey

        conn L2TP-PSK-NAT
            rightsubnet=vhost:%priv
            also=L2TP-PSK-noNAT

        conn L2TP-PSK-noNAT
            authby=secret
            pfs=no
            auto=add
            keyingtries=3
            rekey=no
            ikelifetime=8h
            keylife=1h
            type=transport
            left=1.2.3.4
            leftprotoport=17/1701
            right=%any
            rightprotoport=17/%any

    My /etc/ipsec.secrets:

        1.2.3.4 %any: PSK "password"

    My sysctl.conf (appended lines):

        net.ipv4.ip_forward = 1
        net.ipv4.conf.default.rp_filter = 0
        net.ipv4.conf.all.send_redirects = 0
        net.ipv4.conf.default.send_redirects = 0
        net.ipv4.conf.all.log_martians = 0
        net.ipv4.conf.default.log_martians = 0
        net.ipv4.conf.default.accept_source_route = 0
        net.ipv4.conf.all.accept_redirects = 0
        net.ipv4.conf.default.accept_redirects = 0
        net.ipv4.icmp_ignore_bogus_error_responses = 1

    Here's what 'ipsec verify' gives:

        # ipsec verify
        Checking your system to see if IPsec got installed and started correctly:
        Version check and ipsec on-path                             [OK]
        Linux Openswan U2.6.32/K2.6.32-279.19.1.el6.x86_64 (netkey)
        Checking for IPsec support in kernel                        [OK]
         SAref kernel support                                       [N/A]
         NETKEY: Testing for disabled ICMP send_redirects           [OK]
        NETKEY detected, testing for disabled ICMP accept_redirects [OK]
        Checking that pluto is running                              [OK]
         Pluto listening for IKE on udp 500                         [OK]
         Pluto listening for NAT-T on udp 4500                      [OK]
        Checking for 'ip' command                                   [OK]
        Checking /bin/sh is not /bin/dash                           [WARNING]
        Checking for 'iptables' command                             [OK]
        Opportunistic Encryption Support                            [DISABLED]

    And I can see xl2tpd listening on 1701/udp:

        udp   0   0 1.2.3.4:1701   0.0.0.0:*   2096/xl2tpd
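    Since the client reports no reply at all, a hedged first step is to confirm the IKE/NAT-T/L2TP packets even reach the box, independent of the Openswan configuration (interface name assumed to be eth0):

        tcpdump -ni eth0 'udp port 500 or udp port 4500 or udp port 1701'

    If nothing shows up during a connection attempt, the problem is upstream (provider firewall, security group, NAT device) rather than in ipsec.conf.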


  • grep command is not matching the complete pattern

    - by Sumit Vedi
    I am facing a problem while using the grep command in a shell script. I have a file (PCF_STARHUB_20130625_1) which contains the records below:

        SH_5.55916.00.00.100029_20130601_0001_NUC.csv.gz|438|3556691115
        SH_5.55916.00.00.100029_20130601_0001_Summary.csv.gz|275|3919504621
        SH_5.55916.00.00.100029_20130601_0001_UI.csv.gz|226|593316831
        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_NUC.csv.gz|368|3553014997
        SH_5.55916.00.00.100038_20130601_0001_Summary.csv.gz|276|2625719449
        SH_5.55916.00.00.100038_20130601_0001_UI.csv.gz|226|3825232121
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349
        SH_5.75470.00.00.100015_20130601_0001_NUC.csv.gz|425|1627227450

    I have a pattern stored in a variable (INPUT_FILE_T), and I want to search for it in PCF_STARHUB_20130625_1. For that I used:

        INPUT_FILE_T="SH?*???????????????US.*"
        grep ${INPUT_FILE_T} PCF_STARHUB_20130625_1

    The output of the above command is:

        PCF_STARHUB_20130625_1:SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234

    There are two problems with this output. First, only one entry is shown (it should contain two); second, the output contains "PCF_STARHUB_20130625_1:", which should not appear. The output should look like this:

        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349

    If there is any technique other than grep, please let me know. Please help me with this issue.
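    A hedged explanation of both symptoms: because ${INPUT_FILE_T} is unquoted, the shell treats ? and * as filename globs before grep ever runs; if that glob matches files in the current directory, grep receives several filenames (hence the "PCF_STARHUB_20130625_1:" prefix) and a mangled pattern. Quoting the expansion and writing the pattern as a grep regular expression avoids both (the regex below is an illustrative rewrite of the intended pattern):

        INPUT_FILE_T='SH_.*_US\.csv\.gz'
        grep -h "$INPUT_FILE_T" PCF_STARHUB_20130625_1

    (-h suppresses filename prefixes even when multiple files are searched.)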


  • Error related to pkg-config when installing frei0r as part of another package

    - by Anentropic
    I am trying to build https://github.com/mltframework/shotcut on OS X Lion (using their script in scripts/build_shotcut.sh), and after numerous hurdles I'm stuck on this error:

        ./configure: line 16062: syntax error near unexpected token `OPENCV,'
        ./configure: line 16062: `PKG_CHECK_MODULES(OPENCV, opencv >= 1.0.0, HAVE_OPENCV=true, true)'
        ERROR: Unable to configure frei0r

    From what I've already googled, this means that the PKG_CHECK_MODULES macro hasn't been defined, which probably means there's something wrong with my pkg-config, which I installed via Homebrew. It sounds like the pkg.m4 file isn't found. When I brew install pkg-config I get the following warning:

        Warning: m4 macros were installed to "share/aclocal".
        Homebrew does not append "/usr/local/share/aclocal"
        to "/usr/share/aclocal/dirlist". If an autoconf script you use
        requires these m4 macros, you'll need to add this path manually.

    Well, I've appended that line to the dirlist file and it doesn't fix the problem. Can anyone suggest a way forward here? I briefly tried building my own pkg-config from source, but (bizarrely) when I tried to ./configure I got the following error:

        checking for pkg-config... no
        ./configure: line 13540: --exists: command not found
        configure: error: pkg-config and glib-2.0 not found, please set
        GLIB_CFLAGS and GLIB_LIBS to the correct values

    If building pkg-config needs pkg-config, it seems like a weird catch-22 situation... I think this is probably an unnecessary sidetrack anyway.
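    The PKG_CHECK_MODULES text only survives into configure when the script is regenerated without pkg.m4 in aclocal's search path. A hedged workaround is to point aclocal at Homebrew's macro directory and regenerate frei0r's build files by hand before configuring (standard autotools flags; the path is Homebrew's default prefix):

        cd frei0r
        aclocal -I /usr/local/share/aclocal
        autoreconf -fiv
        ./configure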


  • Can't connect to EC2 instance: Permission denied (publickey)

    - by Assad Ullah
    I got this when I tried to connect to my new instance (Ubuntu 12.01, EC2) with my newly generated key:

        sh-3.2# ssh ec2-user@**** -v ****.pem
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to **** [****] port 22.
        debug1: Connection established.
        debug1: permanently_set_uid: 0/0
        debug1: identity file /var/root/.ssh/id_rsa type -1
        debug1: identity file /var/root/.ssh/id_rsa-cert type -1
        debug1: identity file /var/root/.ssh/id_dsa type -1
        debug1: identity file /var/root/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '****' is known and matches the RSA host key.
        debug1: Found key in /var/root/.ssh/known_hosts:4
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /var/root/.ssh/id_rsa
        debug1: Trying private key: /var/root/.ssh/id_dsa
        debug1: No more authentication methods to try.
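    Two hedged observations from the transcript: the .pem file is passed as a stray trailing argument instead of with -i, so ssh only tries the default /var/root/.ssh identities (visible in the "Trying private key" lines), and Ubuntu AMIs expect the login name ubuntu, not ec2-user. Something along these lines should behave differently (key name and host are placeholders):

        chmod 400 mykey.pem
        ssh -v -i mykey.pem ubuntu@<instance-public-dns>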


  • How to debug silent hang on shutdown of Solaris 10?

    - by jblaine
    We're experiencing a mysterious hang on shutdown of a newly-imaged Oracle/Sun Solaris 10 SPARC box. It is repeatable (in the same spot, from what we can tell). We let it try to work itself out multiple times for 5-10 minutes and it never progressed. I've never seen this happen before.

    The last thing displayed on the console is that syslogd was sent signal 15. Before we disabled snmpdx on the box, the last thing on the console was that snmpdx was sent signal 15 (after syslogd was sent signal 15). In Solaris days past I'd have had a better idea from experience where the problem might be, and could then narrow things down further with silly (but effective) debugging echo statements in the /etc/*.d scripts. With SMF in the picture, I'm not really sure where to start.

    We forced a crash dump via sync at the {ok} prompt for later analysis, and then let the box come up, because it's a production server and our scheduled outage window was closing. /var/adm/messages shows nothing of use. How would you debug this situation?

    An mdb ::ps on the savecore shows the following processes were running at hang time (afsd is the OpenAFS client, and that many instances are expected):

        > ::ps
        S PID   PPID  PGID  SID  UID FLAGS      ADDR             NAME
        R 0     0     0     0    0   0x00000001 00000000018387c0 sched
        R 108   0     0     0    0   0x00020001 00000600110fe010 zpool-silmaril-p
        R 3     0     0     0    0   0x00020001 0000060010b29848 fsflush
        R 2     0     0     0    0   0x00020001 0000060010b2a468 pageout
        R 1     0     0     0    0   0x4a024000 0000060010b2b088 init
        R 1327  1     1327  329  0   0x4a024002 00000600176ab0c0 reboot
        R 747   1     7     7    0   0x42020001 0000060017f9d0e0 afsd
        R 749   1     7     7    0   0x42020001 00000600180104d0 afsd
        R 752   1     7     7    0   0x42020001 0000060017cb44b8 afsd
        R 754   1     7     7    0   0x42020001 0000060017fc8068 afsd
        R 756   1     7     7    0   0x42020001 0000060017fcb0e8 afsd
        R 760   1     7     7    0   0x42020001 00000600177f4048 afsd
        R 762   1     7     7    0   0x42020001 000006001800f8b0 afsd
        R 764   1     7     7    0   0x42020001 000006001800ec90 afsd
        R 378   1     378   378  0   0x42020000 0000060013aee480 inetd
        R 7     1     7     7    0   0x42020000 0000060010b28008 svc.startd
        R 329   7     329   329  0   0x4a024000 00000600110ff850 sh
        Z 317   7     317   317  0   0x4a014002 0000060013b3a490 sac


  • Adding Netem Filter Rules

    - by fontsix
    I am new to programming and to Linux. My question is: is it possible to add netem filter rules later? I want to create a PHP interface for netem, and I don't know in advance how many filters will be required; this should be somewhat dynamic.

    For example: a user with a static IP starts a netem command (latency) through the PHP interface. This means these five commands are executed by PHP in the first step:

        $classid = 11;
        $handle = 10;
        "sudo tc qdisc add dev eth0 handle 1: root htb";
        "sudo tc class add dev eth0 parent 1: classid 1:1 htb rate 100Mbps";
        "sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps";
        "sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms";
        "sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid";

    Now, if there were a second user who wants to use netem independently of the first user, I only want to execute the last three commands, like:

        "sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps";
        "sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms";
        "sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid";

    There is an algorithm for incrementing the variables $classid and $handle, so that part should work. Now my question: is it possible to add only these three commands to create a new class with a new qdisc and a new filter rule? Or how can I realize this? The Apache error_log tells me "sh: line 1: flowid: command not found", but I can't find any mistake. I hope you can help.

    Best regards, fontsix
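    For what it's worth, adding one class/qdisc/filter triple per user under an existing root qdisc is exactly how tc is normally used, so the three-command plan is sound. A hedged sh sketch for a second user (ids and address invented for illustration), with each command kept strictly on one line, since a stray newline in the string PHP hands to the shell would split it and produce exactly "sh: line 1: flowid: command not found":

        classid=12; handle=11; dest=192.0.2.7
        sudo tc class add dev eth0 parent 1:1 classid 1:$classid htb rate 100Mbps
        sudo tc qdisc add dev eth0 parent 1:$classid handle $handle: netem delay 100ms
        sudo tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match ip dst $dest flowid 1:$classid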


  • erlyvideo server doesn't start automatically after reboot

    - by electroid
    I have installed the erlyvideo server on Ubuntu 9.10 (Karmic Koala). Everything works fine, but after a server reboot I have to start erlyvideo manually with /etc/init.d/erlyvideo start. I already tried update-rc.d, and I think erlyvideo should start automatically by default. Any help will be appreciated. Here is the erlyvideo startup script, located at /etc/init.d/erlyvideo:

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          erlyvideo
        # Required-Start:    $local_fs $network
        # Required-Stop:     $local_fs $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: starts the erlyvideo streaming server
        # Description:       starts the erlyvideo using erlang system
        ### END INIT INFO

        case "$1" in
          start)
            cd /opt/erlyvideo && ./bin/erlyvideo "$1"
            ;;
          stop)
            cd /opt/erlyvideo && ./bin/erlyvideo "$1"
            ;;
          restart)
            cd /opt/erlyvideo && ./bin/erlyvideo "$1"
            ;;
          soft-restart)
            cd /opt/erlyvideo && ./bin/erlyvideo "$1"
            ;;
          upgrade)
            cd /opt/erlyvideo && ./bin/erlyvideo "$1"
            ;;
          reconfigure)
            cd /opt/erlyvideo && ./bin/erlyvideo "$1"
            ;;
          reboot)
            cd /opt/erlyvideo && ./bin/erlyvideo "$1"
            ;;
          ping)
            cd /opt/erlyvideo && ./bin/erlyvideo "$1"
            ;;
          console)
            cd /opt/erlyvideo && ./bin/erlyvideo "$1"
            ;;
          attach)
            cd /opt/erlyvideo && ./bin/erlyvideo "$1"
            ;;
          attach-erl)
            cd /opt/erlyvideo && ./erts-5.8.4/bin/erl -name [email protected] -remsh [email protected]
            ;;
          *)
            echo $"Usage: $0 {start|stop|restart|soft-restart|upgrade|reboot|ping|console|attach}"
            exit 1
        esac
        exit 0

    And I have found S91erlyvideo in /etc/rc2.d, next to S91apache2, which starts just fine on every reboot.


  • pg_dump not working - do I need to change order of $PATH?

    - by A4J
    I'm trying to set $PATH to pick up the latest version of pg_dump, as I'm currently getting a version-mismatch error while running a migration in my Rails app (I recently changed the schema format to SQL). I added a new file called pg_dump.sh in /etc/profile.d, containing:

        PG_DUMP=/usr/pgsql-9.1
        export PG_DUMP
        PATH=$PATH:$PG_DUMP/bin
        export PATH

    echo $PATH now gives:

        /usr/local/rvm/gems/ruby-1.9.3-p194/bin:/usr/local/rvm/gems/ruby-1.9.3-p194@global/bin:/usr/local/rvm/rubies/ruby-1.9.3-p194/bin:/usr/local/rvm/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/pgsql-9.1/bin:/root/bin

    and I still get the error. Do I need to change the order? If so, any ideas how?

    Output of ls /usr/pgsql-9.1/bin:

        clusterdb   droplang  pg_archivecleanup  pg_ctl        pg_standby     psql
        createdb    dropuser  pg_basebackup      pg_dump       pg_test_fsync  reindexdb
        createlang  ecpg      pgbench            pg_dumpall    pg_upgrade     vacuumdb
        createuser  initdb    pg_config          pg_resetxlog  postgres       vacuumlo
        dropdb      oid2name  pg_controldata     pg_restore    postmaster

    Output of which pg_dump:

        /usr/bin/pg_dump

    Error message when running cap deploy:migrate:

        ** [out :: 46.4.9.199] pg_dump: server version: 9.1.4; pg_dump version: 8.4.11
        ** [out :: 46.4.9.199] pg_dump: aborting because of server version mismatch
        ** [out :: 46.4.9.199] rake aborted!
        ** [out :: 46.4.9.199] Error dumping database

    Output of pg_dump --version:

        pg_dump (PostgreSQL) 8.4.11
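    Yes, the order matters: /usr/bin comes before /usr/pgsql-9.1/bin in that $PATH, so the 8.4 binary in /usr/bin wins, exactly as which pg_dump shows. The fix is to prepend instead of append in /etc/profile.d/pg_dump.sh:

        PG_DUMP=/usr/pgsql-9.1
        PATH=$PG_DUMP/bin:$PATH
        export PATH

    (One hedged caveat: non-login contexts such as Capistrano's SSH sessions may not source /etc/profile.d at all, which is worth checking separately.)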


  • How can one make vim change terminal colors?

    - by amn
    I am using command-line vim running inside an xterm (which runs sh). I have color in vim according to a color scheme I like. The problem is, as usual with 256-color terminals and truecolor color schemes, the colors are wrong. Now, I know I can do a gazillion things to fix this, including installing gvim, but I like my terminal. In fact, using xrdb [-merge] on an .Xresources file, I now have xterm override the color values, and the theme looks perfect.

    Since I may be switching to another theme, I need a workflow where vim itself does what xrdb does: reset the terminal color palette. Right now I have to reset the color values with xrdb ... first, then launch another xterm to actually pick up those values, then launch vim from that newly opened xterm to get the exact colors.

    The way I understand it, a vim color scheme, like any other terminal application, uses colors by referencing their ids, and the X resources set the values themselves. I'm sure terminal control sequences can reset the actual color values; I have, in fact, managed to set my terminal background color at runtime. How would I make vim execute these sequences to match the values for the color scheme? And is there a reference for these control sequences as part of any standard?
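    The sequences in question are xterm's OSC (Operating System Command) color controls, documented in xterm's ctlseqs reference; OSC 4 redefines an indexed palette entry. A hedged sh example that remaps palette color 1 to a soft red (the color value is illustrative):

        printf '\033]4;1;#dc322f\007'

    In principle vim can emit the same bytes itself, e.g. from a ColorScheme autocmd that shells out to printf; the exact mechanism varies by vim version, so treat this as a direction rather than a recipe.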


  • Solr startup script problem

    - by Camran
    I have installed Solr and it finally works... I now have problems setting it up to start automatically with a start command. I followed a tutorial and created a file called solr in the /etc/init.d directory. Here is that file:

        #!/bin/sh -e
        # SOLR auto-start
        #
        # description: auto-starts solr engine
        # processname: solr-production
        # pidfile: /var/run/solr-production.pid

        NAME="solr"
        PIDFILE="/var/run/solr-production.pid"
        LOG_FILE="/var/log/solr-production.log"
        SOLR_DIR="/etc/jetty"
        JAVA_OPTIONS="-Xmx1024m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar"
        JAVA="/usr/bin/java"

        start() {
            echo -n "Starting $NAME... "
            if [ -f $PIDFILE ]; then
                echo "is already running!"
            else
                cd $SOLR_DIR
                $JAVA $JAVA_OPTIONS 2> $LOG_FILE &
                sleep 2
                echo `ps -ef | grep -v grep | grep java | awk '{print $2}'` > $PIDFILE
                echo "(Done)"
            fi
            return 0
        }

        stop() {
            echo -n "Stopping $NAME... "
            if [ -f $PIDFILE ]; then
                cd $SOLR_DIR
                $JAVA $JAVA_OPTIONS --stop
                sleep 2
                rm $PIDFILE
                echo "(Done)"
            else
                echo "can not stop, it is not running!"
            fi
            return 0
        }

        case "$1" in
          start)
            start
            ;;
          stop)
            stop
            ;;
          restart)
            stop
            sleep 5
            start
            ;;
          *)
            echo "Usage: $0 (start | stop | restart)"
            exit 1
            ;;
        esac

    Whenever I do solr -start I get this error: "Error occurred during initialization of VM. Could not reserve enough space for object heap." I think this is caused by the file above. Also, Solr is installed at /var/www/solr, and the start.jar file is located at /var/www/start.jar. Help me out if you know what's causing this. Thanks. BTW: the OS is Ubuntu 9.10.
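    "Could not reserve enough space for object heap" is the JVM failing to allocate the requested -Xmx up front, which is common on small or 32-bit machines. A hedged experiment is to shrink the heap in the script and retry:

        JAVA_OPTIONS="-Xmx256m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar"

    Separately, SOLR_DIR points at /etc/jetty while start.jar reportedly lives in /var/www, so SOLR_DIR="/var/www" may also be needed; that is an inference from the paths given above, not something the error message proves.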


  • No video signal at boot with custom built computer

    - by Bart Pelle
    After booting my custom-built computer, neither the VGA nor the HDMI output emits any signal to the display. I have tested both a regular VGA screen and a modern HDMI screen; neither received a signal. Below are the specifications of my build:

        Intel Core i5 3350P
        ASRock B75 Pro3-M
        Seagate Barracuda 7200.14 ST1000DM003 1000GB
        Corsair Vengeance LP CML 8GX 3M2 A1600 CGB Blue (2 modules)
        Cooler Master B Series B600
        Club 3D Radeon HD7870 XT Jokercard
        Samsung SH-224BB Black
        Sharkoon T28 case

    The motherboard does not emit any beeps on startup. The CD tray opens properly and all fans spin, including the fans on the GPU. All cables are properly connected, all components are new, and no damage was found on any of them. The VGA test used the onboard graphics from the Intel i5, which gave no result; the HDMI test used the GPU, which did not emit any signal either. We have not been able to test DVI. Could that be important to test, even though the other outputs did not work?

    Thank you for your time, and hopefully a reply.


  • dpkg broken while upgrading Debian Etch to Lenny

    - by artvolk
    Good day! While trying to recover a box to lenny, it seems I've broken things: it upgraded libc and glib, and after that dpkg seems to be broken. I can run apt-get, but it gets a segmentation fault from dpkg:

        # apt-get -f install
        Reading package lists... Done
        Building dependency tree... Done
        0 upgraded, 0 newly installed, 0 to remove and 316 not upgraded.
        9 not fully installed or removed.
        Need to get 0B of archives.
        After unpacking 0B of additional disk space will be used.
        /bin/sh: line 1:  4606 Segmentation fault  /usr/sbin/dpkg-preconfigure --apt
        E: Sub-process /usr/bin/dpkg received a segmentation fault.

    I can log in via SSH, but even ls is not working:

        # ls
        Segmentation fault

    Is there anything I can do remotely via SSH?

        # ldd /bin/ls
        linux-gate.so.1 => (0xffffe000)
        librt.so.1 => /lib/tls/i686/cmov/librt.so.1 (0xb7fc8000)
        libacl.so.1 => /lib/libacl.so.1 (0xb7fc2000)
        libselinux.so.1 => /lib/libselinux.so.1 (0xb7fac000)
        libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7e51000)
        libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7e3f000)
        /lib/ld-linux.so.2 (0xb7fd8000)
        libattr.so.1 => /lib/libattr.so.1 (0xb7e3b000)
        libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7e37000)
        libsepol.so.1 => /lib/libsepol.so.1 (0xb7df6000)

    It seems I've temporarily fixed it with:

        # touch /etc/ld.so.nohwcap

    From here: http://saintaardvarkthecarpeted.com/blog/archive/2005/08/_etc_ld_so_nohwcap.html


  • VPN sharing on Mac OS X 10.5 machine

    - by Jens
    I have a rather weird problem. I want to share a VPN connection that my Mac OS X 10.5 computer has established with another machine on my network. This is what I did:

    1. In the /etc/hostconfig file on the main computer I added the line IPFORWARDING=-YES-
    2. I assigned a fixed IP address to my computer (192.168.178.30) and a fixed one to the other machine (192.168.178.60), and set my computer's IP address as the gateway on the other machine.
    3. I connected to my VPN using the built-in Mac OS X VPN client (PPTP connection).
    4. I ran this script:

        #!/bin/sh
        natd -same_ports -use_sockets -unregistered_only -dynamic -interface ppp0 -clamp_mss
        ipfw -f flush
        ipfw add divert natd ip from any to any via ppp0
        ipfw add pass all from any to any
        sysctl -w net.inet.ip.forwarding=1

    Source: Using (and sharing) a VPN connection on your Mac

    Now everything works smoothly, but speed is an issue: I get 1.8 MBit/s on my main machine and only 0.3-0.6 MBit/s on the other one. My question: what could possibly be wrong? Do I have to tweak MTU settings? Is there any packet inspection going on that needs time...? Any help appreciated!
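    On the MTU theory, a hedged way to test from the second machine is to find the largest payload that crosses the gateway unfragmented; if it is well below 1500, clamping that machine's interface MTU is a cheap experiment (the 1400 value and the en0 interface name are assumptions, not measured numbers):

        # probe with the don't-fragment flag (OS X ping syntax)
        ping -D -s 1400 192.168.178.30
        # if large probes fail, try lowering the MTU on the client side
        sudo ifconfig en0 mtu 1400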


  • Why does my Ldirectord check the real servers multiple times every interval?

    - by garconcn
    I have an ldirectord server and two real servers. My ldirectord used to check the request page on each real server once per interval, but now I find that it checks four times. I have monitored the logs on both real servers; they show the same problem. Here is my ldirectord configuration:

        checktimeout=10
        checkinterval=5
        autoreload=yes
        logfile="/var/log/ldirectord.log"
        quiescent=no

        virtual=192.168.1.100:80
            fallback=127.0.0.1:80
            real=192.168.1.10:80 gate
            real=192.168.1.20:80 gate
            service=http
            request="lb.html"
            receive="still alive"
            scheduler=sh
            persistent=60
            protocol=tcp
            checktype=negotiate

    Ldirectord should connect to each real server once every 5 seconds (checkinterval) and request 192.168.0.10:80/test.html (real/request). Instead, the access log on a real server shows four requests per interval:

        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"

