Search Results

Search found 42297 results on 1692 pages for 'run'.


  • Can I run Ubuntu directly under Windows 8?

    - by huahsin68
    The text below is an extract from the article "Windows 8 Tip: Virtualize with Hyper-V":

        Better still, Windows Virtual PC offered a feature called XP Mode, free for users of Windows 7 Professional, Enterprise, and Ultimate, which included a full working copy of Windows XP with Service Pack 3. But the big deal here is that as you installed applications in the virtual copy of XP, they would be made available through Windows 7's Start Menu. And you could run these applications, side-by-side, with Windows 7 applications on the Windows 7 desktop. It was a seamless, integrated experience, ideal for those one-off application compatibility issues.

    I was thinking of installing VirtualBox on Windows 8 and then running Ubuntu as the guest OS. Since Hyper-V is a Type 1 hypervisor, does it bring the same benefit if I have Ubuntu Linux installed as a virtual guest OS? That is, when I turn the Ubuntu guest on, is Ubuntu still able to access hardware information such as the NVIDIA display card or processor details? I'm just curious whether this can be done.

    Read the article

  • Shell script only executes partially when run with CRON

    - by binaryorganic
    I've written a shell script that does the following:

    1. Retrieve mail from a POP3 account (using getmail)
    2. Save a copy of that email to S3 (using the AWS CLI)
    3. Email me the file size of the email

    The script runs fine manually, and technically runs from cron, but it only seems to be sending the email; the getmail and S3 steps don't appear to run. Everything I've read hammers home the message that I need to be careful about relative paths and the like when using cron, but I think I'm using absolute paths everywhere I need to, so I'm stumped as to what the issue could be. My shell script is here:

        #!/bin/bash

        # Run GetMail
        getmail -r /PATH/TO/EMAIL/getmail.email

        # Save to S3
        aws s3 cp /PATH/TO/SCRIPT/email-backup.mbox s3://XXXXXXXXXX/email-backup.mbox

        # Send Confirmation Email
        SUBJECT="EMAIL SUBJECT"
        EMAIL="[email protected]"

        # Get current filesize
        FILENAME=/PATH/TO/SCRIPT/email-backup.mbox
        FILESIZE=$(stat -c%s "$FILENAME")

        # Email Content
        EMAILMESSAGE="/tmp/emailmessage.txt"
        echo "EMAIL BODY" >$EMAILMESSAGE
        echo "" >>$EMAILMESSAGE
        echo "Current File Size: $FILESIZE bytes" >>$EMAILMESSAGE

        # Send the Mail
        /bin/mail -s "$SUBJECT" "$EMAIL" < $EMAILMESSAGE
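
    A plausible explanation, offered as a guess rather than a confirmed diagnosis: cron runs jobs with a minimal PATH (often just /usr/bin:/bin), so the bare getmail and aws commands may not resolve even though every file argument is absolute. A minimal sketch of that fix, with the binary locations assumed (verify them with "which getmail" and "which aws"):

        #!/bin/bash

        # Assumed install locations; confirm with `which getmail` / `which aws`.
        GETMAIL=/usr/bin/getmail
        AWS=/usr/local/bin/aws

        # Log everything so failures under cron become visible.
        exec >> /tmp/email-backup.log 2>&1

        "$GETMAIL" -r /PATH/TO/EMAIL/getmail.email
        "$AWS" s3 cp /PATH/TO/SCRIPT/email-backup.mbox s3://XXXXXXXXXX/email-backup.mbox

    If the log then shows "command not found", the PATH theory is confirmed; if it shows a credentials error instead, the likely cause is that cron runs the job as a user without the configured AWS profile.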

    Read the article

  • When to run SYSPREP

    - by haxel
    I am attempting to set up a computer using an answer file created with the WAIK, then reset its settings with SYSPREP, and I have a quick question. We want to use Norton GHOST for our deployments. When do I run sysprep? I assume it is after I get the computer fully set up with the proper updates, software, and drivers, but I have not been able to find a direct answer to this online. Do I run it after everything is set up and the system is ready to be captured?
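
    For reference, the usual ordering is exactly that: configure the machine completely, run sysprep as the very last step, and capture the image while the machine is powered off. A sketch of a typical invocation (the answer-file path is an assumption; point it at the unattend file built with the WAIK):

        C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\unattend.xml

    The /shutdown switch powers the machine off after generalizing, so it can be booted straight into GHOST for capture without Windows starting up and consuming the sysprep state.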

    Read the article

  • Schedule robocopy run

    - by xeonet
    I have Windows Server 2008 Enterprise. I need to copy files from a network folder that I have connected as a Z: drive, and I need to schedule the copy. In Task Scheduler I run this every 5 minutes:

        robocopy.exe Z:\ C:\destination /E

    I've tried putting it in a .bat file and writing it directly in the scheduler; neither helps. I've set the task to run with highest privileges. Task Scheduler reports:

        Task Scheduler successfully completed task "\RoboCopy", instance "{dd2d2d1c-4ef1-4e30-b226-4a77aa52dab9}", action "C:\Windows\SYSTEM32\cmd.exe" with return code 16.
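
    Worth knowing when reading that result: robocopy's exit code 16 means a fatal error, typically that the source or destination could not be accessed at all. Mapped drive letters such as Z: usually do not exist in the session a scheduled task runs in (especially when the task runs whether the user is logged on or not), so a common fix is to address the share by UNC path. A sketch of a batch file along those lines (the share path is a placeholder):

        rem backup.bat -- \\server\share is a placeholder for the real source
        robocopy \\server\share C:\destination /E /R:2 /W:5 /LOG+:C:\robocopy.log

    The /LOG+ switch appends robocopy's own report to a file, which makes the next failure much easier to diagnose than the scheduler's bare return code.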

    Read the article

  • Run Ubuntu from VirtualBox on an external hard drive

    - by Bhavan
    I would like to run Ubuntu from my external hard drive. I have VirtualBox installed on the external hard drive, created a machine named "ubuntu", and installed the latest Ubuntu version from the downloaded .iso. After much juggling I got Ubuntu running on the machine where I did the installation. But when I move the drive to another machine, VirtualBox just won't open. What is the reason for this, and how can I get Ubuntu to run from whichever machine I plug the external USB hard disk into? Thanks a lot for your answers in advance. Best regards, Bhavan
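
    One likely constraint, stated as an assumption about the setup rather than a certain diagnosis: VirtualBox itself has to be installed on every host machine; only the virtual machine's files travel well on the external drive. With VirtualBox installed locally on the new host, a sketch of registering and starting the VM straight from the external drive (paths are placeholders and will differ per host):

        # Register the VM definition stored on the external drive, then start it.
        VBoxManage registervm /media/external/VMs/ubuntu/ubuntu.vbox
        VBoxManage startvm ubuntu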

    Read the article

  • Ubuntu - executable file - variable assignment throwing error on script run

    - by newcoder
    I am trying to run a small script, test, on an Ubuntu box. It looks like this:

        var1 = bash
        var2 = /home/test/directory
        ...
        <some more variable assignments and then program operations here>
        ...

    Now every time I run it, it throws errors:

        root@localhost# /opt/test
        /opt/test: line 1: var1: command not found
        /opt/test: line 3: var2: command not found
        ...
        more similar errors
        ...

    Can someone help me understand what is wrong in this script? Many thanks.
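
    For context, this is almost certainly the classic whitespace pitfall: the shell parses "var1 = bash" as a command named var1 with the arguments "=" and "bash", hence "command not found". Shell assignments take no spaces around the equals sign. A minimal corrected sketch:

        #!/bin/bash
        # No spaces around '=' in shell assignments; quote values that may contain spaces.
        var1=bash
        var2=/home/test/directory
        echo "$var1 $var2"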

    Read the article

  • How to run script from root as another user (with user PATH)

    - by Sandra
    I would like to have these commands run as the ss user from root:

        mkdir bin
        cp -r /opt/gitolite .
        gitolite/install -ln
        gitolite setup -pk ss.pub
        mkdir -p .gitolite/hooks/common
        ln -s /opt/pre-receive .gitolite/hooks/common/

    so that everything is executed in /home/ss. The 4th line requires $HOME/bin, as you can see from the 3rd line. The only way I can get it to work is by adding su -c "command" ss to each line, which is not a nice hack. This is an extension to my previous question, where I wasn't precise enough. Question: how do I run all these commands as a script in a practical way?
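
    One practical pattern, sketched under the assumption that a login shell for ss picks up $HOME/bin on its PATH: feed the whole sequence to a single su invocation through a here-document, so it runs once, as ss, with ss's own environment:

        # Run the whole block as ss with a login shell ("-"); an untested sketch.
        su - ss <<'EOF'
        mkdir -p bin
        cp -r /opt/gitolite .
        gitolite/install -ln
        gitolite setup -pk ss.pub
        mkdir -p .gitolite/hooks/common
        ln -s /opt/pre-receive .gitolite/hooks/common/
        EOF

    The quoted 'EOF' keeps root's shell from expanding anything before ss's shell sees it.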

    Read the article

  • Run unit tests in Jenkins / Hudson in automated fashion from dev to build server

    - by Kevin Donde
    We are currently running a Jenkins (Hudson) CI server to build and package our .NET web projects and database projects. Everything is working great, but I want to start writing unit tests and then only pass the build if the unit tests pass. We are using the built-in MSBuild task to build the web project, with the following arguments:

        MsBuild Version: .NET 4.0
        MsBuild Build File: ./WebProjectFolder/WebProject.csproj
        Command Line Arguments: /target:Rebuild /p:Configuration=Release;DeployOnBuild=True;PackageLocation=".\obj\Release\WebProject.zip";PackageAsSingleFile=True

    We need to run automated tests over our code that run automatically when we build on our machines (post-build event, possibly) but also run when Jenkins does a build for that project. If you run it like this, it doesn't build the unit test project, because the web project doesn't reference the test project. The test project would reference the web project, but I'm pretty sure that would be butchering our automated builds, as they exist primarily to build and package our deployments. Running these tests should be a step in that automated build and package process.

    Options (see the sketch after this list):

    1. Create two Jenkins jobs: one to run the tests, and, if the tests pass, another triggered build that builds and packages the web project. Put the post-build event on the test project.
    2. Build the solution instead of the project (make sure the solution contains the required tests) and put post-build events on any test projects that run the NUnit console to run the tests. Then use the command line to copy all the required files from each of the bin and content directories into a package.
    3. Just build the test project in Jenkins instead of the web project. The test project would reference the web project (depending on what you're testing) and build it.

    Problems:

    1. There are two jobs, not one: two things to debug, one to see if the tests passed and one to build and compile the web project. The tests could pass but the build could fail on something that isn't exercised by what you're testing.
    2. This requires us to know exactly what goes into the build; right now MSBuild does it all for us. If you have multiple teams working on a project, every time an extra folder is created you have to worry about possibly brittle command-line statements.
    3. This seems like a corruption of our main purpose here. The tests should be a step in this process, not the overriding most important thing in it. I'm also not 100% sure that a triggered build is the same as a normal build: does it do all the same things, move all the correct files into the same directories, etc.?

    Initial problem: we want to run our tests whenever our main project is built, but adding a post-build event to the web project that runs against the test project doesn't work, because the web project doesn't reference the test project and won't trigger a build of it. I could go on, but that's enough. We've spent about a week trying to make this work nicely but haven't succeeded. Feel free to edit this if you feel you can get a better response.
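
    For what it's worth, a common shape for option 2, sketched with assumed names (the solution, test assembly, and NUnit paths are all placeholders): have Jenkins build the whole solution, then run the NUnit console as a second build step, so a failing test returns a non-zero exit code and fails the build before packaging runs:

        msbuild WebProject.sln /t:Rebuild /p:Configuration=Release
        "C:\Program Files (x86)\NUnit 2.6\bin\nunit-console.exe" WebProject.Tests\bin\Release\WebProject.Tests.dll

    Run as separate Jenkins build steps, the second command only executes if the first succeeds, which gives the "tests as a step in the process" ordering without a second job.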

    Read the article

  • How to solve a CUDA crash when running the CUDA example fluidsGL?

    - by sam
    I use Ubuntu 12.04 64-bit with a GTX 560 Ti. I installed CUDA by following these instructions:

        wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/toolkit/cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
        wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/drivers/devdriver_4.2_linux_64_295.41.run
        wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/sdk/gpucomputingsdk_4.2.9_linux.run
        chmod +x cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
        sudo ./cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
        echo "/usr/local/cuda/lib64" > ~/cuda.conf
        echo "/usr/local/cuda/lib" >> ~/cuda.conf
        sudo mv ~/cuda.conf /etc/ld.so.conf.d/cuda.conf
        sudo ldconfig
        echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
        chmod +x gpucomputingsdk_4.2.9_linux.run
        ./gpucomputingsdk_4.2.9_linux.run
        sudo apt-get install build-essential libx11-dev libglu1-mesa-dev freeglut3-dev libxi-dev libxmu-dev gcc-4.4 g++-4.4
        sed 's/g++ -fPIC/g++-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
        sed 's/gcc -fPIC/gcc-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
        sed 's/-L$(SHAREDDIR)\/lib/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
        sed 's/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current $(NVCUVIDLIB)/-L$(SHAREDDIR)\/lib $(NVCUVIDLIB)/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk

    After I run ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release/./fluidsGL, the machine gets stuck; even the mouse and keyboard stop responding. How can I solve this? Thank you~
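
    Before digging into fluidsGL itself, it may help to confirm that the driver and toolkit agree with each other by running the non-graphical deviceQuery sample from the same release directory; if that also hangs or errors, the driver installation is a more likely culprit than the sample (a diagnostic sketch, not a fix):

        cd ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release
        ./deviceQuery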

    Read the article

  • /etc/rc.local not being run on Ubuntu Desktop Install

    - by loosecannon
    I have been trying to get Sphinx to run at boot, so I added some lines to /etc/rc.local, but nothing happens when I start up. If I run it manually, however, it works: /etc/init.d/rc.local start works fine, as does /etc/rc.local itself. It's listed in the default runlevel and is executable, but it does not work. I am considering writing a separate init.d script to do the same thing, but that's a lot of work for a simple task.

        dumbledore:/etc/init.d# ls -l rc*
        -rwxr-xr-x 1 root root 8863 2009-09-07 13:58 rc
        -rwxr-xr-x 1 root root 801 2009-09-07 13:58 rc.local
        -rwxr-xr-x 1 root root 117 2009-09-07 13:58 rcS
        dumbledore:/etc/init.d# ls /etc/rc.local -l
        -rwxr-xr-x 1 root root 491 2011-05-14 16:13 /etc/rc.local
        dumbledore:/etc/init.d# runlevel
        N 2
        dumbledore:/etc/init.d# ls /etc/rc2.d/ -l
        total 4
        lrwxrwxrwx 1 root root 18 2011-04-22 18:53 K08vmware -> /etc/init.d/vmware
        -rw-r--r-- 1 root root 677 2011-03-28 15:10 README
        lrwxrwxrwx 1 root root 18 2011-04-22 18:53 S19vmware -> /etc/init.d/vmware
        lrwxrwxrwx 1 root root 18 2011-05-15 14:09 S20ddclient -> ../init.d/ddclient
        lrwxrwxrwx 1 root root 20 2011-03-10 18:00 S20fancontrol -> ../init.d/fancontrol
        lrwxrwxrwx 1 root root 20 2011-03-10 18:00 S20kerneloops -> ../init.d/kerneloops
        lrwxrwxrwx 1 root root 27 2011-03-10 18:00 S20speech-dispatcher -> ../init.d/speech-dispatcher
        lrwxrwxrwx 1 root root 19 2011-03-10 18:00 S25bluetooth -> ../init.d/bluetooth
        lrwxrwxrwx 1 root root 20 2011-03-10 18:00 S50pulseaudio -> ../init.d/pulseaudio
        lrwxrwxrwx 1 root root 15 2011-03-10 18:00 S50rsync -> ../init.d/rsync
        lrwxrwxrwx 1 root root 15 2011-03-10 18:00 S50saned -> ../init.d/saned
        lrwxrwxrwx 1 root root 19 2011-03-10 18:00 S70dns-clean -> ../init.d/dns-clean
        lrwxrwxrwx 1 root root 18 2011-03-10 18:00 S70pppd-dns -> ../init.d/pppd-dns
        lrwxrwxrwx 1 root root 14 2011-05-07 11:22 S75sudo -> ../init.d/sudo
        lrwxrwxrwx 1 root root 24 2011-03-10 18:00 S90binfmt-support -> ../init.d/binfmt-support
        lrwxrwxrwx 1 root root 17 2011-05-12 21:18 S91apache2 -> ../init.d/apache2
        lrwxrwxrwx 1 root root 22 2011-03-10 18:00 S99acpi-support -> ../init.d/acpi-support
        lrwxrwxrwx 1 root root 21 2011-03-10 18:00 S99grub-common -> ../init.d/grub-common
        lrwxrwxrwx 1 root root 18 2011-03-10 18:00 S99ondemand -> ../init.d/ondemand
        lrwxrwxrwx 1 root root 18 2011-03-10 18:00 S99rc.local -> ../init.d/rc.local
        dumbledore:/etc/init.d# cat /etc/rc.local
        #!/bin/sh -e
        #
        # rc.local
        #
        # This script is executed at the end of each multiuser runlevel.
        # Make sure that the script will "exit 0" on success or any other
        # value on error.
        #
        # In order to enable or disable this script just change the execution
        # bits.
        #
        # By default this script does nothing.

        # Start sphinx daemon for rails app on startup
        # Added 2011-05-13
        # Cannon Matthews
        cd /var/www/extemp
        /usr/bin/rake ts:config
        /usr/bin/rake ts:start
        touch ./tmp/ohyeah
        cd -
        exit 0
        dumbledore:/etc/init.d# cat /etc/init.d/rc.local
        #! /bin/sh
        ### BEGIN INIT INFO
        # Provides:          rc.local
        # Required-Start:    $remote_fs $syslog $all
        # Required-Stop:
        # Default-Start:     2 3 4 5
        # Default-Stop:
        # Short-Description: Run /etc/rc.local if it exist
        ### END INIT INFO

        PATH=/sbin:/usr/sbin:/bin:/usr/bin
        . /lib/init/vars.sh
        . /lib/lsb/init-functions

        do_start() {
            if [ -x /etc/rc.local ]; then
                [ "$VERBOSE" != no ] && log_begin_msg "Running local boot scripts (/etc/rc.local)"
                /etc/rc.local
                ES=$?
                [ "$VERBOSE" != no ] && log_end_msg $ES
                return $ES
            fi
        }

        case "$1" in
            start)
                do_start
                ;;
            restart|reload|force-reload)
                echo "Error: argument '$1' not supported" >&2
                exit 3
                ;;
            stop)
                ;;
            *)
                echo "Usage: $0 start|stop" >&2
                exit 3
                ;;
        esac

    Read the article

  • Monit won't run

    - by Yaniro
    I have two identical EC2 instances (the second is a replica of the first), running Gentoo. The first instance runs monit, which monitors a single process and some system resources, and works great. On the second instance, monit runs but quits right away. The configuration is similar on both instances, as are the monit versions. monit.log shows:

        [GMT Oct 3 08:36:41] info : monit daemon with PID 5 awakened

    The final lines of strace monit show:

        write(2, "monit daemon with PID 5 awakened"..., 33monit daemon with PID 5 awakened) = 33
        time(NULL) = 1349252827
        open("/etc/localtime", O_RDONLY) = 4
        fstat64(4, {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        fstat64(4, {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb773a000
        read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\1\0\0\0\0"..., 4096) = 118
        _llseek(4, -6, [112], SEEK_CUR) = 0
        read(4, "\nGMT0\n", 4096) = 6
        close(4) = 0
        munmap(0xb773a000, 4096) = 0
        write(3, "[GMT Oct 3 08:27:07] info :"..., 33) = 33
        write(3, "monit daemon with PID 5 awakened"..., 33) = 33
        waitpid(-1, NULL, WNOHANG) = -1 ECHILD (No child processes)
        close(3) = 0
        exit_group(0) = ?

    There are no core dumps (ulimit -c shows unlimited). monit -v shows:

        monit: Debug: Adding host allow 'localhost'
        monit: Debug: Skipping redundant host 'localhost'
        monit: Debug: Skipping redundant host 'localhost'
        monit: Debug: Adding credentials for user 'xxxx'.
        Runtime constants:
        Control file = /etc/monitrc
        Log file = /var/log/monit/monit.log
        Pid file = /var/run/monit.pid
        Id file = /var/run/monit.pid
        Debug = True
        Log = True
        Use syslog = False
        Is Daemon = True
        Use process engine = True
        Poll time = 30 seconds with start delay 0 seconds
        Expect buffer = 256 bytes
        Event queue = base directory /var/monit with 100 slots
        Mail server(s) = xx.xxx.xx.xxx with timeout 30 seconds
        Mail from = (not defined)
        Mail subject = (not defined)
        Mail message = (not defined)
        Start monit httpd = True
        httpd bind address = Any/All
        httpd portnumber = 2812
        httpd signature = True
        Use ssl encryption = False
        httpd auth. style = Basic Authentication and Host/Net allow list
        Alert mail to = [email protected]
        Alert on = All events

        The service list contains the following entries:

        System Name = xxxx
        Monitoring mode = active
        CPU wait limit = if greater than 20.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert
        CPU system limit = if greater than 30.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert
        CPU user limit = if greater than 70.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert
        Swap usage limit = if greater than 25.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert
        Memory usage limit = if greater than 75.0% 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert
        Load avg. (5min) = if greater than 2.0 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert
        Load avg. (1min) = if greater than 4.0 1 times within 1 cycle(s) then alert else if succeeded 1 times within 1 cycle(s) then alert

        Process Name = xxxx
        Group = server
        Pid file = /var/run/xxxx.pid
        Monitoring mode = active
        Start program = '/etc/init.d/xxxx restart' timeout 20 second(s)
        Stop program = '/etc/init.d/xxxx stop' timeout 30 second(s)
        Existence = if does not exist 1 times within 1 cycle(s) then restart else if succeeded 1 times within 1 cycle(s) then alert
        Pid = if changed 1 times within 1 cycle(s) then alert
        Ppid = if changed 1 times within 1 cycle(s) then alert
        Timeout = If restarted 3 times within 5 cycle(s) then unmonitor
        Alert mail to = [email protected]
        Alert on = All events
        Alert mail to = [email protected]
        Alert on = All events
        -------------------------------------------------------------------------------
        monit daemon with PID 5 awakened

    I ran emerge --sync before emerge -va monit, which installed monit v5.3.2. When that didn't work, I downloaded v5.5 from their website and compiled it from source, which did not work either.
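
    When a daemon exits cleanly like that (exit_group(0), no core), one way to watch it in slow motion is to keep it in the foreground with verbose logging; monit's -I flag tells it not to detach (a diagnostic sketch, assuming monit 5.x's standard flags):

        monit -vI -c /etc/monitrc

    If it stays up in the foreground but dies when daemonized, the difference between the two instances likely lies in how the daemon is launched or reaped, rather than in the monit configuration itself.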

    Read the article

  • Trying to run chrooted suPHP with UserDir, getting 500 server error

    - by Greg Antowski
    I've managed to get suPHP working fine with UserDir (i.e., PHP files run from /home/$username/public_html), but I can't get it to work when I chroot it to the user's home directory. I've been following this guide and adapting it to my needs: http://compilefailure.blogspot.co.nz/2011/09/suphp-chroot-gotchas.html

    I'm not creating vhosts; I just want PHP scripts to be jailed to the user's home directory. I've gotten to the part where you use makejail and set up a symlink. However, even with the symlink set up correctly, PHP scripts won't run. This is what's shown in the Apache error log:

        SoftException in Application.cpp:537: Could not execute script "/home/jimmy/public_html/test.php"
        [error] [client 127.0.0.1] Caused by SystemException in API_Linux.cpp:444: execve() for program "/usr/bin/php-cgi" failed: No such file or directory

    The thing is, if I try running either of the following commands in the terminal, it works without any issues:

        /home/jimmy/usr/bin/php-cgi /home/jimmy/public_html/test.php
        /usr/bin/php-cgi /home/jimmy/public_html/test.php

    I've been trying for hours to get this going, and documentation for this kind of stuff is almost non-existent. If anyone could help me out with this, I'd be extremely grateful.
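
    A detail that often explains exactly this symptom: inside a chroot, execve() reports "No such file or directory" not only when the binary is missing but also when its dynamic loader or a shared library is absent from the jail. A quick diagnostic sketch (paths follow the question's layout; the chroot test reproduces what suPHP does):

        # List the libraries the jailed copy needs, then try executing it inside the jail.
        ldd /home/jimmy/usr/bin/php-cgi
        sudo chroot /home/jimmy /usr/bin/php-cgi -v

    If the chroot command fails the same way, copy every library ldd lists (plus the /lib/ld-linux* loader) into the corresponding paths under /home/jimmy and retry.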

    Read the article

  • Squid external_acl_type Cannot run process

    - by Alex Rezistorman
    I want to restrict uploading for a group of users via Squid, so I've chosen to use external_acl_type, but after a reload Squid returns this error:

        WARNING: Cannot run '/usr/local/etc/squid/lists/newupload.sh' process.

    The permissions of newupload.sh and squid are the same, and newupload.sh is executable. How can I solve this problem? Thanks in advance.

    newupload.sh:

        #!/bin/sh
        while read line; do
            set -- $line
            length=$1
            limit=$2
            if [ -z "$length" ] || [ "$length" -le "$2" ]; then
                echo OK
            else
                echo ERR
            fi
        done

    The relevant lines from squid.conf:

        external_acl_type request_body protocol=2.5 %{Content-Lenght} /usr/local/etc/squid/lists/newupload.sh
        acl request_max_size external request_body 5000
        http_access allow users request_max_size

    Squid version:

        squid -v
        Squid Cache: Version 3.2.13
        configure options: '--with-default-user=squid' '--bindir=/usr/local/sbin' '--sbindir=/usr/local/sbin' '--datadir=/usr/local/etc/squid' '--libexecdir=/usr/local/libexec/squid' '--localstatedir=/var' '--sysconfdir=/usr/local/etc/squid' '--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid/squid.pid' '--with-swapdir=/var/squid/cache/squid' '--enable-auth' '--enable-build-info' '--enable-loadable-modules' '--enable-removal-policies=lru heap' '--disable-epoll' '--disable-linux-netfilter' '--disable-linux-tproxy' '--disable-translation' '--enable-auth-basic=PAM' '--disable-auth-digest' '--enable-external-acl-helpers= kerberos_ldap_group' '--enable-auth-negotiate=kerberos' '--disable-auth-ntlm' '--without-pthreads' '--enable-storeio=diskd ufs' '--enable-disk-io=AIO Blocking DiskDaemon IpcIo Mmapped' '--enable-log-daemon-helpers=file' '--disable-url-rewrite-helpers' '--disable-ipv6' '--disable-snmp' '--disable-htcp' '--disable-forw-via-db' '--disable-cache-digests' '--disable-wccp' '--disable-wccpv2' '--disable-ident-lookups' '--disable-eui' '--disable-ipfw-transparent' '--disable-pf-transparent' '--disable-ipf-transparent' '--disable-follow-x-forwarded-for' '--disable-ecap' '--disable-icap-client' '--disable-esi' '--enable-kqueue' '--with-large-files' '--enable-cachemgr-hostname=proxy.adir.vbr.ua' '--with-filedescriptors=131072' '--disable-auto-locale' '--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/' '--build=amd64-portbld-freebsd8.3' 'build_alias=amd64-portbld-freebsd8.3' 'CC=cc' 'CFLAGS=-O2 -fno-strict-aliasing -frename-registers -fweb -fforce-addr -fmerge-all-constants -maccumulate-outgoing-args -pipe -march=core2 -I/usr/local/include -DLDAP_DEPRECATED' 'LDFLAGS= -L/usr/local/lib' 'CPPFLAGS=-I/usr/local/include' 'CXX=c++' 'CXXFLAGS=-O2 -fno-strict-aliasing -frename-registers -fweb -fforce-addr -fmerge-all-constants -maccumulate-outgoing-args -pipe -march=core2 -I/usr/local/include -DLDAP_DEPRECATED' 'CPP=cpp' --enable-ltdl-convenience

    Related post: Restrict uploading for groups in squid
    http://squid-web-proxy-cache.1019090.n4.nabble.com/flexible-managing-of-request-body-max-size-with-squid-2-5-STABLE12-td1022653.html
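
    Two things worth checking, offered as guesses rather than a confirmed diagnosis: Squid starts helpers as its effective user (squid, per --with-default-user), so "same permissions" needs to include execute/search rights on every directory leading to the script; and the header name in the config is misspelled ("Content-Lenght"), which would leave the helper without the value it expects even once it runs. A sketch of testing the helper the way Squid would invoke it:

        # Feed the helper a sample "length limit" line as the squid user (FreeBSD su).
        echo "1024 5000" | su -m squid -c /usr/local/etc/squid/lists/newupload.sh

    If that prints OK, the script itself is fine and the problem is how Squid reaches it; if it fails with a permission error, the path or ownership is the culprit.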

    Read the article

  • How to run multiple instances of Tor?

    - by Ed
    I'm trying to set up a special proxy server (running Windows). It will have several instances of Privoxy and Tor running, and my app will choose which Privoxy instance to send HTTP requests to depending on the load. Privoxy will then forward them to Tor. I'm using srvany.exe to create the services. At the moment I'm running 3 Privoxy and 3 Tor services (I copied the binaries to different folders). Each Privoxy service is listening on its own port (8118, 8119, 8120); I can see them listening in a port scanner. This is the application path (for srvany in the registry) for the 1st service:

        C:\Anonymiser\Privoxy 01\privoxy.exe --service

    I've also configured the Tor services to listen on different ports (9050, 9052, 9054). This is the application path for the 1st service:

        C:\Anonymiser\Tor 01\tor.exe -f "C:\Anonymiser\Tor 01\torrc"

    The problem is, when I start the Tor services, only the first service I start is listening on its port. The others aren't listening, though they listen if I run them separately. Any ideas what could be wrong? How can I make all 3 services listen on their assigned ports?

    This is one of my Privoxy configs:

        confdir .
        logdir .
        logfile privoxy.log
        debug 1     # show each GET/POST/CONNECT request
        debug 4096  # Startup banner and warnings
        debug 8192  # Errors - we highly recommended enabling this
        listen-address localhost:8118
        toggle 0
        enable-remote-toggle 0
        enable-remote-http-toggle 0
        enable-edit-actions 1
        buffer-limit 4096
        forwarded-connect-retries 0
        forward-socks4a / localhost:9050 .

    This is one of my Tor configs:

        ControlPort 9051
        Log notice stdout
        SocksListenAddress localhost
        SocksPort 9050

    EDIT: Found a workaround. The Tor binary wants a lock on a file in the AppData folder. Because all of the instances want a lock on the same file, only the first one I start will be working. The workaround is to run each Tor instance under a different account. Not the best solution, but it works.
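
    A lighter alternative to separate accounts, sketched on the assumption that the contested lock file is the one in Tor's data directory: give each instance its own DataDirectory in its torrc, so the lock files no longer collide. For the first instance:

        # torrc for instance 1 (paths assumed to mirror the install layout)
        SocksPort 9050
        ControlPort 9051
        Log notice stdout
        DataDirectory C:\Anonymiser\Tor 01\Data

    Each of the other two torrc files would point at its own Data folder alongside its own ports.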

    Read the article

  • mdadm cron job sends email that cron has run

    - by Andrew
    I've got an Ubuntu 8.04 server using mdadm to create several RAID1 arrays. I created /etc/cron.hourly/mdadm as follows:

        #! /bin/sh
        set -e
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot

    (Yes, the array numbers are not sequential, and I'm not using --scan because I have a degraded array that may or may not have been used as swap and can't be deleted, but I think that's a separate issue. If it's the underlying cause of this, I need to fix it.)

    mdadm sends me email (configured in /etc/mdadm/mdadm.conf) on DegradedArray etc. events. This is the desired behaviour. What is not desired, and what I can't work out, is why cron is sending me (relatively pointless) emails, via an alias in /etc/aliases:

        From: root@<hostname> (Cron Daemon)
        To: root@<hostname>
        Subject: Cron <root@<hostname>> cd / && run-parts --report /etc/cron.hourly
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <LOGNAME=root>
        Message-Id: <id@hostname>
        Date: Fri, 7 May 2010 13:17:01 +0930 (CST)

        /etc/cron.hourly/mdadm: mdadm: Monitor using email address "<root_alias@domain>" from config file

    I've got a dozen other servers behaving correctly (mdadm sends email, cron doesn't) with identical /etc/crontab files:

        # /etc/crontab: system-wide crontab
        # <snip comments>
        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user command
        17 * * * * root cd / && run-parts --report /etc/cron.hourly
        <snip anacron jobs>

    Should I simply remove the --report, or is there something else in my cron config somewhere causing this?
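
    For context on the mechanics: cron mails whatever output the hourly run produces, and run-parts --report merely prefixes that output with the script's name. So the mail exists because this machine's mdadm prints its "Monitor using email address ..." banner where the others apparently stay silent. A sketch of silencing just that banner while keeping set -e and mdadm's own alert mail (whether the banner arrives on stdout or stderr is an assumption; adjust the redirection to match):

        #! /bin/sh
        set -e
        # Discard the informational banner; mdadm's own alert mail is unaffected.
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot > /dev/null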

    Read the article

  • Windows 7: Touch gestures in IE not working without explorer.exe being run once

    - by Michael
    Details: Internet Explorer 9 and Windows 7 Professional, running on an HP TouchSmart (touch-screen PC). It is going to be a kiosk PC (running a custom GUI for displaying websites).

    Scenario 1: When running Internet Explorer as a normal program in Windows 7, touch functions work perfectly: I can scroll the website by dragging it with my finger, pinch-zoom, and touch-and-hold to right-click. I now change the default shell in Windows to Internet Explorer (i.e., IE starts instead of explorer.exe). Internet Explorer of course starts up when logging in; however, touch functions are reduced to basic clicking (no dragging, no pinch zooming, no touch-and-hold right-click). Then I manually start explorer.exe, and the touch functions work again! And here is the weird part: when I kill explorer.exe, the touch functions keep working, even if I close IE and start a new instance.

    Scenario 2: Exactly the same, but instead of changing the default shell to Internet Explorer, I change it to my own program, which uses an embedded Internet Explorer ("WebBrowser"). The same thing happens.

    What I've tried: autorun programs. When explorer.exe launches, it launches all the autorun programs. There are no relevant programs being run by explorer, but just in case, I manually started all the autorun programs so that the session is identical (but without explorer.exe) to a normal login. It still does not work (until I launch explorer.exe). Specifically, TabTip.exe, TabTip32.exe and wisptis.exe are all running. All services are also started.

    To sum it up: running explorer.exe once changes something in the touch capabilities of Internet Explorer. It doesn't matter whether explorer.exe is still running, as long as it has been run once. Does anyone know what causes this behavior, or how I can circumvent it neatly? Thanks!

    Read the article

  • Bash script 'while read' loop causes 'broken pipe' error when run with GNU Parallel

    - by Joe White
    According to the GNU Parallel mailing list, this is not a GNU Parallel-specific problem, and they suggested that I post it here. The error I'm getting is a "broken pipe" error, but I feel I should first explain the context of my problem and what causes it. It happens when trying to use any bash script containing a 'while read' loop with GNU Parallel. I have a basic bash script like this:

        #!/bin/bash
        # linkcheck.sh
        while read domain
        do
            host "$domain"
        done

    Assume that I want to pipe in a large list (250 MB, say):

        cat urllist | ./linkcheck.sh

    Running the host command on 250 MB worth of URLs is rather slow. To speed things up I want to break the input into chunks before piping it, and then run multiple jobs in parallel. GNU Parallel is capable of doing this:

        cat urllist | parallel --pipe -j0 parallel ./linkcheck.sh {}

    {} is substituted by the contents of urllist, line by line. Assume that my system's default setup is capable of running about 500 jobs per instance of parallel. To get around this limitation we can parallelize Parallel itself:

        cat urllist | parallel -j10 --pipe parallel -j0 ./linkcheck.sh {}

    This will run about 5000 jobs. It will also, sadly, cause the error "broken pipe" (bash FAQ). Yet the script starts to work if I remove the while read loop and take input directly from whatever is fed into {}, e.g.:

        #!/bin/bash
        # linkchecker.sh
        domain="$1"
        host "$1"

    Why won't it work with a while read loop? Is it safe to just turn off the SIGPIPE signal to stop the "broken pipe" message, or will that have side effects such as data corruption? Thanks for reading.
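
    One sidestep worth trying, sketched with an assumed block size: --pipe already delivers each chunk on the child's standard input, which is exactly what a 'while read' loop consumes, so a single level of parallel can drive the unmodified script without the inner parallel at all:

        # Each of up to 10 concurrent copies of linkcheck.sh reads one ~10 MB chunk from stdin.
        cat urllist | parallel --pipe --block 10M -j10 ./linkcheck.sh

    This avoids the nested invocation entirely, which removes one common source of the broken-pipe message.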

    Read the article

  • Run script before shutdown/restart

    - by dtbarne
    I'd like to run a PHP script when an instance is told to shut down, but of course before it actually finishes shutting down. My particular script just pushes some log files from the local partition to another server. I've got the gist of how this process works, but I need some clarification. As I understand it (please correct me if I'm wrong):

    1. Create an executable script in /etc/init.d (let's call it /etc/init.d/push-logs).
    2. Create a symlink to /etc/init.d/push-logs from /etc/rc0.d (shutdown) and /etc/rc6.d (reboot). The name should be KXXpush-logs.

    Here are my questions:

    1. Of course: am I understanding correctly?
    2. For #2 above, it sounds like the lower the XX the better. Is there a number too low to use? Does it matter if it shares a number with another script?
    3. Does the script in /etc/init.d/push-logs HAVE to follow the standard init.d template (supporting start/stop etc. commands)? This doesn't really apply to my use case. If possible, I just want the script to be the following:

        #!/bin/sh
        #
        # Run PHP file prior to shutdown
        #
        /usr/bin/php /path/to/php_file.php
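
    One mechanical detail that bears on question 3: the rc system invokes K-prefixed links with the argument "stop", so the script needs to react to that argument even if it handles nothing else. A minimal sketch reconciling that with the simple script above:

        #!/bin/sh
        # /etc/init.d/push-logs -- K?? links are called with "stop" at shutdown/reboot.
        case "$1" in
          stop)
            /usr/bin/php /path/to/php_file.php
            ;;
        esac
        exit 0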

    Read the article

  • script to run sox to combine multiple mono tracks to stereo

    - by Ze'ev
    I have a folder full of .wav audio files. Some are stereo; most are mono splits. The mono split pairs are all named "foo bar track.L.wav" and "foo bar track.R.wav". I can use the command-line tool sox to combine a mono pair into one stereo track like this:

        sox -M track1.L.wav track1.R.wav track1.Stereo.wav

    where the first two files are the mono pair and the third is the output stereo file. This is great, but I'd like a script that automatically finds all the mono pairs and combines them into stereo files. I.e., I need it to find all files which have the same name except for the .L. and .R. before the extension, and run sox on them, outputting to a new file with the same name minus the L/R suffix. For example, if my folder contains these files:

        track1.L.wav track2.L.wav track3.L.wav track4.L.wav
        track1.R.wav track2.R.wav track3.R.wav track4.R.wav
        track6.wav track7.wav

    I need to run these commands:

        sox -M track1.L.wav track1.R.wav track1.Stereo.wav
        sox -M track2.L.wav track2.R.wav track2.Stereo.wav
        sox -M track3.L.wav track3.R.wav track3.Stereo.wav
        sox -M track4.L.wav track4.R.wav track4.Stereo.wav

    Here's where I am so far:

        for file in ./*.L.wav; do
            file2=`echo $file | sed 's_\(.*\).L.wav_\1.R.wav_'`;
            out=`echo $file | sed 's_\(.*\).L.wav_\1.STEREO.wav_'`;
            echo $file - $file2 - $out;
        done

    That works, but when I replace the echo line with sox -M $file $file2 $out; it doesn't work: spaces in the filenames cause it to fail.
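
    The failure with spaces points at unquoted expansions: $file, $file2 and $out are word-split by the shell before sox sees them. A sketch of the loop with every expansion quoted, using bash's own suffix trimming in place of sed (the quoting is what fixes the spaces):

        #!/bin/bash
        for file in ./*.L.wav; do
            file2="${file%.L.wav}.R.wav"
            out="${file%.L.wav}.STEREO.wav"
            # Only combine when the matching right channel actually exists.
            [ -f "$file2" ] && sox -M "$file" "$file2" "$out"
        done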

    Read the article

  • On Windows XP, Start > Run "My Documents" sometimes doesn't work

    - by Clayton Hughes
    On all of my home computers, I can enter "my documents" into the Start > Run prompt and the My Documents folder of the current profile will open. What's more, I can continue typing subfolders, files, etc., and auto-complete works; it's smart and enjoyable. I can't check at the moment, but I'm almost positive entries like "My Pictures" and "My Music" also go to their correct folders. On my work computers, if I enter "my documents" into the Start > Run prompt, I get the following error:

        Windows cannot find 'my'. Make sure you typed the name correctly, and then try again. To search for a file, click the Start Button, and then click Search.

    I can sort of circumvent this by creating a shortcut in my PATH named 'my' that points to the My Documents folder, but this doesn't restore auto-complete (and it's otherwise imperfect, of course, because "my pictures" or "my music" would all direct to the same place). A Google search doesn't provide much help, although it does turn up a poster with this same question at another board in 2007: http://www.msfn.org/board/lofiversion/index.php/t124813.html (login required, but a Google cache is available here: http://preview.tinyurl.com/ygxhwwl)

    Is this just a limitation of networks belonging to a domain, or is there some way I can get this functionality back? My Documents does live in the standard place (C:\Documents and Settings\{username}\My Documents), and not on a network drive or anything. It's probably worth adding that the computers are part of some freakish Novell domain thing, too. I'm not in IT here, so I'm not too up on the details. Thanks for any help/suggestions!

    Read the article

  • Mac terminal: run a file of commands

    - by Ilan Tal
    I am coming from Linux and trying to get a Mac to do what I want. The question is what is the best tool to use. I want to mount (and unmount) several remote disks. In a terminal I can do the trick with:

        mount -t smbfs //username:pass@addr /Users/me/RemoteDisks/mnt1

    Since I want to mount several disks, I would like to put all of the information into a file, store it in Documents/subfolder, and make a link to it on the desktop (or somewhere better, if there is a better place). At the moment I have manually run the appropriate command in the terminal, and the remote disk is mounted and I see its contents. What I need is a one-click method to run a file that mounts all the disks. I tried AppleScript, but it didn't like my commands; I don't know exactly what it expects to see, and perhaps AppleScript is the wrong tool. I have no problems in Linux, but the Mac is new to me and I don't know what I should be using. Thanks, Ilan
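
    One Mac-native convenience, sketched with placeholder paths and credentials: a plain shell script saved with a .command extension and made executable opens in Terminal when double-clicked, so it can live on the Desktop as the one-click launcher:

        #!/bin/bash
        # mount-remotes.command -- double-clickable; paths/credentials are placeholders.
        mkdir -p /Users/me/RemoteDisks/mnt1
        mount -t smbfs //username:pass@addr/share /Users/me/RemoteDisks/mnt1
        # Repeat mkdir/mount for each additional remote disk.

    Followed once by "chmod +x mount-remotes.command" in Terminal to make it executable.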

    Read the article

  • Ubuntu server or Debian server (to run C++ apps developed on Ubuntu)

    - by skyeagle
    I have written a number of C++ server-side daemons for my website using my Ubuntu 9.10 dev machine. They are "GUI-less" daemons (plus libraries used by the daemons). I am now about to host my website and need to decide whether to go with Debian server or Ubuntu server. In a nutshell, here is the situation:

    - I developed on the Ubuntu desktop because I preferred the friendlier GUI.
    - I would like to deploy on Debian server because of the (perceived?) robustness of Debian server over Ubuntu server. I may be totally wrong here; in fact, this is really what this question is all about.
    - If Debian server is indeed more robust than Ubuntu server, then I have no choice but to go with Debian server. BUT: will my Ubuntu-developed C++ apps run on the server, or do I need to recompile them there? I'd HATE to have to do that, because I want to keep the server machine clean and light: no GUI, no dev tools, etc. This last question is really about binary compatibility between Ubuntu and Debian.

    I want the server to be robust, secure and stable, and to simply act as a server (i.e., LAMP and very little else; no GUI etc.). Given that requirement, and the fact that I need to run my C++ apps (developed on Ubuntu 9.10), I need advice on which OS to choose for the server. Ideally, any advice will be backed with a reason. I am particularly interested in hearing from people who have been in an identical situation, or done something similar.
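
    On the binary-compatibility sub-question, a quick way to ground the decision in facts rather than guesswork (a sketch; "mydaemon" is a placeholder): list the shared libraries the daemon links against and the highest glibc symbol version it needs, then compare against what the target Debian release ships:

        ldd ./mydaemon
        objdump -T ./mydaemon | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -1

    If the target's glibc is at least that version and the libraries in the first listing are all available, the binary will generally run unmodified; otherwise a rebuild (or a static link) is needed.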

    Read the article

  • Error in Apache: /var/run/apache2 not found

    - by Julen
    This is more of a self-answered question, but since it drove me crazy I would like to share it with the community, and maybe someone can tell me why it happened or what caused it. The thing is, I wanted to install a CGI app on my Ubuntu 10.4 machine, one built from the samples that come with the gSOAP toolkit; my intention was to access it from an ASP.NET machine. Regular Ubuntu does not come with Apache, so I installed it from Synaptic. Pretty easy. I followed this: How to Install Apache2 webserver with PHP, CGI and Perl Support in Ubuntu Server. Instead of apache.conf I tweaked httpd.conf, since a colleague here used that file to get his Apache running. Moreover, I was able to access his CGI from my ASP.NET client, but mysteriously I could not access my own; I was always getting "The request failed with HTTP status 503: Service Temporarily Unavailable". Checking Apache's error.log I found these messages:

        No such file or directory: unable to connect to cgi daemon after multiple tries: /home/julen/htdocs/cgi-bin/calcserver

    And, looking more carefully, whenever I restarted Apache I got this other message:

        No such file or directory: Couldn't bind unix domain socket /var/run/apache2/cgisock. cgid daemon failed to initialize

    I am pretty new to Ubuntu, and I didn't think Apache and Synaptic could have made a mistake in the installation process of the server, but it is true that /var/run/apache2 was missing, whereas on my colleague's computer it was not. I tried to find an "elegant" solution, but I only found a post from 2006 with a slight reference to it. Finally I decided to create the folder myself (as root), and then everything worked fine. I hope this helps others who encounter a similar problem. Still, I wonder why the folder was not created in the first place. Best, Julen.
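
    For anyone reproducing the fix, the commands come down to the following; one hedged explanation for the lingering "why" is that /var/run lives on tmpfs on many Ubuntu releases, so a directory created there by an installer can vanish on reboot unless the init script recreates it:

        sudo mkdir -p /var/run/apache2
        sudo /etc/init.d/apache2 restart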

    Read the article

  • Windows Virtual Machines will not run

    - by jlego
    I'm trying to set up a few virtual machines to use for testing websites in various old versions of IE. I had Microsoft Virtual PC working on an older machine, using XP Mode and two other VHDs from Microsoft, which let me test in IE6-IE8. I've recently gotten a new work machine and am trying to set up the VMs again for testing, but nothing seems to work. Both the old and the new system run Windows 7 64-bit Ultimate with AMD processors.

    I downloaded Virtual PC and XP Mode from http://www.microsoft.com/windows/virtual-pc/download.aspx and went through the installation process. XP Mode is installed, but when I try to run it, it goes through the initial setup process only to fail when it is almost complete, with the error "Cannot Complete Setup". (After googling, I see that this might be a conflict with my processor.)

    I downloaded the other VHDs from the same page in order to get the other versions of IE and tried to set those up in Virtual PC as well. I click on them to start the machine, and both Windows 7 with IE8 and Windows Vista with IE7 just hang at a black screen.

    I then tried VirtualBox instead. I got Windows XP with IE6 running, but I have no internet connection in the VM. I tried all different settings and googled for the correct ones, but nothing seems to work. When I load the VM, XP reports that it has found new hardware but needs drivers; one of these pieces of hardware is the network adapter, but I can't connect to the internet in the guest OS to download the driver. VirtualBox tells me I need to install extensions for things to function properly. I went through that installation in the guest OS and restarted the VM, but now XP is asking for validation and I can't access the VM. I tried installing the other two OSes (Vista and 7), but I get a BSOD right after the startup screen appears, and the VM restarts itself.

    I'm getting so frustrated trying to make this work. I would really appreciate any assistance in getting the VMs up and running, or any alternatives for testing websites in Internet Explorer.

    Read the article
