Search Results

Search found 721 results on 29 pages for 'stdout'.


  • ghettoVCB issue

    - by romgo75
    Hi, I set up the ghettoVCB script to back up 3 VMs and run it from a crontab, but I have an issue. In my backup folder I have three different folders, one for each VM. In each folder I have the following files:

        -rw-r--r-- 1 root root 1263 Mar 17 01:51 vm1-2010-03-16--2.gz
        -rw-r--r-- 1 root root 1263 Mar 17 00:41 vm1-2010-03-16--3.gz
        -rw-r--r-- 1 root root 1261 Mar 18 01:22 vm1-2010-03-17--1.gz
        drwxr-xr-x 1 root root  980 Mar 19 23:39 vm1-2010-03-19

    The problem is the last folder: it seems that backup didn't finish. The log for that run shows:

        2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/datastore1/backup/
        2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
        2010-03-19 23:00:01 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick
        2010-03-19 23:00:01 -- info: CONFIG - ADAPTER_FORMAT = buslogic
        2010-03-19 23:00:01 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2010-03-19 23:00:01 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2010-03-19 23:00:01 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
        2010-03-19 23:00:01 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2010-03-19 23:00:01 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2010-03-19 23:00:01 -- info: CONFIG - LOG_LEVEL = info
        2010-03-19 23:00:01 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout
        2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2010-03-19 23:00:01 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        http://...
        2010-03-19 23:39:35 -- info: Initiate backup for vm1
        2010-03-19 23:39:35 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-03-19" for vm1
        Destination disk format: VMFS zeroedthick
        Cloning disk '/vmfs/volumes/datastore1/vm1/vm1_1.vmdk'...
        Clone: 0% done. ... Clone: 9% done.
        Failed to clone disk : The file already exists (39).
        Destination disk format: VMFS zeroedthick
        Cloning disk '/vmfs/volumes/datastore1/vm1/vm1.vmdk'...
        2010-03-20 00:46:20 -- info: Removing snapshot from vm1 ...
        Clone: 7% done. ... Clone: 16% done.
        2010-03-19 23:51:19 -- info: Removing snapshot from vm1 ...

    I can't run ghettoVCB any more because the VM now has a snapshot that was never deleted. I know how to delete the snapshot, but I don't understand why the script wasn't able to handle the VM backup rotation. Any idea? Thanks!

    Read the article

  • Run FTP session from bash script

    - by Adam Salkin
    I'm trying to write a bash script to test whether an FTP site that I own is running. I want the script to connect to the FTP site, log in with a dummy account and redirect the output to a file that I can then grep to confirm that the login succeeded. (I know that putting user/pass in a file is not recommended, but this dummy account is chrooted to one empty directory and can't escape to the shell, and in any case I'm the only user who can log in to a shell prompt.) I'm using the bash shell on Ubuntu. I created a file called "ftp-dummy" which looks like this:

        username
        password

    And I then did this from the prompt:

        adam$ ftp my.ftpsite.com < ftp-dummy

    This does not work - I don't see the normal welcome message and the output is:

        Password:Name (my.ftpsite.com:adam) :

    I tried removing the space between the < and the filename - same result. If I redirect the output to a testfile, the testfile shows:

        Name (my.ftpsite.com:adam): ?Invalid command

    And I still get a Password prompt on STDOUT. I also tried using echo and get the same result:

        echo -e "username \npassword \n" | ftp my.ftpsite.com

    I don't see why the normal welcome message never appears or why the input is not being read from the file. Any help would be much appreciated. Thanks, Adam
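
    A minimal sketch of the usual workaround (host, username and password below are illustrative, taken from the question): the stock ftp client reads credentials from the terminal rather than from stdin, so disabling auto-login with -n and driving the session from a here-document tends to behave better than redirecting a file into it:

        #!/bin/bash
        # Sketch: scripted FTP login check using a here-document.
        HOST=my.ftpsite.com
        USER=username
        PASS=password

        # -n disables auto-login, -i turns off interactive prompting, -v is verbose.
        ftp -inv "$HOST" <<EOF > /tmp/ftp-test.log 2>&1
        user $USER $PASS
        ls
        bye
        EOF

        # FTP reply code 230 means "user logged in".
        grep -q "^230" /tmp/ftp-test.log && echo "login OK" || echo "login FAILED"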

    Read the article

  • Powershell: Execute exe on remote server and capture output

    - by user364825
    I am trying to script the execution of an installer on remote web servers. The installer in question is also a Windows Service that hosts NServiceBus. If RDP'd into the server, the application is installed by the following command:

        &"$theInstaller" /install /serviceName:TheServiceName

    The installer prints output about its progress registering the service and connecting to the database to stdout, among other things. This works fine from an RDP session, but when I execute it remotely via PS, I get a you-can't-do-this-over-the-network message if I execute it directly or via Invoke-Command -computername $theRemoteServer:

        System.IO.FileLoadException: Could not load file or assembly 'file://\\theRemoteServer\c$\thePath\AutoMapper.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515) ---> System.NotSupportedException: An attempt was made to load an assembly from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework. This release of the .NET Framework does not enable CAS policy by default, so this load may be dangerous. If this load is not intended to sandbox the assembly, please enable the loadFromRemoteSources switch. See http://go.microsoft.com/fwlink/?LinkId=155569 for more information.

    This DLL, and others, are loaded by the service, and the service's execution context cannot, apparently, be remotified. I have also tried using Invoke-WmiMethod, which does something, but it's not clear what, and the output from the installer is lost:

        Invoke-WMIMethod win32_process create '"$theInstaller" /install /serviceName:TheServiceName' -ComputerName $server

    (with and without cmd.exe /k before the installer reference):

        __GENUS          : 2
        __CLASS          : __PARAMETERS
        __SUPERCLASS     :
        __DYNASTY        : __PARAMETERS
        __RELPATH        :
        __PROPERTY_COUNT : 2
        __DERIVATION     : {}
        __SERVER         :
        __NAMESPACE      :
        __PATH           :
        ProcessId        :
        ReturnValue      : 9

    How does one remotely execute such an EXE and capture the output? Thanks!

    Read the article

  • how to run multiple shell scripts in parallel

    - by tom smith
    I've got a few test scripts, each of which runs a test PHP app, and each script runs forever. So cat.sh, dog.sh and foo.sh each run a PHP script, and each shell script runs its PHP app in a loop, sleeping after each run. I'm trying to figure out how to run the scripts in parallel and, at the same time, see the output of the PHP apps in the stdout/terminal window. I thought simply doing something like this in a shell script would be sufficient:

        foo.sh > &2
        dog.sh > &2
        cat.sh > &2

    but it's not working:

        foo.sh runs foo.php once, and it runs correctly
        dog.sh runs dog.php in a never-ending loop; it runs as expected
        cat.sh runs cat.php in a never-ending loop *** this never runs!!!

    It appears that the shell script never gets to run cat.sh. If I run cat.sh by itself in a separate window/terminal, it runs as expected. Thoughts/comments?
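
    As a hedged sketch (assuming the three scripts are executable in the current directory), launching each one as a background job with & and then waiting keeps all three running in parallel while their output still lands on the terminal:

        #!/bin/bash
        # Sketch: run the three wrapper scripts concurrently.
        # Each job is backgrounded; wait blocks until they all exit,
        # so their interleaved stdout/stderr stays visible in the terminal.
        ./foo.sh &
        ./dog.sh &
        ./cat.sh &
        wait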

    Read the article

  • Nice level not working on linux

    - by xioxox
    I have some highly floating-point-intensive processes doing very little I/O. One is called "xspec", which calculates a numerical model and returns a floating point result back to a master process every second (via stdout). It is niced at the 19 level. I have another simple process "cpufloattest" which just does numerical computations in a tight loop. It is not niced. I have a 4-core i7 system with hyperthreading disabled. I have started 4 of each type of process. Why is the Linux scheduler (Linux 3.4.2) not properly limiting the CPU time taken up by the niced processes?

        Cpu(s): 56.2%us, 1.0%sy, 41.8%ni, 0.0%id, 0.0%wa, 0.9%hi, 0.1%si, 0.0%st
        Mem:  12297620k total, 12147472k used,   150148k free,   831564k buffers
        Swap:  2104508k total,    71172k used,  2033336k free,  4753956k cached

          PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
        32399 jss   20   0 44728  32m  772 R 62.7  0.3  4:17.93 cpufloattest
        32400 jss   20   0 44728  32m  744 R 53.1  0.3  4:14.17 cpufloattest
        32402 jss   20   0 44728  32m  744 R 51.1  0.3  4:14.09 cpufloattest
        32398 jss   20   0 44728  32m  744 R 48.8  0.3  4:15.44 cpufloattest
         3989 jss   39  19 1725m 690m 7744 R 44.1  5.8  1459:59 xspec
         3981 jss   39  19 1725m 689m 7744 R 42.1  5.7  1459:34 xspec
         3985 jss   39  19 1725m 689m 7744 R 42.1  5.7  1460:51 xspec
         3993 jss   39  19 1725m 691m 7744 R 38.8  5.8  1458:24 xspec

    The scheduler does what I expect if I start 8 of the cpufloattest processes, with 4 of them niced (i.e. 4 get most of the CPU and 4 get very little).

    Read the article

  • How to start a s3ql script automatically on boot?

    - by ks78
    I've been experimenting with s3ql on Ubuntu 10.04, using it to mount Amazon S3 buckets. However, I'd really like it to mount them automatically. Does anyone know how to do that? I've been working on a script, which works when it's run from the command line, but for some reason I can't get it to run automatically on boot. Does anyone have any ideas? Here's my script:

        #! /bin/sh
        # /etc/init.d/s3ql
        #
        ### BEGIN INIT INFO
        # Provides:          s3ql
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start daemon at boot time
        # Description:       Enable service provided by daemon.
        ### END INIT INFO

        case "$1" in
          start)
            # Redirect stdout and stderr into the system log
            DIR=$(mktemp -d)
            mkfifo "$DIR/LOG_FIFO"
            logger -t s3ql -p local0.info < "$DIR/LOG_FIFO" &
            exec > "$DIR/LOG_FIFO"
            exec 2>&1
            rm -rf "$DIR"
            modprobe fuse
            fsck.s3ql --batch s3://mybucket
            exec mount.s3ql --allow-other s3://mybucket /mnt/s3fs
            ;;
          stop)
            umount.s3ql /mnt/s3fs
            ;;
          *)
            echo "Usage: /etc/init.d/s3ql {start|stop}"
            exit 1
            ;;
        esac

        exit 0
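
    One thing worth checking (a sketch of the usual Ubuntu init plumbing; it is an assumption that this registration step is what is missing): an init script is only run at boot once it is executable and linked into the runlevels with update-rc.d:

        # Sketch: register the init script with the boot sequence on Ubuntu 10.04.
        sudo chmod +x /etc/init.d/s3ql
        sudo update-rc.d s3ql defaults
        # Confirm the start/kill symlinks were created:
        ls -l /etc/rc2.d/ | grep s3ql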

    Read the article

  • Copying email with qmail and Plesk

    - by Greg
    I need to keep a copy of all outgoing and incoming email (for a single domain if possible) using qmail or Plesk. I can't recompile qmail, so qmailtap is out of the question, as is setting QUEUE_EXTRA in extra.h. I'm pretty sure it should be possible with Plesk's mailmng utility, aka Mail Handlers, but I'm having trouble getting them to work. I've registered 2 hooks.

    Incoming hook:

        ./mailmng --add-handler --handler-name=incoming --recipient-domain=example.com --executable=/xxx/incoming.sh --context=/xxx/incoming/ --hook=before-local

    incoming.sh:

        #!/bin/bash
        # The email is passed on stdin - grab it to a variable
        e=`cat -`
        # $1 = context (/xxx/incoming)
        # $3 = recipient ([email protected])
        # Create /xxx/incoming/[email protected]
        mkdir -p $1$3
        # Save the email to /xxx/incoming/[email protected]/0123456789.txt
        echo "$e" > $1$3/`date +%s%N`.txt
        # Echo PASS to stderr
        echo 'PASS' >&2
        # Echo the email to stdout
        echo "$e"

    Outgoing hook:

        # ./mailmng --add-handler --handler-name=outgoing --sender-domain=holidaysplease.com --executable=/xxx/outgoing.sh --context=/xxx/outgoing/ --hook=before-remote

    The outgoing.sh file is the same as incoming.sh, except that $3 (recipient) is replaced with $2 (sender). The incoming hook does work, but saves 2 copies of each email - one before and one after SpamAssassin has run. The outgoing hook doesn't seem to get called at all. So finally, my questions are:

        1. How can I make the incoming hook save only a single copy (preferably after SpamAssassin has run)?
        2. How can I get the outgoing hook to work?

    Read the article

  • Dante (SOCKS server) not working

    - by gregmac
    I'm trying to set up a SOCKS proxy using dante for testing purposes. However, I can't even get it to work with a web browser, after looking at several tutorials on how to do that. I've tried in both IE and Firefox, in both cases using "Manual proxy configuration", leaving everything blank except for SOCKS host, and then putting in the IP of my proxy and the port number (1080). I just get "Server not found" / "Problems loading this page" and don't see anything in danted, even running in debug mode. If I do a "telnet 10.0.0.40 1080" I do see the connection open in the danted debug output, so I know that much is working. Here's my config:

        logoutput: stdout /var/log/danted/danted.log
        internal: eth0 port = 1080
        external: eth0
        method: username none #rfc931
        user.privileged: proxy
        user.notprivileged: nobody
        user.libwrap: nobody
        connecttimeout: 30   # on a lan, this should be enough if method is "none".

        client pass { from: 10.0.0.0/8 port 1-65535 to: 0.0.0.0/0 }
        client pass { from: 127.0.0.0/8 port 1-65535 to: 0.0.0.0/0 }
        client block { from: 0.0.0.0/0 to: 0.0.0.0/0 log: connect error }

        block { from: 0.0.0.0/0 to: 127.0.0.0/8 log: connect error }
        pass { from: 10.0.0.0/8 to: 0.0.0.0/0 protocol: tcp udp }
        pass { from: 127.0.0.0/8 to: 0.0.0.0/0 protocol: tcp udp }
        block { from: 0.0.0.0/0 to: 0.0.0.0/0 log: connect error }

    I'm sure I'm probably missing something simple, but I'm lost. I haven't even thought about SOCKS since the late 90's.
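
    One way to take the browsers out of the picture (a hedged sketch; it assumes a curl build with SOCKS support on the test client, and www.example.com is just an illustrative target) is to point curl at the proxy directly while watching danted's debug output:

        # Sketch: test the SOCKS proxy without a browser.
        curl -v --socks5 10.0.0.40:1080 http://www.example.com/
        # If client-side DNS is the problem, let the proxy resolve the name instead:
        curl -v --socks5-hostname 10.0.0.40:1080 http://www.example.com/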

    Read the article

  • Supervisord doesn't stop nginx process

    - by Lennart Regebro
    I'm using Supervisor a lot, and in this project I have an nginx process managed by Supervisord. The relevant parts of the configuration are:

        [supervisord]
        logfile=/home/projects/eceee-web/prod/var/log/supervisord.log
        logfile_maxbytes=5MB
        logfile_backups=10
        loglevel=info
        pidfile=/home/projects/eceee-web/prod/var/supervisord.pid
        ; childlogdir=/home/projects/eceee-web/prod/var/log
        nodaemon=false ; (start in foreground if true;default false)
        minfds=1024    ; (min. avail startup file descriptors;default 1024)
        minprocs=200   ; (min. avail process descriptors;default 200)
        directory=/home/projects/eceee-web/prod

        [program:nginx]
        command = /home/projects/eceee-web/prod/bin/nginx
        redirect_stderr = true
        autostart = true
        autorestart = true
        directory = /home/projects/eceee-web/prod
        stdout_logfile = /home/projects/eceee-web/prod/var/log/nginx-stdout.log
        stderr_logfile = /home/projects/eceee-web/prod/var/log/nginx-stderr.log

    The /home/projects/eceee-web/prod/bin/nginx command starts nginx in the foreground; it does not daemonize itself. Still, stopping it fails:

        supervisorctl stop nginx

    gives no answer, and the process keeps running. Any idea why? This is on OS X Darwin, with Supervisor 3.0a9 and nginx 0.7.65.

    Read the article

  • Weird mouse behaviour. Debian wheezy

    - by DevNoob
    When I move my mouse slowly over the desktop, the pointer often jumps a few pixels (one or two) in the opposite direction of the movement, which is horrible when trying to position the cursor around semicolons in Eclipse. I guess this is the result of a wrongly set resolution: the mouse was initially set really fast, and even if I do xset 1/2 3, it is just too fast and imprecise for me. I already tried configuring xorg.conf like this:

        Section "InputDevice"
            Identifier "Configured Mouse"
            Driver     "mouse"
            Option     "Device" "/dev/mouse"
            Option     "Protocol" "Auto"
            Option     "Name" "Logitech G3"
            Option     "Resolution" "2000"
        EndSection

    But with no effect - maybe because there is no /dev/mouse. This is the content of /dev; maybe you can tell me which one is the mouse:

        autofs block bsg btrfs-control bus cdrom cdrw char console core cpu cpu_dma_latency disk dvd dvdrw fd fd0 full fuse fw0 hidraw0 hidraw1 hpet input kmsg log loop0 loop1 loop2 loop3 loop4 loop5 loop6 loop7 loop-control MAKEDEV mapper mcelog mem net network_latency network_throughput null nvidia0 nvidiactl oldmem port ppp printer psaux ptmx pts random rfkill root rtc rtc0 sda sda1 sda2 sda3 sda5 sda6 sda7 sda8 sdb sdb1 sg0 sg1 sg2 shm snapshot snd sndstat sr0 stderr stdin stdout tty tty0 tty1 tty10 tty11 tty12 tty13 tty14 tty15 tty16 tty17 tty18 tty19 tty2 tty20 tty21 tty22 tty23 tty24 tty25 tty26 tty27 tty28 tty29 tty3 tty30 tty31 tty32 tty33 tty34 tty35 tty36 tty37 tty38 tty39 tty4 tty40 tty41 tty42 tty43 tty44 tty45 tty46 tty47 tty48 tty49 tty5 tty50 tty51 tty52 tty53 tty54 tty55 tty56 tty57 tty58 tty59 tty6 tty60 tty61 tty62 tty63 tty7 tty8 tty9 ttyS0 ttyS1 ttyS2 ttyS3 uinput urandom usb vcs vcs1 vcs2 vcs3 vcs4 vcs5 vcs6 vcs7 vcsa vcsa1 vcsa2 vcsa3 vcsa4 vcsa5 vcsa6 vcsa7 vga_arbiter vmci vmmon vmnet0 vmnet1 vmnet8 vsock watchdog xconsole zero

    So my question is: how do I set up my mouse correctly in Debian wheezy?

    Read the article

  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called netcat mode, which binds STDIN and STDOUT of the local SSH client to a TCP port accessible through the remote SSH server. This mode is enabled by simply calling:

        ssh -W [HOST]:[PORT]

    Theoretically this should be ideal for use in the ProxyCommand setting in per-host SSH configurations, which was previously often used with the nc (netcat) command. ProxyCommand allows you to configure a machine as a proxy between your local machine and the target SSH server, for example if the target SSH server is hidden behind a firewall. The problem is that instead of working, it throws a cryptic error message in my face:

        Bad packet length 1397966893.
        Disconnecting: Packet corrupt

    Here is an excerpt from my ~/.ssh/config:

        Host *
          Protocol 2
          ControlMaster auto
          ControlPath ~/.ssh/cm_socket/%r@%h:%p
          ControlPersist 4h

        Host proxy-host proxy-host.my-domain.tld
          HostName proxy-host.my-domain.tld
          ForwardAgent yes

        Host target-server target-server.my-domain.tld
          HostName target-server.my-domain.tld
          ProxyCommand ssh -W %h:%p proxy-host
          ForwardAgent yes

    As you can see, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per host. The client machine I tested this with is an Ubuntu 11.10 (x86_64) and both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call ssh target-server. When I call it with the -v flag, here is what I get additionally:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /home/aef/.ssh/config
        debug1: Applying options for *
        debug1: Applying options for target-server.my-domain.tld
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: Control socket "/home/aef/.ssh/cm_socket/[email protected]:22" does not exist
        debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld
        debug1: identity file /home/aef/.ssh/id_rsa type -1
        debug1: identity file /home/aef/.ssh/id_rsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_dsa type -1
        debug1: identity file /home/aef/.ssh/id_dsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1
        debug1: permanently_drop_suid: 1000
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3
        debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        Bad packet length 1397966893.
        Disconnecting: Packet corrupt

    Read the article

  • Bash Parallelization of CPU-intensive processes

    - by ehsanul
    tee forwards its stdin to every single file specified, while pee does the same, but for pipes. These programs send every single line of their stdin to each and every file/pipe specified. However, I was looking for a way to "load balance" the stdin between different pipes, so one line is sent to the first pipe, another line to the second, etc. It would also be nice if the stdout of the pipes were collected into one stream as well. The use case is simple parallelization of CPU-intensive processes that work on a line-by-line basis. I was doing a sed on a 14GB file, and it could have run much faster if I could use multiple sed processes. The command was like this:

        pv infile | sed 's/something//' > outfile

    To parallelize, the best would be if GNU parallel would support this functionality, like so (I made up the --demux-stdin option):

        pv infile | parallel -u -j4 --demux-stdin "sed 's/something//'" > outfile

    However, there's no option like this and parallel always uses its stdin as arguments for the command it invokes, like xargs. So I tried this, but it's hopelessly slow, and it's clear why:

        pv infile | parallel -u -j4 "echo {} | sed 's/something//'" > outfile

    I just wanted to know if there's any other way to do this (short of coding it up myself). If there was a "load-balancing" tee (let's call it lee), I could do this:

        pv infile | lee >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile)

    Not pretty, so I'd definitely prefer something like the made-up parallel version, but this would work too.
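
    Newer releases of GNU parallel do grow an option along the lines of the made-up --demux-stdin above; as a hedged sketch (availability depends on the installed parallel version):

        # Sketch: --pipe splits stdin into chunks, feeds one chunk to each
        # sed job, and concatenates their stdout in order.
        pv infile | parallel --pipe --block 10M -j4 "sed 's/something//'" > outfile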

    Read the article

  • Better logging for cronjob output using /usr/bin/logger

    - by Stefan Lasiewski
    I am looking for a better way to log cronjobs. Most cronjobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios server. This send_nsca script also prints a single status line to STDOUT, indicating success or failure.

        0 * * * * root /usr/local/nagios/sbin/nsca_check_disk

    This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam.

        forwarded nsca_check_disk: 1 data packet(s) sent to host successfully.

    I'm looking for a log method which:

        - Doesn't spam the messages to email or the console
        - Doesn't create yet another krufty logfile which requires cleanup months or years later
        - Captures the log information somewhere, so it can be viewed later if desired
        - Works on most unixes
        - Fits into an existing log infrastructure
        - Uses common syslog conventions like 'facility'

    Some of these are third-party scripts, and don't always do logging internally.

    UPDATE 2010-04-30: In the process of writing this question, I think I have answered myself, so I'll answer myself "Jeopardy-style". Is there any problem with this method? The following sends any cron output to /usr/bin/logger, which passes it to syslog with a 'tag' of 'nsca_check_disk'. Syslog handles it from there. My systems (CentOS and FreeBSD) already handle log rotation.

        */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk

    /var/log/messages now has one additional message which says this:

        Apr 29, 17:40:00 192.168.6.19 nsca_check_disk: 1 data packet(s) sent to host successfully.

    I like /usr/bin/logger because it works well with an existing syslog configuration and infrastructure, and is included with most Unix distros. Most *nix distributions already do log rotation, and do it well.

    Read the article

  • background jobs and ssh connections

    - by petrelharp
    This question has come up quite a lot (really a lot), but I'm finding the answers to be generally incomplete. The general question is "Why does/doesn't my job get killed when I exit/kill ssh?", and here's what I've found. The first question is: how general is the following information? The following seems to be true for modern Debian linux, but I am missing some bits; and what do others need to know?

        1. All child processes, backgrounded or not, of a shell opened over an ssh connection are killed with SIGHUP when the ssh connection is closed only if the huponexit option is set: run shopt huponexit to see if this is true.
        2. If huponexit is true, then you can use nohup or disown to dissociate the process from the shell so it does not get killed when you exit.
        3. If huponexit is false, which is the default on at least some linuxes these days, then backgrounded jobs will not be killed on normal logout. But even if huponexit is false, if the ssh connection gets killed, or drops (different from a normal logout), then backgrounded processes will still get killed. This can be avoided by disown or nohup as in (2).
        4. There is some distinction between (a) processes whose parent process is the terminal and (b) processes that have stdin, stdout, or stderr connected to the terminal. I don't know what happens to processes that are (a) and not (b), or vice versa.

    Final question: how can I avoid behavior (3)? In other words, by default in Debian backgrounded processes run along merrily by themselves after logout, but not after the ssh connection is killed. I'd like the same thing to happen to processes regardless of whether the connection was closed normally or killed. Or, is this a bad idea?
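
    A hedged sketch of the usual belt-and-braces approach, which survives both a clean logout and a dropped connection regardless of huponexit (long_running_command is a placeholder): detach the job from the shell and the terminal entirely.

        # Sketch: start a job that outlives the ssh session either way.
        nohup long_running_command > ~/job.log 2>&1 &
        disown -h        # in bash, marks the current job so it is not sent SIGHUP on exit

        # Or avoid the controlling terminal altogether by starting a new session:
        setsid long_running_command > ~/job.log 2>&1 < /dev/null &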

    Read the article

  • Creating a pseudoterminal to make sudo happy

    - by larsks
    I need to automate the provisioning of a cloud instance (running Fedora 17) for which the following initial facts are true: I have ssh-key based access to a remote user (cloud), and that user has password-free root access via sudo. Manual configuration is as simple as logging in and running sudo su - and having at it, but I would like to fully automate this process. The trick is that the system defaults to having the requiretty option enabled for sudo, which means that an attempt to do something like this:

        ssh remotehost sudo yum -y install puppet

    will fail:

        sudo: sorry, you must have a tty to run sudo

    I am working around this right now by first pushing over a small Python script that will run a command on a pseudoterminal:

        import os
        import sys
        import errno
        import subprocess

        pid, master_fd = os.forkpty()
        if pid == 0:
            # child process: now that we're attached to a
            # pty, run the given command.
            os.execvp(sys.argv[1], sys.argv[1:])
        else:
            while True:
                try:
                    data = os.read(master_fd, 1024)
                except OSError, detail:
                    if detail.errno == errno.EIO:
                        break
                if not data:
                    break
                sys.stdout.write(data)
            os.wait()

    Assuming that this is named pty, I can then run:

        ssh remotehost ./pty sudo yum -y install puppet

    This works fine, but I'm wondering if there are solutions already available that I haven't considered. I would normally think about expect, but it's not installed by default on this system. screen can do this in a pinch, but the best I came up with was:

        screen -dmS sudo somecommand

    ...which does work but eats the output. Are there any other tools available that will allocate a pseudoterminal for me that are going to be generally available?
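
    For this particular error there is also a much smaller hammer worth sketching: ssh can allocate the pseudo-terminal itself with -t, which is usually enough to satisfy requiretty without any helper script.

        # Sketch: force pseudo-terminal allocation for the remote sudo.
        ssh -t remotehost sudo yum -y install puppet
        # -tt forces a tty even when ssh itself has no local terminal
        # (e.g. when invoked from another script or from cron):
        ssh -tt remotehost sudo yum -y install puppet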

    Read the article

  • Why is cron mailing me program output even though I've redirected to /dev/null?

    - by Server Fault
    I'm trying to restart a system process through cron and getting emailed the startup output of the process. I thought redirecting STDOUT and STDERR to /dev/null would "silence" the output but alas, this has not worked. How can I get cron to silently restart this service? The crontab entry:

        0 6 * * * service sympa stop &>/dev/null; service sympa start &> /dev/null

    Sample output from the restart email:

        Stopping Sympa bounce manager bounced ...done.
        * Stopping Sympa task manager task_manager ...done.
        * Stopping Sympa mailing list archive manager archived ...done.
        * Stopping Sympa mailing list manager sympa ...done.
        ... waiting
        Prototype mismatch: sub Lock::LOCK_SH () vs none at /home/sympa/bin/Lock.pm line 38.
        Constant subroutine LOCK_SH redefined at /home/sympa/bin/Lock.pm line 38.
        Prototype mismatch: sub Lock::LOCK_EX () vs none at /home/sympa/bin/Lock.pm line 39.
        Constant subroutine LOCK_EX redefined at /home/sympa/bin/Lock.pm line 39.
        Prototype mismatch: sub Lock::LOCK_NB () vs none at /home/sympa/bin/Lock.pm line 40.
        Constant subroutine LOCK_NB redefined at /home/sympa/bin/Lock.pm line 40.
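
    A likely cause, offered as a hedged guess (it depends on which shell cron uses to run the line): &> is a bash-ism, and cron typically hands the command to /bin/sh, where it does not mean "redirect both streams". The portable spelling is:

        # Sketch: Bourne-compatible redirection for the crontab entry.
        0 6 * * * service sympa stop > /dev/null 2>&1; service sympa start > /dev/null 2>&1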

    Read the article

  • Protect all XML-RPC calls with HTTP basic auth but one

    - by bodom_lx
    I set up a Django project for smartphones, serving XML-RPC methods over HTTPS and using basic auth. All XML-RPC methods require username and password. I would like to implement an XML-RPC method to provide registration to the system. Obviously, this method should not require username and password. The following is the Apache conf section responsible for basic auth:

        <Location /RPC2>
            AuthType Basic
            AuthName "Login Required"
            Require valid-user

            AuthBasicProvider wsgi
            WSGIAuthUserScript /path/to/auth.wsgi
        </Location>

    This is my auth.wsgi:

        import os
        import sys

        sys.stdout = sys.stderr
        sys.path.append('/path/to/project')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'

        from django.contrib.auth.models import User
        from django import db

        def check_password(environ, user, password):
            """
            Authenticates apache/mod_wsgi against Django's auth database.
            """
            db.reset_queries()
            kwargs = {'username': user, 'is_active': True}
            try:
                # checks that the username is valid
                try:
                    user = User.objects.get(**kwargs)
                except User.DoesNotExist:
                    return None
                # verifies that the password is valid for the user
                if user.check_password(password):
                    return True
                else:
                    return False
            finally:
                db.connection.close()

    There are two dirty ways to achieve my aim in the current situation:

        1. Have a dummy username/password to be used when trying to register to the system
        2. Have a separate Django/XML-RPC application on another URL (e.g. /register) that is not protected by basic auth

    Both of them are very ugly, as I would also like to define a standard protocol to be used for services like mine (it's an open Dynamic Ridesharing Architecture). Is there a way to unprotect a single XML-RPC call (i.e. a defined POST request) even if all XML-RPC calls over /RPC2 are protected?

    Read the article

  • How does rsync --daemon know which way it is being run?

    - by Skaperen
    I want to run rsync over an SSL/TLS encrypted connection. It does not do this directly, so I am exploring options. The stunnel program looks promising, although it is more complicated than designed due to the need to hop connections with the -r option. However, I do find there is a -l option to run a program. I am assuming this works by having two processes: one to carry out the SSL/TLS work, and one to be the worker which the client is communicating with. These would then communicate by a pipe pair or a two-way socket between them. What struck me as odd when I surveyed a number of web pages on how to properly set this up is that, whether it is run as a standalone daemon or under a super-daemon like inetd, the arguments for rsync are the same. How does rsync --daemon know whether it should open a socket and listen on it for many connections, or just service one connection by communicating with the stdin/stdout descriptors it has when it starts up (which would then really go through the extra process handling the encryption, decryption, and SSL/TLS protocol layer)? And then I need to find a way to wrap the client so it does SSL/TLS in one simple command (as opposed to the connection hopping that stunnel seems to favor).

    Read the article

  • redirecting output from telnet / nc to file in script fails when cron'd

    - by qhartman
    So, I have a device on my network which sits there listening on a port for a connection, and when a connection is made it dumps ASCII data out. I need to capture that data to a file. I wrote a dead simple shell script that does this:

        #!/bin/bash
        #Config Variables. Age is in Days.
        DATA_ROOT=/root/data
        FILENAME=data_`date +%F`.dat
        HOST=device
        COMPRESS_AGE=3

        #Sanity Checks
        if [ ! -e $DATA_ROOT ]
        then
            echo "The directory $DATA_ROOT seems to not exist. Please create it."
            exit 1
        fi

        if [ -e $DATA_ROOT/$FILENAME ]
        then
            echo "You seem to have extracted data already today. Aborting"
            exit 1
        fi

        #Get Data
        nc $HOST 2202 > $DATA_ROOT/$FILENAME

        #Compress old Data
        find $DATA_ROOT -type f -mtime +$COMPRESS_AGE -exec gzip {} \;

        exit 0

    It works great when I run it by hand, but when I run it from cron, it doesn't capture any of the output. If I replace nc with telnet I see the initial telnet headers about escape sequences and whatnot, but not the data. Ideas? I've tried forcing bash to act like an interactive shell with -i. I've tried redirecting both stderr and stdout. I know it's got to be some silly simple thing, but I'm utterly failing. This is driving me nuts...

    EDIT: I also just noticed that the nc processes from all my previous attempts at this have been sitting there sleeping, and when I killed them, cron sent me a bunch of nonsensical error messages. At least now I have something to dig into!
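
    One hedged guess worth testing, given those sleeping nc processes: with no terminal attached under cron, nc may sit waiting on stdin instead of just reading from the device; explicitly closing stdin (or using the "don't read stdin" flag that some netcat builds provide) often changes the behaviour:

        # Sketch: stop nc from waiting on stdin when run from cron.
        nc $HOST 2202 < /dev/null > $DATA_ROOT/$FILENAME
        # Some netcat builds also accept -d ("do not read from stdin"):
        # nc -d $HOST 2202 > $DATA_ROOT/$FILENAME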

    Read the article

  • Using screen to monitor non-interactive scripts (or some other solution)

    - by Michael
    I have some autonomous scripts that run commands on remote machines over ssh. These scripts rely on getting stdout, stderr, and the return code of each command run. I want to be able to monitor the progress of the scripts on each target machine so that I can see if something has hung and possibly intervene if necessary. My initial idea was to have the scripts run commands in a screen session, so that the person monitoring could simply attach to the session with screen -x. However, it was hard to do that from a script since screen is an interactive program. I can send a command to the screen session with screen -S session -X stuff "command^M", but then I don't get the output and return code that I need back. My second idea was to put script /path/to/log in ~/.bash_profile and log the entire session to a file. Then the monitoring person could simply tail the log file. However, this doesn't provide the interactivity that I was looking for. Any ideas on how to solve this problem?
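
    A minimal sketch along those lines, assuming nothing fancier than tee (target-host and some_command are placeholders): mirror each remote command's combined output to a log the monitoring person can watch with tail -f, while the script still sees the output and the exit code it needs.

        #!/bin/bash
        # Sketch: run a remote command, keep its output and return code,
        # and mirror progress to a file a human can `tail -f`.
        LOG=/tmp/progress.$$.log
        ssh target-host "some_command" 2>&1 | tee -a "$LOG"
        RC=${PIPESTATUS[0]}            # exit status of ssh, not of tee
        echo "remote command finished with rc=$RC" | tee -a "$LOG"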

    Read the article

  • How to search file for matching whole lines?

    - by WilliamKF
    I have a command which sends to stdout a series of numbers, each on a new line. I need to determine whether a particular number exists in the list. The match needs to be exact, not a subset. For example, a simple way to approach this that does not work would be:

        /run/command/outputing/numbers | grep -c <numberToSearch>

    This gives a false positive on the following list when searching for '456':

        1234567
        98765
        23
        1771

    If the count is non-zero, a match was found; if it is zero, the number is not in the list. The issue with this is that the numberToSearch could match a subsequence of digits on a line, whereas I only want hits on the whole line. I looked at the man page for grep and did not see any way to match only whole lines. Is there a way to do this, or would I be better off using awk or sed or some other tool instead? I need a binary answer as to whether the number being searched for is present or not.
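
    grep does have whole-line matching; a quick hedged sketch using -x (match whole lines only), -F (treat the pattern as a literal string) and -q (no output, just an exit status):

        # Sketch: exit status 0 if the exact line "456" is present, non-zero otherwise.
        if /run/command/outputing/numbers | grep -qxF "456"; then
            echo "number found"
        else
            echo "number not found"
        fi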

    Read the article

  • Timestamp Updating Constantly on /dev/null

    - by motorleague
    I've been working on a problem with a /dev/null file on an AIX system (for background, it looks as though it was inadvertently deleted and recreated as a normal file by somebody), but in trying to determine what caused the problem, I noticed that the timestamp on it seems to update every minute. I've observed this on several AIX servers at my workplace. At present I can't entirely rule out this being something specific to the application used at my workplace, so I compared with CentOS and Debian based computers at home last night. The CentOS box, which runs 24 hours, had a modification time on /dev/null of around 4 days ago (during which time it was essentially just being used as a web browser and multimedia player, although it would have had active but essentially unused Apache, MySQL and VMM processes running in the background). The timestamp on /dev/null on the Debian machine, which was a just-booted laptop, pretty much reflected the boot time, but I tested redirecting STDIN from, and STDOUT to, it, and the modification time was unchanged (I'm not 100% sure whether directing data to /dev/null constitutes "writing to it" in the way it would for a normal file). So my question is essentially: could anybody please offer any advice with regards to what circumstances (permissions changes etc. aside) might cause the timestamp on /dev/null to update? Thanks very much for any suggestions. Alex.

    Read the article

  • Unable to create files in a directory

    - by vamsi360
    I have created a directory inside a VirtualBox VM. Using VBoxManage from the Ubuntu host OS, I am executing a script that lives in that directory inside the Ubuntu VM. But if the script in the VM contains commands for creating a new file, they are not executed; "echo" commands before and after the touch command work fine. I even used the root user for the VBoxManage call. I think the directory is not allowing the files to be created. How can I make a directory in Linux mode 777 so that all new files can be created in it automatically? I mean, even if I make the directory (chmod 777 dir), I am unable to execute the script from the host. It may be a simple permissions problem; even root is unable to execute it. This is the command I am running:

        VBoxManage guestcontrol "Ubuntu_10_04" execute --image "/bin/bash" "/home/cloudlet/Desktop/temp2/three" --username root --password root --verbose --wait-exit --wait-stdout -- -l /usr

    Please help. I have been struggling with this problem for the past week.

    Read the article

  • rsync via cron. How do I enable logging?

    - by tetranz
    Hi all. I'm backing up a remote server to another computer using rsync. In cron.daily I have a file with this:

        rsync -avz -e ssh [email protected]:/ /mybackup/

    It uses a public/private key pair to log in. This seems to work well most of the time; however, I've (foolishly) only ever really checked it by looking at the dates on some important files (MySQL dumps) that I know change every day. Obviously, an error could occur after that file. Sometimes it fails: when I run it manually, something like "client reset" sometimes happens. What is the best way to log it so that I can check with certainty whether it completed or not? The cron log doesn't indicate any errors. I haven't tried it, but the rsync man page on the oldish version of CentOS on the backup machine doesn't show the --log-file option. I guess I could redirect stdout, but I don't really want to know about every file - I just want to know if it all worked or not. Thanks
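
    A hedged sketch of one way to do this without --log-file (paths are illustrative, and the source address is copied from the question): rely on rsync's exit status rather than its per-file output, drop -v so the log stays small, and append a one-line verdict plus any errors to a log owned by the cron job.

        #!/bin/sh
        # Sketch: log only rsync's errors plus a success/failure summary line.
        LOG=/var/log/backup-rsync.log

        echo "$(date '+%F %T') starting backup" >> "$LOG"
        rsync -az -e ssh [email protected]:/ /mybackup/ > /dev/null 2>> "$LOG"
        RC=$?
        if [ "$RC" -eq 0 ]; then
            echo "$(date '+%F %T') backup completed OK" >> "$LOG"
        else
            echo "$(date '+%F %T') backup FAILED, rsync exit code $RC" >> "$LOG"
        fi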

    Read the article
