Search Results

Search found 3548 results on 142 pages for 'unix'.


  • Encoding with FFmpeg using a FIFO

    - by Ashot Martirosyan
    Hello everyone. I'm trying to convert a FLAC audio file to an AAC file from the command line. So I wrote this:

        ffmpeg -i input.flac temp.wav
        faac -q 120 -o output.m4a temp.wav

    It's working fine. Now I want to do the same using a FIFO, so I'm writing this:

        mkfifo temp.wav
        ffmpeg -i input.flac temp.wav &
        faac -q 120 -o output.m4a temp.wav

    And it's freezing. So could you tell me what I'm doing wrong? Thanks a lot, and sorry for my English.
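    A hedged guess at the freeze, not a confirmed diagnosis: ffmpeg sees the existing FIFO as a file it would overwrite and blocks on its interactive "Overwrite? [y/N]" prompt, which a backgrounded process can never answer. A minimal sketch with -y to skip the prompt, starting the reader first so the pipe has a consumer:

        mkfifo temp.wav
        faac -q 120 -o output.m4a temp.wav &   # the reader attaches to the pipe first
        ffmpeg -y -i input.flac temp.wav       # -y answers the overwrite prompt non-interactively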


  • What does directory permission 'S' mean? (upper case, not lower case)

    - by Howard Guo
    I downloaded Eclipse, uncompressed it, did a few other things, and all of a sudden I noticed this interesting behaviour:

        ^_^ ~/Downloads > sudo chmod 0000 eclipse/
        ^_^ ~/Downloads > stat eclipse/
          File: 'eclipse/'
          Size: 4096   Blocks: 8   IO Block: 4096   directory
        Device: 801h/2049d   Inode: 529725   Links: 9
        Access: (2000/d-----S---)  Uid: (0/root)  Gid: (0/root)
        Access: 2012-11-22 19:54:57.752017352 +1100
        Modify: 2012-09-20 18:16:26.000000000 +1000
        Change: 2012-11-22 20:07:49.354016510 +1100
         Birth: -
        ^_^ ~/Downloads > sudo chmod 0755 eclipse/
        ^_^ ~/Downloads > stat eclipse/
          File: 'eclipse/'
          Size: 4096   Blocks: 8   IO Block: 4096   directory
        Device: 801h/2049d   Inode: 529725   Links: 9
        Access: (2755/drwxr-sr-x)  Uid: (0/root)  Gid: (0/root)
        Access: 2012-11-22 19:54:57.752017352 +1100
        Modify: 2012-09-20 18:16:26.000000000 +1000
        Change: 2012-11-22 20:08:19.042016478 +1100
         Birth: -

    What does the 'S' permission mean on a directory? And why won't it let me get rid of it? Thanks.
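    For what it's worth, that S is the setgid bit with group execute absent, and GNU chmod deliberately leaves a directory's setuid/setgid bits alone when given an ordinary numeric mode, so 0755 does not clear it. A sketch of clearing it explicitly with a symbolic mode:

        sudo chmod g-s eclipse/     # clear the setgid bit explicitly
        sudo chmod 0755 eclipse/    # then apply the plain permissions
        stat -c '%A' eclipse/       # should now show drwxr-xr-x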


  • RHEL 5 list missing critical patches/packages

    - by Vinnie Biros
    I'm trying to figure out if there is an easy way to identify the missing critical patches/packages on my RHEL 5 boxes. This is for audit purposes, and I was trying to figure out if there is an RPM command or something of the sort that would accomplish this easily. I know that on my Solaris 10 boxes I can run the "smpatch analyze" command, which displays this information for me. Does anyone know of anything similar for RHEL 5? Thanks.
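    One possibility, assuming the box is registered with RHN and yum's security plugin is available, is to ask yum for just the security errata, roughly analogous to smpatch analyze:

        yum install yum-security       # the RHEL 5 name of the plugin package
        yum --security check-update    # show only security-relevant updates
        yum list-security              # list the applicable advisories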


  • Timeout ssh sessions after inactivity?

    - by Insyte
    PCI requirement 8.5.15 states: "If a session has been idle for more than 15 minutes, require the user to re-enter the password to re-activate the terminal." The first, and most obvious, way to deal with ssh sessions that are idling at the bash prompt is by enforcing a read-only, global $TMOUT of 900. Unfortunately, that only covers sessions sitting at the bash prompt. The spirit of the PCI spec would also require killing sessions running top/vim/etc. I've considered writing a */1 cron job that parses the output of "/usr/bin/w" and kills the associated shell, but that seems like a blunt instrument. Any ideas for something that would actually do what the spec requires and just lock the terminal? I've looked at away and vlock; they both seem great for voluntarily locking your terminal, but I need a cron/daemon task that will enforce locking.
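    For the bash-prompt half of the requirement, a minimal sketch of the usual baseline, assuming a system-wide profile drop-in is acceptable (the filename is a suggestion); note this only covers interactive shells sitting at a prompt, not sessions running top/vim:

        # /etc/profile.d/tmout.sh
        TMOUT=900          # bash exits an interactive session idle this many seconds
        readonly TMOUT     # read-only, so users cannot simply unset it
        export TMOUT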


  • In Linux: how to exit a program but not kill it?

    - by biomed
    I use Ubuntu 10.10 and I have a Python program (mnemosyne) whose data files I synchronize using Dropbox. Here is my problem scenario: I leave the program running at home and go to work, but if I open the program at work and work in it, the data file is changed, and I lose that progress when I later exit at home (it automatically saves on exit). I thought I could create a cron job to automatically close mnemosyne every morning, regardless of whether I remember to do it or not, but if I use kill, the program exits without saving the data file, and I end up with a tmp file and an error message when I restart it. Is there a better way of sending the exit signal to this program, emulating me clicking the File > Exit menu option? Thanks
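    A hedged alternative: a plain kill already sends SIGTERM, which this app apparently doesn't treat as a save-and-quit, so one option is to ask the window manager to close the window instead, which should trigger the same save-on-exit path as clicking File > Exit (the window title is a guess, and DISPLAY must be set when run from cron):

        # deliver a graceful close event to the app's window (title is hypothetical)
        DISPLAY=:0 wmctrl -c "Mnemosyne"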


  • Linux released memory

    - by user59088
    If my process allocates some big chunk of memory and then deallocates it, would top or gnome-system-monitor show that the memory usage of that process decreased, or will the kernel still reserve that memory for the process? What I see is that I am deallocating memory, but I still see gnome-system-monitor displaying growing memory for my program, and I can't find a memory leak on my end. I want to know whether it's just not displaying released memory, or whether there really is a memory leak on my end.
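    For context, this is how glibc's allocator typically behaves: free() usually returns memory to the process's own heap rather than to the kernel (large mmap-backed allocations being the exception), so the numbers in top or gnome-system-monitor often do not shrink even when there is no leak. A hedged way to distinguish the two cases from a shell is to watch whether usage grows without bound across repeated allocate/free cycles:

        PID=12345   # hypothetical: your program's process id
        while sleep 1; do ps -o rss=,vsz= -p "$PID"; done   # steady numbers suggest allocator caching; unbounded growth suggests a leak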


  • Cron process not starting

    - by vkris
    I have an EC2 image created with cron jobs. These jobs fail to run; I discovered that the cron process itself had not started. So I included /usr/sbin/cron in /etc/rc.d/rc.local and created another image, but for some reason the cron process still does not start on bootup. If I restart the machine, the cron process runs; it just doesn't run when the machine first boots up! Any reason why this is happening? Also, are there any alternatives for this?
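    Assuming a Red Hat-style image where the daemon is named crond, the more conventional fix than rc.local is to register the service with init so it starts at boot; on Debian/Ubuntu the rough equivalent is update-rc.d cron defaults:

        chkconfig crond on     # enable crond at the default runlevels
        service crond start    # start it now without rebooting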


  • Bit-shifting a file

    - by mykhal
    I wonder if there is a utility to read and print a (binary) file shifted by some number of bits (I mean, it should accept amounts that are not divisible by 8), something like dd (and its skip option), but bit-wise instead of byte-wise. (If you think that there is no such thing and are going to implement it here, please use C. I have my own bit-shifting thing for strings, written in Python, but it is surely relatively slow as hell.)


  • Can't remove/delete symlink

    - by user477519
    I tried to create a symlink and it threw this error:

        ln: accessing `.test': Permission denied

    Now I can't unlink or delete the symlink. I tried Googling for help but could not find a solution. Here are the results of the following commands.

    stat .test:

          File: `.test'
        stat: cannot read symbolic link `.test': Permission denied
          Size: 26   Blocks: 0   IO Block: 16384   symbolic link
        Device: 1fh/31d   Inode: 312075453   Links: 1
        Access: (0777/lrwxrwxrwx)  Uid: (11160/chatt)  Gid: (11307/pgr)
        Access: 2012-11-12 11:36:51.167327500 +0000
        Modify: 2012-11-12 11:36:51.163331700 +0000
        Change: 2012-11-12 11:36:51.163331700 +0000
         Birth: -

    chattr -i .test:

        chattr: Permission denied while trying to stat .test

    lsattr .test:

        lsattr: Operation not supported While reading flags on .test

    Any help would be appreciated. Thanks
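    A hedged diagnosis rather than a fix: permission to unlink is governed by the directory containing the link, not by the link's own 0777 mode, and the Device: 1fh/31d plus the 16384-byte IO block suggest a network filesystem, where root squash can produce exactly these errors. A first check:

        ls -ld .     # unlinking needs write and execute permission on the parent directory
        rm .test     # retry as the directory's owner, or as real root on the file server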


  • GNU Screen: one window per screen or one screen with multiple windows?

    - by yalestar
    I've inherited a few sys admin tasks recently and am trying to wrap my head around using screen. The way the previous guy left it, there are four screen sessions running, some of which have two or three windows running within. It doesn't appear that he was using any particular convention, so I ask you: Is it better to have each process in its own screen session, or better to group similar processes into a single screen? Or something different entirely?


  • What do these acronyms stand for?

    - by Luc M
    Some directories' meanings are easy to understand: /usr, /bin... But for the next ones I have no idea: /etc and /opt. Is opt for "optional"? Is etc for "electronic t...... configuration" (no idea for the t)? I would like to know what these acronyms mean.


  • BASH - Run command for each line in output of previous command

    - by user1582375
    All, I want to list all network services using:

        networksetup -listallnetworkservices

    I then want to run the command below for each line produced by the command above:

        networksetup -setautoproxyurl "A LINE FROM ABOVE" http://etc...

    Additionally, I only want to issue the setautoproxyurl command for services with "Ethernet" or "Wi-Fi" in the name. This is what I have so far:

        networksetup -listallnetworkservices | while read line; do networksetup -setautoproxy $line http://etc...
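    A sketch of one way to finish this, with the proxy URL as a placeholder: the first line networksetup prints is a legend rather than a service, hence the tail, and the service name must stay quoted because names like "Thunderbolt Ethernet" contain spaces.

        networksetup -listallnetworkservices | tail -n +2 | while IFS= read -r service; do
            case "$service" in
                *Ethernet*|*Wi-Fi*)
                    # -setautoproxyurl (not -setautoproxy); the URL is hypothetical
                    networksetup -setautoproxyurl "$service" "http://example.com/proxy.pac"
                    ;;
            esac
        done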


  • Better logging for cronjob output

    - by Stefan Lasiewski
    I am looking for a better way to log cron jobs. Most cron jobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios server. This send_nsca script also prints a single status line to STDOUT, indicating success or failure.

        0 * * * * root /usr/local/nagios/sbin/nsca_check_disk

    This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam.

        forwarded nsca_check_disk: 1 data packet(s) sent to host successfully.

    I'm looking for a log method which: doesn't spam the messages to email or the console; doesn't create yet another crufty logfile which requires cleanup months or years later; captures the log information somewhere, so it can be viewed later if desired; works on most Unixes; and fits into an existing log infrastructure, using common syslog conventions like 'facility'. Note that some of these are third-party scripts and don't always do logging internally.
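    One approach that fits these constraints is piping the job's output into logger(1), which hands each line to syslog with a tag and a facility/priority and is available on most Unixes; the facility shown is a suggestion:

        0 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | logger -t nsca_check_disk -p cron.info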


  • Help updating cron entry using regular expressions

    - by Uday
    Hi, I am trying to update a cron entry, NOT by using crontab -e but by shell commands. For example, the cron entry is like this:

        10 * * * * /home/localuser/foo.sh -b 1 -h 4 > foo_output.sh 2>&1

    Now I need to edit ONLY the command-line parameters part, i.e. -b 1 -h 4, to something else which will be coming in from the user. The first thing would be to write the crontab to a temp file and then manipulate that temp file. Now, is there an easy way to edit that line using sed or something? The crude way would be to delete the entire line, write a new line with the entire expression, and then load that into cron. I am not very good with regular expressions. My system supports sed -i, so I was thinking this could be done in a single-line command. Thanks in advance
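    A minimal sketch along those lines, with the replacement values as examples only; the pipe delimiter keeps the slashes in the path from needing escapes, and the backreference preserves the script name:

        crontab -l > /tmp/cron.$$                                                # dump the current table
        sed -i 's|\(foo\.sh\) -b [0-9]* -h [0-9]*|\1 -b 2 -h 8|' /tmp/cron.$$    # -b 2 -h 8 are example values
        crontab /tmp/cron.$$ && rm -f /tmp/cron.$$                               # load it back and clean up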


  • Logging commands executed by remote shell scripts

    - by user145836
    I've noticed that when running a script that connects to a number of our servers (to essentially run batch commands), the commands aren't logged in the user's .sh_history or .bash_history files. Is there a place where this is logged (assuming the script itself isn't doing the logging and I'm not tee'ing the output anywhere)? I'm talking specifically about AIX, but I would assume this question applies to all the *nix flavors. Thanks!
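    That is expected: non-interactive shells do not write history files, so nothing lands in .sh_history or .bash_history. Short of OS-level auditing, one hedged sketch is to run the batch script under shell tracing and hand the trace to syslog (the host and path are hypothetical):

        ssh somehost 'sh -x /path/to/batch.sh' 2>&1 | logger -t batch-audit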


  • After setting ulimit to unlimited, I am not able to log in to the machine

    - by user419534
    For one requirement, I had to set a ulimit on one of my machines to unlimited. For this I changed the following in /etc/security/limits.conf as below:

        # End of file
        oracle soft nofile unlimited
        oracle hard nofile unlimited
        oracle soft nproc 131072
        oracle hard nproc 131072
        oracle soft core unlimited
        oracle hard core unlimited
        oracle soft memlock 50000000
        oracle hard memlock 50000000
        * soft nofile unlimited
        * hard nofile unlimited

    and changed /etc/profile:

        if [ $USER = "oracle" ]; then
            if [ $SHELL = "/bin/ksh" ]; then
                ulimit -p unlimited
                ulimit -n unlimited
            else
                ulimit -u unlimited -n unlimited
            fi
        fi

    I logged out, and now I am not able to connect to the machine at all. Could someone please help with this?
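    A hedged diagnosis of the lockout: on Linux, nofile cannot be set to unlimited, and when limits.conf asks for it, pam_limits fails and refuses to open any session, locking everyone out; separately, depending on the bash version, the combined "ulimit -u unlimited -n unlimited" may itself be rejected. After getting back in via single-user mode or a rescue console, a safer fragment might be:

        # nofile must be a number on Linux; it cannot exceed /proc/sys/fs/nr_open
        oracle soft nofile 65536
        oracle hard nofile 65536
        *      soft nofile 65536
        *      hard nofile 65536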


  • rsync --link-dest behaviour when run as sudo

    - by fotNelton
    In order to create regular backups, I'm using rsync together with --link-dest so as to create hard links for unchanged files. For example:

        rsync -ax \
            --partial --delete --delete-excluded --inplace \
            --exclude-from=/tmp/temp_excludes \
            --link-dest=/Volumes/Backup/current \
            /Users /Volumes/Backup/2012-06-25

    This works very well as long as I start the process from my normal user account. But as soon as I start the process using sudo it behaves erratically, meaning that rsync copies all the unchanged files instead of hard-linking them. Since sudo modifies the environment, I've also tried sudo -E in conjunction with making sure that my sudoers file has the corresponding option set. Well, that didn't work either. So, the question is: how can I run rsync using sudo? Whereas the above example only shows a backup of the Users directory, I also need to back up some system files that I can only access as root.
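    A hedged guess at the mechanism: under sudo, -a's implied -o and -g preserve owner and group, and because the earlier user-run backups could not set ownership, every file's metadata now differs from its copy under --link-dest, which disqualifies it from hard-linking. One sketch is to stop matching on ownership (paths as in the question):

        sudo rsync -ax --no-owner --no-group \
            --link-dest=/Volumes/Backup/current \
            /Users /Volumes/Backup/2012-06-25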


  • Determine the architecture of a Mac from the command line or script

    - by Brian Postow
    I'm writing a shell script, and I need to know the architecture, i.e. PPC or Intel. Back in the day there was a program /bin/arch that told you, but my Mac doesn't seem to have it... Is there an easy way I can do this? Grep for something in a logfile? Call some other program that spits it out as a side effect? It would be nice to know what OS version I'm running too, but that may not be necessary. Thanks
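    A sketch of the usual probes, all of which ship with OS X:

        uname -m                   # machine type, e.g. ppc, i386, x86_64
        uname -p                   # processor family, e.g. powerpc or i386
        sw_vers -productVersion    # OS version, e.g. 10.6.8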


  • NGINX AEGIR DRUPAL permissions 403 forbidden

    - by nlam
    I'm new to nginx, installed on Mac OS for use with Aegir and Drupal. It's running great, but I have a problem with permissions. My hostmaster installation is here:

        /var/aegir/hostmaster-6.x-1.7/

    The hostmaster settings file is here:

        /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php

    Permissions for settings.php are set to 440 automatically by hostmaster, but I'm getting a 403 Forbidden page because of this. If I give read permission to "other" the site works great (444 or even 004). Drupal is also telling me that the file system paths are not writable (sites/aegir.ldev/files and sites/aegir.ldev/private); I would have to change the permissions there too. Moreover, I would also have to change permissions for every site installed by hostmaster. Anyway, in my nginx.conf I have the following:

        user "myuser" _www;

    Owner and group for settings.php, sites/example.ldev/files, and sites/example.ldev/private are "myuser" and "_www". Changing permissions to 004 solves this problem, but really confuses me: why do "other" have permission and not owner or group? I've checked the processes running in Activity Monitor. nginx is running as "myuser", except for one process running as root. So I'm stumped. Hope someone can help.
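    A hedged first check rather than an answer: the process that has to read settings.php is often not nginx itself but the PHP (FastCGI/FPM) backend, and if that backend runs as a third user, only the "other" permission bits help, which would explain exactly this symptom:

        ps aux | egrep 'nginx|php'   # compare the users of the nginx workers and the PHP processes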


  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand, and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children. (The program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
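    One detail that supports the speculation: RLIMIT_NPROC counts every process belonging to the user, not just the children of this shell, so the headroom left for new forks depends on everything else running under the same account. A quick way to see the baseline:

        ps -u "$USER" --no-headers | wc -l   # processes already counted against ulimit -u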


  • Script not dumping files to the expected path

    - by user3319390
    I have a script that needs to extract the matching cname from each line of my log file and then, based on the matching scode, dump the line into files like access and error logs under $cname/$year/$month/$day/:

        #!/bin/sh
        # base_dir="/home/vizion/Desktop"
        path="/home/vizion/Desktop/adn_DF9D_20140515_0005.log"
        name=$(basename "$path" ".log")

        for x in *.log; do
            year=${x:9:4}
            month=${x:13:2}
            day=${x:15:2}
        done

        while read -r line
        do
            cname=$(echo ${line} | awk '{split($7,c,"/"); print c[3]}')
            scode=$(echo ${line} | awk -F"[ ]" '{print $9}')
            [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day/"
            [[ ( ${scode} -ge 200 ) && ( ${scode} -le 399 ) ]] && {
                # [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day/"
                echo ${line} >> /home/vizion/Desktop/$cname/$year/$month/$day/${cname}_${name}_access.log
            }
            [[ ( ${scode} -ge 400 ) && ( ${scode} -le 599 ) ]] && {
                [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day"
                echo ${line} >> ${cname}_${name}_error.log
            }
        done < $path

    I am able to filter the logs, but they are not dumped to the exact location; they end up in other locations. Please suggest a correction to the script.
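    A hedged pointer at the two likely culprits: the error branch echoes into ${cname}_${name}_error.log with no directory in front of it, so those files land in the current working directory, and the for loop takes the date from whichever *.log happens to match last rather than from the file being processed. A sketch of the fixes, using the same character offsets on $name:

        # derive the date from the file actually being processed
        year=${name:9:4}; month=${name:13:2}; day=${name:15:2}
        # write the error log into the dated directory, mirroring the access branch
        echo "${line}" >> "/home/vizion/Desktop/$cname/$year/$month/$day/${cname}_${name}_error.log"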


  • How to execute a command whenever a file changes?

    - by Denilson Sá
    I want a quick and simple way to execute a command whenever a file changes. I want something very simple, something I will leave running on a terminal and close it whenever I'm finished working with that file. Currently, I'm using this: while read; do ./myfile.py ; done And then I need to go to that terminal and press Enter, whenever I save that file on my editor. What I want is something like this: while sleep_until_file_has_changed myfile.py ; do ./myfile.py ; done Or any other solution as easy as that. BTW: I'm using Vim, and I know I can add an autocommand to run something on BufWrite, but this is not the kind of solution I want now. Update: I want something simple, discardable if possible. What's more, I want something to run in a terminal because I want to see the program output (I want to see error messages). About the answers: Thanks for all your answers! All of them are very good, and each one takes a very different approach from the others. Since I need to accept only one, I'm accepting the one that I've actually used (it was simple, quick and easy-to-remember), even though I know it is not the most elegant.
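    For reference, a sketch of exactly this kind of one-liner using inotifywait from the inotify-tools package (Linux-specific; the event fires when the editor finishes writing the file):

        while inotifywait -e close_write myfile.py; do ./myfile.py; done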

