Search Results

Search found 5228 results on 210 pages for 'bash alias'.


  • Running lame from php

    - by gok
    I am trying to run lame from a PHP script. I have tried these, but no luck; I don't get anything returned! Any ideas? system('lame', $returnarr); system('lame --help', $returnarr); exec('lame', $returnarr); passthru('lame', $returnarr); even this one returns nothing: exec('which lame', $returnarr); I am on OS X and the final deployment will be on Linux. Do you have better suggestions for an automated WAV-to-MP3 conversion? From PHP, should I execute a bash script that executes lame?
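
    Since exec('which lame') returns nothing, the web server's PATH (or the machine itself) likely has no lame visible, so the first step is finding where the binary actually lives. After that, a small wrapper can call it by absolute path; a minimal sketch, where the lame location, the --quiet flag, and the argument handling are assumptions to adjust:

        #!/bin/bash
        # minimal sketch: convert one WAV file to MP3 with lame; point LAME at wherever
        # the binary actually lives on the target machine (an assumption here)
        LAME=/usr/local/bin/lame
        in="$1"
        out="${2:-${in%.wav}.mp3}"
        "$LAME" --quiet "$in" "$out"

    From PHP, calling a wrapper like this (saved, say, as wav2mp3.sh, a name chosen here for illustration) with exec('/path/to/wav2mp3.sh in.wav out.mp3 2>&1', $returnarr) would also capture any error text for debugging.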

    Read the article

  • How to execute with /bin/false shell

    - by Amar
    I am trying to set up per-user FastCGI scripts that will each run on a different port and under a different user. Here is an example of my script: #!/bin/bash BIND=127.0.0.1:9001 USER=user PHP_FCGI_CHILDREN=2 PHP_FCGI_MAX_REQUESTS=10000 etc... However, if I add a user with /bin/false (which I want, since this is going to be something like shared hosting and I don't want users to have shell access), the script runs under user IDs 1001, 1002, which, as my Google searches showed, might be a security hole. My question is: is it possible to allow users to execute shell scripts but prevent them from logging in via SSH?
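
    One common pattern is to keep /bin/false (or /usr/sbin/nologin) as the account's login shell, so interactive SSH sessions get nowhere, and hand su an explicit shell only for the spawn command. A minimal sketch below, where php-cgi -b stands in for whatever the original script actually launches (an assumption):

        #!/bin/bash
        # minimal sketch: the account's login shell stays /bin/false, but su -s supplies
        # a shell just for this one command so the FastCGI process starts as that user
        USER=user
        BIND=127.0.0.1:9001
        su -s /bin/sh -c "PHP_FCGI_CHILDREN=2 PHP_FCGI_MAX_REQUESTS=10000 php-cgi -b $BIND" "$USER"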

    Read the article

  • How can I copy files with names containing spaces and UNICODE, when using a shell script?

    - by LOlliffe
    I have a list of files that I'm trying to copy and move (using cp and mv) in a bash shell script. The problem I'm running into is that I can't get either command to recognize a huge number of the files, seemingly because the filenames contain spaces and/or Unicode characters. I couldn't find any switches to decode/re-encode these characters. Instead, for example, if I copy "file name.xml", I get "*.xml" and a script error saying the file wasn't found. Does anyone know settings or commands that will deal with these files?
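
    In most cases like this the culprit is an unquoted variable or a for loop over a command's output, both of which split names at whitespace. A minimal sketch, assuming the names live one per line in a hypothetical list.txt:

        #!/bin/bash
        # minimal sketch: read one filename per line and always quote the expansion,
        # so spaces and UTF-8 characters pass through intact
        dest="/some/target/dir"              # hypothetical destination
        while IFS= read -r file; do
            cp -- "$file" "$dest/"
        done < list.txt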

    Read the article

  • Sed does not work in expect

    - by Sharjeel Sayed
    I made this bash one-liner which I use to list running Weblogic instances along with their full paths. This works well when I run it from the shell. /usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print $2}' | sed 's/weblogic.policy//' | sed 's/security\///' | sort I tried to incorporate this in an expect script send "echo Weblogic Processes: ; /usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print \$2}' | sed 's/weblogic.policy//' | sed 's/security\///' | sort ; echo ; echo\r" but I got this error: sed: -e expression #1, char 13: unknown option to `s' Please help
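
    One likely cause: inside expect's double-quoted send string, Tcl performs its own backslash substitution, so the \/ in s/security\/// loses its backslash before sed ever runs, leaving sed a malformed expression. A minimal sketch of the send line using a different sed delimiter so no backslash is needed (doubling the backslash, \\/, would work as well):

        send "echo Weblogic Processes: ; /usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print \$2}' | sed 's/weblogic.policy//' | sed 's|security/||' | sort ; echo ; echo\r"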

    Read the article

  • AWK If/Else Conditional Problem

    - by neversaint
    I have data that looks like this: foo foo scaffold_7 1 4845 6422 4845 bar bar scaffold_7 -1 14689 16310 16310 What I want to do is process the above lines, printing columns 1, 2, 3, and 7 plus one more column after the 7th, but with a condition on how column 7 onwards is printed. Below is my awk script: awk '{ if ($4=="+") { {end=$6-$5}{print $1 "\t" $2 "\t" $3 "\t" $4 "\t" $7 "\t" end+$7} } else {end=$6-$5}{print $1 "\t" $2 "\t" $3 "\t" $4 "\t" $7-end "\t" $7} }' But why doesn't it achieve the desired result, like this? foo foo scaffold_7 1 4845 6422 bar bar scaffold_7 -1 14689 16310 Note that the arithmetic (e.g. $7-end or end+$7) is a must, so we can't just swap columns from the input file. Furthermore, this AWK will be inside a bash script.
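
    Two things work against the script: the extra braces leave the final print block outside the if/else, so it runs for every line, and the test $4=="+" never matches data whose fourth column holds 1/-1. A minimal sketch of the same logic with balanced braces, testing $4 == 1 to match the sample rows (adjust if the real data differs):

        awk 'BEGIN { OFS = "\t" }
        {
            end = $6 - $5
            if ($4 == 1)
                print $1, $2, $3, $4, $7, $7 + end
            else
                print $1, $2, $3, $4, $7 - end, $7
        }' input.txt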

    Read the article

  • Getting ID of an instance newly launched with ec2-api-tools

    - by Jonik
    I'm launching an EC2 instance by invoking ec2-run-instances from a simple bash script, and want to perform further operations on that instance (e.g. associate an elastic IP), for which I need the instance id. The command is something like ec2-run-instances ami-dd8ea5a9 -K pk.pem -C cert.pem --region eu-west-1 -t c1.medium -n 1, and its output: RESERVATION r-b6ea58c1 696664755663 default INSTANCE i-945af9e3 ami-dd8ea5b9 pending 0 c1.medium 2010-04-15T10:47:56+0000 eu-west-1a aki-b02a01c4 ari-39c2e94d In this example, i-945af9e3 is the id I'm after. So, I'd need a simple way to parse the id from what the command returns; how would you go about doing it? My AWK is a little rusty... Feel free to use any tool available on a typical Linux box. (If there's a way to get it directly using the EC2 API tools, all the better. But AFAIK there's no EC2 command to e.g. return the id of the most recently launched instance.)
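
    Going by the sample output above, the id is simply the second whitespace-separated field of the INSTANCE line, so a small awk filter is enough; a minimal sketch using the same arguments as in the question:

        instance_id=$(ec2-run-instances ami-dd8ea5a9 -K pk.pem -C cert.pem \
            --region eu-west-1 -t c1.medium -n 1 | awk '/^INSTANCE/ { print $2 }')
        echo "$instance_id"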

    Read the article

  • Converting FASTQ to FASTA with SED/AWK

    - by neversaint
    I have data that always comes in blocks of four in the following format (called FASTQ): @SRR018006.2016 GA2:6:1:20:650 length=36 NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNGN +SRR018006.2016 GA2:6:1:20:650 length=36 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!+! @SRR018006.19405469 GA2:6:100:1793:611 length=36 ACCCGCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC +SRR018006.19405469 GA2:6:100:1793:611 length=36 7);;).;);;/;*.2>/@@7;@77<..;)58)5/>/ Is there a simple sed/awk/bash way to convert them into this format (called FASTA): >SRR018006.2016 GA2:6:1:20:650 length=36 NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNGN >SRR018006.19405469 GA2:6:100:1793:611 length=36 ACCCGCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC In principle we want to extract the first two lines of each block of four and replace the leading @ with >.
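
    Since the header is every fourth line starting at line 1 and the sequence every fourth line starting at line 2, a line-number modulus does it; a minimal awk sketch (the file names are placeholders):

        awk 'NR % 4 == 1 { sub(/^@/, ">"); print }
             NR % 4 == 2 { print }' input.fastq > output.fasta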

    Read the article

  • linux user login/logout log for computer restriction

    - by Cedric
    Hi! I would like to know how to log the logins and logouts of a user. I know it's possible to use the command "last", but that command is based on a file the user has r/w permission on, hence the possibility of changing the data. I would like to log this data over two months. Why would I like to do that? In fact, I would like to prevent a normal user from using a computer for more than an hour a day (except weekends) and 10 hours in total a week. Cedric System used: Kubuntu. Programming language: bash script
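
    As a rough starting point, the usage data can be collected somewhere the user cannot touch; a minimal sketch, assuming a root-owned cron job runs it every minute and that the per-day and per-week totals are tallied from the log afterwards (e.g. with sort | uniq -c):

        #!/bin/bash
        # minimal sketch: append a timestamped list of logged-in users to a log that a
        # normal user cannot edit; /var/log/user-usage.log is a hypothetical path
        LOG=/var/log/user-usage.log
        who | awk -v ts="$(date '+%F %T')" '{ print ts, $1 }' >> "$LOG"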

    Read the article

  • ./configure : /bin/sh^M : bad interpreter

    - by Vineeth
    Hello there, I've been trying to install lpng142 on my Fedora 12 system. Seems like a problem to me. I get this error [root@localhost lpng142]# ./configure bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory [root@localhost lpng142]# How do I fix this? For more details, I shall include the /etc/fstab file here # # /etc/fstab # Created by anaconda on Wed May 26 18:12:05 2010 # # Accessible filesystems, by reference, are maintained under '/dev/disk' # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info # /dev/mapper/VolGroup-lv_root / ext4 defaults 1 1 UUID=ce67cf79-22c3-45d4-8374-bd0075617cc8 /boot ext4 defaults 1 2 /dev/mapper/VolGroup-lv_swap swap swap defaults 0 0 tmpfs /dev/shm tmpfs defaults 0 0 devpts /dev/pts devpts gid=5,mode=620 0 0 sysfs /sys sysfs defaults 0 0 proc /proc proc defaults 0 0 [root@localhost etc]# Help, please
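
    The ^M in /bin/sh^M means the configure script has DOS (CRLF) line endings, so the kernel looks for an interpreter literally named "/bin/sh" followed by a carriage return; the fstab contents are unrelated. A minimal fix sketch:

        sed -i 's/\r$//' configure      # strip the trailing carriage returns in place
        # or, if the dos2unix package is installed:
        dos2unix configure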

    Read the article

  • IF statement error in tcsh

    - by kaustav datta
    Having trouble executing an if statement in tcsh. This works FINE for me: #!/bin/bash if echo `cal|tail -6|sed -e 's/^.\{3\}//' -e 's/.\{3\}$//' |tr -s '[:blank:]' '\n' | head -11|tail -10|tr -s '\n' ' '`|grep -w `date "+%e"` then echo "present" else echo "absent" fi This is the PROBLEM: #!/bin/tcsh if echo `cal|tail -6|sed -e 's/^.\{3\}//' -e 's/.\{3\}$//' |tr -s '[:blank:]' '\n' | head -11|tail -10|tr -s '\n' ' '`|grep -w `date "+%e"` then echo "present" else echo "absent" endif I'm getting this error: if: Expression Syntax. then: Command not found. I really need this to run using "tcsh"
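
    In tcsh, if does not take a bare command pipeline the way POSIX sh does; it wants a parenthesized expression, and a command's exit status has to be wrapped in { }, which evaluates to 1 (true) when the command exits 0. A minimal sketch of that shape, assuming the output of the cal pipeline has been written to a hypothetical /tmp/days.txt beforehand:

        #!/bin/tcsh
        set today = `date "+%e"`
        # { command } is true when the command succeeds, so it can sit inside if ( ... )
        if ( { grep -qw $today /tmp/days.txt } ) then
            echo "present"
        else
            echo "absent"
        endif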

    Read the article

  • nested if: too many arguments?

    - by FLX
    For some reason this code creates problems: source="/foo/bar/" destination="/home/oni/" if [ -d $source ]; then echo "Source directory exists" if [ -d $destination ]; then echo "Destination directory exists" rsync -raz --delete --ignore-existing --ignore-times --size-only --stats --progress $source $destination chmod -R 0755 $destination else echo "Destination directory does not exists" fi else echo "Source directory does not exists" fi It errors out with: Source directory exists /usr/bin/copyfoo: line 7: [: too many arguments Destination directory does not exists I've used nested if statements in bash before without a problem; what simple mistake am I overlooking? Thanks!
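
    That particular error usually means an unquoted expansion handed [ more than one word, e.g. because the variable held spaces or several paths when the script actually ran; quoting the variables inside the tests makes the checks robust either way. A minimal sketch of just the changed lines:

        if [ -d "$source" ]; then
            echo "Source directory exists"
            if [ -d "$destination" ]; then
                echo "Destination directory exists"
            fi
        fi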

    Read the article

  • Smart auto-completion for staged git file names, used with difftool

    - by piobyz
    I'd like to have smart auto-completion of currently staged file names when using git diff. Example: modified: DIR1/LongCamelCaseFileName.h modified: DIR1/AnotherLongCamelCaseFileName.m modified: DIR1/AndThereAreALotOfThemInDir1.m modified: DIR2/file4.m Here I'd like to use bash's tab-completion with git diff, where by "smart" I mean that after typing git diff I'd only need to type a short part of the staged file name that I want to diff, and without a dirname; so for example git diff And<TAB> would result in git diff AndThereAreALotOfThemInDir1.m Actually, even without the dir-omitting part it would still be useful (auto-completing using only the pool of staged files).
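
    A minimal sketch of what such a completion could look like; the function name and the gdiff wrapper are made up for illustration (hooking git's own completion is more involved), and it completes the full staged paths. Dropping the directory part would additionally require mapping the chosen basename back to its path before calling git diff.

        _staged_completion() {
            local cur staged
            cur=${COMP_WORDS[COMP_CWORD]}
            staged=$(git diff --cached --name-only)      # the pool of staged paths
            COMPREPLY=( $(compgen -W "$staged" -- "$cur") )
        }
        alias gdiff='git diff'                           # small wrapper to attach completion to
        complete -F _staged_completion gdiff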

    Read the article

  • How do I use RVM with the Hudson CI server on Debian?

    - by JoshReedSchramm
    I'm trying to set up an automated "build" server for my Rails projects using Hudson CI. So far it's able to run specs and do metrics on the code, but I have 2 different projects dependent on 2 different versions of Ruby, so I'm trying to use RVM to run multiple copies of Ruby and then switch back and forth in a pre-build step. I found a couple of posts like this one that try to explain how to make this work, but I'm not running a startup script for Hudson; it starts on boot, which is how it worked out of the box when I installed it via the Debian instructions. The problem seems to be that even though Hudson runs under the "hudson" account, and that account has RVM installed (and working), when it tries to run a shell-based pre-build step to call rvm switch 1.8.7 it fails with the error "rvm: command not found". Not sure what I'm doing wrong. Hudson is using sh as its shell, but I also tried using bash; no luck. Has anyone gotten this working before in this setup?
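
    The usual reason for "rvm: command not found" in a CI shell step is that rvm is a shell function loaded by login shells, and Hudson's non-login sh/bash never sources it. A minimal sketch of an "Execute shell" build step that loads it explicitly; the per-user install path is an assumption about this particular box:

        #!/bin/bash -l
        # load the rvm function, which a non-login shell does not get automatically
        source "$HOME/.rvm/scripts/rvm"
        rvm use 1.8.7
        ruby -v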

    Read the article

  • Textmate bundle to remove a directory and build with Jekyll

    - by m1755
    I am looking for a simple TextMate bundle that will do the following two tasks in order: Delete the entire contents (including folders) of a directory (e.g. ~/Sites/my_site). Run the jekyll command in the directory of the TextMate project. I am going to associate this with "save current file" and use it to auto-build my Jekyll site into the specified directory each time I save a file inside the project. Notes: If #2 isn't possible, then cd into a specified directory and run the jekyll command. I would prefer bash or Ruby.
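
    A minimal bash sketch of such a command body; TM_PROJECT_DIRECTORY is the variable TextMate sets to the project root, the site path mirrors the question's example, and jekyll's arguments depend on the installed version:

        #!/bin/bash
        site_dir="$HOME/Sites/my_site"
        rm -rf "${site_dir:?}"/*              # the :? guard refuses to run if the variable is empty
        cd "$TM_PROJECT_DIRECTORY" && jekyll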

    Read the article

  • Eclipse and Cassandra

    - by H2oNinja
    I've searched various websites for instructions on how to link 'Cassandra' and 'Eclipse' and followed the directions to the last detail on several sites. For some reason, while using Git Bash, I can't get through the 'ant build' step following the instructions in those locations. In some places it looks easy: just make sure you have 1. the Apache Cassandra source, 2. Apache Ant, 3. Git. So, yeah, I've downloaded all of the above and tried the same directory, different directories, etc., but I'm still unable to get past the middle step of 'ant build'. Here are a few websites I've used to muddle through setting up the source code for both utilities, 'Eclipse' and 'Cassandra': http://uisurumadushanka89.blogspot.com/2012/02/apache-cassandra-how-to-setup-source.html and http://wiki.apache.org/cassandra/RunningCassandraInEclipse both resulting in an immediate halt at the 'ant build'. Any insights or information are greatly appreciated. Thank you, Ryan

    Read the article

  • ssh-keygen accepting stdin

    - by Ryan
    I am trying to call ssh-keygen from bash using a variable as the input instead of a file, to get the fingerprint of a public key. This method does not work, as it says the key file is invalid (it's correct for sure): echo $pubkey | ssh-keygen -lf /dev/stdin This does work: ssh-keygen -lf /dev/stdin < alpha.pub This does not work, because I get an ambiguous redirect: ssh-keygen -lf /dev/stdin < $(echo $pubkey) I would appreciate some insight as to how to get ssh-keygen to read a public key from a variable and, if possible, an explanation as to why the redirects aren't doing what I think they should be doing. I searched online but many of the redirect tutorials didn't seem to answer my questions.
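
    On the why: < expects a single filename, and the unquoted $(echo $pubkey) expands to the many words of the key, hence "ambiguous redirect"; the pipe variant may fail simply because the unquoted $pubkey is word-split (and any glob characters expanded) before echo sees it. A minimal sketch of two forms that hand the variable straight to ssh-keygen as stdin:

        ssh-keygen -lf /dev/stdin <<< "$pubkey"       # here-string
        ssh-keygen -lf <(printf '%s\n' "$pubkey")     # process substitution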

    Read the article

  • Compatibility of x-www-browser

    - by rohit.arondekar
    I want to open HTML files from a shell script. I know that Ubuntu has a command, x-www-browser, that will open the default browser on the system. I also found via some Googling that the command is part of the Debian system. I was wondering if the command is available on non-Debian-based distros. If it isn't, is there a standard way of opening an HTML file in the default browser on a Linux OS via the command line? Note that I'm using Bash.
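
    x-www-browser is a Debian alternatives name, so it generally isn't present elsewhere; xdg-open (from the cross-desktop xdg-utils package) is the closest thing to a distro-neutral answer. A minimal sketch that tries the likely candidates in order:

        #!/bin/bash
        page="./index.html"                               # hypothetical file to open
        if command -v xdg-open >/dev/null 2>&1; then
            xdg-open "$page"
        elif command -v x-www-browser >/dev/null 2>&1; then
            x-www-browser "$page"
        else
            "${BROWSER:-firefox}" "$page"                 # last resort: honour $BROWSER if set
        fi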

    Read the article

  • Best way to choose a random file from a directory in a shell script

    - by jhs
    What is the best way to choose a random file from a directory in a shell script? Here is my solution in Bash but I would be very interested in a more portable (non-GNU) version for use on Unix proper. dir='some/directory' file=`/bin/ls -1 "$dir" | sort --random-sort | head -1` path=`readlink --canonicalize "$dir/$file"` # Converts to full path echo "The randomly-selected file is: $path" Anybody have any other ideas? Edit: lhunath makes a good point about parsing ls. I guess it comes down to whether you want to be portable or not. If you have the GNU findutils and coreutils then you can do: find "$dir" -maxdepth 1 -mindepth 1 -type f -print0 \ | sort --zero-terminated --random-sort \ | sed 's/\d000.*//g/' Whew, that was fun! Also it matches my question better since I said "random file". Honestly though, these days it's hard to imagine a Unix system deployed out there having GNU installed but not Perl 5.
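
    For the portability angle, here is a minimal sketch that needs nothing beyond bash itself (no GNU sort or readlink), at the cost of picking among all directory entries rather than only regular files:

        dir='some/directory'
        files=( "$dir"/* )                                # every entry, subdirectories included
        file=${files[RANDOM % ${#files[@]}]}
        echo "The randomly-selected file is: $file"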

    Read the article

  • Auto SSH and execute script

    - by rohanbk
    I have roughly 12 computers that each have the same script on them. This script merely pings all the other machines and prints out whether each machine is "reachable" or "unreachable". However, it is inefficient to log in to each machine manually using ssh to execute this script. Suppose I'm logged into node 1. Is there any way for me to log in to nodes 2-12 automatically using SSH, execute the ping script, pipe the results to a file, log out, and proceed to the next machine? Some kind of bash shell script? I'm afraid I'm at a loss here since I haven't had experience with shell scripting before.
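
    A minimal sketch of that loop; the node2..node12 hostnames and the script path are placeholders, and key-based SSH authentication is assumed so no password prompts interrupt the run:

        #!/bin/bash
        for n in $(seq 2 12); do
            host="node$n"
            ssh -o BatchMode=yes "$host" 'bash ~/ping_check.sh' > "results_$host.txt" 2>&1
        done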

    Read the article

  • How to use > in an xargs command?

    - by jesse
    I want to find a bash command that will let me grep every file in a directory and write the output of that grep to a separate file. My guess would have been to do something like this ls -1 | xargs -I{} "grep ABC '{}' > '{}'.out" but, as far as I know, xargs doesn't like the double-quotes. If I remove the double-quotes, however, then the command redirects the output of the entire command to a single file called '{}'.out instead of to a series of individual files. Does anyone know of a way to do this using xargs? I just used this grep scenario as an example to illustrate my problem with xargs so any solutions that don't use xargs aren't as applicable for me.
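
    One common workaround is to keep xargs but let a tiny sh instance evaluate the > for each file, since xargs itself never interprets redirections; a minimal sketch:

        find . -maxdepth 1 -type f -print0 |
            xargs -0 -I{} sh -c 'grep ABC "$1" > "$1.out"' _ {}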

    Read the article

  • How to extract all IDs accessed from a mysql general log using the linux commandline?

    - by shlomoid
    This should be a trivial question for anyone who's good with bash/sed/awk. Unfortunately, I'm not, yet :) I've got a general log from MySQL which contains some queries that have a common parameter; they query on a specific id field. The queries look like update tbl set col='binary_values' where id=X; I need to process the log and extract all the IDs that these queries touched, each on its own line. The purpose of this is to figure out how many times each ID is accessed; eventually I'd group and count the values. The binary values are indeed binary junk, so they kind of messed up some things I've been trying to do. Eventually we solved the problem temporarily using a Python script, but I'm sure the Linux command-line tool set can do it too. How would you do it?
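
    A minimal sketch along those lines; general.log is a placeholder name, grep -a forces the binary-laden log to be treated as text, and the final sort | uniq -c does the grouping and counting:

        grep -a 'update tbl set' general.log |
            sed -n 's/.*where id=\([0-9]*\).*/\1/p' |
            sort | uniq -c | sort -rn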

    Read the article

  • Ending tail -f started in a shell script

    - by rangalo
    I have the following: a Java process writing logs to stdout, a shell script starting the Java process, and another shell script which executes the previous one and redirects the log. I check the log file with the tail -f command for the success message. Even though I have exit 0 in the code, I cannot end the tail -f process, which doesn't let my script finish. Is there any other way of doing this in Bash? The code looks like the following. function startServer() { touch logfile startJavaprocess > logfile & tail -f logfile | while read line do if echo $line | grep -q 'Started'; then echo 'Server Started' exit 0 fi done }
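
    The exit 0 only leaves the subshell running the while loop; tail keeps going until its next write hits the broken pipe, which may never happen once the server stops logging. One way around it is to skip tail -f altogether and poll the log; a minimal sketch (the launcher name is taken from the question):

        startServer() {
            : > logfile
            startJavaprocess > logfile &
            until grep -q 'Started' logfile; do
                sleep 1
            done
            echo 'Server Started'
        }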

    Read the article

  • More efficient way to find & tar millions of files

    - by Stu Thompson
    I've got a job that has been running on my server at the command-line prompt for two days now: find data/ -name filepattern-*2009* -exec tar uf 2008.tar {} ; It is taking forever, and then some. Yes, there are millions of files in the target directory. But just running... find data/ -name filepattern-*2009* -print > filesOfInterest.txt ...takes only two hours or so. At the rate my job is running, it won't be finished for a couple of weeks. That seems unreasonable. Is there a more efficient way to do this? Maybe with a more complicated bash script? A secondary question is "why is my current approach so slow?"
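
    On the secondary question: -exec ... {} ; starts a fresh tar for every single file, and tar u additionally has to read the (ever-growing) archive on each invocation to compare timestamps, which is what makes the run explode. A minimal sketch that builds the list once and lets a single tar consume it (GNU tar's --files-from is assumed; the archive name is kept from the question, and quoting the pattern stops the shell from expanding it early):

        find data/ -name 'filepattern-*2009*' -print > filesOfInterest.txt
        tar cf 2008.tar --files-from=filesOfInterest.txt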

    Read the article

  • Combining DROP USER and DROP DATABASE with SELECT .. WHERE query?

    - by zsero
    I'd like to do a very simple thing: replicate the functionality of MySQL's interactive mysql_secure_installation script. My question is: is there a simple, built-in way in MySQL to combine the output of a SELECT query with the input of a DROP USER or DROP DATABASE statement? For example, if I'd like to drop all users with empty passwords, how could I do that with a DROP USER statement? I know an obvious solution would be to run everything from, for example, a Python script: run a query with mysql -Bse "select...", parse the output with some program, construct the drop query, and run it. Is there an easy way to do it in a simple SQL query? I've seen an example here, but I wouldn't call it simple: http://stackoverflow.com/a/12097567/518169 Would you recommend making a combined query, or just parsing the output using, for example, Python or bash scripts/sed?
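
    Staying on the command line, one middle ground is to let one mysql client print the DROP statements and pipe them into a second one; a minimal sketch, assuming a pre-5.7 server (where the column is still called password) and credentials supplied elsewhere, e.g. via ~/.my.cnf:

        mysql -NBe "SELECT CONCAT('DROP USER ''', user, '''@''', host, ''';')
                    FROM mysql.user WHERE password = ''" | mysql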

    Read the article
