Search Results

Search found 5228 results on 210 pages for 'bash alias'.

Page 92/210

  • How to avoid escaping by accident in Perl using system()?

    - by Brian
    I want to run some commands using system(). I do it this way: execute_command_error("trash-put '/home/$filename'"); where execute_command_error reports whether there was an error with whatever system command it ran. I know I could just unlink the file using Perl commands, but I want to delete things with trash-put, since it is a type of recycling program. My problem is that $filename will sometimes contain apostrophes, quotes, and other odd characters that mess up the system command or Perl itself.
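
    A hedged note: the breakage comes from the string being re-parsed by a shell. The usual Perl-side fix is the list form of system(), e.g. system('trash-put', "/home/$filename"), which skips the shell entirely (the custom execute_command_error wrapper would need to pass the list through). The same principle at the shell level is to hand the value over as one quoted argument instead of splicing it into a command string; the filename below is only an illustration:

        filename="John's \"odd\" file.txt"      # apostrophes and quotes in the name
        trash-put "/home/$filename"             # double quotes keep it a single argument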

    Read the article

  • Folder Renaming After Tar Extraction

    - by Chris S
    I have a tarball, myarchive.tar.gz. When I uncompress it using "tar -zxvf myarchive.tar.gz", it creates a folder named myarchive-x980-2303-ssioo. What is the easiest way to automatically rename the extracted folder so that it matches the name of the archive? I've checked tar's manpage, but it doesn't seem to have an option for this.
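
    A minimal sketch of one way to do it, assuming the archive contains a single top-level directory: read that directory name from the listing, extract, then rename it after the archive.

        archive=myarchive.tar.gz
        topdir=$(tar -tzf "$archive" | head -n 1 | cut -d/ -f1)   # e.g. myarchive-x980-2303-ssioo
        tar -xzf "$archive"
        mv "$topdir" "${archive%.tar.gz}"                         # -> myarchive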

    Read the article

  • Shell script to count files, then remove oldest files

    - by Nic Hubbard
    I am new to shell scripting, so I need some help here. I have a directory that fills up with backups. If I have more than 10 backup files, I would like to remove the oldest ones so that only the 10 newest backups are left. So far I know how to count the files, which seems easy enough, but how do I then remove the oldest files if the count is over 10? if [ls /backups | wc -l > 10] then echo "More than 10" fi
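
    For the removal part, one possible sketch (it assumes the backup filenames contain no newlines): list by modification time, newest first, and delete everything past the tenth entry.

        cd /backups || exit 1
        ls -1t | tail -n +11 | while IFS= read -r old; do
            rm -- "$old"
        done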

    Read the article

  • UNIX Programs (Shell Scripting) [closed]

    - by atif089
    Hi, I have an exam tomorrow and I need some help with these programs, or a pointer to where I can find them: (1) write a program that uses grep to search a file for a pattern and displays the matches on standard output; (2) write an awk program to print only the odd-numbered lines of a file; (3) write a program that pipes the output of the ls command into a command that counts the number of files. Thank You :)
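
    Hedged one-line sketches for the three exercises (PATTERN and FILE are placeholders):

        grep "PATTERN" FILE             # 1: search a file for a pattern, print the matches
        awk 'NR % 2 == 1' FILE          # 2: print only the odd-numbered lines
        ls | wc -l                      # 3: pipe ls into a command that counts the entries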

    Read the article

  • Selectively parsing log files using Java

    - by GPX
    I have to parse a big bunch of log files, which are in the following format: SOME SQL STATEMENT/QUERY DB20000I The SQL command completed successfully. SOME OTHER SQL STATEMENT/QUERY DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. EDIT 1: The first 3 lines (including a blank line) indicate an SQL statement that executed successfully, while the next three show a statement and the exception it caused. darioo's reply below, suggesting the use of grep instead of Java, works beautifully for a single-line SQL statement. EDIT 2: However, the SQL statement/query is not necessarily a single line; sometimes it is a big CREATE PROCEDURE...END PROCEDURE block. Can this problem be solved using only Unix commands too? Now I need to parse through the entire log file, pick out every occurrence of the (SQL statement + error) pair, and write them to a separate file. Please show me how to do this!
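
    Since the question asks whether plain Unix tools can do it: a rough awk sketch, assuming every statement (single-line or multi-line) is followed by exactly one DB2 status line that starts either with an informational code such as DB20000I or with the DB21034E error.

        awk '
            /^DB21034E/  { printf "%s%s\n\n", block, $0; block = ""; next }  # statement + error: emit
            /^DB[0-9]+I/ { block = ""; next }                                # success: discard the block
                         { block = block $0 "\n" }                           # accumulate the statement
        ' db2.log > sql_errors.log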

    Read the article

  • shell_exec syntax error; running it directly in the terminal is OK

    - by Alex
    Having this command: $command = "diff -bBdH --ignore-all-space <(echo 'hi') <(echo 'hi1')"; echo $command; $result = shell_exec($command); On the screen I see: sh: 1: Syntax error: "(" unexpected diff -bBdH --ignore-all-space <(echo 'hi') <(echo 'hi1') If I copy and paste the second line of the console output into the terminal, the result is correct (reproduced on another machine too). I'm missing something dead simple here and can't see what it is. Besides, why is my output reversed? I'm clearly echoing the command before executing it, so the shell's syntax error should appear after the shell_exec.
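
    A hedged explanation: shell_exec() runs the command through /bin/sh (dash on many systems), and process substitution <( ... ) is a bash/zsh/ksh feature, so sh rejects the "(" while an interactive terminal running bash accepts it. Wrapping the command in an explicit bash -c is one workaround. The "reversed" output is most likely buffering: the child's error message goes straight to stderr while the echoed command sits in PHP's stdout buffer.

        sh -c "diff <(echo 'hi') <(echo 'hi1')"     # sh/dash: Syntax error: "(" unexpected
        bash -c "diff <(echo 'hi') <(echo 'hi1')"   # bash understands process substitution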

    Read the article

  • How to line up columns using paste(1)? or how to make an aligned table merging lines in the shell?

    - by nn
    Hi, I want to merge lines such that the merged lines are aligned on the same boundary. UNIX paste(1) does this nicely when all lines meet at the same tab boundary, but when the lines being merged differ in length, the text comes out misaligned. Example of paste(1) with the desired effect: $ echo -e "a\nb\nccc\nd" | paste - - a b ccc d Example of paste(1) with the undesired effect: $ echo -e "a\nb\ncccccccccccc\nd" | paste - - a b cccccccccccc d Note how the second column doesn't line up. I want 'b' to line up with 'd', which requires an additional tab. Unfortunately I believe this is the limit of the paste utility, so if anyone has any idea of how to get the desired effect above, I'd love to hear it.
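
    One hedged workaround: keep paste's tab-separated output and hand it to column -t (util-linux/BSD), which re-aligns the columns whatever their width.

        echo -e "a\nb\ncccccccccccc\nd" | paste - - | column -t   # columns now line up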

    Read the article

  • How to perform an action when a file has changed?

    - by ZeissS
    Hi, I want to create a script that checks a URL and performs an action (download + unzip) when the "Last-Modified" header of the remote file has changed. I thought about fetching the header with curl, but then I have to store it somewhere for each file and do a date comparison. Does anyone have a different idea using (mostly) standard Unix tools? Thanks
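
    A sketch of one approach (placeholder URL and filenames; bash and curl assumed): curl's -z option sends If-Modified-Since based on an existing file's timestamp, and -R stamps the download with the server's Last-Modified time, so no separate header bookkeeping is needed.

        url="http://example.com/data.zip"
        file="data.zip"
        tmp="$file.tmp"

        args=(--fail --silent --show-error --remote-time -o "$tmp" "$url")
        [ -f "$file" ] && args=(-z "$file" "${args[@]}")

        curl "${args[@]}"
        if [ -s "$tmp" ]; then               # non-empty download => remote copy changed
            mv "$tmp" "$file"
            unzip -o "$file" -d extracted/
        else
            rm -f "$tmp"
        fi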

    Read the article

  • Finding empty directories in Unix

    - by soField
    I need to find empty directories for a given list of directories. Some directories have directories inside them; if the inner directories are also empty, I can say the main directory is empty, otherwise it is not empty. How can I test this? For example: A > A1 (file1), A2 is not empty because of file1; B > B1 (no file) is empty; C > C1, C2 is empty. Thanks
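
    One way to test it (a sketch; it treats a directory as empty when it contains no regular files anywhere beneath it, so B and C above count as empty):

        for dir in A B C; do                                    # the directories to check
            if [ -z "$(find "$dir" -type f | head -n 1)" ]; then
                echo "$dir is empty"
            else
                echo "$dir is not empty"
            fi
        done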

    Read the article

  • script to sum all numbers in a file (linux)

    - by Mark Roberts
    I have a file which contains several thousand numbers, each on its own line: 34 42 11 6 2 99 ... I'm looking to write a script which will print the sum of all the numbers in the file. I've got a solution, but it's not very efficient (it takes several minutes to run). I'm looking for a more efficient solution. Any suggestions?
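
    A common fast approach is a single awk pass over the file (a sketch; assumes one number per line, as shown):

        awk '{ sum += $1 } END { print sum }' numbers.txt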

    Read the article

  • sed find pattern on line with another pattern

    - by user2962390
    I am trying to extract text from a file between a '<' and a '', but only on a line starting with another specific pattern. So in a file that looks like: XXX Something here XXX Something more here XXX <\Lines like this are a problem ZZZ something <\This is the text I need XXX Don't need any of this I would like to print only the "<\This is the text I need". If I do sed -n '/^ZZZ/p' FILENAME it pulls the correct lines I need to look at, but obviously prints the whole line. sed -n '/</,//p' FILENAME prints way too much. I have looked into grouping and tried sed -n '/^ZZZ/{/</,//} FILENAME but this doesn't seem to work at all. Any suggestions? They will be much appreciated. (Apologies for formatting, never posted on here before)
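
    One hedged sketch: do the extraction as a substitution restricted to the ZZZ lines, so only the part from the first '<' onward survives and gets printed.

        sed -n '/^ZZZ/ s/^[^<]*</</p' FILENAME     # prints: <\This is the text I need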

    Read the article

  • Uniq in awk; removing duplicate values in a column using awk

    - by D W
    I have a large datafile in the following format below: ENST00000371026 WDR78,WDR78,WDR78, WD repeat domain 78 isoform 1,WD repeat domain 78 isoform 1,WD repeat domain 78 isoform 2, ENST00000371023 WDR32 WD repeat domain 32 isoform 2 ENST00000400908 RERE,KIAA0458, atrophin-1 like protein isoform a,Homo sapiens mRNA for KIAA0458 protein, partial cds., The columns are tab separated. Multiple values within columns are comma separated. I would like to remove the duplicate values in the second column to result in something like this: ENST00000371026 WDR78 WD repeat domain 78 isoform 1,WD repeat domain 78 isoform 1,WD repeat domain 78 isoform 2, ENST00000371023 WDR32 WD repeat domain 32 isoform 2 ENST00000400908 RERE,KIAA0458 atrophin-1 like protein isoform a,Homo sapiens mRNA for KIAA0458 protein, partial cds., I tried the following code below but it doesn't seem to remove the duplicate values. awk ' BEGIN { FS="\t" } ; { split($2, valueArray,","); j=0; for (i in valueArray) { if (!( valueArray[i] in duplicateArray)) { duplicateArray[j] = valueArray[i]; j++; } }; printf $1 "\t"; for (j in duplicateArray) { if (duplicateArray[j]) { printf duplicateArray[j] ","; } } printf "\t"; print $3 }' knownGeneFromUCSC.txt How can I remove the duplicates in column 2 correctly?
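
    A possible rewrite (a sketch; tab-separated input assumed): reset the seen-values array for every line, keep the first occurrence of each comma-separated value in column 2, and rebuild the field.

        awk 'BEGIN { FS = OFS = "\t" }
        {
            n = split($2, vals, ",")
            split("", seen)                        # portable way to empty the array per line
            out = ""
            for (i = 1; i <= n; i++)
                if (vals[i] != "" && !(vals[i] in seen)) {
                    seen[vals[i]] = 1
                    out = (out == "" ? vals[i] : out "," vals[i])
                }
            $2 = out
            print
        }' knownGeneFromUCSC.txt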

    Read the article

  • Import files directly to SVN repo without checking out first

    - by Werner
    Hi, I am using SVN and have a repository on a remote machine. Sometimes, when working on my local machine, I realize that I need to add some new files to the repo. The usual procedure I know would then be: (1) check out the whole SVN repo into the current folder on my local machine; (2) enter it; (3) copy the interesting file there; (4) commit. But this can be a bit tedious. I wonder whether I can somehow omit steps 1 to 3 and import the "interesting" file into SVN directly, without having to check out the repo first. Thanks
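
    That is what svn import is for (a sketch; the URL and paths are placeholders): it sends a local file or tree straight to a repository URL, no working copy required.

        svn import -m "Add interesting file" interesting.txt \
            http://svn.example.com/repo/trunk/some/dir/interesting.txt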

    Read the article

  • sed: How to change the line following a specific pattern

    - by kirill
    My file is: DIVIDER Sometext_string many lines of random text DIVIDER Another_Sometext_string many many lines DIVIDER Third_sometext_string .... How do I change the lines that follow the DIVIDER pattern? The result must be: DIVIDER [begin]Sometext_string[end] many lines of random text DIVIDER [begin]Another_Sometext_string[end] many many lines DIVIDER [begin]Third_sometext_string[end] ....
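
    A sketch using sed's n command, which prints the current line and loads the next one into the pattern space, so the line after each DIVIDER can be wrapped:

        sed '/^DIVIDER/ { n; s/.*/[begin]&[end]/; }' file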

    Read the article

  • Listing the content of a tar file or a directory only down to some level

    - by Tim
    I wonder how to list the contents of a tar file only down to some level. I understand that tar tvf mytar.tar will list all files, but sometimes I would like to see only directories down to a certain level. Similarly, for the ls command, how do I control the depth of subdirectories that is displayed? By default it only shows the direct subdirectories and does not go further.
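
    A couple of hedged sketches: filter tar's listing by the number of path components, and use find's -maxdepth for directories (ls itself has no depth option).

        tar -tzf mytar.tar.gz | sed 's|/$||' | awk -F/ 'NF <= 2'    # archive entries at most 2 levels deep
        find . -maxdepth 2 -type d                                   # directories down to 2 levels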

    Read the article

  • How to remove all words written in capital letters ONLY (by using sed and/or awk)

    - by Virtual_Lotos
    I am trying to delete all words written in capital letters only, using sed: sed -r "s/\b[A-Z]\w*\s*//g" < file1 > file2 But this solution captures all words starting with a capital letter and deletes them, which is not the goal. Here's an example: file1 content: AAAAAAAAAAAA BBbbbbb AbAbAbAb aaaaaBBBBB AAAAAA BBBBBB A1-B1 a1-b1 A1-b1 AA AAAAA BBBBB AAAAA Abbbb AAA AAAAA AAAABB Abbbb Baaaa Aaaaa AB AAAAAA1 BBBBBBb AAAAAA 1 BBBBBB b The result should look like this (file2 content): BBbbbbb AbAbAbAb aaaaaBBBBB A1-B1 a1-b1 A1-b1 AA Abbbb AAA Abbbb Baaaa Aaaaa AB AAAAAA1 BBBBBBb AAAAAA 1 BBBBBB b Each line with at least one digit or one lowercase letter should remain intact (should not be deleted).
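
    The flattened example above makes the exact rule hard to pin down, but on one common reading (drop every whitespace-separated word made of capital letters only; keep any word containing a digit or a lowercase letter) an awk sketch is:

        awk '{
            out = ""
            for (i = 1; i <= NF; i++)
                if ($i !~ /^[A-Z]+$/)
                    out = out (out == "" ? "" : " ") $i
            print out
        }' file1 > file2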

    Read the article

  • Get file name before the extension

    - by ryanprayogo
    I have some files in the same directory (in a UNIX filesystem) that look like: a.txt.name b.xml.name c.properties.name a.txt.name2 b.xml.name2 c.properties.name2 How do I get the string before the name or name2 part using some shell command, i.e. the a.txt, b.xml, c.properties part?
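
    A sketch using parameter expansion, which strips the shortest suffix matching the pattern:

        for f in *.name *.name2; do
            echo "${f%.*}"          # a.txt.name -> a.txt, b.xml.name2 -> b.xml, ...
        done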

    Read the article

  • symlink files newer than X age, then later remove symlink once file ages?

    - by bleomycin
    Hello everyone, I have a large number of files/folders coming in each day that are sorted automatically into a wide variety of folders. I'm looking for a way to automatically find these files/folders and create symlinks to them all within an "incoming" folder. Searching by file age should be sufficient for finding the files, although searching by age and owner would be ideal. Then, once the files/folders being linked to reach a certain age, say 5 days, the symlinks to them should be removed automatically from the "incoming" folder. Is this possible with a simple shell or Python script that can be run with cron? Thanks!
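
    A rough sketch for a daily cron job (GNU find/ln assumed; the paths and owner are placeholders): link everything modified within the last day into the incoming folder, and prune links once they are more than 5 days old. The second command checks the symlink's own timestamp, which roughly tracks when the link was created.

        find /data -user someuser -mtime -1 -exec ln -sf -t /incoming {} +   # create/refresh links
        find /incoming -maxdepth 1 -type l -mtime +5 -delete                 # drop links older than 5 days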

    Read the article

  • Parsing result of Diff in Shell Script

    - by Saobi
    I want to compare two files in my shell script and see whether they are the same. My way is: diff_output=`diff ${dest_file} ${source_file}` if [ some_other_condition -o ${diff_output} -o some_other_condition2 ] then .... fi Basically, if they are the same, ${diff_output} should contain nothing and the above test should evaluate to true. But when I run my script, it says [: too many arguments on the if [....] line. Any ideas?
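
    The error comes from the unquoted ${diff_output} being word-split inside [ ]. A hedged sketch of two alternatives, using the question's variables: test diff's exit status directly, or quote the captured output and test whether it is empty.

        # 1) use diff's exit status (0 means no differences):
        if diff -q "$dest_file" "$source_file" > /dev/null; then
            echo "files are identical"
        fi

        # 2) or keep the captured text, but quote it and test for emptiness:
        diff_output=$(diff "$dest_file" "$source_file")
        if [ -z "$diff_output" ]; then
            echo "files are identical"
        fi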

    Read the article

  • How do I conditionally redirect the output of a command to /dev/null?

    - by Lawrence Johnston
    I have a script. I would like to give this script a quiet mode and a verbose mode. This is the equivalent of: if $verbose then redirect="> /dev/null" fi echo "Verbose mode enabled" $redirect # This doesn't work because the redirect isn't evaluated. I'd really like a better way of doing this than writing if-elses for every statement affected. eval could work, but has obvious side effects on other variables.
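
    One pattern that avoids per-statement if/else and eval (a sketch; some_command stands for whatever should respect the flag, and $verbose is assumed to hold the string "true" when enabled): open an extra file descriptor that points either at stdout or at /dev/null, then send the affected output there.

        if [ "$verbose" = true ]; then
            exec 3>&1            # fd 3 mirrors stdout
        else
            exec 3>/dev/null     # fd 3 discards everything
        fi

        echo "Verbose mode enabled" >&3
        some_command >&3         # any statement that should honour quiet mode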

    Read the article

  • find: What's up with basename and dirname?

    - by temp2290
    I'm using find for a task and I noticed that when I do something like this: find `pwd` -name "file.ext" -exec echo $(dirname {}) \; it gives you only dots for each match. When you s/dirname/basename in that command, you get the full pathnames. Am I screwing something up here, or is this expected behavior? I'm used to basename giving you the name of the file (in this case "file.ext") and dirname giving you the rest of the path.
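
    A hedged explanation that matches the behaviour shown: $(dirname {}) and $(basename {}) are expanded by the shell before find ever runs, so dirname sees the literal string "{}" and returns ".", while basename returns "{}", which find then substitutes with the full path. A sketch that defers the dirname call until find has a real path:

        find "$PWD" -name "file.ext" -exec sh -c 'dirname "$1"' _ {} \;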

    Read the article

  • Linux: programmatically setting a permanent environment variable

    - by Richard
    Hello all, I am writing a little install script for some software. All it does is unpack a target tar, and then I want to permanently set some environment variables - principally the location of the unpacked libs - and update $PATH. Do I need to programmatically edit the .bashrc file, adding the appropriate entries to the end for example, or is there another way? What's standard practice? Thanks
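
    Appending to ~/.bashrc (or ~/.profile, or a file under /etc/profile.d for system-wide installs) is the usual approach. A hedged sketch; MYSOFT_HOME and the install path are made-up names for illustration:

        install_dir="$HOME/mysoftware"                 # wherever the tar was unpacked
        if ! grep -q 'MYSOFT_HOME' "$HOME/.bashrc" 2>/dev/null; then
            {
                echo "export MYSOFT_HOME=\"$install_dir\""
                echo "export PATH=\"\$MYSOFT_HOME/bin:\$PATH\""
            } >> "$HOME/.bashrc"
        fi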

    Read the article
