Search Results

Search found 9551 results on 383 pages for 'john shell'.

  • How to Detect that Current (Bash) Shell is a (Vi/Vim) Subshell?

    - by Jeet
    From inside Vi/Vim, I can type :shell to drop into a shell. Is there any way to detect that I am in a Vi-spawned subshell? The environment variable SHLVL is 2, but that does not tell me explicitly that I am in a Vi/Vim-spawned subshell. On OS X, the following variables are also set: MYVIMRC, VIMRUNTIME, VIM. How universal are these? Can I count on them being set on any system, if and only if I am in a Vi/Vim subshell? If not, is there any portable, robust and hopefully efficient way to tell that I am in a Vi/Vim subshell? Thanks.
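
    A sketch of one heuristic, assuming (as on most modern Vim builds) that Vim exports VIM, VIMRUNTIME and MYVIMRC into the environment of a :shell child; classic vi and some builds set none of them, so this is a probe, not a guarantee:

        # Heuristic check in sh/bash: Vim exports these variables to
        # children of :shell on most builds; classic vi sets none of them.
        if [ -n "$VIMRUNTIME" ] || [ -n "$MYVIMRC" ]; then
            echo "probably inside a Vim :shell subshell"
        else
            echo "probably a plain shell"
        fi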

  • mysql -e option with variable data - Pass the variable value to insert sql statement in shell script

    - by Ahn
    The following shell script is not inserting the data into the table. How do I pass the variable value to the insert SQL statement in a shell script?

        id=0
        while true
        do
          id=`expr $id + 1`
          mysql -u root -ptest --socket=/data/mysql1/mysql.sock -e 'insert into mytest1.mytable2(id,name) values (' $id ',"testing");'
          echo $id >> id.txt
        done

    I have modified the script as below and tried again, and am still having the issue:

        id=0
        while true
        do
          id=`expr $id + 1`
          # mysql -u root -ptest --socket=/data/mysql1/mysql.sock1 -e 'insert into mytest1.mytable1(name) values ("amma");'
          mysql -u root -ptest --socket=/data/mysql1/mysql.sock -e 'insert into mytest1.mytable2(id,name) values ( $id ,"testing");'
          echo $id >> id.txt
        done

    Error:

        $ ./insert
        ERROR 1054 (42S22) at line 1: Unknown column '$id' in 'field list'
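
    A minimal sketch of the usual fix: the single quotes around the -e argument stop the shell from expanding $id, so swap the quote types (double quotes outside for the shell, single quotes inside for the SQL string literal):

        id=0
        while true
        do
          id=`expr $id + 1`
          # double quotes let the shell expand $id before mysql sees it
          mysql -u root -ptest --socket=/data/mysql1/mysql.sock \
            -e "insert into mytest1.mytable2(id,name) values ($id, 'testing');"
          echo $id >> id.txt
        done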

  • Read a variable in bash with a default value

    - by rmarimon
    I need to read a value from the terminal in a bash script, and I would like to provide a default value that the user can change:

        Please enter your name: Ricardo^

    In this script the prompt is "Please enter your name: ", the default value is "Ricardo", and the cursor (^) would sit after the default value. Is there a way to do this in a bash script?
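
    A minimal sketch, assuming bash 4.0 or later (read's -i option, which pre-fills the editable readline buffer, was added in 4.0):

        # -e enables readline editing, -i supplies the editable default
        read -e -i "Ricardo" -p "Please enter your name: " name
        echo "Hello, $name"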

  • AWK scripting: how to remove the field separator using awk

    - by anil-1985
    I need the following output:

        ONGC044
        ONGC043
        ONGC042
        ONGC041
        ONGC046
        ONGC047

    from this input:

        Medium Label  Medium ID                    Free Blocks
        ===============================================================================
        [ONGC044] ECCPRDDB_FS_43  ac100076:4aed9b39:44f0:0001  195311616
        [ONGC043] ECCPRDDB_FS_42  ac100076:4aed9b1d:44e8:0001  195311616
        [ONGC042] ECCPRDDB_FS_41  ac100076:4aed9af4:4469:0001  195311616
        [ONGC041] ECCPRDDB_FS_40  ac100076:4aed9ad3:445e:0001  195311616
        [ONGC046] ECCPRDDB_FS_44  ac100076:4aedd04a:68c6:0001  195311616
        [ONGC047] ECCPRDDB_FS_45  ac100076:4aedd4a0:6bf5:0001  195311616
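
    A minimal sketch using the square brackets themselves as the field separator, so the label lands in $2:

        # FS is the character class "] or [": on bracketed lines,
        # field 2 is the label between the brackets
        awk -F'[][]' '/^\[/ { print $2 }' input.txt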

  • extract payload from tcpflow output

    - by Felipe Alvarez
    Tcpflow outputs a bunch of files, many of which are HTTP responses from a web server. Inside, they contain HTTP headers, including Content-type: and other important ones. I'm trying to write a script that can extract just the payload data (i.e. image/jpeg, text/html, et al.) and save it to a file [optional: with an appropriate name and file extension]. The EOL chars are \r\n (CRLF), which in my experience makes this difficult with the usual GNU tools. I've been trying something along the lines of:

        sed /HTTP/,/^$/d

    to delete all text from the beginning of HTTP (incl.) to the end of \r\n\r\n (incl.), but I have had no luck. I'm looking for help from anyone with good experience in sed and/or awk. I have zero experience with Perl; I'd prefer to use common GNU command-line utilities for this. Find a sample tcpflow output file here. Thanks, Felipe
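
    A sketch of one approach: the header/body separator line holds only \r, which ^$ never matches; matching a whitespace-only line does, and works in POSIX sed (for binary payloads a line-oriented tool is risky, but as a sketch):

        # delete everything from line 1 through the first blank(ish) line,
        # i.e. the CRLF-terminated header block; what remains is the payload
        sed '1,/^[[:space:]]*$/d' flowfile > payload.bin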

  • Check for messages in Apache Qpid

    - by c0mrade
    Is it possible to check for messages in a Qpid queue from a unix/windows console? Here is how I check via the GUI: http://i47.tinypic.com/pbu5d.gif I can see all the info from the Qpid JMX Management Console; is there something close to this that I can use from a console?
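
    A sketch assuming the Apache Qpid command-line management tools are installed alongside the broker; exact tool names and option syntax vary by Qpid version and packaging:

        # list queues with message depth and enqueue/dequeue counts
        # on a broker reachable at localhost:5672
        qpid-stat -q localhost:5672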

  • Trouble with piping through sed

    - by Joel
    I am having trouble piping through sed. Once I have piped output to sed, I cannot pipe the output of sed elsewhere.

        wget -r -nv http://127.0.0.1:3000/test.html

    Outputs:

        2010-03-12 04:41:48 URL:http://127.0.0.1:3000/test.html [99/99] -> "127.0.0.1:3000/test.html" [1]
        2010-03-12 04:41:48 URL:http://127.0.0.1:3000/robots.txt [83/83] -> "127.0.0.1:3000/robots.txt" [1]
        2010-03-12 04:41:48 URL:http://127.0.0.1:3000/shop [22818/22818] -> "127.0.0.1:3000/shop.29" [1]

    I pipe the output through sed to get a clean list of URLs:

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g'

    Outputs:

        http://127.0.0.1:3000/test.html
        http://127.0.0.1:3000/robots.txt
        http://127.0.0.1:3000/shop

    I would like to then dump the output to file, so I do this:

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g' > /tmp/DUMP_FILE

    I interrupt the process after a few seconds and check the file, yet it is empty. Interesting, the following yields no output (same as above, but piping sed output through cat):

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g' | cat

    Why can I not pipe the output of sed to another program like cat?
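
    A sketch of the usual culprit and fix: when its stdout is not a terminal, sed block-buffers, so nothing appears downstream until the buffer fills; GNU sed's -u (--unbuffered) flushes per line (BSD sed spells it -l):

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 \
          | grep --line-buffered -v ERROR \
          | sed -u 's/^.*URL:\([^ ]*\).*/\1/g' > /tmp/DUMP_FILE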

  • input of while loop to come from output of `command`

    - by Felipe Alvarez
    I used to have this, but I don't want to write to the disk:

        pcap="somefile.pcap"
        tcpdump -n -r $pcap > all.txt
        while read line; do
          ARRAY[$c]="$line"
          c=$((c+1))
        done < all.txt

    The following fails to work; I would prefer something like:

        pcap="somefile.pcap"
        while read line; do
          ARRAY[$c]="$line"
          c=$((c+1))
        done < $( tcpdump -n -r "$pcap" )

    Too few results on Google (it doesn't understand what I want to find :( ). I'd like to keep it Bourne-compatible (/bin/sh), but it doesn't have to be.
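
    A minimal sketch, assuming bash rather than plain /bin/sh (process substitution and arrays are both bashisms; a strictly Bourne-compatible script could not keep the array anyway):

        pcap="somefile.pcap"
        c=0
        # < <(...) feeds the loop from the command without a temp file,
        # and without the subshell that a plain pipe would introduce
        while IFS= read -r line; do
          ARRAY[$c]=$line
          c=$((c+1))
        done < <(tcpdump -n -r "$pcap")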

  • Anyone have a database of file extensions & icons to go with the extensions?

    - by neddy
    OK, I'm developing some software which requires file icons to display lists of files on a computer. I don't want to use the system ExtractAssociatedIcon APIs; I'd rather load the icons for the file extensions out of a database (as some systems may not have certain file types associated, etc.). Does anyone have a database of file extensions and icons to go with the extensions that I can use? Cheers in advance,

  • How can I get the associated ref path for a git SHA?

    - by andreb
    Hi, I want to be able to pass anything to a git command (maybe it's a SHA, maybe it's just something like "origin/master" or "devel/experimental" etc.) and have git tell me the ref path of the branch that the passed something lives in, e.g.

        <git_command> 0dc27819b8e9   => output: refs/heads/master
        <git_command> xyz/test       => output: refs/remotes/xyz/master

    I've been looking at git show, git log and git rev-parse, and apart from --pretty=format:%d I couldn't find anything (the --pretty=format:%d output is quite strange, with lots of free space and empty lines, and sometimes more than one ref path bunched together on one line). There has to be a better way? Thanks for reading. Andre
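
    A sketch of two plumbing commands that cover the common cases; exact output varies by git version:

        # for a ref name: print its full symbolic path
        git rev-parse --symbolic-full-name origin/master
        # -> refs/remotes/origin/master

        # for a bare SHA: name it relative to the nearest ref
        # (prints a short name such as "master", not the full refs/... path)
        git name-rev --name-only 0dc27819b8e9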

  • How to add a function to this program and call that function from the command line

    - by user336291
    a#include "smallsh.h" /*include file for example*/ /*program buffers and work pointers*/ static char inpbuf[MAXBUF], tokbuf[2*MAXBUF], *ptr = inpbuf, *tok = tokbuf; userin(p) /*print prompt and read a line*/ char *p; { int c, count; /*initialization for later routines*/ ptr = inpbuf; tok = tokbuf; /*display prompt*/ printf("%s ",p); for(count = 0;;) { if((c = getchar()) == EOF) return(EOF); if(count<MAXBUF) inpbuf[count++] = c; if(c == '\n' && count <MAXBUF) { inpbuf[count] = '\0'; return(count); } /*if line too long restart*/ if(c == '\n') { printf("smallsh:input line too long\n"); count = 0; printf("%s",p); } } } gettok(outptr) /*get token and place into tokbuf*/ char **outptr; { int type; *outptr = tok; /*strip white space*/ for(;*ptr == ' ' || *ptr == '\t'; ptr++) ; *tok++ = *ptr; switch(*ptr++) { case '\n': type = EOL; break; case '&': type = AMPERSAND; break; case ';': type = SEMICOLON; break; case '#': type = POUND; break; default: type = ARG; while(inarg(*ptr)) *tok++ = *ptr++; } *tok++ = '\0'; return(type); } static char special[]= {' ', '\t', '&', ':', '\n', '\0'}; inarg(c) /*are we in an ordinary argument*/ char c; { char *wrk; for(wrk = special;*wrk != '\0';wrk++) if(c == *wrk) return(0); return(1); } #include "smallsh.h" procline() /*process input line*/ { char *arg[MAXARG+1]; /*pointer array for runcommand*/ int toktype; /*type of token in command*/ int narg; /*number of arguments so far*/ int type; /*FOREGROUND or BACKGROUND*/ for(narg = 0;;) { /*loop FOREVER*/ /*take action according to token type*/ switch(toktype = gettok(&arg[narg])) { case ARG: if(narg<MAXARG) narg++; break; case EOL: case SEMICOLON: case AMPERSAND: case POUND: type = (toktype == AMPERSAND) ? BACKGROUND : FOREGROUND; if(narg!=0) { arg[narg] = NULL; runcommand(arg, type); } if((toktype == EOL)||(toktype=POUND)) return; narg = 0; break; } } } #include "smallsh.h" /*execute a command with optional wait*/ runcommand(cline,where) char **cline; int where; { int pid, exitstat, ret; if((pid = fork()) <0) { perror("smallsh"); return(-1); } if(pid == 0) { /*child*/ execvp(*cline, cline); perror(*cline); exit(127); } /*code for parent*/ /*if background process print pid and exit*/ if(where == BACKGROUND) { printf("[Process id %d]\n", pid); return(0); } /*wait until process pid exists*/ while( (ret=wait(&exitstat)) != pid && ret != -1) ; return(ret == -1 ? -1 : exitstat); } #include "smallsh.h" char *prompt = "Command>"; /*prompt*/ main() { while(userin(prompt) != EOF) procline(); }

  • ksh: Iterate through a range

    - by sgreeve
    How can I iterate through a simple range of ints using a for loop in ksh? For example, my script currently does this...

        for i in 1 2 3 4 5 6 7
        do
          #stuff
        done

    ...but I'd like to extend the range way above 7. Is there a better syntax? Thanks!
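
    A minimal sketch, assuming ksh93 (the C-style arithmetic for loop; ksh88 would need a while loop with a counter instead):

        for (( i = 1; i <= 100; i++ ))
        do
          echo "$i"   # stuff
        done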

  • Iterating over each line of ls -l output

    - by Ivan
    I want to iterate over each line in the output of:

        ls -l /some/dir/*

    Right now I'm trying:

        for x in `ls -l $1`; do
          echo $x
        done

    however, this iterates over each element in the line separately, so for lines like

        -r--r----- 1 ivanevf eng 1074 Apr 22 13:07 File1
        -r--r----- 1 ivanevf eng 1074 Apr 22 13:17 File2

    I get one field at a time. I want to iterate over each line as a whole, though. How do I do that? Thanks.
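
    A sketch of the standard idiom: pipe into a while read loop, which consumes one whole line per iteration instead of word-splitting everything (with the usual caveat that parsing ls output is fragile for unusual filenames):

        ls -l /some/dir | while IFS= read -r line; do
          echo "$line"
        done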

  • echo value inside a variable?

    - by Kimi
    Given:

        x=102
        y=x

    when I echo $y it gives x:

        echo $y    # prints: x -- and not 102

    and when I echo $x it gives 102. Let's say I don't know what is inside y, and I want the value of x to be echoed using y. Something like this:

        a=`echo $(echo $y)`
        echo $a

    The answer should be 102.
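
    A minimal sketch of two common spellings of indirect expansion, assuming bash for the ${!var} form:

        x=102
        y=x
        echo "${!y}"        # bash indirect expansion -> 102
        eval echo "\$$y"    # portable Bourne equivalent -> 102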

  • GUNZIP / Extract file "portion by portion"

    - by Dave
    Hi. I'm on a shared server with restricted disk space, and I've got a gz file that expands into a HUGE file, more than I have space for. How can I extract it "portion" by "portion" (let's say 10 MB at a time), and process each portion, without extracting the whole thing even temporarily? No, this is just ONE huge compressed file, not a set of files, please...
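
    A sketch of the streaming approach: gunzip -c decompresses to stdout, so a downstream command can consume the data in bounded chunks without the full file ever landing on disk; the consumer name and chunk size below are placeholders:

        # stream-process the whole file with no temporary copy
        gunzip -c huge.gz | ./process_stdin    # hypothetical consumer reading stdin

        # or peel off just the first ~10 MB of uncompressed data
        gunzip -c huge.gz | head -c 10485760 > first_10mb.part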

  • bash "map" equivalent: run command on each file

    - by Claudiu
    I often have a command that processes one file, and I want to run it on every file in a directory. Is there any built-in way to do this? For example, say I have a program data which outputs an important number about a file:

        ./data foo
        137
        ./data bar
        42

    I want to run it on every file in the directory in some manner like this:

        map data `ls *`
        ls * | map data

    to yield output like this:

        foo: 137
        bar: 42
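
    A minimal sketch without extra tooling; a plain for loop plays the role of map and produces the "name: value" lines:

        for f in *; do
          printf '%s: %s\n' "$f" "$(./data "$f")"
        done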

  • script to delete all lines starting from a word, except the last line

    - by akvikram
    How do I delete all lines below a word, except the last line, in a file? Suppose I have a file which contains:

        | 02/04/2010 07:24:20 | 20-24 | 26 | 13 | 2.60 |
        | 02/04/2010 07:24:25 | 25-29 | 6 | 3 | 0.60 |
        +---------------------+-------+------------+----------+-------------+
        02-04-2010-07:24 --- ER GW 03
        +---------------------+-------+------------+----------+-------------+
        | date                | sec   | BOTH_MO_MT | MO_or_MT | TPS_PER_SEC |
        +---------------------+-------+------------+----------+-------------+
        | 02/04/2010 07:00:00 | 00-04 | 28 | 14 | 2.80 |
        | 02/04/2010 07:00:05 | 05-09 | 27 | 14 | 2.70 |
        ...
        END OF TPS PER 5 REPORT

    I need to delete all contents from "02-04-2010-07:24 --- ER GW 03" except "END OF TPS PER 5 REPORT" and save the file. This has to be done for around 700 files. All files have the same format, with a datemonthday filename.
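
    A sketch assuming GNU sed (for in-place -i) and that the "--- ER GW" marker line is a stable pattern across the 700 files (the *.txt glob is a placeholder); the range deletes everything from the marker onward except the closing report line:

        for f in *.txt; do   # hypothetical file glob
          # delete the range start..END marker, but keep the END line itself
          sed -i '/--- ER GW /,/END OF TPS PER 5 REPORT/{/END OF TPS PER 5 REPORT/!d}' "$f"
        done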

  • How to run command in the background and notify me via email when done

    - by Abs
    Hello all, I have the following command which will take ages to run (a couple of hours). I would like to make it a background process and have it send me an email when it's done. As the cherry on top, any errors it encounters should be written to a text file:

        find . -type f -name "*.rm" -exec ./rm2mp3.sh \{} \; -exec rm \{} \;

    How can I do this with my above command?
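
    A sketch assuming a working mail/mailx setup on the host and a placeholder address; the whole job runs in a backgrounded subshell, stderr is collected to a file, and the mail goes out when find finishes (add nohup or disown if the login session may close first):

        ( find . -type f -name "*.rm" -exec ./rm2mp3.sh {} \; -exec rm {} \; \
            2> /tmp/rm2mp3.errors.txt
          mail -s "rm2mp3 batch finished" you@example.com < /tmp/rm2mp3.errors.txt
        ) &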

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rains, wind and snow - have surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers in a tunnel under the English Channel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had passed. Then came Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history.

    Such extreme events challenge insurance companies' ability to service their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets.

    These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront of setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT.

    Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to be continued throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence.

    Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field - as well as customer-focused self-service portals and multichannel alerts, are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which can often reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud.

    Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front-line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top.

    An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judges the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage - especially the end customer.

    One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even while they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adjusted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling anywhere access, on-the-fly reassignment of resources and rapid training to augment the workforce. You can learn more about the Motorists experience by watching this video.

    John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

  • What is the difference between "su --command" and "su --session-command"?

    - by oliver
    Running

        # su - oliver --command bash

    gives a shell but also prints the warning bash: no job control in this shell, and indeed Ctrl+Z and fg/bg don't work in that shell. Running

        # su - oliver --session-command bash

    gives a shell without printing the warning, and job control indeed works. The suggestion to use --session-command comes from "Starting a shell from scripts using su results in 'no job control in this shell'", which states "[a security fix for su] changed the behavior of the -c option and disables job control inside the called shell". But I still don't quite understand this. When should one use --command and when should one use --session-command? Is --command (aka -c) more secure? Or should one always use --session-command, and --command is just left in for backwards compatibility? FWIW, I'm using RHEL 6.4.
