Search Results

Search found 1799 results on 72 pages for 'yahoo pipes'.

Page 14 of 72

  • How to use YQL to merge 2 RSS feeds sorted by pubDate?

    - by jnman
    Seeing that YQL is being promoted as a good way to do things, I was curious as to how to use YQL to fetch and merge 2 different feeds into one (sorted by pubDate). It's pretty trivial to fetch 2 feeds but it turns out that the feeds are just concatenated together and not merged. Here's the sample code:

        select channel.title, channel.link, channel.item.title, channel.item.link
        from xml
        where url in (
            'http://code.flickr.com/blog/feed/rss/',
            'http://feeds.delicious.com/v2/rss/codepo8?count=15',
            'http://www.stevesouders.com/blog/feed/rss',
            'http://www.yqlblog.net/blog/feed/',
            'http://www.quirksmode.org/blog/index.xml'
        )

    Read the article

  • Capturing exit status from STDIN in Perl

    - by zigdon
    I have a Perl script that is run with a command like this:

        /path/to/binary/executable | /path/to/perl/script.pl

    The script does useful things to the output of the binary, then exits once STDIN runs out (<STDIN> returns undef). This is all well and good, except if the binary exits with a non-zero code. From the script's point of view the input simply ended cleanly, so it cleans up and exits with a code of 0. Is there a way for the Perl script to see what the exit code was? Ideally, I'd want something like this to work:

        # close STDIN, and if there was an error, exit with that same error.
        unless (close STDIN) {
            print "error closing STDIN: $! ($?)\n";
            exit $?;
        }

    But unfortunately, this doesn't seem to work:

        $ (date; sleep 3; date; exit 1) | /path/to/perl/script.pl /tmp/test.out
        Mon Jun 7 14:43:49 PDT 2010
        Mon Jun 7 14:43:52 PDT 2010
        $ echo $?
        0

    Is there a way to have it Do What I Mean?

    Read the article

  • Pipe data from InputStream to OutputStream in Java

    - by Wangnick
    Dear all, I'd like to send a file contained in a ZIP archive unzipped to an external program for further decoding and to read the result back into Java.

        ZipInputStream zis = new ZipInputStream(new FileInputStream(ZIPPATH));
        Process decoder = new ProcessBuilder(DECODER).start();
        ???
        BufferedReader br = new BufferedReader(new InputStreamReader(
                decoder.getInputStream(), "us-ascii"));
        for (String line = br.readLine(); line != null; line = br.readLine()) {
            ...
        }

    What do I need to put into ??? to pipe the zis content to decoder.getOutputStream()? I guess a dedicated thread is needed, as the decoder process might block when its output is not consumed.

    Read the article

  • Detecting death of spawned process using Window CRT

    - by Michael Tiller
    Executive summary: I need a way to determine whether a Windows process I've spawned via _spawnl, and am communicating with using FDs from _pipe, has died.

    Details: I'm using the low-level CRT functions in Windows (_eof, _read) to communicate with a process that was spawned via a call to _spawnl (with the P_NOWAIT flag). I'm using _pipe to create file descriptors to communicate with this spawned process and passing those descriptors (the FD numbers) to it on the command line. It is worth mentioning that I don't control the spawned process; it's a black box to me.

    It turns out that the process we are spawning occasionally crashes. I'm trying to make my code robust to this by detecting the crash. Unfortunately, I can't see a way to do this. It seems reasonable to me to expect that a call to _eof or _read on one of those descriptors would return an error status (-1) if the process had died. Unfortunately, that isn't the case. It appears that the descriptors have a life of their own, independent of the spawned process. So even though the process on the other end is dead, I get no error status on the file descriptor I'm using to communicate with it. I've got the PID for the nested process (returned from the _spawnl call) but I don't see anything I can do with that.

    My code works really well except for one thing: I can't detect whether the spawned process is simply busy computing me an answer or has died. If I can use the information from _pipe and _spawnl to determine whether the spawned process is dead, I'll be golden. Suggestions very welcome. Thanks in advance.

    UPDATE: I found a fairly simple solution and added it as the selected answer.
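
    The asker's own fix is behind the link. Purely as an illustration of what the _spawnl return value makes possible: with _P_NOWAIT the CRT returns a process handle, which can be polled without blocking. A hedged sketch (the helper name and the poll-before-each-read idea are assumptions, not the selected answer):

        #include <windows.h>
        #include <process.h>
        #include <stdint.h>

        /* Hypothetical helper: with _P_NOWAIT, _spawnl returns a value usable
         * as a Win32 process handle.  A zero-timeout wait is a non-blocking
         * "is it still alive?" check; polling it before each blocking _read
         * distinguishes "busy computing" from "crashed". */
        static int spawned_process_is_dead(intptr_t child, DWORD *exit_code)
        {
            if (WaitForSingleObject((HANDLE)child, 0) == WAIT_OBJECT_0) {
                GetExitCodeProcess((HANDLE)child, exit_code);  /* crash code, if any */
                return 1;   /* process has terminated */
            }
            return 0;       /* still running */
        }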

    Read the article

  • Pipe implementation

    - by nunos
    I am trying to implement a Linux shell that supports piping. I have already done simple commands, commands running in the background, and redirections, but piping is still missing. I have already read about it and seen some snippets of code, but still haven't been able to sort out a working solution. What I have so far:

        int fd[2];
        pid_t pid = fork();
        if (pid == -1)
            return -1;
        if (pid == 0) {
            dup2(0, fd[0]);
            execlp("sort", "sort", NULL);
        }

    I am a novice programmer, as you can probably tell, and when I am programming something I don't know much about, which is obviously the case here, I like to start with something really easy and concrete and then build from there. So, before implementing three or more commands in a pipeline, I would like to be able to run "ls names.txt | sort" or something similar, in which names.txt is a file of alphabetically unordered names. Thanks.
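
    For reference, a hedged sketch of the two-command case the question builds toward (plain "ls | sort"; swap in any two commands): the pipe is created before forking, each child redirects one end with dup2, and every process closes both descriptors so the reader eventually sees end-of-file. This is an illustration, not the answer behind the link:

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/wait.h>

        int main(void)
        {
            int fd[2];
            if (pipe(fd) == -1) { perror("pipe"); return 1; }

            if (fork() == 0) {                    /* left side: ls writes into the pipe */
                dup2(fd[1], STDOUT_FILENO);
                close(fd[0]);
                close(fd[1]);
                execlp("ls", "ls", (char *)NULL);
                perror("execlp ls");
                _exit(127);
            }
            if (fork() == 0) {                    /* right side: sort reads from the pipe */
                dup2(fd[0], STDIN_FILENO);
                close(fd[0]);
                close(fd[1]);
                execlp("sort", "sort", (char *)NULL);
                perror("execlp sort");
                _exit(127);
            }
            close(fd[0]);                         /* parent must close both ends, */
            close(fd[1]);                         /* or sort never sees EOF       */
            while (wait(NULL) > 0)
                ;
            return 0;
        }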

    Read the article

  • Why doesn't egrep's stdout go through the pipe?

    - by ccfenix
    Hi, I've got a weird problem regarding egrep and a pipe. I'm trying to filter a stream containing lines that start with a topic name, such as "TICK:this is a tick message\n". When I try to use egrep to filter it:

        ./stream_generator | egrep 'TICK' | ./topic_processor

    it seems that the topic_processor never receives any messages. However, when I use the following Python script instead:

        ./stream_generator | python filter.py --topics TICK | ./topic_processor

    everything looks fine. I guess there needs to be a 'flush' mechanism for egrep as well; is this correct? Can anyone here give me a clue? Thanks a million.

        import sys
        from optparse import OptionParser

        if __name__ == '__main__':
            parser = OptionParser()
            parser.add_option("-m", "--topics", action="store",
                              type="string", dest="topics")
            (opts, args) = parser.parse_args()
            topics = opts.topics.split(':')
            while True:
                s = sys.stdin.readline()
                for each in topics:
                    if s[0:4] == each:
                        sys.stdout.write(s)
                        sys.stdout.flush()
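
    For illustration only (not from the linked article): the Python version works because of the explicit sys.stdout.flush(). When stdout is a pipe rather than a terminal, stdio block-buffers output, so matches sit in a buffer instead of reaching the next stage; GNU grep's --line-buffered option exists for exactly this case. A minimal C stand-in making the same point, with the 4-byte "TICK" comparison borrowed from the question:

        #include <stdio.h>
        #include <string.h>

        /* Minimal stand-in for the egrep stage: without the fflush, matching
         * lines sit in stdio's block buffer whenever stdout is a pipe, which
         * is the behaviour the question observes. */
        int main(void)
        {
            char line[4096];
            while (fgets(line, sizeof line, stdin) != NULL) {
                if (strncmp(line, "TICK", 4) == 0) {
                    fputs(line, stdout);
                    fflush(stdout);   /* push each match downstream immediately */
                }
            }
            return 0;
        }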

    Read the article

  • How do I paste text into a line-by-line text filter like awk, without having stdin echo to the screen?

    - by Barton Chittenden
    I have a text in an email on a Windows box that looks something like this:

        100 some random text
        101 some more random text
        102 lots of random text, all different
        103 lots of random text, all the same

    I want to extract the numbers, i.e. the first word on each line. I've got a terminal running bash open on my Linux box... If these were in a text file, I would do this:

        awk '{print $1}' mytextfile.txt

    I would like to paste these in, and get my numbers out, without creating a temp file. My naive first attempt looked like this:

        $ awk '{print $1}'
        100 some random text
        100
        101 some more random text
        101
        102 lots of random text, all different
        103 lots of random text, all the same
        102
        103

    The buffering of stdin and stdout makes a hash of this. I wouldn't mind if stdin all printed first, followed by all of stdout; this is what would happen if I were to paste into 'sort' for example, but awk and sed are a different story. A little more thought gave me this: open two terminals, create a fifo file, read from the fifo on one terminal, write to it on another. This does in fact work, but I'm lazy. I don't want to open a second terminal. Is there a way in the shell that I can hide the text echoed to the screen when I'm passing it into a pipe, so that I paste this:

        100 some random text
        101 some more random text
        102 lots of random text, all different
        103 lots of random text, all the same

    but see this?

        $ awk '{print $1}'
        100
        101
        102
        103
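
    The echoed lines come from the terminal driver, not from awk, so the question is really about turning off terminal echo while pasting (which `stty -echo` does from the shell). Purely as an illustration of that mechanism, a hedged C sketch of a first-field filter that silences echo while it reads and restores it on exit:

        #include <stdio.h>
        #include <string.h>
        #include <termios.h>
        #include <unistd.h>

        /* Illustration only: disable terminal echo while reading pasted lines
         * (the same flag `stty -echo` toggles), print the first field of each
         * line, then restore the original settings after EOF (Ctrl-D). */
        int main(void)
        {
            struct termios saved, quiet;
            tcgetattr(STDIN_FILENO, &saved);
            quiet = saved;
            quiet.c_lflag &= ~ECHO;                    /* stop echoing pasted input */
            tcsetattr(STDIN_FILENO, TCSANOW, &quiet);

            char line[1024];
            while (fgets(line, sizeof line, stdin) != NULL)
                printf("%.*s\n", (int)strcspn(line, " \t\n"), line);

            tcsetattr(STDIN_FILENO, TCSANOW, &saved);  /* restore echo */
            return 0;
        }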

    Read the article

  • Write to a FIFO/pipe from the shell, with a timeout

    - by Tim
    I have a pair of shell programs that talk over a named pipe. The reader creates the pipe when it starts, and removes it when it exits. Sometimes, the writer will attempt to write to the pipe between the time that the reader stops reading and the time that it removes the pipe.

    reader:

        while condition; do read data <$PIPE; do_stuff; done

    writer:

        echo $data >>$PIPE

    reader:

        rm $PIPE

    When this happens, the writer will hang forever trying to open the pipe for writing. Is there a clean way to give it a timeout, so that it won't stay hung until killed manually? I know I can do

        #!/bin/sh
        # timed_write <timeout> <file> <args>
        # like "echo <args> >> <file>" with a timeout
        TIMEOUT=$1
        shift;
        FILENAME=$1
        shift;
        PID=$$
        (X=0; # don't do "sleep $TIMEOUT", the "kill %1" doesn't kill the sleep
         while [ "$X" -lt "$TIMEOUT" ]; do sleep 1; X=$(expr $X + 1); done;
         kill $PID) &
        echo "$@" >>$FILENAME
        kill %1

    but this is kind of icky. Is there a shell builtin or command to do this more cleanly (without breaking out the C compiler)?
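
    The question wants a shell-level answer (modern coreutils ships a timeout(1) command that can wrap the write), but the reason for the hang is worth spelling out: open() on a FIFO for writing blocks until a reader appears. A hedged C sketch of the non-blocking alternative, where O_NONBLOCK makes the open fail with ENXIO instead of blocking, so the writer can retry until its own deadline (the helper name and one-second retry are made up for the example):

        #include <errno.h>
        #include <fcntl.h>
        #include <string.h>
        #include <unistd.h>

        /* Hypothetical helper: give up if no reader opens the FIFO within
         * `timeout` seconds.  With O_NONBLOCK, open() for writing fails at
         * once with ENXIO when no reader is present, instead of blocking. */
        static int timed_fifo_write(const char *path, const char *msg, int timeout)
        {
            int fd = -1;
            for (int waited = 0; ; waited++) {
                fd = open(path, O_WRONLY | O_NONBLOCK);
                if (fd >= 0)
                    break;                       /* a reader has the FIFO open */
                if (errno != ENXIO && errno != ENOENT)
                    return -1;                   /* unexpected error */
                if (waited >= timeout)
                    return -1;                   /* gave up waiting for a reader */
                sleep(1);
            }
            ssize_t n = write(fd, msg, strlen(msg));
            close(fd);
            return n < 0 ? -1 : 0;
        }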

    Read the article

  • NamedPipeClientStream StreamReader problem in C++

    - by Chris Porter
    When reading from a named-pipe server using the .NET NamedPipeClientStream class, I can only get the data on the first read in C++; after that it's just an empty string every time. In C# it works every time.

        pipeClient = gcnew NamedPipeClientStream(".", "Server_OUT", PipeDirection::In);
        try {
            pipeClient->Connect();
        }
        catch(TimeoutException^ e) {
            // swallow
        }
        StreamReader^ sr = gcnew StreamReader(pipeClient);
        String^ temp;
        while (temp = sr->ReadLine()) { // = sr->ReadLine();
            Console::WriteLine("Received from server: {0}", temp);
        }
        sr->Close();

    Read the article

  • Unix Piping using Fork and Dup

    - by Jacob
    Let's say that within my program I want to execute two child processes, one to execute an "ls -al" command and then pipe that into a "wc" command and display the output on the terminal. How can I do this using pipe file descriptors? This is the code I have written so far; an example would be greatly helpful.

        int main(int argc, char *argv[])
        {
            int pipefd[2]
            pipe(pipefd2);
            if ((fork()) == 0) {
                dup2(pipefd2[1], STDOUT_FILENO);
                close(pipefd2[0]);
                close(pipefd2[1]);
                execl("ls", "ls", "-al", NULL);
                exit(EXIT_FAILURE);
            }
            if ((fork()) == 0) {
                dup2(pipefd2[0], STDIN_FILENO);
                close(pipefd2[0]);
                close(pipefd2[1]);
                execl("/usr/bin/wc", "wc", NULL);
                exit(EXIT_FAILURE);
            }
            close(pipefd[0]);
            close(pipefd[1]);
            close(pipefd2[0]);
            close(pipefd2[1]);
        }

    Read the article

  • Trouble with piping through sed

    - by Joel
    I am having trouble piping through sed. Once I have piped output to sed, I cannot pipe the output of sed elsewhere.

        wget -r -nv http://127.0.0.1:3000/test.html

    outputs:

        2010-03-12 04:41:48 URL:http://127.0.0.1:3000/test.html [99/99] -> "127.0.0.1:3000/test.html" [1]
        2010-03-12 04:41:48 URL:http://127.0.0.1:3000/robots.txt [83/83] -> "127.0.0.1:3000/robots.txt" [1]
        2010-03-12 04:41:48 URL:http://127.0.0.1:3000/shop [22818/22818] -> "127.0.0.1:3000/shop.29" [1]

    I pipe the output through sed to get a clean list of URLs:

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g'

    which outputs:

        http://127.0.0.1:3000/test.html
        http://127.0.0.1:3000/robots.txt
        http://127.0.0.1:3000/shop

    I would like to then dump the output to a file, so I do this:

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g' > /tmp/DUMP_FILE

    I interrupt the process after a few seconds and check the file, yet it is empty. Interestingly, the following yields no output (same as above, but piping sed's output through cat):

        wget -r -nv http://127.0.0.1:3000/test.html 2>&1 | grep --line-buffered -v ERROR | sed 's/^.*URL:\([^ ]*\).*/\1/g' | cat

    Why can I not pipe the output of sed to another program like cat?

    Read the article

  • Difference between piping a file to sh and calling a shell file

    - by Peter Coulton
    This is what I was trying to do:

        $ wget -qO- www.example.com/script.sh | sh

    which quietly downloads the script and prints it to stdout, which is then piped to sh. This unfortunately doesn't quite work, failing to wait for user input at various points, as well as hitting a few syntax errors. This is what actually works:

        $ wget -qOscript www.example.com/script.sh && chmod +x ./script && ./script

    But what's the difference? I'm thinking maybe piping the file doesn't execute the file, but rather executes each line individually, but I'm new to this kind of thing so I don't know.

    Read the article

  • Web service can't open named pipe - access denied

    - by Patrick
    Hi all, I've got a C++ service which provides a named pipe to clients, with a NULL SECURITY_ATTRIBUTES, as follows:

        hPipe = CreateNamedPipe(
            lpszPipename,
            PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,
            PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
            PIPE_UNLIMITED_INSTANCES,
            BUFSIZE,
            BUFSIZE,
            0,
            NULL);

    There is a DLL which uses this pipe to get services. There is a C# GUI which uses the DLL and works fine. There is a .NET web site which also uses this DLL (the exact same one, on the same PC) but always gets permission denied when it tries to open the pipe. Anyone know why this might happen and how to fix it? Also, does anyone know of a good tutorial on SECURITY_ATTRIBUTES? I haven't understood the MSDN info yet. Thanks, Patrick
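
    For illustration (an assumption about the cause, not necessarily the linked answer): with a NULL SECURITY_ATTRIBUTES the pipe gets a default DACL derived from the service's account, and that default typically does not give a web application's identity (for example an IIS app-pool account) write access. One quick experiment is to create the pipe with an explicit security descriptor whose DACL is NULL, which allows everyone; if the error goes away, the proper fix is a real DACL for the specific account. A sketch (BUFSIZE is assumed to be defined as in the question):

        #include <windows.h>

        static HANDLE create_pipe_allow_everyone(LPCTSTR name)
        {
            SECURITY_DESCRIPTOR sd;
            SECURITY_ATTRIBUTES sa;

            InitializeSecurityDescriptor(&sd, SECURITY_DESCRIPTOR_REVISION);
            SetSecurityDescriptorDacl(&sd, TRUE, NULL, FALSE);  /* NULL DACL: no restrictions */

            sa.nLength = sizeof(sa);
            sa.lpSecurityDescriptor = &sd;
            sa.bInheritHandle = FALSE;

            return CreateNamedPipe(
                name,
                PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,
                PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
                PIPE_UNLIMITED_INSTANCES,
                BUFSIZE, BUFSIZE, 0,
                &sa);                             /* explicit attributes instead of NULL */
        }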

    Read the article

  • Email pipe to PHP script working only sometimes

    - by Rixius
    I have a PHP pipe script that takes an attached *.csv file from an email and saves and parses it. When the email is sent from where it is supposed to come from, the script silently errors; however, when I take that same email and resend it from my address, it goes through just fine. Is there any simple reason it could be doing this?

    Read the article

  • How to feed a file to telnet

    - by knittl
    Hello community. To understand HTTP and headers, I played around with telnet to send requests. To avoid typing everything again and again and again, I thought I'd write a small text file with all the commands I need. My file is as simple as follows:

        GET /somefile.php HTTP/1.1
        Host: localhost

    I then try to feed it to telnet with I/O redirection:

        $ telnet localhost 80 < telnet.txt

    but all the output I get is:

        Trying ::1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.

    What am I doing wrong?

    Read the article

  • Writing/Reading struct w/ dynamic array through pipe in C

    - by anrui
    I have a struct with a dynamic array inside of it:

        struct mystruct{
            int count;
            int *arr;
        }mystruct_t;

    and I want to pass this struct down a pipe in C and around a ring of processes. When I alter the value of count in each process, it is changed correctly. My problem is with the dynamic array. I am allocating the array as such:

        mystruct_t x;
        x.arr = malloc( howManyItemsDoINeedToStore * sizeof( int ) );

    Each process should read from the pipe, do something to that array, and then write it to another pipe. The ring is set up correctly; there's no problem there. My problem is that all of the processes, except the first one, are not getting a correct copy of the array. I initialize all of the values to, say, 10 in the first process; however, they all show up as 0 in the subsequent ones.

        for( j = 0; j < howManyItemsDoINeedToStore; j++ ){
            x.arr[j] = 10;
        }

        Initially:    10 10 10 10 10
        After Proc 1:  9 10 10 10 15
        After Proc 2:  0  0  0  0  0
        After Proc 3:  0  0  0  0  0
        After Proc 4:  0  0  0  0  0
        After Proc 5:  0  0  0  0  0
        After Proc 1:  9 10 10 10 15
        After Proc 2:  0  0  0  0  0
        After Proc 3:  0  0  0  0  0
        After Proc 4:  0  0  0  0  0
        After Proc 5:  0  0  0  0  0

    Now, if I alter my code to, say,

        struct mystruct{
            int count;
            int arr[10];
        }mystruct_t;

    everything is passed correctly down the pipe, no problem. I am using read and write, in C:

        write( STDOUT_FILENO, &x, sizeof( mystruct_t ) );
        read( STDIN_FILENO, &x, sizeof( mystruct_t ) );

    Any help would be appreciated. Thanks in advance!
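
    For illustration (a sketch of the usual explanation, not necessarily the answer behind the link): writing sizeof(mystruct_t) bytes sends the count and the pointer value, but not the ints the pointer refers to, and a pointer is only meaningful inside the process that malloc'd it. Each process has to send the count followed by the array contents, along these lines:

        #include <stdlib.h>
        #include <unistd.h>

        struct mystruct {
            int count;
            int *arr;
        };

        /* Send the fixed-size part, then the data the pointer refers to.
         * (Error handling and short-read/short-write loops omitted.) */
        static void send_mystruct(int fd, const struct mystruct *s)
        {
            write(fd, &s->count, sizeof s->count);
            write(fd, s->arr, s->count * sizeof *s->arr);
        }

        /* Receive the count first, so we know how much array data follows. */
        static void recv_mystruct(int fd, struct mystruct *s)
        {
            read(fd, &s->count, sizeof s->count);
            s->arr = malloc(s->count * sizeof *s->arr);
            read(fd, s->arr, s->count * sizeof *s->arr);
        }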

    Read the article

  • Google Maps show location based on user inputs

    - by Kiran Badi
    Hi, I have a web application with 4 fields in a form: street name, nearest street, zip, state and country. Based on these I need to show the location of the address in Google Maps. I have to implement this functionality for Google Maps, Bing and Yahoo maps. Can someone point me to the correct APIs for these? This is my first implementation of maps, so I need some inputs. I'd appreciate it if someone could point me in the right direction.

    Read the article

  • Pipe multiple files (gz) into a C program

    - by monkeyking
    I've written a C program that works when I pipe data into it via stdin, like:

        gunzip -c IN.gz | ./a.out

    If I want to run my program on a list of files, I can do something like:

        for i in `cat list.txt`
        do
            gunzip -c $i | ./a.out
        done

    But this will start my program 'number of files' times. I'm interested in piping all the files into the same process run, like doing:

        for i in `cat list.txt`
        do
            gunzip -c $i >> tmp
        done
        cat tmp | ./a.out

    Thanks.

    Read the article

  • How do I check if my program has data piped into it?

    - by monkeyking
    I'm writing a program that should read input via stdin, so I have the following construct:

        FILE *fp = stdin;

    But this just hangs if the user hasn't piped anything into the program. How can I check whether the user is actually piping data into my program, like:

        gunzip -c file.gz | ./a.out    # should work
        ./a.out                        # should exit the program with a nice msg

    Thanks.
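
    A hedged sketch of the usual approach (not necessarily the answer behind the link): isatty() reports whether stdin is still connected to a terminal, which distinguishes `./a.out` typed on its own from `something | ./a.out` or `./a.out < file`:

        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* If stdin is still a terminal, nothing was piped or redirected in. */
            if (isatty(STDIN_FILENO)) {
                fprintf(stderr, "usage: gunzip -c file.gz | ./a.out\n");
                return 1;
            }

            FILE *fp = stdin;            /* same construct as in the question */
            int c, bytes = 0;
            while ((c = fgetc(fp)) != EOF)
                bytes++;                 /* stand-in for the real processing */
            printf("read %d bytes from the pipe\n", bytes);
            return 0;
        }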

    Read the article

  • QLocalSocket and QLocalServer in browser plugins

    - by kambamsu
    Hi, I have a simple question: does the IPC mechanism in Qt work when used for developing browser plugins? The reason I ask is that I can easily get QLocalSocket and QLocalServer communication to work in a Qt application, but when I write a similar piece of code in a browser plugin DLL, I see that the server does not accept a new connection at all. This is what I do in the server:

        server = new QLocalServer(this);
        if( !server->listen("myServer")) {
            writeFile("Listen failed");
        }
        connect(server, SIGNAL(newConnection()), this, SLOT(handleConn()), Qt::QueuedConnection);

    and this is what I do in the client:

        client = new QLocalSocket(this);
        client->abort();
        QObject::connect(client, SIGNAL(connected()), this, SLOT(connClient()), Qt::QueuedConnection);
        client->connectToServer("myServer");

    After I call connectToServer, my client emits the connected() signal and the connClient() slot is called. But on the server side, there is no signal emitted; it doesn't seem to be receiving any connection at all. Any help would be appreciated. Thanks.

    Read the article
