Search Results

Search found 22000 results on 880 pages for 'worker process'.

  • Visual Studio Plug-in that can tell the Application Pool name of w3wp.exe when debugging

    - by Colin Niu
    Is there any plug-in for Visual Studio that can display the associated Application Pool name for the w3wp processes when debugging them with "Attach to Process..."? Usually I have to do the following before debugging: run c:\Windows\system32\inetsrv\appcmd list wp to get the process id for the Application Pool I want to debug, and then attach to it in the Attach to Process window. It would be very pleasant if a plug-in could do this automatically, but I haven't found any such thing after Googling.
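
    In the meantime, the manual lookup can at least be kept to one command. appcmd prints one line per worker process, pairing each PID with its pool, so the output maps straight onto the PID list in the Attach to Process dialog (the pool names below are placeholders, and the exact output format may vary by IIS version):

        C:\> c:\Windows\system32\inetsrv\appcmd list wp
        WP "3604" (applicationPool:DefaultAppPool)
        WP "5172" (applicationPool:MyAppPool)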

  • Run bat file in Java and wait 2

    - by Savvas Dalkitsis
    This is a follow-up to my other question: http://stackoverflow.com/questions/2434125/run-bat-file-in-java-and-wait The reason I am posting this as a separate question is that the one I already asked was answered correctly; from the research I did, my problem is unique to my case, so I decided to create a new question. Please read that question before continuing with this one, as they are closely related. Running the proposed code blocks the program at the waitFor invocation. After some research I found that waitFor blocks if your process has output that needs to be processed, so you should first empty the output stream and the error stream. I did both of those things, but my method still blocked. I then found a suggestion to simply loop, waiting for the exitValue method to return the exit value of the process and handling the exception thrown while it has not exited, pausing for a brief moment each time so as not to consume all the CPU. I did this:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class Test {
            public static void main(String[] args) {
                try {
                    Process p = Runtime.getRuntime().exec(
                            "cmd /k start SQLScriptsToRun.bat"
                            + " -UuserName -Ppassword"
                            + " projectName");
                    final BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream()));
                    final BufferedReader error = new BufferedReader(new InputStreamReader(p.getErrorStream()));
                    new Thread(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                while (input.readLine() != null) {}
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                        }
                    }).start();
                    new Thread(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                while (error.readLine() != null) {}
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                        }
                    }).start();
                    int i = 0;
                    boolean finished = false;
                    while (!finished) {
                        try {
                            i = p.exitValue();
                            finished = true;
                        } catch (IllegalThreadStateException e) {
                            e.printStackTrace();
                            try {
                                Thread.sleep(500);
                            } catch (InterruptedException e1) {
                                e1.printStackTrace();
                            }
                        }
                    }
                    System.out.println(i);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    but my process will not end! I keep getting this error: java.lang.IllegalThreadStateException: process has not exited Any ideas why my process will not exit? Or do you have any libraries to suggest that handle executing batch files properly and wait until the execution is finished?
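
    One likely culprit (an assumption, since the linked question isn't shown here): cmd /k tells the spawned cmd.exe to Keep running after the batch file finishes, and start detaches yet another window, so the process Java is actually watching never exits. A minimal sketch of the usual fix is to drop start and use /c, which makes cmd terminate when the script does:

        Process p = Runtime.getRuntime().exec(
                "cmd /c SQLScriptsToRun.bat"
                + " -UuserName -Ppassword"
                + " projectName");
        // stream-draining threads as above, then:
        int exit = p.waitFor();   // returns once cmd /c terminates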

  • PHP forking and mysql database connection problem

    - by user298819
    I am now trying to do forking in PHP. I would like to do some queries and updates in the child process. The problem is that whenever a child process finishes, it closes the database connection, which makes the other queries fail. The following is my sample code:

        #!/usr/local/bin/php
        <?php
        set_time_limit(0); # forever program!
        $db = mysql_connect("server", "user", "pwd");
        mysql_select_db("schema", $db);
        $sql = "query";
        $res = mysql_query($sql, $db);
        while ($rows = mysql_fetch_array($res)) {
            $rv = pcntl_fork();
            if ($rv == -1) {
                echo "forking failed";
            } elseif ($rv) {
                echo "parent process $rv\n";
                $db = mysql_connect("192.168.8.112", "zwmuser", "zwmuser", true);
                mysql_select_db("schema", $db);
            } else {
                echo "child process $rv\n";
                $sql1 = "another query";
                $res1 = mysql_query($sql1, $db);
                while ($messages = mysql_fetch_array($res1)) {
                    $sql2 = "update query";
                    mysql_query($sql2, $db);
                }
                exit(0); // it terminates both child process and mysql connection!
            }
        }
        ?>
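
    A sketch of the usual way around this (not a tested fix for this exact script): the child inherits the parent's connection handle, and when the child exits, PHP closes the underlying socket, killing the connection for the parent as well. Giving the child its own connection and leaving the parent's untouched avoids the shared socket:

        } else {
            // child: open a private connection so exit(0) can't close the parent's
            $childDb = mysql_connect("server", "user", "pwd", true); // true forces a new link
            mysql_select_db("schema", $childDb);
            $res1 = mysql_query("another query", $childDb);
            while ($messages = mysql_fetch_array($res1)) {
                mysql_query("update query", $childDb);
            }
            exit(0);
        }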

  • How can I extract a value from comma separated values in Perl?

    - by Octopus
    I have a log file containing statistics from different servers. I am separating the statistics from this log file using regex only. I am trying to capture the CPU usage of a running process. For SunOS, I have this output:

        process,10050,user1,218,59,0,1271M,1260M,sleep,58.9H,0.02%,java

    Here the CPU % is the 11th field if we separate by commas (,). To get this value I am using the regex:

        regex => q/^process,(?:.*?),((?:\d+)\.(?:\d+))%,java$/,

    For the Linux system I have this output:

        process,26190,user1,20,0,1236m,43m,6436,S,0.0,1.1,0:00.00,java,

    Here the CPU usage is in the 10th column. What regex pattern should I use to get this value?
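
    Since the fields are purely comma-separated, a split is arguably simpler than a regex here. A minimal sketch (field indexes taken from the two samples above; adjust if the real logs differ):

        my @f = split /,/, $line;
        my $cpu = $f[9];               # Linux: 10th column, 0-based index 9
        # SunOS keeps a trailing %, so strip it from the 11th field instead:
        # (my $cpu = $f[10]) =~ s/%$//;

    If it must stay a regex, a pattern matching the Linux layout can anchor on the trailing ",java,":

        regex => q/^process,(?:[^,]*,){8}(\d+\.\d+),[^,]*,[^,]*,java,$/,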

  • Python 3.1.1 Problem With Tuples

    - by Protean
    This piece of code is supposed to go through a list and perform some formatting on the items, such as removing quotation marks, and then save them to another list.

        class process:
            def rchr(string_i, asciivalue):
                string_o = ()
                for i in range(len(string_i)):
                    if ord(string_i[i]) != asciivalue:
                        string_o += string_i[i]
                return string_o

            def flist(self, list_i):
                cache = ()
                cache_list = []
                for line in list_i:
                    cache = line.split('\t')
                    cacbe[0] = process.rchr(str(cache[0]), 34)
                    cache_list.append(cache[0])
                    cache_list[index] = cache
                    index += 1
                cache_list.sort()
                return cache_list

        p = process()
        list1a = ['cow', 'dog', '"sheep"']
        list1 = p.flist(list1a)
        print (country_list)

    However, it chokes at string_o += string_i[i] and gives the following error:

        Traceback (most recent call last):
          File "/Projects/Python/safafa.py", line 23, in <module>
            list1 = p.flist(list1a)
          File "/Projects/Python/safafa.py", line 14, in flist
            cacbe[0] = process.rchr(str(cache[0]), 34)
          File "/Projects/Python/safafa.py", line 7, in rchr
            string_o += string_i[i]
        TypeError: can only concatenate tuple (not "str") to tuple
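
    The traceback says it directly: string_o starts life as a tuple, (), and a str cannot be concatenated onto a tuple. A minimal sketch of the fix is to accumulate into a string instead (the cacbe typo and the unbound index/country_list names in the rest of the code would surface next, but those are separate problems):

        def rchr(string_i, asciivalue):
            string_o = ''                  # a string, not a tuple
            for ch in string_i:
                if ord(ch) != asciivalue:
                    string_o += ch
            return string_o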

  • Where can I find WebSphere configuration files?

    - by Nicholas Key
    Hello Stackoverflow'ers, I would like to know where the WebSphere configuration details are saved; specifically, the configuration details that are shown in the Administrative Console (on the web) or manipulated from the console using wsadmin. Some examples would be: Java and Process Management (class loader, process definition, process execution) and Container Settings (session management, SIP container settings, web container settings, portlet container settings). Are there XML files that persist these configuration details? Nicholas
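
    For what it's worth, from memory of WebSphere's layout (so treat the exact file names as assumptions): yes, the settings are persisted as XML in the configuration repository under the profile directory, roughly:

        <WAS_HOME>/profiles/<profile>/config/cells/<cell>/cell.xml
        <WAS_HOME>/profiles/<profile>/config/cells/<cell>/nodes/<node>/serverindex.xml
        <WAS_HOME>/profiles/<profile>/config/cells/<cell>/nodes/<node>/servers/<server>/server.xml

    server.xml is where most per-server items such as the process definition and the container settings end up; editing these files by hand is generally discouraged in favor of wsadmin.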

  • Problem with signal handlers being called too many times [closed]

    - by Hristo
    How can something print 3 times when the code only goes through the printing code twice? I'm coding in C, and the code is in a SIGCHLD signal handler I created.

        void chld_signalHandler() {
            int pidadf = (int) getpid();
            printf("pidafdfaddf: %d\n", pidadf);
            while (1) {
                int termChildPID = waitpid(-1, NULL, WNOHANG);
                if (termChildPID == 0 || termChildPID == -1) {
                    break;
                }
                dll_node_t *temp = head;
                while (temp != NULL) {
                    printf("stuff\n");
                    if (temp->pid == termChildPID && temp->type == WORK) {
                        printf("inside if\n");
                        // read memory mapped file b/w WORKER and MAIN
                        // get statistics and write results to pipe
                        char resultString[256];
                        // printing TIME
                        int i;
                        for (i = 0; i < 24; i++) {
                            sprintf(resultString, "TIME; %d ; %d ; %d ; %s\n", i, 1, 2, temp->stats->mboxFileName);
                            fwrite(resultString, strlen(resultString), 1, pipeFD);
                        }
                        remove_node(temp);
                        break;
                    }
                    temp = temp->next;
                }
                printf("done printing from sigchld \n");
            }
            return;
        }

    The output for my MAIN process is this:

        MAIN PROCESS 16214 created WORKER PROCESS 16220 for file class.sp10.cs241.mbox
        pidafdfaddf: 16214
        stuff
        stuff
        inside if
        done printing from sigchld
        MAIN PROCESS 16214 created WORKER PROCESS 16221 for file class.sp10.cs225.mbox
        pidafdfaddf: 16214
        stuff
        stuff
        inside if
        done printing from sigchld

    and the output for the MONITOR process is this:

        MONITOR: pipe is open for reading
        MONITOR PIPE: TIME; 0 ; 1 ; 2 ; class.sp10.cs225.mbox
        MONITOR PIPE: TIME; 0 ; 1 ; 2 ; class.sp10.cs225.mbox
        MONITOR PIPE: TIME; 0 ; 1 ; 2 ; class.sp10.cs241.mbox
        MONITOR: end of readpipe

    (I've taken out repeating lines so I don't take up so much space.) Thanks, Hristo
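
    A plausible explanation, offered as an assumption since the fork calls aren't shown: pipeFD is a buffered stdio stream, and fwrite only fills its buffer. If the process forks while that buffer still holds unflushed TIME lines, both the parent and the new child inherit a copy of the buffer and each flushes it later, so the MONITOR can see a line twice even though it was written once. A minimal sketch of the usual defenses:

        fwrite(resultString, strlen(resultString), 1, pipeFD);
        fflush(pipeFD);   /* don't leave data sitting in the stdio buffer... */

        /* ...and/or flush every stream right before each fork(): */
        fflush(NULL);

    Separately, calling printf/fwrite inside a signal handler is not async-signal-safe, which can cause its own interleaving oddities.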

  • Fail to analyze core dump with GDB when main.elf is dynamically linked (uses shared libs)

    - by dscTobi
    Hi all. I'm trying to analyze a core dump, but I get the following result:

        GNU gdb 6.6.0.20070423-cvs
        Copyright (C) 2006 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "--host=mipsel-linux --target=mipsel-linux-uclibc".
        (gdb) file main.elf
        Reading symbols from /home/tobi/main.elf...Reading symbols from /home/tobi/main.dbg...done.
        done.
        (gdb) core-file /srv/tobi/core
        warning: .dynamic section for "/lib/libpthread.so.0" is not at the expected address (wrong library or version mismatch?)
        Error while mapping shared library sections: /lib/libdl.so.0: No such file or directory.
        Error while mapping shared library sections: /lib/librt.so.0: No such file or directory.
        Error while mapping shared library sections: /lib/libm.so.0: No such file or directory.
        Error while mapping shared library sections: /lib/libstdc++.so.6: No such file or directory.
        Error while mapping shared library sections: /lib/libc.so.0: No such file or directory.
        warning: .dynamic section for "/lib/libgcc_s.so.1" is not at the expected address (wrong library or version mismatch?)
        Error while mapping shared library sections: /lib/ld-uClibc.so.0: No such file or directory.
        Reading symbols from /lib/libpthread.so.0...done.
        Loaded symbols for /lib/libpthread.so.0
        Symbol file not found for /lib/libdl.so.0
        Symbol file not found for /lib/librt.so.0
        Symbol file not found for /lib/libm.so.0
        Symbol file not found for /lib/libstdc++.so.6
        Symbol file not found for /lib/libc.so.0
        Reading symbols from /lib/libgcc_s.so.1...done.
        Loaded symbols for /lib/libgcc_s.so.1
        Symbol file not found for /lib/ld-uClibc.so.0
        warning: Unable to find dynamic linker breakpoint function.
        GDB will be unable to debug shared library initializers and track explicitly loaded dynamic code.
        Core was generated by 'root/main.elf'.
        Program terminated with signal 11, Segmentation fault.
        #0  0x0046006c in NullPtr (parse_p=0x2ac9dc80, result_sym_p=0x13e3d6c "") at folder/my1.c:1624
        1624      *ptr += 13;
        (gdb) bt
        #0  0x0046006c in NullPtr (parse_p=0x2ac9dc80, result_sym_p=0x13e3d6c "") at folder/my1.c:1624
        #1  0x0047a31c in fn1 (line_ptr=0x2ac9dd18 "ccore_null_pointer", target_ptr=0x13e3d6c "", result_ptr=0x2ac9dd14) at folder/my2.c:980
        #2  0x0047b9d0 in fn2 (macro_ptr=0x0, rtn_exp_ptr=0x0) at folder/my3.c:1483
        /... some functions .../
        #8  0x2aab7f9c in __nptl_setxid () from /lib/libpthread.so.0
        Backtrace stopped: frame did not save the PC
        (gdb) thread apply all bt

        Thread 159 (process 1093):
        #0  0x2aac15dc in _Unwind_GetCFA () from /lib/libpthread.so.0
        #1  0x2afdfde8 in ?? ()
        warning: GDB can't find the start of the function at 0x2afdfde8.
        GDB is unable to find the start of the function at 0x2afdfde8 and thus can't
        determine the size of that function's stack frame. This means that GDB may be
        unable to access that stack frame, or the frames below it. This problem is
        most likely caused by an invalid program counter or stack pointer. However,
        if you think GDB should simply search farther back from 0x2afdfde8 for code
        which looks like the beginning of a function, you can increase the range of
        the search using the set heuristic-fence-post command.
        Backtrace stopped: previous frame inner to this frame (corrupt stack?)

        Thread 158 (process 1051):
        #0  0x2aac17bc in pthread_mutexattr_getprioceiling () from /lib/libpthread.so.0
        #1  0x2aac17a0 in pthread_mutexattr_getprioceiling () from /lib/libpthread.so.0
        Backtrace stopped: previous frame identical to this frame (corrupt stack?)

        Thread 157 (process 1057):
        #0  0x2aabf908 in ?? () from /lib/libpthread.so.0
        #1  0x00000000 in ?? ()

        Thread 156 (process 1090):
        #0  0x2aac17bc in pthread_mutexattr_getprioceiling () from /lib/libpthread.so.0
        #1  0x2aac17a0 in pthread_mutexattr_getprioceiling () from /lib/libpthread.so.0
        Backtrace stopped: previous frame identical to this frame (corrupt stack?)

        Thread 155 (process 1219):
        #0  0x2aabf908 in ?? () from /lib/libpthread.so.0
        #1  0x00000000 in ?? ()

        Thread 154 (process 1218):
        #0  0x2aabfb44 in connect () from /lib/libpthread.so.0
        #1  0x00000000 in ?? ()

        Thread 153 (process 1096):
        #0  0x2abc92b4 in ?? ()
        warning: GDB can't find the start of the function at 0x2abc92b4.
        #1  0x2abc92b4 in ?? ()
        warning: GDB can't find the start of the function at 0x2abc92b4.
        Backtrace stopped: previous frame identical to this frame (corrupt stack?)

        Thread 152 (process 1170):
        #0  0x2aabfb44 in connect () from /lib/libpthread.so.0
        #1  0x00000000 in ?? ()

    If I make main.elf statically linked everything is OK and I can see a backtrace for all threads. Any ideas?
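
    The mapping errors suggest that GDB is looking for the target's uClibc libraries in the host's /lib, where they don't exist (and the libpthread it does find there is the wrong one, hence the .dynamic warnings). Pointing GDB at a copy of the target root filesystem before loading the core usually resolves this; set solib-absolute-prefix (called set sysroot in newer GDBs) and set solib-search-path are the standard knobs. A sketch, with a placeholder path:

        (gdb) set solib-absolute-prefix /home/tobi/target-rootfs
        (gdb) set solib-search-path /home/tobi/target-rootfs/lib
        (gdb) file main.elf
        (gdb) core-file /srv/tobi/core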

  • forkpty - socket

    - by Alexxx
    Hi, I'm trying to develop a simple telnet/server daemon which has to run a program on each new socket connection. That part is working fine. But I have to associate my new process with a pty, because the process has some terminal capabilities (like a readline). The code I've developed is (where socketfd is the socket file descriptor for the new incoming connection):

        int masterfd, pid;
        const char *prgName = "...";
        char *arguments[10] = ....;

        if ((pid = forkpty(&masterfd, NULL, NULL, NULL)) < 0)
            perror("FORK");
        else if (pid)
            return pid;
        else {
            close(STDOUT_FILENO);
            dup2(socketfd, STDOUT_FILENO);
            close(STDIN_FILENO);
            dup2(socketfd, STDIN_FILENO);
            close(STDERR_FILENO);
            dup2(socketfd, STDERR_FILENO);
            if (execvp(prgName, arguments) < 0) {
                perror("execvp");
                exit(2);
            }
        }

    With that code, the stdin/stdout/stderr file descriptors of prgName are associated with the socket (when looking with ls -la /proc/PID/fd), and so the terminal capabilities of the process don't work. A test with a connection via ssh/sshd to the remote device, executing prgName "locally" (under the ssh connection), shows that the stdin/stdout/stderr fds of prgName are associated with a pty (and so its terminal capabilities work fine). What am I doing wrong? How do I associate my socketfd with the pty (created by forkpty)? Thanks, Alex
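
    One reading of the problem (hedged, since the rest of the daemon isn't shown): forkpty already wires the child's stdin/stdout/stderr to the pty slave, and the dup2 calls then overwrite exactly that wiring with the raw socket, so the pty ends up unused. The usual pattern is to leave the child's descriptors alone and have the parent shuttle bytes between masterfd and socketfd. A minimal parent-side relay sketch:

        #include <sys/select.h>
        #include <unistd.h>

        /* copy data both ways until either side closes */
        static void relay(int masterfd, int socketfd) {
            char buf[4096];
            for (;;) {
                fd_set rfds;
                FD_ZERO(&rfds);
                FD_SET(masterfd, &rfds);
                FD_SET(socketfd, &rfds);
                int maxfd = (masterfd > socketfd ? masterfd : socketfd) + 1;
                if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
                    break;
                if (FD_ISSET(masterfd, &rfds)) {
                    ssize_t n = read(masterfd, buf, sizeof buf);
                    if (n <= 0) break;
                    write(socketfd, buf, n);
                }
                if (FD_ISSET(socketfd, &rfds)) {
                    ssize_t n = read(socketfd, buf, sizeof buf);
                    if (n <= 0) break;
                    write(masterfd, buf, n);
                }
            }
        }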

  • Python learner needs help spotting an error

    - by Protean
    This piece of code gives a syntax error at the colon of elif process.loop(i, len(list_i) != 'repeat': and I can't seem to figure out why.

        class process:
            def loop(v1, v2):
                if v1 < v2 - 1:
                    return 'repeat'

            def isel(chr_i, list_i):
                for i in range(len(list_i)):
                    if chr_i == list_i[i]:
                        return list_i[i]
                    elif process.loop(i, len(list_i) != 'repeat':
                        return 'error'()
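
    The parenthesis count is the giveaway: process.loop(i, len(list_i) opens two parentheses but closes only one, so Python is still inside the call arguments when it reaches the colon. The trailing 'error'() is a second problem, since it calls a string. A minimal corrected sketch of those two lines:

        elif process.loop(i, len(list_i)) != 'repeat':
            return 'error'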

  • Append a dynamically changing watermark to a PDF in SharePoint

    - by ccomet
    This is primarily a question of possibilities more than instructions. I'm a programming consultant working on a WSS project site system for my client. We have a document library in which files are uploaded to go through a complex approval process. With multiple stages in this process, we have an extra field which indicates the current status of the document. Now, my client has become enamored with the idea of PDF watermarking. He wants the document (which is already a PDF) to be affixed with a watermark corresponding to the current status, such that with each stage of the approval process the watermark changes.

    One method, the traditional method for PDF watermarking, is to keep one "clean" copy of the document hidden somewhere on the site and to create a new, watermarked PDF from it at each stage of the approval process. Since the filename would never change, the new PDF could be uploaded continually to a public library, always overwriting the old version and simulating a "dynamically changing watermark". However, at the various stages there will also be people uploading clean copies with corrections and suggestions, never mind the complex nature of juggling two libraries and the fact that we would double the number of files stored. My client and I agree that this is not a practical path to choose.

    What we would like to do is "modify" the watermark in a PDF, so that we only have to keep one copy of the file. Unfortunately, from what I've seen, when you create something like a watermark, which by its nature is supposed to be unmodifiable, you generally cannot edit it later. So, is it possible to have a part of a PDF which cannot be changed by anyone who downloads the file, but can be changed as part of a workflow or other object-model process? Thanks in advance!

  • Clojure agents consuming from a queue

    - by erikcw
    I'm trying to figure out the best way to use agents to consume items from a message queue (Amazon SQS). Right now I have a function (process-queue-item) that grabs an item from the queue and processes it. I want to process these items concurrently, but I can't wrap my head around how to control the agents. Basically I want to keep all of the agents as busy as possible without pulling too many items from the queue and developing a backlog (I'll have this running on a couple of machines, so items need to be left in the queue until they are really needed). Can anyone give me some pointers on improving my implementation?

        (def active-agents (ref 0))

        (defn process-queue-item [_]
          (dosync (alter active-agents inc))
          ;retrieve item from Message Queue (Amazon SQS) and process
          (dosync (alter active-agents dec)))

        (defn -main []
          (def agents (for [x (range 20)] (agent x)))
          (loop [loop-count 0]
            (if (< @active-agents 20)
              (doseq [agent agents]
                (if (agent-errors agent)
                  (clear-agent-errors agent))
                ;should skip this agent until later if it is still busy processing (not sure how)
                (send-off agent process-queue-item)))
            ;(apply await-for (* 10 1000) agents)
            (Thread/sleep 10000)
            (logging/info (str "ACTIVE AGENTS " @active-agents))
            (if (> 10 loop-count)
              (do
                (logging/info (str "done, let's cleanup " count))
                (doseq [agent agents]
                  (if (agent-errors agent)
                    (clear-agent-errors agent)))
                (apply await agents)
                (shutdown-agents))
              (recur (inc count)))))
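
    One hedged sketch of an alternative (process-queue-item here is the function above, and the pool size of 20 is arbitrary): rather than counting active agents by hand and polling, bound the in-flight work with a semaphore, so an item is only pulled from SQS when a worker slot is actually free:

        (def slots (java.util.concurrent.Semaphore. 20))

        (defn consume []
          (.acquire slots)                        ; blocks until a slot is free
          (send-off (agent nil)
                    (fn [_]
                      (try
                        (process-queue-item nil)  ; pull one item and process it
                        (finally (.release slots)))))
          (recur))

    This way the queue read happens only when capacity exists, which also leaves items visible to the other machines until they are really needed.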

  • How about the Asp.net processes and threads and apppools?

    - by Michel
    Hi, as I understand it, when I load an ASP.NET .aspx page on the (IIS) server, it's processed by the w3wp.exe worker process. But when IIS gets multiple requests, are they all processed by the same w3wp process? And does this process automatically use all my processors and cores? And after that: when I start a new thread in my page, this thread keeps working after the page has already been served to the client. Where does this thread live? Also in the w3wp.exe process? And what if I assign another app pool to my site, what does that do? Michel

  • "Mem Usage" higher than "VM Size" in WinXP Task Manager

    - by Frederick
    In my Windows XP Task Manager, some processes display a higher value in the Mem Usage column than in VM Size. My Firefox instance, for example, shows 111544 K as Mem Usage and 100576 K as VM Size. According to the Task Manager help file, Mem Usage is the working set of the process and VM Size is the committed memory in the virtual address space. My question is: if the number of committed pages for a process is A and the number of pages in physical memory for the same process is B, shouldn't it always be B ≤ A? Isn't the number of pages in physical memory per process a subset of the committed pages? Or does this have something to do with the sharing of memory among processes? Please explain. (Perhaps my definition of 'working set' is off the mark.) Thanks.

  • How should I clean up hung grandchild processes when an alarm trips in Perl?

    - by brian d foy
    I have a parallelized automation script which needs to call many other scripts, some of which hang because they (incorrectly) wait for standard input. That's not a big deal because I catch those with alarm. The trick is to shut down those hung grandchild processes when the child shuts down. I thought various incantations of SIGCHLD, waiting, and process groups could do the trick, but they all block and the grandchildren aren't reaped. My solution, which works, just doesn't seem like the right solution. I'm not especially interested in the Windows solution just yet, but I'll eventually need that too. Mine only works for Unix, which is fine for now.

    I wrote a small script that takes the number of simultaneous parallel children to run and the total number of forks:

        $ fork_bomb <parallel jobs> <number of forks>
        $ fork_bomb 8 500

    This will probably hit the per-user process limit within a couple of minutes. Many solutions I've found just tell you to increase the per-user process limit, but I need this to run about 300,000 times, so that isn't going to work. Similarly, suggestions to re-exec and so on to clear the process table aren't what I need. I'd like to actually fix the problem instead of slapping duct tape over it.

    I crawl the process table looking for the child processes and shut down the hung processes individually in the SIGALRM handler, which needs to die because the rest of the real code has no hope of success after that. The kludgey crawl through the process table doesn't bother me from a performance perspective, but I wouldn't mind not doing it:

        use Parallel::ForkManager;
        use Proc::ProcessTable;

        my $pm = Parallel::ForkManager->new( $ARGV[0] );

        my $alarm_sub = sub {
            kill 9,
                map  { $_->{pid} }
                grep { $_->{ppid} == $$ }
                @{ Proc::ProcessTable->new->table };
            die "Alarm rang for $$!\n";
        };

        foreach ( 0 .. $ARGV[1] ) {
            print ".";
            print "\n" unless $count++ % 50;

            my $pid = $pm->start and next;

            local $SIG{ALRM} = $alarm_sub;
            eval {
                alarm( 2 );
                system "$^X -le '<STDIN>'"; # this will hang
                alarm( 0 );
            };
            $pm->finish;
        }

    If you want to run out of processes, take out the kill.

    I thought that setting a process group would work so I could kill everything together, but that blocks:

        my $alarm_sub = sub {
            kill 9, -$$;    # blocks here
            die "Alarm rang for $$!\n";
        };

        foreach ( 0 .. $ARGV[1] ) {
            print ".";
            print "\n" unless $count++ % 50;

            my $pid = $pm->start and next;
            setpgrp(0, 0);

            local $SIG{ALRM} = $alarm_sub;
            eval {
                alarm( 2 );
                system "$^X -le '<STDIN>'"; # this will hang
                alarm( 0 );
            };
            $pm->finish;
        }

    The same thing with POSIX's setsid didn't work either, and I think that actually broke things in a different way since I'm not really daemonizing this. Curiously, Parallel::ForkManager's run_on_finish happens too late for the same clean-up code: the grandchildren are apparently already disassociated from the child processes at that point.
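
    A hedged sketch of one way to avoid the process-table crawl: instead of system, fork the hang-prone command yourself and give only the grandchild a new process group, so the SIGALRM handler can kill that group without catching the child itself (with kill 9, -$$ the child is inside the group it signals, so it shoots itself before die can run):

        my $grandchild = fork;
        if ( defined $grandchild and $grandchild == 0 ) {
            setpgrp( 0, 0 );               # grandchild leads its own group
            exec $^X, '-le', '<STDIN>';    # this will hang, as before
        }

        local $SIG{ALRM} = sub {
            kill 'KILL', -$grandchild;     # negative pid: the whole group
            die "Alarm rang for $$!\n";
        };

        eval {
            alarm( 2 );
            waitpid $grandchild, 0;
            alarm( 0 );
        };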

  • Using Java to retrieve the CPU Usage for Window's Processes

    - by stjowa
    Hello all, I am looking for a Java solution to finding the CPU usage for a running process in Windows. After looking around the web, there seems to be little information on a solution in Java. Keep in mind, I am not looking to find the CPU usage for the JVM, but any process running in Windows at the time. I am able to retrieve the memory usage in Java by using the exec("tasklist.exe ... ") to retrieve and parse process information. Although there is an aggregate CPU cycle timer for each process, I do not see a CPU usage column. Any help would be greatly appreciated. Also, if possible, I would like to stay away from C libraries; however, if there is no other alternative, a solution by that means would be appropriate. Thanks a lot, Steve
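
    A sketch of one pure-Java route, in the same exec-and-parse spirit as the tasklist approach (the WMI class name is from memory, so verify it on a target machine; the value is a percentage over the last sampling interval):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class CpuUsage {
            public static void main(String[] args) throws Exception {
                // wmic exposes the formatted perf counters, including CPU %
                Process p = Runtime.getRuntime().exec(
                    "wmic path Win32_PerfFormattedData_PerfProc_Process get Name,PercentProcessorTime");
                BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);   // one row per process: name and CPU %
                }
                p.waitFor();
            }
        }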

  • grep for value of keyvaue pair and format

    - by imerez
    When I do the following:

        ps -aef | grep "asdf"

    I get a list of processes that are running. Each one of my processes has the following text in its output: -ProcessName=XXXX. I'd like to be able to format the output so all I get is:

        The following processes are running:
        Process A
        Process B
        etc..
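
    A minimal sketch with sed, assuming the value never contains spaces (the grep -v grep drops the grep process itself from the list):

        echo "The following processes are running:"
        ps -aef | grep -- '-ProcessName=' | grep -v grep \
            | sed 's/.*-ProcessName=\([^ ]*\).*/\1/'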

  • How to get forkpty to handle redirection and other bash-isms?

    - by Jeremy Friesner
    Hi all, I've got a GUI C++ program that takes a shell command from the user, calls forkpty() and execvp() to execute that command in a child process, while the parent (GUI) process reads the child process's stdout/stderr output and displays it in the GUI. This all works nicely (under Linux and MacOS/X). For example, if the user enters "ls -l /foo", the GUI will display the contents of the /foo folder. However, bash niceties like output redirection aren't handled. For example, if the user enters "echo bar > /foo/bar.txt", the child process will output the text "bar > /foo/bar.txt", instead of writing the text "bar" to the file "/foo/bar.txt". Presumably this is because execvp() is running the executable command "echo" directly, instead of running /bin/bash and handing it the user's command to massage/preprocess. My question is, what is the correct child-process invocation to use in order to make the system behave exactly as if the user had typed his string at the bash prompt? I tried wrapping the user's command in a /bin/bash invocation, like this: /bin/bash -c the_string_the_user_entered, but that didn't seem to work. Any hints?
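
    A guess at the failure mode (hedged, since the exact exec call isn't shown): if the command line gets split into words before being handed to bash, -c receives only the first word and the redirection is lost again. The whole user string has to travel as a single argv element after -c:

        /* sketch: the entire untouched user string is ONE argument after -c */
        #include <unistd.h>

        static void run_user_command(const char *user_string) {
            execlp("/bin/bash", "bash", "-c", user_string, (char *)NULL);
            _exit(127);   /* only reached if exec itself failed */
        }

    Equivalently with execvp: char *argv[] = { "bash", "-c", user_string, NULL };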

  • NServiceBus pipeline with Distributors

    - by David
    I'm building a processing pipeline with NServiceBus, but I'm having trouble with the configuration of the distributors needed to make each step in the process scalable. Here's some info:

    The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart. Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step; this tells me that each step needs a Distributor. I want to be able to hook additional activities onto events later; this tells me I need to Publish() messages when a step is done, not Send() them. A process may need to branch based on a condition; this tells me that a process must be able to publish more than one type of message. A process may need to join forks; I imagine I should use Sagas for this. Hopefully these assumptions are good, otherwise I'm in more trouble than I thought.

    For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages. NodeA workers contain an IHandleMessages processor and publish EventA; NodeB workers contain an IHandleMessages processor and publish EventB; NodeC workers contain an IHandleMessages processor, and then the pipeline is complete.

    Here are the relevant parts of the config files, where # denotes the number of the worker (i.e. there are input queues NodeA.1 and NodeA.2):

    NodeA:

        <MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error"
                             NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control"
                          DistributorDataAddress="NodeA.Distrib.Data">
          <MessageEndpointMappings>
          </MessageEndpointMappings>
        </UnicastBusConfig>

    NodeB:

        <MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error"
                             NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control"
                          DistributorDataAddress="NodeB.Distrib.Data">
          <MessageEndpointMappings>
            <add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    NodeC:

        <MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error"
                             NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control"
                          DistributorDataAddress="NodeC.Distrib.Data">
          <MessageEndpointMappings>
            <add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    And here are the relevant parts of the distributor configs:

    Distributor A:

        <add key="DataInputQueue" value="NodeA.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeA.Distrib.Control"/>
        <add key="StorageQueue" value="NodeA.Distrib.Storage"/>

    Distributor B:

        <add key="DataInputQueue" value="NodeB.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeB.Distrib.Control"/>
        <add key="StorageQueue" value="NodeB.Distrib.Storage"/>

    Distributor C:

        <add key="DataInputQueue" value="NodeC.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeC.Distrib.Control"/>
        <add key="StorageQueue" value="NodeC.Distrib.Storage"/>

    I'm testing using 2 instances of each node, and the problem seems to come up in the middle at Node B. There are basically 2 things that might happen:

    1. Both instances of Node B report that they are subscribing to EventA, and also that NodeC.Distrib.Data@MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great.
    2. Both instances of Node B report that they are subscribing to EventA; however, one worker says NodeC.Distrib.Data@MYCOMPUTER is subscribing TWICE, while the other worker does not mention it.

    In the second case, which seems to be controlled only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well. If the "underachiever" processes an EventA, then the publish of EventB has no subscribers and the workflow dies.

    So, my questions:

    Is this kind of setup possible? Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup.

    Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic-cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced? Then the load-balanced nodes could simply reply back to the central broker, which seems easier. On the other hand, that seems at odds with the decentralization that is NServiceBus's strength. And if this is the answer, and the long-running process's done event is a reply, how do you keep the Publish that enables later extensibility on published events?

  • Connect two client sockets

    - by Hernán Eche
    Good morning. Let's say Java has two kinds of sockets: server sockets ("ServerSocket") and client sockets, or just "Socket". So simple! Imagine the situation of two processes, X Client <-- Y Server. The server process Y has a ServerSocket that is listening on a TCP port, and the client process X sends a connection request through a client-type Socket. Then the accept() method (in server Y) returns a new client-type Socket; when that occurs, the two Sockets become "interconnected", so the client socket in the client process is connected with the client socket in the server process, and reading/writing on socket X is like reading/writing on socket Y, and vice versa. TWO CLIENT SOCKETS GET INTERCONNECTED! So simple!

    BUT (there is always a but)... what if I create the two CLIENT sockets in the same process, and I want to get them "interconnected"? Is that even possible? How can two client sockets get interconnected WITHOUT using an intermediate ServerSocket? I've solved it by creating two threads for continuously reading A and writing B, and another for reading B and writing A... but I think there could be a better way (or there should!). (Those world-energy-consuming threads would not be necessary with the client-server approach.) Any help or advice would be appreciated! Thanks
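
    For what it's worth, a single-threaded sketch of the relay idea using NIO (assuming chA and chB are two already-connected SocketChannels); one Selector replaces the pair of copy threads and blocks in select() instead of burning CPU:

        import java.nio.ByteBuffer;
        import java.nio.channels.SelectionKey;
        import java.nio.channels.Selector;
        import java.nio.channels.SocketChannel;

        static void relay(SocketChannel chA, SocketChannel chB) throws Exception {
            Selector sel = Selector.open();
            chA.configureBlocking(false);
            chB.configureBlocking(false);
            chA.register(sel, SelectionKey.OP_READ);
            chB.register(sel, SelectionKey.OP_READ);
            ByteBuffer buf = ByteBuffer.allocate(4096);
            while (sel.select() > 0) {
                for (SelectionKey key : sel.selectedKeys()) {
                    SocketChannel from = (SocketChannel) key.channel();
                    SocketChannel to = (from == chA) ? chB : chA;
                    buf.clear();
                    if (from.read(buf) == -1) return;   // one side closed: stop
                    buf.flip();
                    while (buf.hasRemaining()) to.write(buf);
                }
                sel.selectedKeys().clear();
            }
        }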

  • Use $_FILES on a page called by .ajax

    - by RachelD
    I have two .php pages that I'm working with. index.php has a file upload form that posts back to index.php, and I can access $_FILES with no problem on index.php after submitting the form. My issue is that I want (after the form submit and the page load) to use jQuery's .ajax to call another .php file, so that file can open the upload, process some of its rows, and return the results to the ajax call. The ajax then displays the results and recursively calls itself to process the next batch of rows. Basically I want to process the CSV in chunks (putting rows in the DB, etc.) and display progress for the user between chunks. I'm doing it this way because the files are 400,000+ rows and the user doesn't want to wait the 10+ minutes it takes to process them all. I don't want to move (save) this file, because I just need to process it and throw it away, and if a user closes the page while it's processing, the file won't be thrown away. I could write a cron script, but I don't want to. What I would really like to do is pass the (single) $_FILES entry through .ajax, OR save it in $_POST or $_SESSION to use on the second page. Is there any hope for my cause? Here's the ajax code if that helps:

        function processCSV(startIndex, length) {
            $.ajax({
                url: "ajax-targets/process-csv.php",
                dataType: "json",
                type: "POST",
                data: { startIndex: startIndex, length: length },
                timeout: 60000, // 1000 = 1 sec
                success: function(data) {
                    // JQuery to display the rows from the CSV
                    var newStart = startIndex + length;
                    if (newStart <= data['csvNumRows']) {
                        processCSV(newStart, length);
                    }
                }
            });
        }
        processCSV(1, 2);
        });

    P.S. I did try this: Passing $_FILES or $_POST to a new page with PHP, but it's not working for me :( SOS.
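
    The catch (a PHP fact, not specific to this code) is that the uploaded temp file behind $_FILES is deleted as soon as the request that received it ends, so there is nothing left for process-csv.php to read; $_FILES itself is just an array of metadata. A hedged sketch of the usual workaround, which only parks the file at a known path rather than keeping it forever (the csv field name and the cleanup point are assumptions):

        // in index.php, when the form was submitted:
        session_start();
        if (isset($_FILES['csv'])) {
            $path = tempnam(sys_get_temp_dir(), 'csv');
            move_uploaded_file($_FILES['csv']['tmp_name'], $path);
            $_SESSION['csv_path'] = $path;   // process-csv.php reads from here
        }

        // in ajax-targets/process-csv.php:
        session_start();
        $fh = fopen($_SESSION['csv_path'], 'r');
        // skip to startIndex, read $length rows, return JSON...
        // unlink($_SESSION['csv_path']) once the last chunk is done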

  • how to create simulator for web application for load test and stress test

    - by girish
    I'm developing a web application, but now I need to create a simulator for it that will be able to re-run a process that has been carried out on the website. Let's say I'm developing an auction site where users bid on a product; during this process a number of users bid on the same product, and at the end one user buys it. What I want is to record this process, or whatever else is needed, so that I can run the same process again, in order to test the load and the stress on the web application and the database server. Thank you.
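
    One common way to get this without writing a simulator from scratch (a suggestion, not from the original post): Apache JMeter can record a browsing session through its built-in recording proxy, let you parameterize the bids, and then replay the plan with many concurrent virtual users:

        # record the bidding session once via the JMeter proxy,
        # save it as auction-bids.jmx, then replay without the GUI:
        jmeter -n -t auction-bids.jmx -l results.jtl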

  • Automated fake mailbox

    - by Bernabé Panarello
    Hello, I'm about to start the development of a new automated-email application. The idea is that customers (or other external users) send emails to a mailbox, and then an automated process will read them, extract their information, and insert it into a database. It's a requirement that the emails have a standard format in order to be parsed (standard subject, etc.). The obvious thing to do would be to set up a process that periodically polls an ordinary mailbox, through POP3 for example, processing the messages it finds. However, it would be much nicer for me to be able to process the emails as they arrive. I was wondering, then: is there any way to set up a process that acts as a fake email-box? Do you know about any open-source implementation of something like that I could extend? I would prefer something already written in C#. Thanks in advance for your help, bernabé
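
    As a sketch of the idea (a toy, not a compliant SMTP implementation; the port and the parsing hook are assumptions): the "fake mailbox" can be a small TCP listener that speaks just enough SMTP to accept a message, handing each DATA payload straight to the parsing/DB step as it arrives:

        using System;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class FakeMailbox {
            static void Main() {
                TcpListener listener = new TcpListener(IPAddress.Any, 25);
                listener.Start();
                while (true) {
                    using (TcpClient client = listener.AcceptTcpClient())
                    using (StreamReader reader = new StreamReader(client.GetStream()))
                    using (StreamWriter writer = new StreamWriter(client.GetStream()) { AutoFlush = true }) {
                        writer.WriteLine("220 fake-mailbox ready");
                        string line;
                        while ((line = reader.ReadLine()) != null) {
                            if (line.StartsWith("DATA")) {
                                writer.WriteLine("354 end data with <CRLF>.<CRLF>");
                                StringBuilder body = new StringBuilder();
                                while ((line = reader.ReadLine()) != null && line != ".")
                                    body.AppendLine(line);
                                ProcessMessage(body.ToString()); // parse fields, insert into DB
                                writer.WriteLine("250 OK");
                            } else if (line.StartsWith("QUIT")) {
                                writer.WriteLine("221 bye");
                                break;
                            } else {
                                writer.WriteLine("250 OK"); // HELO/EHLO, MAIL FROM, RCPT TO
                            }
                        }
                    }
                }
            }

            static void ProcessMessage(string rawMessage) {
                // extract subject/fields per the agreed standard format, store in DB
            }
        }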
