Search Results

Search found 22000 results on 880 pages for 'worker process'.

Page 51/880 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >

  • Killing a subprocess including its children from python

    - by user316664
    Hi, I'm using the subprocess module on python 2.5 to spawn a java program (the selenium server, to be precise) as follows: import os import subprocess display = 0 log_file_path = "/tmp/selenium_log.txt" selenium_port = 4455 selenium_folder_path = "/wherever/selenium/lies" env = os.environ env["DISPLAY"] = ":%d.0" % display command = ["java", "-server", "-jar", 'selenium-server.jar', "-port %d" % selenium_port] log = open(log_file_path, 'a') comm = ' '.join(command) selenium_server_process = subprocess.Popen(comm, cwd=selenium_folder_path, stdout=log, stderr=log, env=env, shell=True) This process is supposed to get killed once the automated tests are finished. I'm using os.kill to do this: os.killpg(selenium_server_process.pid, signal.SIGTERM) selenium_server_process.wait() This does not work. The reason is that the shell subprocess spawns another process for the java program, and the pid of that process is unknown to my python code. I've tried killing the process group with os.killpg, but that also kills the python process which runs this code in the first place. Setting shell to False, so the java program does not run inside a shell, is also out of the question for other reasons. Does anyone have an idea how I can kill the shell and any other processes generated by it? Cheers, Ulas
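
    A minimal sketch of one commonly suggested fix (not from the question itself): start the shell in its own session with preexec_fn=os.setsid, so the shell and the java child share a fresh process group that os.killpg can signal without touching the parent Python process.

        import os
        import signal
        import subprocess

        log = open("/tmp/selenium_log.txt", "a")
        proc = subprocess.Popen(
            "java -server -jar selenium-server.jar -port 4455",
            cwd="/wherever/selenium/lies",
            stdout=log, stderr=log,
            shell=True,
            preexec_fn=os.setsid,  # the shell becomes leader of a new process group
        )

        # ... run the automated tests ...

        os.killpg(proc.pid, signal.SIGTERM)  # signals the shell and the java child, not us
        proc.wait()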

    Read the article

  • Slowing process creation under Java?

    - by oconnor0
    I have a single, large heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) JVM [1] running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, with the data created by those executables then loaded back into the JVM. Each executable produces about half a megabyte of data (on disk), which, once read back in after the process finishes, is of course larger in memory. Our first implementation was to have each executable handle only a single object. This involved spawning twice as many processes as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually slows. While it started at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds. Next we changed the executable to take a list of objects to process in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds. Basically, why is it taking so long to create these processes as execution continues? [1] Oracle JDK 1.6.0_22 [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.
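
    A rough, hypothetical way to test the usual suspect here (that fork() from a large-footprint parent gets slower as the parent's page tables grow), sketched in Python rather than the poster's Java:

        import subprocess
        import time

        ballast = []
        for step in range(1, 6):
            ballast.append(bytearray(256 * 1024 * 1024))  # grow the parent by 256 MB per step
            start = time.time()
            subprocess.call(["/bin/true"])                # fork+exec of a trivial child
            print("parent ballast ~%.2f GB, spawn took %.4f s"
                  % (step * 0.25, time.time() - start))

    If spawn time tracks the parent's footprint, a common workaround is to fork a small helper process early in the program's life and have it launch the executables on the big process's behalf.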

    Read the article

  • Apache Connection vs. Request

    - by user101570
    I apologize in advance if this is a basic question, but I am quite confused after reading the Apache documentation and other tutorials. Does a single Apache prefork process serve all HTTP requests for a given client? That's what I thought, but when I reduce maxclients down to a low number, my page load times go to a crawl. This despite the fact I'm the only client on the server in question. This would suggest each process serves a single HTTP request at a time, rather than serving all requests within the TimeOut window. So if a single webpage requires 15 HTTP requests to load fully, do I require 15 prefork Apache processes to optimally serve it?

    Read the article

  • Pass variables between separate instances of ruby (without writing to a text file or database)

    - by boulder_ruby
    Let's say I'm running a long worker-script in one of several open interactive rails consoles. The script is updating columns in a very, very, very large table of records. I've muted the ActiveRecord logger to speed up the process, and instructed the script to output some record of progress so I know roughly how long the process is going to take. That is what I am currently doing and it would look something like this: ModelName.all.each_with_index do |r, i| puts i if i % 250 ...runs some process... r.save end Sometimes it's two nested arrays running, such that there would be multiple iterators and other things running all at once. Is there a way that I could do something like this and access that variable from a separate rails console? (such that the variable would be overwritten every time the process is run, without much slowdown) records = ModelName.all $total = records.count records.each_with_index do |r, i| $i = i ...runs some process... r.save end meanwhile, mid-process in the other console puts "#{($i/$total * 100).round(2)}% complete" #=> 67.43% complete I know passing global variables from one separate instance of ruby to the next doesn't work. I also just tried this, to no effect: unix console 1 $X=5 echo {$X} #=> 5 unix console 2 echo {$X} #=> "" Lastly, I also know using global variables like this is a major software design pattern no-no. I think that's reasonable, but I'd still like to know how to break that rule if I'd like. Writing to a text file obviously would work. So would writing to a separate database table or something. That's not a bad idea. But the really cool trick would be sharing a variable between two instances without writing to a text file or database column. What would this be called anyway? Tunneling? I don't quite know how to tag this question. Maybe bad-idea is one of them. But honestly design-patterns isn't what this question is about.
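
    The generic version of this trick is a named piece of shared memory rather than a file or table. The sketch below shows the idea in Python (multiprocessing.shared_memory, Python 3.8+) because that is where the standard library makes it short; DRb or Redis would be commonly used Ruby-side alternatives, named here only as assumptions, not from the question.

        # Console 1: the long-running worker publishes its progress by name.
        from multiprocessing import shared_memory
        import struct

        shm = shared_memory.SharedMemory(name="job_progress", create=True, size=8)
        total = 1000
        for i in range(total):
            # ... per-record work happens here ...
            struct.pack_into("ii", shm.buf, 0, i + 1, total)  # records done, total

        # Console 2 (a separate process): peek at it whenever you like.
        peek = shared_memory.SharedMemory(name="job_progress")
        done, total = struct.unpack_from("ii", peek.buf, 0)
        print("%.2f%% complete" % (done * 100.0 / total))
        peek.close()

        shm.unlink()  # the writer removes the segment once the job is finished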

    Read the article

  • Automatically kill processes that over time uses 95%+ of resources? Ubuntu

    - by data_jepp
    I don't know about your computer, but when mine is working properly no process is sucking 95%+ over time. I would like to have some failsafe that kills any processes behaving like that. This comes to mind because when I woke up this morning my laptop had been crunching all night long on a stray chromium child process. This can probably be done as a cron job, but before I make it a full-time job creating something like this I thought I should check here. :) I hate reinventing the wheel.
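
    A sketch of the kind of watchdog being described, suitable for running from cron (this assumes the third-party psutil package; the threshold and sample window are placeholders):

        #!/usr/bin/env python
        import time
        import psutil

        THRESHOLD = 95.0      # percent of one core
        SAMPLE_SECONDS = 30   # how long a process must stay busy before being killed

        procs = list(psutil.process_iter(["pid", "name"]))
        for p in procs:
            try:
                p.cpu_percent(None)          # prime the per-process counter
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass

        time.sleep(SAMPLE_SECONDS)

        for p in procs:
            try:
                if p.cpu_percent(None) > THRESHOLD:
                    print("killing %s (pid %d)" % (p.info["name"], p.pid))
                    p.kill()
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass                         # it exited on its own, or isn't ours to kill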

    Read the article

  • Nagios creating lots of zombie processes

    - by pradeepchhetri
    In my monitoring box, I have lots of zombie processes created by nagios, and they get removed quickly as well. I am using active checks to perform monitoring of my servers. I captured the defunct processes created using the following command: $ top -d 0.25 -b -n 20 > topout.txt This collects the output of top 20 times with a 0.25s delay. I grepped topout.txt for the defunct processes. $ cat topout.txt | grep defunct I get the following output. 8957 nagios 20 0 0 0 0 Z 6.0 0.0 0:00.02 nagios <defunct> 8951 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct> 8954 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct> 8945 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 8946 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 8980 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9000 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.00 nagios <defunct> 9024 nagios 20 0 0 0 0 Z 7.0 0.0 0:00.02 nagios <defunct> 9025 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.01 nagios <defunct> 9040 nagios 20 0 0 0 0 Z 3.1 0.0 0:00.01 nagios <defunct> 9086 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9087 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9123 nagios 20 0 0 0 0 Z 6.1 0.0 0:00.02 nagios <defunct> 9126 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct> 9131 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct> 9091 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.05 nagios <defunct> 9111 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9119 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9118 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9151 nagios 20 0 0 0 0 Z 2.9 0.0 0:00.02 nagios <defunct> 9153 nagios 20 0 0 0 0 Z 2.9 0.0 0:00.02 nagios <defunct> 9150 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9164 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.02 nagios <defunct> 9171 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.02 nagios <defunct> 9154 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9156 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9163 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9167 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9178 nagios 20 0 0 0 0 Z 3.8 0.0 0:00.02 nagios <defunct> 9174 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9179 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> 9182 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct> Can somebody help me find out the reason for these zombie processes and how I can prevent them?
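
    For what a <defunct> entry actually is (not Nagios-specific): it is a child that has exited but has not yet been wait()ed on by its parent, and it disappears the moment the parent reaps it, which is why short-lived check processes flicker through this state. A tiny illustration:

        import subprocess
        import time

        child = subprocess.Popen(["/bin/true"])   # the child exits almost immediately
        time.sleep(1)

        # Until the parent reaps it, ps reports it as Z / <defunct>.
        print(subprocess.check_output(
            ["ps", "-o", "pid,stat,comm", "-p", str(child.pid)]).decode())

        child.wait()                              # reaping removes the zombie entry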

    Read the article

  • System.ComponentModel.Win32Exception Message: Not enough storage is available to process this command

    - by george9170
    Short of restarting our server, is there any way we can get this memory released and our website up and running? Below is a trace from the event viewer: An unhandled exception occurred and the process was terminated. Application ID: /LM/W3SVC/17/ROOT Process ID: 14352 Exception: System.ComponentModel.Win32Exception Message: Not enough storage is available to process this command StackTrace: at MS.Win32.UnsafeNativeMethods.RegisterClassEx(WNDCLASSEX_D wc_d) at MS.Win32.HwndWrapper..ctor(Int32 classStyle, Int32 style, Int32 exStyle, Int32 x, Int32 y, Int32 width, Int32 height, String name, IntPtr parent, HwndWrapperHook[] hooks) at MS.Win32.MessageOnlyHwndWrapper..ctor() at System.Windows.Threading.Dispatcher..ctor() at System.Windows.Threading.Dispatcher.get_CurrentDispatcher() at ISC.MapDotNetServer.MapPrintSupport.BaseTileRequestorResolver.b__0() at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart()

    Read the article

  • Android: How to receive process signals in an activity to kill child process ?

    - by user355993
    My application calls Runtime.exec() to launch an executable in a separate process at start up time. I would like this child process to get killed when the parent activity exits. Now I can use onDestroy() to handle regular cases, but not "Force quit", shutdowns from DDMS, or kill from the console since those don't run onDestroy(). The addShutdownHandler() does not seem to be invoked in these cases either. Is there any other hook or signal handler that informs my activity that it's about to get terminated ? As an alternative is there a way to have the system automatically kill the child process when the parent dies ?
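
    At the Linux level (which Android sits on), the mechanism for "kill the child automatically when the parent dies" is prctl(PR_SET_PDEATHSIG). Whether it can be used cleanly from an Android app is a separate question; the idea is sketched below in Python on plain Linux, purely as an illustration of the kernel facility.

        import ctypes
        import signal
        import subprocess

        PR_SET_PDEATHSIG = 1
        libc = ctypes.CDLL("libc.so.6", use_errno=True)

        def die_with_parent():
            # Runs in the child just before exec: ask the kernel to send this
            # process SIGKILL if its parent dies, however it dies.
            libc.prctl(PR_SET_PDEATHSIG, int(signal.SIGKILL))

        child = subprocess.Popen(["sleep", "1000"], preexec_fn=die_with_parent)
        print("child", child.pid, "will be killed when this process exits")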

    Read the article

  • Make a service monitor and restart process

    - by Andrej
    Hi, I'm making an app that needs to be up and running at all times (24/7). I'm not very experienced with services, but I read on the internet that services can be made unclosable by setting their "onclose" property to false. I have got the service monitoring my app, and the service can't be closed directly from the task manager services window... but when I click "go to process", task manager leads me to the process the service spawned. From there I can close the process and instantly close the service. Since I don't have much experience with services I'm wondering, is this behavior normal? If not, how do I make the service unstoppable?

    Read the article

  • Accessing one process's memory contents from another module

    - by Fangkai Yang
    I am developing a virtual device driver such that the user can write a process's pid and a virtual address to the driver, and the module will use these two values to get the memory contents of the target process. I am wondering if there are any easy functions that can fetch the user page's data at this virtual address. I have tried get_user, but this is not possible because the module executes get_user in another process's context. I also tried to use ptrace_readdata; however, it seems that the file at /kernel/ptrace.c leaves a function access_process_vm undefined, and I also don't know how to compile the source code of my module with this file (the linker searches files in /linux/include by default). I am wondering if there are any other solutions...
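
    For comparison, the userspace route on Linux (not the kernel-module answer being asked for) is simply to seek and read /proc/<pid>/mem at the virtual address, given ptrace permission over the target; a small sketch:

        import sys

        def peek(pid, vaddr, length):
            # Requires ptrace rights over the target (e.g. root, or being its tracer).
            with open("/proc/%d/mem" % pid, "rb") as mem:
                mem.seek(vaddr)
                return mem.read(length)

        if __name__ == "__main__":
            pid, vaddr = int(sys.argv[1]), int(sys.argv[2], 16)
            print(peek(pid, vaddr, 16).hex())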

    Read the article

  • System.Diagnostics.Process - Del Command

    - by wonea
    I'm trying to start the del command using System.Diagnostics.Process. Basically I want to delete everything from the C:\ drive that matches the filename pattern *.bat System.Diagnostics.Process proc = new System.Diagnostics.Process(); string args = string.Empty; args += "*.bat"; proc.StartInfo.FileName = "del"; proc.StartInfo.WorkingDirectory = @"C:\"; proc.StartInfo.Arguments = args.TrimEnd(); proc.Start(); However, when the code is run an exception is thrown: "the system cannot find specified file." I know there definitely are files in that root folder with that file extension.
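
    The likely trip-up (illustrated here in Python, since the concept is the same): del is a cmd.exe builtin, not an executable on disk, so there is no del.exe to start. Either launch cmd /C explicitly, or skip the child process and delete the files in code.

        import glob
        import os
        import subprocess

        # Option 1: let cmd.exe run its own builtin.
        subprocess.call(["cmd", "/C", "del", "C:\\*.bat"])

        # Option 2: no child process at all.
        for path in glob.glob("C:\\*.bat"):
            os.remove(path)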

    Read the article

  • Perl - How to use a process Handle created in a Module in another Perl Script

    - by Zwik
    Ultimately, what I want to do is to start a process in a module and parse the output in real time in another script. What I want to do : Open a process Handler (IPC) Use this attribute outside of the Module How I'm trying to do it and fail : Open the process handler Save the handler in a module's attribute Use the attribute outside the module. Code example : #module.pm self->{PROCESS_HANDLER}; sub doSomething{ ... open( self->{PROCESS_HANDLER}, "run a .jar 2>&1 |" ); ... } #perlScript.pl my $module = new module(...); ... $module->doSomething(); ... while( $module->{PROCESS_HANDLER} ){ ... }

    Read the article

  • How can I improve my real-time behavior in multi-threaded app using pthreads and condition variables

    - by WilliamKF
    I have a multi-threaded application that is using pthreads. I have a mutex lock and condition variables. There are two threads: one thread is producing data for the second thread, a worker, which is trying to process the produced data in a real-time fashion such that one chunk is processed as close to the elapsing of a fixed time period as possible. This works pretty well; however, occasionally when the producer thread releases the condition upon which the worker is waiting, a delay of up to almost a whole second is seen before the worker thread gets control and executes again. I know this because right before the producer releases the condition upon which the worker is waiting, it does a chunk of processing for the worker if it is time to process another chunk; then immediately upon receiving the condition in the worker thread, it also does a chunk of processing if it is time to process another chunk. In this latter case, I am seeing that I am late processing the chunk many times. I'd like to eliminate this lost efficiency and do what I can to keep the chunks ticking away as close as possible to the desired frequency. Is there anything I can do to reduce the delay between the release of the condition by the producer and the detection that the condition has been released, such that the worker resumes processing? For example, would it help for the producer to call something to force itself to be context switched out? Bottom line is the worker has to wait each time it asks the producer to create work for itself so that the producer can muck with the worker's data structures before telling the worker it is ready to run in parallel again. This period of exclusive access by the producer is meant to be short, but during this period, I am also checking for real-time work to be done by the producer on behalf of the worker while the producer has exclusive access. Somehow my hand-off back to running in parallel again occasionally results in a significant delay that I would like to avoid. Please suggest how this might be best accomplished.
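
    One lever that often helps with this kind of wakeup latency (an assumption, not taken from the question): give the worker a real-time scheduling policy so it preempts ordinary tasks as soon as the condition is signalled. Sketched below via Python's wrappers around the same POSIX sched_setscheduler() call; it needs root or CAP_SYS_NICE, and the pthread-level equivalent would be pthread_setschedparam with SCHED_FIFO.

        import os

        def make_realtime(priority=50):
            # SCHED_FIFO threads run ahead of all normal (SCHED_OTHER) tasks,
            # which typically shrinks the signal-to-wakeup gap considerably.
            try:
                os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
                print("running under SCHED_FIFO, priority", priority)
            except PermissionError:
                print("need root or CAP_SYS_NICE for SCHED_FIFO")

        make_realtime()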

    Read the article

  • jQuery: repeat process once a reply is received from PHP file

    - by air
    I have one PHP file which processes adding records to the database from an array. For example, in the array I have 5 items: an='abc','xyz','ert','wer','oiu'. I want to call the PHP file from the jQuery ajax method: um = an.split(','); var counter = 0; if(counter < uemail.length) { $("#sending_count").html("Processing Record "+ ecounter +" of " + an.length); var data = {uid: um[counter] $.ajax({ type: "POST", url: "save.php", data: data, success: function(html){ echo "added"; counter++; } What it does is complete the whole loop while save.php is still working. What I want is for it to stop after one item until save.php completes, then wait 10 seconds and start adding the 2nd element. Thanks

    Read the article

  • generate EF model by System.Diagnostics.Process

    - by loviji
    Hello, after reading this article I tried to generate an EF model with System.Diagnostics.Process: Process myProcess = new Process(); var cs = "Data Source=.\\SQLEXPRESS; Initial Catalog=uqs; Integrated Security=SSPI"; myProcess.StartInfo.FileName = @"C:\Windows\Microsoft.NET\Framework\v3.5\EdmGen.exe"; myProcess.StartInfo.Arguments = "/mode:fullgeneration /c:"+cs+" project:School /entitycontainer:SchoolEntities /namespace:SchoolModel /language:CSharp "; myProcess.Start(); 1. But I haven't got a result, because I can't build a well-formed arguments string. As I tried, there are many quotes. How do I organize the argument string? 2. How can I detect whether the generation completed or errors occurred?

    Read the article

  • Android process killer

    - by Martin
    I have a similar question, maybe you can help. Is it possible to get a list of all processes which are running in the Android system, and kill some of them? I know that there are some applications (task managers), but I would like to write my own, simple application. I would like to write a simple task manager: just a list of all processes and a button which will kill some of them. Could you just write some Java methods which I can call in order to get the list of processes, and a method for killing them? Or just give me some advice. Thanks for answers. Regards Martin

    Read the article

  • Access COM object through a windows process handle.

    - by Sivvy
    I'm currently automating an application at work using COM, and have an issue where anyone using my application has a problem if the original application is already open when my application runs. I know how to locate the process if it's open, but instead of having to worry about closing it, or working around it, etc., I want to try to use the existing application instead of opening a new one. This is how I normally start the application in my automation program: Designer.Application desApp = new Designer.Application(); Now I'm attempting to try and use the handle from an existing application: Designer.Application desApp = (Designer.Application)((System.Diagnostics.Process.GetProcessesByName("Designer.exe")[0]).Handle) (I know this doesn't work, since .Handle returns an IntPtr, but I'm using it as an example.) Is there any way to accomplish this? How do I return a usable object if I know the handle/process?

    Read the article

  • How To Get The Output of Win32::Process command in perl

    - by rockyurock
    Hello all, I am using "use Win32::Process" to run my application as below. It runs fine, but I did not find any way to get the output into a .txt file. I used NORMAL_PRIORITY_CLASS rather than CREATE_NEW_CONSOLE to get the output on the same terminal itself, but I don't know how to redirect it to a txt file. /rocky #!/usr/bin/perl use strict; use warnings; use Win32::Process; Win32::Process::Create(my $ProcessObj, "iperf.exe", "iperf.exe -u -s -p 5001", 0, NORMAL_PRIORITY_CLASS, ".") || die ErrorReport(); my @command_output; push @command_output,$ProcessObj; open FILE, "zz.txt" or die $!; print FILE @command_output; close FILE; sleep 10; $ProcessObj->Kill(0); sub ErrorReport{ print Win32::FormatMessage( Win32::GetLastError() ); }

    Read the article

  • How to determine Windows.Diagnostics.Process from ServiceController

    - by Alex
    This is my first post, so let me start by saying HELLO! I am writing a windows service to monitor the running state of a number of other windows services on the same server. I'd like to extend the application to also print some of the memory statistics of the services, but I'm having trouble working out how to map from a particular ServiceController object to its associated Diagnostics.Process object, which I think I need to determine the memory state. I found out how to map from a ServiceController to the original image name, but a number of the services I am monitoring are started from the same image, so this won't be enough to determine the Process. Does anyone know how to get a Process object from a given ServiceController? Perhaps by determining the PID of a service? Or else does anyone have another workaround for this problem? Many thanks, Alex

    Read the article

  • Process-wide hook using SetWindowsHookEx

    - by mfya
    I need to inject a dll into one or more external processes, from which I also want to intercept keyboard events. That's why using SetWindowsHookEx with WH_KEYBOARD looks like an easy way to achieve both things in a single step. Now I really don't want to install a global hook when I'm only interested in a few selected processes, but Windows hooks seem to be either global or thread-only. My question is now how I would properly go about setting up a process-wide hook. I guess one way would be to set up the hook on the target process' main thread from my application, and then do the same from inside my dll on DLL_PROCESS_ATTACH for all other running threads (plus on DLL_THREAD_ATTACH for threads started later). But is this really a good way? And more importantly, aren't there any simpler ways to set up process-wide hooks? My idea looks quite cumbersome and ugly, but I wasn't able to find any information about doing this anywhere.

    Read the article

  • Can't log in using System.Diagnostics.Process.Start

    - by omair iqbal
    I am trying to log in using System.Diagnostics.Process.Start: private void button1_Click(object sender, EventArgs e) { System.Diagnostics.Process.Start("iexplore","[email protected]","password","http://www.gmail.com"); } but Visual Studio gives me these 2 errors: Error 1 The best overloaded method match for 'System.Diagnostics.Process.Start(string, string, System.Security.SecureString, string)' has some invalid arguments C:\Documents and Settings\Omair\My Documents\Visual Studio 2008\Projects\WindowsFormsApplication3\WindowsFormsApplication3\Form1.cs 21 13 WindowsFormsApplication3 and Error 2 Argument '3': cannot convert from 'string' to 'System.Security.SecureString' C:\Documents and Settings\Omair\My Documents\Visual Studio 2008\Projects\WindowsFormsApplication3\WindowsFormsApplication3\Form1.cs 21 80 WindowsFormsApplication3 Note: I am brand new to C# and relatively new to the world of programming. Sorry for my English.

    Read the article

  • Dependency between operations in scala actors

    - by paradigmatic
    I am trying to parallelise some code using scala actors. This is my first real code with actors, but I have some experience with Java multithreading and MPI in C. However, I am completely lost. The workflow I want to realise is a circular pipeline and can be described as follows: Each worker actor has a reference to another one, thus forming a circle. There is a coordinator actor which can trigger a computation by sending a StartWork() message. When a worker receives a StartWork() message, it processes some stuff locally and sends a DoWork(...) message to its neighbour in the circle. The neighbour does some other stuff and sends in turn a DoWork(...) message to its own neighbour. This continues until the initial worker receives a DoWork() message. The coordinator can send a GetResult() message to the initial worker and wait for a reply. The point is that the coordinator should only receive a result when the data is ready. How can a worker wait until the job has come back around to it before answering the GetResult() message? To speed up computation, any worker can receive a StartWork() at any time. Here is my first-try pseudo-implementation of the worker: class Worker( neighbor: Worker, numWorkers: Int ) { var ready = Foo() def act() { case StartWork() => { val someData = doStuff() neighbor ! DoWork( someData, numWorkers-1 ) } case DoWork( resultData, remaining ) => if( remaining == 0 ) { ready = resultData } else { val someOtherData = doOtherStuff( resultData ) neighbor ! DoWork( someOtherData, remaining-1 ) } case GetResult() => reply( ready ) } } On the coordinator side: worker ! StartWork() val result = worker !? GetResult() // should wait

    Read the article

  • Calling https process from ASP Net

    - by David M
    I have an ASP NET web server application that calls another process running on the same box that creates a pdf file and returns it. The second process requires a secure connection via SSL. The second process has issued my ASP NET application with a digital certificate but I still cannot authenticate, getting a 403 error. The code is a little hard to show but here's a simplified method ... X509Certificate cert = X509Certificate.CreateFromCertFile("path\to\cert.cer"); string URL = "https://urltoservice?params=value"; HttpWebRequest req = HttpWebRequest.Create(URL) as HttpWebRequest; req.ClientCertificates.Add(cert); req.Credentials = CredentialCache.DefaultCredentials; req.PreAuthenticate = true; /// error happens here WebResponse resp = req.GetResponse(); Stream input = resp.GetResponseStream(); The error text is "The remote server returned an error: (403) Forbidden." Any pointers are welcome.

    Read the article

  • How to send argument securely using Process class?

    - by Sebastian
    Hello, I'm using System.Diagnostics.Process to execute an svn command from a windows console application. This is the configuration of the process: svn.StartInfo.FileName = svnPath; svn.StartInfo.Arguments = string.Format("copy {0}/trunk/ {0}/tags/{1} -r head -q --username {3} --password {4} -m \"{2}\"", basePathToRepo, tagName, message, svnUserName, svnPassword); svn.StartInfo.UseShellExecute = false; svn.Start(); svn.WaitForExit(); My problem is that those arguments, which include the svn credentials, are sent (I suppose) in an unsecure way. Is there a way to send these arguments in a secure way using the Process class? Thanks!

    Read the article

  • How to control a subthread process in python?

    - by SpawnCxy
    Code first: '''this is main structure of my program''' from twisted.web import http from twisted.protocols import basic import threading threadstop = False #thread trigger,to be done class MyThread(threading.Thread): def __init__(self): threading.Thread.__init__(self) self.start() def run(self): while True: if threadstop: return dosomething() '''def some function''' if __name__ == '__main__': from twisted.internet import reactor t = MyThread() reactor.listenTCP(serverport,myHttpFactory()) reactor.run() As my first multithreaded program, I am happy that it works as expected. But now I find I cannot control it. If I run it in the foreground, Ctrl+C only stops the main process, and I can still find it in the process list; if I run it in the background, I have to use kill -9 pid to stop it. I wonder if there's a way to control the subthread by a trigger variable, or a better way to stop the whole process other than kill -9. Thanks.
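
    A minimal reworking of the same structure (a sketch that keeps the Twisted layout but leaves out the listenTCP details): replace the module-level flag with a threading.Event, mark the thread as a daemon, and hook the event into reactor shutdown so Ctrl+C stops everything.

        import threading
        from twisted.internet import reactor

        stop_event = threading.Event()

        class MyThread(threading.Thread):
            def __init__(self):
                threading.Thread.__init__(self)
                self.daemon = True           # never keeps the process alive on its own
                self.start()

            def run(self):
                while not stop_event.is_set():
                    do_something()

        def do_something():
            stop_event.wait(1.0)             # placeholder work; stays responsive to the event

        if __name__ == '__main__':
            MyThread()
            reactor.addSystemEventTrigger('before', 'shutdown', stop_event.set)
            reactor.run()                    # Ctrl+C stops the reactor, which sets the event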

    Read the article
