Search Results

Search found 22000 results on 880 pages for 'worker process'.

  • How to get an application's process name?

    - by Ram
    Hi, I am trying to develop a sample application that finds the process name of a particular application. Suppose there is an application named XYZ.exe. When XYZ.exe is executed, it does not necessarily keep that name; it may run under a different process name such as abc.exe. My question is this: is it possible to find out which process name XYZ.exe is actually running under? Any help would be much appreciated. Thanks, Ram
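
    If you are the one launching XYZ.exe, one approach is to start it yourself, remember the PID, and then ask what name that process (and any child it spawns) is actually running under. A minimal sketch, assuming the cross-platform psutil library is installed; the executable path below is hypothetical:

        import subprocess
        import time

        import psutil

        # Hypothetical path to the application whose process name we want.
        exe_path = r"C:\Program Files\XYZ\XYZ.exe"

        # Launch it ourselves so we know the PID, then ask psutil what name
        # that process, and any child it spawned, is running under.
        proc = subprocess.Popen([exe_path])
        time.sleep(2)  # give it a moment to start and spawn children

        parent = psutil.Process(proc.pid)
        print("launched as:", parent.name())
        for child in parent.children(recursive=True):
            print("child process:", child.name(), "pid:", child.pid)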

  • ClearCase and Java process: changing view does not apply

    - by user1432310
    I have a simple application which receives a ClearCase stream name from the user and is supposed to return the contents of a specific file from that stream's repository. I tried doing this using a simple shell script: the user enters a stream name, Java receives it and runs a process which executes a script "myccscript.sh" containing "myinput=$1; cleartool setview $myinput" (or something like that). Then I try reading the file and printing its contents on the Java side. BUT, after the process is finished, the view is not the view from the user input - that environment was probably only valid for the process I created. How do I change the ClearCase view for the main Java process? Thanks!
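
    cleartool setview starts a new shell, so the view exists only for that child process and cannot be pushed back into the parent Java process. A minimal sketch of one workaround - doing the file read inside the setview process and capturing its output - is shown below; it assumes setview's -exec option is available on your platform, and the view tag and file path are hypothetical:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class ReadFromView {
            public static void main(String[] args) throws Exception {
                String viewTag = "my_view_tag";           // hypothetical view tag
                String file = "/vobs/myvob/somefile.txt"; // hypothetical file path

                // Run the read inside the view: setview spawns a shell, -exec runs
                // the given command in it, and we capture that command's stdout.
                ProcessBuilder pb = new ProcessBuilder(
                        "cleartool", "setview", "-exec", "cat " + file, viewTag);
                pb.redirectErrorStream(true);
                Process p = pb.start();

                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        System.out.println(line);
                    }
                }
                p.waitFor();
            }
        }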

  • python interpreter waits for child process to die

    - by Moulik Kallupalam
    Contents of check.py:

        from multiprocessing import Process
        import time
        import sys

        def slp():
            time.sleep(30)
            f = open("yeah.txt", "w")
            f.close()

        if __name__ == "__main__":
            x = Process(target=slp)
            x.start()
            sys.exit()

    On Windows 7, if I run python check.py from cmd, it does not exit immediately but instead waits for 30 seconds. And if I kill cmd, the child dies too - no "yeah.txt" is created. How do I ensure that the child continues to run even if the parent is killed, and also that the parent does not wait for the child process to end?
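
    The parent waits because multiprocessing joins its non-daemon children at interpreter exit, and killing the cmd window also kills the child because it shares that console. A minimal sketch of one alternative is to launch the worker as a fully detached subprocess instead; subprocess.DETACHED_PROCESS requires Python 3.7+ (on older versions the raw flag value 0x00000008 can be passed), and check_worker.py is a hypothetical script holding the sleep-and-write logic:

        import subprocess
        import sys

        # Flags that detach the child from the parent's console and process
        # group on Windows.
        flags = subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP

        if __name__ == "__main__":
            # check_worker.py runs on its own and survives the parent exiting
            # or the console window being closed.
            subprocess.Popen([sys.executable, "check_worker.py"],
                             creationflags=flags)
            sys.exit()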

  • Is MSDN referencing a system.thread, a worker thread, an io thread or all three?

    - by w0051977
    Please see the warning below, taken from the StreamWriter class documentation (http://msdn.microsoft.com/en-us/library/system.io.streamwriter.aspx): "Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe." I understand that a w3wp (IIS worker) process contains two thread pools, i.e. worker threads and I/O threads, and that the application can also create its own System.Thread instances. Does the warning relate only to System.Thread instances, or does it relate to worker threads and I/O threads as well? That is, since the instance members of the StreamWriter class are not thread safe, would there be problems if multiple worker threads accessed the same instance - e.g. if two users on two different web clients attempt to write to the log file at the same time, could one lock out the other?

  • linux process scheduling delayed for long time

    - by Medicine
    I ran strace on my multi-threaded C++ application on Linux. After a couple of hours of running, none of its threads got scheduled for about 12 seconds. A select system call made with a timeout was reported as unfinished before the thread was suspended, and after the thread resumed, strace reported that the call took 11.x seconds to complete. This is a clear indication that the process was starved for a long time. All threads in the process are created with Linux's default scheduling policy (SCHED_OTHER) and default priority. There are another five similar apps running on the same box which are also heavily I/O bound, like this app, due to the heavy data received on their sockets, but most of the time it is this app that suffers the scheduling delay. The other apps are created with the same scheduling policy and priority as this one, i.e. the defaults. Why does only this process get blocked almost all of the time? Could it be because this process is more I/O intensive, i.e. busier due to perhaps higher data rates? Is Linux's dynamic priority adjustment in play here, pushing this process down?

  • process.waitFor()

    - by ashwani66476
    Hello Team, I am using the code below in my application:

        Process process = Runtime.getRuntime().exec(
                "perl " + perlScript + " " + project + " " + fileName);
        ...
        result = process.waitFor();

    The result is the code 2 every time the application runs. What could be the reason for this return code? Thanks in advance.
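
    waitFor() just returns the child process's exit status, so the 2 is whatever the perl run exited with rather than something Java generates. A minimal diagnostic sketch is to capture the process output before waiting, so Perl's own error message becomes visible; it assumes perlScript, project and fileName are the same variables as in the question:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class RunPerl {
            static int runPerl(String perlScript, String project, String fileName)
                    throws Exception {
                ProcessBuilder pb = new ProcessBuilder("perl", perlScript, project, fileName);
                pb.redirectErrorStream(true);   // merge stderr into stdout
                Process process = pb.start();

                // Drain the output before waitFor() so the child cannot block on a
                // full pipe and so Perl's error messages are printed.
                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(process.getInputStream()))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line);
                    }
                }
                int result = process.waitFor();
                System.out.println("perl exited with code " + result);
                return result;
            }
        }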

  • How are PIDs generated?

    - by Helltone
    On *nix, PIDs are unique identifiers for running processes. How are PIDs generated? Is a PID just an integer which gets incremented, or a more complex structure such as a list? How do they get recycled? By recycling I mean that, when a process terminates, its PID will eventually be reused by another process.

  • How to stop a random ramp in FCGI Processes Killing the server

    - by Andy Main
    So I got the log output below earlier today. Around that time the logs show a ramp in processes (600) and associated memory (1.2 GB), and a CPU load average of about 80, until the server gave out. The server had to be hard reset by the host as there was no ssh or Plesk panel access. FastCGI is configured as below and is set up for one high-use site. As I understand it, FcgidMaxProcesses 20 should protect against what happened, but it has not. I've read many forums with differing answers and references to many different fcgid directives, but have found nothing conclusive. Does anyone have a definitive answer on how to stop this sort of server process ramping and the subsequent server failure? If you need more info, let me know. Cheers, Andy

    /var/log/apache2/error_log:

        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17651 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17650 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17649 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17644 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17643 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17638 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17633 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17627 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:47 2012] [warn] mod_fcgid: process 17622 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17674 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17673 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17672 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17667 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17666 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17665 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17664 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17659 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17658 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17657 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17656 graceful kill fail, sending SIGKILL
        [Thu May 17 07:40:51 2012] [warn] mod_fcgid: process 17651 graceful kill fail, sending SIGKILL

    https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VRmFLWEZfR2VBb2M
    https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VWTcwZEhoV2Fqejg
    https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VUUtVWWFINHZjZ0U
    https://docs.google.com/a/thesugarrefinery.com/open?id=0B_XbpWChge0VZEVMclh6ZUdaOUE

        <IfModule mod_fcgid.c>
          <IfModule !mod_fastcgi.c>
            AddHandler fcgid-script fcg fcgi fpl
          </IfModule>
          FcgidIPCDir /var/lib/apache2/fcgid/sock
          FcgidProcessTableFile /var/lib/apache2/fcgid/shm
          FcgidIdleTimeout 40
          FcgidProcessLifeTime 30
          FcgidMaxProcesses 20
          FcgidMaxProcessesPerClass 20
          FcgidMinProcessesPerClass 0
          FcgidConnectTimeout 30
          FcgidIOTimeout 120
          FcgidInitialEnv RAILS_ENV production
          FcgidIdleScanInterval 10
          FcgidMaxRequestLen 1073741824
        </IfModule>

  • Finegrain Performance Reporting on svchost.exe

    - by Randolpho
    This is something that's always bothered me, so I'll ask the serverfault community. I love me some Process Explorer for keeping track of more than just the high-level tasks you get in the Task Manager. But I constantly want to know which of those dozen services hosted in a single process under svchost is making my processor spike. So... is there any non-intrusive way to find this information out?
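
    One low-impact starting point is to map each svchost.exe instance's PID to the services it hosts, and then see which PID is spiking in Task Manager or Process Explorer. A small sketch using the built-in tasklist command (run from a cmd prompt):

        REM List every svchost.exe instance together with the services it hosts
        tasklist /svc /fi "imagename eq svchost.exe"

    Process Explorer's process properties also have a Services tab and a Threads tab, which can help narrow a CPU spike within an svchost instance down to the DLL of a particular service.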

  • How can I unbind a UDP port that has no entry in lsof?

    - by Chocohound
    On my Mac, I have a UDP port that is "already in use" but doesn't have an associated process.

        sudo netstat -na | grep "udp.*\.500\>"

    shows

        udp4       0      0  192.168.50.181.500     *.*
        udp4       0      0  192.168.29.166.500     *.*

    sudo lsof doesn't show a process on port 500 (i.e. sudo lsof -i:500 -P reports nothing). Note that I'm using sudo on both commands, so it should show all processes. (Rebooting works, but I'm looking for something less disruptive.) How can I unbind port 500 so I can use it again?

  • Windows 7 x64 cannot kill Skype

    - by NullOrEmpty
    Skype got stuck, and Windows was unable to kill the process even when the UI had disappeared. I had to restart the computer to get Skype working again. Running as administrator:

        C:\Windows\system32>tasklist | find "Skype"
        Skype.exe                     2708 Console                    1     92,328 K

        C:\Windows\system32>taskkill.exe /pid 2708 /F /T
        SUCCESS: The process with PID 2708 has been terminated.

        C:\Windows\system32>tasklist | find "Skype"
        Skype.exe                     2708 Console                    1     92,328 K

    How can this even be possible? Cheers.

  • Is there a simple way to daemon-ify a simple task?

    - by Jonas Byström
    I ssh into a server and start a job (for instance rsync), and then I just want to be able to log out from the server and let the job run its course. But if I just do rsync ... & I think it is still connected to the tty in some way, and the job dies when the tty goes away on logout. Is there any (easy) way to disconnect the process from the tty so that I can log out without the process terminating?
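
    A minimal sketch of the usual approaches, assuming a standard Linux shell; the rsync arguments are placeholders. nohup makes the job ignore the hangup signal sent at logout and redirects its output away from the tty, while setsid detaches it from the controlling terminal entirely.

        # Option 1: ignore SIGHUP and redirect output away from the tty
        nohup rsync -av /src/ user@host:/dst/ > rsync.log 2>&1 &

        # Option 2: run the job in its own session, fully detached from this tty
        setsid rsync -av /src/ user@host:/dst/ > rsync.log 2>&1 < /dev/null &

        # Option 3: if the job is already running in the foreground of bash,
        # suspend it with Ctrl-Z, resume it in the background, and remove it
        # from the shell's job table so it is not sent SIGHUP at logout:
        #   bg
        #   disown -h %1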

  • git-daemon fails on VM suspend and resume

    - by fuzzy lollipop
    I have Gitorious running on a CentOS 5.3 install, on a VMware virtual machine under VMware Server. Every time we take down the server via suspend to back up the image and then resume the VM, git-daemon dies. All my other processes continue to function without any problems; this one process dies and has to be restarted manually. Does anyone have any idea why this might be happening, or how to make sure this process never dies off?

  • How do I prevent tampering with AJAX process page? [closed]

    - by whamsicore
    I am using Ajax (with jQuery) for processing. The data string is sent to my process.php page, where it is saved. Issue: right now anyone can directly type example.com/process.php to access my process page, or type example.com/process.php?var1=foo1&var2=foo2 to emulate a form submission. How do I prevent this from happening? Also, in the Ajax code I specified POST. What is the difference here between POST and GET?

  • Why - Could not find worker with name 'jk-manager' in uri map post processing?

    - by Hardbone
    I am using apache2 + mod_jk (AJP protocol) + Tomcat 7, but I always get the errors below:

        [Sat Mar 30 17:30:54.691 2013] [25238:3074365824] [info] init_jk::mod_jk.c (3365): mod_jk/1.2.37 initialized
        [Sat Mar 30 17:30:54.691 2013] [25238:3074365824] [error] extension_fix::jk_uri_worker_map.c (564): Could not find worker with name 'jk-manager' in uri map post processing.
        [Sat Mar 30 17:30:54.691 2013] [25238:3074365824] [error] extension_fix::jk_uri_worker_map.c (564): Could not find worker with name 'jk-status' in uri map post processing.

    Any clue?
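
    This error typically means the URI-to-worker map (JkMount lines or uriworkermap.properties) refers to workers named jk-manager and jk-status that are never defined in workers.properties. A minimal workers.properties sketch that defines them as status workers alongside an ordinary AJP worker is below; the worker name, host and port are assumptions:

        # workers.properties - hypothetical sketch
        worker.list=tomcat1,jk-status,jk-manager

        # Ordinary AJP worker forwarding requests to Tomcat 7
        worker.tomcat1.type=ajp13
        worker.tomcat1.host=localhost
        worker.tomcat1.port=8009

        # Status workers matching the names referenced in the URI map
        worker.jk-status.type=status
        worker.jk-status.read_only=true
        worker.jk-manager.type=status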

  • Nagios returns "No output returned from plugin" running process

    - by user56291
    I have a Nagios server and a bunch of Nagios clients that I currently monitor. All the clients are set up with the NRPE configuration below. The check_users, check_load, etc. metrics are successfully displayed on the Nagios interface, but check_nginx and check_server_proxy are displayed as "Unknown" - (No output returned from plugin). As far as I understand, Nagios simply runs the ps command and looks for either the argument string or the name of the command to verify whether the service is running. Also, with the -c flag one can give Nagios a threshold to determine the output (i.e. -c 1 returns 'OK' if it finds at least 1 process).

    nrpe_local.cfg:

        ######################################
        # Do any local nrpe configuration here
        ######################################
        allowed_hosts =127.0.0.1,10.0.2.181
        command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
        command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
        command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10%
        command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
        command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200
        command[check_swap]=/usr/lib/nagios/plugins/check_swap -w 50% -c 25%
        command[check_server_proxy]=/usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js"
        command[check_nginx]=/usr/lib/nagios/plugins/check_procs -c 1:30 -C nginx

    nagios_server.cfg:

        ...
        define host{
            use                     generic-host    ; Name of host template to use
            host_name               plum
            alias                   plum
            address                 10.0.2.88
            check_command           check-host-alive-by-ssh
        }
        ...
        # Check api-proxy-server
        define service{
            use                     generic-service
            host_name               plum
            service_description     check api proxy service
            check_command           check_nrpe!check_server_proxy
        }
        define service{
            use                     generic-service ; Name of service template to use
            host_name               plum
            service_description     CHECK_NGINX
            check_period            24x7
            max_check_attempts      3
            normal_check_interval   5
            retry_check_interval    3
            check_command           check_nrpe!check_nginx
            notifications_enabled   1
        }

    When I run the command on the Nagios client:

        /usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js"

    I get the desired output:

        PROCS OK: 1 process with args 'api-v1/server.js'

    I would really appreciate any pointers that might help me work out why the NRPE command does not return the desired output on the Nagios server panel.

  • Process runs slower as a scheduled task than it does interactively

    - by Charlie
    I have a scheduled task which is very CPU- and I/O-intensive and takes about four hours to run (building source code, if you're curious). The task is a PowerShell script which spawns various sub-processes to do its work. When I run the same process interactively from a PowerShell prompt, as the same user account, it runs in about two and a half hours. The task is running on Windows Server 2008 R2. What I want to know is why it takes so much longer to run as a scheduled task - more than an hour longer. One thing I noticed is that the Task Scheduler runs tasks at below-normal priority, so when my task starts, it inherits the same lowered priority. However, I've updated the script to set the PowerShell process priority back to Normal, and it still takes just as long. Does anybody have an idea what could be different between the two scenarios? I've ruled out differences in processor and I/O load - this task is the only thing the system is used for, so there's nothing else running that could be competing for resources.

  • process and memory issue on linux server

    - by zapping
    I need some assistance in analyzing the Apache and PHP processes running on a Linux server. It is an 8-core Intel processor with 4 GB of RAM. When the website on it is running, top displays this:

        PID   USER       PR  NI  VIRT  RES   SHR   S %CPU %MEM    TIME+   COMMAND
        23459 username1  16   0  151m  27m   8388  S 11.3  0.7   0:11.71  php5
        23730 username1  16   0  151m  28m   8388  S 11.3  0.7   0:03.87  php5
        23458 username1  16   0  151m  28m   8388  S  3.0  0.7   0:19.20  php5
        16202 mysql      15   0  459m  38m   4624  S  0.7  1.0  62:33.81  mysqld
        24141 nobody     15   0  311m  5832  2304  S  0.3  0.1   0:00.03  httpd

    Why does the command show php5 when the website is accessed? Both Apache and PHP were preconfigured, so I am not sure what was done there. I tried setting up the same site and database on a different server, but there the process always shows as httpd and not php5. The site uses a MySQL database. The problem is that the server load seems to go up to about 5.x when the website is accessed by about 16 users. The output of free -m shows:

                     total       used       free     shared    buffers     cached
        Mem:          3941       3727        213          0        236       2734
        -/+ buffers/cache:        756       3184
        Swap:         4095          0       4095

    Lots of memory seems to be in cache and free memory is low. Even when the website is not accessed, i.e. leaving the server pretty much idle for about two days, free memory showed just 190 MB. When the site is accessed, free memory seems to drop to about 90 MB and then climb back to about 150 MB; it always seems to stay at just about 200 MB. Is this somehow related to the server load showing 5.x? Will adding more RAM resolve the load issue?

  • Is there a good host for console applications?

    - by Jamey McElveen
    We have several applications that run in AppHarbor that we are bringing in-house. Our worker processes for AppHarbor are developed as Windows console applications. What is the best way to host these console applications in-house? Of course I can just install and run them, but I am hoping for a more managed way to host them. I want them to restart if they fail and to be able to run multiple applications - things that were managed by AppHarbor. Thanks

  • Virtual PC duplication process

    - by Toddintr
    This is the process I use for duplicating a Virtual PC (on Windows 7):

    1 - Create a new VPC.
    2 - Install Windows 7 on the new VPC.
    3 - Configure the new Windows 7 installation (install Windows updates, install applications, etc).
    4 - Run Sysprep on the new VPC.
    5 - Shut down the new VPC.
    6 - Make a copy of the new VPC's VHD file.
    7 - Create a new VPC, specify "use existing VHD file" in the wizard and provide the name of the copied VHD file.

    The above works fine, but there is one point that threw me off: during the OOBE for the duplicated VPC, when asked for a user name, I had to specify a different user name than the one I had specified for the base VPC. This makes sense, because the copied VPC already has that user name. But what I did not understand is why I was asked for a new user name at all. Is it because it is part of the OOBE process, and when the OOBE was designed by Microsoft they did not consider that base OS images could be copied? Thanks - Todd
