Search Results

Search found 1964 results on 79 pages for 'kill ring'.

Page 7 of 79

  • Remote kill, upload, execute file

    - by Masoud M.
    I'm developing a program and I need to upload my xyz.exe file to many host machines and execute it frequently. I need a server-client tool that, after an update signal from my PC, performs these steps: the host machines should kill any running process named xyz.exe, download my new xyz.exe, and then execute the new xyz.exe. I know about tools like PsExec, but I need something more powerful with a better user interface. Is there any tool that does this? UPDATE: The systems are on the same LAN, the OS is Windows (XP or 7), and full remote access is not needed. I'm a developer, my program has to run on the remote hosts, and I'm testing my application.
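    In the meantime, if PsExec is acceptable as plumbing, those three steps can be scripted; below is a minimal Python sketch (host names, credentials, and paths are placeholders) that uses taskkill's remote options to kill the old copy and PsExec's copy-and-run mode to push and start the new one. It assumes the Sysinternals tools are on the PATH and that the hosts accept the given admin credentials.

        import subprocess

        # Placeholder hosts and credentials -- replace with your own.
        HOSTS = ["host1", "host2"]
        USER, PASSWORD = r"DOMAIN\admin", "secret"
        LOCAL_EXE = r"C:\build\xyz.exe"

        for host in HOSTS:
            # 1. Kill any running copy of xyz.exe on the remote host.
            subprocess.call(["taskkill", "/S", host, "/U", USER, "/P", PASSWORD,
                             "/IM", "xyz.exe", "/F"])
            # 2 + 3. Copy the new xyz.exe over (-c -f) and start it without waiting (-d).
            subprocess.call(["psexec", rf"\\{host}", "-u", USER, "-p", PASSWORD,
                             "-c", "-f", "-d", LOCAL_EXE])

    This is only a sketch of the mechanics, not the friendlier GUI tool being asked for.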

    Read the article

  • How to kill volume when headphones pulled out

    - by wonea
    I often use headphones to listen to music during work time. However, my chair occasionally yanks the headphone cable out of my laptop. With my Android phone this wouldn't be a problem, as the music automatically stops playing. Windows, however, does not kill the volume and instead re-routes the sound to my laptop's internal speakers. Aside from turning off my laptop speakers, is there a small utility available that would kill the volume, so my musical tastes aren't inflicted on my work associates?

    Read the article

  • Can't kill process TGitCache.exe

    - by ProfKaos
    Sometimes (I suspect when I open a music folder during the right moon phase, during a leap microsecond) this process crashes and pops up an error-reporting dialogue. I decline to report the error, because reporting also fails by now, and choose Exit. Exit just delays the reappearance of the error-reporting dialogue by about 2 seconds. If I try to kill the process using Sysinternals' Process Explorer, the process is just restarted, only to crash again. So I'm pretty sure another process, probably a service (TGitCache doesn't have a parent process, and no other Git processes are visible), is keeping tabs on this process and restarting it if it dies. This is cruel and inhuman, but how can I find which nanny process is prolonging the agony?
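    One low-tech way to identify the nanny is to poll the process table and record TGitCache.exe's parent the instant a new instance appears, before it can be orphaned. A rough sketch using the third-party psutil package (an assumption; any process-table API would do):

        import time

        import psutil  # third-party: pip install psutil

        seen = set()
        while True:
            for p in psutil.process_iter(["pid", "ppid", "name"]):
                if p.info["name"] == "TGitCache.exe" and p.info["pid"] not in seen:
                    seen.add(p.info["pid"])
                    try:
                        parent = p.parent()
                        name = parent.name() if parent else "unknown"
                        print(f"TGitCache.exe {p.info['pid']} started by "
                              f"{name} (ppid {p.info['ppid']})")
                    except psutil.NoSuchProcess:
                        pass  # it crashed again before we could look
            time.sleep(0.2)

    If the parent turns out to be explorer.exe hosting the TortoiseGit shell extension, that would explain the respawning.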

    Read the article

  • Kill ssh background process after disconnect / timeout?

    - by keflavich
    I frequently use ssh tunnels to access VNC sessions on remote machines, but this is on my laptop, so the connections break when I put it to sleep for the night. If I then try to re-open the connection in the morning, I have to manually kill the ssh session, otherwise I get this error:

        bind: Address already in use
        channel_setup_fwd_listener: cannot listen to port: 1202
        Could not request local forwarding.

    The SSH command I'm using is this:

        ssh -N -C -f -L 1202:localhost:5900 name@server

    What's the best way to have the ssh tunnel die when it disconnects? Or reset?
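    For what it's worth, OpenSSH has options that make a tunnel notice a dead peer and exit on its own, freeing the forwarded port. A sketch of wrapping the same command with them (the keepalive values are only illustrative, and -f is dropped so the wrapper owns the process):

        import subprocess

        subprocess.run([
            "ssh", "-N", "-C",
            "-o", "ExitOnForwardFailure=yes",   # fail fast if port 1202 is still taken
            "-o", "ServerAliveInterval=30",     # probe the server every 30 seconds
            "-o", "ServerAliveCountMax=3",      # give up after ~90 seconds of silence
            "-L", "1202:localhost:5900",
            "name@server",
        ])

    A plain shell alias with the same -o options would do just as well; the wrapper only exists to keep the example self-contained.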

    Read the article

  • Kill program after it outputs a given line, from a shell script

    - by Paul
    Background: I am writing a test script for a piece of computational biology software. The software I am testing can take days or even weeks to run, so it has recovery functionality built in, for the case of system crashes or power failures. I am trying to figure out how to test the recovery system. Specifically, I can't figure out a way to "crash" the program in a controlled manner. I was thinking of somehow timing a SIGKILL to fire after some amount of time. This is probably not ideal, as the test case isn't guaranteed to run at the same speed every time (it runs in a shared environment), so comparing the logs to the desired output would be difficult. This software DOES print a line for each section of analysis it completes. Question: Is there a good/elegant way (in a shell script) to capture output from a program and then kill the program when a given line, or a given number of lines, has been output?
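    The question asks for a shell script, but the approach is easiest to see in a small wrapper that reads the tool's output line by line and sends the signal once a trigger line appears; a Python sketch of that idea (the command and trigger text are placeholders):

        import signal
        import subprocess

        CMD = ["./analysis_tool", "--some-args"]   # placeholder command
        TRIGGER = "section 3 complete"             # placeholder progress line

        proc = subprocess.Popen(CMD, stdout=subprocess.PIPE, text=True)
        for line in proc.stdout:
            print(line, end="")
            if TRIGGER in line:
                proc.send_signal(signal.SIGKILL)   # simulate the crash at this exact point
                break
        proc.wait()

    Keying the kill to a progress line rather than a timer makes the crash point reproducible even when the shared machine is slow; if the tool buffers its output when writing to a pipe, prefixing the command with stdbuf -oL usually restores line buffering.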

    Read the article

  • Error running bash script - No matching processes

    - by Bashity
    I am trying to kill Xcode by running killall Xcode.app, which works normally when I run it through the terminal. However, if I put it into a bash script called re_xcode that I keep on my Desktop, the script outputs the following errors. Can you tell me where I am going wrong?

        No matching processes belonging to you were found
        The file /Users/Max/Desktop/Applications/Xcode.app does not exist.

    The script is:

        #!/bin/bash
        killall Xcode.app
        open ./Applications/Xcode.app

    Read the article

  • Killing all processes of current user

    - by Vi
        user@host$ killall -9 -u user

    Will this definitely kill all processes owned by user (including fork bombs)? Assume that no new processes are being spawned for that user from other users, that none of the user's processes are in D-sleep and unkillable, and that no processes are trying to detect, ptrace, or terminate the started killall. In other words, if killall finishes untampered with and successfully, is it 100% certain that no processes are left with this uid?
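    Short of a formal argument, the outcome can at least be checked immediately after killall returns; a small sketch using the third-party psutil package (an assumption) that lists anything still running under a given uid:

        import psutil  # third-party: pip install psutil

        def processes_for_uid(uid):
            """List (pid, name) for processes whose real uid matches."""
            leftovers = []
            for p in psutil.process_iter(["pid", "name"]):
                try:
                    if p.uids().real == uid:
                        leftovers.append((p.info["pid"], p.info["name"]))
                except psutil.NoSuchProcess:
                    pass  # it exited while we were iterating
            return leftovers

        print(processes_for_uid(1000))  # e.g. the numeric uid of 'user'

    An empty list right after the killall is evidence rather than proof, since killall works from a snapshot and a racing fork bomb could in principle slip a child in between the snapshot and the signal.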

    Read the article

  • mod_fcgi in virtualmin: graceful kill fail, sending SIGKILL?

    - by mgjk
    Yesterday around 1am our server ground to a crawl. This doesn't happen often, but I'm trying to get to the bottom of it. There was no unusual traffic volume and no unusual processes running; all of a sudden the server just started killing fcgid processes:

        [Thu Aug 02 01:17:32 2012] [warn] mod_fcgid: process 26460 graceful kill fail, sending SIGKILL

    ... and so on for as many fcgid processes as we have. CPU idle fell to 0% and I/O seemed to take up most of the load. The issue lasted about 5 minutes. I suspect there was some swap activity, although I'm not sure if it was due to killed processes being swapped in to die, or because some process ramped up its memory usage faster than my process-watching scripts can catch it. The oom-killer wasn't triggered (at least it isn't logged), so I think this was Apache restarting the processes for some reason. This is not regular, and nothing obvious appears in cron. Is there a normal Apache process which might cause this? We run dozens of different sites, and it was late at night, so volume was very, very low (maybe 200 requests in a 10-minute period).

    Read the article

  • Killing a process that keeps respawning

    - by terabytest
    I got infected by a virus. It looks like I removed it, but it somehow injected a few more processes (I can see them in the task manager) that respawn when I kill them. Is there a way to destroy those processes so they stop respawning, or, in case something else is respawning them, to kill that "something"?

    Read the article

  • Ctrl-C not killing process in CMD.EXE

    - by jtl999
    I've had this issue for a while, even after reinstalling. The issue happens after I reinstall all my programs, not in a fresh Windows install (obviously). I might have to spin up a VM and install each program one by one; I suspect it's Git for Windows, with its mini Cygwin wrapper, causing this issue. Anyway, the issue is basically that pressing Ctrl-C does not kill the running process. However, when I run cmd.exe or Git Bash as administrator, Ctrl-C works great again. Disabling UAC seems to break it again. I've made a video of the issue here. Many thanks.

    Read the article

  • Apache Getting Bogged Down By Certain Script (Wp-Cron.php) - How To Kill Process Automatically

    - by user50037
    I have a server that is running a number of WordPress blogs, and several of them have hundreds or thousands of posts. Every couple of days the server slows to a crawl due to a file being run by WordPress called wp-cron.php. My entire Apache process log turns into this: http://imgur.com/A7K9k.png

    Multiply that by quite a bit, and the server is unusable. Each process takes up about 1.1% of RAM, and when we have 50 of them on the go it gets insane. Not all of them are coming from the same blog; they are pretty widespread. In the Apache process page of WHM they are usually ALL set to the status "C", which means closing, but they can sit there until they crash the server, and they still hold the memory. Just google "wp-cron.php load" and you will find plenty of people with similar issues. In any case, we think it is down to users adding a ton of dead "ping lists" to their WordPress installations, which WordPress then loops through endlessly.

    Problem number 1: Does anyone have any other suggestions about what would cause the WordPress file wp-cron.php to loop endlessly? I still think it is down to pings, because all of the people we have contacted about their account load going sky high have had massive ping lists.

    Problem number 2: Even if it is down to excessive ping lists in WordPress, we cannot be babysitting every single account on the server waiting for it to start spawning the wp-cron processes. It often happens overnight, and I start getting SMS alerts at 2am about the load. I have CSF installed, which apparently would have ended the processes if they ran over XXX time, but I have been told that it won't catch these because they end up in the "closing" state (they show up as "C" on the Apache page of WHM); apparently CSF will only kill processes that are "running", which "C" does not count as. I have seen various other scripts, such as http://dltj.org/article/die-apache-die/ . I took a look at /proc/<pid>/stat, but I was boggled at which delimited field was the running time, and whether there was any way to connect it back to an actual Apache process so that I could see which file was being run (so as to only close connections handling wp-cron.php with a state of "C").

    Overall, I know Problem 2 glosses over the real cause, and I do put the whole thing down to excessive ping lists in WordPress. But I cannot sit there and babysit every single installation 24/7, so I need a way to save the server when I am not available. Any help would be much appreciated.
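    On the /proc question specifically: the 22nd field of /proc/<pid>/stat is the process start time, in clock ticks since boot, so elapsed run time can be derived from it and the system uptime. A hedged sketch of just that calculation (matching a worker to wp-cron.php is left out, since /proc alone doesn't say which request an Apache child is serving):

        import os

        def elapsed_seconds(pid):
            """Seconds a process has been running, from /proc/<pid>/stat."""
            with open(f"/proc/{pid}/stat") as f:
                data = f.read()
            # The command name (field 2) can contain spaces, so split after it.
            rest = data[data.rindex(")") + 2:].split()
            start_ticks = int(rest[19])            # field 22 overall: starttime
            with open("/proc/uptime") as f:
                uptime = float(f.readline().split()[0])
            return uptime - start_ticks / os.sysconf("SC_CLK_TCK")

        print(elapsed_seconds(os.getpid()))  # sanity check on the current process

    To tie a long-running child back to the URL it is serving, Apache's mod_status (with ExtendedStatus On) is the usual source rather than /proc.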

    Read the article

  • timeout duration on linux

    - by user1319451
    I'm trying to run a command for 5 hours and 10 minutes. I found out how to run it for 5 hours, but I'm unable to run it for 5 hours and 10 minutes.

        timeout -sKILL 5h mplayer -dumpstream http://82.201.100.23:80/slamfm -dumpfile slamfm.mp3

    runs fine. But when I try

        timeout -sKILL 5h10m mplayer -dumpstream http://82.201.100.23:80/slamfm -dumpfile slamfm.mp3

    I get this error:

        timeout: invalid time interval `5h10m'

    Does anyone know a way to run this command for 5 hours and 10 minutes and then kill it?
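    For the record, GNU timeout accepts only a single unit suffix, so the duration has to be expressed as one number (e.g. 310m or 18600s). If the shell syntax keeps fighting back, the same effect is a few lines of Python (stream URL copied from the question, duration written out in seconds):

        import subprocess

        CMD = ["mplayer", "-dumpstream", "http://82.201.100.23:80/slamfm",
               "-dumpfile", "slamfm.mp3"]

        try:
            # 5 hours and 10 minutes, expressed in seconds.
            subprocess.run(CMD, timeout=5 * 3600 + 10 * 60)
        except subprocess.TimeoutExpired:
            pass  # mplayer was killed once the time ran out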

    Read the article

  • How to kill mysql process through C#.

    - by deepesh khatri
    I am getting "too many connections" problem in an Asp .Net Mvc application which get fix when i manually kill process through Mysql v6.56 IDE, But on remote hosting computer where i can't kill process each time how can i fix this error. I have tried making a connection to information_schema DB's PROCESSLIST table but when connection is about to execute command there comes an error "access denied of root@loclahostto information_schema". I also have tried to grant all privileges to root@loclahost but still i am not able to fix this problem. I have been coding the same way from last two years but in this application i am getting this problem i have use close each connection in every method. Please if some one have ever got this problem or know the answer.Please help me. Thanx in advance

    Read the article

  • abandon session in asp.net on browser close..kill session cookie

    - by Tuviah
    So I have a website where I use session start and end events to track and limit open instances of our web application, even on the same computer. On page unload I call a session-enabled page method which then calls Session.Abandon. This triggers the session end event and clears the session variables, but unfortunately does not kill the session cookie! As a result, if other browser instances are open there are problems, because their session state has just disappeared. Much worse, when I open the site again while the zombie session cookie has still not expired, I get multiple session start and session end events on subsequent postbacks. This happens in all browsers. So how do I truly kill the session (i.e. force the cookie to expire)?

    Read the article

  • Kill a Perl system call after a timeout

    - by Fergal
    I've got a Perl script I'm using to run a file-processing tool, which is started using backticks. The problem is that occasionally the tool hangs and needs to be killed in order for the rest of the files to be processed. What's the best way to apply a timeout after which the parent script will kill the hung process? At the moment I'm using:

        foreach $file (@FILES) {
            $runResult = `mytool $file >> $file.log`;
        }

    But when mytool hangs, I'd like to be able to kill it after n seconds and continue to the next file.

    Read the article

  • How to start and kill processes from Java code (or C or Python) on *nix

    - by recipriversexclusion
    I need to write a process controller module on Linux that handles tasks, each of which is made up of multiple executables. The input to the controller is an XML file that contains the path to each executable and the list of command-line parameters to be passed to each. I need to implement the following functionality: (1) start each executable as an independent process, and (2) be able to kill any of the created processes independently of the others. In order to do (2), I think I need to capture the pid when I create a process, so I can issue a system kill command. I tried to get access to the pid in Java but saw no easy way to do it. All my other logic (putting info about the tasks in a DB, etc.) is done in Java, so I'd like to stick with that, but if there are solutions you can suggest in C, C++, or Python I'd appreciate those, too.
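    Since the question explicitly welcomes Python: the standard subprocess module hands back the pid directly, which removes the awkwardness Java has here. A minimal sketch (the executable paths and arguments stand in for whatever the XML file specifies):

        import signal
        import subprocess

        # Hypothetical tasks parsed from the controller's XML file.
        tasks = [["/path/to/exe_a", "--opt", "1"], ["/path/to/exe_b"]]

        procs = [subprocess.Popen(cmd) for cmd in tasks]   # each starts independently
        print("started pids:", [p.pid for p in procs])

        # Later, kill one of them without touching the others.
        procs[0].send_signal(signal.SIGTERM)   # or procs[0].kill() for SIGKILL
        procs[0].wait()                        # reap it so it doesn't linger as a zombie

    In Java, getting the pid required reflection tricks before Java 9 added Process.pid(), which is presumably why it felt out of reach.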

    Read the article

  • Way to kill python thread from inside thread?

    - by user859434
    I have some Python code that currently performs an expensive computation in parallel through many threads. For a given time period, many threads are created and started on the fly; they share the same code, which is stated explicitly within the thread's run method. My question is: how do I stop/kill a thread at the end of its run method (run is only called once)? I need to do this in order to create more threads for the next batch of computation.

        # Example
        class someThread(threading.Thread):
            def __init__(self):
                # some init code

            def run(self):
                # explicitly stated code, without constant loops
                # something performed here to stop/kill this thread
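    For what is described here, nothing special should be needed: a Python thread is finished as soon as its run() method returns, and a fresh batch of threads can simply be created afterwards. A small sketch of that batch pattern (compute() is a stand-in for the real work):

        import threading

        def compute(item):
            pass  # stand-in for the expensive, non-looping computation

        for batch in [range(0, 8), range(8, 16)]:            # example batches
            threads = [threading.Thread(target=compute, args=(i,)) for i in batch]
            for t in threads:
                t.start()
            for t in threads:
                t.join()   # the thread is dead once join() returns
            # the next batch of threads is created on the following loop iteration

    The finished Thread objects are garbage-collected like any other object once nothing references them.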

    Read the article

  • monitor and kill runaway processes using 100% IO?

    - by bleomycin
    Hello everyone, I have a few processes that have to run at high priority (chrt 98) and that will occasionally decide to hard-lock and peg one core at 100% (not a huge deal), but more importantly they will use all the I/O on the system, so much that it's impossible to log into the machine via ssh to kill them, or to perform any task on the machine that isn't already loaded into RAM. If I happen to have something like htop already running, I am able to end the process fine. Is there any type of utility or way to monitor for this type of runaway process and kill anything that uses 100% of system I/O for more than X amount of time? Thanks!
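    I'm not aware of a stock utility that does exactly this, but a small watchdog is easy to sketch. The version below (using the third-party psutil package, with thresholds picked arbitrarily) kills a named process once its CPU has stayed pegged for too long; per-process I/O saturation is harder to measure portably, so CPU stands in for it here, and psutil's io_counters() could drive a similar rule:

        import time
        from collections import defaultdict

        import psutil  # third-party: pip install psutil

        WATCH = "myhighprio"    # hypothetical name of the process to watch
        LIMIT_SECONDS = 30      # how long a pegged core is tolerated

        pegged_since = defaultdict(lambda: None)
        while True:
            for p in psutil.process_iter(["pid", "name"]):
                if p.info["name"] != WATCH:
                    continue
                try:
                    busy = p.cpu_percent(interval=1.0) > 95
                except psutil.NoSuchProcess:
                    continue
                pid, now = p.info["pid"], time.time()
                if not busy:
                    pegged_since[pid] = None
                elif pegged_since[pid] is None:
                    pegged_since[pid] = now
                elif now - pegged_since[pid] > LIMIT_SECONDS:
                    p.kill()

    One caveat: the watchdog itself should probably run at an even higher priority (and be locked into RAM) than the watched processes, or it will starve along with everything else when the lock-up happens.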

    Read the article

  • How should I clean up hung grandchild processes when an alarm trips in Perl?

    - by brian d foy
    I have a parallelized automation script which needs to call many other scripts, some of which hang because they (incorrectly) wait for standard input. That's not a big deal because I catch those with alarm. The trick is to shut down those hung grandchild processes when the child shuts down. I thought various incantations of SIGCHLD, waiting, and process groups could do the trick, but they all block and the grandchildren aren't reaped. My solution, which works, just doesn't seem like it is the right solution. I'm not especially interested in the Windows solution just yet, but I'll eventually need that too. Mine only works for Unix, which is fine for now.

    I wrote a small script that takes the number of simultaneous parallel children to run and the total number of forks:

        $ fork_bomb <parallel jobs> <number of forks>
        $ fork_bomb 8 500

    This will probably hit the per-user process limit within a couple of minutes. Many solutions I've found just tell you to increase the per-user process limit, but I need this to run about 300,000 times, so that isn't going to work. Similarly, suggestions to re-exec and so on to clear the process table aren't what I need. I'd like to actually fix the problem instead of slapping duct tape over it.

    I crawl the process table looking for the child processes and shut down the hung processes individually in the SIGALRM handler, which needs to die because the rest of the real code has no hope of success after that. The kludgey crawl through the process table doesn't bother me from a performance perspective, but I wouldn't mind not doing it:

        use Parallel::ForkManager;
        use Proc::ProcessTable;

        my $pm = Parallel::ForkManager->new( $ARGV[0] );

        my $alarm_sub = sub {
            kill 9,
                map  { $_->{pid} }
                grep { $_->{ppid} == $$ }
                @{ Proc::ProcessTable->new->table };
            die "Alarm rang for $$!\n";
            };

        foreach ( 0 .. $ARGV[1] ) {
            print ".";
            print "\n" unless $count++ % 50;

            my $pid = $pm->start and next;

            local $SIG{ALRM} = $alarm_sub;
            eval {
                alarm( 2 );
                system "$^X -le '<STDIN>'"; # this will hang
                alarm( 0 );
                };

            $pm->finish;
            }

    If you want to run out of processes, take out the kill. I thought that setting a process group would work so I could kill everything together, but that blocks:

        my $alarm_sub = sub {
            kill 9, -$$; # blocks here
            die "Alarm rang for $$!\n";
            };

        foreach ( 0 .. $ARGV[1] ) {
            print ".";
            print "\n" unless $count++ % 50;

            my $pid = $pm->start and next;
            setpgrp(0, 0);

            local $SIG{ALRM} = $alarm_sub;
            eval {
                alarm( 2 );
                system "$^X -le '<STDIN>'"; # this will hang
                alarm( 0 );
                };

            $pm->finish;
            }

    The same thing with POSIX's setsid didn't work either, and I think that actually broke things in a different way, since I'm not really daemonizing this. Curiously, Parallel::ForkManager's run_on_finish happens too late for the same clean-up code: the grandchildren are apparently already disassociated from the child processes at that point.

    Read the article

  • Windows 7 XP Mode-Program not ending properly

    - by iceman33
    We have recently added a few new machines to our network with Windows 7 Enterprise 64-bit installed on them. We have a program that is incompatible with Windows 7 right now, so we have it installed in the Windows XP Mode that we have set up there. There is a shortcut on the desktop that runs it with integration services, and that part is working successfully. Occasionally this program will stop working, or the server it connects to has to be rebooted, and the program has to be closed out. However, its process in the Task Manager doesn't seem to close properly, so in order to shut the program down correctly we have to make the users log back into XP Mode and press Ctrl+Alt+Delete to kill the process, or go back into the machine and perform a restart. I was wondering if anyone has come across a way within XP Mode to have the virtual machine shut down all processes when it goes into hibernation, or to have a restart of the host machine shut everything down in the virtual XP Mode as well, rather than keeping that program running. Any help would be greatly appreciated.

    Read the article

  • I have a perl script that is supposed to run indefinitely. It's being killed... how do I determine who or what kills it?

    - by John O
    I run the Perl script in screen (I can log in and check debug output). Nothing in the logic of the script should be capable of killing it quite this dead. I'm one of only two people with access to the server, and the other guy swears that it isn't him (and we both have quite a bit of money riding on it continuing to run without a hitch). I have no reason to believe that some hacker has managed to get a shell or anything like that. I have very little reason to suspect the admins of the host operation (bandwidth/CPU-wise, this script is pretty lightweight). Screen continues to run, but at the end of the output of the Perl script I see "Killed" and it has dropped back to a prompt. How do I go about finding out what is whacking the damn thing? I've checked crontab; nothing in there would kill random (or non-random) processes. Nothing in any of the log files gives any hint. It will run from 2 to 8 hours, it would seem (and on my Mac at home it will run well over 24 hours without a problem). The server is running Ubuntu, version something-or-other; I can look that up if it matters.

    Read the article

  • Killing a process which ran for a lot of time or is using a lot of memory

    - by Vedant Terkar
    I am not sure whether this question belongs on Stack Overflow or here, but here we go. I am designing an online 'C' compiler, which will compile and invoke the program if compilation succeeds. Here is the code I am using for that:

        $str=shell_exec("gcc path/to/file.c -o path/to/file.exe 2>&1");
        if(file_exists("path/to/file.exe")){
            $res=shell_exec("path/to/file.exe <inputfile 2>&1");
            echo $res;
        }

    This seems to work fine with simple program files, but when file.c (the source code entered) contains an infinite loop, this script crashes the server and uses a lot of memory and time. So here are my questions:

    1. Is there any way to detect how long the file.exe process has been running?
    2. How much space (memory) is used by that file.exe process?
    3. Is there any way to kill the file.exe process if its space or time usage goes beyond a certain limit? That is, if we allow at most 2.5 s of time and 40 MB of space for file.exe and either constraint is violated, we should display an appropriate error message to the client. Is that possible?

    I am using WAMP (Windows 7).
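    The enforcement itself is language-agnostic: start the compiled program as a child process, then poll its run time and memory use and kill it when either limit is crossed (see the sketch below). This version is Python, using the third-party psutil package; the same polling idea can be reimplemented on the PHP side:

        import subprocess
        import time

        import psutil  # third-party: pip install psutil

        TIME_LIMIT = 2.5                  # seconds
        MEM_LIMIT = 40 * 1024 * 1024      # 40 MB

        proc = subprocess.Popen([r"path\to\file.exe"], stdin=open("inputfile"),
                                stdout=subprocess.DEVNULL)  # real code would capture output
        ps = psutil.Process(proc.pid)
        start = time.time()

        while proc.poll() is None:
            if time.time() - start > TIME_LIMIT:
                proc.kill()
                print("Time limit exceeded")
                break
            if ps.memory_info().rss > MEM_LIMIT:
                proc.kill()
                print("Memory limit exceeded")
                break
            time.sleep(0.05)
        proc.wait()

    Note this measures wall-clock time and resident memory, which is usually close enough for catching infinite loops.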

    Read the article

  • Kill a node in dojo.dnd.source ?

    - by Soulhuntre
    Related to my SO issue at http://stackoverflow.com/questions/3010996/dojo-extending-dojo-dnd-source-move-not-happening-ideas/3012518#3012518 I am now almost done. I have a dnd.Source-derived class (we can consider it a dnd.Source for now) that has within it a node that has a specific class.

        function declare_mockupSmartDndUl(){
            dojo.require("dojo.dnd.Source");
            dojo.provide("mockup.SmartDndUl");
            dojo.declare("mockup.SmartDndUl", dojo.dnd.Source, {
                markupFactory: function(params, node){
                    //params._skipStartup = true;
                    return new mockup.SmartDndUl(node, params);
                },
                onDropExternal: function(source, nodes, copy){
                    console.debug('onDropExternal called...');
                    // dojo.destroy(this.getAllNodes().query(".dndInstructions"));
                    this.inherited(arguments);
                    var x = source.getAllNodes().length;
                    if( x == 0 ){
                        newnode = document.createElement('li');
                        newnode.innerHTML = "Hello!";
                        dojo.addClass(newnode,"dndInstructions");
                        source.node.appendChild(newnode);
                    }
                    return true;
                    // return dojo.dnd.Source.prototype.onDropExternal.call(this, source, nodes, copy);
                }
            });
        }

    You can see the place I mean from the dojo.destroy that is commented out, because it was totally n00b :) If I do this

        var y = this.getAllNodes().query(".dndInstructions")

    the NodeList in y absolutely does contain the node. Now I need to kill it, nuke it, get it out of there: out of the dnd.Source, out of the DOM... gone. Any ideas how to do it safely? It will be the ONLY node in the list at the time we do whatever it is we are going to do to kill the thing. Thanks!

    Read the article

  • Any techniques to interrupt, kill, or otherwise unwind (releasing synchronization locks) a single deadlocked thread?

    - by gojomo
    I have a long-running process where, due to a bug, a trivial/expendable thread is deadlocked with a thread that I would like to continue, so that it can perform some final reporting that would be hard to reproduce in another way. Of course, fixing the bug for future runs is the proper ultimate resolution. Of course, any such forced interrupt/kill/stop of a thread is inherently unsafe and likely to cause other unpredictable inconsistencies. (I'm familiar with all the standard warnings and the reasons for them.) But still, since the only alternative is to kill the JVM process and go through a lengthier procedure that would result in a less complete final report, messy/deprecated/dangerous/risky/one-time techniques are exactly what I'd like to try. The JVM is Sun's 1.6.0_16 64-bit on Ubuntu, and the expendable thread is waiting to lock an object monitor. Specifically:

    Can an OS signal directed at an exact thread create an InterruptedException in the expendable thread?

    Could attaching with gdb, and directly tampering with JVM data or calling JVM procedures, allow a forced release of the object monitor held by the expendable thread?

    Would a Thread.interrupt() from another thread generate an InterruptedException from the waiting-to-lock frame? (With some effort, I can inject an arbitrary BeanShell script into the running system.)

    Can the deprecated Thread.stop() be sent via JMX or any other remote-injection method?

    Any ideas appreciated, the more 'dangerous', the better! And if your suggestion has worked in personal experience in a similar situation, all the better!

    Read the article

  • Robustly killing Windows programs stuck reporting 'problems'

    - by grrussel
    I am looking for a means to kill a Windows exe program that, when being tested from a Python script, crashes and presents a dialog to the user; as this program is invoked many times, and may crash repeatedly, this is not workable. The problem dialog is the standard Windows error report: "Foo.exe has encountered a problem and needs to close. We are sorry for the inconvenience", offering Debug, Send Error Report, and Don't Send buttons. I am able to kill other kinds of dialog resulting from crashes (e.g. a debug build's assert-failure dialog is handled fine). I have tried taskkill.exe, pskill, and the terminate() function on the Popen object from the subprocess module that was used to invoke the exe. Has anyone encountered this specific issue and found a resolution? I expect that automating user input to select the window and press the "Don't Send" button is one possible solution, but I would like something far simpler if possible.
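    One approach worth trying is to suppress the Windows error-reporting box altogether rather than dismiss it: an error mode set with SetErrorMode in the parent is inherited by the processes it launches. A hedged sketch (flag values taken from the Win32 headers; whether it covers this particular dialog on a given Windows version is worth verifying):

        import ctypes
        import subprocess

        SEM_FAILCRITICALERRORS = 0x0001
        SEM_NOGPFAULTERRORBOX  = 0x0002
        SEM_NOOPENFILEERRORBOX = 0x8000

        # The child inherits the parent's error mode, so crashes in Foo.exe
        # should terminate it instead of popping the "needs to close" dialog.
        ctypes.windll.kernel32.SetErrorMode(
            SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX | SEM_NOOPENFILEERRORBOX)

        proc = subprocess.Popen(["Foo.exe"])
        print("Foo.exe exited with", proc.wait())

    If the dialog still appears, automating the "Don't Send" click remains the fallback, but the error-mode route is far simpler when it works.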

    Read the article
