Search Results

Search found 52375 results on 2095 pages for 'process id'.

Page 4/2095 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • NHibernate Fluent domain object with Id(x => x.id).GeneratedBy.Assigned not saveable

    - by urpcor
    Hi there, I am using domain classes with mappings for a legacy DB. The Ids of the entities are calculated by a stored procedure in the DB which gives back the Id for the new row (it's legacy, I can't change this). Now I create the new entity, set the Id and call Save, but nothing happens. No exception. Even NH Profiler does not say a thing; it's as if the Save call does nothing. I suspect that NH thinks the record is already in the DB because it already has an Id. But I am using Id(x => x.id).GeneratedBy.Assigned() and intentionally the Session.Save(object) method. I am confused; I saw so many samples where it worked. Does anybody have any ideas about it? public class Appendix { public virtual int id { get; set; } public virtual AppendixHierarchy AppendixHierachy { get; set; } public virtual byte[] appendix { get; set; } } public class AppendixMap : ClassMap<Appendix> { public AppendixMap () { WithTable("appendix"); Id(x => x.id).GeneratedBy.Assigned(); References(x => x.AppendixHierachy).ColumnName("appendixHierarchyId"); Map(x => x.appendix); } }

    Read the article

  • Cannot Kill Process in Vista 64

    - by JanSolo
    Hi, I have a weird situation where a zombie process is causing my Vista64 dev machine to become useless. I use Incredibuild 3.40 to distribute builds of a large software product that I work on. Occasionally, a build will fail and a zombie process is left behind. The process holds a handle to a file that is needed by the build system to retry the build. Since I cannot kill the process, the handle remains open and I cannot build my code at all. I've tried TargetManager and Process Explorer, but neither can kill this process. It gets worse; since Vista cannot kill all its processes, my PC refuses to shut down correctly, requiring a hard reboot after each failed build. Is there a way to really-really-kill a process in Vista? Or maybe a way to force a file handle to close? Any help is appreciated. Cheers, Jan. EDIT: This is still occurring. I've used LockHunter (which appears to successfully unlock the file handle), but retrying the build still fails because the (now unlocked) file cannot be deleted. Explorer and LockHunter both fail to delete the file. LockHunter also tells me that there are no processes that hold handles to it. Basically, nothing owns it, but you still can't delete it. This sucks.

    Read the article

  • handle exit event of child process

    - by Ehsan
    I have a console application, and in the Main method I start a process like the code below. When the process exits, the Exited event of the process is fired, but it closes my console application too. I just want to start a process and then, in the exit event of that process, start another process. It is also weird that the process output is reflected in my main console application. Process newCrawler = new Process(); newCrawler.StartInfo = new ProcessStartInfo(); newCrawler.StartInfo.FileName = configSection.CrawlerPath; newCrawler.EnableRaisingEvents = true; newCrawler.Exited += new EventHandler(newCrawler_Exited); newCrawler.StartInfo.Arguments = "someArg"; newCrawler.StartInfo.WindowStyle = ProcessWindowStyle.Hidden; newCrawler.StartInfo.UseShellExecute = false; newCrawler.Start();
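
    A rough Python analogue of the intended pattern (an illustration, not the asker's C# code): redirect the child's output away from the parent's console, wait for the child to exit, and only then start the next process. The executable name and arguments are placeholders.

      import subprocess

      def run_crawler_then_next():
          # Keep the child's output out of this console by sending it to a log file.
          with open("crawler.log", "w") as log:
              first = subprocess.Popen(["crawler.exe", "someArg"],
                                       stdout=log, stderr=subprocess.STDOUT)
              first.wait()  # blocks until the child exits (the "Exited" moment)
          # Equivalent of the Exited handler: start the next process here.
          subprocess.Popen(["crawler.exe", "anotherArg"],
                           stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

      if __name__ == "__main__":
          run_crawler_then_next()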

    Read the article

  • C# Process Exited event not firing from within webservice

    - by davidpizon
    I am attempting to wrap a 3rd party command line application within a web service. If I run the following code from within a console application: Process process= new System.Diagnostics.Process(); process.StartInfo.FileName = "some_executable.exe"; // Do not spawn a window for this process process.StartInfo.CreateNoWindow = true; process.StartInfo.ErrorDialog = false; // Redirect input, output, and error streams process.StartInfo.UseShellExecute = false; process.StartInfo.RedirectStandardOutput = true; process.StartInfo.RedirectStandardError = true; process.StartInfo.RedirectStandardInput = true; process.EnableRaisingEvents = true; process.ErrorDataReceived += (sendingProcess, eventArgs) => { // Make note of the error message if (!String.IsNullOrEmpty(eventArgs.Data)) if (this.WarningMessageEvent != null) this.WarningMessageEvent(this, new MessageEventArgs(eventArgs.Data)); }; process.OutputDataReceived += (sendingProcess, eventArgs) => { // Make note of the message if (!String.IsNullOrEmpty(eventArgs.Data)) if (this.DebugMessageEvent != null) this.DebugMessageEvent(this, new MessageEventArgs(eventArgs.Data)); }; process.Exited += (object sender, EventArgs e) => { // Make note of the exit event if (this.DebugMessageEvent != null) this.DebugMessageEvent(this, new MessageEventArgs("The command exited")); }; process.Start(); process.StandardInput.Close(); process.BeginOutputReadLine(); process.BeginErrorReadLine(); process.WaitForExit(); int exitCode = process.ExitCode; process.Close(); process.Dispose(); if (this.DebugMessageEvent != null) this.DebugMessageEvent(this, new MessageEventArgs("The command exited with code: " + exitCode)); All events, including the "process.Exited" event fires as expected. However, when this code is invoked from within a web service method, all events EXCEPT the "process.Exited" event fire. The execution appears to hang at the line: process.WaitForExit(); Would anyone be able to shed some light as to what I might be missing?
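
    For comparison only, a minimal Python sketch of the same capture-and-wait pattern: subprocess.run drains both redirected streams itself and returns the exit code, which avoids the classic hang where a full output pipe keeps the child from finishing. The executable name and timeout are placeholders; this is not a fix for the C# web-service case.

      import subprocess

      # Run the tool, let the library drain stdout/stderr, and collect the exit code.
      result = subprocess.run(["some_executable.exe"],
                              capture_output=True, text=True, timeout=600)
      print("exit code:", result.returncode)
      print("stdout:", result.stdout)
      print("stderr:", result.stderr)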

    Read the article

  • Process Close does not close a process when performed in a SetTimer label

    - by NbdNnm
    The script below creates a child process to perform FileExist() separately from the main script, so that the script is not halted when an unknown network path is passed. However, the Process Close command does not close the given process, even though its ErrorLevel indicates the close succeeded. Task Manager shows the child process still exists. FileExistTimeout() ; this checks if the necessary parameters are passed ; this checks if the given path exists by creating a child process with the given timeout PathToCheck := "\\1.2.3.4\test" msgbox, 64, Test Result, % FileExistTimeout(PathToCheck) "`n" ErrorLevel FileExistTimeout(strPath="", nTimeout=1000, strSwitch="/fe") { global strParam1 = %1% strParam2 = %2% ; if the function is used to check the script parameters if (strPath="") { if (strParam2 = strSwitch) && strParam1 ; this means the function is placed at the beginning of the script. ExitApp % (FileExist(strParam1) != "") + 1 ; Return 1 or 2. Return } ; create a child process to check the path tooltip, RunWait is performing.... SetTimer, FileExistTimeout, % -1 * nTimeout RunWait "%A_AhkPath%" "%A_ScriptFullPath%" "%strPath%" "%strSwitch%",,, nPID SetTimer, FileExistTimeout, Off ; Disable the timer (if it hasn't fired already). tooltip ; ErrorLevel contains the exit code of the process (from RunWait). if ErrorLevel = 2 ; found return true else if ErrorLevel = 1 ; not found return return ; timeout FileExistTimeout: Process Close, %nPID% ; Timed out - terminate the process. tooltip, % "Process Closed: " ErrorLevel "`n" "Used Pid: " nPID,,, 2 return }
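
    The same spawn-a-child-and-kill-it-on-timeout idea, sketched in Python for comparison (an illustration, not a fix for the AutoHotkey script): the existence check runs in a worker process that is terminated if it has not answered within the timeout.

      import multiprocessing
      import os

      def _check(path, queue):
          queue.put(os.path.exists(path))  # may hang on a dead network path

      def exists_with_timeout(path, timeout=1.0):
          queue = multiprocessing.Queue()
          worker = multiprocessing.Process(target=_check, args=(path, queue))
          worker.start()
          worker.join(timeout)
          if worker.is_alive():        # still stuck: the counterpart of "Process Close"
              worker.terminate()
              worker.join()
              return None              # timed out
          return queue.get()

      if __name__ == "__main__":
          print(exists_with_timeout(r"\\1.2.3.4\test", 1.0))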

    Read the article

  • Comparing an id to the ids of another table's rows in MySQL

    - by jett
    So I am trying to retrieve all interests for someone, and be able to list them. This works with the following query. SELECT *,( SELECT GROUP_CONCAT(interest_id SEPARATOR ",") FROM people_interests WHERE person_id = people.id ) AS interests FROM people WHERE id IN ( SELECT person_id FROM people_interests WHERE interest_id = '.$site->db->clean($_POST['showinterest_id']).' ) ORDER BY lastname, firstname In this one, which I am having trouble with, I want to select only those who happen to have their id in the table named volleyballplayers. That table just has id, person_id, team_id, and date fields. SELECT *,( SELECT GROUP_CONCAT(interest_id SEPARATOR ",") FROM people_interests WHERE person_id = people.id ) AS interests FROM people WHERE id IN ( SELECT person_id FROM people_interests WHERE volleyballplayers.person_id = person_id ) ORDER BY lastname, firstname I just want to make sure that only the people who are in the volleyballplayers table show up, but I am getting an error saying Unknown column 'volleyballplayers.person_id' in 'where clause', although I am quite sure of the name of the table and I know the column is named person_id.
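
    The error suggests the subquery references volleyballplayers without that table appearing in any FROM clause. One possible correction is to select the ids straight from volleyballplayers; the sketch below shows the query issued from Python purely for illustration, with placeholder connection details (and without the PHP cleaning code from the first query).

      import mysql.connector  # assumes the MySQL Connector/Python package

      QUERY = """
          SELECT *,
                 (SELECT GROUP_CONCAT(interest_id SEPARATOR ',')
                    FROM people_interests
                   WHERE person_id = people.id) AS interests
            FROM people
           WHERE id IN (SELECT person_id FROM volleyballplayers)
           ORDER BY lastname, firstname
      """

      conn = mysql.connector.connect(host="localhost", user="app",
                                     password="secret", database="mydb")  # placeholders
      cur = conn.cursor()
      cur.execute(QUERY)
      for row in cur.fetchall():
          print(row)
      conn.close()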

    Read the article

  • Cannot log in to Ubuntu 14.04, returns to the login screen

    - by Safder
    I updated to 14.04 from 12.04.03. I was using Cinnamon at that time. When I upgraded to 14.04 I got a serious problem: I am unable to log in, as whenever I hit Enter after entering my password it takes me straight back to the login screen. I am unable to log in even as Guest. So, here is my .xsession-errors file; possibly someone could help. Thanks in advance. Script for ibus started at run_im. Script for auto started at run_im. Script for default started at run_im. init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd respawning too fast, stopped init: gnome-settings-daemon main process (14717) killed by TRAP signal init: gnome-settings-daemon main process ended, respawning init: update-notifier-crash (/var/crash/_usr_bin_gnome-session.1000.crash) main process (14607) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-x11.1000.crash) main process (14623) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_x86_64-linux-gnu_bamf_bamfdaemon.1000.crash) main process (14663) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-ui-gtk3.1000.crash) main process (14620) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_gnome-session_gnome-session-check-accelerated.1000.crash) main process (14612) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_gnome-settings-daemon_gnome-settings-daemon.1000.crash) main process (14616) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_unity_unity-panel-service.1000.crash) main process (14629) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_unity-settings-daemon_unity-settings-daemon.1000.crash) main process (14625) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_update-notifier_system-crash-notification.1000.crash) main process (14662) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_gnome-session_gnome-session-check-accelerated.127.crash) main process (14613) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_share_apport_apport-gtk.1000.crash) main process (14665) terminated with status 133 init: update-notifier-crash (/var/crash/linux-image-3.13.0-24-generic.0.crash) main process (14606) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_bin_Xorg.0.crash) main process (14609) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-x11.127.crash) main process (14624) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-ui-gtk3.127.crash) main process (14622) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_gnome-settings-daemon_gnome-settings-daemon.127.crash) main process (14618) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_unity-settings-daemon_unity-settings-daemon.127.crash) main process (14628) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_x86_64-linux-gnu_bamf_bamfdaemon.127.crash) main process (14664) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_bin_gnome-session.127.crash) main process (14608) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_unity_unity-panel-service.127.crash) main process (14634) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_share_apport_apport-gtk.127.crash) main process (14667) terminated with status 133 init: gnome-settings-daemon main process (14843) killed by TRAP signal init: gnome-settings-daemon main process ended, respawning init: gnome-settings-daemon main process (14850) killed by TRAP signal init: gnome-settings-daemon main process ended, respawning init: gnome-session (GNOME) main process (14727) killed by TRAP signal init: gnome-settings-daemon main process (14859) killed by TRAP signal init: logrotate main process (14570) killed by TERM signal init: upstart-dbus-session-bridge main process (14683) terminated with status 1 init: Disconnected from notified D-Bus bus

    Read the article

  • What quality standards to consider for a software development process?

    - by Ron-Damon
    Hi, I'm looking for metrics/standards/norms to evaluate a given "Software Development Process". I'm NOT looking to evaluate the SOFTWARE itself (through SQuaRE and such); I'm trying to evaluate the software development PROCESS. So, my question is whether you could give me some pointers to find such a standard, considering that the "evaluation objectives" would be documentation quality, how good the customer relationship is, how effective the process is, etc. Very much like ISO 9000, and like CMMI in a sense, but much more lightweight, concrete and process-oriented, not company-oriented. Please help; I'm trying to establish the advantages of our development process as formally as I can.

    Read the article

  • How to tell process id within Python

    - by R S
    Hey, I am working with a cluster system on Linux (www.mosix.org) that allows me to run jobs and have the system run them on different computers. Jobs are run like so: mosrun ls & This will naturally create the process and run it in the background, returning the process id, like so: [1] 29199 Later it will return. I am writing a Python infrastructure that will run jobs and control them. For that I want to run jobs using the mosrun program as above, and save the process ID of the spawned process (29199 in this case). This naturally cannot be done using os.system or commands.getoutput, as the printed ID is not something the process prints to its output... Any clues? Edit: Since the Python script is only meant to launch the jobs, the jobs need to run longer than the Python shell. I guess that means the mosrun process cannot be the script's child process. Any suggestions? Thanks
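
    One approach, sketched below rather than taken from the question: launch mosrun from Python with subprocess, which exposes the pid of the spawned mosrun process directly; start_new_session detaches the job from the script's session so it is not tied to the Python shell's lifetime. Whether MOSIX needs anything further here is an assumption to verify.

      import subprocess

      # Launch mosrun directly; Popen exposes the pid of the spawned process.
      job = subprocess.Popen(["mosrun", "ls"],
                             stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL,
                             start_new_session=True)  # detach from this session
      print("mosrun pid:", job.pid)
      # Do not wait(): the Python script may exit while the job keeps running.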

    Read the article

  • How does caller ID work on VoIP?

    - by Cawas
    Maybe this depends on the protocol; if so, I'm wondering mostly about SIP. That's all, just focus on the title please, but for a little background... I am using Voipraider and I was just trying to set it up to use my phone number as the caller ID, since I can't have this VoIP setup of mine on all the time (thus using a DID or being able to receive calls through VoIP wouldn't be a solution here). I could actually make it work, but only using the Voipraider software; from other places, the caller ID doesn't show properly. So, I am wondering how it actually works, to be able to go and look for a fix for this. I want details about the protocol, if that's relevant.

    Read the article

  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. A little digging: it was originally one system. It was good, but the IT guys wanted to change it to a new one, much better, cheaper, more flexible, more progressive in its technologies, more suitable for the future, for a faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first) and turn on the new one (call it B, because it is the second but not the last one). A has hundreds of users all across the country, and they must learn B. A still has a lot of nice custom features, home-made features that cannot disappear. These features have to be moved to B, and that is a long process, months and months of redevelopment. So, the decision was simple. Let's move, not jump: let both systems work side by side for several months. In that time we could teach the users and move all of A's special custom functionality to B. That automatically means both systems have to work side by side all these months and use the same data. Data in A and B must be in sync. That's how integration projects are born. Moreover, the specifics of the user tasks require that both systems be in sync in real time. Nightly synchronization absolutely would not work. First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, Create, Update, or Delete operations are performed on it, and the sync process can be started. The obvious decision is to use triggers on the tables. When we are talking about data, we are talking about several entities, for example Orders and Items [in Orders]. We decided to use BizTalk Server to synchronize the systems. Why it was chosen is another story. Second draft Let's take an example of how it works in more detail. 1. A user creates a new entity in the A system. This fires an insert trigger on the entity table. The trigger has to pass on the message "Entity created". This message includes all attributes of the new entity, but I will focus on the Id of this entity in the A system. The notation for this message is id.A. System A sends id.A to the BizTalk Server. 2. BizTalk transforms id.A to the format of system B. This is the easiest part and I will not focus on this kind of transformation in the following text. The message in the picture is still id.A but in a slightly different format, which is why it changes color. BizTalk sends id.A to system B. 3. System B creates the entity on its side. But it uses different id-s for its entities; these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to BizTalk. 4. BizTalk sends the message id.A+id.B to system A. 5. System A saves id.A+id.B. Why should both id-s be saved on both systems? It was one of the next requirements. Users of both systems have to know whether the systems are in sync or not. Users working with an entity in system A can see the id.B and use it to switch to system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft The next problem was the reliability of the synchronization. The synchronizing process can be interrupted at each step, when a message goes over the wire. It can be a communication problem, a timeout, a temporary shutdown of one of the systems, or the second system being unable to synchronize for some internal reason. There were several potential problems that prevented enclosing the whole synchronization process in one transaction. The decision was to restart the whole sync process if it did not finish (in case of error). For this purpose an additional service was created. Let's call it the Resync service. We still keep the id pairs in both systems, but only for fast access, not for the synchronization process. For synchronization these id-s are now kept in one main place, the Resync service database. The Resync service keeps records as: · Id.A · Id.B · Entity.Type · Operation (Create, Update, Delete) · IsSyncStarted (true/false) · IsSyncFinished (true/false) The example now looks like: 1. System A creates id.A. id.A is saved on A. Id.A is sent to BizTalk. 2. BizTalk sends id.A to the Resync and to B. id.A is saved on the Resync. 3. System B creates id.B. id.A+id.B are saved on B. id.A+id.B are sent to BizTalk. 4. BizTalk sends id.A+id.B to the Resync and to A. id.A+id.B are saved on the Resync. 5. id.A+id.B are saved on A. Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods: · Save (id.A, Entity.Type, Operation) · Save (id.A, id.B, Entity.Type, Operation) · Resync () The two Save() methods are used to save id-s to the service storage; see steps 2 and 4 in the example above. What about Resync()? It is the method that finishes interrupted synchronization processes. While Save() is started by a trigger event, Resync() works as an independent process. It periodically scans the Resync storage to find "unfinished" records, then restarts those synchronization processes. It tries to synchronize them several times, then gives up. One more thing: both systems A and B must tolerate duplicates from one synchronizing process. Say at step 3 system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process, which will send the id.A to B a second time. In this case system B must simply send the already-created id.A+id.B pair back again without errors. That is what "tolerate duplicates" means. Fourth draft The next draft was created purely for aesthetics. As always happens, the aesthetics gave a significant performance gain to the whole system. First came the stupid question: why do we need this additional service with its special database? Can we just get BizTalk to do what this Resync() does? So the Resync orchestration does the same thing as the Resync service. It is started by the Id.A message and finished by the id.A+id.B message. The first works as a Start message, the second as a Finish message. Here is a diagram of the whole process without errors. It is pretty straightforward. The Resync orchestration waits for the Finish message for a specific period of time, then resubmits the Id.A message. It resubmits the Id.A message a specific number of times, then gives up and gets suspended. It can be resubmitted, and then it starts the whole process again: waiting [, resubmitting [, getting suspended]], finishing. Tuning up The Resync orchestration resubmits the id.A message with a special "Resubmitted" flag. The subscription filter on the Resync orchestration includes a predicate (Resubmit_Flag != "Resubmitted"). That means only the first Sync orchestration starts the Resync orchestration. The other Sync orchestrations instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync. Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B returned the id.A+id.B response and this finished the Resync execution. What is interesting is that several identical id.A messages were submitted and only one id.A+id.B message. Because of this, system B and the Resync must tolerate duplicate messages. We already mentioned this requirement for system B; now the same requirement applies to the Resync. Let's assume system B was very slow with its first response and the Resync service had time to resubmit two id.A messages. System B responded not, as in the previous case, with one id.A+id.B message but with two. The first of them finished the Resync execution for that id.A. What about the second id.A+id.B? Where does it go? So we have to add one more internal requirement: the whole solution must tolerate many identical id.A+id.B messages. That is an easy task with BizTalk. I added a "SinkExtraMessages" subscriber (an orchestration with one receive shape) that just receives these messages and does nothing. Real design The real architecture is much more complex and interesting. In reality each system can submit several id.A messages almost simultaneously and completely unordered. There is not only the "Create entity" operation but also the Update and Delete operations, and these operations relate to each other. Say, an Update operation after a Delete does not mean the same thing as an Update after a Create. In reality there are entities related to each other, say the Order and the Order Items; a change to one of them can start a series of operations on the other. Moreover, the system internals are "black boxes" and we cannot predict the exact content and order of the operation series. It is worth saying that I had to spend time managing zombie message problems. The zombies are still here, but they are not a problem now. And that is another story. What is interesting in the last design? One orchestration works to help another be more reliable. Why is a two-orchestration design more reliable; isn't that a bit strange? The Sync orchestration handles all the message exchange between the systems, which is where most of the errors can happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure: all the Resync functionality could be implemented inside the Sync orchestration. Hey guys, any other ideas?
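
    To make the Resync bookkeeping concrete, here is an illustrative Python sketch of the record store and retry scan described above (not the author's BizTalk artifacts; the function names and the resubmit callback are invented for the illustration).

      import time

      RETRY_LIMIT = 3
      records = {}  # keyed by id_a

      def save_start(id_a, entity_type, operation):
          # Step 2: BizTalk hands id.A to the Resync store.
          records[id_a] = {"id_b": None, "type": entity_type, "op": operation,
                           "started": time.time(), "finished": False, "retries": 0}

      def save_finish(id_a, id_b):
          # Step 4: the id.A+id.B pair closes the record.
          rec = records.get(id_a)
          if rec is not None:          # tolerate duplicate id.A+id.B messages
              rec["id_b"] = id_b
              rec["finished"] = True

      def resync(resubmit, timeout=60):
          # Periodic scan: resubmit unfinished records, give up after RETRY_LIMIT tries.
          now = time.time()
          for id_a, rec in records.items():
              if rec["finished"] or now - rec["started"] < timeout:
                  continue
              if rec["retries"] >= RETRY_LIMIT:
                  continue             # "suspended": needs manual resubmission
              rec["retries"] += 1
              rec["started"] = now
              resubmit(id_a, rec["type"], rec["op"])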

    Read the article

  • BPM PS6 video showing process lifecycle in more detail (30min) by Mark Nelson

    - by JuergenKress
    If the five-minute video I shared last week has whetted your appetite for more, then this might be just what you are looking for! The same international team that made that video - Andrew Dorman, Tanya Williams, Carlos Casares, Joakim Suarez and James Calise - has also created a thirty-minute version that walks through in much more detail and shows you, from the perspective of various business stakeholders involved in process modeling, exactly how BPM PS6 supports the end-to-end process lifecycle. The video centres around a Retail Leasing use case, and follows how Joakim the Business Analyst, Pablo the Process Owner, and James the Process Analyst take the process from conception to runtime, solely through BPM Composer, without the need for IT or the use of JDeveloper. Joakim, the Business Analyst, models the process, designs the user interaction forms, and creates business rules; Pablo, the Process Owner, reviews the process documentation and tests the process using the new 'Process Player'; James, the Process Analyst, analyses the process and identifies potential bottlenecks using 'Process Simulation'. Read the full article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: BPM PS6,BPM,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Why is the System process listening on Port 80?

    - by Seth Spearman
    I am running Windows 7 RC1. I have multiple issues getting IIS to work on my system, and today, when I installed a new application and tried to load it using http://localhost/MyApplication, I get absolutely no errors and no page load. Just a pretty, white blank page. I did some digging and found something about some other process listening on port 80, so I did a scan using netstat -aon | findstr 0.0:80 and discovered that PID 4 was listening on that port. PID 4 does not show in Task Manager, so I fired up Process Explorer and it showed me that PID 4 is the System process. (Multiple Google searches seem to indicate that System always uses PID 4.) Since then I am basically stuck. I have no idea why System needs port 80 or what to do about it. If you google the following strings you will find two helpful Experts-Exchange articles at the top of the search results and you can read them for some helpful information. (If I gave the direct URL to the pages then Experts-Exchange would ask you to pay... but when you click on the results from a Google search you can scroll all the way to the bottom to read the exchanges.) Here are the Google searches... "System Process is listening on port 80 (Vista)" "SYSTEM Process is listening on Port 80 and Preventing IIS Default Website from Running" The last entry from the first result showed how to do a trace of http.sys at the following URL: http://blogs.msdn.com/wndp/archive/2007/01/18/event-tracing-in-http-sys-part-1-capturing-a-trace.aspx The trace showed nothing useful. Any thoughts?
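
    PID 4 is the kernel System process; it shows up as the listener because http.sys, the kernel-mode HTTP listener, binds port 80 on behalf of whatever has registered URLs with it (IIS, WCF self-hosts, SQL Server Reporting Services, and so on). Listing those registrations with netsh usually reveals the culprit; the sketch below simply shells out to it from Python (an illustration only, and it may need an elevated prompt).

      import subprocess

      # Show the URL registrations and server sessions held by http.sys.
      state = subprocess.run(["netsh", "http", "show", "servicestate"],
                             capture_output=True, text=True)
      print(state.stdout)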

    Read the article

  • Throttle CPU Usage consumed by Process

    - by Brett Powell
    We run a game-server company where we basically have large numbers of customers sharing a single machine, each on their own instance of a Java process (Minecraft) managed by our web control panels. In the last few game updates released, we have noticed that many of the third-party plugins our customers use have become poorly written, and we are frequently seeing huge CPU increases from certain servers until we manually kill the process. Our game panel automatically restarts processes, so killing them is not really an issue. Our problem is that once one of these servers starts consuming 50%+ CPU usage, it takes at least 5 minutes to RDP into the machine, locate who it belongs to, shut it down and notify them. Are there any current solutions for Server 2008 which allow throttling of CPU usage or, worst case, just auto-kill a process stuck using that much? As Minecraft is essentially a single-threaded application, we have investigated using affinity, although with the variations in our packages and fluctuations in usage, this doesn't work well for us. Some option to throttle the maximum CPU a process can use would be perfect, or at least the option to kill a process using that much. Thanks!
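
    Windows System Resource Manager, where the Server 2008 edition includes it, can cap CPU per process or per user and may be the built-in answer. As a lighter-weight stopgap, a watchdog along these lines could auto-kill offenders and let the panel restart them; psutil, the java.exe name and the 50% threshold are assumptions, and a real version would map each process back to its owner via the command line.

      import psutil  # assumes the psutil package is installed

      CPU_LIMIT = 50.0  # percent; placeholder threshold

      def kill_runaway_java():
          for proc in psutil.process_iter(["name"]):
              try:
                  if proc.info["name"] != "java.exe":
                      continue
                  # cpu_percent blocks for the sampling interval on each candidate.
                  if proc.cpu_percent(interval=1.0) > CPU_LIMIT:
                      proc.kill()  # the panel's auto-restart brings the server back
              except (psutil.NoSuchProcess, psutil.AccessDenied):
                  continue

      if __name__ == "__main__":
          kill_runaway_java()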

    Read the article

  • kill process but fail

    - by Tim
    Hi, I am running a bash script as a background job. The bash script calls a time-consuming executable. If I am not wrong, the bash script is the parent process and the executable is the child process. I now want to stop the whole thing by killing the parent process, which is the background job: kill -9 $(jobs -p) The terminal shows that the bash script is killed, but the executable still shows up in the output of top. I just wonder how to also kill the child process? Thanks and regards!
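
    One likely explanation: kill -9 $(jobs -p) signals only the script's pid, while the executable it spawned is a separate process in the same process group. Signalling the whole group takes both down. A minimal Python sketch (the pid is a placeholder; the shell equivalent is kill with the negated group id):

      import os
      import signal

      parent_pid = 12345                 # placeholder: the background job's pid
      pgid = os.getpgid(parent_pid)      # the shell makes each job its own group
      os.killpg(pgid, signal.SIGTERM)    # signals the script and its child executable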

    Read the article

  • Process limit for user in Linux

    - by BrainCore
    This is the standard question, "How do I set a process limit for a user account in Linux to prevent fork-bombing," with an additional twist. The running program originates as a root-owned Python process, which then setuids/setgids itself as a regular user. As far as I know, at this point, any limits set in /etc/security/limits.conf do not apply; the setuid-ed process may now fork bomb. Any ideas how to prevent this?
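
    Since /etc/security/limits.conf is applied by pam_limits at login, a process that starts as root and then setuids never picks those limits up. One option, sketched below with placeholder uid/gid and numbers, is for the Python process to cap itself with setrlimit when it drops privileges; RLIMIT_NPROC is counted per real user id, so it also bounds whatever the process forks.

      import os
      import resource

      def drop_privileges(uid, gid, max_procs=200):
          os.setgid(gid)   # group first, while still root
          os.setuid(uid)
          # limits.conf will not apply here, so set the cap in-process;
          # lowering a limit is always allowed, even without privileges.
          resource.setrlimit(resource.RLIMIT_NPROC, (max_procs, max_procs))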

    Read the article

  • "pull" process/job into the background

    - by Mustafa Ismail Mustafa
    I know about starting a command in the background by terminating it with &, or moving a running one into the background by pressing Ctrl-Z and then bg [pid], and I also know of nohup. But say you started a process that turned out to take much longer than expected: is there a way of pulling, so to speak, this process into the background from another terminal screen, so that even if I log off from the server the process would continue?

    Read the article

  • Logging off does not kill process in Windows Server 2003

    - by Suraj Chandran
    I have Windows Server 2003 (Enterprise, SP2). My understanding was that any process created by a user will be terminated when the user logs off the account, but that is not happening. I log in via the Administrator account, start a simple Java process and log off, but the process is not killed. Is there any configuration for this or something? I am mostly a software programmer and not much into servers, so I am stuck. I found out that while logging off, 1) Win32 is supposed to send a CTRL_LOGOFF_EVENT to all processes started by that user, and 2) the JVM is supposed to handle this event and terminate the VM. But I can't understand why my Java process is not killed when I log off. Any ideas?

    Read the article

  • Show full process name in top

    - by Ben K.
    I'm running a Rails stack on Ubuntu. When I run ps -AF, I get a descriptive process name set by the Apache module, like 00:00:43 Rails: /var/www..., which is really helpful in diagnosing load issues. But when I run top, the same processes show up simply as ruby. Is there any way to get the ps -AF process name in top?
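
    Within top itself, pressing c (or starting it as top -c) switches the COMMAND column to the full command line, which is where the Rails: /var/www... title set by the Apache module appears. For a scriptable view, a small sketch using psutil (assumed installed):

      import psutil  # assumes the psutil package

      # Print the pid and full command line of every ruby process.
      for proc in psutil.process_iter(["pid", "name", "cmdline"]):
          if proc.info["name"] == "ruby":
              print(proc.info["pid"], " ".join(proc.info["cmdline"] or []))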

    Read the article

  • USB device in dual mode on Gentoo Linux

    - by Idlecool
    I have a flip-flop USB modem which has two modes. 1. USB mass storage mode: root@devbox:/media/F872F0FD72F0C184/Users/idlecool/Downloads# lsusb Bus 006 Device 003: ID 19d2:fff5 ONDA Communication S.p.A. 2. usbserial mode: root@devbox:/media/F872F0FD72F0C184/Users/idlecool/Downloads# lsusb Bus 006 Device 003: ID 19d2:fffe ONDA Communication S.p.A. By default, whenever I plug the modem into the USB port, the Linux machine recognizes it as a USB mass storage device. How can I make it load as a usbserial device? I have used the usb_modeswitch package in the past on Ubuntu 10.04, but I cannot install the same package on the Gentoo live CD; even udev is not installed on the live CD. How do I change the product id of the USB device on the Gentoo live disc without udev?

    Read the article

  • Can a process be frozen temporarily in Linux?

    - by Pal Szasz
    I was wondering if there is a way to freeze any process for a certain amount of time. What I mean is: is it possible for one application (probably running as root) to pause the execution of another already running process (any process, both GUI and command line) and then resume it later? In other words, I don't want certain processes to be scheduled by the Linux scheduler for a certain amount of time.
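
    This is what the SIGSTOP/SIGCONT pair does: a stopped process is simply never scheduled until it is continued, and SIGSTOP cannot be caught or ignored. A minimal sketch (the pid is a placeholder; root is only needed for processes owned by other users):

      import os
      import signal
      import time

      pid = 12345                      # placeholder: pid of the process to freeze
      os.kill(pid, signal.SIGSTOP)     # remove it from scheduling
      time.sleep(30)                   # keep it frozen for 30 seconds
      os.kill(pid, signal.SIGCONT)     # resume execution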

    Read the article

  • Can't kill process on Windows Server 2008 - thread in Wait:Executive state

    - by adrian
    I hope someone can help me with the issue we are having. We have a major problem with a process that we cannot kill, and the only way to get rid of it is to reboot the machine. I have tried killing it from the normal Task Manager, but no joy. I have tried killing it using the taskkill /F command from a command prompt, and no joy; the command reports success but the process remains. I have tried starting Task Manager with system rights by calling "psexec -s -i -d taskmgr" and attempting to kill the process, but no joy. I have tried killing it from Process Explorer, but again the process remains. I have tried creating a scheduled task that runs under the SYSTEM account to kill the task, but that also does not kill it: schtasks /create /ru system /sc once /st 13:16 /tn test1 /tr "taskkill /F /PID 1576" /it Nothing I do will kill this process. Even logging off and logging back on will not kill it. Using Process Explorer I notice that there is one stubborn thread that is in the Wait:Executive state. I have tried to kill this thread using Process Explorer, but again no joy. We are using Windows Server 2008 R2 64-bit. The server is brand new and Windows is freshly installed. Now here's the thing: we have bought two identical servers from Dell with the same specs and the same OS installed, and I cannot replicate this issue on the other server. Only on this server, under certain circumstances, does this process hang and refuse to be restarted. I have also changed the compatibility mode by setting the process to "Windows 2003", but this has not helped. I have noticed in Process Explorer that DEP is turned on, but I'm not sure whether this has any bearing on the issue or not. Please, can someone help?

    Read the article

  • Suspicious process running under user "named"

    - by Amit
    I get a lot of emails reporting this and I want this issue to auto correct itself. These process are run by my server and are a result of updates, session deletion and other legitimate session handling reported as false positives. Here's a sample report: Time: Sat Oct 20 00:00:03 2012 -0400 PID: 20077 Account: named Uptime: 326117 seconds Executable: /usr/sbin/nsd\00507d27e9\0053\00\00\00\00\00 (deleted) The file system shows this process is running an executable file that has been deleted. This typically happens when the original file has been replaced by a new file when the application is updated. To prevent this being reported again, restart the process that runs this excecutable file. See csf.conf and the PT_DELETED text for more information about the security implications of processes running deleted executable files. Command Line (often faked in exploits): /usr/sbin/nsd -c /etc/nsd/nsd.conf Network connections by the process (if any): udp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 udp: 127.0.0.1:53 -> 0.0.0.0:0 udp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 tcp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 tcp: 127.0.0.1:53 -> 0.0.0.0:0 tcp: xx.xx.xxx.xx:53 -> 0.0.0.0:0 Files open by the process (if any): /dev/null /dev/null /dev/null Memory maps by the process (if any): 0045e000-00479000 r-xp 00000000 fd:00 2582025 /lib/ld-2.5.so 00479000-0047a000 r--p 0001a000 fd:00 2582025 /lib/ld-2.5.so 0047a000-0047b000 rw-p 0001b000 fd:00 2582025 /lib/ld-2.5.so 0047d000-005d5000 r-xp 00000000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so 005d5000-005d7000 r--p 00157000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so 005d7000-005d8000 rw-p 00159000 fd:00 2582073 /lib/i686/nosegneg/libc-2.5.so 005d8000-005db000 rw-p 005d8000 00:00 0 005dd000-005e0000 r-xp 00000000 fd:00 2582087 /lib/libdl-2.5.so 005e0000-005e1000 r--p 00002000 fd:00 2582087 /lib/libdl-2.5.so 005e1000-005e2000 rw-p 00003000 fd:00 2582087 /lib/libdl-2.5.so 0062b000-0063d000 r-xp 00000000 fd:00 2582079 /lib/libz.so.1.2.3 0063d000-0063e000 rw-p 00011000 fd:00 2582079 /lib/libz.so.1.2.3 00855000-0085f000 r-xp 00000000 fd:00 2582022 /lib/libnss_files-2.5.so 0085f000-00860000 r--p 00009000 fd:00 2582022 /lib/libnss_files-2.5.so 00860000-00861000 rw-p 0000a000 fd:00 2582022 /lib/libnss_files-2.5.so 00ac0000-00bea000 r-xp 00000000 fd:00 2582166 /lib/libcrypto.so.0.9.8e 00bea000-00bfe000 rw-p 00129000 fd:00 2582166 /lib/libcrypto.so.0.9.8e 00bfe000-00c01000 rw-p 00bfe000 00:00 0 00e68000-00e69000 r-xp 00e68000 00:00 0 [vdso] 08048000-08074000 r-xp 00000000 fd:00 927261 /usr/sbin/nsd 08074000-08079000 rw-p 0002b000 fd:00 927261 /usr/sbin/nsd 08079000-0808c000 rw-p 08079000 00:00 0 08a20000-08a67000 rw-p 08a20000 00:00 0 b7f8d000-b7ff2000 rw-p b7f8d000 00:00 0 b7ffd000-b7ffe000 rw-p b7ffd000 00:00 0 bfa6d000-bfa91000 rw-p bffda000 00:00 0 [stack] Would /etc/nsd/restart or kill -1 20077 solve the problem?
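
    The report means the nsd binary on disk was replaced (typically by a package update) after the daemon started, so the running process still maps the old, now-deleted file. Restarting nsd clears the warning because the daemon re-executes the new binary; a kill -1 reload does not replace the in-memory executable, so the report would likely keep coming. A small sketch for listing such processes yourself (illustrative, not part of csf; run as root to see other users' processes):

      import os

      # List processes whose on-disk executable has been deleted or replaced.
      for pid in filter(str.isdigit, os.listdir("/proc")):
          try:
              exe = os.readlink("/proc/%s/exe" % pid)
          except OSError:
              continue
          if exe.endswith(" (deleted)"):
              print(pid, exe)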

    Read the article

  • Determine process using a port, without sudo

    - by pat
    I'd like to find out which process (in particular, the process id) is using a given port. The one catch is, I don't want to use sudo, nor am I logged in as root. The processes I want this to work for are run by the same user that is trying to find the process id, so I would have thought this was simple. Both lsof and netstat won't tell me the process id unless I run them using sudo; they will tell me that the port is in use, though. As some extra context: I have various apps all connecting via SSH to a server I manage, and creating reverse port forwards. Once those are set up, my server does some processing using the forwarded port, and then the connection can be killed. If I can map specific ports (each app has its own) to processes, this is a simple script. Any suggestions? This is on an Ubuntu box, by the way, but I'm guessing any solution will be standard across most Linux distros.
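
    Because the listening processes belong to the same user, no root is needed: a process may always inspect its own sockets, so iterating over your own processes and asking each one for its connections is enough. A sketch using psutil (assumed installed; the port number is a placeholder):

      import getpass
      import psutil  # assumes the psutil package

      def owner_of_port(port):
          me = getpass.getuser()
          for proc in psutil.process_iter(["pid", "name", "username"]):
              if proc.info["username"] != me:
                  continue
              try:
                  for conn in proc.connections(kind="inet"):
                      if conn.laddr and conn.laddr.port == port:
                          return proc.info["pid"], proc.info["name"]
              except (psutil.AccessDenied, psutil.NoSuchProcess):
                  continue
          return None

      print(owner_of_port(2222))  # placeholder port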

    Read the article