Search Results

Search found 24301 results on 973 pages for 'execution process mfg'.


  • Start a thread in a different process in Java

    - by kolm
    Hi there, is it possible to start a new thread in a different process in Java? I mean, I'm running a specific process and main thread, and I use ProcessBuilder to create a new process. Before the start() method is invoked, one must provide the command to be run in the other process. Is it possible to start a new thread in the newly created process? Thank you in advance for the reply. Best regards.

    Read the article

  • change process window style

    - by Suriyan Suresh
    I start IE as a process and then I would like to change the following properties of that process: remove the title bar and toolbar (if IE), set the top/left location and size through C#, and prevent the process from minimizing. I have used the following code but had no luck (I find the handle of the process and then pass it to the function below):

        public void SetFormOnDesktop(int hwnd)
        {
            int hwndf = hwnd;
            IntPtr hwndParent = FindWindow("ProgMan", null);
            SetParent(hwndf, hwndParent);
        }
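
    Not from the thread itself, but for reference the usual Win32 route is to strip window styles rather than re-parent the window to the desktop. A rough C# sketch, assuming the handle can be taken from Process.MainWindowHandle once IE is up (the executable name, coordinates and size are made up, and whether IE honours this depends on the IE version and its frame/tab process model):

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        class WindowRestyler
        {
            const int GWL_STYLE = -16;
            const int WS_CAPTION = 0x00C00000;      // title bar
            const int WS_THICKFRAME = 0x00040000;   // resizable border
            const uint SWP_NOZORDER = 0x0004;
            const uint SWP_FRAMECHANGED = 0x0020;

            [DllImport("user32.dll")]
            static extern int GetWindowLong(IntPtr hWnd, int nIndex);

            [DllImport("user32.dll")]
            static extern int SetWindowLong(IntPtr hWnd, int nIndex, int dwNewLong);

            [DllImport("user32.dll")]
            static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter,
                int x, int y, int cx, int cy, uint uFlags);

            static void Main()
            {
                // Start IE and wait until its main window handle is available.
                Process ie = Process.Start("iexplore.exe", "about:blank");
                ie.WaitForInputIdle();
                while (ie.MainWindowHandle == IntPtr.Zero)
                {
                    System.Threading.Thread.Sleep(100);
                    ie.Refresh();
                }
                IntPtr hwnd = ie.MainWindowHandle;

                // Remove the caption (title bar) and sizing frame, then move/resize.
                int style = GetWindowLong(hwnd, GWL_STYLE);
                SetWindowLong(hwnd, GWL_STYLE, style & ~WS_CAPTION & ~WS_THICKFRAME);
                SetWindowPos(hwnd, IntPtr.Zero, 100, 50, 800, 600,
                    SWP_NOZORDER | SWP_FRAMECHANGED);
            }
        }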

    Read the article

  • How to get an instance of System.Diagnostics.Process by process ID on a remote machine

    - by Tomas1
    Hi all, I want to run and control a process remotely. I found the best way is the WMI APIs. WMI gives me information about the remote process, but I need more control, like waiting for it and getting the standard output and errors. How can I do that, and can I get an instance of the System.Diagnostics.Process class by process ID remotely? Note: I tried to get an instance of the Process by calling Process.GetProcessById and passing the machineName parameter, but an exception was thrown. Thanks in advance.
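
    Not the poster's answer, but a rough sketch of the WMI route for the launch-and-wait half of the problem (the machine name, command and log path below are made up). Note that Win32_Process.Create gives you a PID and a return code, but no stdout/stderr stream and no WaitForExit, so the sketch redirects output on the remote side and polls for exit:

        using System;
        using System.Management;
        using System.Threading;

        class RemoteProcessLauncher
        {
            static void Main()
            {
                // Placeholder machine name; WMI cannot stream stdout/stderr back, so the
                // command redirects its own output to a file on the remote machine.
                string machine = "REMOTE01";

                ManagementScope scope = new ManagementScope(@"\\" + machine + @"\root\cimv2");
                scope.Connect();

                ManagementClass processClass = new ManagementClass(
                    scope, new ManagementPath("Win32_Process"), null);
                ManagementBaseObject inParams = processClass.GetMethodParameters("Create");
                inParams["CommandLine"] = @"cmd /c mytool.exe > C:\temp\mytool.log 2>&1";

                ManagementBaseObject outParams = processClass.InvokeMethod("Create", inParams, null);
                uint pid = (uint)outParams["ProcessId"];
                Console.WriteLine("Started remote process, PID {0}", pid);

                // There is no WaitForExit over WMI, so poll until the PID disappears.
                WqlObjectQuery query = new WqlObjectQuery(
                    "SELECT * FROM Win32_Process WHERE ProcessId = " + pid);
                using (ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query))
                {
                    while (searcher.Get().Count > 0)
                        Thread.Sleep(1000);
                }
                Console.WriteLine("Remote process has exited.");
            }
        }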

    Read the article

  • How to Use USER_DEFINED Activity in OWB Process Flow

    - by Jinggen He
    Process Flow is a very important component of Oracle Warehouse Builder. With Process Flow, we can create and control the ETL process by setting all kinds of activities in a well-constructed flow. In Oracle Warehouse Builder 11gR2 there are 28 kinds of activities, which fall into three categories: Control activities, OWB-specific activities and Utility activities. For more information about Process Flow activities, please refer to the OWB online doc. Most of those activities are pre-defined for some specific use. For example, the Mapping activity allows execution of an OWB mapping in a Process Flow, and the FTP activity allows an interaction between the local host and a remote FTP server. Besides those activities for specific purposes, the User Defined activity enables you to incorporate into a Process Flow an activity that is not defined within Warehouse Builder. So the User Defined activity brings flexibility and extensibility to Process Flow. In this article, we will take an amazing tour of using the User Defined activity. Let's start.

    Enable execution of User Defined activity

    Let's start this section by creating a very simple Process Flow, which contains a Start activity, a User Defined activity and an End Success activity. Leave all parameters of the USER_DEFINED activity unchanged, except that we enter /tmp/test.sh into the Value column of the COMMAND parameter. Then let's create the shell script test.sh in the /tmp directory. Here is the content of /tmp/test.sh (this article demonstrates a scenario on a Linux system, and /tmp/test.sh is a Bash shell script):

        echo Hello World! > /tmp/test.txt

    Note: don't forget to grant the execution privilege on /tmp/test.sh to the OS Oracle user. For simplicity, we just use the following command:

        chmod +x /tmp/test.sh

    OK, it's so simple that we've almost done it. Now deploy the Process Flow and run it. For a newly installed OWB, we will come across an error saying "RPE-02248: For security reasons, activity operator Shell has been disabled by the DBA". See below. That's because, by default, the User Defined activity is DISABLED. The configuration for this can be found in <ORACLE_HOME>/owb/bin/admin/Runtime.properties:

        property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=DISABLED

    The property can be set to three different values: NATIVE_JAVA, SCHEDULER and DISABLED. NATIVE_JAVA uses the Java 'Runtime.exec' interface; SCHEDULER uses a DBMS Scheduler external job submitted by the Control Center repository owner, which is executed by the default operating system user configured by the DBA; DISABLED prevents execution via these operators. We enable execution of the User Defined activity by setting:

        property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=NATIVE_JAVA

    Restart the Control Center service for the change of setting to take effect:

        cd <ORACLE_HOME>/owb/rtp/sql
        sqlplus OWBSYS/<password of OWBSYS> @stop_service.sql
        sqlplus OWBSYS/<password of OWBSYS> @start_service.sql

    Then run the Process Flow again. We will see that the Process Flow completes successfully. The execution of /tmp/test.sh generated a file /tmp/test.txt containing the line Hello World!.

    Pass parameters to User Defined Activity

    The Process Flow created in the above section has a drawback: the User Defined activity doesn't accept any information from OWB, nor does it give any meaningful results back to OWB. That is to say, it lacks interaction. Sometimes such a Process Flow can fulfill the business requirement, but most of the time we need the User Defined activity to execute according to some information from a prior step. In this section, we will see how to pass parameters to the User Defined activity and pass them into the to-be-executed shell script.

    First, let's see how to pass parameters to the script. The User Defined activity has an input parameter named PARAMETER_LIST. This is a list of parameters that will be passed to the command. Parameters are separated from one another by a token. The token is taken as the first character of the PARAMETER_LIST string, and the string must also end in that token. Warehouse Builder recommends the '?' character, but any character can be used. For example, to pass 'abc', 'def' and 'ghi' you can use any of the following equivalents:

        ?abc?def?ghi?
        !abc!def!ghi!
        |abc|def|ghi|

    If the token character or '\' needs to be included as part of a parameter, then it must be preceded with '\', for example '\\'. If '\' is the token character, then '/' becomes the escape character.

    Let's configure the PARAMETER_LIST parameter as shown in the figure below, and modify the shell script /tmp/test.sh as follows:

        echo $1 is saying hello to $2! > /tmp/test.txt

    Re-deploy the Process Flow and run it. We will see that the generated /tmp/test.txt contains the following line: Bob is saying hello to Alice!

    In the example above, the parameters passed into the shell script are static. This case is not so useful, because instead of passing the parameters we could directly write their values into the shell script. To make the case more meaningful, we can pass two dynamic parameters, obtained from the previous activity, to the shell script. Prepare the Process Flow as below. The Mapping activity MAPPING_1 has two output parameters, FROM_USER and TO_USER, and the User Defined activity has two input parameters, FROM_USER and TO_USER. All four parameters are of String type. Additionally, the Process Flow has two string variables: VARIABLE_FOR_FROM_USER and VARIABLE_FOR_TO_USER. Through VARIABLE_FOR_FROM_USER, the input parameter FROM_USER of USER_DEFINED gets its value from the output parameter FROM_USER of MAPPING_1. We achieve this by binding both parameters to VARIABLE_FOR_FROM_USER. See the two figures below. In the same way, through VARIABLE_FOR_TO_USER, the input parameter TO_USER of USER_DEFINED gets its value from the output parameter TO_USER of MAPPING_1. We also need to change the PARAMETER_LIST of the User Defined activity as shown below. Now the shell script gets its input from the Mapping activity dynamically. Deploy the Process Flow and all of its necessary dependencies, then run the Process Flow. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A! 'USER B' and 'USER A' are two outputs of the Mapping execution.

    Write the shell script within Oracle Warehouse Builder

    In the previous section, the shell script is located in the /tmp directory. But sometimes, when the shell script is small, or for the sake of maintaining consistency, you may want to keep the shell script inside Oracle Warehouse Builder. We can achieve this by configuring three parameters of the User Defined activity properly:

        COMMAND: set the path of the interpreter by which the shell script will be interpreted.
        PARAMETER_LIST: set it blank.
        SCRIPT: enter the shell script content.

    Note that on Linux the shell script content is passed into the interpreter as standard input at runtime. To actually pass parameters to the shell script, we can utilize variable substitutions. As in the following figure, ${FROM_USER} will be replaced by the value of the FROM_USER input parameter of the User Defined activity, and so will the ${TO_USER} symbol. Besides the custom substitution variables, OWB also provides some pre-defined system substitution variables; you can refer to the online documentation for those. Deploy the Process Flow and run it. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A!

    Leverage the return value of User Defined activity

    All of the previous sections connect the User Defined activity to END_SUCCESS with an unconditional transition. But what should we do if we want different subsequent activities for different shell script execution results?

    1. The simplest way is to add three simple-conditioned outgoing transitions to the User Defined activity, just like the figure below. In the figure, to simplify the scenario, we connect the User Defined activity to three End activities. Basically, if the shell script ends successfully, the whole Process Flow will end at END_SUCCESS; otherwise, the whole Process Flow will end at END_ERROR (in our case, ending at END_WARNING seldom happens). In the real world, we can add more complex and meaningful subsequent business logic.

    2. Or we can utilize complex conditions to work with different results of the User Defined activity. Previously our script only had this line:

        echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt

    We can add more logic to it and return different values accordingly:

        echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt
        if CONDITION_1 ; then
            ......
            exit 0
        fi
        if CONDITION_2 ; then
            ......
            exit 2
        fi
        if CONDITION_3 ; then
            ......
            exit 3
        fi

    After that we can leverage the result by checking RESULT_CODE in the condition expression of those outgoing transitions. Let's suppose that we have the Process Flow as in the following graph (SUB_PROCESS_n stands for further, different processes). We can set a complex condition for the transition from USER_DEFINED to SUB_PROCESS_1 as shown below; other transitions can be set in the same way. Note that in our shell script we return 0, 2 and 3, but not 1: on a Linux system, if the shell script comes across a system error like an IO error, the return value will be 1, and we can explicitly handle such a return value.

    Summary

    Let's summarize what has been discussed in this article:
    - How to create a Process Flow with a User Defined activity in it
    - How to pass parameters from the prior activity to the User Defined activity and finally into the shell script
    - How to write the shell script within Oracle Warehouse Builder
    - How to do variable substitutions
    - How to let the User Defined activity return different values and in what ways we can leverage them

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework.

    The scenario is this: I will pass in the userid and the BPEL process will call out to the AS-User Web Service we created in Part 1. This is just a basic test and illustrates how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product, and Oracle JDeveloper (to build the process).

    First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also use the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I give the individual BPEL process the name simpleBPEL (you can use a different name, but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against. Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call, as I will use the basic single field used by BPEL as the default.

    The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now for the tricky part (a joke): you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas, the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name; I have used the name RetrieveUser. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL field. Once you specify the URL you can press the "Find existing WSDLs" button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "copy wsdl and its dependent artifacts into the project" if you are intending to work on the BPEL process offline. If you do not check this, your target application must be accessible whenever you work on the BPEL process (and that is not always convenient). Note: the perceptive among you will notice that the URL specified in this example is different from the URL in the last post. The reason is that for the demonstrations I shifted to a new server and did not redo all of the past screen captures. If you copy the WSDL into the project you will get an information screen about Localize Files; it is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.

    To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that.

    Now to use the Web Service. To call the Web Service (as it is just imported, not yet connected to the BPEL process), you must add an Invoke action to your BPEL process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it on the edit nodes between the receiveInput and replyOutput nodes. This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service. Once the Invoke action is connected to the Web Service, an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straight away, and name them appropriately, so that you can trace the logic; I used InvokeUser as the name in this example. To complete the node configuration you must create variables to hold the input and output for the call. To do this, click Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name; it uses the node name (that is why it is important to name the node before hitting this button) as a prefix. You can name the variable anything, but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service.

    You now have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step, and how to take the output from the service call and pass it back to the client. You need to add an Assign node to assign the input to the Web Service. To do this, select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes, as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double clicking the node allows you to specify the name of the node; I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable (inputVariable/payload/process/input string) and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service, as the user is the primary key for the object. The fields become linked (which means data from the source will be copied to the target).

    Almost there. You now need to process the output from the Web Service call into the outputVariable of the client call. I have decided to pass back one piece of data, the name associated with the user, by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform, as it is not just a matter of an Assign action; it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components, you drag and drop the Transform component to the appropriate place in the BPEL process. In this case we want to transform the output from the Web Service call, so we want it after the InvokeUser action and before the replyOutput action. The Transform component is actually part of the Oracle extensions to the BPEL specification. Double clicking the Transform node allows you to name the node; in this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example the source is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen shows the source and target schemas for me to map across. As with the Assign, I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the mapping line. I then attach the last name to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space, so to make the name legible I add a space between the fields by clicking the function and adding a space in the call. I now have a completed mapping. I can now save the whole project, as my BPEL process is complete.

    As you can see, the following happens:
    - We accept input from the client (the userid for the call) in the receiveInput step.
    - We assign that value to the input parameters for the Web Service call in the AssignUser step.
    - We invoke the Web Service call to retrieve the data from the product in the InvokeUser step.
    - We take the output from the InvokeUser step and concatenate the names in the TransformName step.
    - We pass back the data in the replyOutput step.

    At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (or else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option, then specify some test data in the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the Response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls.

    As you can see this is a basic BPEL process, but you get the idea: importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite; I just showed you the basic principles.

    Read the article

  • How to stop a process in Terminal [closed]

    - by AngryHacker
    Possible Duplicate: Ending a process in unix instead of interrupting it. When I run a task in Terminal, such as ping blah.com, how do I then stop this task (other than by closing the Terminal window)? In Windows, you can Ctrl+Break pretty much any terminal-based process, but I can't figure out the way to do it on the Mac.

    Read the article

  • .NET4: In-Process Side-by-Side Execution Explained

    - by emptyset
    Overview: I'm interested in learning more about the .NET4 "In-Process Side-by-Side Execution" of assemblies, and need additional information to help me demystify it.

    Motivation: The application in question is built against .NET2, and uses two third-party libraries that also work against .NET2. The application is deployed (via file copy) to client machines in a virtual environment that includes .NET2. Not my architecture, please bear with me.

    Goal: To see if it's possible to rebuild the application assemblies (or a subset) against .NET4, and ship the application as before, without changing the third-party libraries, and including the .NET4 Client Profile (as described here) in the deployment.

    Steps Taken: The following articles were read, but didn't quite provide me enough information.
    - In-Process Side-by-Side Execution: I browsed this article, and Scenario Two is the closest it comes to describing something that resembles my situation, but it doesn't really cover it with any depth.
    - ASP.NET Side-by-Side Execution Overview: This article covers a web application, but I'm dealing with a client WinForms application.
    - CLR Team Blog: In-Process Side-by-Side: This is useful to explain how plug-ins to host processes function under .NET4, but I don't know if this applies to the third-party libraries.

    Further Steps: I'm also unclear on how to proceed in upgrading a single .NET2 assembly to .NET4, with the overall application remaining in .NET2 (i.e. how to configure the solution/project files, whether any special code needs to be included, etc.).

    Read the article

  • Attachments in Oracle BPM 11g – Create a BPM Process Instance by passing an Attachment

    - by Venugopal Mangipudi
    Problem Statement: On a recent engagement I had a requirement where we needed to create BPM instances using a message start event. The challenge was that the instance needed to be created after polling a file location and attaching the picked-up file (pdf) as an attachment to the instance.

    Proposed Solution: I was contemplating using the process API to accomplish this, but came up with a solution which involves a BPEL process that picks up the file and sends a notification to the BPM process, passing the attachment as a payload. The following are the brief steps that were used to build the solution.

    BPM process to receive an attachment as part of the payload: The BPM process is a very simple process which has a Message Start event that accepts the attachment as an argument and a simple User Task that the user can use to view the attachment (as part of the OOTB attachment panel). The input payload is based on AttachmentPayload.xsd. The 3 key elements of the payload are:

        <xsd:element name="filename" type="xsd:string"/>
        <xsd:element name="mimetype" type="xsd:string"/>
        <xsd:element name="content" type="xsd:base64Binary"/>

    A screenshot of the Human Task data assignment that needs to be performed to attach the file is provided here. Once the process and the UI project (default generated UI) are deployed to the SOA server, copy the WSDL location of the process service (from EM). This WSDL will be used in the BPEL project to create the instances in the BPM process after a file is polled.

    BPEL process to poll for the file and create instances in the BPM process: For the BPEL process, a File adapter was configured as a Read service (File Streaming option, keeping the schema as Opaque). Once a location and the file pattern to poll are provided, the Read service partner link was wired to invoke the BPEL process. Also, using the BPM process WSDL, we can create the Web service reference and invoke the start operation. Before we do the assignment for the Invoke operation, a global variable should be created to hold the value of the fileName of the file. The mapping to the global variable can be done in the Receive activity properties (jca.file.FileName). So, for the Assign operation before we invoke the BPM process service, we can get the content of the file from the receive input variable and the fileName from the jca.file.FileName property. The mimetype needs to be hard-coded to the MIME type of the file: application/pdf (I am still researching ways to derive the MIME type, as it is not available as part of the jca.file properties). The screenshot of the BPEL process can be found here and the Assign activity can be found here.

    The project source can be found at the following location. A sample pdf file to test the project and a screenshot of the BPM Human Task screen after the successful creation of the instance can be found here.

    References: [1] https://blogs.oracle.com/fmwinaction/entry/oracle_bpm_adding_an_attachment

    Read the article

  • How can you start a process from asp.net without interfering with the website?

    - by Sem Dendoncker
    Hi, we have an ASP.NET application that is able to create .air files. To do this we use the following code:

        System.Diagnostics.Process process = new System.Diagnostics.Process();
        //process.StartInfo.FileName = strBatchFile;
        if (File.Exists(@"C:\Program Files\Java\jre6\bin\java.exe"))
        {
            process.StartInfo.FileName = @"C:\Program Files\Java\jre6\bin\java.exe";
        }
        else
        {
            process.StartInfo.FileName = @"C:\Program Files (x86)\Java\jre6\bin\java.exe";
        }
        process.StartInfo.Arguments = GetArguments();
        process.StartInfo.RedirectStandardOutput = true;
        process.StartInfo.RedirectStandardError = true;
        process.StartInfo.UseShellExecute = false;
        process.Start();
        // Note: PriorityClass can only be set on a process that has already been
        // started, so this assignment has to come after Start(), not before it.
        process.PriorityClass = ProcessPriorityClass.Idle;
        string strOutput = process.StandardOutput.ReadToEnd();
        string strError = process.StandardError.ReadToEnd();
        HttpContext.Current.Response.Write(strOutput + "<p>" + strError + "</p>");
        process.WaitForExit();

    Well, the problem now is that sometimes the CPU of the server reaches 100%, causing the application to run very slowly and even lose sessions (we think this is the problem). Is there any other solution for generating air files, or for running an external process without interfering with the ASP.NET application? Cheers, M.

    Read the article

  • System Idle Process network traffic? - Updated

    - by Moab
    I was using NetBalancer and noticed network traffic on an unidentified service, but when I highlight it and then go to the lower center pane and click the parent process, it says it is the System Idle Process, and it is showing incoming and outgoing traffic in the upper pane. Does anyone know why this Windows System Idle Process is talking on the network? Windows 7 HP 64-bit.

    Edit: after blocking the traffic for that unidentified service, I checked my Event Viewer (Windows Logs > System) and found 3 new events that were never recorded before and matched the time I blocked the traffic. So is this part of the Windows local DNS cache?

    Event ID 1014, DNS Client Events:
    - Name resolution for the name dns.msftncsi.com timed out after none of the configured DNS servers responded. (dns.msftncsi.com)
    - Name resolution for the name wpad.home timed out after none of the configured DNS servers responded. (wpad)
    - Name resolution for the name mscrl.microsoft.com timed out after none of the configured DNS servers responded. (mscrl.microsoft.com)

    Then my web browser refused to work. I re-enabled the traffic and all returned to normal.

    Read the article

  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I am running a couple of Django apps in production. Ideally, I would have tried to address my current memory footprint issues by optimizing the apps, adding more RAM or augmenting with swap. But the problem is that I doubt there's much memory optimization I'd milk from optimizing the Django apps (the stack being open source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer options to use swap!

    So, in the meantime (as I wait to secure more resources to afford more RAM), I wish to mitigate the scenarios where the server runs out of memory to the point that I have to request a VPS restart (as in, at that point, I can't even SSH into the box!). What I would love in a solution is the ability to detect when a process (or, generally, total system memory usage) exceeds a certain critical amount (for now, say when free RAM falls to 10%) - which I've noticed occurs after the VPS has been up for long, and when traffic is suddenly heavy on some of the apps (most are just staging apps anyway). At that point I wish to kill/restart the offending process(es) - most likely Apache. Doing this manually in these situations has restored sane memory usage levels - a hint that possibly one or more of the Django apps has a memory leak.

    In brief:
    - Monitor overall system RAM usage.
    - When free RAM falls below a given critical threshold (say below 10%), kill/restart the offending process(es) - or, simpler, if we assume from my current log analysis (using linux-dash) that Apache is often the offender, kill/restart it.
    - Rinse and repeat...

    Read the article

  • iis7 large worker process request queue creating process blocking aspnet.config & machine.config amended (bottleneck)

    - by scott_lotus
    ASP.NET 2.0 app, .NET 2.0 framework, IIS7.

    I am seeing a large queue of requests appear under the "Worker Processes" option. The states recorded appear to be Authenticate Request and Execute Request Handler more than anything else.

    I have amended aspnet.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727 (32-bit path and 64-bit path) to include:

        maxConcurrentRequestsPerCPU="50000" maxConcurrentThreadsPerCPU="0" requestQueueLimit="50000"

    I have amended machine.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG (32-bit and 64-bit path) to include:

        autoConfig="true" maxIoThreads="100" maxWorkerThreads="100" minIoThreads="50" minWorkerThreads="50" minFreeThreads="176" minLocalRequestFreeThreads="152"

    Still I get the issue. The issue manifests itself as a large number of requests in the worker process queue. The number of current connections to the website displays 500 when this issue occurs; I don't think I have seen concurrent connections over 500 without this issue occurring. The web application slows as the requests block. Refreshing the app pool resolves it for a while (as expected) as the load is spread between the two pools. The application pool in question has fixed-request recycling set to 50000.

    Thank you for any help. Scott

    Quick edit to say: hmm, my developers are telling me the project was built with the .NET 3.5 framework. Looking at C:\Windows\Microsoft.NET\Framework64\v3.5 there does not appear to be an ASPNET.CONFIG or a MACHINE.CONFIG... is there a 3.5 equivalent? After a little searching: apparently 3.5 uses the 2.0 framework files, which is why 3.5 is missing them. So back to the original question: where is my bottleneck?

    Read the article

  • How can the Private Bytes of a process be significantly less than its effect on the system commit charge?

    - by bacar
    On a 64-bit Windows Server 2003, I can see using taskmgr or Process Explorer that the total commit charge is around 3.5GB, yet when I sum the Private Bytes consumed by each process (by running pslist -m and adding all values under the Priv column) the total comes in at 1.6GB. I know which process seems to be causing this (sqlservr.exe), as when I kill the process the commit charge drops dramatically. However, the process in question is consuming only ~220MB of Private Bytes, yet killing it drops the commit charge by ~1.6GB. How is this possible? How can the commit charge be so significantly greater than Private Bytes, which should represent the amount of committed memory? If some other factor contributes to the commit charge, what is that factor, and how can I view its impact in Process Explorer?

    Note: I claim that I understand the difference between reserved and committed memory already. My investigations above relate specifically to Private Bytes, which includes only committed memory and excludes reserved memory. The Virtual Size of the process in this case is over 4GB, but this should be irrelevant: Virtual Size in procexp represents reserved, not committed, memory and should not contribute to the commit charge. I'm particularly interested in generalised answers to this question: I'm assuming that if sqlservr.exe can behave in this way, any process potentially could.

    Further Investigations: I notice that pointing Sysinternals VMMap at this process reports a committed "Private Data" of 1.6GB, despite procexp's reported Private Bytes of 220MB. This is particularly strange given that the documentation for this field in the "Windows® Sysinternals Administrator's Reference" states that: "Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the Heap Manager or the .NET runtime, or assigned to the Stack category... VMMap's definition of 'Private Data' is more granular than that of Process Explorer's 'private bytes.' Procexp's 'private bytes' includes all private committed memory belonging to the process." That is, VMMap's committed "Private Data" should be smaller than procexp's "Private Bytes".

    Also, after reading the 'Process committed memory' section of Mark Russinovich's excellent Pushing the Limits of Windows: Virtual Memory, he highlights two cases which won't show up in Private Bytes: file mapping views with copy-on-write semantics (however, according to VMMap there is no significant space allocated to Mapped Files), and pagefile-backed virtual memory (however, I tried testlimit with the -l flag as suggested, and no significant memory is consumed by pagefile-backed sections).
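
    As a generic illustration (not specific to SQL Server's actual behaviour here) of how a process can raise the system commit charge without raising its own Private Bytes, the sketch below creates a large pagefile-backed section, one of the mechanisms Russinovich describes. It assumes .NET 4's MemoryMappedFile API and an arbitrary section name; watch the system commit charge in Task Manager while it waits:

        using System;
        using System.Diagnostics;
        using System.IO.MemoryMappedFiles;

        class CommitVsPrivateBytes
        {
            static void Main()
            {
                Process me = Process.GetCurrentProcess();
                Console.WriteLine("Private Bytes before: {0:N0}", me.PrivateMemorySize64);

                // A pagefile-backed section: its capacity is charged against the
                // system commit limit when it is created, but the memory is
                // shareable rather than private, so this process's Private Bytes
                // barely move.
                using (MemoryMappedFile.CreateNew("demo-section", 1L << 30)) // 1 GB
                {
                    me.Refresh();
                    Console.WriteLine("Private Bytes after:  {0:N0}", me.PrivateMemorySize64);
                    Console.WriteLine("Check the commit charge in Task Manager, then press Enter.");
                    Console.ReadLine();
                }
            }
        }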

    Read the article

  • Get process id of process started with CreateObject in .NET

    - by Lex
    Hi! I'm using VB.NET for a web application that starts a process using CreateObject, like this: Dim AVObject = CreateObject("avwin.application"). After all is done, everything gets closed down and stopped using the proper release functions; however, for some reason the process remains. Is there some way I can get the process ID of the started process, in order to explicitly kill it just before termination? Thanks
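
    Not from the thread, but one common workaround, sketched here in C# (the same Process APIs exist in VB.NET): snapshot the PIDs of the COM server's process name just before creating the object, and diff afterwards. The executable name "avwin" is an assumption for illustration; use whatever process name the COM server actually runs under.

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        class ComProcessTracker
        {
            static void Main()
            {
                // Assumed executable name of the COM server behind "avwin.application".
                const string serverName = "avwin";

                // Snapshot the PIDs that already exist before the COM object is created.
                HashSet<int> before = new HashSet<int>();
                foreach (Process p in Process.GetProcessesByName(serverName))
                    before.Add(p.Id);

                // Equivalent of CreateObject("avwin.application") in VB.NET.
                Type comType = Type.GetTypeFromProgID("avwin.application");
                object avObject = Activator.CreateInstance(comType);

                // Any PID that appeared after the call is very likely our instance.
                foreach (Process p in Process.GetProcessesByName(serverName))
                {
                    if (!before.Contains(p.Id))
                    {
                        Console.WriteLine("COM server PID: {0}", p.Id);
                        // Keep this id around; if the proper release calls still leave
                        // the process behind, Process.GetProcessById(id).Kill() can end it.
                    }
                }
            }
        }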

    Read the article

  • Tip 13 : Kill a process using C#, from local to remote

    - by StanleyGu
    1. My first choice is always to try System.Diagnostics to kill a process.

    2. The first choice works very well for killing local processes. I thought it should work for killing a remote process too, because the Process class can look up processes on a remote machine by name. So I get the process by name plus the remote machine name and call its Kill() method.

    3. Unfortunately, it gives me the error message "Feature is not supported for remote machines." Apparently, you can query but not kill a remote process using the Process class in System.Diagnostics. The MSDN Library documentation explicitly states this about the Process class: "Provides access to local and remote processes and enables you to start and stop local system processes."

    4. I try my second choice: using System.Management to kill a process running on a remote machine. Make sure to add references to System.Management.dll and System.Management.Instrumentation.dll.

    5. The second choice works very well for killing a remote process. Just make sure the account running your program is configured to have permission to kill a process running on the remote machine.
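
    A minimal sketch of the second choice, assuming WMI access to the remote box (the machine and process names below are placeholders); Win32_Process.Terminate takes an exit code and returns 0 on success:

        using System;
        using System.Management;

        class RemoteProcessKiller
        {
            static void Main()
            {
                string machine = "REMOTE01";            // placeholder machine name
                string processName = "notepad.exe";     // placeholder process name

                // Connect to the WMI namespace on the remote machine (the calling
                // account needs the right permissions there).
                ManagementScope scope = new ManagementScope(@"\\" + machine + @"\root\cimv2");
                scope.Connect();

                WqlObjectQuery query = new WqlObjectQuery(
                    "SELECT * FROM Win32_Process WHERE Name = '" + processName + "'");
                using (ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject process in searcher.Get())
                    {
                        // Win32_Process.Terminate takes an exit code; 0 means success.
                        uint result = (uint)process.InvokeMethod("Terminate", new object[] { 0 });
                        Console.WriteLine("{0} (PID {1}): Terminate returned {2}",
                            process["Name"], process["ProcessId"], result);
                    }
                }
            }
        }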

    Read the article

  • Free tools for SQL Server - Automating Execution Plan Analysis

    - by jchang
    Since this topic is being discussed, I will plug my own tools: SQL Exec Stats and some (a little dated) documentation. The main capability is cross-referencing index usage with specific execution plans. Another feature is generating execution plans for all stored procedures in a database, along with the index usage cross-reference. There are several sources of execution plans or plan handles: a live trace, a previously saved trace, previously saved sqlplan files, dm_exec_cached_plans,...(read more)

    Read the article

  • C# process restart loop

    - by Andrej
    Hi, I'm trying to make a console app that would monitor some process and restart it if it exits. So, the console app is always on; its only job is to restart some other process. I posted my code below. It basically works, but only for one restart of the process. I would appreciate any help! Thanks in advance!

        {
            System.Diagnostics.Process[] p = System.Diagnostics.Process.GetProcessesByName(SOME_PROCESS);
            p[0].Exited += new EventHandler(Startup_Exited);
            while (!p[0].HasExited)
            {
                p[0].WaitForExit();
            }
            //Application.Run();
        }

        private static void Startup_Exited(object sender, EventArgs e)
        {
            System.Diagnostics.Process.Start(AGAIN_THAT_SAME_PROCESS);
        }
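
    Not from the thread, but one way to get repeated restarts is to keep the wait itself inside a loop and re-attach (or restart) on each pass, rather than relying on a single Process object and its Exited event. A rough sketch, with the executable path made up for illustration:

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Threading;

        class ProcessMonitor
        {
            static void Main()
            {
                // Placeholder path of the executable to keep alive.
                const string target = @"C:\SomeApp\SomeApp.exe";
                string name = Path.GetFileNameWithoutExtension(target);

                while (true)
                {
                    // Attach to a running instance, or start a new one if none exists.
                    Process[] running = Process.GetProcessesByName(name);
                    Process p = running.Length > 0 ? running[0] : Process.Start(target);

                    p.WaitForExit();    // block here until the process dies
                    Console.WriteLine("{0:T} process exited, restarting...", DateTime.Now);
                    Thread.Sleep(1000); // brief pause to avoid a tight crash loop
                }
            }
        }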

    Read the article

  • C# - Screenshot of process under Windows Service

    - by Jonathan.Peppers
    We have to run a process from a Windows service and get a screenshot from it. We tried the BitBlt and PrintWindow Win32 calls, but both give blank (black) bitmaps. If we run our code from a normal user process, it works just fine. Is this something that is even possible? Or could there be another method to try?

    Things we tried:
    - Windows service running as Local System, runs process as Local System - screenshot fails.
    - Windows service running as Administrator, runs process as Administrator - screenshot fails.
    - Windows application running as user XYZ, runs a process as XYZ - screenshot works with both BitBlt and PrintWindow.
    - Tried checking "Allow service to interact with desktop" from Local System.

    We also noticed that PrintWindow works better for our case: it works even if the window is behind another window. For other requirements, both the parent and child processes must be under the same user; we can't really use impersonation from one process to another.

    Read the article

  • What is application and process?

    - by Lu Lu
    An application consists of one or more processes. A process, in the simplest terms, is an executing program. One or more threads run in the context of the process. A thread is the basic unit to which the operating system allocates processor time. A thread can execute any part of the process code, including parts currently being executed by another thread. (Source: http://msdn.microsoft.com/en-us/library/ms684841%28VS.85%29.aspx) I understand threads, but I can't distinguish between an application and a process. What is an application? What is a process? How does an application have more than one process? And please give me an example in C#. Thanks.
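
    Not the accepted answer, just a small illustrative C# sketch of the distinction: the program below runs as one process, it runs more than one thread that share that process's memory, and starting Notepad (an arbitrary example) creates a separate process with its own address space and its own threads. An application such as a browser may consist of several such cooperating processes.

        using System;
        using System.Diagnostics;
        using System.Threading;

        class ProcessAndThreads
        {
            static void Main()
            {
                // The running program is one process; Main executes on its main thread.
                Process current = Process.GetCurrentProcess();
                Console.WriteLine("This process: {0} (PID {1})", current.ProcessName, current.Id);
                Console.WriteLine("Main thread id: {0}", Thread.CurrentThread.ManagedThreadId);

                // The same process can run additional threads; they share its memory.
                Thread worker = new Thread(() =>
                {
                    Console.WriteLine("Worker thread id: {0}", Thread.CurrentThread.ManagedThreadId);
                });
                worker.Start();
                worker.Join();

                // A second program started here becomes a separate process, with its
                // own address space and its own threads.
                Process other = Process.Start("notepad.exe");
                Console.WriteLine("Started a second process: PID {0}", other.Id);
                other.Kill();
            }
        }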

    Read the article
