Search Results

Search found 24857 results on 995 pages for 'child process'.


  • How to get an instance of System.Diagnostics.Process by process ID on a remote machine

    - by Tomas1
    Hi all, I want to run and control a process remotely, and I found that the WMI API is the best way to do it. WMI gives me information about the remote process, but I need more control, such as waiting for it and capturing its standard output and errors. How can I do that? And can I get an instance of the System.Diagnostics.Process class by process ID remotely? Note: I tried to get a Process instance with Process.GetProcessById, passing the machineName parameter, but an exception was thrown. Thanks in advance.
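    A minimal C# sketch of the WMI route, under the assumption that the machine name and PID below are placeholders and that remote WMI/DCOM access is already configured. Win32_Process exposes the remote process's metadata, and Process.GetProcessById(pid, machineName) typically works for read-only inspection, but neither gives you the redirected standard output/error of a process you did not start locally:

      using System;
      using System.Management; // add a reference to System.Management.dll

      class RemoteProcessQuery
      {
          static void Main()
          {
              string machine = "RemoteServer01"; // placeholder machine name
              int pid = 4242;                    // placeholder process id

              var scope = new ManagementScope(@"\\" + machine + @"\root\cimv2");
              scope.Connect();

              var query = new SelectQuery("SELECT * FROM Win32_Process WHERE ProcessId = " + pid);
              using (var searcher = new ManagementObjectSearcher(scope, query))
              {
                  foreach (ManagementObject proc in searcher.Get())
                  {
                      // Metadata such as name and command line is available remotely;
                      // live stdout/stderr streams are not.
                      Console.WriteLine("{0} (PID {1})", proc["Name"], proc["ProcessId"]);
                  }
              }
          }
      }

    To capture output remotely, the usual workaround is to run the command on the remote box itself (for example via a remote shell or a small agent) and ship the output back, rather than trying to stream it through System.Diagnostics.Process.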

    Read the article

  • How to Use USER_DEFINED Activity in OWB Process Flow

    - by Jinggen He
    Process Flow is a very important component of Oracle Warehouse Builder. With Process Flow, we can create and control the ETL process by arranging all kinds of activities in a well-constructed flow. In Oracle Warehouse Builder 11gR2 there are 28 kinds of activities, which fall into three categories: Control activities, OWB-specific activities and Utility activities. For more information about Process Flow activities, please refer to the OWB online documentation. Most of those activities are pre-defined for some specific use. For example, the Mapping activity allows execution of an OWB mapping in a Process Flow, and the FTP activity allows an interaction between the local host and a remote FTP server. Beyond those special-purpose activities, the User Defined activity enables you to incorporate into a Process Flow an activity that is not defined within Warehouse Builder. So the User Defined activity brings flexibility and extensibility to Process Flow. In this article, we will take a tour of using the User Defined activity. Let's start.

    Enable execution of the User Defined activity

    Let's start this section by creating a very simple Process Flow, which contains a Start activity, a User Defined activity and an End Success activity. Leave all parameters of the USER_DEFINED activity unchanged except that we enter /tmp/test.sh into the Value column of the COMMAND parameter. Then let's create the shell script test.sh in the /tmp directory. Here is the content of /tmp/test.sh (this article demonstrates a scenario on a Linux system, and /tmp/test.sh is a Bash shell script):

      echo Hello World! > /tmp/test.txt

    Note: don't forget to grant the execution privilege on /tmp/test.sh to the OS Oracle user. For simplicity, we just use the following command:

      chmod +x /tmp/test.sh

    OK, it's so simple that we're almost done. Now deploy the Process Flow and run it. For a newly installed OWB, we will come across an error saying "RPE-02248: For security reasons, activity operator Shell has been disabled by the DBA". That's because, by default, the User Defined activity is DISABLED. The configuration for this can be found in <ORACLE_HOME>/owb/bin/admin/Runtime.properties:

      property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=DISABLED

    The property can be set to three different values: NATIVE_JAVA, SCHEDULER and DISABLED. NATIVE_JAVA uses the Java 'Runtime.exec' interface; SCHEDULER uses a DBMS Scheduler external job submitted by the Control Center repository owner, which is executed by the default operating system user configured by the DBA; DISABLED prevents execution via these operators. We enable execution of the User Defined activity by setting:

      property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=NATIVE_JAVA

    Restart the Control Center service for the change to take effect:

      cd <ORACLE_HOME>/owb/rtp/sql
      sqlplus OWBSYS/<password of OWBSYS> @stop_service.sql
      sqlplus OWBSYS/<password of OWBSYS> @start_service.sql

    Then run the Process Flow again. We will see that the Process Flow completes successfully, and the execution of /tmp/test.sh generated a file /tmp/test.txt containing the line Hello World!.

    Pass parameters to the User Defined activity

    The Process Flow created in the above section has a drawback: the User Defined activity doesn't accept any information from OWB, nor does it give any meaningful results back to OWB. That is to say, it lacks interaction. Sometimes such a Process Flow can fulfill the business requirement, but most of the time we need the User Defined activity to execute according to information coming from prior steps. In this section, we will see how to pass parameters to the User Defined activity and on into the shell script it executes.

    First, let's see how to pass parameters to the script. The User Defined activity has an input parameter named PARAMETER_LIST. This is a list of parameters that will be passed to the command. Parameters are separated from one another by a token. The token is taken as the first character of the PARAMETER_LIST string, and the string must also end in that token. Warehouse Builder recommends the '?' character, but any character can be used. For example, to pass 'abc', 'def' and 'ghi' you can use any of the following equivalents: ?abc?def?ghi? or !abc!def!ghi! or |abc|def|ghi|. If the token character or '\' needs to be included as part of a parameter, it must be preceded with '\', for example '\\'. If '\' is the token character, then '/' becomes the escape character.

    Let's configure the PARAMETER_LIST parameter accordingly and modify the shell script /tmp/test.sh as below:

      echo $1 is saying hello to $2! > /tmp/test.txt

    Re-deploy the Process Flow and run it. We will see that the generated /tmp/test.txt contains the following line: Bob is saying hello to Alice!

    In the example above, the parameters passed into the shell script are static. This case is not so useful because, instead of passing parameters, we could simply hard-code the values in the shell script. To make the case more meaningful, we can pass two dynamic parameters, obtained from the previous activity, to the shell script. Prepare the Process Flow as follows. The Mapping activity MAPPING_1 has two output parameters: FROM_USER and TO_USER. The User Defined activity has two input parameters: FROM_USER and TO_USER. All four parameters are of String type. Additionally, the Process Flow has two string variables: VARIABLE_FOR_FROM_USER and VARIABLE_FOR_TO_USER. Through VARIABLE_FOR_FROM_USER, the input parameter FROM_USER of USER_DEFINED gets its value from the output parameter FROM_USER of MAPPING_1; we achieve this by binding both parameters to VARIABLE_FOR_FROM_USER. In the same way, through VARIABLE_FOR_TO_USER, the input parameter TO_USER of USER_DEFINED gets its value from the output parameter TO_USER of MAPPING_1. We also need to change the PARAMETER_LIST of the User Defined activity accordingly. Now the shell script gets its input from the Mapping activity dynamically. Deploy the Process Flow and all of its necessary dependencies, then run the Process Flow. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A! Here 'USER B' and 'USER A' are two outputs of the Mapping execution.

    Write the shell script within Oracle Warehouse Builder

    In the previous section, the shell script is located in the /tmp directory. But sometimes, when the shell script is small, or for the sake of maintaining consistency, you may want to keep the shell script inside Oracle Warehouse Builder. We can achieve this by configuring three parameters of the User Defined activity properly:

      COMMAND: set the path of the interpreter by which the shell script will be interpreted.
      PARAMETER_LIST: leave it blank.
      SCRIPT: enter the shell script content.

    Note that in Linux the shell script content is passed to the interpreter as standard input at runtime. As for how to actually pass parameters to the shell script, we can utilize variable substitutions: ${FROM_USER} will be replaced by the value of the FROM_USER input parameter of the User Defined activity, and the same goes for the ${TO_USER} symbol. Besides the custom substitution variables, OWB also provides some pre-defined system substitution variables; you can refer to the online documentation for those. Deploy the Process Flow and run it. We see that the generated /tmp/test.txt contains the following line: USER B is saying hello to USER A!

    Leverage the return value of the User Defined activity

    All of the previous sections connect the User Defined activity to END_SUCCESS with an unconditional transition. But what should we do if we want different subsequent activities for different shell script execution results?

    1. The simplest way is to add three simple-conditioned outgoing transitions to the User Defined activity. To simplify the scenario, we connect the User Defined activity to three End activities: if the shell script ends successfully, the whole Process Flow ends at END_SUCCESS; otherwise it ends at END_ERROR (in our case, ending at END_WARNING seldom happens). In the real world, we can add more complex and meaningful subsequent business logic.

    2. Or we can utilize complex conditions to work with different results of the User Defined activity. Previously our script only had this line:

      echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt

    We can add more logic to it and return different values accordingly:

      echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt
      if CONDITION_1 ; then
        ......
        exit 0
      fi
      if CONDITION_2 ; then
        ......
        exit 2
      fi
      if CONDITION_3 ; then
        ......
        exit 3
      fi

    After that we can leverage the result by checking RESULT_CODE in the condition expressions of those outgoing transitions. Suppose we have a Process Flow where SUB_PROCESS_n stands for different further processes; we can set a complex condition for the transition from USER_DEFINED to SUB_PROCESS_1, and the other transitions can be set in the same way. Note that in our shell script we return 0, 2 and 3, but not 1: on a Linux system, if the shell script comes across a system error such as an IO error, the return value will be 1, and we can handle such a return value explicitly.

    Summary

    Let's summarize what has been discussed in this article: how to create a Process Flow with a User Defined activity in it; how to pass parameters from the prior activity to the User Defined activity and finally into the shell script; how to write the shell script within Oracle Warehouse Builder; how to do variable substitutions; and how to let the User Defined activity return different values and in what ways we can leverage them.

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this: I will pass in the userid and the BPEL process will call the AS-User Web Service we created in Part 1. This is just a basic test to illustrate how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product and Oracle JDeveloper (to build the process).

    First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I could use the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I name the individual BPEL process simpleBPEL (you can use a different name, but I wanted to keep the project and process names the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against. Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call, as I will use the basic single field used by BPEL as the default.

    The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke): you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas, the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name; I have used the name RetrieveUser. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL field. Once you specify the URL you can press the "Find existing WSDLs" button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "Copy WSDL and its dependent artifacts into the project" if you intend to work on the BPEL process offline; if you do not check this, your target application must be accessible when you work on the BPEL process (and that is not always convenient). Note: the perceptive among you will notice that the URL specified in this example is different from the URL in the last post. The reason is that for these demonstrations I shifted to a new server and did not redo all of the past screen captures. If you copy the WSDL into the project you will get an information screen about Localize Files; it is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.

    To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that. Now to use the Web Service. To call the Web Service (as it is just imported, not yet connected to the BPEL process), you must add an Invoke action to your BPEL process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it on the edit nodes between the receiveInput and replyOutput nodes. This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service. Once the Invoke action is connected to the Web Service, an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straight away, and to name them appropriately, for you to trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create variables to hold the input and output for the call. To do this, click Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name; it uses the node name as a prefix (that is why it is important to name the node before hitting this button). You can name the variable anything, but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service.

    You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step and how to take the output from the service call and pass it back to the client. You now need to add an Assign node to assign the input to the Web Service. To do this, select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes, as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double clicking the node allows you to specify its name; I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable InputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service, as the user is the primary key for the object. The fields become linked (which means data from the source will be copied to the target).

    Almost there. You now need to process the output from the Web Service call into the outputVariable of the client call. I have decided to pass back one piece of data, the name associated with the user, by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform, as it is not just a matter of an Assign action; it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components, you drag and drop the Transform component to the appropriate place in the BPEL process. In this case we want to transform the output from the Web Service call, so we place it between the InvokeUser action and the replyOutput action. The Transform component is actually part of the Oracle extensions to the BPEL specification. Double clicking the Transform node allows you to name the node; in this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In this example the source is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen shows the source and target schemas for me to map across. As with the Assign, I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I then attach the last name to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space; to make the name legible I add a space between the fields by clicking the function and adding a space in the call. I now have a completed mapping. I can now save the whole project, as my BPEL process is complete. As you can see, the following happens:

      We accept input from the client (the userid for the call) in the receiveInput step.
      We assign that value to the input parameters for the Web Service call in the AssignUser step.
      We invoke the Web Service call to retrieve the data from the product in the InvokeUser step.
      We take the output from the InvokeUser step and concatenate the names in the TransformName step.
      We pass back the data in the replyOutput step.

    At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data in the payload at the bottom of the Test Service screen; in my case I am returning my own userid information. On the Response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls. As you can see this is a basic BPEL process, but you get the idea: importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I have just shown you the basic principles.

    Read the article

  • Element is already the child of another element.

    - by Erica
    I get the following error in my Silverlight application, but I can't figure out which control is the problem. If I debug, it doesn't break on anything in my code; it just fails with this framework call stack containing only framework code. Is there any way to get more information on what part of a Silverlight app is the problem in this case?

    Message: Sys.InvalidOperationException: ManagedRuntimeError error #4004 in control 'Xaml1': System.InvalidOperationException: Element is already the child of another element.
      at MS.Internal.XcpImports.CheckHResult(UInt32 hr)
      at MS.Internal.XcpImports.Collection_AddValue[T](PresentationFrameworkCollection`1 collection, CValue value)
      at MS.Internal.XcpImports.Collection_AddDependencyObject[T](PresentationFrameworkCollection`1 collection, DependencyObject value)
      at System.Windows.PresentationFrameworkCollection`1.AddDependencyObject(DependencyObject value)
      at System.Windows.Controls.UIElementCollection.AddInternal(UIElement value)
      at System.Windows.PresentationFrameworkCollection`1.Add(T value)
      at System.Windows.Controls.AutoCompleteBox.OnApplyTemplate()
      at System.Windows.FrameworkElement.OnApplyTemplate(IntPtr nativeTarget)

    Read the article

  • Exclude first child with XSL-T

    - by zneak
    Hello, what I'm trying to do is fairly simple, but I can't find the way to do it. I just want to iterate over the children of a node, excluding the first child. For instance, in this XML snippet, I would want all the <Bar> elements except the first one:

      <foo>
        <Bar>Example</Bar>
        <Bar>This is an example</Bar>
        <Bar>Another example</Bar>
        <Bar>Bar</Bar>
      </foo>

    There is no common attribute by which I can filter (like an id attribute or something similar). Any suggestions?

    Read the article

  • using last-child in css

    - by Miral
    The CSS below doesn't work on the HTML given below. The purpose is to apply the rule to the last 'li', but it doesn't.

      #refundReasonMenu #nav li:last-child {
        border-bottom: 1px solid #b5b5b5;
      }

    and the HTML looks like

      <div id="refundReasonMenu">
        <ul id="nav">
          <li><a id="abc" href="#">abcde</a></li>
          <li><a id="def" href="#">xyz</a></li>
        </ul>
      </div>

    Read the article

  • Query Only Specified Number Of Items From Parent/Child Categories

    - by RogeR
    I'm having trouble figuring out how to query every item in a certain category and only list the newest 10 items by date. Here is my table layout:

      download_categories: category_id (int, primary key), title (varchar), parent_id (int)
      downloads: id (int, primary key), title (varchar), category_id (int), date (date)

    I need to query every file in a main category that, let's say, has 100 items and 5 child categories, and only spit out the last 10 added. I have functions right now that just add up all the files so I can get a count by category, but I can't seem to modify the code to only display a certain number of items based on the date.

    Read the article

  • Parent Child Relationships with Fluent NHibernate?

    - by ElHaix
    I would like to create a cascading tree/list of N children for a given parent, where a child can also become a parent. Given the following data structure:

      CountryType=1; ColorType=3; StateType=5
      6, 7, 8    = {Can, US, Mex}
      10, 11, 12 = {Red, White, Blue}
      20, 21, 22 = {California, Florida, Alberta}

      TreeID  ListTypeID  ParentTreeID  ListItemID
      1       1           Null          6     (Canada is a Country)
      2       1           Null          7     (US is a Country)
      3       1           Null          8     (Mexico is a Country)
      4       3           3             10    (Mexico has Red)
      5       3           3             11    (Mexico has White)
      6       5           1             22    (Alberta is in Canada)
      7       5           7             20    (California is in US)
      8       5           7             21    (Florida is in US)
      9       3           6             10    (Alberta is Red)
      10      3           6             12    (Alberta is Blue)
      11      3           2             10    (US is Red)
      12      3           2             11    (Us is Blue)

    How would this be represented in Fluent NHibernate classes? Some direction would be appreciated. Thanks.
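    One common way to express this kind of self-referencing parent/child tree in Fluent NHibernate is a nullable parent reference plus an inverse children collection. Below is a minimal sketch under the assumption of a hypothetical TreeNode entity mapped onto the columns above; the entity name, table name and cascade choices are illustrative, not taken from the question:

      using System.Collections.Generic;
      using FluentNHibernate.Mapping;

      public class TreeNode
      {
          public TreeNode()
          {
              Children = new List<TreeNode>();
          }

          public virtual int TreeId { get; set; }
          public virtual int ListTypeId { get; set; }
          public virtual int ListItemId { get; set; }
          public virtual TreeNode Parent { get; set; }          // null for root rows
          public virtual IList<TreeNode> Children { get; set; }
      }

      public class TreeNodeMap : ClassMap<TreeNode>
      {
          public TreeNodeMap()
          {
              Table("Tree");
              Id(x => x.TreeId).Column("TreeID");
              Map(x => x.ListTypeId).Column("ListTypeID");
              Map(x => x.ListItemId).Column("ListItemID");

              // Self-referencing relationship: both ends use the ParentTreeID column.
              References(x => x.Parent).Column("ParentTreeID").Nullable();
              HasMany(x => x.Children)
                  .KeyColumn("ParentTreeID")
                  .Inverse()
                  .Cascade.AllDeleteOrphan();
          }
      }

    Walking the tree is then a matter of loading the root rows (Parent == null) and recursing over Children; whether the lookup values (country, color, state names) deserve their own mapped entity instead of a raw ListItemID is a separate design choice.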

    Read the article

  • Representing parent-child relationships in SharePoint lists

    - by Chris Farmer
    I need to create some functionality in our SharePoint app that populates a list or lists with some simple hierarchical data. Each parent record will represent a "submission" and each child record will be a "submission item." There's a 1-to-n relationship between submissions and submission items. Is this practical to do in SharePoint? The only types of list relationships I've done so far are lookup columns, but this seems a bit different. Also, once such a list relationship is established, what's the best way to create views on this kind of data? I'm almost convinced that it'd be easier just to write this stuff to an external database, but I'd like to give SharePoint a shot in order to take advantage of the automated search capabilities.

    Read the article

  • jQuery child of clicked element

    - by Tom
    Hi, I've got a list of links which have a click event attached to them, and I need to get the ID from the child A link. So in the example below, if I clicked the first list element I'd need 'google' returned. I've tried '$this a' but can't quite work out the syntax.

    jQuery:

      $("ul li").click(function(event){
        $("input").val($(this).html());
      });

    html:

      <ul>
        <li><a href="http://www.google.com" id="google">Google</a></li>
      </ul>

    Read the article

  • css3 nth child question

    - by JCHASE11
    Hello. I am designing a 960px wide layout with an unordered list. Each list item is 240px wide, so 4 list items fit horizontally in a row. I have about 20 rows on the page. I want every other list item to have a background of #ececec, so my CSS would be:

      ul li:nth-child(2n+2) {
        background-color: #ececec;
      }

    This works. The only problem is that because there are 4 items in a row (an even number), every row ends up identical, with the background color on the 2nd and 4th list item of each row. This is not the effect I am looking to achieve. I want the background colors to alternate between rows, creating a grid-like effect (think of a checkerboard). What is the correct CSS to do so? Thanks!

    Read the article

  • How to stop a process in Terminal [closed]

    - by AngryHacker
    Possible Duplicate: Ending a process in unix instead of interrupting it

    When I run a task in Terminal, such as ping blah.com, how do I then stop this task (other than closing the Terminal window)? In Windows, you can Ctrl+Break pretty much any terminal-based process, but I can't figure out the way to do it on the Mac.

    Read the article

  • Attachments in Oracle BPM 11g – Create a BPM Process Instance by passing an Attachment

    - by Venugopal Mangipudi
    Problem Statement: On a recent engagement I had a requirement where we needed to create BPM instances using a message start event. The challenge was that the instance needed to be created after polling a file location and attaching the picked-up file (a PDF) as an attachment to the instance.

    Proposed Solution: I was contemplating using the process API to accomplish this, but came up with a solution that involves a BPEL process to pick up the file and send a notification to the BPM process, passing the attachment as a payload. The following are brief steps that were used to build the solution.

    BPM process to receive an attachment as part of the payload: The BPM process is a very simple process which has a Message Start event that accepts the attachment as an argument and a simple User Task that the user can use to view the attachment (as part of the OOTB attachment panel). The input payload is based on AttachmentPayload.xsd. The 3 key elements of the payload are:

      <xsd:element name="filename" type="xsd:string"/>
      <xsd:element name="mimetype" type="xsd:string"/>
      <xsd:element name="content" type="xsd:base64Binary"/>

    A screenshot of the Human Task data assignment that needs to be performed to attach the file is provided here. Once the process and the UI project (default generated UI) are deployed to the SOA server, copy the WSDL location of the process service (from EM). This WSDL is used in the BPEL project to create instances in the BPM process after a file is polled.

    BPEL process to poll for the file and create instances in the BPM process: For the BPEL process, a File adapter was configured as a Read service (using the File Streaming option and keeping the schema as Opaque). Once a location and the file pattern to poll are provided, the Read service Partner Link is wired to invoke the BPEL process. Using the BPM process WSDL, we can create the Web Service reference and invoke the start operation. Before we do the assignment for the Invoke operation, a global variable should be created to hold the value of the file name; the mapping to the global variable can be done in the Receive activity properties (jca.file.FileName). So, in the Assign operation before we invoke the BPM process service, we can get the content of the file from the receive input variable and the file name from the jca.file.FileName property. The mimetype needs to be hard-coded to the mime type of the file: application/pdf (I am still researching ways to derive the mime type, as it is not available as part of the jca.file properties). A screenshot of the BPEL process can be found here and the Assign activity can be found here. The project source can be found at the following location. A sample PDF file to test the project and a screenshot of the BPM Human Task screen after the successful creation of the instance can be found here.

    References: [1] https://blogs.oracle.com/fmwinaction/entry/oracle_bpm_adding_an_attachment

    Read the article

  • How can you start a process from asp.net without interfering with the website?

    - by Sem Dendoncker
    Hi, we have an ASP.NET application that is able to create .air files. To do this we use the following code:

      System.Diagnostics.Process process = new System.Diagnostics.Process();
      //process.StartInfo.FileName = strBatchFile;
      if (File.Exists(@"C:\Program Files\Java\jre6\bin\java.exe"))
      {
          process.StartInfo.FileName = @"C:\Program Files\Java\jre6\bin\java.exe";
      }
      else
      {
          process.StartInfo.FileName = @"C:\Program Files (x86)\Java\jre6\bin\java.exe";
      }
      process.StartInfo.Arguments = GetArguments();
      process.StartInfo.RedirectStandardOutput = true;
      process.StartInfo.RedirectStandardError = true;
      process.StartInfo.UseShellExecute = false;
      process.PriorityClass = ProcessPriorityClass.Idle;
      process.Start();
      string strOutput = process.StandardOutput.ReadToEnd();
      string strError = process.StandardError.ReadToEnd();
      HttpContext.Current.Response.Write(strOutput + "<p>" + strError + "</p>");
      process.WaitForExit();

    Well, the problem now is that sometimes the CPU of the server reaches 100%, causing the application to run very slowly and even lose sessions (we think this is the problem). Is there any other solution for generating the .air files, or for running an external process without interfering with the ASP.NET application? Cheers, M.
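    Two things are worth noting about the quoted snippet: Process.PriorityClass can only be applied to an already-started process (before Start() there is no real process to adjust), and calling ReadToEnd on both StandardOutput and StandardError from the request thread can block the worker thread and even deadlock once one of the buffers fills. A hedged sketch of an asynchronous variant, keeping only the redirection idea from the question and treating the helper names as illustrative:

      using System.Diagnostics;
      using System.Text;

      static class AirPackager
      {
          // 'arguments' plays the role of GetArguments() in the question.
          public static string Run(string javaPath, string arguments, out string errors)
          {
              var psi = new ProcessStartInfo
              {
                  FileName = javaPath,
                  Arguments = arguments,
                  RedirectStandardOutput = true,
                  RedirectStandardError = true,
                  UseShellExecute = false,
                  CreateNoWindow = true
              };

              var output = new StringBuilder();
              var errorBuffer = new StringBuilder();

              using (var process = new Process { StartInfo = psi })
              {
                  // Collect output as it arrives instead of blocking on ReadToEnd.
                  process.OutputDataReceived += (s, e) => { if (e.Data != null) output.AppendLine(e.Data); };
                  process.ErrorDataReceived += (s, e) => { if (e.Data != null) errorBuffer.AppendLine(e.Data); };

                  process.Start();
                  process.PriorityClass = ProcessPriorityClass.Idle; // only valid after Start()
                  process.BeginOutputReadLine();
                  process.BeginErrorReadLine();
                  process.WaitForExit();
              }

              errors = errorBuffer.ToString();
              return output.ToString();
          }
      }

    Even so, CPU-heavy packaging still competes with the site on the same machine; a common design is to enqueue the job (a database table, MSMQ, etc.) and let a separate Windows service or scheduled task do the actual .air generation, so the web worker process only enqueues work and later reports status.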

    Read the article

  • Export SWC from Flash and Access Child from Flex

    - by php html
    I'm creating an ActionScript project in Flex Builder. I succeeded in exporting a SWC file from Flash and using it successfully in Flex. I have a good programming background and Flex looks very simple to me, but I'm having a difficult time in Flash. I'm trying to achieve something that might be very simple (not for me, of course): I create a simple shape in Flash and convert it to a symbol. Then I create a TextField. Then I select both elements, convert them to another symbol, and export it as a movie clip in the SWC. In Flex I want to change the value of the text field. How should I do that? I'm trying to do:

      var t:ExportedMC = new ExportedMC();
      t....(what should I write here)

    As I mentioned, when I open Flash I feel like an elephant in a porcelain store. I have 2 questions here:
      - How do I assign a name to the TextField in Flash? I'm using CS4.
      - How do I access it as a child in Flex?

    Read the article

  • Adding a block of XML as child of a SimpleXMLElement object

    - by miCRoSCoPiC_eaRthLinG
    Hey all, I have a SimpleXMLElement object with an XML setup similar to the following...

      $xml = <<<EOX
      <books>
        <book>
          <name>ABCD</name>
        </book>
      </books>
      EOX;
      $sx = new SimpleXMLElement( $xml );

    Now I have a class named Book that contains info about each book. The same class can also spit out the book info in XML format akin to the above (the nested block). For example,

      $book = new Book( 'EFGH' );
      $book->genXML();

    ...will generate

      <book>
        <name>EFGH</name>
      </book>

    Now I'm trying to figure out a way to take this generated XML block and append it as a child of <books>, so that the tree looks like this, for example:

      // Non-existent member method. For illustration purposes only.
      $sx->addXMLChild( $book->genXML() );

    ...XML tree now looks like:

      <books>
        <book>
          <name>ABCD</name>
        </book>
        <book>
          <name>EFGH</name>
        </book>
      </books>

    From what documentation I have read on SimpleXMLElement, addChild() won't get this done, as it doesn't support XML data as the tag value. Any ideas on how I should go about this? Thanks, m^e

    Read the article

  • divert asp.net child controls

    - by Eric
    Hi folks, I am trying to create an ASP.NET custom control that acts as a hosting container for any other controls, similar to the existing 'Panel' control. Basically, I need to build a web control that groups a bunch of other controls. It will consist of a header and a body pane, similar to a normal window in a desktop application. The header will contain some simple text and some JavaScript-driven code that shows/hides the body pane. The body pane simply hosts any number of other controls.

      +------------------------------------------------------+
      | User Details                               Show/Hide |
      +------------------------------------------------------+
      | Name:           [Eric      ]                         |
      | Address:        [Some where]                         |
      | Date of Birth:  [01/01/1980]                         |
      |                                                      |
      | (any other fields go on)                             |
      |                                                      |
      +------------------------------------------------------+

    Ideally I want to create a control that packs the whole thing together, so at design time I could use the following markup:

      <myCtl:SuperContainer runat="server" Title="User Details">
        <asp:label id="lblName" runat="server" text="Name:"/>
        <asp:textbox id="txtName" runat="server"/>
        <asp:label id="lblDOB" runat="server" text="Date of Birth:"/>
        <asp:textbox id="txtDOB" runat="server"/>
        (...other controls definition...)
      </myCtl:SuperContainer>

    I plan to include two panels in my control, one for the header and another one for the body, but as you can see, the key issue is finding a way to 'divert' the child controls that are defined in the markup to the body panel, instead of the default parent container. I feel it could be done by somehow overriding (manipulating) the Controls property, but I don't know how to do so properly. Can anyone give me some ideas about how to implement this 'SuperContainer' control? Many thanks, Eric
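    One way to get this effect without juggling inner Panel instances is to let the parsed children stay as direct children of the control (exactly like Panel does) and only wrap them in a body container at render time. A rough sketch, assuming plain div/CSS-class markup for the header and body; the class names and the Title property are illustrative, not an established API:

      using System.Web.UI;
      using System.Web.UI.WebControls;

      // ParseChildren(false) + PersistChildren(true) make the markup inside the tag
      // parse into real child controls of this control, just like <asp:Panel>.
      [ParseChildren(false)]
      [PersistChildren(true)]
      public class SuperContainer : WebControl
      {
          public string Title { get; set; }

          public SuperContainer() : base(HtmlTextWriterTag.Div) { }

          protected override void RenderContents(HtmlTextWriter writer)
          {
              // Header pane: title text plus whatever show/hide JavaScript hook you need.
              writer.AddAttribute(HtmlTextWriterAttribute.Class, "superContainerHeader");
              writer.RenderBeginTag(HtmlTextWriterTag.Div);
              writer.Write(Title);
              writer.RenderEndTag();

              // Body pane: the child controls declared in the markup render here,
              // which gives the "diversion" without a separate inner Panel.
              writer.AddAttribute(HtmlTextWriterAttribute.Class, "superContainerBody");
              writer.RenderBeginTag(HtmlTextWriterTag.Div);
              RenderChildren(writer);
              writer.RenderEndTag();
          }
      }

    If real inner containers are required (for example so child control lookup resolves against the body), the heavier but more conventional alternative is a templated control with an ITemplate property and an INamingContainer body.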

    Read the article

  • System Idle Process network traffic?-Updated

    - by Moab
    I was using NetBalancer and noticed network traffic on an unidentified service. When I highlight it, then go to the lower center pane and click the parent process, it says the parent is the System Idle Process, and it shows incoming and outgoing traffic in the upper pane. Does anyone know why this Windows System Idle Process is talking on the network? Windows 7 HP 64-bit.

    Edit: after blocking the traffic for that unidentified service, I checked my Event Viewer (Windows Logs > System) and found 3 new events that were never recorded before and matched the time I blocked the traffic. So is this part of the Windows local DNS cache?

      Event ID 1014, DNS Client Events:
      Name resolution for the name dns.msftncsi.com timed out after none of the configured DNS servers responded.
      Name resolution for the name wpad.home timed out after none of the configured DNS servers responded.
      Name resolution for the name mscrl.microsoft.com timed out after none of the configured DNS servers responded.

    Then my web browser refused to work; I re-enabled the traffic and all returned to normal.

    Read the article

  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I am running a couple of Django apps in production. Ideally, I would have addressed my current memory footprint issues by optimizing the apps, adding more RAM or adding swap. But the problem is that I doubt there's much memory optimization I'd milk from optimizing the Django apps (the stack being open-source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer options to use swap!

    So, in the meantime (as I wait to secure more resources to afford more RAM), I wish to mitigate the scenarios where the server runs out of memory, leaving me no option but to request a VPS restart (at that point, I can't even SSH into the box!). So, what I would love in a solution is the ability to detect when a process (or, generally, total system memory usage) exceeds a certain critical threshold - for example, when free RAM falls to, say, 10% - which I've noticed occurs after the VPS has been up for a long time and when traffic to some of the heavier apps suddenly spikes (most are just staging apps anyway). I then wish to be able to kill/restart the offending process(es), most likely Apache. Doing this manually in these situations has restored sane memory usage levels - a hint that possibly one or more of the Django apps has a memory leak.

    In brief:
      - Monitor overall system RAM usage.
      - When free RAM falls below a given critical threshold (say below 10%), kill/restart the offending process(es) - or, simpler, if we assume from my current log analysis (using linux-dash) that Apache is often the offender, just kill/restart it.
      - Rinse and repeat...

    Read the article

  • iis7 large worker process request queue creating process blocking aspnet.config & machine.config amended (bottleneck)

    - by scott_lotus
    ASP.NET 2.0 app, .NET 2.0 framework, IIS7.

    I am seeing a large queue of "requests" appear under the "worker process" option. The states recorded appear to be Authenticate Request and Execute Request Handler more than anything else.

    I have amended aspnet.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727 (32-bit path and 64-bit path) to include:

      maxConcurrentRequestsPerCPU="50000"
      maxConcurrentThreadsPerCPU="0"
      requestQueueLimit="50000"

    I have amended machine.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG (32-bit and 64-bit path) to include:

      autoConfig="true"
      maxIoThreads="100"
      maxWorkerThreads="100"
      minIoThreads="50"
      minWorkerThreads="50"
      minFreeThreads="176"
      minLocalRequestFreeThreads="152"

    Still I get the issue. The issue manifests itself as a large number of requests in the worker process queue. The number of current connections to the website displays 500 when this issue occurs; I don't think I have seen concurrent connections over 500 without this issue occurring. The web application slows as the requests block. Refreshing the app pool resolves it for a while (as expected) as the load is spread between the two pools. The application pool in question has its fixed-request limit set to recycle at 50000. Thank you for any help. Scott

    Quick edit to say: hmm, my developers are telling me the project was built with the .NET 3.5 framework. Looking at C:\Windows\Microsoft.NET\Framework64\v3.5 there does not appear to be an ASPNET.CONFIG or a MACHINE.CONFIG... is there a 3.5 equivalent? After a little searching, apparently 3.5 uses the 2.0 framework files for the ones 3.5 is missing. So back to the original question: where is my bottleneck?

    Read the article

  • How can the Private Bytes of a process be significantly less than its effect on the system commit charge?

    - by bacar
    On a 64-bit Windows Server 2003, I can see using taskmgr or Process Explorer that the total commit charge is around 3.5GB, yet when I sum the Private Bytes consumed by each process (by running pslist -m and adding all values under the Priv column) the total comes in at 1.6GB. I know which process seems to be causing this (sqlservr.exe), as when I kill the process the commit charge drops dramatically. However the process in question is consuming only ~220MB of Private Bytes, yet killing the process drops the commit charge by ~1.6GB. How is this possible? How can the commit charge be so significantly greater than Private Bytes, which should represent the amount of committed memory? If some other factor contributes to the commit charge, what is that factor and how can I view its impact in Process Explorer?

    Note: I claim that I understand the difference between reserved and committed memory already:
      - my investigations above relate specifically to Private Bytes, which includes only committed memory and excludes reserved memory.
      - the Virtual Size of the process in this case is over 4GB, but this should be irrelevant - Virtual Size in procexp represents reserved, not committed memory, and should not contribute to the commit charge.

    I'm particularly interested in generalised answers to this question: I'm assuming that if sqlservr.exe can behave in this way, any process potentially could.

    Further Investigations

    I notice that pointing Sysinternals VMMap at this process reports a committed "Private Data" of 1.6GB despite Procexp's reported Private Bytes of 220MB. This is particularly strange given that the documentation for this field in the "Windows® Sysinternals Administrator's Reference" states that:

      Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the Heap Manager or the .NET runtime, or assigned to the Stack category... VMMap's definition of "Private Data" is more granular than that of Process Explorer's "private bytes." Procexp's "private bytes" includes all private committed memory belonging to the process.

    i.e. VMMap's committed "Private Data" should be smaller than procexp's "Private Bytes". Also, the 'Process committed memory' section of Mark Russinovich's excellent Pushing the Limits of Windows: Virtual Memory highlights two cases which won't show up in Private Bytes:
      - File mapping views with copy-on-write semantics (however, according to VMMap there is no significant space allocated to Mapped Files).
      - Pagefile-backed virtual memory (however, I tried testlimit with the -l flag as suggested, and no significant memory is consumed by pagefile-backed sections).

    Read the article

  • Get process id of process started with CreateObject in .NET

    - by Lex
    Hi! I'm using VB.NET for a web application that starts a process using CreateObject, like this:

      Dim AVObject = CreateObject("avwin.application")

    After everything is done, it all gets closed down and stopped using the proper release functions; however, for some reason the process remains. Is there some way in which I can get the process id of the started process, in order to explicitly kill it just before termination? Thanks
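    CreateObject only hands back a COM object, not a Process, so one pragmatic sketch is to snapshot the matching process IDs before and after the call and treat the newly appeared one as yours. The executable name "avwin" below is a guess at the process behind the avwin.application ProgID and must be replaced with the real name; the sketch is in C#, but the same idea translates directly to VB.NET:

      using System;
      using System.Collections.Generic;
      using System.Diagnostics;
      using System.Linq;

      static class AvLauncher
      {
          // Hypothetical executable name behind the avwin.application ProgID - adjust it.
          const string ExeName = "avwin";

          public static int StartAndGetPid(out object avObject)
          {
              // Snapshot the PIDs of any instances that already exist.
              var before = new HashSet<int>(Process.GetProcessesByName(ExeName).Select(p => p.Id));

              // C# equivalent of VB.NET's CreateObject("avwin.application").
              avObject = Activator.CreateInstance(Type.GetTypeFromProgID("avwin.application"));

              // The matching PID that was not there before belongs to the new instance.
              return Process.GetProcessesByName(ExeName)
                            .Select(p => p.Id)
                            .First(id => !before.Contains(id));
          }
      }

      // Later, if the process still lingers after the proper release calls:
      //     Process.GetProcessById(pid).Kill();

    There is an obvious race if two requests create the COM object at the same time, and some COM servers reuse a single running instance (in which case no new PID appears), so treat this as a diagnostic workaround rather than a robust design.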

    Read the article

  • Tip 13 : Kill a process using C#, from local to remote

    - by StanleyGu
    1. My first choice is always to try System.Diagnostics to kill a process.

    2. The first choice works very well for killing local processes. I thought it should work for killing a remote process too, because the Process class lets you look up processes on another machine (for example, Process.GetProcessesByName is overloaded with a second argument for the machine name). So I pass the process name plus the remote machine name, get the Process object and call its Kill() method.

    3. Unfortunately, it gives me the error message "Feature is not supported for remote machines." Apparently, you can query but not kill a remote process using the Process class in System.Diagnostics. The MSDN library documentation explicitly states this about the Process class: "Provides access to local and remote processes and enables you to start and stop local system processes."

    4. So I try my second choice: using System.Management to kill a process running on a remote machine. Make sure to add references to System.Management.dll and System.Management.Instrumentation.dll.

    5. The second choice works very well for killing a remote process. Just make sure the account running your program is configured to have permission to kill a process on the remote machine.
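    A minimal C# sketch of the System.Management route described in steps 4 and 5, with the machine name and process name below as placeholders; Win32_Process exposes a Terminate method that can be invoked on each matching instance:

      using System;
      using System.Management; // reference System.Management.dll

      class RemoteKiller
      {
          static void Main()
          {
              // Placeholder values - replace with the real remote machine and process name.
              string machine = "RemoteServer01";
              string processName = "notepad.exe";

              var scope = new ManagementScope(@"\\" + machine + @"\root\cimv2");
              scope.Connect();

              var query = new SelectQuery("SELECT * FROM Win32_Process WHERE Name = '" + processName + "'");
              using (var searcher = new ManagementObjectSearcher(scope, query))
              {
                  foreach (ManagementObject proc in searcher.Get())
                  {
                      // Terminate is defined on the Win32_Process WMI class.
                      proc.InvokeMethod("Terminate", null);
                      Console.WriteLine("Terminated {0} (PID {1})", proc["Name"], proc["ProcessId"]);
                  }
              }
          }
      }

    The ManagementScope can also be given ConnectionOptions with explicit credentials when the calling account does not already have rights on the remote box.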

    Read the article
