Search Results

Search found 23901 results on 957 pages for 'deployment process'.

Page 132/957 | < Previous Page | 128 129 130 131 132 133 134 135 136 137 138 139  | Next Page >

  • Cool Tools You Can Use: Validation Templates for PeopleSoft Contracts Processes

    - by Mark Rosenberg
    This is the first in a series of postings we’ll be making under the heading of Cool Tools You Can Use. Our PeopleSoft product management team identified the need for this series after reflecting on the many conversations we have each year with our PeopleSoft community members. During these conversations, we were discovering that customers and implementation partners were often not aware that solutions exist to the problems they were trying to address and that the solutions were readily available at no additional charge. Thus, the Cool Tools You Can Use series will describe the business challenge we’ve heard, the PeopleSoft solution to the challenge, and how you can learn more about the solution so that everyone can be sure to make full use of what PeopleSoft applications have to offer.

    The first cool tool we’ll look at is the Validation Template for PeopleSoft Contracts Process Requests, which was first released in December 2013 as part of PeopleSoft Contracts 9.2 Update Image 4. The business issue our customers highlighted to us is the need to tightly control, yet easily configure and manage, the scope of data that any user can process when initiating a process. Control of each user’s span of impact is essential to reducing billing reconciliation issues, passing span of authority audits, and reducing (or even eliminating) the frequency of unexpected process results.

    Setting Up the Validation Template for a PeopleSoft Contracts Process

    With the validation template, organizations can easily and quickly ensure the software restricts the scope of transactions a user can affect, and it gives organizations the confidence to know that business processes are being governed effectively. Additionally, this control of PeopleSoft Contracts process requests can be applied, maintained, and adjusted from a web browser, thereby enabling analysts to administer the rules without having to engage software developers to customize the software. During the field validation template setup, an analyst specifies the combinations of fields that must contain values when a user tries to set up a run control and initiate a PeopleSoft Contracts process from a process request page. For example, for the Process Limits component, an organization could require that users enter a valid combination of values for the business unit, contract, and contract type fields, or a value in the contract administrator field. Until the user enters a valid combination of entries on the process request page, he cannot launch the process. With the validation template activated for process request pages, organizations can be confident that PeopleSoft Contracts users will not accidentally begin generating invoices or triggering other revenue management processes for transactions beyond their scope of authority.

    To learn more about the Validation Template, please review the Defining Validation Templates section of the PeopleSoft Contracts PeopleBooks.

    Read the article

  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
    In this article, and for the sake of simplicity, we will use the term “On-Premise” to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production instance equivalents.

    Tools Overview

    First let’s look at the design-time tools within JDeveloper for customizing and extending the artifacts of a business process. In essence this falls into two buckets: the SOA Composite Editor for working with BPEL processes, and BPM Studio.

    The SOA Composite Editor

    As a standard extension to JDeveloper, this graphical design tool should be familiar to anyone who has previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed up by a full (read-only) XML source view, it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server, the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic.

    If you are customizing an existing Fusion Applications BPEL process, be aware that it does support MDS-based customization layers, just like Page Composer, where different customizations are used based on the run-time context, such as a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time. This is defined by a field on the composite record, available in Enterprise Manager. Obviously, if you wish to fire different activities and tasks based on the user context, then you should include switches to fork the flows in your custom BPEL process.

    Figure 1 – A BPEL process in Composite Editor

    The following describes the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out-of-the-box.

    1. Set up your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension. If there are problems you can also find it under the ‘Check for Updates’ help menu option.
    2. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, set up a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance instead.
    3. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this.
    4. Import the exported SAR .jar file into JDeveloper using the File menu, under the option ‘SOA Archive into SOA Project’.
    5. In JDeveloper set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role.
    6. Make the customizations and save the application project.
    7. Finally, redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (jar) file that can then be re-imported and deployed via Enterprise Manager.
    The Business Process Management (BPM) Suite

    In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without the need for some of the programming skills. The aim is to abstract much of the technical implementation and to provide Business Analysts with tools for immediately implementing organizational changes. Obviously there are some limitations on what they can do; however, the BPM Suite functionality increases with each release and, for the majority of cases, the tools remain as applicable as their developer-oriented counterparts. At the current time business processes must be explicitly coded to support just one of these use-cases: either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way. The components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager.

    Figure 2 – A BPM Process in JDeveloper BPM Suite.

    BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to those for BPEL. The steps to deploy a custom BPM process are also essentially the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such, the SOA Composite Editor actually has support for both artifacts and even allows them to be used together, such as calling a BPM process as a partner link from a BPEL process. For more details see the references below.

    Business Analyst Tooling

    In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that the system can be tuned to match changes to the business operation without the high cost of an IT project. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online; for Fusion Applications it is under the Business Process icon on the homepage of the Application Composer.

    Figure 3 – Business Process Composer showing a CRM process flow.

    The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered WebServices. This means no coding or complex interface development is required: simply drag, drop, and configure. The items in the business catalog are seeded either by Oracle (as a BPM Template) or by your own custom development; you cannot create or generate catalog content from BPM Composer directly. As per the screenshot, you can see the Business Catalog content in the BPM Project browser region. In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer, which focuses on non-approval business rules and domain value maps. At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations. This also means a limited number of associated BPM Templates are provided out-of-the-box, and therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forward, especially for use in SaaS deployments where full design-time JDeveloper access is not available.
    Further Reading

    - For BPEL – Fusion Applications Extensibility Guide – Section 12
    - For BPM – Fusion Applications Extensibility Guide – Section 7
    - The product-specific documentation and implementation guides for Fusion Applications
    - Fusion Middleware Developers Guide for SOA Suite
    - Modeling and Implementation Guide for Oracle Business Process Management
    - User’s Guide for Oracle Business Process Composer
    - Oracle University courses on BPM Suite and SOA Development

    Read the article

  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process there are a few ways to determine whether the process finished successfully. First, in the <installation directory>\log folder there are staretlprocess.log and staretl.html files. These files give the output results of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, based on the logging level set in the Configuration tool, can give extensive information about the ETL process, and it can be used as a validation of process completion.

    To automate the monitoring of these log files, perform the following steps:

    1. Write a custom application to parse through the log file and search for [ERROR]. In most cases a major [ERROR] could cause the ETL process to fail, so finding this value in the log is worth an alert.
    2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (this could be different based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end. Either way this would be good cause for an alert.
    3. Check the last line in the log file. The last line of the log file should contain an indication that the ETL run completed successfully. For example, the last line of a log file will say (results could differ based on the Reporting Database version): [INFO] (Message) Finished Writing Report
    4. You could write an Ant script to execute the ETL process with failonerror="true", and from there send the results to an external tool to monitor the jobs, send them by email, or write them to a database.

    With each ETL run, the log output appends to the existing log file by default. Because of this behavior, I would recommend renaming the existing log files before running a new ETL process. By doing this, only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA.

    Another way to determine if the ETL process has completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version this could be in the Stage or Star database; as of Reporting Database 2.2 and higher it is in the Star database. The etl_processmaster table records an entry for the ETL run along with a start and finish time. If the ETL process has failed, the finish date should be null. This table can be queried at the time the ETL process is expected to be finished, and an alert sent if the finish date is null. These are just some options; there are additional ways this can be accomplished based on these two areas - log files or the database.
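    As an illustration of steps 1-3 above, here is a minimal monitoring sketch in Python. It is only a sketch, not part of the original article: the log file path, the expected final step number (39), and the success message are assumptions taken from the examples above and would need to be adjusted for your installation and Reporting Database version.

        import sys
        from pathlib import Path

        LOG_FILE = Path(r"C:\Primavera\star\log\staretlprocess.log")  # assumed path - use your <installation directory>\log
        FINAL_STEP = "Step 39/39"                    # assumed step count - depends on your version
        SUCCESS_MARK = "Finished Writing Report"     # expected text on the last line

        def check_etl_log(log_file=LOG_FILE):
            """Return a list of problems found in the ETL log (an empty list means the run looks good)."""
            problems = []
            lines = log_file.read_text(errors="replace").splitlines()

            # Step 1: any [ERROR] entry is worth an alert.
            errors = [line for line in lines if "[ERROR]" in line]
            if errors:
                problems.append("%d [ERROR] entries found, e.g.: %s" % (len(errors), errors[0]))

            # Step 2: the final step must have been logged.
            if not any(FINAL_STEP in line for line in lines):
                problems.append("No '%s' entry - run incomplete or still in progress." % FINAL_STEP)

            # Step 3: the last line should report success.
            if not lines or SUCCESS_MARK not in lines[-1]:
                problems.append("Last line does not contain '%s'." % SUCCESS_MARK)

            return problems

        if __name__ == "__main__":
            issues = check_etl_log()
            for issue in issues:
                print("ALERT: " + issue)  # replace with your email or monitoring integration
            sys.exit(1 if issues else 0)

    The non-zero exit code makes it easy to plug into a scheduler or monitoring tool, which is the same idea as the failonerror="true" approach in step 4.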
    Here is an additional query to gather more information about your ETL run (connect as Staruser):

        SELECT SYSDATE,
               test_script,
               decode(loc, 0, PROCESSNAME, trim(SUBSTR(PROCESSNAME, loc + 1))) PROCESSNAME,
               duration duration
          FROM (SELECT (e.endtime - b.starttime) * 1440 duration,
                       to_char(b.starttime, 'hh24:mi:ss') starttime,
                       to_char(e.endtime, 'hh24:mi:ss') endtime,
                       b.PROCESSNAME,
                       instr(b.PROCESSNAME, ']') loc,
                       b.infotype test_script
                  FROM (SELECT processid, infodate starttime, PROCESSNAME, INFOMSG, INFOTYPE
                          FROM etl_processinfo
                         WHERE processid = (SELECT max(PROCESSID) FROM etl_processinfo)
                           AND infotype = 'BEGIN') b
                 INNER JOIN
                       (SELECT processid, infodate endtime, PROCESSNAME, INFOMSG, INFOTYPE
                          FROM etl_processinfo
                         WHERE processid = (SELECT max(PROCESSID) FROM etl_processinfo)
                           AND infotype = 'END') e
                    ON b.processid = e.processid
                   AND b.PROCESSNAME = e.PROCESSNAME
                 ORDER BY b.starttime)

    Read the article

  • How can I refresh/reinstall/clear/set-to-default my bootup process?

    - by Tchalvak
    I'm currently having a problem with my bootup process that is growing progressively worse as time goes on. While booting, the machine does a few minutes of hard-drive reading. During that, instead of showing the boot splash screen, it shows various dashes and dots, as if the video output isn't being recognized correctly. What is displayed actually has colors similar to the splash screen (purple); it is simply garbled. It then does a few more minutes of hard-drive reads, and if I leave it long enough, it sometimes boots into the desktop (and auto-logs-in). Sometimes, unfortunately, it just hangs on that garbled screen and reads from the hard drive forever. Notably, I've also stopped being able to access GRUB during bootup (perhaps it is just not displayed correctly by the video, hard to tell). This is a symptom that has grown over the course of various Ubuntu upgrades; at least, I suspect that the upgrade process is leaving behind cruft. So, is there a safe way for me to "refresh" the boot system so that it is clean, new, fast, and reliable? For example, to test out a cleanly configured boot, make sure that it works (try before I buy), and then apply it to the system to eliminate as much of this problem as possible? Edit: Here is the requested bootchart: http://imgur.com/9jocF

    Read the article

  • How to fix "Sub-process /usr/bin/dpkg returned an error code (1)" when installing and upgrading packages?

    - by soum
    I am getting this error whenever trying to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)". I need help, as I cannot install or upgrade any packages on my Ubuntu 11.10 system. Here is the rest of the error:

        unknown argument `triggered'
        dpkg: error processing mtools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager-pptp-gnome ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-pptp ...
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-gnome ...
        /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager ...
        No apport report written because MaxReports is reached already
        /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for mscompress ...
        postinst called with unknown argument `triggered'
        dpkg: error processing mscompress (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks
         mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp
         network-manager-gnome network-manager mscompress
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Exchange 2010 Deployment Notes - iPhone and Exchange ActiveSync issue

    - by BWCA
    After we moved one of our user mailboxes from Exchange 2003 to 2010, the user started getting a "Cannot get mail. The connection to the server failed" error message on their iPhone. There are a lot of references on Google suggesting you check for inherited permissions to resolve the error message. We quickly determined that we were not dealing with a permissions issue. After some additional troubleshooting and research, we were able to isolate the problem to a device partnership issue.

    To resolve the issue, use ADSI Edit to find the user object. When you find the user object, double-click on it and you should see a CN=ExchangeActiveSyncDevices container under the user object as shown below. On the right-hand side, you should see one or more device partnerships. Right-click the device partnership for the device the user is using, and click Delete. After you remove the device partnership, please wait until Active Directory replication completes before you set up the device again.

    Read the article

  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine and after a few seconds it is getting killed; dmesg shows the output below. The command is:

        httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1

    Although I have done this test a number of times, I had never faced this error before; I have been observing it for the last two days. My Ubuntu version is 10.04 and the httperf version is httperf-0.9.0.

        [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
        [ 2997.180632] Killed process 7977 (apache2)
        [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
        [ 2997.184844] Killed process 7971 (rsyslogd)
        [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
        [ 2997.188829] Killed process 7978 (apache2)
        [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
        [ 2997.192822] Killed process 7973 (atd)
        [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
        [ 2997.196811] Killed process 8102 (httperf)

    Output of the free command:

                     total       used       free     shared    buffers     cached
        Mem:       3862768     163000    3699768          0       2384      13068
        -/+ buffers/cache:     147548    3715220
        Swap:      3905528          0    3905528

    Read the article

  • What is juju doing when my deployment is "pending"? It seems to take a while without much happening

    - by mfisch
    After deploying a charm, either locally or not, juju status lists "Pending". It seems to sit in this state for a while: longer in my experience in the cloud, a bit shorter locally. What is juju doing during this time? For local instances it's a couple of minutes or less; with cloud instances it is longer, up to 10 minutes in some cases. I am just curious whether deploys stay pending while the VM is being set up, or whether something else is going on.

    Read the article

  • How do you deal with UAC when creating a process as a different user?

    - by sysrpl
    I am having an issue with UAC and executing a non-interactive process as a different user (using APIs such as CreateProcessAsUser or CreateProcessWithLogonW). My program is intended to do the following:

    1) Create a new Windows user account (check, works correctly)
    2) Create a non-interactive child process as the new user account (fails when UAC is enabled)

    My application includes an administrator manifest, and it elevates correctly when UAC is enabled in order to complete step 1. But step 2 fails to execute correctly. I suspect this is because the child process, which executes as another user, does not inherit the elevated rights of my main process (which executes as the interactive user). I would like to know how to resolve this issue. When UAC is off my program works correctly. How can I deal with UAC or the required elevated rights in this situation? If it helps any, the child process needs to run as another user in order to set up file encryption for the new user account.

    Read the article

  • C++ Compile problem when using Windows - CodeGear

    - by Carlos
    This is a follow-up question to one I made earlier: http://stackoverflow.com/questions/2461977/problem-compiling-c-in-codegear. Btw, thanks Neil Butterworth for your help.

    A quick recap: I'm currently developing a C++ program for university. I used NetBeans 6.8 on my personal computer (Mac) and everything works perfectly. When I tried it on my Windows partition or on the university PCs using CodeGear RAD Studio 2009 & 2010, I was getting a few compile errors, which were solved by adding the following header file:

        #include <string>

    However, now the program does compile but it doesn't run; I just get a blank console. And I am getting the following in the CodeGear event log:

        Thread Start: Thread ID: 2024. Process Project1.exe (3280)
        Process Start: C:\Users\Carlos\Documents\RAD Studio\Projects\Debug\Project1.exe. Base Address: $00400000. Process Project1.exe (3280)
        Module Load: Project1.exe. Has Debug Info. Base Address: $00400000. Process Project1.exe (3280)
        Module Load: ntdll.dll. No Debug Info. Base Address: $77E80000. Process Project1.exe (3280)
        Module Load: KERNEL32.dll. No Debug Info. Base Address: $771C0000. Process Project1.exe (3280)
        Module Load: KERNELBASE.dll. No Debug Info. Base Address: $75FE0000. Process Project1.exe (3280)
        Module Load: cc32100.dll. No Debug Info. Base Address: $32A00000. Process Project1.exe (3280)
        Module Load: USER32.dll. No Debug Info. Base Address: $77980000. Process Project1.exe (3280)
        Module Load: GDI32.dll. No Debug Info. Base Address: $75F50000. Process Project1.exe (3280)
        Module Load: LPK.dll. No Debug Info. Base Address: $75AB0000. Process Project1.exe (3280)
        Module Load: USP10.dll. No Debug Info. Base Address: $76030000. Process Project1.exe (3280)
        Module Load: msvcrt.dll. No Debug Info. Base Address: $776A0000. Process Project1.exe (3280)
        Module Load: ADVAPI32.dll. No Debug Info. Base Address: $777D0000. Process Project1.exe (3280)
        Module Load: SECHOST.dll. No Debug Info. Base Address: $77960000. Process Project1.exe (3280)
        Module Load: RPCRT4.dll. No Debug Info. Base Address: $762F0000. Process Project1.exe (3280)
        Module Load: SspiCli.dll. No Debug Info. Base Address: $759F0000. Process Project1.exe (3280)
        Module Load: CRYPTBASE.dll. No Debug Info. Base Address: $759E0000. Process Project1.exe (3280)
        Module Load: IMM32.dll. No Debug Info. Base Address: $763F0000. Process Project1.exe (3280)
        Module Load: MSCTF.dll. No Debug Info. Base Address: $75AD0000. Process Project1.exe (3280)

    I would really appreciate any help or ideas on how to solve this problem.

    P.S.: In case anyone wonders why I am sticking with CodeGear, it is because it's the IDE the professors use to evaluate our assignments.

    Read the article

  • What's the correct way to stop a background process on Mac OS X?

    - by mcsheffrey
    I have an application with 2 components: a desktop application that users interact with, and a background process that can be enabled from the desktop application. Once the background process is enabled, it will run as a user launch agent independently of the desktop app. However, what I'm wondering is what to do when the user disables the background process. At this point I want to stop the background process, but I'm not sure what the best approach is. The 3 options that I see are:

    1. Use the 'kill' command. Direct, but not reliable, and it just seems somewhat "wrong".
    2. Use an NSMachPort to send an exit request from the desktop app to the background process. This is the best approach I've thought of, but I've run into an implementation problem (I'll be posting this in a separate query) and I'd like to be sure that the approach is right before going much further.
    3. Something else???

    Thank you in advance for any help/insight that you can offer.

    Read the article

  • file_operations question: how do I know if a process that opened a file for writing has decided to close it?

    - by djTeller
    Hi Kernel Gurus, I'm currently writing a simple "multicaster" module. Only one process can open a proc filesystem file for writing, and the rest can open it for reading. To do so I use the inode_operations .permission callback: I check the operation, and when I detect that someone has opened the file for writing I set a flag ON. I need a way to detect when the process that opened the file for writing has decided to close it, so I can set the flag OFF and someone else can open it for writing. Currently, when someone opens the file for writing I save the current->pid of that process, and when the .close callback is called I check whether the calling process is the one I saved earlier. Is there a better way to do that? Perhaps without saving the pid, by checking the files that the current process has opened and their permissions... Thanks!

    Read the article

  • How to properly configure IntelliJ IDEA for deployment of a JBoss Seam project?

    - by Piotr Kochanski
    I would like to use IntelliJ IDEA for development of a JBoss Seam project. seam-gen creates the project stub, however the stub is not complete. In particular it is not clear how to deploy such a project. First of all I had to manually define the web project facet and add libraries to its deployment definition. The other problem was the persistence.xml file. In the Seam-generated project it does not exist, since Ant uses one of the persistence-dev.xml, persistence-prod.xml, persistence-test.xml files, changing its name depending on the deployment type (which is ok). Obviously I can create persistence.xml by hand, but that goes against the Seam way of development. Finally I decided to use Ant directly, which is not particularly comfortable. All these tweaks made me think that I am doing something wrong from the IntelliJ IDEA point of view. What is the efficient way of configuring IntelliJ for use with JBoss Seam (deployment, in particular)? I am using JBoss Seam 2.1.1, IntelliJ IDEA 8.1.4, JBoss 4.3.3.

    Read the article

  • How to do a javascript redirection to a ClickOnce deployment URL?

    - by jerem
    I have a ClickOnce application used to view some documents on a website. When connected, the user sees a list of documents as links to http://server/myapp.application?document=docname. It worked fine until I had to integrate the website's authentication/security system into my application. The website uses a ticketing system to grant access to its users. A ticket is generated by a web application and needs to be added to the deployment URL query string, and then at application startup I have to check that the ticket given in the query string is valid by making another request to the web application. So the deployment URL becomes something like: http://server/myapp.application?document=docname&ticket=ticketnumber. The problem is that the ticket is valid for only 10 seconds, so I have to get it only after the user has clicked a link. My first try was to have some JavaScript do the request to get the ticket, generate the proper deployment URL and then redirect the user to this URL with "window.location = deploymentUrl;". It works fine in Firefox, but IE does not prompt the user for installation. I guess it is a ClickOnce security constraint, but I am able to do the redirection when doing it on localhost, so I hope there is a workaround. I have also added the server to the "trusted sites" list in the IE options. Is it possible to have that working in IE? What are my other options to do that?

    Read the article

  • What are SharePoint (MOSS 2007) development/deployment best practices?

    - by Satish
    We are deploying SharePoint MOSS 2007 at our work, and I'm trying to come up with a SharePoint development and deployment methodology. We have Dev/QA/Prod environments and I need a way, preferably automated, to deploy changes from Dev to QA and from there to Prod. We are creating site collections, web parts, etc. Some of it is done directly within SharePoint, some through SharePoint Designer or Visual Studio. I'm looking for a way to extract this and deploy it to other environments. I tried stsadm backup/restore, import/export, etc., but they all move the data along as well; I just need the structure deployed. Content deployment paths and jobs do the same thing. We use MSBuild and CruiseControl.NET for other .NET projects to automate the build/deployment process, and I'm looking for something similar with SharePoint if possible. What are your best practices for this? Since my team is still learning we don't have a defined process, and we are open to changing our development process if needed.

    Read the article

  • How to force two processes to run on the same CPU?

    - by kovan
    Context: I'm programming a software system that consists of multiple processes. It is programmed in C++ under Linux, and the processes communicate among themselves using Linux shared memory. Usually, in software development, performance optimization is done in the final stage, and here I came to a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one CPU), it was only able to use 3 cores, thus wasting 25% of the CPU power on the former and more than 60% on the latter. After much research, and having discarded mutex and lock contention, I found out that the time was being wasted on shmdt/shmat calls (detach and attach to shared memory segments). After some more research, I found out that these CPUs, which usually are AMD Opteron and Intel Xeon, use a memory architecture called NUMA, which basically means that each processor has its own fast "local memory", and accessing memory from other CPUs is expensive.

    After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread in them. This seems to kill performance, as processes are constantly accessing memory belonging to other processes.

    Question: Now, the question is, is there any way to force pairs of processes to execute on the same CPU? I don't mean to force them to always execute on the same processor, as I don't care which one they are executed on, although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule this "brother" process (which is the process with which it communicates through shared memory) on that same processor, so that performance is not penalized.
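    As noted above, pinning both processes to the same fixed CPU would also do the job, even though it is not the ideal dynamic co-scheduling described at the end of the question. On Linux this is done with CPU affinity (the mechanism behind taskset and the sched_setaffinity(2) system call). The following is only a minimal illustrative sketch, in Python rather than C++; the PIDs and the chosen CPU number are made-up values for the example.

        import os

        def pin_to_same_cpu(pids, cpu=0):
            """Restrict every process in `pids` to the single CPU `cpu` (Linux only)."""
            for pid in pids:
                # os.sched_setaffinity limits the set of CPUs the scheduler may
                # use for the given process; a one-element set pins it to one CPU.
                os.sched_setaffinity(pid, {cpu})

        # Example: pin a pair of communicating processes (hypothetical PIDs) to CPU 2.
        pin_to_same_cpu([1234, 1235], cpu=2)

        # Check which CPUs a process is currently allowed to run on.
        print(os.sched_getaffinity(1234))

    From C++ the equivalent call is sched_setaffinity(2) (or pthread_setaffinity_np for individual threads). The pinning is per process, so it has to be applied to each member of a communicating pair.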

    Read the article

  • Automating ftp command line application redirecting I/O in .Net

    - by SoMoS
    Hello, I'm trying to automate the ftp client that Windows includes by redirecting the I/O of the process. What I'm doing is starting the process from my application and trying to read what the client prints on the screen while sending my commands to it. The problem is that I can hardly read any of the data sent by the ftp client: some data is present, but most of it is never read. This is the code I have so far:

        Public Sub Start()
            process = New Diagnostics.Process()
            process.StartInfo.FileName = "ftp.exe" ' The command is on the path
            process.StartInfo.CreateNoWindow = True
            process.StartInfo.RedirectStandardInput = True
            process.StartInfo.RedirectStandardOutput = True
            process.StartInfo.UseShellExecute = False
            process.Start()
            process.StandardInput.AutoFlush = True
            process.BeginOutputReadLine()
        End Sub

        ' Takes data from stdout
        Private Sub process_OutputDataReceived(ByVal sender As Object, ByVal e As System.Diagnostics.DataReceivedEventArgs) Handles process.OutputDataReceived
            ' At this moment there is code here to show the stdout in a textbox
        End Sub

        ' Sends data to stdin
        Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click
            process.StandardInput.WriteLine(Me.TextEdit1.Text)
        End Sub

    Now, when I execute this and send "?" for example, I just get the first line (and I should get a lot more). Or when I send the open command I should receive a response, but nothing is received. Any ideas? Another question: when a console application writes on the screen, it always does that by writing to stdout or stderr, doesn't it?

    Read the article

  • Bash Array Problem

    - by Deepak Prasanna
    I wrote a bash script which tries to find a process and start it if it has stopped. This is the script:

        #!/bin/bash
        process=thin
        path=/home/deepak/abc/
        initiate=thin start -d
        process_id=`ps -ef | pgrep $process | wc -m`
        if [ "$process_id" -gt "0" ]; then
            echo "The process process is running!!"
        else
            cd $path
            $initiate
            echo "Oops the process has stopped"
        fi

    This worked fine, and I thought of using arrays so that I could form a loop and use this script to check multiple processes. So I modified my script like this:

        #!/bin/bash
        process[1]=thin
        path[1]=/home/deepak/abc/
        initiate[1]=thin start -d
        process_id=`ps -ef | pgrep $process[1] | wc -m`
        if [ "$process_id" -gt "0" ]; then
            echo "Hurray the process ${process[1]} is running!!"
        else
            cd ${path[1]}
            ${initiate[1]}
            echo "Oops the process has stopped"
            echo "Continue your coffee, the process has been stated again! ;)"
        fi

    I get this error if I run this script:

        DontWorry.sh: 2: process[1]=thin: not found
        DontWorry.sh: 3: path[1]=/home/deepak/abc/: not found
        DontWorry.sh: 4: initiate[1]=thin start -d: not found

    I googled to find a solution for this; most of the answers insisted on using "#!/bin/bash" instead of "#!/bin/sh". I tried both but nothing worked. What am I missing?

    Read the article

  • "RewriteBase: argument is not a valid URL" error

    - by user305434
    Hi, I'm trying to configure the .htaccess of my website. http://213.175.210.49/~incisozl/ is the temporary URL to the root (~/public_html/). When I try to rewrite the URL in .htaccess I get the following error:

        /home/incisozl/public_html/.htaccess: RewriteBase: argument is not a valid URL, referer: http://213.175.210.49/~incisozl/inci-sozluk/somestring

    My rewrite rules are:

        RewriteEngine On
        RewriteBase /
        RewriteRule ^/?$ /index.php [L]
        RewriteRule ^inci-sozluk/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=word&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk/([^.\?/]+)?$ /seo.php?process=word&q=$1 [L]
        RewriteRule ^inci-sozluk/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=word&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluktest/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=wordtest&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-bugun/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=wordbg&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk-bugun/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=wordbg&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-dun/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=worddn&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-dun/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=worddn&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk-ters/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=wordts&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk-ters/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=wordts&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-cvpters/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=cvpwordts&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk-cvpters/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=cvpwordts&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-ileti/([0-9]+)/?$ /seo.php?process=eid&eid=$1 [L]
        RewriteRule ^inci-sozluk-ileticvp/([0-9]+)/?$ /seo.php?process=cvpeid&eid=$1 [L]

    Btw, it works fine when I use it with the www.incisozluk.org pointed domain.

    Read the article

  • Python subprocess Popen.communicate() equivalent to Popen.stdout.read()?

    - by Christophe
    Very specific question (I hope): what are the differences between the following three pieces of code? (I expect the only difference to be that the first does not wait for the child process to finish, while the second and third ones do. But I need to be sure this is the only difference...) I also welcome other remarks/suggestions (though I'm already well aware of the shell=True dangers and cross-platform limitations). Note that I already read "Python subprocess interaction, why does my process work with Popen.communicate, but not Popen.stdout.read()?" and that I do not want/need to interact with the program afterwards. Also note that I already read "Alternatives to Python Popen.communicate() memory limitations?" but that I didn't really get it...

    First code:

        from subprocess import Popen, PIPE

        def exe_f(command='ls -l', shell=True):
            "Function to execute a command and return stuff"
            process = Popen(command, shell=shell, stdout=PIPE, stderr=PIPE)
            stdout = process.stdout.read()
            stderr = process.stderr.read()
            return process, stderr, stdout

    Second code:

        from subprocess import Popen, PIPE

        def exe_f(command='ls -l', shell=True):
            "Function to execute a command and return stuff"
            process = Popen(command, shell=shell, stdout=PIPE, stderr=PIPE)
            (stdout, stderr) = process.communicate()
            return process, stderr, stdout

    Third code:

        from subprocess import Popen, PIPE

        def exe_f(command='ls -l', shell=True):
            "Function to execute a command and return stuff"
            process = Popen(command, shell=shell, stdout=PIPE, stderr=PIPE)
            code = process.wait()
            stdout = process.stdout.read()
            stderr = process.stderr.read()
            return process, stderr, stdout

    Thanks.
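    To make the comparison concrete, here is a small self-contained sketch (an addition, not part of the original question) that exercises the first two patterns on a command that produces a lot of output; the command used is just an assumption, and any verbose command would do. With output larger than the OS pipe buffer, the third pattern (wait() before reading) can block, because the child stalls writing to a full pipe while the parent sits in wait(); communicate() avoids that by draining both pipes while it waits, which is exactly the kind of difference being asked about.

        from subprocess import Popen, PIPE

        # A command that writes far more than a typical pipe buffer (~64 KB) can hold.
        # Assumes a `python` executable is on the PATH.
        BIG_OUTPUT_CMD = "python -c \"print('x' * 1000000)\""

        def run_with_read(command=BIG_OUTPUT_CMD):
            """Variant 1: read() each pipe directly; read() blocks until EOF on that pipe.
            Note: reading stdout then stderr sequentially can itself deadlock if the
            child also fills stderr, which is why communicate() is generally safer."""
            p = Popen(command, shell=True, stdout=PIPE, stderr=PIPE)
            out = p.stdout.read()
            err = p.stderr.read()
            p.wait()  # reap the child so returncode is populated
            return p.returncode, len(out), len(err)

        def run_with_communicate(command=BIG_OUTPUT_CMD):
            """Variant 2: communicate() waits for the process and drains both pipes concurrently."""
            p = Popen(command, shell=True, stdout=PIPE, stderr=PIPE)
            out, err = p.communicate()
            return p.returncode, len(out), len(err)

        # Variant 3 (p.wait() before reading) is deliberately not run here: with output
        # larger than the pipe buffer it can hang for the reason described above.

        if __name__ == "__main__":
            print(run_with_read())
            print(run_with_communicate())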

    Read the article

  • How to optimize an ASP.NET app spawning a new process for each request?

    - by Recycle Bin
    I have an ASP.NET MVC application that spawns a Process as follows:

        Process p = new Process();
        p.EnableRaisingEvents = true;
        p.Exited += new EventHandler(p_Exited);
        p.StartInfo.Arguments = "-interaction=nonstopmode " + inputpath;
        p.StartInfo.WorkingDirectory = dir;
        p.StartInfo.UseShellExecute = false;
        p.StartInfo.FileName = "pdflatex.exe";
        p.StartInfo.LoadUserProfile = true;
        p.Start();
        p.WaitForExit();

    Before going further, I need to know whether, e.g., pdflatex.exe is managed code or native code.

    Edit: I need to consider this because (hopefully I am not wrong...):

    1. Each ASP.NET application runs in a separate/isolated AppDomain, as opposed to a separate/isolated process.
    2. A native executable cannot live in an AppDomain.

    To be continued... In short, I hope my site does not spawn a new process for each request, because a process is more expensive than an application domain.

    Read the article

  • Deployable dependencies in Visual Studio 2010 SP1 Beta

    - by DigiMortal
    One new feature that comes with Visual Studio 2010 SP1 Beta is support for deployable dependencies. A deployable dependency means that you can include all the necessary DLLs in the deployment package, so your application ships with all the assemblies it needs to run. In this posting I will show you how to use deployable dependencies.

    When I open my ASP.NET web application, I have a new option for references when I right-click on my web project: Add Deployable Dependencies… If you select it, you will see a dialog where you can select the dependencies you want to add to your project package. When the packages you need are selected, click OK. Visual Studio adds a new folder to your project called _bin_DeployableAssemblies. The screenshot on the right shows the list of assemblies added for ASP.NET Pages and Razor. All DLLs required to run ASP.NET MVC 3 with the Razor view engine are here. I am not sure if NuGet.Core.dll is required in production, but if it is added then let it be there.

    Deploy to Azure

    I tried to deploy my ASP.NET MVC project that uses Razor to Windows Azure after adding deployable dependencies to my project. Deployment went fine and the web role instance started without any problems. The only DLL reference I made local was the one for System.Web.Mvc; all the Razor stuff came with the deployable dependencies.

    Conclusion

    Visual Studio support for deployable dependencies is great because component providers can build definitions for their components, so that assemblies loaded dynamically at runtime will also be included in the deployment package.

    Read the article

  • Part 14: Execute a PowerShell script

    In the series the following parts have been published:

    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    With PowerShell you can add powerful scripting to your build, for example to execute a deployment. If you want more information on PowerShell, please refer to http://technet.microsoft.com/en-us/library/aa973757.aspx

    For this example we will create a simple PowerShell script that prints “Hello world!”. To create the script, create a new text file and name it “HelloWorld.ps1”. Add the following to the contents of the script:

        Write-Host "Hello World!"

    To test the script do the following:

    1. Open the command prompt.
    2. To run the script you must change the execution policy. To do this, execute in the command prompt: powershell set-executionpolicy remotesigned
    3. Now go to the directory where you have saved the PowerShell script.
    4. Execute the following command: powershell .\HelloWorld.ps1

    In this example I use a relative path, but when the path to the PowerShell script contains spaces, you need to change the syntax to powershell "& '<full path to script>' ", for example: powershell "& 'C:\sources\Build Customization\SolutionToBuild\PowerShell Scripts\HelloWorld.ps1' "

    In this blog post, I create a new solution and that solution also includes this PowerShell script. I want to create an argument on the Build Process Template that holds the path to the PowerShell script. In the Build Process Template I will add an InvokeProcess activity to execute the PowerShell command. This InvokeProcess activity needs the location of the script as an argument for the PowerShell command. Since you don't know the full path of this script on the build server, you could specify the relative path of the script in the argument, but it is hard to find out what the relative path is. I prefer to specify the location of the script in source control and then convert that server path to a local path. To do this conversion you can use the ConvertWorkspaceItem activity.

    So to complete the task, open the Build Process Template CustomTemplate.xaml that we created in earlier parts and follow these steps:

    1. Add a new argument called “DeploymentScript” and set the appropriate settings in the metadata. See Part 2: Add arguments and variables for more information.
    2. Scroll down beneath the TryCatch activity called “Try Compile, Test, and Associate Changesets and Work Items”.
    3. Add a new If activity and set the condition to "Not String.IsNullOrEmpty(DeploymentScript)" to ensure it will only run when the argument is passed.
    4. In the Then branch of the If activity, add a new Sequence activity and rename it to “Start deployment”.
    5. Click on the activity and add a new variable called DeploymentScriptFilename (scoped to the “Start deployment” Sequence).
    6. Add a ConvertWorkspaceItem activity to the “Start deployment” Sequence.
    7. Add an InvokeProcess activity beneath the ConvertWorkspaceItem activity in the “Start deployment” Sequence.
    8. Click on the ConvertWorkspaceItem activity and change the properties:
       DisplayName = Convert deployment script filename
       Input = DeploymentScript
       Result = DeploymentScriptFilename
       Workspace = Workspace
    9. Click on the InvokeProcess activity and change the properties:
       Arguments = String.Format(" ""& '{0}' "" ", DeploymentScriptFilename)
       DisplayName = Execute deployment script
       FileName = "PowerShell"
    10. To see results from the PowerShell command, drop a WriteBuildMessage activity on the "Handle Standard Output" and pass the stdOutput variable to the Message property. Do the same with a WriteBuildError activity on the "Handle Error Output".
    11. To publish it, check in the Build Process Template.

    This leads to the following result. We now go to the build definition that depends on the template and set the path of the deployment script to the server path of HelloWorld.ps1. (If you want to see the result of the PowerShell script, change the logging verbosity to Detailed or Diagnostic.) Save and run the build.

    A lot of the deployment scripts you have will take some kind of arguments (like username/password or environment variables) that you want to define in the Build Definition. To make the PowerShell script configurable, you can follow these steps. Create a new script and give it the name “HelloWho.ps1”. Add the following lines to the contents of the file:

        param (
            $person
        )
        $message = [System.String]::Format("Hello {0}!", $person)
        Write-Host $message

    When you now run the script at the command prompt, you will see the greeting that includes the parameter you pass in. So let's change the Build Process Template to accept one parameter for the deployment script. You can of course make it more configurable by adding a for-loop that reads through a collection of parameters, but that is out of scope for this blog post.

    1. Add a new argument called DeploymentScriptParameter.
    2. In the InvokeProcess activity where the PowerShell command is executed, modify the Arguments property to String.Format(" ""& '{0}' '{1}' "" ", DeploymentScriptFilename, DeploymentScriptParameter).
    3. Check in the Build Process Template.

    Now modify the build definition, set the parameter of the deployment script to any value, and run the build. You can download the full solution at BuildProcess.zip. It includes the sources of every part and will continue to evolve.

    Read the article

  • How to process payments for software (activation code)?

    - by jsoldi
    I want to sell software online and I need an easy-to-implement payment processing system. What I'm actually going to be selling is an activation code (one per purchase) that would activate the trial version of a product. I was about to use this one, but I just found out that people without a paid email account (i.e., those on Hotmail or Yahoo) can't process their orders, which I'm sure would discourage many, if not most, of the possible buyers.

    Read the article
