Search Results

Search found 21309 results on 853 pages for 'electronic leasing process'.


  • Platform for Efficiency: Boeing Defense, Space & Security integrates supply chain processes using Oracle Business Process Management solutions. by Fred Sandsmark

    - by JuergenKress
    Like most companies, aerospace giant Boeing has its jargon - words and phrases that uniquely define its products and processes. Take the word platform. It is used at Boeing to mean a family of aircraft - the F/A-18 fighter, for example, or the 777 jetliner. But since August 2009, employees in the Global Services & Support (GS&S) division of Boeing Defense, Space & Security have been talking about a different sort of platform: a supply chain technology platform, based on Oracle Business Process Management (Oracle BPM) solutions and Oracle SOA Suite. That platform, built with the assistance of Oracle Diamond Partner Capgemini, is serving as a jumping-off point for Boeing's GS&S staff to deploy radically improved business processes supported by Oracle Fusion Applications to build a high-visibility, end-to-end supply chain. This business process-driven technology platform has ambitious goals: to help GS&S respond more quickly and accurately to its customers' needs, to make business processes at all GS&S sites more consistent and less expensive, and to create a foundation for further improvement and efficiency. Read the full article here.
    Want to publish your BPM 11g success story or request a partner/customer reference? The BPM Center of Excellence and First 100 Days of BPM documents (MWD_bpm_si_Centre_of_Excellence_0811.pdf and First 100 Days of BPM whitepaper.pdf) are available in our SOA Community Workspace (SOA Community membership required). For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Technorati Tags: BPM, BPM reference, BPM Capgemini, BPM first 100 days, BPM center of excellence, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Using Oracle BPM to Extend Oracle Applications

    - by Michelle Kimihira
    Author: Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware
    Customers often modify applications to meet their specific business needs - varying regulatory requirements, unique business processes, product mix transitions, etc. Traditional implementation practices for such modifications are typically invasive in nature and introduce risk into projects, affect time-to-market and ease of use, and ultimately increase the costs of running and maintaining the applications. Another downside of these traditional implementation practices is that they literally cast the application in stone, making it difficult for end users to tailor their individual work environments to meet specific needs without getting IT involved. For many businesses, however, IT lacks the capacity to support such rapid business changes. As a result, adopting innovative solutions to change the economics of customization becomes an imperative rather than a choice.
    Let's look at a banking process in Siebel Financial Services and Oracle Policy Automation (OPA) using Oracle Business Process Management. This approach makes modifications simple, quick to implement, and easy to maintain and upgrade. The process model is based on the Loan Origination Process Accelerator, i.e., a set of ready-to-deploy business solutions developed by Oracle using Business Process Management (BPM) 11g, containing customizable and extensible pre-built processes to fit specific customer requirements. This use case is a branch-based loan origination process. Origination includes a number of steps, ranging from accepting a loan application, applicant identity and background verification (Know Your Customer), credit assessment, and risk evaluation to the eventual disbursal of funds (or rejection of the application). We use BPM to model all of these individual tasks and integrate (via web services) with Siebel Financial Services and (simulated) backend applications: FLEXCUBE for loan management, Background Verification, and Credit Rating. The process flow starts in Siebel when a customer applies for a loan, switches to OPA for eligibility verification and product recommendations, and then hands off to BPM for approvals. The OPA Connector for Siebel simplifies integration with Siebel's web services framework by saving the results of the self-service interview directly into Siebel. This combination of user input and product recommendation invokes the BPM process for loan origination. At the end of the approval process, we update Siebel and the financial app to complete the loop. We use BPM Process Spaces to display role-specific data via dashboards, including the ability to track the status of a given process (flow trace). Loan underwriters have visibility into the product mix (loan categories), the status of loan applications (count of approved/rejected/pending), the volume and value of loans approved per processing center, processing times, requested vs. approved amounts, and other relevant business metrics.
    Summary: Oracle recommends the use of Fusion Middleware as an extensions platform for applications. This approach makes modifications simple, quick to implement, and easy to maintain and upgrade (by moving customizations away from the applications to the process layer). It is also easier to manage processes that span multiple applications by using Oracle BPM.
    Additional Information: Product Information on Oracle.com: Oracle Fusion Middleware. Follow us on Twitter and Facebook. Subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • Oracle Tutor: XPDL conversion (and why you should care)

    - by mary.keane
    You may have noticed that the Oracle Business Process Converter feature in Tutor 14 supports "XPDL" conversion to Oracle Business Process Analysis Suite (BPA), Oracle Business Process Management Suite (BPM), and Oracle Tutor, and you may have briefly wondered "what is XPDL?" before you moved on to the Visio import feature (a very popular feature in Tutor 14). This posting is for those who do not yet understand (or care about) XPDL and process modeling. Many of us (and I'm including myself) have spent years working in the process definition arena: we've written procedures, designed systems and software to help others write procedures, and have been responsible for embedding policies and procedures into training material for employees. We've worked with tools such as Oracle Tutor, Microsoft Visio, Microsoft Word, and UPK. Most of us have never worked with "modeling tools" before, and we certainly never had to understand BPMN. It's a brave new world in this arena, and companies desperately need people with policy and procedural system expertise who can work with system analysts so there is a seamless transfer of knowledge from IT to employees. When working with applications, a picture is worth a thousand words, so eventually you're going to need to understand, and be able to work with, business process models.
    XPDL is an acronym for XML Process Definition Language, and it is an interchange format for business process models. It allows you to take a BPMN model that was developed in one workflow application, such as BizAgi, and import it into another workflow application or a true BPMN management system such as Oracle BPM. Specifically, the XPDL format contains the graphical information of a model as well as any executable information. By using a common format, models can be moved from a basic modeling application used by business owners to applications used by system architects. Over 80 applications support the XPDL format, including MetaStorm ProVision, BEA ALBPM, BizAgi, and Tibco. I mention these applications because we have provided XSLT mapping files specifically for these vendors. Oracle Business Process Converter was designed with user extensibility in mind, so users can add their own XML files to allow additional XPDL models from other vendors to be converted to BPM, BPA, and Oracle Tutor. Instructions on how to add your own files can be found in Appendix 4 of the Oracle Business Process Converter manual.
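    Because XPDL is plain XML, you can also peek inside an exported model before converting it. Here is a minimal sketch using Python's standard library, assuming a hypothetical sample.xpdl export and the XPDL 2.x namespace (verify the namespace URI and element names against your vendor's export; they vary by version):

        import xml.etree.ElementTree as ET

        # XPDL 2.x namespace; older exports may use a different URI.
        XPDL = "{http://www.wfmc.org/2008/XPDL2.1}"

        tree = ET.parse("sample.xpdl")  # hypothetical file exported from BizAgi
        root = tree.getroot()

        # List each process with its activities and transitions.
        for proc in root.iter(XPDL + "WorkflowProcess"):
            print("Process:", proc.get("Name"))
            for act in proc.iter(XPDL + "Activity"):
                print("  Activity:", act.get("Name"))
            for tr in proc.iter(XPDL + "Transition"):
                print("  Transition:", tr.get("From"), "->", tr.get("To"))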
    Let's take a visual look at how this works. Here is an example of a model developed in BizAgi:
    (Figure: process model developed in BizAgi)
    This model can be created by the average business user without a large learning curve, and it's a good start for the system analyst who will be adding web services, as well as for the business manager who manages the process described in the model. By exporting this model as XPDL, the information can be converted into Oracle BPA and Oracle BPM, as well as converted to Oracle Tutor to become the framework for a procedure. Through this conversion feature, one graphic illustration of a business process can be used by a system analyst, business analyst, business manager, and employee, as seen below.
    (Figure: model converted to a Tutor procedure - the task section of the procedure after conversion from an XPDL file)
    (Figure: model converted to BPA)
    (Figure: model converted to BPM)
    End users still want step-by-step instructions on how to perform their jobs, so procedures (Oracle Tutor) and application simulations (UPK) are still a critical piece of the solution. But IT professionals need graphic descriptions of how the applications work, regardless of whether there are any tasks involving humans. Now there is a way to convert procedures (Oracle Tutor docx files) and basic models (XPDL files) so that business managers and system analysts can share process information.
    References:
    Wikipedia, XPDL
    Workflow Management Coalition, XPDL Support and Resources
    Oracle Business Process Converter manual, Oracle Tutor 14
    Oracle Business Process Management 11g
    If you have any XPDL conversion stories to share, we'd love to hear from you. Best wishes for the coming new year, Mary Keane, Senior Development Manager, Oracle Tutor and BPM

    Read the article

  • My hosting server giving memory allocation error

    - by Usman
    I have hosted my website on a shared hosting Linux server. Although at most 10-15 visitors come to my website daily, my WordPress website frequently returns a 500 Internal Server Error. I accessed my server's error log, and the following error is showing:

        [Tue Dec 04 08:57:45 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:45 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:43 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:43 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:42 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:42 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:41 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:41 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:40 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:40 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:32 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:31 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:29 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:29 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php
        [Tue Dec 04 08:57:26 2012] [error] [client 117.205.74.227] (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/grasphub/public_html/index.php

    Is my hosting service really bad? Is there any solution? Thanks in advance.

    Read the article

  • Cool Tools You Can Use: Validation Templates for PeopleSoft Contracts Processes

    - by Mark Rosenberg
    This is the first in a series of postings we'll be making under the heading of Cool Tools You Can Use. Our PeopleSoft product management team identified the need for this series after reflecting on the many conversations we have each year with our PeopleSoft community members. During these conversations, we were discovering that customers and implementation partners were often not aware that solutions exist to the problems they were trying to address, and that the solutions were readily available at no additional charge. Thus, the Cool Tools You Can Use series will describe the business challenge we've heard, the PeopleSoft solution to the challenge, and how you can learn more about the solution, so that everyone can be sure to make full use of what PeopleSoft applications have to offer. The first cool tool we'll look at is the Validation Template for PeopleSoft Contracts Process Requests, which was first released in December 2013 as part of PeopleSoft Contracts 9.2 Update Image 4. The business issue our customers highlighted to us is the need to tightly control, but easily configure and manage, the scope of data that any user can process when initiating a process. Control of each user's span of impact is essential to reducing billing reconciliation issues, passing span-of-authority audits, and reducing (or even eliminating) the frequency of unexpected process results.
    Setting Up the Validation Template for a PeopleSoft Contracts Process
    With the validation template, organizations can easily and quickly ensure the software restricts the scope of transactions a user can affect, giving organizations the confidence that business processes are being governed effectively. Additionally, this control of PeopleSoft Contracts process requests can be applied, maintained, and adjusted from a web browser, enabling analysts to administer the rules without having to engage software developers to customize the software. During field validation template setup, an analyst specifies the combinations of fields that must contain values when a user tries to set up a run control and initiate a PeopleSoft Contracts process from a process request page. For example, for the Process Limits component, an organization could require that users enter a valid combination of values for the business unit, contract, and contract type fields, or a value in the contract administrator field. Until the user enters a valid combination of entries on the process request page, he cannot launch the process. With the validation template activated for process request pages, organizations can be confident that PeopleSoft Contracts users will not accidentally begin generating invoices or triggering other revenue management processes for transactions beyond their scope of authority. To learn more about the Validation Template, please review the Defining Validation Templates section of the PeopleSoft Contracts PeopleBooks.
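    To make the field-combination rule concrete, here is a minimal sketch in plain Python (not PeopleSoft code; the field names are simply taken from the example above) of the kind of validation the template enforces before a process can launch:

        def run_control_is_valid(fields):
            """Mimics the example rule: (business_unit AND contract AND
            contract_type) OR contract_administrator must be supplied."""
            combo = all(fields.get(k) for k in ("business_unit", "contract", "contract_type"))
            fallback = bool(fields.get("contract_administrator"))
            return combo or fallback

        # The process request can launch only once a valid combination is entered.
        print(run_control_is_valid({"business_unit": "BU01"}))                 # False
        print(run_control_is_valid({"contract_administrator": "A. Analyst"}))  # True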

    Read the article

  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
    In this article, and for the sake of simplicity, we will use the term "On-Premise" to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production-instance equivalents.
    Tools Overview
    Firstly, let's look at the design-time tools within JDeveloper for customizing and extending the artifacts of a business process. In essence this falls into two buckets: the SOA Composite Editor for working with BPEL processes, and BPM Studio.
    The SOA Composite Editor
    As a standard extension to JDeveloper, this graphical design tool should be familiar to anyone who has previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed up by a full XML source view (read-only), it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server, the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic. If you are customizing an existing Fusion Applications BPEL process, then be aware that it does support MDS-based customization layers, just like Page Composer, where different customizations are used based on the run-time context, such as a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time; this is defined by a field on the composite record, available in Enterprise Manager. Obviously, if you wish to fire different activities and tasks based on the user context, then you should include switches to fork the flows in your custom BPEL process.
    (Figure 1 - A BPEL process in the Composite Editor)
    The following describes the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out of the box.
    1. Set up your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension. If there are problems you can also find it under the 'Check for Updates' help menu option.
    2. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, set up a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance also.
    3. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this.
    4. Import the exported SAR .jar file into JDeveloper using the File menu, under the option 'SOA Archive into SOA Project'.
    5. In JDeveloper, set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role.
    6. Make the customizations and save the application project.
    7. Finally, redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (.jar) file that can then be re-imported and deployed via Enterprise Manager.
    The Business Process Management (BPM) Suite
    In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without the need for some of the programming skills. The aim is to abstract much of the technical implementation and to provide Business Analysts with tools for immediately implementing organization changes. Obviously there are some limitations on what they can do; however, the BPM Suite functionality increases with each release, and for the majority of cases the tools remain as applicable as their developer-oriented siblings. At the current time, business processes must be explicitly coded to support just one of these use cases: either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way, and the components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager.
    (Figure 2 - A BPM process in JDeveloper BPM Suite)
    BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to those for BPEL. The steps to deploy a custom BPM process are also essentially the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such, the SOA Composite Editor actually has support for both artifacts and even allows use of them together, such as calling a BPM process as a partner link from a BPEL process. For more details see the references below.
    Business Analyst Tooling
    In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that the system can be tuned to match changes to the business operation without the high costs of an IT project. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online; for Fusion Applications it is under the Business Process icon on the homepage of the Application Composer.
    (Figure 3 - Business Process Composer showing a CRM process flow)
    The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered web services. This means no coding or complex interface development is required: simply drag, drop, and configure. The items in the Business Catalog are seeded either by Oracle (as a BPM Template) or by your own custom development; you cannot create or generate catalog content from BPM Composer directly. As per the screenshot, you can see the Business Catalog content in the BPM Project browser region. In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer, which focuses on non-approval business rules and domain value maps. At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations. This also means a limited number of associated BPM Templates provided out-of-the-box, and therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forward, especially for use in SaaS deployments where full design-time JDeveloper access is not available.
    Further Reading
    For BPEL: Fusion Applications Extensibility Guide, Section 12
    For BPM: Fusion Applications Extensibility Guide, Section 7
    The product-specific documentation and implementation guides for Fusion Applications
    Fusion Middleware Developer's Guide for SOA Suite
    Modeling and Implementation Guide for Oracle Business Process Management
    User's Guide for Oracle Business Process Composer
    Oracle University courses on BPM Suite and SOA Development

    Read the article

  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process there are a few ways to determine whether the process finished successfully. First, in the <installation directory>\log folder, there are staretlprocess.log and staretl.html files. These files give the output of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, depending on the logging level set in the Configuration tool, can give extensive information about the ETL process, and can be used as validation of process completion. To automate the monitoring of these log files, perform the following steps:
    1. Write a custom application to parse through the log file and search for [ERROR]. In most cases a major [ERROR] could cause the ETL process to fail, so finding this value in the log is worthy of an alert.
    2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (this could differ based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end. Either way, this would be good cause for an alert.
    3. Check the last line in the log file. It should contain an indication that the ETL run completed successfully. For example, the last line of a log file will say (results could differ based on the Reporting Database version): [INFO] (Message) Finished Writing Report
    4. You could write an Ant script to execute the ETL process with failonerror="true", and from there send the results to an external tool that monitors the jobs, to email, or to a database.
    With each ETL run, the log file appends to the existing log file by default. Because of this behavior, I would recommend renaming the existing log files before running a new ETL process. By doing this, only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA.
    Another way to determine whether the ETL process has completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version, this could be in the Stage or Star database; as of Reporting Database 2.2 and higher, it is in the Star database. The etl_processmaster table records an entry for the ETL run along with a start and finish time. If the ETL process has failed, the finish date will be null. This table can be queried at a time when the ETL process is expected to be finished, and an alert sent if the value is null. These are just some options; there are additional ways this can be accomplished based around these two areas: log files or the database.
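    Steps 1 through 3 of the log-file approach are straightforward to script. A minimal sketch in Python, assuming a hypothetical log path and the 39-step count used in the example above (adjust both to your installation and version):

        LOG = r"C:\Primavera\log\staretlprocess.log"  # hypothetical install path
        TOTAL_STEPS = 39  # adjust to your version

        with open(LOG) as f:
            lines = f.read().splitlines()

        alerts = []
        # Step 1: any [ERROR] entry is worth an alert.
        if any("[ERROR]" in line for line in lines):
            alerts.append("log contains [ERROR] entries")
        # Step 2: the final step must have been logged.
        if not any(f"Step {TOTAL_STEPS}/{TOTAL_STEPS}" in line for line in lines):
            alerts.append("final step never logged")
        # Step 3: the last line should report successful completion.
        if not lines or "Finished Writing Report" not in lines[-1]:
            alerts.append("last line does not report completion")

        print("ETL alerts:", alerts or "none")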
    Here is an additional query to gather more information about your ETL run (connect as Staruser):

        SELECT SYSDATE,
               test_script,
               DECODE(loc, 0, PROCESSNAME, TRIM(SUBSTR(PROCESSNAME, loc + 1))) PROCESSNAME,
               duration
          FROM (SELECT (e.endtime - b.starttime) * 1440 duration,
                       TO_CHAR(b.starttime, 'hh24:mi:ss') starttime,
                       TO_CHAR(e.endtime, 'hh24:mi:ss') endtime,
                       b.PROCESSNAME,
                       INSTR(b.PROCESSNAME, ']') loc,
                       b.infotype test_script
                  FROM (SELECT processid, infodate starttime, PROCESSNAME, INFOMSG, INFOTYPE
                          FROM etl_processinfo
                         WHERE processid = (SELECT MAX(PROCESSID) FROM etl_processinfo)
                           AND infotype = 'BEGIN') b
                 INNER JOIN
                       (SELECT processid, infodate endtime, PROCESSNAME, INFOMSG, INFOTYPE
                          FROM etl_processinfo
                         WHERE processid = (SELECT MAX(PROCESSID) FROM etl_processinfo)
                           AND infotype = 'END') e
                    ON b.processid = e.processid
                   AND b.PROCESSNAME = e.PROCESSNAME
                 ORDER BY b.starttime)

    Read the article

  • How can I refresh/reinstall/clear/set-to-default my bootup process?

    - by Tchalvak
    I'm currently having a problem with my bootup process that is growing progressively worse over time: While booting, the machine does a few minutes of hard-drive reading. During that, instead of showing a boot splash screen, it shows various dashes and dots, as if the video card isn't being recognized. The output actually has colors similar to the splash screen (purple); it is simply garbled. It then does a few more minutes of hard-drive reads, and if I leave it long enough, it sometimes boots into the desktop (and auto-logs-in). Sometimes, unfortunately, it just hangs on that garbled screen and reads from the hard drive forever. Notably, I've also stopped being able to access GRUB during bootup (perhaps it is just not displayed correctly by the video; it's hard to tell). This is a symptom that has grown over the course of various Ubuntu upgrades, so I suspect that the upgrade process is leaving behind cruft. So, is there a safe way for me to "refresh" the boot system so that it is clean, new, fast, and reliable? For example, to test out a cleanly configured boot, make sure that it works (try before I buy), and then apply it to the system to eliminate as much of this problem as possible? Edit: Here is the requested bootchart: http://imgur.com/9jocF

    Read the article

  • How to fix "Sub-process /usr/bin/dpkg returned an error code (1)" when installing and upgrading packages?

    - by soum
    I am getting this error whenever trying to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)". I need help, as I cannot install or upgrade any packages on my Ubuntu 11.10 system. Here is the rest of the error:

        unknown argument `triggered'
        dpkg: error processing mtools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager-pptp-gnome ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-pptp ...
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-gnome ...
        /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager ...
        No apport report written because MaxReports is reached already
        /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for mscompress ...
        postinst called with unknown argument `triggered'
        dpkg: error processing mscompress (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp network-manager-gnome network-manager mscompress
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine, and after a few seconds it is getting killed. The command is:

        httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1

    Although I have run this test a number of times, I never faced this error before; I have been observing it for the last two days. My Ubuntu version is 10.04 and the httperf version is httperf-0.9.0. dmesg shows:

        [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
        [ 2997.180632] Killed process 7977 (apache2)
        [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
        [ 2997.184844] Killed process 7971 (rsyslogd)
        [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
        [ 2997.188829] Killed process 7978 (apache2)
        [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
        [ 2997.192822] Killed process 7973 (atd)
        [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
        [ 2997.196811] Killed process 8102 (httperf)

    Output of the free command:

                     total       used       free     shared    buffers     cached
        Mem:       3862768     163000    3699768          0       2384      13068
        -/+ buffers/cache:     147548    3715220
        Swap:      3905528          0    3905528

    Read the article

  • How do you deal with UAC when creating a process as a different user?

    - by sysrpl
    I am having an issue with UAC and executing a non-interactive process as a different user (via APIs such as CreateProcessAsUser or CreateProcessWithLogonW). My program is intended to do the following:
    1) Create a new Windows user account (check: works correctly).
    2) Create a non-interactive child process as the new user account (fails when UAC is enabled).
    My application includes an administrator manifest and elevates correctly when UAC is enabled in order to complete step 1. But step 2 fails to execute correctly. I suspect this is because the child process, which executes as another user, is not inheriting the elevated rights of my main process (which executes as the interactive user). I would like to know how to resolve this issue. When UAC is off, my program works correctly. How can I deal with UAC or the required elevated rights in this situation? If it helps any, the child process needs to run as another user in order to set up file encryption for the new user account.

    Read the article

  • C++ Compile problem when using Windows - CodeGear

    - by Carlos
    This is a follow-up question to one I made earlier (btw, thanks Neil Butterworth for your help): http://stackoverflow.com/questions/2461977/problem-compiling-c-in-codegear
    A quick recap: I'm currently developing a C++ program for university. I used NetBeans 6.8 on my personal computer (Mac) and everything works perfectly. When I tried it on my Windows partition, or at the university PCs using CodeGear RAD Studio 2009 & 2010, I was getting a few compile errors, which were solved by adding the following header file:

        #include <string>

    However, now the program does compile but it doesn't run: just a blank console. And I am getting the following in the CodeGear event log:

        Thread Start: Thread ID: 2024. Process Project1.exe (3280)
        Process Start: C:\Users\Carlos\Documents\RAD Studio\Projects\Debug\Project1.exe. Base Address: $00400000. Process Project1.exe (3280)
        Module Load: Project1.exe. Has Debug Info. Base Address: $00400000. Process Project1.exe (3280)
        Module Load: ntdll.dll. No Debug Info. Base Address: $77E80000. Process Project1.exe (3280)
        Module Load: KERNEL32.dll. No Debug Info. Base Address: $771C0000. Process Project1.exe (3280)
        Module Load: KERNELBASE.dll. No Debug Info. Base Address: $75FE0000. Process Project1.exe (3280)
        Module Load: cc32100.dll. No Debug Info. Base Address: $32A00000. Process Project1.exe (3280)
        Module Load: USER32.dll. No Debug Info. Base Address: $77980000. Process Project1.exe (3280)
        Module Load: GDI32.dll. No Debug Info. Base Address: $75F50000. Process Project1.exe (3280)
        Module Load: LPK.dll. No Debug Info. Base Address: $75AB0000. Process Project1.exe (3280)
        Module Load: USP10.dll. No Debug Info. Base Address: $76030000. Process Project1.exe (3280)
        Module Load: msvcrt.dll. No Debug Info. Base Address: $776A0000. Process Project1.exe (3280)
        Module Load: ADVAPI32.dll. No Debug Info. Base Address: $777D0000. Process Project1.exe (3280)
        Module Load: SECHOST.dll. No Debug Info. Base Address: $77960000. Process Project1.exe (3280)
        Module Load: RPCRT4.dll. No Debug Info. Base Address: $762F0000. Process Project1.exe (3280)
        Module Load: SspiCli.dll. No Debug Info. Base Address: $759F0000. Process Project1.exe (3280)
        Module Load: CRYPTBASE.dll. No Debug Info. Base Address: $759E0000. Process Project1.exe (3280)
        Module Load: IMM32.dll. No Debug Info. Base Address: $763F0000. Process Project1.exe (3280)
        Module Load: MSCTF.dll. No Debug Info. Base Address: $75AD0000. Process Project1.exe (3280)

    I would really appreciate any help or ideas on how to solve this problem.
    P.S.: In case anyone wonders why I am sticking with CodeGear: it is the IDE our professors use to evaluate our assignments.

    Read the article

  • What's the correct way to stop a background process on Mac OS X?

    - by mcsheffrey
    I have an application with 2 components: a desktop application that users interact with, and a background process that can be enabled from the desktop application. Once the background process is enabled, it will run as a user launch agent independently of the desktop app. However, what I'm wondering is what to do when the user disables the background process. At this point I want to stop the background process, but I'm not sure what the best approach is. The 3 options that I see are:
    1. Use the 'kill' command. Direct, but not reliable, and it just seems somewhat "wrong".
    2. Use an NSMachPort to send an exit request from the desktop app to the background process. This is the best approach I've thought of, but I've run into an implementation problem (I'll be posting this in a separate query) and I'd like to be sure that the approach is right before going much further.
    3. Something else???
    Thank you in advance for any help/insight that you can offer.
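    For comparison, the conventional Unix-flavored middle ground between options 1 and 2 is a cooperative signal: the background process installs a SIGTERM handler and exits cleanly when asked, and the desktop app sends SIGTERM (not SIGKILL). Since the process runs as a launch agent, asking launchd to unload it (launchctl unload) is another standard route. A minimal sketch of the signal pattern, in Python purely to illustrate the idea (the question itself concerns a Cocoa app):

        import signal
        import time

        running = True

        def handle_term(signum, frame):
            # Flush state, close files, etc., then let the main loop exit.
            global running
            running = False

        signal.signal(signal.SIGTERM, handle_term)

        while running:
            time.sleep(1)  # the agent's real work goes here
        print("background process exiting cleanly")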

    Read the article

  • file_operations question: how do I know if a process that opened a file for writing has decided to close it?

    - by djTeller
    Hi kernel gurus, I'm currently writing a simple "multicaster" module. Only one process can open a proc filesystem file for writing, and the rest can open it for reading. To do so I use the inode_operations .permission callback: I check the operation, and when I detect that someone has opened the file for writing I set a flag ON. I need a way to detect whether the process that opened the file for writing has decided to close it, so I can set the flag OFF and someone else can open it for writing. Currently, when someone opens the file for writing I save the current->pid of that process, and when the .close callback is called I check whether the closing process is the one I saved earlier. Is there a better way to do that? Without saving the pid, perhaps by checking the files the current process has open and their permissions... Thanks!

    Read the article

  • How to force two processes to run on the same CPU?

    - by kovan
    Context: I'm programming a software system that consists of multiple processes. It is programmed in C++ under Linux, and the processes communicate among themselves using Linux shared memory. Usually, in software development, performance optimization is done in the final stage, and here I came to a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one physical CPU), it was only able to use 3 cores, thus wasting 25% of the CPU power on the former and more than 60% on the latter. After much research, and having discarded mutex and lock contention, I found out that the time was being wasted on shmdt/shmat calls (detach and attach to shared memory segments). After some more research, I found out that these CPUs, which usually are AMD Opteron and Intel Xeon, use a memory architecture called NUMA, which basically means that each processor has its own fast "local memory", and accessing memory belonging to other CPUs is expensive. After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread in them. This seems to kill performance, as processes are constantly accessing memory from other processes.
    Question: Now, the question is, is there any way to force pairs of processes to execute on the same CPU? I don't mean to force them to always execute on the same processor, as I don't care which one they run on, although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule this "brother" process (which is the process with which it communicates through shared memory) on that same processor, so that performance is not penalized.
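    On Linux the building block for this is CPU affinity (sched_setaffinity(2), or taskset on the command line). There is no documented "always schedule these two together, wherever you like" hint, but pinning both processes to the same small CPU set on one NUMA node achieves the stated goal of keeping their shared-memory traffic local. A minimal sketch in Python (os.sched_setaffinity is Linux-only; the pids and CPU numbers are hypothetical):

        import os

        def pin_together(pid_a, pid_b, cpus=(0, 1)):
            """Restrict both processes to the same CPUs, so their
            shared-memory traffic stays on one node's local memory."""
            os.sched_setaffinity(pid_a, set(cpus))
            os.sched_setaffinity(pid_b, set(cpus))

        # Hypothetical pids of two processes sharing a segment:
        pin_together(1234, 1235)
        print(os.sched_getaffinity(1234))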

    Read the article

  • Automating ftp command line application redirecting I/O in .Net

    - by SoMoS
    Hello, I'm trying to automate the ftp client that Windows includes by redirecting the I/O of the process. What I'm doing is starting the process from my application and trying to read what the client prints on the screen and to send my commands to it. The problem is that I cannot read almost any data sent by the ftp client: some data is present, but most of it is not read. That's the code I have until now:

        Public Sub Start()
            process = New Diagnostics.Process()
            process.StartInfo.FileName = "ftp.exe" ' The command is on the path
            process.StartInfo.CreateNoWindow = True
            process.StartInfo.RedirectStandardInput = True
            process.StartInfo.RedirectStandardOutput = True
            process.StartInfo.UseShellExecute = False
            process.Start()
            process.StandardInput.AutoFlush = True
            process.BeginOutputReadLine()
        End Sub

        ' Takes data from stdout.
        Private Sub process_OutputDataReceived(ByVal sender As Object, ByVal e As System.Diagnostics.DataReceivedEventArgs) Handles process.OutputDataReceived
            ' At this moment there is code here to show the stdout in a textbox.
        End Sub

        ' Sends data to stdin.
        Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click
            process.StandardInput.WriteLine(Me.TextEdit1.Text)
        End Sub

    Now when I execute this and send "?", for example, I just get the first line (and I should get a lot more). Or when I send the open command, I should receive an "A" but nothing is received. Any ideas? Another question: when a console application writes on the screen, does it always do so by writing to stdout or stderr?
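    A likely culprit for the missing output, incidentally, is buffering: many console programs switch from line buffering to full buffering when stdout is a pipe rather than a console, which would match the symptoms above. For comparison, here is the same redirect-and-pump pattern as a minimal Python sketch (the host name is purely hypothetical); the background thread at least drains whatever output does arrive without blocking the caller:

        import subprocess
        import threading

        proc = subprocess.Popen(
            ["ftp.exe"],                   # assumes ftp.exe is on the PATH
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,      # merge stderr into the same stream
            text=True,
        )

        def pump_output():
            # Runs in the background so reads never block the main thread.
            for line in proc.stdout:
                print("ftp>", line.rstrip())

        threading.Thread(target=pump_output, daemon=True).start()
        proc.stdin.write("open ftp.example.com\n")
        proc.stdin.flush()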

    Read the article

  • Bash Array Problem

    - by Deepak Prasanna
    I wrote a bash script which tries to find a process and run the process if it had stopped. This is the script:

        #!/bin/bash
        process=thin
        path=/home/deepak/abc/
        initiate=thin start -d
        process_id=`ps -ef | pgrep $process | wc -m`
        if [ "$process_id" -gt "0" ]; then
            echo "The process process is running!!"
        else
            cd $path
            $initiate
            echo "Oops the process has stopped"
        fi

    This worked fine, and I thought of using arrays so that I could form a loop and use this script to check multiple processes. So I modified my script like this:

        #!/bin/bash
        process[1]=thin
        path[1]=/home/deepak/abc/
        initiate[1]=thin start -d
        process_id=`ps -ef | pgrep $process[1] | wc -m`
        if [ "$process_id" -gt "0" ]; then
            echo "Hurray the process ${process[1]} is running!!"
        else
            cd ${path[1]}
            ${initiate[1]}
            echo "Oops the process has stopped"
            echo "Continue your coffee, the process has been stated again! ;)"
        fi

    I get this error if I run this script:

        DontWorry.sh: 2: process[1]=thin: not found
        DontWorry.sh: 3: path[1]=/home/deepak/abc/: not found
        DontWorry.sh: 4: initiate[1]=thin start -d: not found

    I googled to find a solution; most of them insisted on using "#!/bin/bash" instead of "#!/bin/sh". I tried both, but nothing worked. What am I missing?

    Read the article

  • "RewriteBase: argument is not a valid URL" error

    - by user305434
    Hi, I'm trying to configure the .htaccess of my website. http://213.175.210.49/~incisozl/ is the temporary URL to the root (~/public_html/). When I try to rewrite the URL in .htaccess I get an error: /home/incisozl/public_html/.htaccess: RewriteBase: argument is not a valid URL, referer: http://213.175.210.49/~incisozl/inci-sozluk/somestring. My rewrite rules are:

        RewriteEngine On
        RewriteBase /
        RewriteRule ^/?$ /index.php [L]
        RewriteRule ^inci-sozluk/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=word&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk/([^.\?/]+)?$ /seo.php?process=word&q=$1 [L]
        RewriteRule ^inci-sozluk/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=word&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluktest/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=wordtest&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-bugun/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=wordbg&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk-bugun/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=wordbg&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-dun/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=worddn&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-dun/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=worddn&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk-ters/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=wordts&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk-ters/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=wordts&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-cvpters/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=cvpwordts&q=$1&sayfa=$2 [L]
        RewriteRule ^inci-sozluk-cvpters/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=cvpwordts&q=$1&sayfa=$2&gid=$3 [L]
        RewriteRule ^inci-sozluk-ileti/([0-9]+)/?$ /seo.php?process=eid&eid=$1 [L]
        RewriteRule ^inci-sozluk-ileticvp/([0-9]+)/?$ /seo.php?process=cvpeid&eid=$1 [L]

    BTW, it works fine when I use it with the www.incisozluk.org pointed domain.

    Read the article

  • Python subprocess Popen.communicate() equivalent to Popen.stdout.read()?

    - by Christophe
    Very specific question (I hope): What are the differences between the following three pieces of code? (I expect the only difference to be that the first does not wait for the child process to finish, while the second and third ones do. But I need to be sure this is the only difference...) I also welcome other remarks/suggestions (though I'm already well aware of the shell=True dangers and cross-platform limitations). Note that I already read "Python subprocess interaction, why does my process work with Popen.communicate, but not Popen.stdout.read()?" and that I do not want/need to interact with the program afterwards. Also note that I already read "Alternatives to Python Popen.communicate() memory limitations?" but that I didn't really get it...

    First code:

        from subprocess import Popen, PIPE

        def exe_f(command='ls -l', shell=True):
            "Function to execute a command and return stuff"
            process = Popen(command, shell=shell, stdout=PIPE, stderr=PIPE)
            stdout = process.stdout.read()
            stderr = process.stderr.read()
            return process, stderr, stdout

    Second code:

        from subprocess import Popen, PIPE

        def exe_f(command='ls -l', shell=True):
            "Function to execute a command and return stuff"
            process = Popen(command, shell=shell, stdout=PIPE, stderr=PIPE)
            (stdout, stderr) = process.communicate()
            return process, stderr, stdout

    Third code:

        from subprocess import Popen, PIPE

        def exe_f(command='ls -l', shell=True):
            "Function to execute a command and return stuff"
            process = Popen(command, shell=shell, stdout=PIPE, stderr=PIPE)
            code = process.wait()
            stdout = process.stdout.read()
            stderr = process.stderr.read()
            return process, stderr, stdout

    Thanks.
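    Beyond waiting semantics, the difference that bites in practice is deadlock safety: reading one pipe to EOF before the other (first version), or calling wait() before reading anything (third version), can hang once the child fills an un-drained pipe's OS buffer, while communicate() drains both pipes concurrently into memory and cannot deadlock. A minimal sketch restating the point, with the distinction in comments:

        from subprocess import Popen, PIPE

        proc = Popen("ls -l", shell=True, stdout=PIPE, stderr=PIPE)

        # communicate() reads stdout and stderr in parallel until EOF and
        # then reaps the child; neither pipe can fill up while the other
        # is being read, so it cannot deadlock on large output.
        stdout, stderr = proc.communicate()

        # By contrast, proc.wait() before reading can hang forever if the
        # child blocks writing to a full pipe that nobody is draining yet.
        print(proc.returncode, len(stdout), len(stderr))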

    Read the article

  • How to optimize an ASP.NET app that spawns a new process for each request?

    - by Recycle Bin
    I have an ASP.NET MVC application that spawns a Process as follows:

        Process p = new Process();
        p.EnableRaisingEvents = true;
        p.Exited += new EventHandler(p_Exited);
        p.StartInfo.Arguments = "-interaction=nonstopmode " + inputpath;
        p.StartInfo.WorkingDirectory = dir;
        p.StartInfo.UseShellExecute = false;
        p.StartInfo.FileName = "pdflatex.exe";
        p.StartInfo.LoadUserProfile = true;
        p.Start();
        p.WaitForExit();

    Before going further, I need to know whether, e.g., pdflatex.exe is managed or native code.
    Edit: I need to consider this because (hopefully I am not wrong...):
    1. Each ASP.NET application runs in a separate/isolated AppDomain, as opposed to a separate/isolated process.
    2. A native executable cannot live in an AppDomain.
    ...to be continued...
    Shortly speaking, I hope my site does not spawn a new process for each request, because a process is more expensive than an application domain.

    Read the article

  • How to process payments for software (activation codes)?

    - by jsoldi
    I want to sell software online and I need an easy-to-implement payment processing system. What I'm actually going to be selling is an activation code (one per purchase) that would activate the trial version of a product. I was about to use this one, but I just found out that people without a paid email account (that is, people using free accounts such as Hotmail or Yahoo) can't process their orders, which I'm sure would discourage many, if not most, of the potential buyers.

    Read the article

  • Update in Certification Exam Score Report Access Process Now Live!

    - by Cinzia Mascanzoni
    Exam results will no longer be available at the test center or on the Pearson VUE website. Candidates will receive an email from Oracle within 30 minutes of completing the exam to let them know that their exam results are available on CertView. Candidates must have an Oracle Web Account to access CertView. This new process applies to exam results for all Oracle Certification exams: proctored, non-proctored, and beta exams. Learn more here.

    Read the article

  • Getting rsync to move files from source to destination?

    - by fabien-barbier
    Is rsync a good choice for my project? I have to:
    - copy files from a source to a destination folder via SSH,
    - be sure all files are copied,
    - delete source files after the copy,
    - rename files if there is a name conflict.
    It looks like I can use the option --remove-source-files (to delete source files). But how does rsync manage conflicts; can I add rules? Use case in my project: I run a scientific calculation on server A and the results are inserted in the folder "process"; for each calculation I have a repository like this: /process/calc1. Now I would like to transfer the repository "/calc1" to server B (giving /process/calc1 there), and delete "calc1" from server A. ...During another calculation I get "/process/calc2" on server A; the idea is also to move "calc2" into the "/process/" directory on server B, so that I then have on server B:
    - /process/calc1
    - /process/calc2
    (and /process/ on server A is empty). How will rsync manage a conflict (on server B) if I get another folder like "/process/calc1" on server A after a new calculation, when "/process/calc1" already exists on server B? Is it possible to add rules to rsync, and rename "/process/calc1" to "process/calc1R2" on server B? And so on (e.g. calc1R3)? Thanks.

    Read the article

  • How to tell Linux to explicitly swap out main memory of a suspended process?

    - by Vi
    I run a memory-hungry process (mkcromfs) which consumes more memory than I have physical memory on my laptop, so it is paging and swapping and thrashing all the time, and the load average is about 2 (compcache is already in use, with a usual swap partition as well), but it is slowly moving forward (although I'm afraid it will finally try to allocate 2GB and crash, draining 2 days of thrashing). When I want to use the laptop for something else, I stop the process and start the X server, Firefox, and other programs. The problem is that when I start Firefox the load average jumps to 10 and the system becomes almost completely unresponsive (a long time to turn caps lock on/off, slow mouse cursor position updates, slow switching from the X server to the Linux console, slow login). The stopped mkcromfs still holds a lot of memory (464.8 MiB and slowly falling) and moves it to swap only when more memory is needed for some other program, which results in a great slowdown. How do I tell Linux to swap this process out entirely (e.g. I'm not intending to resume it in the short term), possibly waking other data from swap? It would also be useful to be able to specify the exact swap device to swap the given process out to.

    Read the article

