Search Results

Search found 21664 results on 867 pages for 'process innovation'.


  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
    In this article, and for the sake of simplicity, we will use the term “On-Premise” to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production instance equivalents.

    Tools Overview

    Firstly let’s look at the design-time tools within JDeveloper for customizing and extending the artifacts of a Business Process. In essence this falls into two buckets: the SOA Composite Editor for working with BPEL processes, and BPM Studio.

    The SOA Composite Editor

    A standard extension to JDeveloper, this graphical design tool should be familiar to anyone who has previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed up by a full read-only XML source view, it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server, the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic.

    If you are customizing an existing Fusion Applications BPEL process, be aware that it supports MDS-based customization layers, just like Page Composer, where different customizations are applied based on the run-time context, such as a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time; this is defined by a field on the composite record, available in Enterprise Manager. Obviously, if you wish to fire different activities and tasks based on the user context, you should include switches to fork the flows in your custom BPEL process.

    Figure 1 – A BPEL process in Composite Editor

    The following are the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out-of-the-box.

    1. Set up your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension; if there are problems you can also find it under the ‘Check for Updates’ help menu option.
    2. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, set up a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance instead.
    3. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this.
    4. Import the exported SAR .jar file into JDeveloper using the File menu, under the option ‘SOA Archive into SOA Project’.
    5. In JDeveloper set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role.
    6. Make the customizations and save the application project.
    7. Finally, redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (jar) file that can then be re-imported and deployed via Enterprise Manager.
    The Business Process Management (BPM) Suite

    In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without the need for some of the programming skills. The aim is to abstract much of the technical implementation and to provide Business Analysts with tools for immediately implementing organization changes. Obviously there are some limitations on what they can do; however, the BPM Suite functionality increases with each release, and for the majority of cases the tooling remains as applicable as its developer-oriented sibling. At the current time business processes must be explicitly coded to support just one of these use cases: either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way, and the components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager.

    Figure 2 – A BPM process in JDeveloper BPM Suite

    BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to those of BPEL. The steps to deploy a custom BPM process are also essentially much the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such, the SOA Composite Editor actually has support for both artifacts and even allows them to be used together, such as calling a BPM process as a partner link from a BPEL process. For more details see the references below.

    Business Analyst Tooling

    In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that the system can be tuned to match changes to the business operation without the high costs of an IT project. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online; for Fusion Applications it is under the Business Process icon on the homepage of the Application Composer.

    Figure 3 – Business Process Composer showing a CRM process flow

    The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered WebServices. This means no coding or complex interface development is required: simply drag, drop, and configure. The items in the business catalog are seeded either by Oracle (as a BPM Template) or by your own custom development; you cannot create or generate catalog content from BPM Composer directly. As per the screenshot, you can see the Business Catalog content in the BPM Project browser region.

    In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer, which focuses on non-approval business rules and domain value maps.

    At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations. This also means a limited number of associated BPM Templates provided out-of-the-box, and therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forward, especially for use in SaaS deployments where full design-time JDeveloper access is not available.
    Further Reading

    For BPEL – Fusion Applications Extensibility Guide, Section 12
    For BPM – Fusion Applications Extensibility Guide, Section 7
    The product-specific documentation and implementation guides for Fusion Applications
    Fusion Middleware Developer’s Guide for SOA Suite
    Modeling and Implementation Guide for Oracle Business Process Management
    User’s Guide for Oracle Business Process Composer
    Oracle University courses on BPM Suite and SOA Development


  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process there are a few ways to determine if the process finished successfully. First, in the <installation directory>\log folder, there are staretlprocess.log and staretl.html files. These files give the output results of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, depending on the logging level set in the Configuration tool, can give extensive information about the ETL process, and can be used as a validation for process completion.

    To automate the monitoring of these log files, perform the following steps (a sketch is shown after this list):

    1. Write a custom application to parse through the log file and search for [ERROR]. In most cases, a major [ERROR] could cause the ETL process to fail, so finding this value in the log is worthy of an alert.
    2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (this could be different based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end. Either way this would be a good cause for an alert.
    3. Check the last line in the log file. The last line of the log file should contain an indication that the ETL run completed successfully. For example, the last line of a log file will say (results could differ based on the Reporting Database version): [INFO] (Message) Finished Writing Report
    4. You could write an Ant script to execute the ETL process, set it to failonerror="true", and from there send results to an external tool to monitor the jobs, send them to email, or send them to a database.

    With each ETL run, the log file appends to the existing log file by default. Because of this behavior, I would recommend renaming the existing log files before running a new ETL process. By doing this, only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA.

    Another way to determine if the ETL process has completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version this could be in the Stage or Star databases; as of Reporting Database 2.2 and higher it is in the Star database. The etl_processmaster table records entries for the ETL run along with a start and finish time. If the ETL process has failed, the finish date will be null, so this table can be queried at a time when the ETL process is expected to be finished, and an alert sent if the value is null. These are just some options; there are additional ways this can be accomplished based around these two areas - log files or the database.
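
    As a rough illustration of checks 1-3 above, here is a minimal Python sketch. The log path and the final step string are assumptions taken from the text; both vary per installation and version:

      LOG_PATH = r"C:\primavera\log\staretlprocess.log"  # hypothetical <installation directory>\log path
      FINAL_STEP = "Step 39/39"                          # version-dependent, per step 2 above

      def check_etl_log(path):
          """Return alert messages for the three log checks described above."""
          alerts = []
          with open(path, errors="replace") as f:
              lines = f.read().splitlines()
          # Check 1: any [ERROR] entry is worth an alert.
          if any("[ERROR]" in line for line in lines):
              alerts.append("[ERROR] found in log")
          # Check 2: the final step must have been logged.
          if not any(FINAL_STEP in line for line in lines):
              alerts.append(FINAL_STEP + " not reached (still running or stopped early)")
          # Check 3: the last line should indicate successful completion.
          if not lines or "Finished Writing Report" not in lines[-1]:
              alerts.append("last line does not indicate successful completion")
          return alerts

      if __name__ == "__main__":
          for alert in check_etl_log(LOG_PATH):
              print("ALERT:", alert)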
    Here is an additional query to gather more information about your ETL run (connect as Staruser):

      SELECT SYSDATE,
             test_script,
             DECODE(loc, 0, processname, TRIM(SUBSTR(processname, loc + 1))) processname,
             duration
        FROM (SELECT (e.endtime - b.starttime) * 1440 duration,
                     TO_CHAR(b.starttime, 'hh24:mi:ss') starttime,
                     TO_CHAR(e.endtime, 'hh24:mi:ss') endtime,
                     b.processname,
                     INSTR(b.processname, ']') loc,
                     b.infotype test_script
                FROM (SELECT processid, infodate starttime, processname, infomsg, infotype
                        FROM etl_processinfo
                       WHERE processid = (SELECT MAX(processid) FROM etl_processinfo)
                         AND infotype = 'BEGIN') b
               INNER JOIN
                     (SELECT processid, infodate endtime, processname, infomsg, infotype
                        FROM etl_processinfo
                       WHERE processid = (SELECT MAX(processid) FROM etl_processinfo)
                         AND infotype = 'END') e
                  ON b.processid = e.processid
                 AND b.processname = e.processname
               ORDER BY b.starttime)
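
    For the database-side check, a hedged sketch in Python with the cx_Oracle driver is below. The etl_processmaster table name comes from the text above, but the connection string and the start/finish column names are assumptions - verify them against your Star schema:

      import cx_Oracle

      # Placeholder credentials/DSN: connect to the Star database as described above.
      conn = cx_Oracle.connect("staruser", "password", "dbhost/star")
      cur = conn.cursor()
      # Column names starttime/finishtime are assumed; check your schema.
      cur.execute("""
          SELECT processid, starttime, finishtime
            FROM etl_processmaster
           WHERE processid = (SELECT MAX(processid) FROM etl_processmaster)
      """)
      row = cur.fetchone()
      if row is None or row[2] is None:
          print("ALERT: latest ETL run has not recorded a finish time")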


  • How can I refresh/reinstall/clear/set-to-default my bootup process?

    - by Tchalvak
    I'm currently having a problem with my bootup process that is growing progressively worse as time goes on. While booting, the machine does a few minutes of hard-drive reading. During that, instead of showing a boot splash screen, it shows various dashes and dots, as if the video output isn't being recognized; the garbled output actually uses colors similar to the splash screen (purple), it is simply scrambled. It then does a few minutes of hard-drive reads, and if I leave it long enough, sometimes it boots into the desktop (and auto-logs-in). Sometimes, unfortunately, it just hangs on that garbled screen and reads from the hard drive forever. Notably, I've also stopped being able to access GRUB during bootup (perhaps it is just not displayed correctly by the video; it's hard to tell). This is a symptom that has grown over the course of various Ubuntu upgrades; at the least, I suspect that the upgrade process is leaving behind cruft. So, is there a safe way for me to "refresh" the boot system so that it is clean, new, fast, and reliable? For example, to test out a cleanly configured boot, make sure that it works (try before I buy), and then apply it to the system to eliminate as much of this problem as possible? Edit: Here is the requested bootchart: http://imgur.com/9jocF


  • How to fix "Sub-process /usr/bin/dpkg returned an error code (1)" when installing and upgrading packages?

    - by soum
    I am getting this error whenever trying to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)". I need help, as I cannot install or upgrade any packages on my Ubuntu 11.10 system. Here is the rest of the error:

      unknown argument `triggered'
      dpkg: error processing mtools (--configure):
       subprocess installed post-installation script returned error exit status 1
      Processing triggers for network-manager-pptp-gnome ...
      No apport report written because MaxReports is reached already
      postinst called with unknown argument `triggered'
      dpkg: error processing network-manager-pptp-gnome (--configure):
       subprocess installed post-installation script returned error exit status 1
      No apport report written because MaxReports is reached already
      Processing triggers for network-manager-pptp ...
      postinst called with unknown argument `triggered'
      dpkg: error processing network-manager-pptp (--configure):
       subprocess installed post-installation script returned error exit status 1
      No apport report written because MaxReports is reached already
      Processing triggers for network-manager-gnome ...
      /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered'
      dpkg: error processing network-manager-gnome (--configure):
       subprocess installed post-installation script returned error exit status 1
      Processing triggers for network-manager ...
      No apport report written because MaxReports is reached already
      /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered'
      dpkg: error processing network-manager (--configure):
       subprocess installed post-installation script returned error exit status 1
      No apport report written because MaxReports is reached already
      Processing triggers for mscompress ...
      postinst called with unknown argument `triggered'
      dpkg: error processing mscompress (--configure):
       subprocess installed post-installation script returned error exit status 1
      No apport report written because MaxReports is reached already
      Errors were encountered while processing:
       netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks
       mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp
       network-manager-gnome network-manager mscompress
      E: Sub-process /usr/bin/dpkg returned an error code (1)


  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine and after a few seconds it is getting killed. The command is:

      httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1

    Although I have done this test a number of times, I had never faced this error before; I have been observing it for the last two days. My Ubuntu version is 10.04 and the httperf version is httperf-0.9.0. dmesg shows:

      [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
      [ 2997.180632] Killed process 7977 (apache2)
      [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
      [ 2997.184844] Killed process 7971 (rsyslogd)
      [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
      [ 2997.188829] Killed process 7978 (apache2)
      [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
      [ 2997.192822] Killed process 7973 (atd)
      [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
      [ 2997.196811] Killed process 8102 (httperf)

    Output of the free command:

                   total       used       free     shared    buffers     cached
      Mem:       3862768     163000    3699768          0       2384      13068
      -/+ buffers/cache:     147548    3715220
      Swap:      3905528          0    3905528


  • How do you deal with UAC when creating a process as a different user?

    - by sysrpl
    I am having an issue with UAC and executing a non-interactive process as a different user (APIs such as CreateProcessAsUser or CreateProcessWithLogonW). My program is intended to do the following:

    1. Create a new Windows user account (this works correctly).
    2. Create a non-interactive child process as the new user account (this fails when UAC is enabled).

    My application includes an administrator manifest, and elevates correctly when UAC is enabled in order to complete step 1, but step 2 is failing to execute. I suspect this is because the child process, which executes as another user, is not inheriting the elevated rights of my main process (which executes as the interactive user). I would like to know how to resolve this issue. When UAC is off my program works correctly. How can I deal with UAC or the required elevated rights in this situation? If it helps any, the child process needs to run as another user in order to set up file encryption for the new user account.


  • C++ Compile problem when using Windows - CodeGear

    - by Carlos
    This is a follow-up question to one I asked earlier (by the way, thanks Neil Butterworth for your help): http://stackoverflow.com/questions/2461977/problem-compiling-c-in-codegear

    A quick recap: I'm currently developing a C++ program for university. I used NetBeans 6.8 on my personal computer (Mac) and everything works perfectly. When I tried the same code on my Windows partition or on the university PCs using CodeGear RAD Studio 2009 & 2010, I was getting a few compile errors, which were solved by adding the following header file:

      #include <string>

    However, now the program does compile but it doesn't run - just a blank console. And I am getting the following in the CodeGear event log:

      Thread Start: Thread ID: 2024. Process Project1.exe (3280)
      Process Start: C:\Users\Carlos\Documents\RAD Studio\Projects\Debug\Project1.exe. Base Address: $00400000. Process Project1.exe (3280)
      Module Load: Project1.exe. Has Debug Info. Base Address: $00400000. Process Project1.exe (3280)
      Module Load: ntdll.dll. No Debug Info. Base Address: $77E80000. Process Project1.exe (3280)
      Module Load: KERNEL32.dll. No Debug Info. Base Address: $771C0000. Process Project1.exe (3280)
      Module Load: KERNELBASE.dll. No Debug Info. Base Address: $75FE0000. Process Project1.exe (3280)
      Module Load: cc32100.dll. No Debug Info. Base Address: $32A00000. Process Project1.exe (3280)
      Module Load: USER32.dll. No Debug Info. Base Address: $77980000. Process Project1.exe (3280)
      Module Load: GDI32.dll. No Debug Info. Base Address: $75F50000. Process Project1.exe (3280)
      Module Load: LPK.dll. No Debug Info. Base Address: $75AB0000. Process Project1.exe (3280)
      Module Load: USP10.dll. No Debug Info. Base Address: $76030000. Process Project1.exe (3280)
      Module Load: msvcrt.dll. No Debug Info. Base Address: $776A0000. Process Project1.exe (3280)
      Module Load: ADVAPI32.dll. No Debug Info. Base Address: $777D0000. Process Project1.exe (3280)
      Module Load: SECHOST.dll. No Debug Info. Base Address: $77960000. Process Project1.exe (3280)
      Module Load: RPCRT4.dll. No Debug Info. Base Address: $762F0000. Process Project1.exe (3280)
      Module Load: SspiCli.dll. No Debug Info. Base Address: $759F0000. Process Project1.exe (3280)
      Module Load: CRYPTBASE.dll. No Debug Info. Base Address: $759E0000. Process Project1.exe (3280)
      Module Load: IMM32.dll. No Debug Info. Base Address: $763F0000. Process Project1.exe (3280)
      Module Load: MSCTF.dll. No Debug Info. Base Address: $75AD0000. Process Project1.exe (3280)

    I would really appreciate any help or ideas on how to solve this problem. P.S.: In case anyone wonders why I am sticking with CodeGear, it is the IDE our professors use to evaluate our assignments.


  • What's the correct way to stop a background process on Mac OS X?

    - by mcsheffrey
    I have an application with two components: a desktop application that users interact with, and a background process that can be enabled from the desktop application. Once the background process is enabled, it will run as a user launch agent independently of the desktop app. However, what I'm wondering is what to do when the user disables the background process. At this point I want to stop the background process, but I'm not sure what the best approach is. The three options that I see are:

    1. Use the 'kill' command. Direct, but not reliable, and it just seems somewhat "wrong".
    2. Use an NSMachPort to send an exit request from the desktop app to the background process. This is the best approach I've thought of, but I've run into an implementation problem (I'll be posting this in a separate query) and I'd like to be sure that the approach is right before going much further.
    3. Something else?

    Thank you in advance for any help/insight that you can offer.


  • file_operations question: how do I know if a process that opened a file for writing has decided to close it?

    - by djTeller
    Hi kernel gurus, I'm currently writing a simple "multicaster" module. Only one process can open a proc filesystem file for writing, and the rest can open it for reading. To do so I use the inode_operations .permission callback: I check the operation and, when I detect that someone has opened the file for writing, I set a flag ON. I need a way to detect when the process that opened the file for writing has decided to close it, so I can set the flag OFF and someone else can open it for writing. Currently, when someone opens the file for writing, I save the current->pid of that process, and when the .close callback is called I check whether the calling process is the one I saved earlier. Is there a better way to do that, without saving the pid? Perhaps checking the files that the current process has opened and their permissions... Thanks!


  • How to force two processes to run on the same CPU?

    - by kovan
    Context: I'm programming a software system that consists of multiple processes. It is programmed in C++ under Linux, and the processes communicate among themselves using Linux shared memory. As usual in software development, the performance optimization was left to the final stage, and here I came to a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one CPU), it was only able to use 3 cores, thus wasting 25% of the CPU power on the former and more than 60% on the latter. After much research, and having discarded mutex and lock contention, I found out that the time was being wasted on shmdt/shmat calls (detach and attach to shared memory segments). After some more research, I found out that these CPUs, which usually are AMD Opteron and Intel Xeon, use a memory architecture called NUMA, which basically means that each processor has its own fast "local memory", and accessing memory belonging to other CPUs is expensive.

    After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread in them. This seems to kill performance, as processes are constantly accessing memory from other processes.

    Question: Now, the question is, is there any way to force pairs of processes to execute on the same CPU? I don't mean to force them to always execute on the same processor, as I don't care which one they are executed on, although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule this "brother" process (which is the process with which it communicates through shared memory) on that same processor, so that performance is not penalized.
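
    There is no scheduler hint that pairs processes quite like that, as far as I know, but a common blunt workaround is to pin both processes to the same core set explicitly, so their shared-memory pages stay on one NUMA node. Below is a minimal sketch using Python's wrapper around the Linux sched_setaffinity(2) call; the PIDs and CPU numbers are placeholders, and the same call is available from C++ (or at launch time with taskset):

      import os

      # Hypothetical PIDs of the two communicating processes; in practice you
      # would discover them (e.g. from a pidfile) rather than hard-code them.
      pid_a, pid_b = 12345, 12346

      # Restrict both processes to the same cores (placeholder CPU numbers),
      # so the kernel can only ever schedule them on that NUMA node.
      same_cores = {2, 3}
      os.sched_setaffinity(pid_a, same_cores)
      os.sched_setaffinity(pid_b, same_cores)

      # Verify the new affinity masks.
      print(os.sched_getaffinity(pid_a), os.sched_getaffinity(pid_b))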


  • Automating the ftp command-line application by redirecting I/O in .NET

    - by SoMoS
    Hello, I'm trying to automate the ftp client that Windows includes by redirecting the I/O of the process. What I'm doing is starting the process from my application and trying to read what the client prints on the screen while sending my commands to it. The problem is that I can not read almost any of the data sent by the ftp client: some data is present but most of it is not read. This is the code I have so far:

      Public Sub Start()
          process = New Diagnostics.Process()
          process.StartInfo.FileName = "ftp.exe" ' The command is on the path
          process.StartInfo.CreateNoWindow = True
          process.StartInfo.RedirectStandardInput = True
          process.StartInfo.RedirectStandardOutput = True
          process.StartInfo.UseShellExecute = False
          process.Start()
          process.StandardInput.AutoFlush = True
          process.BeginOutputReadLine()
      End Sub

      ' Takes data from the stdout
      Private Sub process_OutputDataReceived(ByVal sender As Object, ByVal e As System.Diagnostics.DataReceivedEventArgs) Handles process.OutputDataReceived
          ' At this moment here there is code to show the stdout at a textbox
      End Sub

      ' Sends data to stdin
      Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click
          process.StandardInput.WriteLine(Me.TextEdit1.Text)
      End Sub

    Now when I execute this and, for example, send ?, I just get the first line (and I should get a lot more). Or when I send the open command I should receive an A, but nothing is received. Any ideas? Another question: when a console application writes to the screen, it always does so by writing to stdout or stderr, doesn't it?


  • Bash Array Problem

    - by Deepak Prasanna
    I wrote a bash script which tries to find a process and run the process if it had stopped. This is the script:

      #!/bin/bash
      process=thin
      path=/home/deepak/abc/
      initiate=thin start -d
      process_id=`ps -ef | pgrep $process | wc -m`
      if [ "$process_id" -gt "0" ]; then
          echo "The process process is running!!"
      else
          cd $path
          $initiate
          echo "Oops the process has stopped"
      fi

    This worked fine, and I thought of using arrays so that I can form a loop and use this script to check multiple processes. So I modified my script like this:

      #!/bin/bash
      process[1]=thin
      path[1]=/home/deepak/abc/
      initiate[1]=thin start -d
      process_id=`ps -ef | pgrep $process[1] | wc -m`
      if [ "$process_id" -gt "0" ]; then
          echo "Hurray the process ${process[1]} is running!!"
      else
          cd ${path[1]}
          ${initiate[1]}
          echo "Oops the process has stopped"
          echo "Continue your coffee, the process has been stated again! ;)"
      fi

    I get this error if I run this script:

      DontWorry.sh: 2: process[1]=thin: not found
      DontWorry.sh: 3: path[1]=/home/deepak/abc/: not found
      DontWorry.sh: 4: initiate[1]=thin start -d: not found

    I googled for a solution; most of them insisted on using "#!/bin/bash" instead of "#!/bin/sh". I tried both but nothing worked. What am I missing?


  • "RewriteBase: argument is not a valid URL" error

    - by user305434
    Hi, I'm trying to configure the .htaccess of my website. http://213.175.210.49/~incisozl/ is the temporary URL to the root (~/public_html/). When I try to rewrite the URL in .htaccess I get the following error:

      /home/incisozl/public_html/.htaccess: RewriteBase: argument is not a valid URL, referer: http://213.175.210.49/~incisozl/inci-sozluk/somestring

    My rewrite rules are:

      RewriteEngine On
      RewriteBase /
      RewriteRule ^/?$ /index.php [L]
      RewriteRule ^inci-sozluk/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=word&q=$1&sayfa=$2 [L]
      RewriteRule ^inci-sozluk/([^.\?/]+)?$ /seo.php?process=word&q=$1 [L]
      RewriteRule ^inci-sozluk/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=word&q=$1&sayfa=$2&gid=$3 [L]
      RewriteRule ^inci-sozluktest/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=wordtest&q=$1&sayfa=$2&gid=$3 [L]
      RewriteRule ^inci-sozluk-bugun/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=wordbg&q=$1&sayfa=$2 [L]
      RewriteRule ^inci-sozluk-bugun/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=wordbg&q=$1&sayfa=$2&gid=$3 [L]
      RewriteRule ^inci-sozluk-dun/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=worddn&q=$1&sayfa=$2&gid=$3 [L]
      RewriteRule ^inci-sozluk-dun/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=worddn&q=$1&sayfa=$2 [L]
      RewriteRule ^inci-sozluk-ters/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=wordts&q=$1&sayfa=$2 [L]
      RewriteRule ^inci-sozluk-ters/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=wordts&q=$1&sayfa=$2&gid=$3 [L]
      RewriteRule ^inci-sozluk-cvpters/([^.\?/]+)/([0-9]+)/?$ /seo.php?process=cvpwordts&q=$1&sayfa=$2 [L]
      RewriteRule ^inci-sozluk-cvpters/([^.\?/]+)/([0-9]+)/([0-9]+)/?$ /seo.php?process=cvpwordts&q=$1&sayfa=$2&gid=$3 [L]
      RewriteRule ^inci-sozluk-ileti/([0-9]+)/?$ /seo.php?process=eid&eid=$1 [L]
      RewriteRule ^inci-sozluk-ileticvp/([0-9]+)/?$ /seo.php?process=cvpeid&eid=$1 [L]

    By the way, it works fine when I use it with the www.incisozluk.org pointed domain.


  • Python subprocess Popen.communicate() equivalent to Popen.stdout.read()?

    - by Christophe
    Very specific question (I hope): what are the differences between the following three pieces of code? (I expect it to be only that the first does not wait for the child process to finish, while the second and third do, but I need to be sure this is the only difference...)

    I also welcome other remarks/suggestions (though I'm already well aware of the shell=True dangers and cross-platform limitations). Note that I already read "Python subprocess interaction, why does my process work with Popen.communicate, but not Popen.stdout.read()?" and that I do not want/need to interact with the program afterwards. Also note that I already read "Alternatives to Python Popen.communicate() memory limitations?" but that I didn't really get it...

    First code:

      from subprocess import Popen, PIPE

      def exe_f(command='ls -l', shell=True):
          "Function to execute a command and return stuff"
          process = Popen(command, shell=shell, stdout=PIPE, stderr=PIPE)
          stdout = process.stdout.read()
          stderr = process.stderr.read()
          return process, stderr, stdout

    Second code:

      from subprocess import Popen, PIPE

      def exe_f(command='ls -l', shell=True):
          "Function to execute a command and return stuff"
          process = Popen(command, shell=shell, stdout=PIPE, stderr=PIPE)
          (stdout, stderr) = process.communicate()
          return process, stderr, stdout

    Third code:

      from subprocess import Popen, PIPE

      def exe_f(command='ls -l', shell=True):
          "Function to execute a command and return stuff"
          process = Popen(command, shell=shell, stdout=PIPE, stderr=PIPE)
          code = process.wait()
          stdout = process.stdout.read()
          stderr = process.stderr.read()
          return process, stderr, stdout

    Thanks.
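
    As an aside, one practical difference beyond waiting is deadlock behavior: sequential .read() calls on stdout and stderr can hang once the child fills the other pipe's buffer, while .communicate() drains both pipes concurrently. A small self-contained illustration (not from the question; the child here just floods both streams):

      import sys
      from subprocess import Popen, PIPE

      # The child writes ~1 MB to stderr, then to stdout. Reading stdout first
      # with .read() would block forever: the parent waits for stdout EOF while
      # the child is stuck writing to the full stderr pipe.
      child_code = ("import sys; sys.stderr.write('e' * 1000000); "
                    "sys.stdout.write('o' * 1000000)")

      p = Popen([sys.executable, '-c', child_code], stdout=PIPE, stderr=PIPE)
      out, err = p.communicate()  # safe: drains both pipes, then waits
      print(len(out), len(err), p.returncode)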


  • How to optimize an ASP.NET app spawning a new process for each request?

    - by Recycle Bin
    I have an ASP.NET MVC application that spawns a Process as follows:

      Process p = new Process();
      p.EnableRaisingEvents = true;
      p.Exited += new EventHandler(p_Exited);
      p.StartInfo.Arguments = "-interaction=nonstopmode " + inputpath;
      p.StartInfo.WorkingDirectory = dir;
      p.StartInfo.UseShellExecute = false;
      p.StartInfo.FileName = "pdflatex.exe";
      p.StartInfo.LoadUserProfile = true;
      p.Start();
      p.WaitForExit();

    Before going further, I need to know whether, e.g., pdflatex.exe is managed code or native code. Edit: I need to consider this because (hopefully I am not wrong...):

    1. Each ASP.NET application runs in a separate/isolated AppDomain, as opposed to a separate/isolated process.
    2. A native executable cannot live in an AppDomain.

    To be continued... In short, I hope my site does not spawn a new process for each request, because a process is more expensive than an application domain.


  • How to process payments for software (activation code)?

    - by jsoldi
    I want to sell software online and I need an easy-to-implement payment processing system. What I'm actually going to be selling is an activation code (one per purchase) that would activate the trial version of a product. I was about to use this one, but I just found out that people without a paid email account (i.e. with only a Hotmail or Yahoo address) can't process their orders, which I'm sure would discourage many, if not most, of the possible buyers.


  • Update in Certification Exam Score Report Access Process Now Live!

    - by Cinzia Mascanzoni
    Exam results will no longer be available at the test center or on the Pearson VUE website. Candidates will receive an email from Oracle within 30 minutes of completing the exam to let them know that their exam results are available on CertView. Candidates must have an Oracle Web Account to access CertView. This new process applies to exam results for all Oracle Certification exams - proctored and non-proctored, as well as beta exams. Learn more here.


  • Getting rsync to move files from source to destination?

    - by fabien-barbier
    Is rsync a good choice for my project? I have to:

    - copy files from a source to a destination folder via SSH,
    - be sure all files are copied,
    - delete the source files after the copy,
    - rename files if there is a name conflict.

    It looks like I can use the option --remove-source-files (to delete source files), but how does rsync manage conflicts? Can I add rules? (A sketch of the basic transfer is shown below.)

    Use case on my project: I run a scientific calculation on server A and the results are inserted into the folder "process"; for each calculation I have a repository like this: /process/calc1. Now I would like to transfer the repository "/calc1" to server B (getting /process/calc1 there), and delete "calc1" from server A. During another calculation I get "/process/calc2" on server A; the idea is also to move "calc2" into the "/process/" directory on server B, so that I then have on server B:

    - /process/calc1
    - /process/calc2

    (and /process/ on server A is empty). How will rsync manage the conflict (on server B) if, after a new calculation, I have another folder like "/process/calc1" on server A while "/process/calc1" already exists on server B? Is it possible to add rules to rsync and rename "/process/calc1" to "/process/calc1R2" on server B? And so on (e.g. calc1R3)? Thanks.
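
    For what it's worth, here is a minimal sketch of the move-style transfer itself (hostnames and paths follow the question; wrapped in Python purely for illustration). Note that stock rsync overwrites colliding names rather than renaming them - the closest built-in is --backup/--suffix, which renames the destination's old copy - so versioned names like calc1R2 would need custom logic around the transfer:

      import subprocess

      # Transfer the finished calculation folders from server A to server B
      # over SSH, deleting each source file once it has been copied.
      cmd = [
          "rsync", "-av",
          "--remove-source-files",  # delete source files after a successful copy
          "-e", "ssh",
          "/process/", "serverB:/process/",  # "serverB" is a placeholder hostname
      ]
      subprocess.run(cmd, check=True)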


  • How to tell Linux to explicitly swap out main memory of a suspended process?

    - by Vi
    I run a memory-hungry process (mkcromfs) which consumes more memory than I have physical memory on my laptop, so it is paging and swapping and thrashing all the time and the load average is about 2 (compcache is already in use, with a usual swap partition as well), but it is slowly moving forward (although I'm afraid it will finally try to allocate 2GB and crash, draining 2 days of thrashing). When I want to use the laptop for something else, I stop the process, then start the X server, Firefox and other programs. The problem is that when I start Firefox the load average jumps to 10 and the system becomes almost completely unresponsive (a long time to turn caps lock on/off, slow mouse cursor position updates, slow switching from the X server to the Linux console, slow login). The stopped mkcromfs still holds a lot of memory (464.8 MiB and slowly falling) and moves it to swap only when more memory is needed for some other program, which results in a great slowdown. How can I tell Linux to swap out this process entirely (e.g. I'm not intending to resume it in the short term), possibly waking other data from swap? It would also be useful to be able to specify the exact swap device to swap the given process out to.


  • Killing a process which has run for too long or is using too much memory

    - by Vedant Terkar
    I am not sure whether this question belongs on Stack Overflow or here, but here we go. I am designing an online 'C' compiler, which will compile and invoke a program if compilation succeeds. Here is the code I am using for that:

      $str = shell_exec("gcc path/to/file.c -o path/to/file.exe 2>&1");
      if (file_exists("path/to/file.exe")) {
          $res = shell_exec("path/to/file.exe < inputfile 2>&1");
          echo $res;
      }

    This seems to work fine with simple program files, but when file.c (the submitted source code) contains an infinite loop, this script crashes the server and uses a lot of memory and time. So here are my questions (a sketch of one approach follows this list):

    1. Is there any way to detect how long the file.exe process has been running?
    2. How much memory is being used by that process?
    3. Is there any way to kill the file.exe process if its memory or time utilization goes beyond a certain limit? That is, if we allocate at most 2.5 s of time and 40 MB of memory for file.exe, and either of those two constraints is violated, we should display an appropriate error message to the client. Is this possible?

    I am using WAMP (Windows 7).
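
    One hedged illustration of the watchdog idea - in Python with the third-party psutil package rather than PHP, purely to show the polling approach; the limits are the 2.5 s / 40 MB from the question and the paths are placeholders:

      import subprocess, time
      import psutil  # third-party: pip install psutil

      TIME_LIMIT = 2.5           # seconds
      MEM_LIMIT = 40 * 1024**2   # 40 MB

      with open("inputfile") as stdin:
          proc = subprocess.Popen(["path/to/file.exe"], stdin=stdin,
                                  stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
      watcher = psutil.Process(proc.pid)
      start = time.time()
      while proc.poll() is None:
          try:
              mem = watcher.memory_info().rss
          except psutil.NoSuchProcess:
              break  # child exited between checks
          # Kill the child if it exceeds either the time or the memory limit.
          if time.time() - start > TIME_LIMIT or mem > MEM_LIMIT:
              proc.kill()
              print("ALERT: time or memory limit exceeded")
              break
          time.sleep(0.05)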

    Read the article

  • A process serving application pool 'X' reported a failure. The process id was 'Y'. The data field contains the error number

    - by born to hula
    I have a WCF web service which is kept under an application pool on IIS. Lately I've been getting "Service Unavailable" when I try to make calls to this web service. The first thing I tried was restarting the application pool; I did, and after a couple of seconds it crashed and stopped. Looking at the Event Viewer, I found these messages, which so far haven't helped me find where the problem is:

      A process serving application pool 'X' reported a failure. The process id was '11616'. The data field contains the error number. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    After getting a couple of these, I got this one:

      Application pool 'X' is being automatically disabled due to a series of failures in the process(es) serving that application pool. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    I've already checked permissions and application pool configurations but everything seems to be OK. Has anyone been through this? Thanks in advance.


  • How to make a script for the ESXi VM rebooting process

    - by user116374
    I'm using ESXi 5.0 and I have created a lab for Metasploit. Once my system is exploited, I have to restart my VMs, which is too much of a headache since I have to execute lots of exploits, and I don't want to do the same restart process again and again. Maybe it is possible through a script, but again I'm stuck: I don't know how to write a script for this process. Please tell me how to write a script to automate the restart process every 10 or 20 minutes, and how to execute that script. Please share your knowledge if you know; this script would be very, very useful for me. Also tell me any other way if you know one. Ty. Alen.


  • How to safely remove a device blocked by the System process with a handle on \$Extend\$RmMetadata\$Txf

    - by Heinzi
    I have an external HDD which I would like to "safely remove". Unfortunately, my system (Windows 7 x64) complains that "the device is currently in use". Using Process Explorer I discovered which process is holding a handle on the device: the System process, via a handle on \$Extend\$RmMetadata\$Txf. Obviously, System is not a process that I can just kill and be done with it. I've done a bit of research and this seems to be a common problem, but no solution has been found so far (except for rebooting the machine, which I'd like to avoid if possible). Is there any solution to this problem that I've missed?


  • Can I send some text to the STDIN of an active process running in a screen session?

    - by Richard Gaywood
    I have a long-running server process inside a screen session on my Linux server. It's a bit unstable (and sadly not my software so I can't fix that!), so I want to script a nightly restart of the process to help stability. The only way to make it do a graceful shutdown is to go to the screen process, switch to the window it's running in, and enter the string "stop" on its control console. Are there any smart redirection contortions I can do to make a cronjob send that stop command at a fixed time every day?
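
    The usual trick here is screen's own "stuff" command, which types a string into a given window of a detached session and is easy to call from cron. A minimal sketch, wrapped in Python for consistency with the other examples on this page (the session name "server" and window number 0 are placeholders):

      import subprocess

      # Equivalent to: screen -S server -p 0 -X stuff 'stop\n'
      # -S selects the session, -p the window, and "stuff" types the string
      # into that window as if it had been entered at its console.
      subprocess.run(
          ["screen", "-S", "server", "-p", "0", "-X", "stuff", "stop\n"],
          check=True,
      )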

