Search Results

Search found 5597 results on 224 pages for 'worker processes'.

Page 30 of 224

  • Python-daemon doesn't kill its kids

    - by Brian M. Hunt
    When using python-daemon, I'm creating subprocesses like so:

        import multiprocessing
        import daemon  # python-daemon

        class Worker(multiprocessing.Process):
            def __init__(self, queue):
                self.queue = queue  # we wait for things from this in Worker.run()
            ...

        q = multiprocessing.Queue()

        with daemon.DaemonContext():
            for i in xrange(3):
                Worker(q)
            while True:  # let the Workers do their thing
                q.put(_something_we_wait_for())

    When I kill the parent daemonic process (i.e. not a Worker) with a Ctrl-C or SIGTERM, etc., the children don't die. How does one kill the kids? My first thought is to use atexit to kill all the workers, like so:

        import atexit

        with daemon.DaemonContext():
            workers = list()
            for i in xrange(3):
                workers.append(Worker(q))

            @atexit.register
            def kill_the_children():
                for w in workers:
                    w.terminate()

            while True:  # let the Workers do their thing
                q.put(_something_we_wait_for())

    However, the children of daemons are tricky things to handle, and I'd be obliged for thoughts and input on how this ought to be done. Thank you.
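
    A sketch of one way to tie the workers' lifetime to the parent (my own illustration, not from the question; it assumes the Worker class and queue q above, and that the workers are actually started):

        # Hedged sketch: daemon=True makes multiprocessing terminate the
        # children when the parent exits cleanly; the SIGTERM handler turns
        # a termination signal into that clean exit.
        import signal
        import sys

        workers = []
        for i in range(3):
            w = Worker(q)
            w.daemon = True      # must be set before start()
            w.start()
            workers.append(w)

        def shutdown(signum, frame):
            for w in workers:
                w.terminate()    # sends SIGTERM to each child
            sys.exit(0)          # runs atexit/multiprocessing cleanup

        signal.signal(signal.SIGTERM, shutdown)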

    Read the article

  • Inside BackgroundWorker

    - by João Angelo
    The BackgroundWorker is a reusable component that can be used in different contexts, but sometimes with unexpected results. If you are like me, you have mostly used background workers while doing Windows Forms development due to the flexibility they offer for running a background task. They support cancellation and raise events that signal progress updates and task completion. When used in Windows Forms, these events (ProgressChanged and RunWorkerCompleted) get executed back on the UI thread, where you can freely access your form controls. However, having the progress changed and worker completed events invoked on the thread that started the background worker is not something you get directly from the BackgroundWorker, but instead from the fact that you are running in the context of Windows Forms. Take the following example, which illustrates the use of a worker in three different scenarios:

    - Console Application or Windows Service;
    - Windows Forms;
    - WPF.

        using System;
        using System.ComponentModel;
        using System.Threading;
        using System.Windows.Forms;
        using System.Windows.Threading;

        class Program
        {
            static AutoResetEvent Synch = new AutoResetEvent(false);

            static void Main()
            {
                var bw1 = new BackgroundWorker();
                var bw2 = new BackgroundWorker();
                var bw3 = new BackgroundWorker();

                Console.WriteLine("DEFAULT");

                var unspecializedThread = new Thread(() =>
                {
                    OutputCaller(1);
                    SynchronizationContext.SetSynchronizationContext(
                        new SynchronizationContext());
                    bw1.DoWork += (sender, e) => OutputWork(1);
                    bw1.RunWorkerCompleted += (sender, e) => OutputCompleted(1);
                    // Uses default SynchronizationContext
                    bw1.RunWorkerAsync();
                });
                unspecializedThread.IsBackground = true;
                unspecializedThread.Start();
                Synch.WaitOne();

                Console.WriteLine();
                Console.WriteLine("WINDOWS FORMS");

                var windowsFormsThread = new Thread(() =>
                {
                    OutputCaller(2);
                    SynchronizationContext.SetSynchronizationContext(
                        new WindowsFormsSynchronizationContext());
                    bw2.DoWork += (sender, e) => OutputWork(2);
                    bw2.RunWorkerCompleted += (sender, e) => OutputCompleted(2);
                    // Uses WindowsFormsSynchronizationContext
                    bw2.RunWorkerAsync();
                    Application.Run();
                });
                windowsFormsThread.IsBackground = true;
                windowsFormsThread.SetApartmentState(ApartmentState.STA);
                windowsFormsThread.Start();
                Synch.WaitOne();

                Console.WriteLine();
                Console.WriteLine("WPF");

                var wpfThread = new Thread(() =>
                {
                    OutputCaller(3);
                    SynchronizationContext.SetSynchronizationContext(
                        new DispatcherSynchronizationContext());
                    bw3.DoWork += (sender, e) => OutputWork(3);
                    bw3.RunWorkerCompleted += (sender, e) => OutputCompleted(3);
                    // Uses DispatcherSynchronizationContext
                    bw3.RunWorkerAsync();
                    Dispatcher.Run();
                });
                wpfThread.IsBackground = true;
                wpfThread.SetApartmentState(ApartmentState.STA);
                wpfThread.Start();
                Synch.WaitOne();
            }

            static void OutputCaller(int workerId)
            {
                Console.WriteLine(
                    "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}",
                    workerId,
                    "RunWorkerAsync".PadRight(18),
                    Thread.CurrentThread.ManagedThreadId,
                    Thread.CurrentThread.IsThreadPoolThread);
            }

            static void OutputWork(int workerId)
            {
                Console.WriteLine(
                    "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}",
                    workerId,
                    "DoWork".PadRight(18),
                    Thread.CurrentThread.ManagedThreadId,
                    Thread.CurrentThread.IsThreadPoolThread);
            }

            static void OutputCompleted(int workerId)
            {
                Console.WriteLine(
                    "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}",
                    workerId,
                    "RunWorkerCompleted".PadRight(18),
                    Thread.CurrentThread.ManagedThreadId,
                    Thread.CurrentThread.IsThreadPoolThread);
                Synch.Set();
            }
        }

    Output:

        // DEFAULT
        // bw1.RunWorkerAsync     | Thread: 3 | IsThreadPool: False
        // bw1.DoWork             | Thread: 4 | IsThreadPool: True
        // bw1.RunWorkerCompleted | Thread: 5 | IsThreadPool: True

        // WINDOWS FORMS
        // bw2.RunWorkerAsync     | Thread: 6 | IsThreadPool: False
        // bw2.DoWork             | Thread: 5 | IsThreadPool: True
        // bw2.RunWorkerCompleted | Thread: 6 | IsThreadPool: False

        // WPF
        // bw3.RunWorkerAsync     | Thread: 7 | IsThreadPool: False
        // bw3.DoWork             | Thread: 5 | IsThreadPool: True
        // bw3.RunWorkerCompleted | Thread: 7 | IsThreadPool: False

    As you can see, the output of the first scenario differs from the remaining two. While in Windows Forms and WPF the worker completed event runs on the thread that called RunWorkerAsync, in the first scenario the same event runs on any thread available in the thread pool. Another scenario where you can get the first behavior, even in Windows Forms or WPF, is if you chain the creation of background workers, that is, you create a second worker in the DoWork event handler of an already running worker. Since DoWork executes on a thread from the pool, the second worker will use the default synchronization context and its completed event will not run on the UI thread.
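
    That chaining pitfall is easy to reproduce; a minimal sketch (my illustration, not from the original post):

        // Hedged sketch: the inner worker is created on a thread-pool thread,
        // where SynchronizationContext.Current is the default context, so its
        // RunWorkerCompleted fires on the pool, not on the UI thread.
        var outer = new BackgroundWorker();
        outer.DoWork += (s, e) =>
        {
            var inner = new BackgroundWorker();
            inner.DoWork += (s2, e2) => { /* background work */ };
            inner.RunWorkerCompleted += (s2, e2) =>
            {
                // NOT the UI thread, even in a Windows Forms or WPF app
            };
            inner.RunWorkerAsync();
        };
        outer.RunWorkerAsync(); // started from the UI thread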

    Read the article

  • The Best BPM Journey: More Exciting Destinations with Process Accelerators

    - by Cesare Rotundo
    Oracle Open World (OOW) earlier this month was a great occasion to talk with our BPM customers. It was interesting to hear definite patterns emerging from those conversations: "BPM is a journey", "experiences to share", "our organization now understands what BPM is", and my favorite (with some caveats): "BPM is like wine tasting; once you start, you want to try more".

    These customers have started their journey, climbed the learning curve, and reached a vantage point that allows them to see their next BPM destination. They see the next few processes they are going to tackle and improve with BPM. These processes/destinations target both horizontal processes, where BPM replaces or coordinates manual activities, and critical industry processes that the company needs to improve to compete and deliver increasing value. Each new destination generates value, allowing the organization to reduce the cost of manual processes that were not supported by apps/custom development, and to increase the efficiency of end-to-end processes partially covered by apps/custom dev.

    The question we wanted to answer is how to help organizations experience deeper success with BPM, by increasing their awareness of the potential for reaching new targets and equipping them with the right tools. We decided that we needed to identify destinations, and plot routes to show the fastest path to those destinations. In the end we want to enable customers to reach "Process Excellence": continuously set new targets and consistently and efficiently reach them.

    The result is Oracle Process Accelerators (PA), solutions built using the rich functionality in Oracle BPM Suite. PAs offer a rapidly expanding list of exciting destinations. Our launch of the latest installment of Process Accelerators at Oracle Open World includes new industry-focused solutions such as Public Sector Incident Reporting and Financial Services Loan Origination, and improvements to other horizontal PAs, including Travel Request Management, Document Routing and Approval, and Internal Service Requests.

    Just before OOW we had extended the Oracle deployment of Travel Request Management, riding the enthusiastic response from early adopters among travelers (employees), management, and support (approvers). "Getting there first" means being among the first to extract value from the PA approach, while acquiring deeper insights into the customers' perspective. This is especially noteworthy when it comes to PAs, a set of solutions designed to be quickly deployed and iteratively improved by customers.

    The OOW launch has generated immediate feedback from customers, non-customers, analysts, and partners. They all confirmed that both Business and IT benefit from PAs when exploring the potential for BPM to improve their business processes. PAs help customers visualize what can be done with BPM, and PAs are made to be extended: you can see your destination, change the path to fit your needs, and deploy. We're discovering new destinations/processes that the market wants us to support, generic enough both across and within industries. We'll keep building sets of requirements, delivering functional designs, constructing solutions using Oracle BPM, and testing them not only functionally but for performance, scalability, and clustering, making them robust and product-quality. Delivering BPM solutions with product-grade quality is the equivalent of following a tried-and-tested path on a map.

    Do you know of existing destinations in your industry? If yes, we can draw a path to innovative processes together.

    Read the article

  • Jetty: Mount to a directory on a different host

    - by jettyQuestion
    I'm looking to map to a directory on a different host using Jetty/Maven when working locally. I've found you can do this with Apache using mod_jk (JkMount/JkUnMount), but haven't figured out how to do the same on Jetty. On our dev/q/live servers we have Apache in front of JBoss and use mod_jk to do this; locally, we're using Jetty. To give you an idea of what I'm talking about, this is how you would configure Apache to accomplish it, in httpd.conf:

        JkMount /images/* host2
        JkMount /* host2
        JkUnMount /images/* host1

    And in workers.properties:

        worker.list=host2,host1
        worker.host2.host=host-2.theDomain.com
        worker.host2.port=46654
        worker.host1.host=host-1.theDomain.com
        worker.host1.port=46655

    Is there a way to configure Jetty to do the same thing? By the way, locally I'm using the Maven plugin for Eclipse, if that makes a difference. Thanks!
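
    One direction worth sketching (an assumption on my part, not verified against any particular Jetty release: the servlet class and init-param names have changed across Jetty versions) is Jetty's transparent proxy servlet, mapped in web.xml much like a JkMount:

        <!-- Hedged sketch: class and parameter names vary by Jetty version -->
        <servlet>
          <servlet-name>images-proxy</servlet-name>
          <servlet-class>org.eclipse.jetty.servlets.ProxyServlet$Transparent</servlet-class>
          <init-param>
            <param-name>ProxyTo</param-name>
            <param-value>http://host-2.theDomain.com:46654/images</param-value>
          </init-param>
          <init-param>
            <param-name>Prefix</param-name>
            <param-value>/images</param-value>
          </init-param>
        </servlet>
        <servlet-mapping>
          <servlet-name>images-proxy</servlet-name>
          <url-pattern>/images/*</url-pattern>
        </servlet-mapping>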

    Read the article

  • background jobs and ssh connections

    - by petrelharp
    This question has come up quite a lot (really a lot), but I'm finding the answers to be generally incomplete. The general question is "Why does/doesn't my job get killed when I exit/kill ssh?", and here's what I've found. The first question is: how general is the following information? The following seems to be true for modern Debian Linux, but I am missing some bits; and what do others need to know?

    1. All child processes, backgrounded or not, of a shell opened over an ssh connection are killed with SIGHUP when the ssh connection is closed, but only if the huponexit option is set: run shopt huponexit to see whether it is.
    2. If huponexit is true, then you can use nohup or disown to dissociate the process from the shell so it does not get killed when you exit.
    3. If huponexit is false, which is the default on at least some Linuxes these days, then backgrounded jobs will not be killed on normal logout. But even if huponexit is false, if the ssh connection gets killed or drops (different from normal logout), backgrounded processes will still get killed. This can be avoided by disown or nohup as in (2).
    4. There is some distinction between (a) processes whose parent process is the terminal and (b) processes that have stdin, stdout, or stderr connected to the terminal. I don't know what happens to processes that are (a) and not (b), or vice versa.

    Final question: how can I avoid behavior (3)? In other words, by default in Debian backgrounded processes run along merrily by themselves after logout, but not after the ssh connection is killed. I'd like the same thing to happen to processes regardless of whether the connection was closed normally or killed. Or is this a bad idea?
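
    For reference, the detachment idioms referred to in (2) look like this in bash (the job name is a placeholder):

        # start a job immune to the shell's SIGHUP from the outset
        nohup long_running_job &

        # or detach an already-backgrounded job from the shell's job table
        long_running_job &
        disown

        # check whether bash will send SIGHUP to its jobs on exit
        shopt huponexit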

    Read the article

  • passenger and apache memory usage

    - by Brent Faulkner
    On a "CentOS release 6.2 (Final)" server (with Ruby 1.9.3 and Rails 3.2), and using more memory than expected. Looking at passenger-memory-stats I see a couple of HUGE httpd processes... any thoughts on how I can figure out what's going on and reduce the memory usage? Stats are included here... thanks! ---------- Apache processes ----------- PID PPID VMSize Private Name --------------------------------------- 1371 1 202.1 MB 0.1 MB /usr/sbin/httpd 4573 1371 210.2 MB 5.0 MB /usr/sbin/httpd 4778 1371 202.5 MB 0.6 MB /usr/sbin/httpd 4780 1371 217.6 MB 9.4 MB /usr/sbin/httpd 4781 1371 217.1 MB 9.1 MB /usr/sbin/httpd 4856 1371 202.4 MB 0.5 MB /usr/sbin/httpd 4863 1371 204.1 MB 2.1 MB /usr/sbin/httpd 5027 1371 202.4 MB 0.5 MB /usr/sbin/httpd 5043 1371 202.4 MB 0.4 MB /usr/sbin/httpd 5044 1371 205.5 MB 2.7 MB /usr/sbin/httpd 5072 1371 202.4 MB 0.5 MB /usr/sbin/httpd 5084 1371 202.4 MB 0.5 MB /usr/sbin/httpd 32111 1371 1297.0 MB 246.5 MB /usr/sbin/httpd 32579 1371 1914.3 MB 215.5 MB /usr/sbin/httpd ### Processes: 14 ### Total private dirty RSS: 493.42 MB -------- Nginx processes -------- ### Processes: 0 ### Total private dirty RSS: 0.00 MB ----- Passenger processes ----- PID VMSize Private Name ------------------------------- 4180 280.5 MB 24.4 MB Passenger ApplicationSpawner: /var/www/apps/people/current 4345 309.5 MB 53.4 MB Rack: /var/www/apps/people/current 4800 300.2 MB 55.2 MB Rack: /var/www/apps/people/current 4808 297.8 MB 52.5 MB Rack: /var/www/apps/people/current 4815 297.4 MB 52.4 MB Rack: /var/www/apps/people/current 4822 302.7 MB 55.6 MB Rack: /var/www/apps/people/current 22780 209.0 MB 0.0 MB PassengerWatchdog 22783 991.5 MB 1.3 MB PassengerHelperAgent 22785 113.4 MB 1.1 MB Passenger spawn server 22788 144.6 MB 0.0 MB PassengerLoggingAgent 22911 310.4 MB 64.0 MB Rack: /var/www/apps/people/current 22939 311.6 MB 53.5 MB Rack: /var/www/apps/people/current 26175 304.1 MB 55.8 MB Rack: /var/www/apps/people/current 26182 310.4 MB 44.0 MB Rack: /var/www/apps/people/current ### Processes: 14 ### Total private dirty RSS: 513.24 MB

    Read the article

  • RegLoadAppKey working fine on 32-bit OS, failing on 64-bit OS, even if both processes are 32-bit

    - by James Manning
    I'm using .NET 4 and the new RegistryKey.FromHandle call so I can take the hKey I get from opening a registry file with RegLoadAppKey and operate on it with the existing managed API. I thought at first it was just a matter of a busted DllImport and my call had an invalid type in the params or a missing MarshalAs or whatever, but looking at other registry functions and their DllImport declarations (for instance, on pinvoke.net), I don't see what else to try (I've had hKey returned as both int and IntPtr; both worked on a 32-bit OS and fail on a 64-bit OS). I've got it down to as simple a repro case as I can - it just tries to create a 'random' subkey then write a value to it. It works fine on my Win7 x86 box and fails on Win7 x64 and 2008 R2 x64, even when it's still a 32-bit process, even run from a 32-bit cmd prompt. EDIT: It also fails in the same way if it's a 64-bit process.

    On Win7 x86:

        INFO: Running as Admin in 32-bit process on 32-bit OS
        Was able to create Microsoft\Windows\CurrentVersion\RunOnceEx\a95b1bbf-7a04-4707-bcca-6aee6afbfab7 and write a value under it

    On Win7 x64, as 32-bit:

        INFO: Running as Admin in 32-bit process on 64-bit OS
        Unhandled Exception: System.UnauthorizedAccessException: Access to the registry key '\Microsoft\Windows\CurrentVersion\RunOnceEx\ce6d5ff6-c3af-47f7-b3dc-c5a1b9a3cd22' is denied.
           at Microsoft.Win32.RegistryKey.Win32Error(Int32 errorCode, String str)
           at Microsoft.Win32.RegistryKey.CreateSubKeyInternal(String subkey, RegistryKeyPermissionCheck permissionCheck, Object registrySecurityObj, RegistryOptions registryOptions)
           at Microsoft.Win32.RegistryKey.CreateSubKey(String subkey)
           at LoadAppKeyAndModify.Program.Main(String[] args)

    On Win7 x64, as 64-bit:

        INFO: Running as Admin in 64-bit process on 64-bit OS
        Unhandled Exception: System.UnauthorizedAccessException: Access to the registry key '\Microsoft\Windows\CurrentVersion\RunOnceEx\43bc857d-7d07-499c-8070-574d6732c130' is denied.
           at Microsoft.Win32.RegistryKey.Win32Error(Int32 errorCode, String str)
           at Microsoft.Win32.RegistryKey.CreateSubKeyInternal(String subkey, RegistryKeyPermissionCheck permissionCheck, Object registrySecurityObj, RegistryOptions registryOptions)
           at Microsoft.Win32.RegistryKey.CreateSubKey(String subkey, RegistryKeyPermissionCheck permissionCheck)
           at LoadAppKeyAndModify.Program.Main(String[] args)

    Source (usings added here for completeness):

        using System;
        using System.ComponentModel;
        using System.IO;
        using System.Runtime.InteropServices;
        using System.Security.Principal;
        using Microsoft.Win32;
        using Microsoft.Win32.SafeHandles;

        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("INFO: Running as {0} in {1}-bit process on {2}-bit OS",
                    new WindowsPrincipal(WindowsIdentity.GetCurrent()).IsInRole(WindowsBuiltInRole.Administrator)
                        ? "Admin" : "Normal User",
                    Environment.Is64BitProcess ? 64 : 32,
                    Environment.Is64BitOperatingSystem ? 64 : 32);

                if (args.Length != 1)
                {
                    throw new ApplicationException("Need 1 argument - path to the software hive file on disk");
                }

                string softwareHiveFile = Path.GetFullPath(args[0]);
                if (File.Exists(softwareHiveFile) == false)
                {
                    throw new ApplicationException("Specified file does not exist: " + softwareHiveFile);
                }

                // pick a random subkey so it doesn't already exist
                var keyPathToCreate = "Microsoft\\Windows\\CurrentVersion\\RunOnceEx\\" + Guid.NewGuid();

                var hKey = RegistryNativeMethods.RegLoadAppKey(softwareHiveFile);
                using (var safeRegistryHandle = new SafeRegistryHandle(new IntPtr(hKey), true))
                using (var appKey = RegistryKey.FromHandle(safeRegistryHandle))
                using (var runOnceExKey = appKey.CreateSubKey(keyPathToCreate))
                {
                    runOnceExKey.SetValue("foo", "bar");
                    Console.WriteLine("Was able to create {0} and write a value under it", keyPathToCreate);
                }
            }
        }

        internal static class RegistryNativeMethods
        {
            [Flags]
            public enum RegSAM
            {
                AllAccess = 0x000f003f
            }

            private const int REG_PROCESS_APPKEY = 0x00000001;

            // approximated from pinvoke.net's RegLoadKey and RegOpenKey
            // NOTE: changed return from long to int so we could do Win32Exception on it
            [DllImport("advapi32.dll", SetLastError = true)]
            private static extern int RegLoadAppKey(String hiveFile, out int hKey, RegSAM samDesired, int options, int reserved);

            public static int RegLoadAppKey(String hiveFile)
            {
                int hKey;
                int rc = RegLoadAppKey(hiveFile, out hKey, RegSAM.AllAccess, REG_PROCESS_APPKEY, 0);
                if (rc != 0)
                {
                    throw new Win32Exception(rc, "Failed during RegLoadAppKey of file " + hiveFile);
                }
                return hKey;
            }
        }

    Read the article

  • Boost.MultiIndex: Is there a way to share an object between two processes?

    - by Arman
    Hello, I have a big Boost.MultiIndex array, about 10 GB. In order to reduce the reading, I thought there should be a way to keep the data in memory so that other client programs will be able to read and analyse it. What is the proper way to organize it? The array looks like:

        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/multi_index/mem_fun.hpp>
        using namespace boost::multi_index;

        struct particleID
        {
            int ID;            // real ID for particle from Gadget2 file "ID" block
            unsigned int IDf;  // position in the file
            particleID(int id, const unsigned int idf) : ID(id), IDf(idf) {}
            bool operator<(const particleID& p) const { return ID < p.ID; }
            unsigned int getByGID() const { return (ID & 0x0FFF); }
        };

        struct ID{};
        struct IDf{};
        struct IDg{};

        typedef multi_index_container<
            particleID,
            indexed_by<
                ordered_unique<
                    tag<IDf>, BOOST_MULTI_INDEX_MEMBER(particleID, unsigned int, IDf)>,
                ordered_non_unique<
                    tag<ID>, BOOST_MULTI_INDEX_MEMBER(particleID, int, ID)>,
                ordered_non_unique<
                    tag<IDg>, BOOST_MULTI_INDEX_CONST_MEM_FUN(particleID, unsigned int, getByGID)>
            >
        > particlesID_set;

    Any ideas are welcome. Kind regards, Arman.
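
    One direction worth sketching (my own suggestion, not from the question): Boost.MultiIndex containers accept a custom allocator, so the container can live in a Boost.Interprocess segment that several processes open. For ~10 GB a managed_mapped_file is likely more practical than raw shared memory; names, sizes, and the trimmed two-index container below are illustrative.

        // Hedged sketch: the question's container rebuilt with a
        // shared-memory allocator so reader processes can attach to it.
        #include <boost/interprocess/managed_mapped_file.hpp>
        #include <boost/interprocess/allocators/allocator.hpp>
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/member.hpp>

        namespace bip = boost::interprocess;
        using namespace boost::multi_index;

        struct particleID {                       // as in the question, trimmed
            int ID;
            unsigned int IDf;
            particleID(int id, unsigned int idf) : ID(id), IDf(idf) {}
        };
        struct by_ID {}; struct by_IDf {};

        typedef bip::allocator<particleID, bip::managed_mapped_file::segment_manager>
            shm_allocator;

        typedef multi_index_container<
            particleID,
            indexed_by<
                ordered_unique<tag<by_IDf>,
                    BOOST_MULTI_INDEX_MEMBER(particleID, unsigned int, IDf)>,
                ordered_non_unique<tag<by_ID>,
                    BOOST_MULTI_INDEX_MEMBER(particleID, int, ID)>
            >,
            shm_allocator
        > shared_particles_set;

        int main()
        {
            // Writer and readers all open the same mapped file (path illustrative).
            bip::managed_mapped_file seg(bip::open_or_create, "/tmp/particles.db",
                                         12ULL * 1024 * 1024 * 1024);

            shared_particles_set* particles =
                seg.find_or_construct<shared_particles_set>("particles")(
                    shared_particles_set::ctor_args_list(),
                    shm_allocator(seg.get_segment_manager()));

            particles->insert(particleID(42, 7)); // writer side
        }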

    Read the article

  • How do ASP.NET processes, threads, and app pools work?

    - by Michel
    Hi, as I understand it, when I load an ASP.NET .aspx page on the (IIS) server, it's processed by the w3wp.exe worker process. But when IIS gets multiple requests, are they all processed by the same w3wp process? And does this process automatically use all my processors and cores? And after that: when I start a new thread in my page, this thread still works when the page has already been served to the client. Where does this thread live? Also in the w3wp.exe process? And what if I assign another app pool to my site, what does that do? Michel

    Read the article

  • How do I fork a maximum of 5 child processes of the parent at any one time?

    - by bstullkid
    I have the following code, with which I'm trying to allow a maximum of 5 children to run at a time, but I can't figure out how to decrement the child count when a child exits (includes added here for completeness):

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        struct {
            char *s1;
            char *s2;
        } s[] = {
            {"one", "oneB"},
            {"two", "twoB"},
            {"three", "thr4eeB"},
            {"asdf", "3th43reeB"},
            {"asdfasdf", "thr33eeB"},
            {"asdfasdfasdf", "thdfdreeB"},
            {"af3c3", "thrasdfeeB"},
            {"fec33", "threfdeB"},
            {NULL, NULL}
        };

        int main(int argc, char *argv[])
        {
            int i, im5, children = 0;
            int pid = fork();
            for (i = 0; s[i].s2; i++) {
                im5 = 0;
                switch (pid) {
                    case -1: {
                        printf("Error\n");
                        exit(255);
                    }
                    case 0: {
                        printf("%s -> %s\n", s[i].s1, s[i].s2);
                        if (i == 5)
                            im5 = 1;
                        printf("%d\n", im5);
                        sleep(i);
                        exit(1);
                    }
                    default: {
                        // Here is where I need to sleep the parent until children < 5,
                        // so where do I decrement children so that it gets modified
                        // in the parent process?
                        while (children > 5)
                            sleep(1);
                        children++;
                        pid = fork();
                    }
                }
            }
            return 1;
        }
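
    A sketch of the usual pattern (my own illustration, not from the question): the parent doesn't need a shared counter at all; blocking in wait(2) both throttles the loop and decrements the in-flight count as each child is reaped.

        /* Hedged sketch: cap concurrency at 5 by reaping children with wait(). */
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/wait.h>

        #define MAX_CHILDREN 5

        int main(void)
        {
            int running = 0;

            for (int i = 0; i < 8; i++) {       /* one child per work item */
                if (running >= MAX_CHILDREN) {
                    wait(NULL);                 /* blocks until one child exits */
                    running--;
                }
                pid_t pid = fork();
                if (pid < 0) {
                    perror("fork");
                    exit(255);
                }
                if (pid == 0) {                 /* child: do the work, then exit */
                    printf("child %d (pid %d)\n", i, (int)getpid());
                    sleep(i);
                    _exit(0);
                }
                running++;                      /* parent: one more in flight */
            }
            while (wait(NULL) > 0)              /* reap the stragglers */
                ;
            return 0;
        }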

    Read the article

  • How to synchronize placing an object in JNDI across processes?

    - by indra
    I am trying to place an object in JNDI so that only one of the programs is able to place it there. Is there any global lock that can be used in a J2EE environment? Can RMI be used for this purpose? Please provide any reference links. Thanks in advance.

    Also, what is NameAlreadyBoundException? I am trying to use it as a method to synchronize, i.e., only one program places the object in JNDI and any other trying to bind should get that exception. But when I test multiple bindings I am not getting the exception, the second binding succeeds, and lookup returns the second object bound. Here is my code:

        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NameAlreadyBoundException;
        import javax.naming.NamingException;

        public class TestJNDI {
            private static String JNDI_NAME = "java:comp/env/test/something";

            public static void main(String[] args) throws NamingException {
                Hashtable env = new Hashtable();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://127.0.0.1:7001");

                Context ctx = new InitialContext(env);
                System.out.println("Initial Context created");

                String obj1 = "obj1";
                String obj2 = "obj2";

                try {
                    ctx.bind(JNDI_NAME, obj1);
                    System.out.println("Bind Success");
                } catch (NameAlreadyBoundException ne) {
                    // already bound
                    System.out.println("Name already bound");
                }
                ctx.close();

                Context ctx2 = new InitialContext(env);
                try {
                    // Second binding to the same name not giving the Exception??
                    ctx2.bind(JNDI_NAME, obj2);
                    System.out.println("Re Bind Success");
                } catch (NameAlreadyBoundException ne) {
                    // already bound
                    System.out.println("Name already bound");
                }

                String lookedUp = (String) ctx2.lookup(JNDI_NAME);
                System.out.println("LookedUp Object" + lookedUp);
                ctx2.close();
            }
        }

    Read the article

  • On Windows Mobile, how can I tell what other processes are reserving shared memory space?

    - by glutz78
    On Windows Mobile 6.1, I am using VirtualAlloc to reserve 2 MB chunks, which returns an address from the large shared memory area so allocations do not count against my per-process virtual address space (doc here: http://msdn.microsoft.com/en-us/library/aa908768.aspx). However, on some devices I notice that I am not able to reserve memory after a certain point: VirtualAlloc returns NULL (GetLastError() says out of memory). The only explanation I see is that another process has already reserved a bunch of memory and my process is therefore unable to. Any idea where I can find a tool to show me the shared memory region of a WM device? Thanks.

    Read the article

  • Erlang: How to view output of io:format/2 calls in processes spawned on remote nodes.

    - by jkndrkn
    Hello, I am working on a decentralized Erlang application. I am currently working on a single PC and creating multiple nodes by initializing erl with the -sname flag. When I spawn a process using spawn/4 on its home node, I can see output generated by calls to io:format/2 within that process in its home erl instance. When I spawn a process remotely by using spawn/4 in combination with register_name, output of io:format/2 is sometimes redirected back to the erl instance where the remote spawn/4 call was made, and sometimes remains completely invisible. Similarly, when I use rpc:call/4, output of io:format/2 calls is redirected back to the erl instance where the rpc:call/4 call was made. How do you get a process to emit debugging output back to its parent erl instance?
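
    For what it's worth, the routing here is governed by the process's group leader, which can be reassigned; a sketch from the shell of the node that should receive the output (my framing; Node, Mod, Fun, and Args are placeholders):

        %% Hedged sketch: io:format/2 writes to the calling process's group
        %% leader, so hand the remote process this node's `user` io device.
        Pid = spawn(Node, Mod, Fun, Args),
        group_leader(whereis(user), Pid).
        %% subsequent io:format/2 calls inside Pid print on this erl instance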

    Read the article

  • APACHE2.2/WIN2003(32-bit)/PHP: How do I configure Apache to Run Background PHP Processes on Win 2003

    - by Captain Obvious
    I have a script, testforeground.php, that kicks off a background script, testbackground.php, then returns while the background script continues to run until it's finished.

    Both the foreground and background scripts write to the output file correctly when I run the foreground script from the command line using php-cgi:

        C:\>php-cgi testforeground.php

    The above command starts a php-cgi.exe process, then a php-win.exe process, then closes the php-cgi.exe almost immediately, while the php-win.exe continues until it's finished.

    The same script runs correctly but does not have permission to write to the output file when I run it from the command line using plain php:

        C:\>php testforeground.php

    And when I run the same script from the browser, instead of php-cgi.exe a single cmd.exe process opens and closes almost instantly, only the foreground script writes to the output file, and it doesn't appear that the second process starts:

        http://XXX/testforeground.php

    Here is the server info:

        OS: Win 2003 32-bit
        HTTP: Apache 2.2.11
        PHP: 5.2.13
        Loaded modules: core mod_win32 mpm_winnt http_core mod_so mod_actions
        mod_alias mod_asis mod_auth_basic mod_authn_default mod_authn_file
        mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user
        mod_autoindex mod_cgi mod_dir mod_env mod_include mod_isapi
        mod_log_config mod_mime mod_negotiation mod_setenvif mod_userdir mod_php5

    Here's the foreground script:

        <?php
        ini_set("display_errors", 1);
        error_reporting(E_ALL);

        echo "<pre>loading page</pre>";

        function run_background_process()
        {
            file_put_contents("0testprocesses.txt", "foreground start time = " . time() . "\n");
            echo "<pre>  foreground start time = " . time() . "</pre>";

            $command = "start /B \"{$_SERVER['CMS_PHP_HOMEPATH']}\php-cgi.exe\" {$_SERVER['CMS_HOMEPATH']}/testbackground.php";
            $rp = popen($command, 'r');
            if (isset($rp)) {
                pclose($rp);
            }

            echo "<pre>  foreground end time = " . time() . "</pre>";
            file_put_contents("0testprocesses.txt", "foreground end time = " . time() . "\n", FILE_APPEND);
            return true;
        }

        echo "<pre>calling run_background_process</pre>";
        $output = run_background_process();
        echo "<pre>output = $output</pre>";
        echo "<pre>end of page</pre>";
        ?>

    And the background script:

        <?php
        $start = "background start time = " . time() . "\n";
        file_put_contents("0testprocesses.txt", $start, FILE_APPEND);
        sleep(10);
        $end = "background end time = " . time() . "\n";
        file_put_contents("0testprocesses.txt", $end, FILE_APPEND);
        ?>

    I've confirmed that the above scripts work correctly using Apache 2.2.3 on Linux. I'm sure I just need to change some Apache and/or PHP config settings, but I'm not sure which ones. I've been muddling over this for too long already, so any help would be appreciated.

    Read the article

  • What are all the disadvantages of using files as a means of communicating between two processes?

    - by Manny
    I have legacy code which I need to improve for performance reasons. My application comprises two executables that need to exchange certain information. In the legacy code, one exe writes to a file (the file name is passed as an argument to the exe) and the second executable first checks if such a file exists; if it does not exist it checks again, and when it finds it, goes on to read the contents of the file. This is how information is transferred between the two executables. The way the code is structured, the second executable is successful on the first try itself. Now I have to clean this code up and was wondering what the disadvantages are of using files as a means of communication rather than some inter-process communication mechanism like pipes. Is opening and reading a file more expensive than pipes? Are there any other disadvantages? And how significant do you think the performance degradation would be? The legacy code runs on both Windows and Linux.
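
    To make the contrast concrete, a minimal sketch (my own illustration, POSIX-only, so it covers the Linux half of this scenario; the path is a placeholder): with a named pipe the reader blocks until data arrives, so the check-again loop disappears entirely.

        # Hedged sketch: named-pipe handoff between two independently
        # started processes (POSIX only).
        import os

        CHANNEL = "/tmp/legacy_channel"

        def producer():
            if not os.path.exists(CHANNEL):
                os.mkfifo(CHANNEL)
            with open(CHANNEL, "w") as pipe:  # blocks until a reader opens the pipe
                pipe.write("the certain information\n")

        def consumer():
            with open(CHANNEL) as pipe:       # blocks until the producer writes
                return pipe.readline()        # no does-the-file-exist polling loop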

    Read the article

  • How do I ensure a Flex dataProvider processes the data synchronously?

    - by Matt Calthrop
    I am using an <s:List> component, and currently have a dataProvider working that is an ArrayCollection (have a separate question about how to make this an XML file... but I digress). The variable declaration looks like this:

        [Bindable]
        private var _dpImageList : ArrayCollection = new ArrayCollection([
            {"location" : "path/to/image1.jpg"},
            {"location" : "path/to/image2.jpg"},
            {"location" : "path/to/image3.jpg"}
        ]);

    I then refer to it like this:

        <s:List id="lstImages" width="100%"
            dataProvider="{_dpImageList}"
            itemRenderer="path.to.render.ImageRenderer"
            skinClass="path.to.skins.ListSkin" >
            <s:layout>
                <s:HorizontalLayout gap="2" />
            </s:layout>
        </s:List>

    Currently, it would appear that each item is processed asynchronously. However, I want them to be processed synchronously. Reason: I am displaying a list of images, and I want the leftmost one rendered first, followed by the one to its right, and so on. Edit: I just found this answer. Do you think that could be the same issue?

    Read the article

  • Is there a guide to debugging Java processes in Eclipse across OSs?

    - by Jekke
    I have an application written in Java to run on Linux. I'm developing in Eclipse under Windows. I would like to run the code on the Linux box and debug it from the Windows one remotely. I've found some information about how to do so, but it's pretty sparse. Does anyone have (or can point to) a complete explanation of the process? Any help would be appreciated.
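
    For reference, the standard mechanism is JDWP: start the JVM on the Linux box with the debug agent listening, then attach from Eclipse via a "Remote Java Application" debug configuration pointed at that host and port. Something like the following, where the jar name and port are illustrative:

        # on the Linux box; suspend=y makes the JVM wait for the debugger to attach
        java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000 -jar myapp.jar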

    Read the article

  • Java: Check what processes are bound to a port?

    - by bguiz
    Hi, I am developing an application in NetBeans, and it is using JavaDB. I can connect to it and execute queries without issues, but for some reason the "Output - JavaDB Database Process" pane within NetBeans keeps displaying:

        Security manager installed using the Basic server security policy.
        Could not listen on port 1527 on host localhost: java.net.BindException: Address already in use

    How do I find out what process is already using, or bound to, that port? On Ubuntu Karmic, NetBeans 6.7.1.
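
    For reference, on Linux the owning process can usually be identified from a shell rather than from Java; for example:

        # either of these prints the PID/name of the process bound to 1527
        sudo lsof -i :1527
        sudo netstat -tlnp | grep 1527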

    Read the article

  • Implement a server that receives and processes client requests (Cassandra as backend): Python or C++?

    - by Mickey Shine
    I am planning to build an inverted-index searching system with Cassandra as its storage backend, but I need some guidance to build a highly efficient searching daemon server. I know a web server written in Python called Tornado. My questions are:

    1. Is Python a good choice for developing this kind of app?
    2. Is Nginx (or Sphinx) a good example whose architecture I can study to implement a highly efficient server?
    3. Anything else I should learn to do this?

    Thank you~

    Read the article

  • Noob: how to show a login error message on the same page after the PHP server processes the request

    - by funbar
    Hi, I have a login page. The user first enters information and submits the form, and I have a PHP script that will see if the user exists:

        if (authenticated == true) {
            // I do a redirect
        } else {
            // I want to pop up an error message on the same page.
        }

    I'm not sure how to show the popup message. I would like to make my div element visible with an error message returned from the server. I would have to use Ajax, right? But how? Or are there alternatives which are just as good?
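
    One common pattern, as a sketch (file, element, and function names are illustrative; the client side assumes jQuery): submit the form with Ajax, have the PHP script answer with JSON, and either redirect or reveal the error div based on the response.

        <?php // login.php -- hedged sketch; authenticate() is a placeholder
        header('Content-Type: application/json');
        if (authenticate($_POST['user'], $_POST['pass'])) {
            echo json_encode(array('ok' => true, 'redirect' => 'home.php'));
        } else {
            echo json_encode(array('ok' => false, 'error' => 'Invalid username or password'));
        }
        ?>

    And on the login page itself:

        // intercept the form submit and handle the JSON reply
        $('#loginForm').submit(function () {
            $.post('login.php', $(this).serialize(), function (res) {
                if (res.ok) {
                    window.location = res.redirect;
                } else {
                    $('#errorDiv').text(res.error).show();  // same page, no reload
                }
            }, 'json');
            return false;  // prevent the normal form submit
        });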

    Read the article

  • How do I bring a process's window to the foreground on X Windows? (C++)

    - by Lorenz03Tx
    I have the PID for the process (and its name), and I want to bring its window to the front on Linux (Ubuntu). On Mac I would simply do SetFrontProcess(pid); on Windows I'd enumerate the windows, pick out the one I wanted, and call SetWindowPos(hwnd, HWND_TOPMOST, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE); but I'm at a loss as to what to do on Linux. I've looked at Xlib a bit, but most/all of those functions seem to operate on windows inside your own process. Thanks in advance.
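
    A sketch of one approach (my own illustration; it assumes an EWMH-compliant window manager and that the target client sets _NET_WM_PID, which not all do): walk the root window's _NET_CLIENT_LIST, match _NET_WM_PID against the target PID, then ask the window manager to activate that window via a _NET_ACTIVE_WINDOW client message.

        // Hedged sketch: activate the window belonging to target_pid (EWMH).
        #include <X11/Xlib.h>
        #include <X11/Xatom.h>

        static unsigned char* get_prop(Display* d, Window w, Atom prop, Atom type,
                                       unsigned long* nitems) {
            Atom actual_type;
            int actual_format;
            unsigned long bytes_after;
            unsigned char* data = nullptr;
            if (XGetWindowProperty(d, w, prop, 0, ~0L, False, type, &actual_type,
                                   &actual_format, nitems, &bytes_after,
                                   &data) != Success)
                return nullptr;
            return data; // caller frees with XFree
        }

        bool raise_window_of_pid(unsigned long target_pid) {
            Display* d = XOpenDisplay(nullptr);
            if (!d) return false;
            Window root = DefaultRootWindow(d);
            Atom net_client_list = XInternAtom(d, "_NET_CLIENT_LIST", False);
            Atom net_wm_pid      = XInternAtom(d, "_NET_WM_PID", False);
            Atom net_active      = XInternAtom(d, "_NET_ACTIVE_WINDOW", False);

            unsigned long nwins = 0;
            Window* wins = (Window*)get_prop(d, root, net_client_list,
                                             XA_WINDOW, &nwins);
            bool done = false;
            for (unsigned long i = 0; wins && i < nwins && !done; ++i) {
                unsigned long n = 0;
                unsigned long* pid = (unsigned long*)get_prop(d, wins[i], net_wm_pid,
                                                              XA_CARDINAL, &n);
                if (pid && n > 0 && *pid == target_pid) {
                    XEvent ev = {};
                    ev.xclient.type = ClientMessage;
                    ev.xclient.window = wins[i];
                    ev.xclient.message_type = net_active;
                    ev.xclient.format = 32;
                    ev.xclient.data.l[0] = 1; // source indication: application
                    XSendEvent(d, root, False,
                               SubstructureRedirectMask | SubstructureNotifyMask,
                               &ev);
                    done = true;
                }
                if (pid) XFree(pid);
            }
            if (wins) XFree(wins);
            XFlush(d);
            XCloseDisplay(d);
            return done;
        }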

    Read the article

  • Can upstart expect/respawn be used on processes that fork more than twice?

    - by johnjamesmiller
    I am using upstart to start/stop/automatically restart daemons. One of the daemons forks four times. The upstart cookbook states that it only supports forking twice. Is there a workaround?

    How it fails: if I try to use expect daemon or expect fork, upstart uses the pid of the second fork. When I try to stop the job, nobody responds to upstart's SIGKILL signal and it hangs until you exhaust the pid space and loop back around. It gets worse if you add respawn: upstart thinks the job died and immediately starts another one.

    Bug acknowledged by upstream: a bug has been entered for upstart. The solutions presented are to stick with the old sysvinit, rewrite your daemon, or wait for a rewrite. RHEL is close to two years behind the latest upstart package, so by the time the rewrite is released and we get updated, the wait will probably be four years. The daemon is written by a subcontractor of a subcontractor of a contractor, so it will not be fixed any time soon either.

    Read the article
