Search Results

Search found 4272 results on 171 pages for 'processes'.

Page 112/171

  • Python multiprocessing: update dictionary synchronously

    - by user1050325
    I am trying to update one common dictionary through multiple processes. Could you please help me find out what is the problem with this code? I get the following output:

        inside function
        {1: 1, 2: -1}
        comes here
        inside function
        {1: 0, 2: 2}
        comes here
        {1: 0, 2: -1}

    Thanks.

        from multiprocessing import Lock, Process, Manager

        l = Lock()

        def computeCopyNum(test, val):
            l.acquire()
            test[val] = val
            print "inside function"
            print test
            l.release()
            return

        a = dict({1: 0, 2: -1})
        procs = list()

        for i in range(1, 3):
            p = Process(target=computeCopyNum, args=(a, i))
            procs.append(p)
            p.start()

        for p in procs:
            p.join()

        print "comes here"
        print a
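
    Each child process gets its own copy of a plain dict, so updates made in the children never reach the parent. A minimal sketch of the usual fix, assuming a Manager-backed proxy dictionary is acceptable (shown in Python 3 syntax):

        from multiprocessing import Process, Manager

        def compute_copy_num(shared, val):
            # Writes go through the manager proxy, so the parent process sees them.
            shared[val] = val

        if __name__ == "__main__":
            manager = Manager()
            shared = manager.dict({1: 0, 2: -1})  # proxy dict shared across processes
            procs = [Process(target=compute_copy_num, args=(shared, i)) for i in range(1, 3)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            print(dict(shared))  # {1: 1, 2: 2}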

    Read the article

  • I/O Asynchronous Completion

    - by lockedscope
    In the following passage, it is said that an I/O handle must be associated with the thread pool, but I could not find where, in the given example, a handle is associated with the thread pool. Which function or code binds the file handle in that example?

        "Using asynchronous I/O completion events, a thread from the thread pool processes data only when the data is received, and once the data has been processed, the thread returns to the thread pool. To make an asynchronous I/O call, an operating-system I/O handle must be associated with the thread pool and a callback method must be specified. When the I/O operation completes, a thread from the thread pool invokes the callback method."

    http://msdn.microsoft.com/en-us/library/aa720215(VS.71).aspx

    Read the article

  • Split a double in C without any library

    - by DoomStone
    Hello, I have a question for a C programmer out there. We have a task at school to create a soft real-time system in an operating system made by our teacher. Well, that is all fine and dandy; we have chosen to create a system that calculates how many units of medicine a diabetic needs based on his or her blood sugar. It does not need to be correct, just that we have the idea of a real time system :D But we have hit a little snag: our formula for calculating the units of medicine is [blood sugar] * 1.2, but the only way we can send messages between processes is via a structure that contains 8 longs, and here is where my knowledge of C ends. We need some way to split this double into 2 longs, for example: the whole number in long 0 and the decimals in long 1, and then assemble it on the other side. But I have no idea how to do this, and therefore need a little help. We have tried, but we do not have access to the C standard libraries.
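
    The splitting arithmetic itself is language-neutral. Here is a rough sketch, in Python rather than C, of the whole-number/decimal split described above (the scale factor is an arbitrary choice, and the same arithmetic would need porting to C with casts to long):

        SCALE = 1000000  # how many decimal places to preserve; an arbitrary choice

        def split_double(value):
            whole = int(value)                          # would go into long 0
            frac = int(round((value - whole) * SCALE))  # would go into long 1
            return whole, frac

        def join_double(whole, frac):
            return whole + frac / float(SCALE)

        # Example: blood sugar of 4.2 -> 5.04 units of medicine
        units = 4.2 * 1.2
        w, f = split_double(units)                      # (5, 40000)
        assert abs(join_double(w, f) - units) < 1e-6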

    Read the article

  • Long running, polling, queueing process for Python. What's the best stuff to use?

    - by Bialecki
    Feel free to close and/or redirect if this has been asked, but here's my situation: I've got an application that will require doing a bunch of small units of work (polling a web service until something is done, then parsing about 1MB worth of XML and putting it in a database). I want a simple asynchronous queueing mechanism that will poll for work to do in a queue, execute the units of work that need to be done, and have the flexibility to allow for spawning multiple worker processes so these units of work can be done in parallel. (Bonus if there's some kind of event framework that would also let me listen for when work is complete.) I'm sure there is stuff that does this. Am I describing Twisted? I poked through the documentation; I'm just not sure exactly how my problem maps onto their framework, but I haven't spent much time with it. Should I just look at the multiprocessing libraries in Python? Something else?
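
    For the plain-multiprocessing route, a minimal sketch of the polling-worker pattern using only the standard library (the work function and worker count here are placeholders):

        import multiprocessing as mp

        def do_unit_of_work(task):
            # Placeholder: poll the web service, parse the XML, write to the database.
            return task

        def worker(task_queue, done_queue):
            # Each worker blocks on the queue and handles one unit of work at a time.
            for task in iter(task_queue.get, None):    # None acts as a poison pill
                done_queue.put(do_unit_of_work(task))  # listen on done_queue for completions

        if __name__ == "__main__":
            tasks, done = mp.Queue(), mp.Queue()
            workers = [mp.Process(target=worker, args=(tasks, done)) for _ in range(4)]
            for w in workers:
                w.start()
            for job in ["job-1", "job-2", "job-3"]:
                tasks.put(job)
            for _ in workers:
                tasks.put(None)  # tell each worker to shut down
            for w in workers:
                w.join()
            print([done.get() for _ in range(3)])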

    Read the article

  • Cannot save model due to bad transaction? Django

    - by Kenneth Love
    Trying to save a model in Django admin and I keep getting the error:

        Transaction managed block ended with pending COMMIT/ROLLBACK

    I tried restarting both the Django (1.2) and PostgreSQL (8.4) processes but nothing changed. I added "autocommit": True to my database settings but that didn't change anything either. Everything that Google has turned up has either not been answered or the answer involved not having records in the users table, which I definitely have. The model does not have a custom save method and there are no pre/post save signals tied to it. Any ideas or anything else I can provide to make answering this easier?

    Read the article

  • Time order of messages

    - by Aiden Bell
    I've read (skimmed enough to get coding) through Erlang Programming and Programming Erlang. One question, which is as simple as it sounds: if you have a process Pid1 on machine m1 and a billion million messages are sent to Pid1, are messages handled in parallel by that process (I get the impression no), and (answered below) is there any guarantee of order when processing messages, i.e. received in the order sent? If so, how is clock skew handled in high-traffic situations for ordering? Coming from the whole C/thread pools/shared state background... I want to get this concrete. I understand distributing an application, but want to ensure the 'raw bones' are what I expect before building processes and distributing workload. Also, am I right in thinking the whole world is currently flicking through Erlang texts ;)

    Read the article

  • Supervisor callback for child normal exit

    - by Aler
    I am creating a test app where there is one supervisor with the simple_one_for_one strategy and many worker children added dynamically to it. How do I implement a callback (or receive a message) in the supervisor that will be called when a child exits normally? The main goal is to notify some other process that all supervised worker processes are done and it's time to show a final report. How should I design this kind of behavior? Should I create my own behavior that combines supervisor and gen_server, or is there a way to do this with the standard OTP behaviors?

    Read the article

  • Passing multiple arguments to a UNIX shell script

    - by Waffles
    Hello all, I have the following (bash) shell script that I would ideally use to kill multiple processes by name:

        #!/bin/bash
        kill `ps -A | grep $* | awk '{ print $1 }'`

    However, while this script works if one argument is passed (the name of the script is end):

        end chrome

    it does not work if more than one argument is passed:

        $ end chrome firefox
        grep: firefox: No such file or directory

    What is going on here? I thought $* passes multiple arguments to the shell script in sequence. I'm not mistyping anything in my input, and the programs I want to kill (chrome and firefox) are open. Any help is appreciated.

    Read the article

  • Check if Rhythmbox is running via Python

    - by cschol
    I am trying to extract information from Rhythmbox via dbus, but I only want to do so if Rhythmbox is running. Is there a way to check from Python whether Rhythmbox is running, without starting it if it is not? Whenever I invoke the dbus code like this and Rhythmbox is not running, it starts it:

        bus = dbus.Bus()
        obj = bus.get_object("org.gnome.Rhythmbox", "/org/gnome/Rhythmbox/Shell")
        iface = dbus.Interface(obj, "org.gnome.Rhythmbox.Shell")

    Can I check via dbus if Rhythmbox is running without actually starting it? Or is there any other way to do so, other than parsing the list of currently running processes?
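
    A minimal sketch of one way to probe for the service without activating it, assuming dbus-python's name_has_owner wrapper around the bus daemon's standard NameHasOwner method:

        import dbus

        bus = dbus.SessionBus()
        # Ask the bus daemon whether anyone currently owns the Rhythmbox name;
        # unlike get_object(), this does not trigger service activation.
        if bus.name_has_owner("org.gnome.Rhythmbox"):
            obj = bus.get_object("org.gnome.Rhythmbox", "/org/gnome/Rhythmbox/Shell")
            shell = dbus.Interface(obj, "org.gnome.Rhythmbox.Shell")
        else:
            print("Rhythmbox is not running")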

    Read the article

  • What's the best method for queuing time-sensitive messages with PHP/MySQL?

    - by Mike Diena
    I'm building an SMS call-and-response system in a new app that receives a message via an aggregator gateway, checks it for functional keywords (run, stop, ask, etc.), then processes it appropriately (saves to the database, returns an answer, or executes a task based on the user's authorization). It's running fine at the moment as there are only a few users, but I figure it's going to have more issues as we scale it up. We're currently running it on a single DV machine (Media Temple base DV). My question is this: does it make more sense to set up something like Memcached to run a queue, or a simple database with a daemon running to process each message one by one? I don't have much experience with either, so any advice would be helpful. Since the messaging is somewhat time-sensitive, what would be the fastest and most reliable way to handle this? Also, since we're sending responses, I'll probably need to set up an outbound message queue as well. Would it make sense to use the same concept for both?

    Read the article

  • Executing commands on a Unix box from ASP .NET

    - by StefanE
    I'm in the process of creating a few utilities for my team to make life a bit easier when working with our Unix boxes (most of them Solaris based). For example, I'm creating an ASP.NET page to display the output of top, and I also plan to be able to restart processes with the kill -15 command. Now I wonder: are there any nice modules out there that do the work for me, or am I better off just going ahead with my own SSH communication? It would of course make sense to build the app on the Unix box directly, but I'm not able to do that.

    Read the article

  • What's the best way to run a process remotely and capture the output?

    - by Homam
    Hi all, I have a Windows service responsible for running processes on a remote machine and capturing their output. I googled for that and read a lot of articles and threads, and I found two ways:

    1) using WMI to call the remote process, as in this example: http://www.codeproject.com/KB/cs/EverythingInWmi02.aspx
    2) using the PsExec tool via the System.Diagnostics.Process class

    Each of these has many problems: WMI doesn't support returning the output and doesn't support waiting for exit (it just calls the process and returns the PID), and PsExec couldn't capture the output programmatically through System.Diagnostics.Process in a clean way. I found only workarounds like redirecting the output to a file and then reading the redirected file, etc. I need a real solution, please.

    Read the article

  • Including the functionality of a tool within another program?

    - by darren
    Hi there, I would like to write an application, for my own interest, that graphically visualizes some network concepts. Basically I would like to show the output from tools like ping, traceroute and nmap. The most obvious approach seems to be to use pipes to call out to these tools from my C program and process the information they return. However, I would like to avoid this heavy-handed approach if possible. My question is: is it possible to somehow link against these tools, or are there APIs that can be used to gain programmatic access instead? If so, is this behavior available on a tool-by-tool basis only? One reason for wanting to do this is to keep everything in a single process/address space and to avoid dependence on these external tools. For example, if I wrote an iPhone application, I would not be able to spawn processes to call out to the external tools themselves. Thanks for any advice or suggestions.

    Read the article

  • Long running process in ASP.NET C#

    - by user339323
    Hello all, I have a web application that has a long-running (resource-intensive) process in the code-behind, and the end output is a PDF file (an images-to-PDF conversion tool). It runs fine, and since I am on a dedicated server, resources are not at all a problem right now. However, I worry that the system would reach its resource limits if there were more than 20 users processing at a time. I have seen services online where the user enters their email address and the processes are, I suppose, queued in the background and the results emailed on a first-in, first-out basis. Can someone please give me a start on how to implement this kind of logic in ASP.NET applications using C#? Thanks a lot in advance, Prasad.

    Read the article

  • Python IDLE freezes

    - by ooboo
    This is absolutely frustrating, but I am not sure if the following is an issue only on my machine or with IDLE in general. When attempting to print a long list in the shell, which can happen by accident while debugging, the program freezes and you have to restart it manually. Even worse, if you have a few editor windows open, it always spawns a few sub-processes, and each of these has to be manually shut down from the task manager. Is there any way to avoid that? I am using Python 3, by the way.
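
    Not a fix for IDLE itself, but one way to avoid accidentally dumping a huge list while debugging is to print a truncated representation; a small sketch using the standard-library reprlib module:

        import reprlib

        big = list(range(1000000))
        # reprlib.repr() truncates long containers instead of rendering every element,
        # so the shell only has to display a short string.
        print(reprlib.repr(big))  # [0, 1, 2, 3, 4, 5, ...]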

    Read the article

  • Using CreateFileMapping between two programs - C

    - by Jamie Keeling
    Hello, I have two Windows Forms applications written in C. One holds a struct consisting of two integers; the other will receive it using CreateFileMapping. Although not directly related, I also want to have three events in place so the two processes can "speak" to each other: one saying that the first program has something to pass to the second, one saying the first one has closed, and another saying the second one has closed. What would be the best way to go about doing this, exactly? I've looked at the MSDN entry for CreateFileMapping, but I'm still not sure how it should be done. I didn't want to start implementing it without having some sort of clear idea as to what I need to do. Thanks for your time.

    Read the article

  • Erlang message loops

    - by Roger Alsing
    How do message loops in Erlang work? Are they synchronous when it comes to processing messages? As far as I understand, the loop will start by receiving a message, then perform something, and then hit another iteration of the loop. So that has to be synchronous, right? If multiple clients send messages to the same message loop, are all those messages queued and processed one after another? To process multiple messages in parallel, you would have to spawn multiple message loops in different processes, right? Or did I misunderstand all of it?

    Read the article

  • Performance of Managed C++ vs. Unmanaged/Native C++

    - by bsobaid
    I am writing a very high performance application that handles and processes hundreds of events every millisecond. Is unmanaged C++ faster than managed C++, and why? Managed C++ deals with the CLR instead of the OS, and the CLR takes care of memory management, which simplifies the code and is probably also more efficient than code written by "a programmer" in unmanaged C++. Or is there some other reason? When using managed code, how can one then avoid dynamic memory allocation, which causes a performance hit, if it is all transparent to the programmer and handled by the CLR? So, coming back to my question: is managed C++ more efficient in terms of speed than unmanaged C++, and why?

    Read the article

  • Xcodebuild failing to pick up environment values from project file?

    - by egrunin
    I'm using Xcode 3.2.6 on Mac OS X. I have a globally visible environment setting:

        ICU_SRC=~/Documents/icu/source

    This really is an environment setting; it's set at login time, and when I open up Terminal, it's there. In my project, under Header Search Paths, I've added this:

        $(ICU_SRC)/i18n
        $(ICU_SRC)/common

    These expand correctly when I compile inside the IDE. When I look at the build results, I see this:

        -I/Users/eric.grunin/Documents/icu/source/i18n -I/Users/eric.grunin/Documents/icu/source/common

    When I build from the command line, however, it fails. What I see is this:

        -I/i18n -I/common

    Here's the command I'm using to compile:

        /usr/bin/env -i xcodebuild -project my_project.xcodeproj -target "my_program" -configuration Release -sdk macosx10.6 build

    What am I doing wrong? Edited to add: Apple explains Setting environment variables for user processes.

    Read the article

  • Downloading HTTP URLs asynchronously in C++

    - by Joey Adams
    What's a good way to download HTTP URLs (e.g. http://0.0.0.0/foo.htm) in C++ on Linux? I strongly prefer something asynchronous. My program will have an event loop that repeatedly initiates multiple (very small) downloads and acts on them when they finish (either by polling or by being notified somehow). I would rather not have to spawn multiple threads/processes to accomplish this; that shouldn't be necessary. Should I look into libraries like libcurl? I suppose I could implement it manually with non-blocking TCP sockets and select() calls, but that would likely be less convenient.

    Read the article

  • Aspect Oriented Programming vs List<IAction> To execute methods based on conditions

    - by David Robbins
    I'm new to AOP, so bear with me. Consider the following scenario: a state machine is used in a workflow engine, and after the state of the application changes, a series of commands is executed. Depending on the state, different types of commands should be executed. As I see it, one implementation is to create a List<IAction> and have each individual action determine whether it should execute. Would an aspect-oriented approach work as well? That is, could you create an aspect that notifies a class when a property changes, and execute the appropriate processes from that class? Would this help centralize the state-specific rules?

    Read the article

  • Automated user testing with times

    - by Sean
    I am attempting to test user interaction with an off-the-shelf product. The current process is to run through a manual script and record how long certain processes take to populate data; they may populate a datagrid, a tree, a list box, etc. It will also need to be able to record how long it takes to generate a PDF report. We do not require extremely accurate timing, just to the nearest second or less than a second. I need to know if there is a simple automated testing product that will handle the above criteria.

    Read the article

  • Queue remote calls to a Python Twisted perspective broker?

    - by agartland
    The strength of Twisted (for Python) is its asynchronous framework (I think). I've written an image processing server that takes requests via Perspective Broker. It works great as long as I feed it less than a couple hundred images at a time. However, sometimes it gets spiked with hundreds of images at virtually the same time. Because it tries to process them all concurrently, the server crashes. As a solution, I'd like to queue up the remote calls on the server so that it only processes ~100 images at a time. It seems like this might be something that Twisted already does, but I can't seem to find it. Any ideas on how to start implementing this? A point in the right direction? Thanks!
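
    A minimal sketch of one way to cap concurrency inside the server, assuming Twisted's DeferredSemaphore is an acceptable fit (class and method names other than DeferredSemaphore.run are illustrative):

        from twisted.internet.defer import DeferredSemaphore
        from twisted.spread import pb

        class ImageServer(pb.Root):
            def __init__(self, limit=100):
                # At most `limit` images are processed at once; the rest wait
                # in the semaphore's internal queue.
                self.semaphore = DeferredSemaphore(limit)

            def remote_process(self, image_data):
                # run() acquires a token, calls the function, and releases the
                # token when the returned Deferred fires.
                return self.semaphore.run(self._process, image_data)

            def _process(self, image_data):
                return len(image_data)  # placeholder for the real image work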

    Read the article

  • Java respawn process

    - by Bart van Heukelom
    I'm making an editor-like program. If the user chooses File > Open in the main window, I want to start a new copy of the editor process with the chosen filename as an argument. However, for that I need to know what command was used to start the first process:

        java -jar myapp.jar blabalsomearguments  // --- need this information

        Open File (fileUrl)
        exec("java -jar myapp.jar blabalsomearguments fileUrl");

    I'm not looking for an in-process solution; I've already implemented that. I'd like to have the benefits that separate processes bring.

    Read the article

  • thumbs.db messing up my upload routine

    - by Scott B
    I'm getting the following error while uploading a zip archive:

        Warning: ZipArchive::extractTo(C:\xampplite\htdocs\testsite/wp-content/themes/mytheme//styles\mytheme/Thumbs.db) [ziparchive.extractto]: failed to open stream: Permission denied in C:\xampplite\htdocs\testsite\wp-content\themes\mythem\uploader.php on line 17

    The thing I can't quite figure out is that I don't see a Thumbs.db file in either the zip archive or the destination folder that was created (the upload still processes; I just get these errors). The function is below; line 17 is commented...

        function openZip($file_to_open) {
            global $target;
            $zip = new ZipArchive();
            $x = $zip->open($file_to_open);
            if ($x === true) {
                $zip->extractTo($target); // this is line 17
                $zip->close();
                unlink($file_to_open);
            } else {
                die("There was a problem. Please try again!");
            }
        }

    Read the article
