Search Results

Search found 7175 results on 287 pages for 'asynchronous processing'.

Page 29 of 287

  • Downloading HTTP URLs asynchronously in C++

    - by Joey Adams
    What's a good way to download HTTP URLs (e.g. http://0.0.0.0/foo.htm) in C++ on Linux? I strongly prefer something asynchronous. My program will have an event loop that repeatedly initiates multiple (very small) downloads and acts on them when they finish (either by polling or by being notified somehow). I would rather not have to spawn multiple threads/processes to accomplish this; that shouldn't be necessary. Should I look into libraries like libcurl? I suppose I could implement it manually with non-blocking TCP sockets and select() calls, but that would likely be less convenient.

    Read the article

  • Queue remote calls to a Python Twisted perspective broker?

    - by agartland
    The strength of Twisted (for Python) is its asynchronous framework (I think). I've written an image processing server that takes requests via Perspective Broker. It works great as long as I feed it fewer than a couple hundred images at a time. However, sometimes it gets spiked with hundreds of images at virtually the same time. Because it tries to process them all concurrently, the server crashes. As a solution I'd like to queue up the remote_calls on the server so that it only processes ~100 images at a time. It seems like this might be something that Twisted already does, but I can't seem to find it. Any ideas on how to start implementing this? A point in the right direction? Thanks!
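
    For reference, Twisted's own DeferredSemaphore is one way to cap concurrency on the server side: calls past the limit simply wait until a slot frees up. A minimal sketch, assuming a Perspective Broker root object (the remote_process_image method and the trivial _process body are placeholders for the real image-processing code):

      from twisted.internet import defer
      from twisted.spread import pb

      class ImageServer(pb.Root):
          def __init__(self, max_concurrent=100):
              # At most max_concurrent images are processed at once; further
              # remote calls queue up inside the semaphore.
              self.semaphore = defer.DeferredSemaphore(max_concurrent)

          def remote_process_image(self, image):
              # run() acquires the semaphore, calls the function, and releases
              # the slot when the returned Deferred fires.
              return self.semaphore.run(self._process, image)

          def _process(self, image):
              return len(image)   # placeholder for the real work

    Whether this is enough depends on where the crash actually comes from; requests waiting in the semaphore still hold their image data in memory.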

    Read the article

  • Searching algorithmics: Parsing and processing a request

    - by James P.
    Say you were to create a search engine that can accept a query statement in the form of a string. The statement can be used to retrieve different types of objects with a given set of characteristics, possibly linked to other objects. In plain English or pseudo-code using an OOP approach, how would you go about parsing and processing statements such as the following to get the series of desired objects?
      get fruit with colour green
      get variety of apples, pears from Andy
      get strawberry with colour "deep red" and origin not Spain
      get total of sales of melons between 2010-10-10 and 2010-12-30
      get last deliverydate of bananas from "Pete" and state not sold
    Hope the question is clear. If not I'll be more than happy to reformulate. P.S: This isn't homework ;)
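
    One hedged way to start, in plain Python rather than pseudo-code: split each statement into a verb, a target type, and a list of attribute filters, and hand the filters to whatever repository actually holds the objects. The sketch below only covers the simple "with/and ... [not] ..." statements, not the aggregate or date-range ones, and every name in it is invented for illustration:

      import re

      def parse(statement):
          """First pass at 'get <target> with <attr> [not] <value> [and ...]'.
          Returns the target type and a list of (attribute, negated, value)."""
          tokens = re.findall(r'"[^"]+"|\S+', statement)   # keep quoted values together
          target = tokens[1]                               # tokens[0] is the verb, e.g. "get"
          filters = []
          i = 2
          while i < len(tokens):
              if tokens[i] in ("with", "and"):
                  attr = tokens[i + 1]
                  negated = i + 2 < len(tokens) and tokens[i + 2] == "not"
                  value = tokens[i + 3] if negated else tokens[i + 2]
                  filters.append((attr, negated, value.strip('"')))
                  i += 4 if negated else 3
              else:
                  i += 1
          return target, filters

      # parse('get strawberry with colour "deep red" and origin not Spain')
      # -> ('strawberry', [('colour', False, 'deep red'), ('origin', True, 'Spain')])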

    Read the article

  • C# TCP First Message Delay

    - by ikurtz
    Greetings, I am writing an asynchronous socket program in C#. The issue is: when a client connects to the server, the connection happens quite fast. Then, when the first message is sent, there is a delay in responding. This only happens with the very first piece of data sent over the connection, and both client and server suffer from this behaviour. What is this delay? Is there a way to get rid of it? Many thanks.

    Read the article

  • What's the most scalable way to handle somewhat large file uploads in a Python webapp?

    - by Jason Baker
    We have a web application that takes file uploads for some parts. The file uploads aren't terribly big (mostly Word documents and such), but they're much larger than your typical web request and they tend to tie up our threaded servers (Zope 2 servers running behind an Apache proxy). I'm mostly in the brainstorming phase right now and trying to figure out a general technique to use. Some ideas I have are:
      Using a Python asynchronous server like Tornado, diesel, or gunicorn.
      Writing something in Twisted to handle it.
      Just using nginx to handle the actual file uploads.
    It's surprisingly difficult to find information on which approach I should be taking. I'm sure there are plenty of details that would be needed to make an actual decision, but I'm more worried about figuring out how to make this decision than anything else. Can anyone give me some advice about how to proceed with this?
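
    If the Tornado route is explored, the handler side is short; a minimal sketch against Tornado's classic API (the "document" field name and the /tmp path are placeholders). Note that Tornado buffers the whole request body in memory before the handler runs, which is part of why offloading to nginx stays attractive for genuinely large files:

      import tornado.ioloop
      import tornado.web

      class UploadHandler(tornado.web.RequestHandler):
          def post(self):
              # Tornado parses multipart/form-data into self.request.files
              fileinfo = self.request.files["document"][0]
              # don't trust the client-supplied filename in real code
              with open("/tmp/" + fileinfo["filename"], "wb") as out:
                  out.write(fileinfo["body"])
              self.write("stored %s" % fileinfo["filename"])

      application = tornado.web.Application([(r"/upload", UploadHandler)])

      if __name__ == "__main__":
          application.listen(8888)
          tornado.ioloop.IOLoop.instance().start()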

    Read the article

  • Thread processing in EMS connection

    - by aladine
    I am setting up a client and an exchange project, and both connect to a remote server. The exchange connects to the server over an EMS connection, while the client connects over FIX. For black-box testing, both the client and the exchange engine will be given predefined test cases to send to and receive from the server. I designed the client engine with multithreaded processing to handle many test cases, and it runs successfully. For the exchange engine, I wonder whether multithreading is applicable, given that the exchange engine just needs to publish a message when it receives one from the subscribed topic on the server. Flow of message transmission: Client --FIX--> SERVER --EMS--> Exchange, and Exchange --EMS--> SERVER --FIX--> Client. Thanks if you can help me on this issue.

    Read the article

  • processing a file full of unix time strings to human readable

    - by skymook
    I am processing a file full of unix time strings. I want to convert them all to human readable. The file looks like so:
      1153335401
      1153448586
      1153476729
      1153494310
      1153603662
      1153640211
    Here is the script:
      #! /bin/bash
      FILE="test.txt"
      cat $FILE | while read line; do
          perl -e 'print scalar(gmtime($line)), "\n"'
      done
    This is not working. The output I get is Thu Jan 1 00:00:00 1970 for every line. I think the line breaks are being picked up and that is why it is not working. Any ideas? I'm using Mac OS X, if that makes any difference.
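
    (One thing worth checking: the single quotes keep the shell from expanding $line, so Perl sees an empty value and prints the epoch for every line.) For comparison, a small Python sketch that does the same conversion on the same file:

      import time

      with open("test.txt") as f:
          for line in f:
              line = line.strip()
              if line:
                  print(time.asctime(time.gmtime(int(line))))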

    Read the article

  • Processing SMTP bounces with .net

    - by justSteve
    I am looking for examples specific to .NET/MVC, on servers running native Windows Server 2008, where the problem being addressed is processing a bounced SMTP message so as to bind it to an e-store transaction and update account/profile properties. Reading the related questions I found an interesting reference to VERP. Under the heading 'Software that supports VERP' I find that IIS is not on the list. Does that mean I need to find a library to integrate into my store's assembly? What resources do I have to pull together to make sure that the webapp is informed when mail bounces? FWIW, I'm working with a very low-volume site.

    Read the article

  • Realtime processing and callbacks with Python and C++

    - by Doughy
    I need to write code to do some realtime processing that is fairly computationally complex. I would like to create some Python classes to manage all my scripting, and leave the intensive parts of the algorithm coded in C++ so that they can run as fast as possible. I would like to instantiate the objects in Python, and have the C++ algorithms chime back into the script with callbacks in python. Something like:
      myObject = MyObject()
      myObject.setCallback(myCallback)
      myObject.run()

      def myCallback(val):
          """Do something with the value passed back to the python script."""
          pass
    Will this be possible? How can I run a callback in python from a loop that is running in a C++ module? Anyone have a link or a tutorial to help me do this correctly?
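
    This is the kind of thing Boost.Python, SWIG, or plain ctypes can do. As a rough ctypes-only illustration of the Python side (the mycompute.so library and its set_callback/run functions are assumptions about how the C++ part might be exposed, not an existing API):

      import ctypes

      # Signature of the callback the C++ loop will invoke: void (*)(double)
      CALLBACK_TYPE = ctypes.CFUNCTYPE(None, ctypes.c_double)

      def my_callback(val):
          print("got", val)   # do something with the value from C++

      lib = ctypes.CDLL("./mycompute.so")            # hypothetical shared library
      lib.set_callback.argtypes = [CALLBACK_TYPE]

      _cb = CALLBACK_TYPE(my_callback)   # keep a reference so it isn't GC'd
      lib.set_callback(_cb)
      lib.run()   # the C++ loop calls back into my_callback as it runs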

    Read the article

  • Python text file processing speed issues

    - by Anonymouslemming
    Hi all, I'm having a problem with processing a largeish file in Python. All I'm doing is
      f = gzip.open(pathToLog, 'r')
      for line in f:
          counter = counter + 1
          if (counter % 1000000 == 0):
              print counter
      f.close
    This takes around 10m25s just to open the file, read the lines and increment this counter. In perl, dealing with the same file and doing quite a bit more (some regular expression stuff), the whole process takes around 1m17s. Perl Code:
      open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
      while (<LOG>) {
          if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
              $_ =~ s/some regex here/$1,$2,$3,$4/;
              push @an_array, $_
          }
      }
      close LOG;
    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried just uncompressing the file and dealing with it using open instead of gzip.open, but that made a very small difference to the overall time.
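
    One workaround often suggested is to let an external zcat do the decompression, as the Perl version already does, and have Python read plain text from the pipe; a rough sketch reusing the pathToLog name from above (how much it helps depends on the Python version):

      import subprocess

      p = subprocess.Popen(["zcat", pathToLog], stdout=subprocess.PIPE)

      counter = 0
      for line in p.stdout:     # read the already-decompressed stream
          counter += 1
          if counter % 1000000 == 0:
              print(counter)
      p.stdout.close()
      p.wait()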

    Read the article

  • How can I submit client-side answers (to a question) to the server using Java?

    - by mdrafi
    How can I submit client-side users' answers (to a multiple choice question) to the server using Java? I have a centralized server and about 1000 client systems. On these 1000 systems, students take a multiple choice quiz at the same time (over some 2-hour window). Now I have to send all the answers to these questions to the server in an asynchronous threaded queue as each student answers each question (all 1000 students). Also, the client has to cope if the server connection fails; in this case students should be able to continue taking the quiz/exam. When the connection comes back, the answers in the queue should be submitted to the server system. How can I solve this problem? Please suggest/help me with this.

    Read the article

  • Asynchronous page update with ASP.NET MVC

    - by Graham
    Hi, I'm learning ASP.NET MVC 1.0 and need to implement an asynchronous/dynamic page update. I'm new to MVC and jQuery so I'm not sure what to look for. What I want to do is allow a user to start monitoring a domain-layer function (similar to a news ticker) and then do a partial page update based on the continuously changing results. In ASP.NET I'd do this with a JavaScript timer to cause a postback, and an AJAX update panel... but this seems a bit "hacky" for ASP.NET MVC. Is there a better way?

    Read the article

  • Rails controller processing as HTML instead of XML

    - by Andy
    I've recently upgraded from Ruby 1.8.6 and Rails 2.3.4 to Ruby 1.9 and Rails 3.0.3. I have the following controller:
      class ChartController < ApplicationController
        before_filter :login_required
        respond_to :html, :xml

        def load_progress
          chart.add( :series, "Memorized", y_memorized )
          chart.add( :series, "Learning", y_learning )
          chart.add( :series, "Mins / Day", y_time )
          chart.add( :user_data, :secondary_y_interval, time_axis_interval )
          respond_to do |fmt|
            fmt.xml { render :xml => chart.to_xml }
          end
          # Also tried
          # respond_with chart
        end
      end
    However, when I call the 'load_progress' method I get the following:
      Started GET "/load_progress.xml" for 127.0.0.
      Processing by ChartController#load_progress as HTML
      Completed 406 Not Acceptable in 251ms
    I have also tried changing the respond_to block to
      respond_with chart
    but I get the same response. I've read all the new Rails documentation on the new respond_with format but I can't seem to elicit an XML response. Am desperately hoping someone has some ideas.

    Read the article

  • Securing database keys for client-side processing

    - by danp
    I have a tree of information which is sent to the client in a JSON object. In that object, I don't want to have raw IDs which are coming from the database. I thought of making a hash of the id and a field in the object (title, for example) or a salt, but I'm worried that this might have a serious effect on processing overhead.
      SELECT * FROM `things` where md5(concat(id,'some salt')) = md5('1some salt');
    Is there a standard practice for obscuring IDs in this kind of situation?
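
    A common pattern here is to keep the database lookup on the plain integer ID and only obscure what the client sees: send out an HMAC (or a stored surrogate key) alongside the ID and verify it on the way back in, rather than hashing inside the WHERE clause for every row. A rough Python sketch of the HMAC variant (SECRET is an assumed server-side value that never reaches the client):

      import hmac
      import hashlib

      SECRET = b"some server-side secret"   # assumption: lives in config only

      def sign(thing_id):
          """Token safe to expose in the JSON tree alongside (or instead of) the raw id."""
          return hmac.new(SECRET, str(thing_id).encode(), hashlib.sha256).hexdigest()

      def verify(thing_id, token):
          """Check a token coming back from the client before touching the database."""
          return hmac.compare_digest(sign(thing_id), token)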

    Read the article

  • MessageBox not shown when opened while processing WM_CLOSE from taskbar thumbnail close button

    - by Katana
    I'm trying to put up a "Do you want to save?" dialog when closing a window via the close button in the taskbar thumbnail in Windows 7 (with Aero Peek active). Using MessageBox() while processing WM_CLOSE does not work: the MessageBox won't show until you move the mouse cursor outside the thumbnail so Aero Peek is disabled. Lots of applications have this buggy behaviour, so it's probably a design flaw in Windows 7, but for some programs it works (Word, Notepad, Visual Studio, ...), so I'm wondering what trick they are using (or what it takes to "exit" Aero Peek mode programmatically). The small Sound Recorder application that comes with Windows 7 has the same problem (if you have recorded something without saving and try to close it using the thumbnail close button)...

    Read the article

  • Can EventMachine recognize all threads are completed?

    - by philipjkim
    I'm an EM newbie writing two snippets to compare synchronous and asynchronous IO. I'm using Ruby 1.8.7. The example for sync IO is:
      def pause_then_print(str)
        sleep 2
        puts str
      end

      5.times { |i| pause_then_print(i) }
      puts "Done"
    This works as expected, taking 10+ seconds until termination. On the other hand, the example for async IO is:
      require 'rubygems'
      require 'eventmachine'

      def pause_then_print(str)
        Thread.new do
          EM.run do
            sleep 2
            puts str
          end
        end
      end

      EventMachine.run do
        EM.add_timer(2.5) do
          puts "Done"
          EM.stop_event_loop
        end

        EM.defer(proc do
          5.times { |i| pause_then_print(i) }
        end)
      end
    5 numbers are shown in 2.x seconds. Now, I explicitly wrote code so that the EM event loop is stopped after 2.5 seconds, but what I want is for the program to terminate right after printing the 5 numbers. For that, I think EventMachine should recognize that all 5 threads are done, and then stop the event loop. How can I do that? Also, please correct the async IO example if it can be more natural and expressive. Thanks in advance.

    Read the article

  • How to reliably run a batch job every 5 seconds?

    - by Benjamin
    I'm building an application where the sending of all notifications (email, SMS, fax) will be asynchronous. The application will write the notifications to the database, and a batch job will read these notifications and send them with the appropriate transport. I was first reading about ways to run cron more often than every minute, and realized this was a bad idea. The batch scripts are written in PHP, and I guess that writing a proper daemon would be quite an overhead (though I'm open to any suggestion, as PHP can run indefinitely as well). What I have in mind is a solution that would:
      Run the PHP script every 5 seconds.
      Check that the previous run has finished, or abort (never 2 concurrent batches running).
      Kill the script if it has been alive for more than x minutes (a safeguard in case it hangs).
      Start with the system (if a reboot occurs).
    Any idea how to do this?

    Read the article

  • How to get around batch file processing limit

    - by Patrick Cuff
    I have a Windows batch file that processes all the files in a given directory. I have 206,783 files I need to process:
      for %%f in (*.xml) do call :PROCESS %%f
      goto :STOP

      :PROCESS
      :: do something with the file
      program.exe %1 > %1.new
      set /a COUNTER=%COUNTER%+1
      goto :EOF

      :STOP
      @echo %COUNTER% files processed
    When I run the batch file, the following output is written:
      65535 files processed
    As part of the processing, an output file is created for each file processed, with a .new extension. When I do a dir *.new it reports 65,535 files exist. So, it appears my command environment has a hard limit on the number of files it can recognize, and that limit is 64K - 1. Is there a way to extend the command environment to manage more than 64K - 1 files? If not, would a VBScript or JavaScript be able to process all 206,783 files? I'm running on Windows 2003 Server, Enterprise Edition, 32-bit. UPDATE: It looks like the root cause of my issue was the built-in Windows "extract" command for ZIP files. The files I have to process were copied from another system via a ZIP file. My server doesn't have a ZIP utility installed, just the native Windows commands. I right-clicked on the ZIP file and did an "Extract all...", which apparently just extracted the first 65,535 files. I downloaded and installed 7-Zip onto my server, unzipped all the files, and my batch script worked as intended.

    Read the article

  • Unable to load huge XML document (incorrectly supposed it's due to the XSLT processing)

    - by krisvandenbergh
    I'm trying to match certain elements using XSLT. My input document is very large and the source XML fails to load after processing the following code (consider especially the first line):
      <xsl:template match="XMI/XMI.content/Model_Management.Model/Foundation.Core.Namespace.ownedElement/Model_Management.Package/Foundation.Core.Namespace.ownedElement">
        <rdf:RDF>
          <rdf:Description rdf:about="">
            <xsl:for-each select="Foundation.Core.Class">
              <xsl:for-each select="Foundation.Core.ModelElement.name">
                <owl:Class rdf:ID="@Foundation.Core.ModelElement.name" />
              </xsl:for-each>
            </xsl:for-each>
          </rdf:Description>
        </rdf:RDF>
      </xsl:template>
    Apparently the XSLT fails to load after "Model_Management.Model". The PHP code is as follows:
      if ($xml->loadXML($source_xml) == false) {
          die('Failed to load source XML: ' . $http_file);
      }
    It then fails to perform loadXML and immediately dies. I think there are two options now. 1) I should set a maximum execution time; frankly, I don't know how to do this for the built-in PHP 5 XSLT processor. 2) Think about another way to match. What would be the best way to deal with this? The input document can be found at http://krisvandenbergh.be/uml_pricing.xml Any help would be appreciated! Thanks.

    Read the article

  • Objective C code to handle large amount of data processing in iPhone

    - by user167662
    I have the following code that takes in 14 MB or more of image data encoded as base64 strings and converts them to JPEG before writing to a file on the iPhone. It crashes my program, giving the following error:
      Program received signal: “0”.
      warning: check_safe_call: could not restore current frame
    I tweak my program and it can process a few more images before the error appears again. My code is as follows:
      // parameters is an array where the fourth element contains a list of images as base64-encoded strings
      NSMutableArray *imageStrList = (NSMutableArray*) [parameters objectAtIndex:5];
      while (imageStrList.count != 0) {
          NSString *imgString = [imageStrList objectAtIndex:0];

          // Create a file name using my own Utility class
          NSString *fileName = [Utility generateFileNName];

          NSData *restoredImg = [NSData decodeWebSafeBase64ForString:imgString];
          UIImage *img = [UIImage imageWithData: restoredImg];
          NSData *imgJPEG = UIImageJPEGRepresentation(img, 0.4f);
          [imgJPEG writeToFile:fileName atomically:YES];

          [imageStrList removeObjectAtIndex:0];
      }
    I tried playing around with UIImageJPEGRepresentation and found that the lower the value, the more images it can process, but this should not be the way. I am wondering if there is any way to free up the memory of imageStrList immediately after processing each image so that it can be used by the next one in line.

    Read the article

  • Processing more than one button click in an Android widget

    - by dive
    Hi, all. I saw this topic and implemented IntentService as described, but what if I want more than one button? How can I distinguish the buttons from each other? I'm trying setFlags, but cannot read the flags in the onHandleIntent() method:
      public static class UpdateService extends IntentService {
          ...
          @Override
          public void onHandleIntent(Intent intent) {
              ComponentName me = new ComponentName(this, ExampleProvider.class);
              AppWidgetManager manager = AppWidgetManager.getInstance(this);
              manager.updateAppWidget(me, buildUpdate(this));
          }

          private RemoteViews buildUpdate(Context context) {
              RemoteViews updateViews = new RemoteViews(context.getPackageName(), R.layout.main_layout);

              Intent i = new Intent(this, ExampleProvider.class);
              PendingIntent pi = PendingIntent.getBroadcast(context, 0, i, 0);
              updateViews.setOnClickPendingIntent(R.id.button_refresh, pi);

              i = new Intent(this, ExampleProvider.class);
              pi = PendingIntent.getBroadcast(context, 0, i, 0);
              updateViews.setOnClickPendingIntent(R.id.button_about, pi);

              return updateViews;
          }
      }
    In this little piece of code I have two PendingIntents linked with setOnClickPendingIntent; can I distinguish these intents for different actions and processing? Thanks for help.

    Read the article

  • Program structure in a long-running data-processing Python script

    - by fmark
    For my current job I am writing some long-running (think hours to days) scripts that do CPU-intensive data processing. The program flow is very simple: it proceeds into the main loop, completes the main loop, saves output and terminates. The basic structure of my programs tends to be like so:
      <import statements>
      <constant declarations>
      <misc function declarations>

      def main():
          for blah in blahs():
              <lots of local variables>
              <lots of tightly coupled computation>

              for something in somethings():
                  <lots more local variables>
                  <lots more computation>

              <etc., etc.>

          <save results>

      if __name__ == "__main__":
          main()
    This gets unmanageable quickly, so I want to refactor it into something more maintainable, without sacrificing execution speed. Each chunk of code relies on a large number of variables, however, so refactoring parts of the computation out into functions would make the parameter lists grow out of hand very quickly. Should I put this sort of code into a Python class, and change the local variables into class variables? It doesn't make a great deal of sense to me conceptually to turn the program into a class, as the class would never be reused and only one instance would ever be created. What is the best-practice structure for this kind of program? I am using Python but the question is relatively language-agnostic, assuming modern object-oriented language features.
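
    For what it's worth, one conventional middle ground is a small class whose instance attributes replace the shared local variables, so each stage becomes a short method without a huge parameter list; the instance is built once per run and thrown away. A sketch of that shape (all names are placeholders):

      class Pipeline(object):
          """Holds the state that the stages of one run share."""

          def __init__(self, config):
              self.config = config
              self.results = []

          def run(self):
              for blah in self.load_blahs():
                  self.process_blah(blah)
              self.save_results()

          def load_blahs(self):
              return []          # placeholder for whatever blahs() does now

          def process_blah(self, blah):
              # formerly the body of the outer loop; shared values live on self
              for something in self.somethings_for(blah):
                  self.results.append(self.compute(blah, something))

          def somethings_for(self, blah):
              return []          # placeholder

          def compute(self, blah, something):
              return None        # placeholder for the tightly coupled computation

          def save_results(self):
              pass               # placeholder

      if __name__ == "__main__":
          Pipeline(config={}).run()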

    Read the article

  • Processing velocity-vectors during collision as neatly as possible

    - by DevEight
    Hello. I'm trying to create a good way to handle all possible collisions between two objects. Typically one will be moving and hitting the other, and should then "bounce" away. What I've done so far (I'm creating a typical game where you have a board and bounce a ball at bricks) is to check if the rectangles intersect and, if they do, invert the Y velocity. This is a really ugly and temporary solution that won't work in the long haul, and since this kind of processing is very common in games I'd really like to find a great way of doing this for future projects as well. Any links or helpful info are appreciated. Below is what my collision-handling function looks like right now:
      protected void collision()
      {
          #region Boundaries
          if (bal.position.X + bal.velocity.X >= viewportRect.Width || bal.position.X + bal.velocity.X <= 0)
          {
              bal.velocity.X *= -1;
          }
          if (bal.position.Y + bal.velocity.Y <= 0)
          {
              bal.velocity.Y *= -1;
          }
          #endregion

          bal.rect = new Rectangle((int)bal.position.X + (int)bal.velocity.X - bal.sprite.Width/2,
                                   (int)bal.position.Y - bal.sprite.Height/2 + (int)bal.velocity.Y,
                                   bal.sprite.Width, bal.sprite.Height);
          player.rect = new Rectangle((int)player.position.X - player.sprite.Width/2,
                                      (int)player.position.Y - player.sprite.Height/2,
                                      player.sprite.Width, player.sprite.Height);

          if (bal.rect.Intersects(player.rect))
          {
              bal.position.Y = player.position.Y - player.sprite.Height / 2 - bal.sprite.Height / 2;
              if (player.position.X != player.prevPos.X)
              {
                  bal.velocity.X -= (player.prevPos.X - player.position.X) / 2;
              }
              bal.velocity.Y *= -1;
          }

          foreach (Brick b in brickArray.list)
          {
              b.rect.X = Convert.ToInt32(b.position.X - b.sprite.Width/2);
              b.rect.Y = Convert.ToInt32(b.position.Y - b.sprite.Height/2);
              if (bal.rect.Intersects(b.rect))
              {
                  b.recieveHit();
                  bal.velocity.Y *= -1;
              }
          }
          brickArray.removeDead();
      }

    Read the article

  • Java, Massive message processing with queue manager (trading)

    - by Ronny
    Hello, I would like to design a simple application (without J2EE and JMS) that can process a massive amount of messages (like in trading systems). I have created a service that can receive messages and place them in a queue so that the system won't get stuck when overloaded. Then I created a service (QueueService) that wraps the queue and has a pop method that pops a message off the queue, or returns null if there are no messages; this method is marked as "synchronized" for the next step. I have created a class that knows how to process a message (MessageHandler) and another class that can "listen" for messages in a new thread (MessageListener). The thread has a "while(true)" loop and continually tries to pop a message. If a message was returned, the thread calls the MessageHandler class and, when it's done, asks for another message. Now, I have configured the application to open 10 MessageListeners to allow multi-message processing, so I have 10 threads that are looping all the time. Is that a good design? Can anyone point me to some books or sites on how to handle such a scenario? Thanks, Ronny

    Read the article

  • Processing forms that generate many rows in DB

    - by Zack
    I'm wondering what the best approach to take here is. I've got a form that people use to register for a class and a lot of times the manager of a company will register multiple people for the class at the same time. Presently, they'd have to go through the registration process multiple times and resubmit the form once for every person they want to register. What I want to do is give the user a form that has a single <input/> for one person to register with, along with all the other fields they'll need to fill out (Email, phone number, etc); if they want to add more people, they'll be able to press a button and a new <input/> will be generated. This part I know how to do, but I'm including it to best describe what I'm aiming to do. The part I don't know how to approach is processing that data the form submits, I need some way of making a new row in the Registrant table for every <input/> that's added and include the same contact information (phone, email, etc) as the first row with that row. For the record, I'm using the Django framework for my back-end code. What's the best approach here? Should it just POST the form x times for x people, or is there a less "brute force" way of handling this?
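
    Django's formsets are built for this "one shared contact block plus N repeated rows" case, so a single POST can carry every registrant. A minimal sketch (ContactForm, RegistrantForm, the Registrant model, the field names, and the URL/template names are stand-ins for the real ones):

      from django import forms
      from django.forms import formset_factory
      from django.shortcuts import redirect, render

      from myapp.models import Registrant   # assumed model with name/email/phone fields

      class ContactForm(forms.Form):
          email = forms.EmailField()
          phone = forms.CharField()

      class RegistrantForm(forms.Form):
          name = forms.CharField()

      RegistrantFormSet = formset_factory(RegistrantForm, extra=1)

      def register(request):
          if request.method == "POST":
              contact = ContactForm(request.POST)
              registrants = RegistrantFormSet(request.POST)
              if contact.is_valid() and registrants.is_valid():
                  for form in registrants:
                      if form.cleaned_data:   # skip the blank extra form
                          # one row per attendee, all sharing the same contact details
                          Registrant.objects.create(name=form.cleaned_data["name"],
                                                    **contact.cleaned_data)
                  return redirect("registration-thanks")   # assumed URL name
          else:
              contact, registrants = ContactForm(), RegistrantFormSet()
          return render(request, "register.html",
                        {"contact": contact, "registrants": registrants})

    With this shape, the JavaScript "add another person" button only has to clone an input row and bump the TOTAL_FORMS value in the formset's management form.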

    Read the article
