Search Results

Search found 8429 results on 338 pages for 'batch processing'.

  • Running a long operation in HTTP modules?

    - by Niranjan
    Hi, I have a simple requirement: I want to execute a long-running application program on the server (e.g. a DTSX package). I want to make an HTTP module for this, but I have a question: will the DTSX package keep running even if the user closes the page and the browser? In my case the user hits the handler with a query string, but what if the user closes the browser immediately? How is the behavior different from simple linear page processing? I want my DTSX package to finish once it's started, no matter how much time it takes, and I also don't want to block the user; that is why I am using HTTP modules in place of linear ASP page processing. Regards, Niranjan
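
    A child process spawned on the server is owned by the operating system, not by the web request, so once started it survives the browser closing; the only question is how it is launched. A fire-and-forget sketch of that pattern, written in Python purely for illustration (the question's handler would be ASP.NET; dtexec is SSIS's command-line package runner, and the package path here is hypothetical):

        import subprocess

        # Once spawned, the child belongs to the OS, not the request thread,
        # so it keeps running even if the client disconnects immediately.
        subprocess.Popen(
            ["dtexec", "/f", r"C:\packages\nightly_load.dtsx"],  # hypothetical path
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )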

    Read the article

  • python: how to jump to a particular line in a huge text file?

    - by photographer
    Are there any alternatives to the code below:

        startFromLine = 141978  # or whatever line I need to jump to
        urlsfile = open(filename, "rb", 0)
        linesCounter = 1
        for line in urlsfile:
            if linesCounter > startFromLine:
                DoSomethingWithThisLine(line)
            linesCounter += 1

    I'm processing a huge text file (~15MB) with lines of unknown but varying length, and need to jump to a particular line whose number I know in advance. I feel bad processing the lines one by one when I know I could skip at least the first half of the file. I'm looking for a more elegant solution, if there is one.
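
    A common alternative is to pay for one indexing pass up front and then seek() straight to any line; a minimal sketch, with function names of my own invention:

        def build_line_index(path):
            """One pass over the file, recording the byte offset of each line start."""
            offsets = [0]
            with open(path, "rb") as f:
                line = f.readline()
                while line:
                    offsets.append(f.tell())
                    line = f.readline()
            return offsets

        def read_line(path, offsets, lineno):
            """Jump directly to 1-based line `lineno` without scanning."""
            with open(path, "rb") as f:
                f.seek(offsets[lineno - 1])
                return f.readline()

    The index costs one full read, but every later jump is a single seek; for repeated lookups into the same file this beats rescanning from the top each time.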

    Read the article

  • Losing changed data in session

    - by user150528
    Our ASP.NET 2.0 application runs a very long (synchronized) process before sending the response back to the client. I observed that a second request, exactly the same as the initial one, was sent after the client (IE8) had waited a long time for a response, while our application was still processing the first request. I use the page session with a predefined key to store a flag when the initial request arrives, and then start the long process while the client IE waits for the response; if a second request comes in, our application checks the session value. After our application set the session flag and started processing, I used Fiddler's "Abort Session" to abort the initial request; right away the second request (same as the first one) was sent automatically, but the session value set earlier seems to no longer exist. Any thoughts?

    Read the article

  • Can a http server detect that a client has cancelled their request?

    - by Nick Retallack
    My web app must process and serve a lot of data to display certain pages. Sometimes, the user closes or refreshes a page while the server is still busy processing it. This means the server will continue to process data for several minutes only to send it to a client who is no longer listening. Is it possible to detect that the connection has been broken, and react to it? In this particular project, we're using Django and NginX, or Apache. I assumed this is possible because the Django development server appears to react to cancelled requests by printing Broken Pipe exceptions. I'd love to have it raise an exception that my application code could catch. Alternatively, I could register an unload event handler on the page in question, have it do a synchronous XHR requesting that the previous request from this user be cancelled, and do some kind of inter-process communication to make it so. Perhaps if the slower data processing were handed to another process that I could more easily identify and kill, without killing the responding process...
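
    At the raw-socket level a half-closed TCP connection is detectable without sending anything: the socket becomes readable and a peek returns zero bytes. A hedged sketch of that check (it assumes the client sends no further request data once the request is in, and getting hold of the underlying socket from Django application code is the hard part):

        import select
        import socket

        def client_gone(conn):
            """True if the peer has closed its end of the TCP connection.
            Assumes no more request data is expected, so any readable event
            is either an orderly EOF or an error."""
            readable, _, _ = select.select([conn], [], [], 0)
            if not readable:
                return False           # nothing pending; connection looks alive
            try:
                data = conn.recv(1, socket.MSG_PEEK)  # peek without consuming
            except OSError:
                return True            # connection reset or otherwise broken
            return data == b""         # zero bytes on a readable socket means EOF

    Long-running request code could call a check like this at key points and abandon the work when it returns True.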

    Read the article

  • Post-login execution

    - by Javi
    Hello, I need to do some processing only after the user has successfully logged into the system. I thought I could write a RESTful method and set it as the default-target-url, so that when the login is successful the user is sent to this URL, and then I can redirect to the real index of my web application:

        <form-login login-page='/login.htm' default-target-url='/home.htm' always-use-default-target='true' />

    The problem is that this processing can also be triggered by calling its URL directly, so it could be executed by any user at any time. I want to make sure it is only executed after login. Is there any way to do this? Thank you very much.

    Read the article

  • Why is QProcess converting the '=' in my arguments to spaces?

    - by dagorym
    I've run into a weird error with a Qt program running on Windows. The program uses QProcess to spawn a child process with two arguments. The program and arguments passed to the QProcess::start() method are of the form:

        "batchfile.bat" "--option1=some_value" "--option2=some_other_value\with_a\path"

    For some reason, by the time those options get to the batch file for processing, the equals signs have been converted to spaces, and they now look like:

        "batchfile.bat" "--option1 some_value" "--option2 some_other_value\with_a\path"

    Because of this, the processing fails. Any ideas what could be causing the equals signs to be replaced by spaces? I'm using the MinGW build of the Qt 4.6.3 framework found on the Qt download page.

    Read the article

  • Respecting EXIF orientation when displaying iPhone photos on the web

    - by GingerBreadMane
    I am developing an iPhone camera app that uploads an image to Amazon S3, and that image is displayed on a website. When the iPhone takes a picture, it always saves the photo in an upright orientation, while the orientation needed to correctly view the photo is saved in the image's EXIF data. So if I take a photo with the iPhone and open it in Firefox without processing the EXIF data, the image could be sideways or upside down. My problem is that I don't know how to display the photo in its correct orientation on the website. My current solution is to rotate the photo in the iPhone app, but I'd rather not do that. Is there any way to respect the EXIF data when displaying on the web without pre-processing the image?
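
    If pre-processing turns out to be unavoidable, doing it once server-side at upload time is cheap with a modern Pillow, which can apply the Orientation tag in a single call (this is today's Pillow API rather than anything available in the PIL of the question's era, and the file names are placeholders):

        from PIL import Image, ImageOps

        img = Image.open("upload.jpg")         # hypothetical uploaded photo
        img = ImageOps.exif_transpose(img)     # rotate/flip per the EXIF Orientation tag
        img.save("upload_oriented.jpg")        # now renders upright in any browser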

    Read the article

  • How to write async background workers that work on a WPF FlowDocument

    - by iBe
    I'm trying to write a background worker that processes a FlowDocument. I can't access the properties of FlowDocument objects because of thread verification. I tried serializing the document and loading it on the worker thread, which actually solved the thread verification issue. However, once the processing is complete I also need to use things like TextPointer objects, and those objects now point to objects in the copy, not the original. Can anyone suggest the best way to approach such background processing in WPF?

    Read the article

  • Controlling Azure worker role concurrency across multiple instances

    - by NER1808
    I have a simple worker role in Azure that does some data processing on a SQL Azure database. The worker basically adds data from a third-party data source to my database every 2 minutes. When I have two instances of the role, this obviously doubles up unnecessarily. I would like to have 2 instances for redundancy and the 99.95% uptime, but I do not want them both processing at the same time, as they will just duplicate the same job. Is there a standard pattern for this that I am missing? I know I could set flags in the database, but am hoping there is another easier or better way to manage this. Thanks
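
    The database flag can be made robust by treating it as an expiring lease that exactly one instance can win each cycle; a hedged sketch of the pattern (the single-row lease table is hypothetical, and sqlite3 stands in for SQL Azure just to keep the sketch self-contained):

        import datetime
        import sqlite3

        LEASE_SECONDS = 110  # a little shorter than the 2-minute schedule

        def try_acquire_lease(conn, instance_id):
            """Atomically claim the job; at most one instance's UPDATE can match.
            Assumed schema: lease(job TEXT PRIMARY KEY, owner TEXT, expires TEXT)."""
            now = datetime.datetime.utcnow()
            expires = now + datetime.timedelta(seconds=LEASE_SECONDS)
            cur = conn.execute(
                "UPDATE lease SET owner = ?, expires = ? "
                "WHERE job = 'import' AND expires < ?",
                (instance_id, expires.isoformat(), now.isoformat()),
            )
            conn.commit()
            return cur.rowcount == 1  # run the job only if we won the lease

    Both instances attempt the acquire every cycle; whichever wins does the work, and if it dies the lease expires and the other instance takes over, which keeps the redundancy without duplicating the job.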

    Read the article

  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof of concept, but I want to spend my time most effectively by picking the components that would be used in the real solution later on. The aim of this solution is to take data input from an external source, churn it, and make the result available to a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The number of frontend hits on this data can literally be millions per second. The data itself is very volatile; it can (and does) change quite rapidly. However, the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour. I was looking into solutions like memcached; however, this particular one doesn't fulfil all our requirements, which are listed below:

    - The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers.
    - The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster.
    - The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max) so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es), as in memcached clients, when a node goes down.
    - Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster).
    - There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
    - Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24hr/day support contract is a must.
    - The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available.
    - Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.

    Read the article

  • Navigating cursor rows in SQLite

    - by Alan Harris-Reid
    Hi there, I am trying to understand how the following built-in functions work when sequentially processing cursor rows. The descriptions come from the Python 3.1 manual (using sqlite3):

    - Cursor.fetchone(): Fetches the next row of a query result set, returning a single sequence.
    - Cursor.fetchmany(): Fetches the next set of rows of a query result, returning a list.
    - Cursor.fetchall(): Fetches all (remaining) rows of a query result, returning a list.

    So if I have a loop in which I am processing one row at a time using cursor.fetchone(), and some later code requires that I return to the first row, or fetch all rows using fetchall(), how do I do it? The concept is a bit strange to me, especially coming from a FoxPro background, which has the concept of a record pointer that can be moved to the 1st or last row in a cursor (go top/bottom), or to the nth row (go n). Any help would be appreciated. Alan
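
    There is no record pointer to move: a sqlite3 cursor is a forward-only stream over the result set. The usual idiom when random access is needed is to materialize the rows into a list, as in this small self-contained sketch:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (n INTEGER)")
        conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

        cur = conn.execute("SELECT n FROM t ORDER BY n")
        rows = cur.fetchall()   # drains the cursor; it cannot be rewound

        first = rows[0]         # FoxPro's "go top"
        fifth = rows[4]         # "go 5" (note the 0-based indexing)
        last = rows[-1]         # "go bottom"

    To revisit rows after a fetchone() loop, either keep the fetched rows in your own list as you go, or simply re-execute the query.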

    Read the article

  • Detect aborted connection during ASIO request

    - by Tim Sylvester
    Is there an established way to determine whether the other end of a TCP connection is closed in the asio framework without sending any data? Using Boost.asio for a server process, if the client times out or otherwise disconnects before the server has responded to a request, the server doesn't find this out until it has finished the request and generated a response to send, when the send immediately generates a connection-aborted error. For some long-running requests, this can lead to clients canceling and retrying over and over, piling up many instances of the same request running in parallel, making them take even longer and "snowballing" into an avalanche that makes the server unusable. Essentially hitting F5 over and over is a denial-of-service attack. Unfortunately I can't start sending a response until the request is complete, so "streaming" the result out is not an option, I need to be able to check at key points during the request processing and stop that processing if the client has given up.

    Read the article

  • PHP variable scope

    - by Illes Peter
    I have two files:

    - index.php
    - /lib/user.php

    index.php contains the form:

        <div class="<? echo $msgclass; ?>">
            <? echo $msg; ?>
        </div>
        <form id="signin" action="/lib/user.php" method="post">
            ...
        </form>

    user.php does all the processing. It sets $msg to 'some error message' and $msgalert to 'error' in case of any error. At the end of processing it uses header() to redirect to index.php. But after the redirection, $msg and $msgalert go out of scope and index.php only gets empty vars. How can I fix this?

    Read the article

  • Python Imaging: YCbCr problems

    - by daver
    Hi, I'm doing some image processing in Python using PIL. I need to extract the luminance layer from a series of images, do some processing on it using numpy, then put the edited luminance layer back into the image and save it. The problem is, I can't seem to get any meaningful representation of my image in YCbCr format, or at least I don't understand what PIL is giving me as YCbCr. The PIL documentation claims the YCbCr format gives three channels, but when I grab the data out of the image using np.asarray, I get 4 channels. OK, so I figure one must be alpha. Here is some code I'm using to test this process:

        import Image as im
        import numpy as np

        pengIm = im.open("Data\\Test\\Penguins.bmp")
        yIm = pengIm.convert("YCbCr")
        testIm = np.asarray(yIm)
        grey = testIm[:, :, 0]
        grey = grey.astype('uint8')
        greyIm = im.fromarray(grey, "L")
        greyIm.save("Data\\Test\\grey.bmp")

    I'm expecting a greyscale version of my image, but what I get is this jumbled-up mess: http://i.imgur.com/zlhIh.png. Can anybody explain where I'm going wrong? The same code in MATLAB works exactly as I expect.
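
    A way to sidestep whatever the fourth channel is: split the converted image into bands and work on the luminance band alone, then merge the bands back. A sketch along those lines (same file paths as the question; the classic-PIL import is kept, though in Pillow it would be `from PIL import Image`):

        import numpy as np
        import Image as im  # classic PIL import style, as in the question

        pengIm = im.open("Data\\Test\\Penguins.bmp").convert("YCbCr")
        y, cb, cr = pengIm.split()                    # three single-band "L" images

        lum = np.asarray(y, dtype=np.uint8).copy()    # copy: asarray can be read-only
        # ... numpy processing on the 2-D lum array goes here ...

        newY = im.fromarray(lum, "L")
        result = im.merge("YCbCr", (newY, cb, cr)).convert("RGB")
        result.save("Data\\Test\\grey.bmp")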

    Read the article

  • Is there a way to change the maximum width of a window without using the WM_GETMINMAXINFO message?

    - by David
    I want to change the Windows-imposed maximum width that a window can be resized to, for an external application's window (not my C#/WinForms program's window). The documentation of GetSystemMetrics for SM_CXMAXTRACK says: "The default maximum width of a window that has a caption and sizing borders, in pixels. This metric refers to the entire desktop. The user cannot drag the window frame to a size larger than these dimensions. A window can override this value by processing the WM_GETMINMAXINFO message." Is there a way to modify this SM_CXMAXTRACK value (either system-wide or for one particular window) without processing the WM_GETMINMAXINFO message? Maybe an undocumented function, a registry setting, etc.? (Or: the documentation for MINMAXINFO.ptMaxTrackSize says: "This value is based on the size of the virtual screen and can be obtained programmatically from the system metrics SM_CXMAXTRACK and SM_CYMAXTRACK." Maybe there is a way to change the size of the virtual screen?) Thank you

    Read the article

  • High Throughput and Windows Workflow Foundation

    - by SometimesUseful
    Can WWF handle high-throughput scenarios where several dozen records are 'actively' being processed in parallel at any one time? We want to build a workflow process which handles a few thousand records per hour. Each record takes up to a minute to process, because it makes external web service calls. We are testing Windows Workflow Foundation to do this, but our demo programs show that the processing of each record appears to run in sequence, not in parallel, when we use parallel activities to process several records at once within one workflow instance. Should we use multiple workflow instances or parallel activities? Are there any known patterns for high-performance WWF processing?

    Read the article

  • What is the best way to communicate between cluster nodes?

    - by Tom
    I have an application written in a combination of ASP/VB6/VBScript and ASP.NET/C# that consists of a website part, a SOAP-like web service part, and a queue-processing part that processes incoming files in a hot folder. We are used to running under load balancers (Microsoft or other makes). Often we need to communicate between the different load-balanced servers. Currently we do this through the SQL Server database that is common to all nodes; however, this comes with a performance penalty, as each message requires a transaction and continual polling by the other nodes. What would be better ways to achieve this? Tom Appelby

    Read the article

  • HSM - cryptoki - opening sessions overhead

    - by Raj
    I have a query regarding sessions with an HSM. I am aware that there is an overhead if you initialise and finalise the Cryptoki API for every file you want to encrypt/decrypt. My queries are:

    1. Is there an overhead in opening and closing an individual session for every file you want to encrypt/decrypt (C_Initialize/C_Finalize)?
    2. What is the maximum number of sessions I can have open simultaneously on an HSM without affecting performance?
    3. Is opening and closing a session for each individual file the best approach, or is opening one session, processing multiple files, and then closing the session better?

    Thanks
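
    The usual guidance is C_Initialize/C_Finalize once per process and one session reused across many files, since opening a session is much cheaper than initializing the library but still not free. A hedged sketch of that shape using the PyKCS11 bindings (module path and per-file work are placeholders, and API details vary between PyKCS11 versions and HSM vendors):

        import PyKCS11

        lib = PyKCS11.PyKCS11Lib()
        lib.load("/usr/lib/libsofthsm2.so")   # hypothetical PKCS#11 module path

        slot = lib.getSlotList()[0]           # first available slot
        session = lib.openSession(slot, PyKCS11.CKF_SERIAL_SESSION)
        try:
            for path in ["a.dat", "b.dat", "c.dat"]:   # placeholder file list
                # per-file encrypt/decrypt calls go here, all on the one session
                pass
        finally:
            session.closeSession()            # pay the session cost once, not per file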

    Read the article

  • Perl: efficient parsing of a CSV file

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl, and am looking to make things more efficient. My approach has been to split() the file into lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since at least two passes over the data are required (once to split into lines, then once more per line). This is a very large file, so cutting the processing in half would be a significant improvement to the entire application. My question is: what is the most time-efficient means of parsing a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). Also, I don't want to get into parallel processing, although that might work effectively.
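
    The structural fix is to stream the file line by line and split each line exactly once, so the data is only traversed one time; sketched here in Python (this digest's one language for new examples; in Perl the same shape is a while(<$fh>) loop with a single split per line):

        def process_csv(path):
            """Single pass: stream lines, split each once, never hold the whole file."""
            with open(path, "r") as f:
                for line in f:
                    fields = line.rstrip("\n").split(",")
                    handle(fields)

        def handle(fields):
            pass  # placeholder for the real per-record processing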

    Read the article

  • Writing my own iostream utility class: Is this a good idea?

    - by Alex
    I have an application that wants to read word by word, delimited by whitespace, from a file. I am using code along these lines:

        std::istream in;
        string word;
        while (in.good()) {
            in >> word;
            // Processing, etc.
            ...
        }

    My issue is that the processing on the words themselves is actually rather light. The major time consumer is a set of MySQL queries I run. What I was thinking is writing a buffered class that reads something like a kilobyte from the file, initializes a stringstream as a buffer, and performs extraction from that transparently to avoid a great many I/O operations. Thoughts and advice?
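
    The chunk-and-carry scheme described above is sound; here it is sketched in Python (this digest's one language for new examples) to show the one subtlety, namely that a read can cut the final token in half:

        def words(f, chunk_size=64 * 1024):
            """Yield whitespace-delimited words, doing one big read per chunk."""
            leftover = ""
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    if leftover:
                        yield leftover        # file ended without trailing whitespace
                    return
                tokens = (leftover + chunk).split()
                if chunk[-1].isspace() or not tokens:
                    leftover = ""             # chunk ended cleanly; all tokens complete
                else:
                    leftover = tokens.pop()   # last token may be cut off; carry it over
                yield from tokens

    That said, since the stated bottleneck is the MySQL queries rather than extraction, batching those queries may pay off more than buffering the reads.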

    Read the article

  • Why does addSubview load the view asynchronously?

    - by moshe
    I have a UIView that I want to load when the user clicks a button. Some data processing also happens after I call addSubview, involving parsing an XML file retrieved from the web. The problem is that the view doesn't show up until after the data processing, even if addSubview is called first. I think I'm missing something here; can anyone help? I have a "Loading..." view I'm adding as a custom modal (meaning I'm not using the modalViewController). This action is linked to a button in the navigationController. Code:

        - (IBAction)parseXml:(id)sender {
            LoadingModalViewController *loadingModal =
                [[LoadingModalViewController alloc] initWithNibName:@"LoadingModalViewController"
                                                             bundle:nil];
            [navigationController.view addSubview:loadingModal.view];
            [xmlParser parse];
        }

    Read the article

  • Java Annotations - Is there any helper library to read/process annotations?

    - by mjlee
    I have started to use Java annotations heavily. One example is taking annotated methods and converting them into telnet-based command-line commands. I do this by parsing the annotations and hooking into the jopt option parser. However, I do a lot of this manually. For example, method parameter annotation processing:

        Method method = ...; // obtained via reflection
        Class<?>[] parameters = method.getParameterTypes();
        Annotation[][] annotations = method.getParameterAnnotations();
        for (int i = 0; i < parameters.length; i++) {
            // iterate through the annotations, see if each param has a specific annotation, etc.
        }

    It is very redundant and tedious. Is there any open-source project that helps with processing annotations?

    Read the article

  • How to connect to a network of ActiveMQ brokers from a client application?

    - by subh
    I have set up a network of brokers in ActiveMQ; how do I connect to it from my client application? I tried with

        network:static:(tcp://master1.IP:61616,tcp://master2.IP:61617)

    but I get the following exception:

        javax.jms.JMSException: Uncategorized exception occured during JMS processing; nested exception is javax.jms.JMSException: Could not create Transport. Reason: java.io.IOException: Transport scheme NOT recognized: [network]

    With

        static:(tcp://master1.IP:61616,tcp://master2.IP:61617)

    I get the exception:

        javax.jms.JMSException: Uncategorized exception occured during JMS processing; nested exception is javax.jms.JMSException: Could not create Transport. Reason: java.io.IOException: Transport scheme NOT recognized: [static]

    Thanks

    Read the article

  • Request attributes in JSF/ICEfaces behave strangely (survive request end)

    - by hubertg
    I have the following code in a listener method:

        FacesContext.getCurrentInstance().getExternalContext().getRequestMap().put("time", new Date());

    When a button is clicked, the following code is executed:

        System.out.println(FacesContext.getCurrentInstance().getExternalContext().getRequestMap().get("time"));

    One could expect that "time" would be null when the listener was not executed while processing the current request, but it seems like the "time" object survives the request processing. So once "time" has been set sometime in the past, it stays there... Can anybody explain this? Thanks.

    Read the article
