Search Results

Search found 7081 results on 284 pages for 'idle processing'.

Page 97 of 284

  • Connection problems - Celery/Django

    - by RadiantHex
    Hi folks, long night... can't get my second Celery/RabbitMQ setup to work. Step 1: sudo rabbitmq-server — runs ok! Step 2: python manage.py celeryd -l info — error: [2010-12-28 03:38:24,690: ERROR/MainProcess] CarrotListener: Connection Error: Socket closed. Trying again in 28 seconds... I have definitely: added the rabbitmq user and vhost, and updated the Django settings.py. Edit: I think it might have to do with installing from a .deb instead of apt-get. After uninstalling the deb and installing the apt-get version I get this: invoke-rc.d: initscript rabbitmq-server, action "start" failed. dpkg: error processing rabbitmq-server (--configure): subprocess installed post-installation script returned error exit status 1. Errors were encountered while processing: rabbitmq-server. E: Sub-process /usr/bin/dpkg returned an error code (1). Any ideas on how I could debug this? :|
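
    The Celery-side symptom above just says nothing is answering on the broker's socket. A quick, minimal check of whether RabbitMQ is actually listening on the default AMQP port before digging into the Celery config (host and port are assumptions; adjust them if your broker config differs):

        import socket

        def broker_is_listening(host="localhost", port=5672, timeout=2):
            # "Connection Error: Socket closed" from CarrotListener usually means
            # nothing is accepting connections on the AMQP port at all.
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        print(broker_is_listening())

    If this prints False, the dpkg/init-script failure is the real problem and Celery will keep retrying until rabbitmq-server starts cleanly; the logs under /var/log/rabbitmq are the place to look for why the post-installation start failed.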

    Read the article

  • Measuring Web Page Performance on Client vs. Server

    - by Yaakov Ellis
    I am working with a web page (ASP.net 3.5) that is very complicated and in certain circumstances has major performance issues. It uses Ajax (through the Telerik AjaxManager) for most of its functionality. I would like to be able to measure in some way the amounts of time for the following, for each request:
    - Client submitting request to server
    - Client-to-server
    - Server initializing request
    - Server processing request
    - Server-to-client
    - Client rendering, JavaScript processing
    I have monitored the database traffic and cannot find any obvious culprit. On the other hand, I have a suspicion that some of the Ajax interactions are causing performance issues. However, until I have a way to track the times involved, make a baseline measurement, and measure performance as I tweak, it will be hard to work on the issue. So what is the best way to measure all of these? Is there one tool that can do it? A combination of FireBug and logging inserted into different places in the page life-cycle?

    Read the article

  • jQuery/Javascript framework efficiency

    - by Russell
    My latest project is using a JavaScript framework (jQuery), along with some plugins (validation, jquery-ui, datepicker, facebox, ...) to help make a modern web application. I am now finding pages loading slower than I am used to. After some JS profiling (thanks VS2010!), it seems a lot of the time is spent processing inside the framework. Now I understand that the more complex the UI tools, the more processing needs to be done. The project is not yet at a large stage and the functions involved are fairly average, yet at this stage I can already see it is not going to scale well. I noticed things like the 'each' command in jQuery take quite a lot of processing time. Have others experienced extra latency using JS frameworks? How do I minimise their effect on page performance? Are there best practices on implementation using JS frameworks? Thanks

    Read the article

  • Converting a year from 4 digit to 2 digit and back again in C#

    - by Mike Wills
    My credit card processor requires that I send a two-digit year from the credit card expiration date. Here is how I am currently processing it: I put a DropDownList of the 4-digit year on the page. I validate the expiration date in a DateTime field to be sure that the expiration date being passed to the CC processor isn't expired. I send a two-digit year to the CC processor (as required); I do this via a substring of the value from the year DDL. Is there a method out there to convert a four-digit year to a two-digit year? I am not seeing anything on the DateTime object. Or should I just keep processing it as I am?
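
    The conversion itself is just formatting: in C# the usual routes are year % 100 or formatting the DateTime with a "yy" custom format string rather than substringing the DDL text. A minimal, language-agnostic sketch of the same two ideas (shown in Python purely as an illustration, not the poster's ASP.NET code):

        from datetime import datetime

        expiry = datetime(2014, 8, 1)      # hypothetical expiration date from the form

        two_digit = expiry.year % 100      # arithmetic: 2014 -> 14
        formatted = expiry.strftime("%y")  # date formatting: "14" as a zero-padded string

        print(two_digit, formatted)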

    Read the article

  • Running a long operation in HTTP modules?

    - by Niranjan
    Hi, I have a simple requirement in which I want to execute a long-running application program on the server (e.g. a DTSX package). I want to make an HTTP module for this, but I have a question: will the DTSX keep running even if the user closes the page and browser? In my case the user hits the handler with a query string, but what if the user closes the browser immediately? How is the behavior different from simple linear page processing? I want my DTSX package to finish once it's started, no matter how much time it takes, and I also don't want to block the user; that is why I am using HTTP modules in place of linear ASP page processing. Regards, Niranjan

    Read the article

  • python: how to jump to a particular line in a huge text file?

    - by photographer
    Are there any alternatives to the code below:

        startFromLine = 141978  # or whatever line I need to jump to
        urlsfile = open(filename, "rb", 0)
        linesCounter = 1
        for line in urlsfile:
            if linesCounter > startFromLine:
                DoSomethingWithThisLine(line)
            linesCounter += 1

    if I'm processing a huge text file (~15 MB) with lines of unknown but varying length, and need to jump to a particular line whose number I know in advance? I feel bad processing them one by one when I know I could ignore at least the first half of the file. Looking for a more elegant solution if there is any.
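
    A minimal sketch of the same skip-ahead without the hand-rolled counter, using itertools.islice (filename and DoSomethingWithThisLine are taken from the question; since the line lengths are unknown, the skipped lines still have to be read, just not tested one by one in Python code):

        from itertools import islice

        start_from_line = 141978  # index of the first line to process (0-based here)

        with open(filename, "rb") as urlsfile:
            # islice consumes and discards the first start_from_line lines,
            # then yields the remaining lines one at a time.
            for line in islice(urlsfile, start_from_line, None):
                DoSomethingWithThisLine(line)

    Truly jumping past the first half of the file would require knowing a byte offset, e.g. an index of line offsets built once in an earlier pass and then used with seek().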

    Read the article

  • Lose changed data in session

    - by user150528
    Our ASP.NET 2.0 application has a very long (synchronized) process before sending the response back to the client. I observed that a second request, exactly the same as the initial one, was sent after the client (IE8) had waited a long time for the response while our application was still processing the first request. I use the page session with a predefined key to store a flag when the initial request arrives and the long process starts while the client IE waits for the response, so if a second request comes in, our application checks the session value. After our application sets the session flag and starts processing, I use Fiddler's “Abort Session” to abort the initial request; right away the second request (same as the first one) is sent automatically, but the session value set earlier no longer seems to exist. Any thoughts?

    Read the article

  • Can an HTTP server detect that a client has cancelled their request?

    - by Nick Retallack
    My web app must process and serve a lot of data to display certain pages. Sometimes, the user closes or refreshes a page while the server is still busy processing it. This means the server will continue to process data for several minutes only to send it to a client who is no longer listening. Is it possible to detect that the connection has been broken, and react to it? In this particular project, we're using Django and NginX, or Apache. I assumed this is possible because the Django development server appears to react to cancelled requests by printing Broken Pipe exceptions. I'd love to have it raise an exception that my application code could catch. Alternatively, I could register an unload event handler on the page in question, have it do a synchronous XHR requesting that the previous request from this user be cancelled, and do some kind of inter-process communication to make it so. Perhaps if the slower data processing were handed to another process that I could more easily identify and kill, without killing the responding process...
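
    A minimal sketch of the idea in the last sentence, handing the slow work to a separate process that can be identified and terminated without touching the process answering HTTP requests (do_expensive_work, the workers dict, and the request-id scheme are placeholders invented for the illustration, not Django or project APIs):

        import multiprocessing

        workers = {}  # request id -> Process, so a later cancel call can find it

        def do_expensive_work(request_id):
            ...  # the long-running data processing goes here

        def start_work(request_id):
            p = multiprocessing.Process(target=do_expensive_work, args=(request_id,))
            p.start()
            workers[request_id] = p

        def cancel_work(request_id):
            p = workers.pop(request_id, None)
            if p is not None and p.is_alive():
                p.terminate()  # kills only the worker, not the responding process

    In a real deployment the request-id-to-worker mapping would have to live somewhere shared (the database or a cache keyed by PID), since the unload handler's cancel request may be served by a different web process than the one that started the work.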

    Read the article

  • Why is QProcess converting the '=' in my arguments to spaces

    - by dagorym
    I've run into a weird error with a Qt program running on Windows. The program uses QProcess to spawn a child process with two arguments. The program and arguments passed to the QProcess::start() method are of the form: "batchfile.bat" "--option1=some_value" "--option2=some_other_value\with_a\path" For some reason, by the time those options get to the batch file for processing, the equals signs have been converted to spaces and it now looks like: "batchfile.bat" "--option1 some_value" "--option2 some_other_value\with_a\path" Because of this, the processing fails. Any ideas what could be causing the equals signs to be replaced by spaces? I'm using the MinGW build of the Qt 4.6.3 framework found on the Qt download page.

    Read the article

  • Reliable and fast way to convert a zillion ODT files to PDF?

    - by Marco Mariani
    I need to pre-produce a million or two PDF files from a simple template (a few pages and tables) with embedded fonts. Usually I would stay low level in a case like this and compose everything with a library like ReportLab, but I joined the project late. Currently I have a template.odt and use markers in the content.xml files to fill in data from a DB. I can smoothly create the ODT files; they always look right. For the ODT to PDF conversion, I'm using OpenOffice in server mode (and PyODConverter w/ a named pipe), but it's not very reliable: in a batch of documents, there is eventually a point after which all the processed files are converted into garbage (wrong fonts and letters sprawled all over the page). The problem is not predictably reproducible (it does not depend on the data) and happens in OOo 2.3 and 3.2, on Ubuntu, XP, Server 2003 and Windows 7. My Heisenbug detector is ticking. I tried reducing the size of the batches and restarting OOo after each one; still, a small percentage of the documents are messed up. Of course I'll write about this on the OOo mailing lists, but in the meanwhile I have a delivery and have lost too much time already. Where do I go?
    - Completely avoid the ODT format and go for another template system. Suggestions? Anything that takes a few seconds to run is way too slow; OOo takes around a second, and that sums to 15 days of processing time (I had to write a program for clustering the jobs over several clients).
    - Keep the format but go for another tool/program for the conversion. Which one? There are many apps in the shareware or commercial repositories for Windows, but trying each one is a daunting task. Some are too slow, some cannot be run in batch without buying them first, some cannot work from the command line, etc. Open-source tools tend not to reinvent the wheel and often depend on OpenOffice (a command-line conversion sketch follows below).
    - Convert to an intermediate .DOC format, which could help avoid the OOo bug, but it would double the processing time and complicate a task that is already too hairy.
    - Produce the PDFs twice and compare them, discarding the whole batch if there's something wrong. Although the documents look equal, I know of no way to compare the binary content.
    - Restart OOo after processing each document. It would take a lot more time to produce them; it would lower the percentage of wrong files, but make it very hard to identify them.
    - Go for ReportLab and recreate the pages programmatically. This is the approach I'm going to try in a few minutes.
    - Learn to properly format bulleted lists.
    Thanks a lot.
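
    For the "another tool, driven from the command line" option, newer LibreOffice builds expose a one-shot headless conversion that also doubles as "restart the office process per document", since every invocation is a fresh process. A minimal sketch under that assumption (the --headless/--convert-to flags are LibreOffice options; check whether the OOo versions mentioned above support them before relying on this):

        import subprocess
        from pathlib import Path

        def odt_to_pdf(odt_path, out_dir):
            # One fresh soffice process per file: slower per document, but a
            # corrupted office instance can never poison the rest of the batch.
            subprocess.run(
                ["soffice", "--headless", "--convert-to", "pdf",
                 "--outdir", str(out_dir), str(odt_path)],
                check=True,
                timeout=120,
            )
            return Path(out_dir) / (Path(odt_path).stem + ".pdf")

    At roughly a second of conversion plus a few seconds of process startup per file this is still slow for millions of documents, which is why composing the PDFs directly with ReportLab remains the option being tried next.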

    Read the article

  • Post login execution

    - by Javi
    Hello, I need to do some processing only after the user has successfully logged into the system. I thought that I could write a RESTful method and set it as the default-target-url, so when the login is successful it goes to this URL and I can then redirect to the real index of my web application: <form-login login-page='/login.htm' default-target-url='/home.htm' always-use-default-target='true' /> The problem is that this processing can be triggered by calling its URL directly, so it could be executed by any user at any time. I want to make sure it is only executed after login. Is there any way to do this? Thank you very much.

    Read the article

  • Respecting EXIF orientation when displaying iPhone photos on the web

    - by GingerBreadMane
    I am developing an iPhone camera app that uploads an image to Amazon S3, and that image is displayed on a website. When the iPhone takes a picture, it always saves the photo in an upright orientation, while the orientation needed to correctly view the photo is saved in the image's EXIF data. So if I take a photo with the iPhone and open it in Firefox without processing the EXIF data, the image could be sideways or upside down. My problem is that I don't know how to display the photo in its correct orientation on the website. My current solution is to rotate the photo in the iPhone app, but I'd rather not do that. Is there any way to respect the EXIF data when displaying on the web without pre-processing the image?
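
    Browsers of that era generally ignored the EXIF Orientation tag when rendering plain <img> elements, so some step has to apply the rotation somewhere. If that step ends up on the server rather than in the iPhone app, a minimal PIL sketch of normalizing the upload once (tag 274 is the standard EXIF Orientation tag; _getexif() is PIL's JPEG-only EXIF accessor, so this assumes JPEG uploads and hypothetical file paths):

        from PIL import Image

        ORIENTATION_TAG = 274  # EXIF "Orientation"

        def normalize_orientation(in_path, out_path):
            img = Image.open(in_path)
            exif = img._getexif() or {}
            orientation = exif.get(ORIENTATION_TAG, 1)
            # The three rotations a phone camera commonly records.
            if orientation == 3:
                img = img.rotate(180, expand=True)
            elif orientation == 6:
                img = img.rotate(270, expand=True)
            elif orientation == 8:
                img = img.rotate(90, expand=True)
            img.save(out_path)

    The alternative that avoids touching pixels is to read the orientation value once at upload time and store it alongside the S3 URL, then rotate in the page with CSS transforms, at the cost of doing that bookkeeping in the front end.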

    Read the article

  • How to write async background workers that work on WPF flowdocument

    - by iBe
    I'm trying to write a background worker that processes a FlowDocument. I can't access the properties of FlowDocument objects because of the thread verification. I tried serializing the document and loading it on the worker thread, which actually solved the thread verification issue. However, once the processing is complete I also need to use things like TextPointer objects, and those objects now point to objects in the copy, not the original. Can anyone suggest the best way to approach such background processing in WPF?

    Read the article

  • Controlling Azure worker role concurrency across multiple instances

    - by NER1808
    I have a simple worker role in Azure that does some data processing on a SQL Azure database. The worker basically adds data from a 3rd-party data source to my database every 2 minutes. When I have two instances of the role, this obviously doubles up unnecessarily. I would like to have 2 instances for redundancy and the 99.95% uptime, but do not want them both processing at the same time, as they will just duplicate the same job. Is there a standard pattern for this that I am missing? I know I could set flags in the database, but am hoping there is an easier or better way to manage this. Thanks

    Read the article

  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof-of-concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on. The aim of this solution is to take data input from an external source, churn it and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The amount of frontend hits on this data can literally be millions per second. The data itself is very volatile; it can (and does) change quite rapidly. However the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour. I was looking into solutions like memcached, however this particular one doesn't fulfil all our requirements, which are listed below:
    - The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers.
    - The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster.
    - The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1 GB max) so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es) like in memcached clients when a node goes down.
    - Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster).
    - There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
    - Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24hr/day support contract is a must.
    - The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available.
    - Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.
    Thanks in advance for any ideas.

    Read the article

  • Navigating cursor rows in SQLite

    - by Alan Harris-Reid
    Hi there, I am trying to understand how the following built-in functions work when sequentially processing cursor rows. The descriptions come from the Python 3.1 manual (using SQLite3):
    - Cursor.fetchone(): fetches the next row of a query result set, returning a single sequence.
    - Cursor.fetchmany(): fetches the next set of rows of a query result, returning a list.
    - Cursor.fetchall(): fetches all (remaining) rows of a query result, returning a list.
    So if I have a loop in which I am processing one row at a time using cursor.fetchone(), and some later code requires that I return to the first row, or fetch all rows using fetchall(), how do I do it? The concept is a bit strange to me, especially coming from a FoxPro background, which has the concept of a record pointer that can be moved to the 1st or last row in a cursor (go top/bottom), or to the nth row (go n). Any help would be appreciated. Alan
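
    A minimal sketch of the usual workaround (the table and query are placeholders): sqlite3 cursors are forward-only, so to revisit rows you either materialize them with fetchall() and index the resulting list, or simply execute the query again to reset the position.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (n INTEGER)")
        conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])

        cur = conn.cursor()
        cur.execute("SELECT n FROM t ORDER BY n")

        rows = cur.fetchall()   # materialize every row into a plain list
        first = rows[0]         # "go top": just index the list
        last = rows[-1]         # "go bottom"
        nth = rows[2]           # "go n"

        # Alternatively, keep streaming with fetchone() and, when you need to
        # start over, re-execute the query to get a fresh cursor position.
        cur.execute("SELECT n FROM t ORDER BY n")
        row = cur.fetchone()

    There is no record pointer to move backwards; the list produced by fetchall() plays that role once the result set is small enough to hold in memory.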

    Read the article

  • Detect aborted connection during ASIO request

    - by Tim Sylvester
    Is there an established way to determine whether the other end of a TCP connection is closed in the asio framework without sending any data? Using Boost.asio for a server process, if the client times out or otherwise disconnects before the server has responded to a request, the server doesn't find this out until it has finished the request and generated a response to send, when the send immediately generates a connection-aborted error. For some long-running requests, this can lead to clients canceling and retrying over and over, piling up many instances of the same request running in parallel, making them take even longer and "snowballing" into an avalanche that makes the server unusable. Essentially hitting F5 over and over is a denial-of-service attack. Unfortunately I can't start sending a response until the request is complete, so "streaming" the result out is not an option, I need to be able to check at key points during the request processing and stop that processing if the client has given up.

    Read the article

  • php variable scope

    - by Illes Peter
    I have two files: index.php and /lib/user.php. Index contains the form: <div class="<? echo $msgclass; ?>"> <? echo $msg; ?> </div> <form id="signin" action="/lib/user.php" method="post"> ... </form> User.php does all the processing. It sets $msg to 'some error message' and $msgalert to 'error' in case of any error. At the end of processing it uses header() to redirect to index.php. But after the redirection $msg and $msgalert go out of scope and index only gets empty vars. How can I fix this?

    Read the article

  • High Throughput and Windows Workflow Foundation

    - by SometimesUseful
    Can WWF handle high-throughput scenarios where several dozen records are 'actively' being processed in parallel at any one time? We want to build a workflow process which handles a few thousand records per hour. Each record takes up to a minute to process, because it makes external web service calls. We are testing Windows Workflow Foundation to do this, but our demo programs show that the processing of each record appears to run in sequence, not in parallel, when we use parallel activities to process several records at once within one workflow instance. Should we use multiple workflow instances or parallel activities? Are there any known patterns for high-performance WWF processing?

    Read the article

  • Is there a way to change the maximum width of a window without using the WM_GETMINMAXINFO message?

    - by David
    I want to change the imposed Windows maximum width that a window can be resized to, for an external application's window (not my C#/WinForms program's window). The documentation of GetSystemMetrics for SM_CXMAXTRACK says: "The default maximum width of a window that has a caption and sizing borders, in pixels. This metric refers to the entire desktop. The user cannot drag the window frame to a size larger than these dimensions. A window can override this value by processing the WM_GETMINMAXINFO message." Is there a way to modify this SM_CXMAXTRACK value (either system wide or for one particular window), without processing the WM_GETMINMAXINFO message? Maybe an undocumented function, a registry setting, etc.? (Or: The documentation for MINMAXINFO.ptMaxTrackSize says: "This value is based on the size of the virtual screen and can be obtained programmatically from the system metrics SM_CXMAXTRACK and SM_CYMAXTRACK." Maybe there is a way to change the size of the virtual screen?) Thank you

    Read the article

  • Python Imaging: YCbCr problems

    - by daver
    Hi, I'm doing some image processing in Python using PIL. I need to extract the luminance layer from a series of images, do some processing on that using numpy, then put the edited luminance layer back into the image and save it. The problem is, I can't seem to get any meaningful representation of my image in YCbCr format, or at least I don't understand what PIL is giving me in YCbCr. The PIL documentation claims the YCbCr format gives three channels, but when I grab the data out of the image using np.asarray, I get 4 channels. Ok, so I figure one must be alpha. Here is some code I'm using to test this process:

        import Image as im
        import numpy as np

        pengIm = im.open("Data\\Test\\Penguins.bmp")
        yIm = pengIm.convert("YCbCr")
        testIm = np.asarray(yIm)
        grey = testIm[:, :, 0]
        grey = grey.astype('uint8')
        greyIm = im.fromarray(grey, "L")
        greyIm.save("Data\\Test\\grey.bmp")

    I'm expecting a greyscale version of my image, but what I get is this jumbled-up mess: http://i.imgur.com/zlhIh.png Can anybody explain to me where I'm going wrong? The same code in Matlab works exactly as I expect.
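
    A sketch of a workaround that sidesteps the odd 4-channel array entirely: let PIL split the YCbCr image into its bands, edit only the luminance band as a numpy array, and merge the bands back before saving. Image.split(), Image.merge() and Image.fromarray() are standard PIL calls; the final convert("RGB") is an assumption made so the result can be written to a BMP file:

        import Image as im
        import numpy as np

        pengIm = im.open("Data\\Test\\Penguins.bmp")
        yIm = pengIm.convert("YCbCr")

        y, cb, cr = yIm.split()              # three single-band "L" images
        lum = np.asarray(y).astype('uint8')  # clean 2-D luminance array (copy is writable)

        # ... numpy processing on lum goes here ...

        yEdited = im.fromarray(lum, "L")
        out = im.merge("YCbCr", (yEdited, cb, cr)).convert("RGB")
        out.save("Data\\Test\\grey.bmp")

    Working band by band avoids relying on the memory layout np.asarray sees for the multi-band YCbCr image, which is the likely source of the extra fourth channel and the scrambling.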

    Read the article

  • What is the best way for communication between cluster nodes

    - by Tom
    I have an application written in a combination of ASP/VB6/VBScript and ASP.NET/C# that consists of a website part, a SOAP-like web service part, and a queue-processing part that processes incoming files in a hot folder. We are used to running under load balancers (Microsoft or other makes). Often we need to communicate between the different load-balanced servers. Currently we do this through the SQL Server database that is common to all nodes; however, this comes with a performance penalty, as each message requires a transaction and continual polling from the other nodes. What would be better ways to achieve this? Tom, Appelby

    Read the article

  • HSM - cryptoki - opening sessions overhead

    - by Raj
    I have a query regarding sessions with an HSM. I am aware that there is an overhead if you initialise and finalise the cryptoki API for every file you want to encrypt/decrypt. My queries are:
    - Is there an overhead in opening and closing individual sessions for every file you want to encrypt/decrypt (C_Initialize/C_Finalize)?
    - What is the maximum number of sessions I can have open on an HSM simultaneously without affecting performance?
    - Is opening and closing a session for each individual file the best approach, or is opening one session, processing multiple files, and then closing the session better (as in the sketch below)?
    Thanks
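
    A minimal sketch of the batched pattern the last question describes, written against a hypothetical wrapper (hsm_module, open_session and encrypt_file are placeholders for whatever PKCS#11 binding is in use, not a real API): pay the C_Initialize and session-open cost once, push every file through the same session, and tear down at the end.

        def process_files(hsm_module, pin, paths):
            hsm_module.initialize()                  # C_Initialize: once per process
            session = hsm_module.open_session(pin)   # one session reused for the whole batch
            try:
                for path in paths:
                    session.encrypt_file(path)       # per-file work shares the session
            finally:
                session.close()                      # C_CloseSession
                hsm_module.finalize()                # C_Finalize

    The per-file open/close overhead is what this pattern removes; whether it matters in practice depends on the device, so timing a small batch both ways is the quickest way to answer the first question.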

    Read the article
