Search Results

Search found 7081 results on 284 pages for 'idle processing'.

Page 79/284

  • ASP.NET / WCF - Execute Server.Execute Asynchronously

    - by user208662
    Hello, I need to run the HttpContext.Current.Server.Execute method in my ASP.NET application. This application has a WCF operation that does some processing. Currently, I am able to do my processing correctly from within my WCF operation. However, I would like to do this asynchronously. In an attempt to do this asynchronously, I tried running Server.Execute in the DoWork event handler of a BackgroundWorker. Unfortunately, this throws an error that says "object reference not set to an instance of an object". The HttpContext object itself is not null; I checked that. It is some property nested in the HttpContext object that appears to be null, but I have not been able to identify why this won't work. It happens as soon as I move the processing to the BackgroundWorker thread. My question is, how can I asynchronously execute the Server.Execute method? Thank you,
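
    One commonly suggested workaround is to capture the context on the request thread and reassign it on the worker thread, since HttpContext.Current is stored per thread and is therefore invisible to a BackgroundWorker by default. A minimal sketch, with hypothetical class, method and page names:

        using System.ComponentModel;
        using System.Web;

        public static class ExecuteOffThreadSketch
        {
            // Hypothetical helper: capture HttpContext.Current on the request thread,
            // then reassign it inside DoWork so Server.Execute can find its state.
            public static void RunInBackground(string virtualPath)
            {
                HttpContext captured = HttpContext.Current;   // capture on the request thread

                var worker = new BackgroundWorker();
                worker.DoWork += (sender, e) =>
                {
                    HttpContext.Current = captured;           // without this, Current is null here
                    captured.Server.Execute(virtualPath);     // may still fail once the original request ends
                };
                worker.RunWorkerAsync();
            }
        }

    Even with the context reassigned, Server.Execute is only safe while the original request and response are still alive, which is one reason the synchronous call from the WCF operation works where the background call does not.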

    Read the article

  • C# threading pattern that will let me flush

    - by Jeff Alexander
    I have a class that implements the Begin/End Invocation pattern, where I initially used ThreadPool.QueueUserWorkItem() to thread my work. I now have the side effect where someone using my class is calling the Begin (with callback) a ton of times to do a lot of processing, so ThreadPool.QueueUserWorkItem is creating a ton of threads to do the processing. That in itself isn't bad, but there are instances where they want to abandon the processing and start a new round of processing, yet they are forced to wait for their first request to finish. Since ThreadPool.QueueUserWorkItem() doesn't allow me to cancel the threads, I am trying to come up with a better way to queue up the work, maybe with an explicit FlushQueue() method in my class to allow the caller to abandon work in my queue. Does anyone have a suggestion on a threading pattern that fits my needs?
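
    One possible pattern, sketched below with hypothetical class and member names: a single long-running consumer drains a BlockingCollection, and FlushQueue() bumps a generation counter so that items queued before the flush are skipped rather than executed.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using System.Threading.Tasks;

        public sealed class FlushableWorkQueue : IDisposable
        {
            private readonly BlockingCollection<Action> _pending = new BlockingCollection<Action>();
            private int _generation;   // bumped by FlushQueue(); stale items are skipped

            public FlushableWorkQueue()
            {
                // One long-running consumer instead of one ThreadPool work item per Begin call.
                Task.Factory.StartNew(Consume, TaskCreationOptions.LongRunning);
            }

            public void Enqueue(Action work)
            {
                int gen = Thread.VolatileRead(ref _generation);
                _pending.Add(delegate
                {
                    // Run the item only if no flush has happened since it was queued.
                    if (gen == Thread.VolatileRead(ref _generation))
                        work();
                });
            }

            // Abandon everything that has been queued but not yet started.
            public void FlushQueue()
            {
                Interlocked.Increment(ref _generation);
            }

            private void Consume()
            {
                foreach (Action item in _pending.GetConsumingEnumerable())
                    item();
            }

            public void Dispose()
            {
                _pending.CompleteAdding();   // lets the consumer loop drain and exit
            }
        }

    Note that work already in flight still runs to completion; truly interrupting an item that has started would need cooperative cancellation checks inside the work itself.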

    Read the article

  • Removing the contents of a Chan or MVar in a single discrete step

    - by Bill
    I'm writing a discrete simulation where request values from multiple threads accumulate in a centralized queue. Every n milliseconds, a manager wakes up to process requests. When the manager wakes up, it should retrieve all of the contents of the central queue in a single discrete step. While processing these, any client threads attempting to submit to the queue should block. When processing completes, the queue reopens and the manager goes back to sleep. What's the best way to do this? The retry behavior of STM isn't really what I want. If I use a Chan or MVar, there's no way to prevent clients from enqueuing additional requests during processing. One approach is to use an MVar as a mutex on a Chan holding the queue. Are there other ways to do this?

    Read the article

  • How to outperform this regex replacement?

    - by spender
    After considerable measurement, I have identified a hotspot in one of our Windows services that I'd like to optimize. We are processing strings that may contain multiple consecutive spaces, and we'd like to reduce these to single spaces. We use a static compiled regex for this task:

        private static readonly Regex regex_select_all_multiple_whitespace_chars =
            new Regex(@"\s+", RegexOptions.Compiled);

    and then use it as follows:

        var cleanString = regex_select_all_multiple_whitespace_chars.Replace(dirtyString.Trim(), " ");

    This line is being invoked several million times and is proving to be fairly intensive. I've tried to write something better, but I'm stumped. Given the fairly modest processing requirements of the regex, surely there's something faster. Could unsafe processing with pointers speed things up further?
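
    For comparison, a hand-rolled single-pass version (hypothetical method name) that approximates Trim() plus the \s+ replacement without any regex; worth benchmarking against the compiled regex before switching, and before reaching for unsafe pointer code.

        using System.Text;

        public static class WhitespaceCollapser
        {
            // One pass over the string: skip whitespace runs, emitting a single ' '
            // between words and none at the start or end. Note char.IsWhiteSpace is
            // close to, but not exactly, the regex \s character class.
            public static string Collapse(string input)
            {
                var sb = new StringBuilder(input.Length);
                bool pendingSpace = false;

                foreach (char c in input)
                {
                    if (char.IsWhiteSpace(c))
                    {
                        pendingSpace = true;
                        continue;
                    }
                    if (pendingSpace && sb.Length > 0)
                        sb.Append(' ');
                    pendingSpace = false;
                    sb.Append(c);
                }
                return sb.ToString();
            }
        }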

    Read the article

  • How to implement paging in NHibernate with a left join query

    - by Gabe Moothart
    I have an NHibernate query that looks like this:

        var query = Session.CreateQuery(@"
            select o from Order o
            left join o.Products p
            where (o.CompanyId = :companyId) AND (p.Status = :processing)
            order by o.UpdatedOn desc")
            .SetParameter("companyId", companyId)
            .SetParameter("processing", Status.Processing)
            .SetResultTransformer(Transformers.DistinctRootEntity);
        var data = query.List<Order>();

    I want to implement paging for this query, so I only return x rows instead of the entire result set. I know about SetMaxResults() and SetFirstResult(), but because of the left join and DistinctRootEntity, that could return fewer than x Orders. I tried "select distinct o" as well, but the SQL that is generated for that (using the SQL Server 2008 dialect) seems to ignore the distinct for pages after the first one (I think this is the problem). What is the best way to accomplish this?
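
    One workaround often used for this kind of query is to page over the distinct Order ids first and then load those Orders with the join in a second query. A sketch in the same fragment style as above, assuming an integer Id property on Order and hypothetical pageIndex/pageSize inputs:

        // Step 1: page the ids. UpdatedOn is projected so the ORDER BY is legal with DISTINCT.
        var idRows = Session.CreateQuery(@"
            select distinct o.Id, o.UpdatedOn
            from Order o left join o.Products p
            where o.CompanyId = :companyId and p.Status = :processing
            order by o.UpdatedOn desc")
            .SetParameter("companyId", companyId)
            .SetParameter("processing", Status.Processing)
            .SetFirstResult(pageIndex * pageSize)
            .SetMaxResults(pageSize)
            .List<object[]>();

        var ids = new List<int>();
        foreach (object[] row in idRows)
            ids.Add((int)row[0]);

        // Step 2: load exactly that page of Orders (with their Products) by id.
        var orders = Session.CreateQuery(@"
            select o from Order o left join fetch o.Products p
            where o.Id in (:ids)
            order by o.UpdatedOn desc")
            .SetParameterList("ids", ids)
            .SetResultTransformer(Transformers.DistinctRootEntity)
            .List<Order>();

    Because the paging is applied to a query that returns one row per Order id, SetFirstResult/SetMaxResults count Orders rather than joined rows.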

    Read the article

  • How to give higher priority to events generated from the main thread than to those generated from a secondary thread

    - by martjno
    I have a C++ application written with wxWidgets, which has a main thread (GUI) and a working thread (calculations). The working thread executes commands requested by the main thread and communicates the result to the main thread by posting an event after every step of the processing. The problem is that when the working thread is sending many events consecutively, the GUI requests made by the user (i.e. interrupting the processing by clicking a button) won't be processed by the event handler until the working thread has finished. This actually happens on OS X; on Windows it works perfectly. I've tried wxThread::SetPriority and wxThread::Yield, but nothing changes. It does work if I put wxThread::Sleep in the working thread, but this slows the processing down considerably.

    Read the article

  • .NET: What is the purpose of the ProhibitDtd property in XmlReaderSettings? Why is DTD a security issue?

    - by Cheeso
    The documentation says: When set to true, the XmlReader throws an XmlException when any DTD content is encountered. Do not enable DTD processing if you are concerned about Denial of Service issues or if you are dealing with untrusted sources. If you have DTD processing enabled, you can use the XmlSecureResolver to restrict the resources that the XmlReader can access. You can also design your application so that the XML processing is memory and time constrained. For example, configure time-out limits in your ASP.NET application. Can someone please explain the issue? Why would a reader application want to prohibit the retrieval of a DTD? Where is the denial-of-service issue, if it is a reading application? What is the "trust" issue that is mentioned? Thanks
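
    The denial-of-service concern is entity expansion (the "billion laughs" style of payload), which a purely reading application can trigger simply by parsing untrusted input. A minimal sketch of reading with DTDs prohibited, using the ProhibitDtd property the documentation describes (hypothetical file path):

        using System.Xml;

        public static class DtdExample
        {
            // With ProhibitDtd = true (replaced by DtdProcessing in .NET 4), the reader
            // throws an XmlException as soon as a DOCTYPE appears, so entity-expansion
            // payloads from untrusted sources never get processed.
            public static void ReadWithoutDtd(string path)
            {
                var settings = new XmlReaderSettings { ProhibitDtd = true };
                using (XmlReader reader = XmlReader.Create(path, settings))
                {
                    while (reader.Read())
                    {
                        // normal read-only processing here
                    }
                }
            }
        }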

    Read the article

  • Replication - synchronizing most of the data some of the time

    - by uncle brad
    I have some data that isn't properly "partitioned" (for lack of a better word). All inserts, processing and reporting happen on the same table. The bulk of the processing happens not long after the insert, and not long after that the data becomes immutable (we're talking days). I could do all inserts and processing on a new table that I replicate to the old table. When I detect that the data has become immutable I would delete it from the new table, but I would edit the delete replication stored procedure so that the delete did not replicate. How bad an idea is this? It seems attractive at the moment (I haven't slept on it yet) because it might mitigate a performance problem with only very small changes to the application. It also seems like it might be a good way to shoot myself in the foot.

    Read the article

  • .Remove(object) on a List<T> returned from a LINQ to SQL compiled query won't delete the object, right?

    - by soldieraman
    I am returning two lists from the database using a LINQ to SQL compiled query. While looping over the first list I remove duplicates from the second list, as I don't want to process already existing objects again. e.g.

        // oldCustomers is a List returned by my compiled LINQ to SQL statement, with a .ToList() added at the end
        // Same goes for newCustomers
        foreach (Customer oC in oldCustomers)
        {
            // Do some processing
            newCustomers.Remove(newCustomers.Find(nC => nC.CustomerID == oC.CustomerID));
        }
        foreach (Customer nC in newCustomers)
        {
            // Do some processing
        }
        DataContext.SubmitChanges();

    I expect this to only save the changes that have been made to the customers in my processing, and not remove or delete any of my customers from the database. Correct? I have tried it and it works fine - but I am trying to find out if there is any rare case where they might actually get removed.
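
    A short clarifying sketch (hypothetical names): List<T>.Remove only changes the in-memory list, so nothing is deleted from the database unless the entity is explicitly marked for deletion on the DataContext before SubmitChanges() is called.

        // Removing from the list leaves the Customer row in the database untouched.
        newCustomers.Remove(someCustomer);

        // Only something like the following would actually delete the row on SubmitChanges():
        // dataContext.Customers.DeleteOnSubmit(someCustomer);
        // dataContext.SubmitChanges();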

    Read the article

  • Lightweight messaging (async invocations) in Java

    - by Sergey Mikhanov
    I am looking for a lightweight messaging framework in Java. My task is to process events in a SEDA manner: I know that some stages of the processing can be completed quickly, and others cannot, and I would like to decouple these stages of processing. Let's say I have components A and B, and the processing engine (be this a container or whatever else) invokes component A, which in turn invokes component B. I do not care if the execution time of component B is 2s, but I do care that the execution time of component A stays below 50ms, for example. Therefore, it seems most reasonable for component A to submit a message to B, which B will process at the desired time. I am aware of the different JMS implementations and Apache ActiveMQ: they are too heavyweight for this. I have searched for some lightweight messaging (with really basic features like message serialization and the simplest routing) to no avail. Do you have anything to recommend here?

    Read the article

  • RadGrid Ajax PopAlert

    - by user272671
    I have a situation where I need to, based on user selection and some server-side processing, display a message that allows the user to choose to continue processing or cancel. I have a RadGrid populated with data from the database. When the user adds a new item to the grid, I want to do some processing in the back end and then inform the user of what could result, giving him/her the choice to continue. I believe that a message box or modal popup/radalert is the best way to do it, but how do I create the message in the back end and then, using a popup, display the message and block until the user responds? How do I do it, please?

    Read the article

  • Serving large generated files using Google App Engine?

    - by John Carter
    Hiya, Presently I have a GAE app that does some offline processing (backs up a user's data), and generates a file that's somewhere in the neighbourhood of 10 - 100 MB. I'm not sure of the best way to serve this file to the user. The two options I'm considering are: (1) adding some code to the offline processing code that 'spoofs' it as a form upload to the blob store, and going through the normal blobstore process to serve the file; or (2) having the offline processing code store the file somewhere off of GAE, and serving it from there. Is there a much better approach I'm overlooking? I'm guessing this is functionality that isn't well suited to GAE. I had thought of storing it in the datastore as db.Text or db.Blob, but there I encounter the 1 MB limit. Any input would be appreciated,

    Read the article

  • Struggling to "clear" a CGLayer -- can it even be done?

    - by Joe Blow
    So I'm doing this repetitively - making a CGLayer, doing some processing, and then releasing it. This happens a lot in real time -- so surely there is a lot of overhead in making a whole new CGLayer each time? Surely it would be better to just keep lair around and start fresh each time? However, I do not know any way, at all, to "erase" or "start from blank" a CGLayer. Can anyone help with this? There is a function CGContextBeginPath(cc), but it's confusing: it seems to only clear out "that" path; it does not erase all of the CGLayer back to a blank canvas. How do you return a CGLayer to a blank canvas? Does anyone know?

        CGLayerRef lair = CGLayerCreateWithContext(UIGraphicsGetCurrentContext(), CGSizeMake(1024, 768), NULL);
        CGContextRef cc = CGLayerGetContext(lair);
        // various processing here
        CGContextAddPath(cc, somePath);
        // various processing here
        CGLayerRelease(lair);

    Any ideas?!

    Read the article

  • Are there any memory restrictions on an ASP.NET application? HttpHandler?

    - by tpower
    I have an ASP.NET MVC application that allows users to upload images. When I try to upload a really large file (400MB) I get an error. I assumed that my image processing code (home-brew) was very inefficient, so I decided I would try using a third-party library to handle the image processing parts. Because I'm using TDD, I wanted to first write a test that fails. But when I test the controller action with the same large file, it is able to do all the image processing without any trouble. The error I get is "Out of memory". I'm sure my code is probably using a lot more memory than it needs to, but I just want to know why my test passes. The other difference is that I'm using SWFUpload, which is not used with the test. Could this be the cause?
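
    One plausible source of the difference is that the controller-under-test receives the file without going through the IIS request pipeline and its buffering. A sketch (hypothetical controller and action names) of handling the posted file as a stream in fixed-size chunks, so the full 400 MB is never held in memory at once:

        using System.IO;
        using System.Web;
        using System.Web.Mvc;

        public class UploadController : Controller
        {
            [HttpPost]
            public ActionResult Upload(HttpPostedFileBase image)
            {
                var buffer = new byte[64 * 1024];
                using (Stream input = image.InputStream)
                {
                    int read;
                    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        // hand each chunk to the image-processing code here
                        // instead of materializing the whole file as one byte[]
                    }
                }
                return Content("ok");
            }
        }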

    Read the article

  • VMWare workstation: from command line, how to start a VM in service mode (run in background)?

    - by GenEric35
    Hi, I have tried the vmrun and vmware.exe executables, but both of them start the VMware GUI when starting the VM. What I want to do is start the VM without starting the VMware GUI. The reason I am doing this is that after a few hours of idle, the guest OS becomes sluggish. It has lots of RAM, but the only way I have found to keep its responsiveness optimal is to shut down (which dumps the memory) and then start; a restart of the guest OS doesn't dump the memory, so I need to be able to do a stop of the VM and then a start. So far the commands I use are:

        C:\Program Files (x86)\VMware\VMware Workstation>vmrun stop F:\VirtualMachines\R2\R2.vmx
        C:\Program Files (x86)\VMware\VMware Workstation>vmrun start F:\VirtualMachines\R2\R2.vmx

    But the start command actually starts the VMware Workstation GUI, which I don't need. I'm looking for a solution to start the VM without the VMware Workstation GUI, or a solution to what is causing the VM to become sluggish after a few hours of running idle.

    Read the article

  • core temperature vs CPU temperature

    - by Karl Nicoll
    I have recently installed a new heat sink & fan combination on my Core 2 Quad, since my CPU was hitting about 70C under load. This has managed to reduce temperatures while running Prime95 to about 54C, which I'm taking as a win (this is ~30 minutes after fitting). I'm a little confused, though. The temperature readings given above are CORE temperatures, but HWMonitor is showing a 5th "CPU" temperature (the other 4 temperatures being the individual core temps), which reads 21C at idle, while idle temperatures for the cores vary between 37C and 42C. I guess there are two questions here: Are my CPU/core temperatures decent, and is it safe to overclock when these are the temperatures at stock clocks? I gather that the maximum safe operating temperature for a C2Q is ~70C, so which temperature should I measure against, the core temperatures (which are higher) or the CPU temperature reading?

    Read the article

  • systemd initiated uwsgi process shuts down after a while

    - by Calvin Cheng
    So I wrote this simple systemd service script:-

        [Unit]
        Description=uwsgi server script

        [Service]
        User=web
        Group=web
        WorkingDirectory=/var/www/prod/myproject/releases/current
        ExecStart=/bin/bash -c 'source ~/.bash_profile; workon myproject; uwsgi --ini /var/www/prod/myproject/releases/current/myproject/uwsgi_prod.ini'

        [Install]
        WantedBy=multi-user.target

    which works fine - it starts up and I can see my uwsgi processes in htop. However, it inexplicably shuts down after being idle for 5 minutes. If I start this process manually in a bash console by executing, as the web user:

        source ~/.bash_profile
        workon myproject
        uwsgi --ini /var/www/prod/myproject/releases/current/myproject/uwsgi_prod.ini

    my process does not die after being idle. What could the problem be?

    Read the article

  • How to increase hard drive transfer speed

    - by atif089
    This is my motherboard: GA-M61PME-S2 (SATA up to 3.0 Gbps). And this is my hard disk, a Samsung hd502hi:

        Capacity: 500 GB
        Cache: 16 MB
        Disks / Heads: 1 / 2
        Interface: SATA 3Gb/s
        Spindle Speed: 5400 RPM
        Sustained Data Rate OD: 100 MB/s
        Average Seek: 8.9 ms
        Average Latency: 5.56 ms
        Data Transfer Rate: 300 MB/sec
        Weight: 470 grams
        Power (Idle / Seek / R-W / Spin-up): 3.9W / 4.8W / 5.1W / ~24W
        Acoustics (sound power): 2.2 / 2.7 Bel (idle / quiet seek / performance seek)

    When I copy things from one partition to another, they transfer at a maximum of 30 MBps. However, the drive supports up to 300 MBps, right? How do I increase the transfer speeds? P.S. - Using Windows XP; all partitions are NTFS.

    Read the article

  • Utility to unmap a network drive when the screen saver starts

    - by JimR
    I'm looking for a way to unmap network drives when the screen saver turns on. I have a few users that share an external, encrypted drive (Samba share, not windows) and they have a requirement to disconnect the drive mapping when the local machine is idle. I'd also like it to warn them if there are open files on the mapped drive, if possible. There is also a requirement to force the password to be reentered before mapping when the machine comes back from idle. Is there a Windows setting or utility out there in the wild that meets these requirements?

    Read the article

  • Remote Desktop settings not being applied for user

    - by Anthony K
    We have a number of Win 2003 servers for which we have Remote Desktop enabled. Each user has their profile edited so that they can only connect for 2 hours maximum and have 30 minutes idle time, after which they are disconnected and the session is closed. On one server, however, the maximum session limit is not being applied to the administrator account. We can stay connected for days if we want. Originally this was how it was set up, and we later changed the profile for all users so that there are limits. We have rebooted the server a couple of times since, and the Management Console shows the limits. If we are idle for too long we are disconnected. Other users are having all the limits observed. Any suggestions?

    Read the article

  • Manually accessing GMail via IMAP

    - by Jeff Mc
    I'm trying to connect to Gmail IMAP, but I am unable to execute any commands after login. I'm running

        openssl s_client -connect imap.gmail.com:993

    to connect, then:

        * OK Gimap ready for requests from 128.146.221.118 42if6514983iwn.40
        . CAPABILITY
        * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA XLIST CHILDREN XYZZY SASL-IR AUTH=XOAUTH
        . OK Thats all she wrote! 42if6514983iwn.40
        . LOGIN {email removed} {password removed}
        * CAPABILITY IMAP4rev1 UNSELECT LITERAL+ IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE
        . OK {email removed} authenticated (Success)
        . CAPABILITY

    at which point it simply hangs with the connection open. I'm guessing Gmail pushes you off to a node in a cluster after it authenticates me?

    Read the article

  • Semantic is consuming all CPU, causing emacs to hang

    - by Cheeso
    I upgraded to Emacs 23.2.1 on Windows 7 not long ago. Since then I've been unable to use Semantic. As soon as I start it, the CPU goes to max (actually, Windows reports it at 50%, but this is a dual-core machine, so Emacs is effectively consuming 100% of a core). Emacs becomes non-responsive. Is there a particular combination of versions of Semantic and Emacs that is unsafe to use together? How would I debug this spin/hang? I've seen other suggestions to change semantic-idle-scheduler-idle-time from its default of 2 to something very large. I tried that, but got the same results.

    Read the article
