Search Results

Search found 1638 results on 66 pages for 'multithreading'.

Page 55/66 | < Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >

  • does anyone see any issues with this thread pattern?

    - by prmatta
    Here is a simple thread pattern that I use when writing a class that needs just one thread to do a specific task. The usual requirements for such a class are that it should be startable, stoppable and restartable. Does anyone see any issues with this pattern? public class MyThread implements Runnable { private boolean _exit = false; private Thread _thread = null; public void start () { if (_thread == null) { _thread = new Thread(this, "MyThread"); _thread.start(); } } public void run () { while (_exit) { //do something } } public void stop () { _exit = true; if (_thread != null) { _thread.interrupt(); _thread = null; } } } I am looking for comments on whether I am missing something, or whether there is a better way to write this.

    Read the article

  • Windows service thread not updating database until complete

    - by dfarney
    I have a Windows service with a FileSystemWatcher. When a new file is dropped in my folder a new thread is created, started, and I begin processing the file. Throughout this process I am making updates to the database (Linq to SQL) to keep track of the file's processing progress. The problem is that none of my database updates are reflected during processing; there is just a single update after everything has completed. Any ideas? Note: during dev/testing my code was in an aspx page and worked great, but when I moved it to a Windows service I no longer got the progress updates. Thanks!
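
    One possible cause (the excerpt doesn't show the data-access code, so this is hedged): a single long-lived DataContext, or an ambient TransactionScope around the whole run, will make the progress rows visible to other readers only after the final SubmitChanges/commit. A minimal sketch of per-update flushing, assuming a hypothetical FileJobsDataContext and table generated by Linq to SQL:

      // FileJobsDataContext, FileJobs and PercentComplete are illustrative names only.
      void ReportProgress(int fileJobId, int percentComplete)
      {
          // A fresh, short-lived context per update avoids stale cached entities
          // and pushes each change to the database as it happens.
          using (var db = new FileJobsDataContext())
          {
              var job = db.FileJobs.Single(j => j.Id == fileJobId);
              job.PercentComplete = percentComplete;
              db.SubmitChanges();   // flush this update now, not once at the end of the run
          }
      }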

    Read the article

  • C++ volatile required when spinning on boost::shared_ptr operator bool()?

    - by JaredC
    I have two threads referencing the same boost::shared_ptr: boost::shared_ptr<Widget> shared; One thread is spinning, waiting for the other thread to reset the boost::shared_ptr: while(shared) boost::thread::yield(); And at some point the other thread will call: shared.reset(); My question is: do I need to declare the shared pointer as volatile to prevent the compiler from optimizing the call to shared.operator bool() out of the loop and never detecting the change? I know that if I were simply looping on a variable, waiting for it to reach 0, I would need volatile, but I'm not sure if boost::shared_ptr is implemented in such a way that it is not necessary here.

    Read the article

  • When should ThreadLocal be used instead of Thread.SetData/Thread.GetData?

    - by Jon Ediger
    Prior to .NET 4.0, I implemented a solution using named data slots in System.Threading.Thread. Now, in .NET 4.0, there is ThreadLocal. How does ThreadLocal usage compare to named data slots? Does the ThreadLocal value get inherited by child threads? Is the idea that ThreadLocal is a simplified version of using named data slots? An example using named data slots follows. Could this be simplified through use of ThreadLocal, and would it retain the same properties as the named data slots? public static void SetSliceName(string slice) { System.Threading.Thread.SetData(System.Threading.Thread.GetNamedDataSlot(SliceVariable), slice); } public static string GetSliceName(bool errorIfNotFound) { var slice = System.Threading.Thread.GetData(System.Threading.Thread.GetNamedDataSlot(SliceVariable)) as string; if (errorIfNotFound && string.IsNullOrEmpty(slice)) {throw new ConfigurationErrorsException("Server slice name not configured.");} return slice; }
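
    For comparison, a sketch of the same two methods rewritten with ThreadLocal<T>. Like named data slots, a ThreadLocal<T> value is private to each thread and is not inherited by child threads; something like the logical call context would be needed to flow a value to children.

      private static readonly ThreadLocal<string> SliceName = new ThreadLocal<string>();

      public static void SetSliceName(string slice)
      {
          SliceName.Value = slice;                  // visible only on the current thread
      }

      public static string GetSliceName(bool errorIfNotFound)
      {
          var slice = SliceName.Value;              // null on a thread that never set it
          if (errorIfNotFound && string.IsNullOrEmpty(slice))
          {
              throw new ConfigurationErrorsException("Server slice name not configured.");
          }
          return slice;
      }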

    Read the article

  • Limiting object allocation over multiple threads

    - by John
    I have an application which retrieves and caches the results of a client's query. The client then requests different chunks of data and the application sends the relevant results and removes them from the cache. A new requirement for this application is that there needs to be a run-time configurable maximum number of results which may be cached. I've taken the naive approach and implemented this by using a counter under a lock which is incremented every time a result is cached and decremented whenever a result is removed from the cache. Unfortunately, this has drastically reduced the application's performance when processing a large number of concurrent requests. I have tried both a critical section lock and spin-lock; the performance improves a bit with a spin-lock, but is still unacceptably slow. Is there a better way to solve this problem which may improve performance? Right now I have a thread pool that services requests and each request is tied to a Request object which stores the cached results for that particular request. Here is a simplified pseudo code version of my current implementation: void ResultCallback( Result result, Request *request ) { lock totalResultsCached lock cachedLimit if( totalResultsCached + 1 > cachedLimit ) { unlock cachedLimit unlock totalResultsCached //cancel the request return; } ++totalResultsCached; unlock cachedLimit unlock totalResultsCached request.add(result) } void SendResults( int resultsToSend, Request *request ) { while ( resultsToSend > 0 ) { send(request.remove()) lock totalResultsCached --totalResultsCached unlock totalResultsCached --resultsToSend; } }
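
    The pseudo code above is language-neutral, so purely as an illustration here is a C# sketch of one lock-free alternative: an interlocked counter with a compare-exchange reservation loop, so the hot path never takes a lock at all (a semaphore sized to the limit is the other common choice).

      static int totalResultsCached;      // current number of cached results
      static int cachedLimit = 1000;      // updated at run time via Interlocked.Exchange

      // Returns true if a cache slot was reserved, false if the limit is reached.
      static bool TryReserveSlot()
      {
          while (true)
          {
              int current = Thread.VolatileRead(ref totalResultsCached);
              if (current >= Thread.VolatileRead(ref cachedLimit))
                  return false;                               // over the limit: cancel the request
              // Claim the slot only if no other thread changed the counter meanwhile.
              if (Interlocked.CompareExchange(ref totalResultsCached, current + 1, current) == current)
                  return true;
          }
      }

      static void ReleaseSlot()
      {
          Interlocked.Decrement(ref totalResultsCached);      // called when a result is sent
      }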

    Read the article

  • UITableViewController executes delegate functions before network request finishes

    - by user1543132
    I'm having trouble trying to populate a UITableView with the results of a network request. My code seems to be alright, as it works perfectly when my network is speedy; however, when it's not, the method - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath still executes, which results in a bad access error. I presume that this is because the array that the aforesaid method attempts to utilize has not been populated. This brings me to my question: is there any way that I can have the UITableView delegate methods delayed to avoid this? - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"AlbumsCell"; //UITableViewCell *basicCell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier forIndexPath:indexPath]; AlbumsCell *cell = (AlbumsCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (!cell) { **// Here is where the Thread 1: EXC_BAD_ACCESS (code=2 address=0x8)** cell = [[[AlbumsCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } Album *album = [_albums objectAtIndex:[indexPath row]]; [cell setAlbum:album]; return cell; }

    Read the article

  • How can I solve "Cross-thread operation not valid"?

    - by Phsika
    I am trying to start a second thread, but it fails with this error: Cross-thread operation not valid: Control 'listBox1' accessed from a thread other than the thread it was created on. My code: public DataTable dTable; public DataTable dtRowsCount; Thread t1; ThreadStart ts1; void ExcelToSql() { // SelectDataFromExcel(); ts1 = new ThreadStart(SelectDataFromExcel); t1 = new Thread(ts1); t1.Start(); } void SelectDataFromExcel() { string connectionString = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Source\Addresses.xlsx;Extended Properties=""Excel 12.0;HDR=YES;"""; OleDbConnection excelConnection = new OleDbConnection(connectionString); string[] Sheets = new string[] { "Sayfa1"}; excelConnection.Open(); // This code will open excel file. OleDbCommand dbCommand; OleDbDataAdapter dataAdapter; // progressBar1.Minimum = 1; foreach (var sheet in Sheets) { dbCommand = new OleDbCommand("select * From[" + sheet + "$]", excelConnection); //progressBar1.Maximum = CountRowsExcel(sheet).Rows.Count; // progressBar2.Value = i + 1; System.Threading.Thread.Sleep(1000); **listBox1.Items.Add("Tablo ismi: "+sheet.ToUpper()+"Satir Adeti: "+CountRowsExcel(sheet).Rows.Count.ToString()+" ");** dataAdapter = new OleDbDataAdapter(dbCommand); dTable = new DataTable(); dataAdapter.Fill(dTable); dTable.TableName = sheet.ToUpper(); dTable.Dispose(); dataAdapter.Dispose(); dbCommand.Dispose(); ArrangedDataList(dTable); FillSqlTable(dTable, dTable.TableName); } excelConnection.Close(); excelConnection.Dispose(); }
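
    The exception means listBox1 is being updated from the worker thread; a WinForms control may only be touched from the thread that created it. A minimal sketch of the usual fix: add a small helper that marshals the update back with Control.Invoke, and call it from SelectDataFromExcel instead of touching listBox1 (the commented-out progress bar calls would need the same treatment).

      void AddListBoxLine(string text)
      {
          if (listBox1.InvokeRequired)
          {
              // We're on the worker thread: ask the UI thread to run this method.
              listBox1.Invoke(new Action<string>(AddListBoxLine), text);
          }
          else
          {
              listBox1.Items.Add(text);   // now running on the UI thread
          }
      }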

    Read the article

  • Sleeping a thread blocking stdin

    - by Sid
    Hey, I'm running a function which evaluates commands passed in via stdin and another function which runs a bunch of jobs. I need to make the latter function sleep at regular intervals, but that seems to block stdin. Any advice on how to resolve this would be appreciated. The source code for the functions is def runJobs(comps, jobQueue, numRunning, limit, lock): while len(jobQueue) >= 0: print(len(jobQueue)); if len(jobQueue) > 0: comp, tasks = find_computer(comps, 0); #do something time.sleep(5); def manageStdin(): print "Global Stdin Begins Now" for line in fileinput.input(): try: print(eval(line)); except Exception, e: print e; --Thanks

    Read the article

  • Threading and iterating through changing collections

    - by adamjellyit
    In C# (console app) I want to hold a collection of objects. All objects are of same type. I want to iterate through the collection calling a method on each object. And then repeat the process continuously. However during iteration objects can be added or removed from the list. (The objects themselves will not be destroyed .. just removed from the list). Not sure what would happen with a foreach loop .. or other similar method. This has to have been done 1000 times before .. can you recommend a solid approach?
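
    One solid pattern is to guard the list with a lock and iterate over a snapshot on each pass, so adds and removes never invalidate the enumerator. A minimal sketch (Item and DoWork stand in for the actual type and method):

      private readonly object _sync = new object();
      private readonly List<Item> _items = new List<Item>();

      public void Add(Item item)    { lock (_sync) _items.Add(item); }
      public void Remove(Item item) { lock (_sync) _items.Remove(item); }

      public void RunForever()
      {
          while (true)
          {
              Item[] snapshot;
              lock (_sync) snapshot = _items.ToArray();   // cheap copy of references
              foreach (var item in snapshot)
                  item.DoWork();                          // safe even if _items changes meanwhile
          }
      }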

    Read the article

  • Turn based synchronization between threads

    - by Amarus
    I'm trying to find a way to synchronize multiple threads under the following conditions:
    * There are two types of threads:
      1. A single "cyclic" thread executing an infinite loop to do cyclic calculations
      2. Multiple short-lived threads not started by the main thread
    * The cyclic thread has a sleep duration between each cycle/loop iteration
    * The other threads are allowed to execute during the inter-cycle sleep of the cyclic thread:
      - Any other thread that attempts to execute during an active cycle should be blocked
      - The cyclic thread will wait until all other threads that are already executing have finished
    Here's a basic example of what I was thinking of doing: // Somewhere in the code: ManualResetEvent manualResetEvent = new ManualResetEvent(true); // Allow Externally call CountdownEvent countdownEvent = new CountdownEvent(1); // Can't AddCount a CountdownEvent with CurrentCount = 0 void ExternallyCalled() { manualResetEvent.WaitOne(); // Wait until CyclicCalculations is having its beauty sleep countdownEvent.AddCount(); // Notify CyclicCalculations that it should wait for this method call to finish before starting the next cycle Thread.Sleep(1000); // TODO: Replace with actual method logic countdownEvent.Signal(); // Notify CyclicCalculations that this call is finished } void CyclicCalculations() { while (!stopCyclicCalculations) { manualResetEvent.Reset(); // Block all incoming calls to ExternallyCalled from this point forward countdownEvent.Signal(); // Dirty workaround for the issue with AddCount and CurrentCount = 0 countdownEvent.Wait(); // Wait until all of the already executing calls to ExternallyCalled are finished countdownEvent.Reset(); // Reset the CountdownEvent for next cycle. Thread.Sleep(2000); // TODO: Replace with actual method logic manualResetEvent.Set(); // Unblock all threads executing ExternallyCalled Thread.Sleep(1000); // Inter-cycles delay } } Obviously, this doesn't work. There's no guarantee that there won't be any threads executing ExternallyCalled that are in between manualResetEvent.WaitOne(); and countdownEvent.AddCount(); at the time the main thread gets released by the CountdownEvent. I can't figure out a simple way of doing what I'm after, and almost everything that I've found after a lengthy search is related to producer/consumer synchronization which I can't apply here.
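
    A sketch of the same turn-taking with ReaderWriterLockSlim, which closes the gap between manualResetEvent.WaitOne() and countdownEvent.AddCount(): the cyclic thread's active phase takes the write lock and each short-lived call takes a read lock, so calls arriving during an active cycle block until it ends, and a new cycle waits for calls already inside to drain (stopCyclicCalculations is the flag from the question).

      static readonly ReaderWriterLockSlim gate = new ReaderWriterLockSlim();

      void ExternallyCalled()
      {
          gate.EnterReadLock();            // blocks while a calculation cycle is active
          try
          {
              Thread.Sleep(1000);          // TODO: actual method logic
          }
          finally
          {
              gate.ExitReadLock();
          }
      }

      void CyclicCalculations()
      {
          while (!stopCyclicCalculations)
          {
              gate.EnterWriteLock();       // waits for in-flight calls, holds off new ones
              try
              {
                  Thread.Sleep(2000);      // TODO: actual cycle logic
              }
              finally
              {
                  gate.ExitWriteLock();
              }
              Thread.Sleep(1000);          // inter-cycle delay; external calls run freely here
          }
      }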

    Read the article

  • Thread synchronization with ThreadPoolExecutor

    - by justme1
    I'm trying to implement some logic where I create a main (parent) thread which executes several other threads. It then waits for some condition which the child threads create. After the condition is met, the parent executes some more child threads. The problem is that when I use wait/notify I get a java.lang.IllegalMonitorStateException. Here is the code: public class MyExecutor { final static ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(10); final static ExecutorService svc = Executors.newFixedThreadPool(1); static final ThreadPoolExecutor threadPool = new ThreadPoolExecutor(5, 8, 10, TimeUnit.SECONDS, queue); public static void main(String[] args) throws InterruptedException { final MyExecutor me = new MyExecutor(); svc.execute(new Runnable() { public void run() { try { System.out.println("Main Thread"); me.execute(threadPool, 1); System.out.println("Main Thread waiting"); wait(); System.out.println("Main Thread notified"); me.execute(threadPool, 2); Thread.sleep(100); threadPool.shutdown(); threadPool.awaitTermination(20000, TimeUnit.SECONDS); } catch (InterruptedException e) { e.printStackTrace(); } } }); svc.shutdown(); svc.awaitTermination(10000, TimeUnit.SECONDS); System.out.println("Main Thread finished"); } public void execute(ThreadPoolExecutor tpe, final int id) { tpe.execute(new Runnable() { public void run() { try { System.out.println("Child Thread " + id); Thread.sleep(2000); System.out.println("Child Thread " + id + " finished"); notify(); } catch (InterruptedException e) { e.printStackTrace(); } } }); } } When I comment out the wait and notify lines I get the following output: Main Thread Main Thread waiting Main Thread notified Child Thread 1 Child Thread 2 Child Thread 1 finished Child Thread 2 finished Main Thread finished

    Read the article

  • Manually Increasing the Amount of CPU a Java Application Uses

    - by SkylineAddict
    I've just made a program with Eclipse that takes a really long time to execute. It's taking even longer because it's loading my CPU to 25% only (I'm assuming that is because I'm using a quad-core and the program is only using one core). Is there any way to make the program use all 4 cores to max it out? Java is supposed to be natively multi-threaded, so I don't understand why it would only use 25%.

    Read the article

  • Thread too slow. Better way to execute code (Android AndEngine)?

    - by rphello101
    I'm developing a game where the user creates sprites with every touch. I then have a thread run to check to see if those sprites collide with any others. The problem is, if I tap too quickly, I cause a null pointer exception error. I believe it's because I'm tapping faster than my thread is running. This is the thread I have: public class grow implements Runnable{ public grow(Sprite sprite){ } @Override public void run() { float radf, rads; //fill radius/stationary radius float fx=0, fy=0, sx, sy; while(down){ if(spriteC[spriteNum].active){ spriteC[spriteNum].sprite.setScale(spriteC[spriteNum].scale += 0.001); if(spriteC[spriteNum].sprite.collidesWith(ground)||spriteC[spriteNum].sprite.collidesWith(roof)|| spriteC[spriteNum].sprite.collidesWith(left)||spriteC[spriteNum].sprite.collidesWith(right)){ down = false; spriteC[spriteNum].active=false; yourScene.unregisterTouchArea(spriteC[spriteNum].sprite); } fx = spriteC[spriteNum].sprite.getX(); fy = spriteC[spriteNum].sprite.getY(); radf=spriteC[spriteNum].sprite.getHeightScaled()/2; Log.e("F"+Float.toString(fx),Float.toString(fy)); if(spriteNum>0) for(int x=0;x<spriteNum;x++){ rads=spriteC[x].sprite.getHeightScaled()/2; sx = spriteC[x].body.getWorldCenter().x * 32; sy = spriteC[x].body.getWorldCenter().y * 32; Log.e("S"+Float.toString(sx),Float.toString(sy)); Log.e(Float.toString((float) Math.sqrt(Math.pow((fx-sx),2)+Math.pow((fy-sy),2))),Float.toString((radf+rads))); if(Math.sqrt(Math.pow((fx-sx),2)+Math.pow((fy-sy),2))<(radf+rads)){ down = false; spriteC[spriteNum].active=false; yourScene.unregisterTouchArea(spriteC[spriteNum].sprite); Log.e("Collided",Boolean.toString(down)); } } } } spriteC[spriteNum].body = PhysicsFactory.createCircleBody(mPhysicsWorld, spriteC[spriteNum].sprite, BodyType.DynamicBody, FIXTURE_DEF); mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(spriteC[spriteNum].sprite, spriteC[spriteNum].body, true, true)); } } Better solution anyone? I know there is something to do with a handler, but I don't exactly know what that is or how to use one.

    Read the article

  • BlackBerry screen navigation problem

    - by dalandroid
    I have a Screen named DownloaderScreen; when the screen starts it begins downloading some files, and when the download is complete it automatically moves forward to the next screen. I am using the following code. public DownloaderScreen() { super(NO_VERTICAL_SCROLL | NO_HORIZONTAL_SCROLL | USE_ALL_HEIGHT | USE_ALL_WIDTH); this.application = UiApplication.getUiApplication(); HorizontalFieldManager outerBlock = new HorizontalFieldManager(USE_ALL_HEIGHT); VerticalFieldManager innerBlock = new VerticalFieldManager(USE_ALL_WIDTH | FIELD_VCENTER); innerBlock.setPadding(0, 10, 0, 10); outerBlock.setBackground(BackgroundFactory .createBitmapBackground(LangValue.dlBgimg)); outerBlock.add(innerBlock); add(outerBlock); phraseHelper = new PhraseHelper(); final String[][] phraseList = phraseHelper.getDownloadList(); gaugeField = new GaugeField("Downloading ", 0, phraseList.length, 0, GaugeField.PERCENT); innerBlock.add(gaugeField); Thread dlTread = new Thread() { public void run() { startDownload(phraseList); } }; dlTread.start(); } private void startDownload(String[][] phraseList){ if(phraseList.length!=0){ for(int i=0; i < phraseList.length ; i++){// gaugeField.setValue(i); // code for download } } goToNext(); } private void goToNext() { final Screen currentScreen = application.getActiveScreen(); if (UiApplication.isEventDispatchThread()) { application.popScreen(currentScreen); application.pushScreen(new HomeScreen()); } else { application.invokeLater(new Runnable() { public void run() { application.popScreen(currentScreen); application.pushScreen(new HomeScreen()); } }); } } The code works fine: it starts downloading the files and, when the download is completed, moves forward to the next screen. But when there are no files to download (the phraseList array length is zero), it does not move forward. What is the problem in my code?

    Read the article

  • Tomcat thread waiting on and locking the same resource

    - by Adam Matan
    Consider the following Java\Tomcat thread dump: "http-0.0.0.0-4080-4" daemon prio=10 tid=0x0000000019a2b000 nid=0x360e in Object.wait() [0x0000000040b71000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002ab5565fe358> (a org.apache.tomcat.util.net.JIoEndpoint$Worker) at java.lang.Object.wait(Object.java:485) at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:458) - locked <0x00002ab5565fe358> (a org.apache.tomcat.util.net.JIoEndpoint$Worker) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:484) at java.lang.Thread.run(Thread.java:662) Is this a deadlock? It seems that the same resource (0x00002ab5565fe358) is both locked and waited on - what does it mean?

    Read the article

  • Best approach to synchronising properties across threads

    - by user290796
    Hi, I'm looking for some advice on the best approach to synchronising access to properties of an object in C++. The application has an internal cache of objects which have 10 properties. These objects are to be requested in sets which can then have their properties modified and be re-saved. They can be accessed by 2-4 threads at any given time but access is not intense, so my options are:
    1. Lock the property accessors for each object using a critical section. This means lots of critical sections - one for each object.
    2. Return copies of the objects when requested and have an update function which locks a single critical section to update the object properties when appropriate.
    I think option 2 seems the most efficient but I just want to see if I'm missing a hidden 3rd option which would be more appropriate. Thanks, J

    Read the article

  • Threading and cores

    - by Matt
    Suppose I have X cores on my machine and I start X threads. Let's assume for the sake of argument that each thread is completely separate in terms of the memory, HDD, etc. it uses. Is the OS going to know to send each thread to its own core, or will it do more time slicing on one core for multiple threads? What the question boils down to is: if I have X cores and my program must do independent calculations, should I start X threads? Will they each get assigned to a core, or is the presumption that because I have X cores I can start X threads completely wrong? I'm thinking it is. This is with C#.
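
    Yes, in general: the OS scheduler places runnable threads on idle cores, so for CPU-bound, independent work, starting Environment.ProcessorCount threads (and not many more) is the usual approach in C#. A minimal sketch, with DoIndependentCalculation standing in for the real work; Parallel.For or the thread pool would do this partitioning for you.

      int cores = Environment.ProcessorCount;          // e.g. 4 on a quad-core machine
      var workers = new List<Thread>();
      for (int i = 0; i < cores; i++)
      {
          int part = i;                                // capture the partition index for this worker
          var t = new Thread(() => DoIndependentCalculation(part));
          t.Start();
          workers.Add(t);
      }
      workers.ForEach(t => t.Join());                  // wait for every partition to finish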

    Read the article

  • Android thread handler NullPointerException

    - by Realn0whereman
    So this NullPointerException is confusing me. I believe it is a scope issue. My main activity looks like this: public class App extends Activity { ProgressDialog progressDialog; ProgressThread progressThread; Then inside onCreate() I do this: ProgressDialog progressDialog = new ProgressDialog(this); progressDialog.setProgressStyle(ProgressDialog.STYLE_SPINNER); progressDialog.setMessage("Fetching Images..."); ProgressThread progressThread = new ProgressThread(handler,mImageIds,mImages); progressThread.start(); progressDialog.show(); Then, inside ProgressThread, which is a separate class, I do mHandler.sendMessage(mHandler.obtainMessage()); Now, up until this point I believe it behaves as it should. I have my handler at class scope right underneath my onCreate(): final Handler handler = new Handler() { public void handleMessage(Message msg){ progressDialog.hide(); progressThread.interrupt(); } }; The program thinks that progressDialog and progressThread are declared, but they are null. Why would they be null if I instantiate them in my onCreate()?

    Read the article

  • lock statement not working when there is a loop inside it?

    - by Ngu Soon Hui
    See this code: public class multiply { public Thread myThread; public int Counter { get; private set; } public string name { get; private set; } public void RunConsolePrint() { lock(this) { RunLockCode("lock"); } } private void RunLockCode(string lockCode) { Console.WriteLine("Now thread "+lockCode+" " + name + " has started"); for (int i = 1; i <= Counter; i++) { Console.WriteLine(lockCode+" "+name + ": count has reached " + i + ": total count is " + Counter); } Console.WriteLine("Thread " + lockCode + " " + name + " has finished"); } public multiply(string pname, int pCounter) { name = pname; Counter = pCounter; myThread = new Thread(new ThreadStart(RunConsolePrint)); } } And this is the test run code: static void Main(string[] args) { int counter = 50; multiply m2 = new multiply("Second", counter); multiply m1 = new multiply("First", counter); m1.myThread.Start(); m2.myThread.Start(); Console.ReadLine(); } I would expect m2 to execute from start to finish before m1 starts executing, or vice versa, because of the lock statement. But the result I found was that the calls to lock First and lock Second were intermingled, i.e., something like this: Now thread lock First has started Now thread lock Second has started lock First: Count has reached 1: total count is 50 lock First: Count has reached 2: total count is 50 lock Second: Count has reached 1: total count is 50 What did I do wrong?
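
    The likely explanation: lock(this) acquires the monitor of that particular multiply instance, and m1 and m2 are two different instances, so the two threads never contend for the same monitor and the loops interleave. Locking one object shared by all instances serializes them; a minimal sketch:

      // One lock object shared by every multiply instance.
      private static readonly object _gate = new object();

      public void RunConsolePrint()
      {
          lock (_gate)              // now both threads compete for the same monitor
          {
              RunLockCode("lock");
          }
      }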

    Read the article

  • .NET Threading: How to wait for another thread to finish some task

    - by Alex Ilyin
    Assume I have a method void SomeMethod(Action callback). This method does some work on a background thread and then invokes callback. The question is: how do I block the current thread until the callback is called? Here is an example: bool finished = false; SomeMethod(delegate{ finished = true; }); while(!finished) Thread.Sleep(); But I'm sure there should be a better way.
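
    A common improvement is to block on an event instead of polling in a sleep loop; a minimal sketch with ManualResetEventSlim (.NET 4; plain ManualResetEvent works the same way on earlier versions):

      using (var done = new ManualResetEventSlim(false))
      {
          SomeMethod(() => done.Set());   // the callback signals completion
          done.Wait();                    // blocks efficiently until Set() is called
      }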

    Read the article

  • Java Socket - how to catch an exception from BufferedReader.readLine()

    - by Hasan Tahsin
    I have a thread (let's say T1) which reads data from a socket: public void run() { while (running) { try { BufferedReader reader = new BufferedReader( new InputStreamReader(socket.getInputStream()) ); String input = reader.readLine(); } catch (IOException e) { e.printStackTrace(); } } } Another thread (let's say T2) tries to finish the program in one of its methods. Therefore T2 does the following: T1.running = false; socket.close(); Here is the scenario for which I couldn't find a solution: T1 is active and waiting for some input to read, i.e. blocking. Context switch. T2 is active, sets running to false, and closes the socket. Context switch. Because T1 was blocking and T2 closed the socket, T1 throws an exception. What I want is to catch this SocketException. I can't put a try/catch(SocketException) in T1.run(). So how can I catch it in T1's run() method? If it's not possible to catch it there, how can I catch it elsewhere? PS: Another question, about thread debugging. Normally when I debug the code step by step, I lose the 'active running line' on a context switch. Let's say I'm at line 20 of T1, a context switch happens, and the program continues from line 30 of T2, but the debugger does not go to or show line 30 of T2; instead the 'active running line' vanishes. So I lose control over the code. I use Eclipse for Java and Visual Studio for C#. What is the best way to track the code while debugging across a context switch?

    Read the article

  • ExecutorService to read data from a database in chunks and run a process on them

    - by TazMan
    I'm trying to write a process that reads data from a database and uploads it to a cloud datastore. How can I decide on a partition strategy for the data? I want to query the table in chunks and process each chunk in 10 threads. Each thread will basically send its data to an individual node on a 10-node cluster in the cloud. Where in the multithreading code below should the data query go, so that it extracts and sends 10 concurrent requests for uploading data to the cloud? public class Caller { public static void main(String[] args) { ExecutorService executor = Executors.newFixedThreadPool(10); for (int i = 0; i < 10; i++) { Runnable worker = new DomainCDCProcessor(i); executor.execute(worker); } executor.shutdown(); while (!executor.isTerminated()) { } System.out.println("Finished all threads"); } }

    Read the article

  • How to check if a thread is busy in C#?

    - by Sam
    I have a Windows Forms UI running on a thread, Thread1. I have another thread, Thread2, that gets tons of data via external events that needs to update the Windows UI. (It actually updates multiple UI threads.) I have a third thread, Thread3, that I use as a buffer thread between Thread1 and Thread2 so that Thread2 can continue to update other threads (via the same method). My buffer thread, Thread3, looks like this: public class ThreadBuffer { public ThreadBuffer(frmUI form, CustomArgs e) { form.Invoke((MethodInvoker)delegate { form.UpdateUI(e); }); } } What I would like to do is for my ThreadBuffer to check whether my form is currently busy doing previous updates. If it is, I'd like for it to wait until it frees up and then invoke the UpdateUI(e). I was thinking about either: a) //PseudoCode while(form==busy) { // Do nothing; } form.Invoke((MethodInvoker)delegate { form.UpdateUI(e); }); How would I check the form==busy? Also, I am not sure that this is a good approach. b) Create an event in form1 that will notify the ThreadBuffer that it is ready to process. // pseudocode List<CustomArgs> elist = new List<CustomArgs>(); public ThreadBuffer(frmUI form, CustomArgs e) { from.OnFreedUp += from_OnFreedUp(); elist.Add(e); } private form_OnFreedUp() { if (elist.count == 0) return; form.Invoke((MethodInvoker)delegate { form.UpdateUI(elist[0]); }); elist.Remove(elist[0]); } In this case, how would I write an event that will notify that the form is free? or c) any other ideas?
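
    A third option, sketched below: skip the "is the form busy" check entirely and let the UI message queue do the buffering. Control.BeginInvoke posts the update and returns immediately, and the form works through the queued updates, in order, whenever it is free. If updates can flood in faster than the UI can redraw, coalesce them (keep only the latest CustomArgs) before posting.

      public static void PostUpdate(frmUI form, CustomArgs e)
      {
          if (form.IsDisposed) return;                 // the form may be closing
          form.BeginInvoke((MethodInvoker)delegate
          {
              form.UpdateUI(e);                        // runs on the UI thread when it is free
          });
      }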

    Read the article

  • Suggestions for lightweight, thread-safe scheduler

    - by nirvanai
    I am trying to write a round-robin scheduler for lightweight threads (fibers). It must scale to handle as many concurrently-scheduled fibers as possible. I also need to be able to schedule fibers from threads other than the one the run loop is on, and preferably unschedule them from arbitrary threads as well (though I could live with only being able to unschedule them from the run loop). My current idea is to have a circular doubly-linked list, where each fiber is a node and the scheduler holds a reference to the current node. This is what I have so far: using Interlocked = System.Threading.Interlocked; public class Thread { internal Future current_fiber; public void RunLoop () { while (true) { var fiber = current_fiber; if (fiber == null) { // block the thread until a fiber is scheduled continue; } if (fiber.Fulfilled) fiber.Unschedule (); else fiber.Resume (); //if (current_fiber == fiber) current_fiber = fiber.next; Interlocked.CompareExchange<Future> (ref current_fiber, fiber.next, fiber); } } } public abstract class Future { public bool Fulfilled { get; protected set; } internal Future previous, next; // this must be thread-safe // it inserts this node before thread.current_fiber // (getting the exact position doesn't matter, as long as the // chosen nodes haven't been unscheduled) public void Schedule (Thread thread) { next = this; // maintain circularity, even if this is the only node previous = this; try_again: var current = Interlocked.CompareExchange<Future> (ref thread.current_fiber, this, null); if (current == null) return; var target = current.previous; while (target == null) { // current was unscheduled; negotiate for new current_fiber var potential = current.next; var actual = Interlocked.CompareExchange<Future> (ref thread.current_fiber, potential, current); current = (actual == current? potential : actual); if (current == null) goto try_again; target = current.previous; } // I would lock "current" and "target" at this point. // How can I do this w/o risk of deadlock? next = current; previous = target; target.next = this; current.previous = this; } // this would ideally be thread-safe public void Unschedule () { var prev = previous; if (prev == null) { // already unscheduled return; } previous = null; if (next == this) { next = null; return; } // Again, I would lock "prev" and "next" here // How can I do this w/o risk of deadlock? prev.next = next; next.previous = prev; } public abstract void Resume (); } As you can see, my sticking point is that I cannot ensure the order of locking, so I can't lock more than one node without risking deadlock. Or can I? I don't want to have a global lock on the Thread object, since the amount of lock contention would be extreme. Plus, I don't especially care about insertion position, so if I lock each node separately then Schedule() could use something like Monitor.TryEnter and just keep walking the list until it finds an unlocked node. Overall, I'm not invested in any particular implementation, as long as it meets the requirements I've mentioned. Any ideas would be greatly appreciated. Thanks! P.S- For the curious, this is for an open source project I'm starting at http://github.com/nirvanai/Cirrus
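
    One way to sidestep the lock-ordering problem entirely is to drop the intrusive doubly-linked list and let a concurrent queue provide the round-robin order: Schedule is an Enqueue from any thread, the run loop dequeues, resumes, and re-enqueues live fibers, and Unschedule becomes a flag the run loop honours lazily. This is a sketch only, not a drop-in replacement for the class above (it gives up positional insertion, and unscheduled fibers linger in the queue until their next turn):

      using System.Collections.Concurrent;
      using System.Threading;

      public abstract class Future
      {
          public bool Fulfilled { get; protected set; }
          internal int unscheduled;                      // 0 = scheduled, 1 = removed
          public void Unschedule() { Interlocked.Exchange(ref unscheduled, 1); }
          public abstract void Resume();
      }

      public class FiberThread
      {
          private readonly ConcurrentQueue<Future> queue = new ConcurrentQueue<Future>();
          private readonly SemaphoreSlim available = new SemaphoreSlim(0);

          // Safe from any thread: enqueue and wake the run loop if it is blocked.
          public void Schedule(Future fiber)
          {
              queue.Enqueue(fiber);
              available.Release();
          }

          public void RunLoop()
          {
              while (true)
              {
                  available.Wait();                      // blocks while nothing is scheduled
                  Future fiber;
                  if (!queue.TryDequeue(out fiber)) continue;
                  if (fiber.Fulfilled || Thread.VolatileRead(ref fiber.unscheduled) == 1)
                      continue;                          // dropped lazily on its turn
                  fiber.Resume();
                  Schedule(fiber);                       // back of the queue: round-robin
              }
          }
      }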

    Read the article
