Search Results

Search found 8166 results on 327 pages for 'thread syncronization'.

  • Tab Sweep: FacesMessage enhancements, Look up thread pool resources, JQuery/JSF integration, Galleria, ...

    - by arungupta
    Recent Tips and News on Java, Java EE 6, GlassFish & more : • Fixing remote GlassFish server errors on NetBeans (Igor Cardoso) • FacesMessage Enhancements (PrimeFaces) • How to create and look up thread pool resource in GlassFish (javahowto) • Jersey 1.12 is released (Jakub Podlesak) • VisualVM problem connecting to monitor Glassfish (Raymond Reid) • JSF 2.0 JQuery-JSF Integration (John Yeary) • JDBC-ODBC Bridge Example (John Yeary) • The Java EE 6 Example - Gracefully dealing with Errors in Galleria - Part 6 (Markus Eisele) • Logout functionality in Java web applications (JavaOnly) • LDAP PASSWORD POLICIES AND JAVAEE (Ricky's Hodgepodge) • Java User Groups Promote Java Education (java.net Editor's Daily Blog) • JavaEE Revisits Design Patterns: Aspects (Interceptor) (Developer Chronicles) • Java EE 6 Hand-on Workshop @ IIUI (Shahzad Badar) • javaee6-crud-example (Arjan Tims) • Sample CRUD application with JSF and RichFaces (Mark van der Tol) • 5 useful methods JSF developers should know (Java Code Geeks) Here are some tweets from this week ... Almost 9000 Parleys views at the #JavaEE6 #Devoxx talk I did with @BertErtman. Not even made available for free yet! #JavaEE6 is hot :-) Sent three proposals for Øredev, about #JavaEE6, #OSGi and a case study about Leren-op-Maat (OSGi in the cloud) together with @m4rr5 [blog] The Java EE 6 #Example - Gracefully dealing with #Errors in #Galleria - Part 6 http://t.co/Drg1EQvf #javaee6 Tomorrow, there is a session about Java EE6 #javaee6 at islamia university #bahawalpur under #pakijug.about 150 students going to attend it.

    Read the article

  • How to get PHP command line to work with PDO?

    - by Sabya
    I want to work with PDO, through PHP command line. It works perfect through the PHP web API, but not through the command line. But when I execute the command: php test.php, it says unknown class PDO. I think it has something to do with the thread-safety difference. Because, when I execute the above command, the following warnings come: - F:\shema\htdocs>php test.php PHP Warning: PHP Startup: soap: Unable to initialize module Module compiled with module API=20060613, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=1 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: sockets: Unable to initialize module Module compiled with module API=20060613, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=1 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: mysql: Unable to initialize module Module compiled with module API=20060613, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=1 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: pdo_mysql: Unable to initialize module Module compiled with module API=20060613, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=1 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: pdo_pgsql: Unable to initialize module Module compiled with module API=20060613, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=1 These options need to match in Unknown on line 0 PHP Fatal error: Class 'PDO' not found in F:\shema\htdocs\test.php on line 2 PHP version: 5.2.9-2, downloaded from here. OS: Windows Vista If the problem is with the modules, where do I get the thread safe modules for those modules?

    Read the article

  • Can threads safely read variables set by VCL events?

    - by Tom1952
    Is it safe for a thread to READ a variable set by a Delphi VCL event? When a user clicks on a VCL TCheckbox, the main thread sets a boolean to the checkbox's Checked state. CheckboxState := CheckBox1.Checked; At any time, a thread reads that variable if CheckBoxState then ... It doesn't matter if the thread "misses" a change to the boolean, because the thread checks the variable in a loop as it does other things. So it will see the state change eventually... Is this safe? Or do I need special code? Is surrounding the read and write of the variable (in the thread and main thread respectively) with critical code calls necessary and sufficient? As I said, it doesn't matter if the thread gets the "wrong" value, but I keep thinking that there might be a low-level problem if one thread tries to read a variable while the main thread is in the middle of writing it, or vice versa. My question is similar to this one: http://stackoverflow.com/questions/1353096/cross-thread-reading-of-a-variable. (Also related to my previous question: http://stackoverflow.com/questions/2449183/using-entercriticalsection-in-thread-to-update-vcl-label)

    Read the article

  • "pseudo-atomic" operations in C++

    - by dan
    So I'm aware that nothing is atomic in C++. But I'm trying to figure out if there are any "pseudo-atomic" assumptions I can make. The reason is that I want to avoid using mutexes in some simple situations where I only need very weak guarantees. 1) Suppose I have globally defined volatile bool b, which initially I set true. Then I launch a thread which executes a loop while(b) doSomething(); Meanwhile, in another thread, I execute b=true. Can I assume that the first thread will continue to execute? In other words, if b starts out as true, and the first thread checks the value of b at the same time as the second thread assigns b=true, can I assume that the first thread will read the value of b as true? Or is it possible that at some intermediate point of the assignment b=true, the value of b might be read as false? 2) Now suppose that b is initially false. Then the first thread executes bool b1=b; bool b2=b; if(b1 && !b2) bad(); while the second thread executes b=true. Can I assume that bad() never gets called? 3) What about an int or other builtin types: suppose I have volatile int i, which is initially (say) 7, and then I assign i=7. Can I assume that, at any time during this operation, from any thread, the value of i will be equal to 7? 4) I have volatile int i=7, and then I execute i++ from some thread, and all other threads only read the value of i. Can I assume that i never has any value, in any thread, except for either 7 or 8? 5) I have volatile int i, from one thread I execute i=7, and from another I execute i=8. Afterwards, is i guaranteed to be either 7 or 8 (or whatever two values I have chosen to assign)?

    Read the article

  • Thunderbird: how to move mails into correct thread? (mailing lists)

    - by unor
    I'm subscribed to some mailing lists, and every day people reply to the wrong mail (or don't reply at all), so their mail lands in the wrong (or a new) thread. I set the mail display in "View → Sort by" to "Threaded". Example: mailing list "Foobar": [Foobar] random topic Re: [Foobar] random topic Re: [Foobar] random topic Re: [Foobar] random topic Re: [Foobar] random topic [Foobar] I'm John Doe Re: [Foobar] I'm John Doe Re: [Foobar] Welcome, John Re: [Foobar] random topic There are two discussions, one about "random topic", one about "John Doe". The subject line changes in the discussion about John Doe, which is fine (no problem here). But the last mail should be filed in the first thread. Instead it sits at the top level. Now, how can I move that last mail into the correct thread? I tried to drag & drop it onto the mail I think it should be a reply to, but this doesn't work. In theory it should be possible by fiddling with the mail headers after receiving the mail, but that doesn't seem like a convenient way.

    Read the article

  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is an open source free helper class that lets you run multiple work in parallel threads, get success, failure and progress update on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after certain time, chain parallel work one after another. It’s more convenient than using .NET’s BackgroundWorker because you don’t have to declare one component per work, nor do you need to declare event handlers to receive notification and carry additional data through private variables. You can safely pass objects produced from different thread to the success callback. Moreover, you can wait for work to complete before you do certain operation and you can abort all parallel work while they are in-flight. If you are building highly responsive WPF UI where you have to carry out multiple job in parallel yet want full control over those parallel jobs completion and cancellation, then the ParallelWork library is the right solution for you. I am using the ParallelWork library in my PlantUmlEditor project, which is a free open source UML editor built on WPF. You can see some realistic use of the ParallelWork library there. Moreover, the test project comes with 400 lines of Behavior Driven Development flavored tests, that confirms it really does what it says it does. The source code of the library is part of the “Utilities” project in PlantUmlEditor source code hosted at Google Code. The library comes in two flavors, one is the ParallelWork static class, which has a collection of static methods that you can call. Another is the Start class, which is a fluent wrapper over the ParallelWork class to make it more readable and aesthetically pleasing code. ParallelWork allows you to start work immediately on separate thread or you can queue a work to start after some duration. You can start an immediate work in a new thread using the following methods: void StartNow(Action doWork, Action onComplete) void StartNow(Action doWork, Action onComplete, Action<Exception> failed) For example, ParallelWork.StartNow(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }, () => { workEndedAt = DateTime.Now; }); Or you can use the fluent way Start.Work: Start.Work(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }) .OnComplete(() => { workCompletedAt = DateTime.Now; }) .Run(); Besides simple execution of work on a parallel thread, you can have the parallel thread produce some object and then pass it to the success callback by using these overloads: void StartNow<T>(Func<T> doWork, Action<T> onComplete) void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail) For example, ParallelWork.StartNow<Dictionary<string, string>>( () => { test = new Dictionary<string,string>(); test.Add("test", "test"); return test; }, (result) => { Assert.True(result.ContainsKey("test")); }); Or, the fluent way: Start<Dictionary<string, string>>.Work(() => { test = new Dictionary<string, string>(); test.Add("test", "test"); return test; }) .OnComplete((result) => { Assert.True(result.ContainsKey("test")); }) .Run(); You can also start a work to happen after some time using these methods: DispatcherTimer StartAfter(Action onComplete, TimeSpan duration) DispatcherTimer StartAfter(Action doWork,Action onComplete,TimeSpan duration) You can use this to perform some timed operation on the UI thread, as well as perform some operation in separate thread after some time. 
ParallelWork.StartAfter( () => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }, () => { workCompletedAt = DateTime.Now; }, waitDuration); Or, the fluent way: Start.Work(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }) .OnComplete(() => { workCompletedAt = DateTime.Now; }) .RunAfter(waitDuration);   There are several overloads of these functions to have a exception callback for handling exceptions or get progress update from background thread while work is in progress. For example, I use it in my PlantUmlEditor to perform background update of the application. // Check if there's a newer version of the app Start<bool>.Work(() => { return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl); }) .OnComplete((hasUpdate) => { if (hasUpdate) { if (MessageBox.Show(Window.GetWindow(me), "There's a newer version available. Do you want to download and install?", "New version available", MessageBoxButton.YesNo, MessageBoxImage.Information) == MessageBoxResult.Yes) { ParallelWork.StartNow(() => { var tempPath = System.IO.Path.Combine( Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), Settings.Default.SetupExeName); UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath); }, () => { }, (x) => { MessageBox.Show(Window.GetWindow(me), "Download failed. When you run next time, it will try downloading again.", "Download failed", MessageBoxButton.OK, MessageBoxImage.Warning); }); } } }) .OnException((x) => { MessageBox.Show(Window.GetWindow(me), x.Message, "Download failed", MessageBoxButton.OK, MessageBoxImage.Exclamation); }); The above code shows you how to get exception callbacks on the UI thread so that you can take necessary actions on the UI. Moreover, it shows how you can chain two parallel works to happen one after another. Sometimes you want to do some parallel work when user does some activity on the UI. For example, you might want to save file in an editor while user is typing every 10 second. In such case, you need to make sure you don’t start another parallel work every 10 seconds while a work is already queued. You need to make sure you start a new work only when there’s no other background work going on. Here’s how you can do it: private void ContentEditor_TextChanged(object sender, EventArgs e) { if (!ParallelWork.IsAnyWorkRunning()) { ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10)); } } If you want to shutdown your application and want to make sure no parallel work is going on, then you can call the StopAll() method. ParallelWork.StopAll(); If you want to wait for parallel works to complete without a timeout, then you can call the WaitForAllWork(TimeSpan timeout). It will block the current thread until the all parallel work completes or the timeout period elapses. result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1)); The result is true, if all parallel work completed. If it’s false, then the timeout period elapsed and all parallel work did not complete. For details how this library is built and how it works, please read the following codeproject article: ParallelWork: Feature rich multithreaded fluent task execution library for WPF http://www.codeproject.com/KB/WPF/parallelwork.aspx If you like the article, please vote for me.

    Read the article

  • Java Grade Calculator program for class keeps getting the error "Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException" [migrated]

    - by user2880621
    this is my first question I've posted. I've used this site to look at other questions to help with similar problems I've had. After some cursory looking I couldn't find quite the answer I was looking for so I decided to finally succumb and create an account. I am fairly new to java, only a few weeks into my first class. Anyway, my project is to create a program which takes any amount of students and their grades, and then assign them a letter grade. The catch is, however, that it is on a sort of curve and the other grades' letter are dependent on the the highest. Anything equal to or 10 points below the best grade is an a, anything 11-20 points below is a b, and so on. I am to use an array, but I get this error when ran "Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException." I will go ahead and post my code down below. Thanks for any advice you may be able to give. package grade.calculator; import java.util.Scanner; /** * * @author nichol57 */ public class GradeCalculator { /** * @param args the command line arguments */ public static void main(String[] args) { Scanner input = new Scanner(System.in); System.out.println("Enter the number of students"); int number = input.nextInt(); double[]grades = new double [number]; for (int J = number; J >=0; J--) { System.out.println("Enter the students' grades"); grades[J] = input.nextDouble(); } double best = grades[0]; for (int J = 1; J < number; J++) { if (grades[J] >= best){ best = grades[J]; } } for (int J = 0;J < number; J++){ if (grades[J] >= best - 10){ System.out.println("Student " + J + " score is " + grades[J] + " and grade is " + "A"); } else if (grades[J] >= best - 20){ System.out.println("Student " + J + " score is " + grades[J] + " and grade is " + "B"); } else if (grades[J] >= best - 30) { System.out.println("Student " + J + " score is " + grades[J] + " and grade is " + "C"); } else if (grades[J] >= best - 40) { System.out.println("Student " + J + " score is " + grades[J] + " and grade is " + "D"); } else { System.out.println("Student " + J + " score is " + grades[J] + " and grade is " + "F"); } } // end for loop for output }// end main method }
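    The exception almost certainly comes from the first loop: for (int J = number; J >= 0; J--) starts at index number, which is one past the end of a number-element array, so grades[number] throws ArrayIndexOutOfBoundsException on the very first pass. A minimal sketch of the input loop with valid bounds (the class name is illustrative, and counting up reads more naturally than counting down):

    import java.util.Scanner;

    public class GradeInput {
        public static void main(String[] args) {
            Scanner input = new Scanner(System.in);
            System.out.println("Enter the number of students");
            int number = input.nextInt();
            double[] grades = new double[number];
            // Valid indices run from 0 to number - 1, so never start at `number`.
            for (int j = 0; j < number; j++) {
                System.out.println("Enter the students' grades");
                grades[j] = input.nextDouble();
            }
        }
    }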

    Read the article

  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
    In the first week of concurrent collections, began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  The last post discussed the ConcurrentDictionary<T> .  Finally this week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux. Recap As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization.  With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace.  Of these, the final concurrent collection we will examine is the ConcurrentBag and a very useful wrapper class called the BlockingCollection. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this informative whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentBag<T> – Thread-safe unordered collection. Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries.  Items can be added and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order.  This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness – that is, it’s possible new items could be consumed before older items given the right circumstances for a period of time. So why would you ever want a container that can be unfair?  Well, to look at it another way, you can use a ConcurrentQueue and get the fairness, but it comes at a cost in that the ordering rules and synchronization required to maintain that ordering can affect scalability a bit.  Thus sometimes the bag is great when you want the fastest way to get the next item to process, and don’t care what item it is or how long its been waiting. The way that the ConcurrentBag works is to take advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0) so that each thread using the bag has a list local to just that thread.  This means that adding or removing to a thread-local list requires very low synchronization.  The problem comes in where a thread goes to consume an item but it’s local list is empty.  In this case the bag performs “work-stealing” where it will rob an item from another thread that has items in its list.  This requires a higher level of synchronization which adds a bit of overhead to the take operation. So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less-than-idea in situations where some threads are dedicated producers and the other threads are dedicated consumers because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items. 
Like the other concurrent collections, there are some curiosities to keep in mind: IsEmpty(), Count, ToArray(), and GetEnumerator() lock collection Each of these needs to take a snapshot of whole bag to determine if empty, thus they tend to be more expensive and cause Add() and Take() operations to block. ToArray() and GetEnumerator() are static snapshots Because it is based on a snapshot, will not show subsequent updates after snapshot. Add() is lightweight Since adding to the thread-local list, there is very little overhead on Add. TryTake() is lightweight if items in thread-local list As long as items are in the thread-local list, TryTake() is very lightweight, much more so than ConcurrentStack() and ConcurrentQueue(), however if the local thread list is empty, it must steal work from another thread, which is more expensive. Remember, a bag is not ideal for all situations, it is mainly ideal for situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it will complete.  The main point is that the bag works best when each thread both takes and adds items. For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold.  Yes, obviously we could easily do this with a log function, but bare with me while I use this contrived example for simplicity. So let’s say we have a work function that will take a Tuple out of a bag, this Tuple will contain two ints.  The first int is the original number, and the second int is the last multiple of that number.  So we could load our bag with the initial values (let’s say we want to know the last multiple of each of 2, 3, 5, and 7 under 100. 1: var bag = new ConcurrentBag<Tuple<int, int>> 2: { 3: Tuple.Create(2, 1), 4: Tuple.Create(3, 1), 5: Tuple.Create(5, 1), 6: Tuple.Create(7, 1) 7: }; Then we can create a method that given the bag, will take out an item, apply the multiplier again, 1: public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int,int>> bag, int threshold) 2: { 3: Tuple<int,int> pair; 4:  5: // while there are items to take, this will prefer local first, then steal if no local 6: while (bag.TryTake(out pair)) 7: { 8: // look at next power 9: var result = Math.Pow(pair.Item1, pair.Item2 + 1); 10:  11: if (result < threshold) 12: { 13: // if smaller than threshold bump power by 1 14: bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1)); 15: } 16: else 17: { 18: // otherwise, we're done 19: Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.", 20: pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold); 21: } 22: } 23: } Now that we have this, we can load up this method as an Action into our Tasks and run it: 1: // create array of tasks, start all, wait for all 2: var tasks = new[] 3: { 4: new Task(() => FindHighestPowerUnder(bag, 100)), 5: new Task(() => FindHighestPowerUnder(bag, 100)), 6: }; 7:  8: Array.ForEach(tasks, t => t.Start()); 9:  10: Task.WaitAll(tasks); Totally contrived, I know, but keep in mind the main point!  When you have a thread or task that operates on an item, and then puts it back for further consumption – or decomposes an item into further sub-items to be processed – you should consider a ConcurrentBag as the thread-local lists will allow for quick processing.  
However, if you need ordering or if your processes are dedicated producers or consumers, this collection is not ideal.  As with anything, you should performance test as your mileage will vary depending on your situation! BlockingCollection<T> – A producers & consumers pattern collection The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producers and consumers paradigm to any collection that implements the interface IProducerConsumerCollection<T>.  If you don’t specify one at the time of construction, it will use a ConcurrentQueue<T> as its underlying store. If you don’t want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not).  In addition, you are of course free to create your own implementation of the interface. So, for those who don’t remember the producers and consumers classical computer-science problem, the gist of it is that you have one (or more) processes that are creating items (producers) and one (or more) processes that are consuming these items (consumers).  Now, the crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size.  Thus if a producer creates an item, but there is no space to store it, it must wait until an item is consumed.  Also if a consumer goes to consume an item and none exists, it must wait until an item is produced. The BlockingCollection makes it trivial to implement any standard producers/consumers process set by providing that “bin” where the items can be produced into and consumed from with the appropriate blocking operations.  In addition, you can specify whether the bin should have a limited size or can be (theoretically) unbounded, and you can specify timeouts on the blocking operations. As far as your choice of “bin”, for the most part the ConcurrentQueue is the right choice because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced.  You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter. So let’s look at some of the methods of note in BlockingCollection: BoundedCapacity returns capacity of the “bin” If the bin is unbounded, the capacity is int.MaxValue. Count returns an internally-kept count of items This makes it O(1), but if you modify underlying collection directly (not recommended) it is unreliable. CompleteAdding() is used to cut off further adds. This sets IsAddingCompleted and begins to wind down consumers once empty. IsAddingCompleted is true when producers are “done”. Once you are done producing, should complete the add process to alert consumers. IsCompleted is true when producers are “done” and “bin” is empty. Once you mark the producers done, and all items removed, this will be true. Add() is a blocking add to collection. If bin is full, will wait till space frees up Take() is a blocking remove from collection. If bin is empty, will wait until item is produced or adding is completed. GetConsumingEnumerable() is used to iterate and consume items. Unlike the standard enumerator, this one consumes the items instead of iteration. TryAdd() attempts add but does not block completely If adding would block, returns false instead, can specify TimeSpan to wait before stopping. 
TryTake() attempts to take but does not block completely Like TryAdd(), if taking would block, returns false instead, can specify TimeSpan to wait. Note the use of CompleteAdding() to signal the BlockingCollection that nothing else should be added.  This means that any attempts to TryAdd() or Add() after marked completed will throw an InvalidOperationException.  In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty, and then Take() will throw the InvalidOperationException and TryTake() will return false. So let’s create a simple program to try this out.  Let’s say that you have one process that will be producing items, but a slower consumer process that handles them.  This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded). 1: var bin = new BlockingCollection<int>(5); Now, we create a method to produce items: 1: public static void ProduceItems(BlockingCollection<int> bin, int numToProduce) 2: { 3: for (int i = 0; i < numToProduce; i++) 4: { 5: // try for 10 ms to add an item 6: while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10))) 7: { 8: Console.WriteLine("Bin is full, retrying..."); 9: } 10: } 11:  12: // once done producing, call CompleteAdding() 13: Console.WriteLine("Adding is completed."); 14: bin.CompleteAdding(); 15: } And one to consume them: 1: public static void ConsumeItems(BlockingCollection<int> bin) 2: { 3: // This will only be true if CompleteAdding() was called AND the bin is empty. 4: while (!bin.IsCompleted) 5: { 6: int item; 7:  8: if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10))) 9: { 10: Console.WriteLine("Bin is empty, retrying..."); 11: } 12: else 13: { 14: Console.WriteLine("Consuming item {0}.", item); 15: Thread.Sleep(TimeSpan.FromMilliseconds(20)); 16: } 17: } 18: } Then we can fire them off: 1: // create one producer and two consumers 2: var tasks = new[] 3: { 4: new Task(() => ProduceItems(bin, 20)), 5: new Task(() => ConsumeItems(bin)), 6: new Task(() => ConsumeItems(bin)), 7: }; 8:  9: Array.ForEach(tasks, t => t.Start()); 10:  11: Task.WaitAll(tasks); Notice that the producer is faster than the consumer, thus it should be hitting a full bin often and displaying the message after it times out on TryAdd(). 1: Consuming item 0. 2: Consuming item 1. 3: Bin is full, retrying... 4: Bin is full, retrying... 5: Consuming item 3. 6: Consuming item 2. 7: Bin is full, retrying... 8: Consuming item 4. 9: Consuming item 5. 10: Bin is full, retrying... 11: Consuming item 6. 12: Consuming item 7. 13: Bin is full, retrying... 14: Consuming item 8. 15: Consuming item 9. 16: Bin is full, retrying... 17: Consuming item 10. 18: Consuming item 11. 19: Bin is full, retrying... 20: Consuming item 12. 21: Consuming item 13. 22: Bin is full, retrying... 23: Bin is full, retrying... 24: Consuming item 14. 25: Adding is completed. 26: Consuming item 15. 27: Consuming item 16. 28: Consuming item 17. 29: Consuming item 19. 30: Consuming item 18. Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true, and the consumers will exit. Summary The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items.  In this way, it will choose to consume its own work if available, and then steal if not.  
However, in situations where you want fair consumption or ordering, or in situations where the producers and consumers are distinct processes, the bag is not optimal. The BlockingCollection is a great wrapper around all of the concurrent queue, stack, and bag that allows you to add producer and consumer semantics easily including waiting when the bin is full or empty. That’s the end of my dive into the concurrent collections.  I’d also strongly recommend, once again, you read this excellent Microsoft white paper that goes into much greater detail on the efficiencies you can gain using these collections judiciously (here).

    Read the article

  • Does oneway declaration in Android .aidl guarantee that method will be called in a separate thread?

    - by Dan Menes
    I am designing a framework for a client/server application for Android phones. I am fairly new to both Java and Android (but not new to programming in general, or threaded programming in particular). Sometimes my server and client will be in the same process, and sometimes they will be in different processes, depending on the exact use case. The client and server interfaces look something like the following: IServer.aidl: package com.my.application; interface IServer { /** * Register client callback object */ void registerCallback( in IClient callbackObject ); /** * Do something and report back */ void doSomething( in String what ); . . . } IClient.aidl: package com.my.application; oneway interface IClient { /** * Receive an answer */ void reportBack( in String answer ); . . . } Now here is where it gets interesting. I can foresee use cases where the client calls IServer.doSomething(), which in turn calls IClient.reportBack(), and on the basis of what is reported back, IClient.reportBack() needs to issue another call to IClient.doSomething(). The issue here is that IServer.doSomething() will not, in general, be reentrant. That's OK, as long as IClient.reportBack() is always invoked in a new thread. In that case, I can make sure that the implementation of IServer.doSomething() is always synchronized appropriately so that the call from the new thread blocks until the first call returns. If everything works the way I think it does, then by declaring the IClient interface as oneway, I guarantee this to be the case. At least, I can't think of any way that the call from IServer.doSomething() to IClient.reportBack() can return immediately (what oneway is supposed to ensure), yet IClient.reportBack still be able to reinvoke IServer.doSomething recursively in the same thread. Either a new thread in IServer must be started, or else the old IServer thread can be re-used for the inner call to IServer.doSomething(), but only after the outer call to IServer.doSomething() has returned. So my question is, does everything work the way I think it does? The Android documentation hardly mentions oneway interfaces.
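    Whatever guarantees Binder does or does not make about oneway dispatch, the reentrancy concern can be removed on the client side by never running the callback body on the thread that delivered it: queue it to an executor and return immediately. A minimal plain-Java sketch of that hand-off pattern; the Server and Client interfaces below are illustrative stand-ins for the AIDL-generated stubs, not the Android API:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Illustrative stand-ins for the AIDL-generated interfaces in the question.
    interface Server { void doSomething(String what); }
    interface Client { void reportBack(String answer); }

    // A client whose callback never blocks or re-enters the delivering thread:
    // the body runs on a single-threaded executor, so a follow-up doSomething()
    // call happens on that executor thread, never recursively on the caller's.
    class AsyncClient implements Client {
        private final ExecutorService callbackThread = Executors.newSingleThreadExecutor();
        private final Server server;

        AsyncClient(Server server) {
            this.server = server;
        }

        public void reportBack(final String answer) {
            callbackThread.submit(new Runnable() {
                public void run() {
                    if (answer.startsWith("more")) {
                        server.doSomething("follow-up"); // runs on callbackThread
                    }
                }
            });
            // Returns immediately, which is the behaviour a oneway call is meant to have.
        }
    }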

    Read the article

  • PyQt threads and signals - how to properly retrieve values

    - by Cawas
    Using Python 2.5 and PyQt, I couldn't find any question this specific in Python, so sorry if I'm repeating the other Qt referenced questions below, but I couldn't easily understand that C code. I've got two classes, a GUI and a thread, and I'm trying to get return values from the thread. I've used the link in here as base to write my code, which is working just fine. To sum it up and illustrate the question in code here (I don't think this code will run on itself): class MainWindow (QtGui.QWidget): # this is just a reference and not really relevant to the question def __init__ (self, parent = None): QtGui.QWidget.__init__(self, parent) self.thread = Worker() # this does not begin a thread - look at "Worker.run" for mor details self.connect(self.thread, QtCore.SIGNAL('finished()'), self.unfreezeUi) self.connect(self.thread, QtCore.SIGNAL('terminated()'), self.unfreezeUi) self.connect(self.buttonDaemon, QtCore.SIGNAL('clicked()'), self.pressDaemon) # the problem begins below: I'm not using signals, or queue, or whatever, while I believe I should def pressDaemon (self): self.buttonDaemon.setEnabled(False) if self.thread.isDaemonRunning(): self.thread.setDaemonStopSignal(True) self.buttonDaemon.setText('Daemon - converts every %s sec'% args['daemonInterval']) else: self.buttonConvert.setEnabled(False) self.thread.startDaemon() self.buttonDaemon.setText('Stop Daemon') self.buttonDaemon.setEnabled(True) # this whole class is just another reference class Worker (QtCore.QThread): daemonIsRunning = False daemonStopSignal = False daemonCurrentDelay = 0 def isDaemonRunning (self): return self.daemonIsRunning def setDaemonStopSignal (self, bool): self.daemonStopSignal = bool def __init__ (self, parent = None): QtCore.QThread.__init__(self, parent) self.exiting = False self.thread_to_run = None # which def will be running def __del__ (self): self.exiting = True self.thread_to_run = None self.wait() def run (self): if self.thread_to_run != None: self.thread_to_run(mode='continue') def startDaemon (self, mode = 'run'): if mode == 'run': self.thread_to_run = self.startDaemon # I'd love to be able to just pass this as an argument on start() below return self.start() # this will begin the thread # this is where the thread actually begins self.daemonIsRunning = True self.daemonStopSignal = False sleepStep = 0.1 # don't know how to interrupt while sleeping - so the less sleepStep, the faster StopSignal will work # begins the daemon in an "infinite" loop while self.daemonStopSignal == False and not self.exiting: # here, do any kind of daemon service delay = 0 while self.daemonStopSignal == False and not self.exiting and delay < args['daemonInterval']: time.sleep(sleepStep) # delay is actually set by while, but this holds for N second delay += sleepStep # daemon stopped, reseting everything self.daemonIsRunning = False self.emit(QtCore.SIGNAL('terminated')) Tho it's quite big, I hope this is pretty clear. The main point is on def pressDaemon. Specifically all 3 self.thread calls. The last one, self.thread.startDaemon() is just fine, and exactly as the example. I doubt that represents any issue. The problem is being able to set the Daemon Stop Signal and retrieve the value if it's running. I'm not sure that it's possible to set a stop signal on QtCore.QtThread, because I've tried doing the same way and it didn't work. But I'm pretty sure it's not possible to retrieve a return result from the emit. So, there it is. 
I'm using direct calls to the thread class, and I'm almost positive that's not a good design and will probably fail when running under stress. I read about that queue, but I'm not sure it's the proper solution here, or if I should be using Qt at all, since this is Python. And just maybe there's nothing wrong with the way I'm doing.

    Read the article

  • iPhone: Problems releasing UIViewController in a multithreaded environment

    - by bart-simpson
    Hi! I have a UIViewController and in that controller, i am fetching an image from a URL source. The image is fetched in a separate thread after which the user-interface is updated on the main thread. This controller is displayed as a page in a UIScrollView parent which is implemented to release controllers that are not in view anymore. When the thread finishes fetching content before the UIViewController is released, everything works fine - but when the user scrolls to another page before the thread finishes, the controller is released and the only handle to the controller is owned by the thread making releaseCount of the controller equals to 1. Now, as soon as the thread drains NSAutoreleasePool, the controller gets releases because the releaseCount becomes 0. At this point, my application crashes and i get the following error message: bool _WebTryThreadLock(bool), 0x4d99c60: Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now... The backtrace reveals that the application crashed on the call to [super dealloc] and it makes total sense because the dealloc function must have been triggered by the thread when the pool was drained. My question is, how i can overcome this error and release the controller without leaking memory? One solution that i tried was to call [self retain] before the pool is drained so that retainCount doesn't fall to zero and then using the following code to release controller in the main thread: [self performSelectorOnMainThread:@selector(autorelease) withObject:nil waitUntilDone:NO]; Unfortunately, this did not work out. Below is the function that is executed on a thread: - (void)thread_fetchContent { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; NSURL *imgURL = [NSURL URLWithString:@"http://www.domain.com/image.png"]; // UIImage *imgHotspot is declared as private - The image is retained // here and released as soon as it is assigned to UIImageView imgHotspot = [[[UIImage alloc] initWithData: [NSData dataWithContentsOfURL: imgURL]] retain]; if ([self retainCount] == 1) { [self retain]; // increment retain count ~ workaround [pool drain]; // drain pool // this doesn't work - i get the same error [self performSelectorOnMainThread:@selector(autorelease) withObject:nil waitUntilDone:NO]; } else { // show fetched image on the main thread - this works fine! [self performSelectorOnMainThread:@selector(showImage) withObject:nil waitUntilDone:NO]; [pool drain]; } } Please help! Thank you in advance.

    Read the article

  • Swing: How do I run a job from the AWT thread, but after a window was laid out?

    - by java.is.for.desktop
    My complete GUI runs inside the AWT thread, because I start the main window using SwingUtilities.invokeAndWait(...). Now I have a JDialog which has just to display a JLabel, which indicates that a certain job is in progress, and close that dialog after the job was finished. The problem is: the label is not displayed. That job seems to be started before JDialog was fully layed-out. When I just let the dialog open without waiting for a job and closing, the label is displayed. The last thing the dialog does in its ctor is setVisible(true). Things such as revalidate(), repaint(), ... don't help either. Even when I start a thread for the monitored job, and wait for it using someThread.join() it doesn't help, because the current thread (which is the AWT thread) is blocked by join, I guess. Replacing JDialog with JFrame doesn't help either. So, is the concept wrong in general? Or can I manage it to do certain job after it is ensured that a JDialog (or JFrame) is fully layed-out? Simplified algorithm of what I'm trying to achieve: Create a subclass of JDialog Ensure that it and its contents are fully layed-out Start a process and wait for it to finish (threaded or not, doesn't matter) Close the dialog I managed to write a reproducible test case: EDIT Problem from an answer is now addressed: This use case does display the label, but it fails to close after the "simulated process", because of dialog's modality. import java.awt.*; import javax.swing.*; public class _DialogTest2 { public static void main(String[] args) throws Exception { SwingUtilities.invokeAndWait(new Runnable() { final JLabel jLabel = new JLabel("Please wait..."); @Override public void run() { JFrame myFrame = new JFrame("Main frame"); myFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); myFrame.setSize(750, 500); myFrame.setLocationRelativeTo(null); myFrame.setVisible(true); JDialog d = new JDialog(myFrame, "I'm waiting"); d.setModalityType(Dialog.ModalityType.APPLICATION_MODAL); d.add(jLabel); d.setSize(300, 200); d.setLocationRelativeTo(null); d.setVisible(true); SwingUtilities.invokeLater(new Runnable() { @Override public void run() { try { Thread.sleep(3000); // simulate process jLabel.setText("Done"); } catch (InterruptedException ex) { } } }); d.setVisible(false); d.dispose(); myFrame.setVisible(false); myFrame.dispose(); } }); } }
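    One way to get the "show dialog, run job, close dialog" sequence without blocking the event dispatch thread is SwingWorker: start the worker before showing the modal dialog, do the work in doInBackground(), and close the dialog from done(), which runs back on the EDT once the job finishes. A minimal sketch of that approach, with a Thread.sleep() standing in for the real job:

    import javax.swing.*;

    public class BusyDialogDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    JFrame frame = new JFrame("Main frame");
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.setSize(750, 500);
                    frame.setLocationRelativeTo(null);
                    frame.setVisible(true);

                    final JDialog dialog = new JDialog(frame, "I'm waiting", true);
                    dialog.add(new JLabel("Please wait...", SwingConstants.CENTER));
                    dialog.setSize(300, 200);
                    dialog.setLocationRelativeTo(frame);

                    new SwingWorker<Void, Void>() {
                        protected Void doInBackground() throws Exception {
                            Thread.sleep(3000); // simulated job, runs off the EDT
                            return null;
                        }

                        protected void done() {
                            dialog.dispose(); // back on the EDT after the job finishes
                        }
                    }.execute();

                    // setVisible(true) on a modal dialog blocks this code path, but the EDT
                    // keeps pumping events, so done() still runs and closes the dialog.
                    dialog.setVisible(true);
                }
            });
        }
    }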

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I’ll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs. Configuring the Data Nodes The configuration of data node threads can be managed in two ways via the config.ini file: - Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM. - Use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to. The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64 – 80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets of dense multi-core processors. 24 Threads and Beyond! An example of how to make best use of a 24 CPU/thread server box is to configure the following: - 8 ldm threads - 4 tc threads - 3 recv threads - 3 send threads - 1 rep thread for asynchronous replication. Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, so it is recommended to keep that activity away from the CPUs running the MySQL Cluster threads. When booting a Linux kernel it is also possible to provide the option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task; only by using CPU affinity syscalls can a process be made to run on them. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning IRQ processing on those CPUs, a very stable performance environment is created for a MySQL Cluster data node. On a 32 CPU/Thread server: - Increase the number of ldm threads to 12 - Increase tc threads to 6 - Provide 2 more CPUs for the OS and interrupts. - The number of send and receive threads should, in most cases, still be sufficient. On a 40 CPU/Thread server, increase ldm threads to 16, tc threads to 8, and increment send and receive threads to 4.
On a 48 CPU/Thread server it is possible to optimize further by using: - 12 tc threads - 2 more CPUs for the OS and interrupts - Avoid using IO threads and main thread on same CPU - Add 1 more receive thread. Summary As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices. You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept Don’t forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release
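    To make the 24 CPU/thread layout above concrete, a ThreadConfig entry in the [ndbd default] section of config.ini might look roughly like the sketch below. The thread counts and CPU numbers mirror the article's example, but the exact attribute syntax is an assumption on my part, so check it against the MySQL Cluster 7.2 manual before copying it:

    [ndbd default]
    # Sketch only: 8 ldm, 4 tc, 3 recv, 3 send, 1 rep, with main and io sharing CPU 20
    ThreadConfig=ldm={count=8,cpubind=1,2,3,4,5,6,7,8},tc={count=4,cpubind=9,10,11,12},recv={count=3,cpubind=13,14,15},send={count=3,cpubind=16,17,18},rep={cpubind=19},main={cpubind=20},io={cpubind=20}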

    Read the article

  • How do I update blackberry UI items from a thread?

    - by Andrei T. Ursan
    public class PlayText extends Thread { private int duration; private String text; private PlayerScreen playerscrn; public PlayText(String text, int duration) { this.duration = duration; this.text = text; this.playerscrn = (PlayerScreen)UiApplication.getUiApplication().getActiveScreen(); } public void run() { synchronized(UiApplication.getEventLock()) { try{ RichTextField text1player = new RichTextField(this.text, Field.NON_FOCUSABLE); playerscrn.add(text1player); playerscrn.invalidate(); Thread.sleep(this.duration); RichTextField text2player = new RichTextField("hahhaha", Field.NON_FOCUSABLE); playerscrn.add(text2player); playerscrn.invalidate(); Thread.sleep(1000); RichTextField text3player = new RichTextField("Done", Field.NON_FOCUSABLE); playerscrn.add(text3player); playerscrn.invalidate(); }catch(Exception e){ System.out.println("I HAVE AN ERROR"); } } } } With the above code I'm trying to create a small text player. Instead to get all the text labels one by one, something like display text1player wait this.duration milliseconds display text2player wait 1000 milliseconds display text3player thread done. The screens waits this.duration + 1000 milliseconds and displays all the labels at once. I tried with a runnable and calling .invokeLater or .invokeAndWait but I still get the same behavior, and even if I get dirty like above using synchronized it still doesn't work. Does anyone know how I can display each label at a time? Thank you!
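    The labels all appear at once because run() holds the UI event lock for the entire sequence, so the screen never gets a chance to repaint between the adds. A minimal sketch of one alternative, assuming the standard UiApplication.invokeLater() API: do the timing on the background thread and queue each label add to the event thread as its own short runnable (here the screen is passed into the constructor instead of being fetched and cast from getActiveScreen()):

    import net.rim.device.api.ui.Field;
    import net.rim.device.api.ui.UiApplication;
    import net.rim.device.api.ui.component.RichTextField;
    import net.rim.device.api.ui.container.MainScreen;

    public class PlayText extends Thread {
        private final int duration;
        private final String text;
        private final MainScreen playerScreen;

        public PlayText(String text, int duration, MainScreen playerScreen) {
            this.duration = duration;
            this.text = text;
            this.playerScreen = playerScreen;
        }

        private void addLabel(final String label) {
            // Queue a short UI update; the event lock is only held while adding the field.
            UiApplication.getUiApplication().invokeLater(new Runnable() {
                public void run() {
                    playerScreen.add(new RichTextField(label, Field.NON_FOCUSABLE));
                }
            });
        }

        public void run() {
            try {
                addLabel(text);
                Thread.sleep(duration);   // sleeping off the event thread keeps the UI responsive
                addLabel("hahhaha");
                Thread.sleep(1000);
                addLabel("Done");
            } catch (InterruptedException e) {
                // interrupted: stop playing
            }
        }
    }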

    Read the article

  • C# - How to change the window state of a Form from a different thread?

    - by Dodi300
    Hello. Does anyone know how I can change the window state of a form from another thread? This is the code I'm using: private void button4_Click(object sender, EventArgs e) { string pathe = label1.Text; string name = Path.GetFileName(pathe); pathe = pathe.Replace(name, ""); string runpath = label2.Text; Process process; process = new Process(); process.EnableRaisingEvents = true; process.Exited += new System.EventHandler(process_Exited); process.StartInfo.FileName = @runpath; process.StartInfo.WorkingDirectory = @pathe; process.Start(); WindowState = FormWindowState.Minimized; } private void process_Exited(object sender, EventArgs e) { this.WindowState = FormWindowState.Normal; } It's meant to run a program and minimize, then return to the normal state once the program has closed. However, I get this error: "Cross-thread operation not valid: Control 'Form1' accessed from a thread other than the thread it was created on." Any idea how to get this to work? Thanks.

    Read the article

  • How do I make my ArrayList thread-safe? Another approach to the problem in Java?

    - by thechiman
    I have an ArrayList that I want to use to hold RaceCar objects that extend the Thread class as soon as they are finished executing. A class, called Race, handles this ArrayList using a callback method that the RaceCar object calls when it is finished executing. The callback method, addFinisher(RaceCar finisher), adds the RaceCar object to the ArrayList. This is supposed to give the order in which the Threads finish executing. I know that ArrayList isn't synchronized and thus isn't thread-safe. I tried using the Collections.synchronizedCollection(c Collection) method by passing in a new ArrayList and assigning the returned Collection to an ArrayList. However, this gives me a compiler error: Race.java:41: incompatible types found : java.util.Collection required: java.util.ArrayList finishingOrder = Collections.synchronizedCollection(new ArrayList(numberOfRaceCars)); Here is the relevant code: public class Race implements RaceListener { private Thread[] racers; private ArrayList finishingOrder; //Make an ArrayList to hold RaceCar objects to determine winners finishingOrder = Collections.synchronizedCollection(new ArrayList(numberOfRaceCars)); //Fill array with RaceCar objects for(int i=0; i<numberOfRaceCars; i++) { racers[i] = new RaceCar(laps, inputs[i]); //Add this as a RaceListener to each RaceCar ((RaceCar) racers[i]).addRaceListener(this); } //Implement the one method in the RaceListener interface public void addFinisher(RaceCar finisher) { finishingOrder.add(finisher); } What I need to know is, am I using a correct approach and if not, what should I use to make my code thread-safe? Thanks for the help!
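    The compiler error is just a typing issue: Collections.synchronizedCollection() returns a Collection, so it cannot be assigned to an ArrayList variable. Declaring the field as a List and wrapping it with Collections.synchronizedList() (or using a concurrent collection such as CopyOnWriteArrayList) keeps the code compiling and makes the add-on-finish callback safe. A minimal sketch, with the RaceCar details reduced to a name:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    class RaceCar { // reduced stand-in for the question's Thread subclass
        final String name;
        RaceCar(String name) { this.name = name; }
    }

    public class Race {
        // Option 1: program to the List interface and wrap with synchronizedList.
        private final List<RaceCar> finishingOrder =
                Collections.synchronizedList(new ArrayList<RaceCar>());

        // Option 2 (alternative): a concurrent list, no external locking needed for add().
        private final List<RaceCar> finishingOrderAlt = new CopyOnWriteArrayList<RaceCar>();

        // Called from each RaceCar's thread when it crosses the line.
        public void addFinisher(RaceCar finisher) {
            finishingOrder.add(finisher);
        }
    }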

    Read the article

  • Are Thread.stop and friends ever safe in Java?

    - by Stephen C
    The stop(), suspend(), and resume() methods in java.lang.Thread are deprecated because they are unsafe. Sun's recommended workaround is to use Thread.interrupt(), but that approach doesn't work in all cases. For example, if you call a library method that doesn't explicitly or implicitly check the interrupted flag, you have no choice but to wait for the call to finish. So, I'm wondering if it is possible to characterize situations where it is (provably) safe to call stop() on a Thread. For example, would it be safe to stop() a thread that did nothing but call find(...) or match(...) on a java.util.regex.Matcher? (If there are any Sun engineers reading this ... a definitive answer would be really appreciated.) EDIT: Answers that simply restate the mantra that you should not call stop() because it is deprecated, unsafe, whatever, are missing the point of this question. I know that it is genuinely unsafe in the majority of cases, and that if there is a viable alternative you should always use that instead. This question is about the subset of cases where it is safe. Specifically, what is that subset?
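    For the general case, the cooperative pattern is still the only approach that is safe by construction: the worker polls the interrupt status at points where its data structures are consistent, and the caller requests cancellation with interrupt(). A minimal sketch of that pattern (it deliberately does not answer the Matcher-specific question, which is exactly the hard case the post is about):

    public class CancellableWorker implements Runnable {
        public void run() {
            // Check the flag only at points where shared state is consistent,
            // so stopping here can never leave a lock held or an invariant broken.
            while (!Thread.currentThread().isInterrupted()) {
                doOneUnitOfWork();
            }
        }

        private void doOneUnitOfWork() {
            // placeholder for a bounded piece of work
        }

        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread(new CancellableWorker());
            t.start();
            Thread.sleep(1000);
            t.interrupt();   // request cancellation; the loop notices on its next check
            t.join();
        }
    }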

    Read the article

  • How to make Exchange 2010 use In-Reply-To and References headers as well as Thread-Index?

    - by Paul Wagland
    The problem that I have is simple. We have recently upgraded from Exchange 2003 to Exchange 2010. Everything went fine, and there are very few complaints. In Exchange 2003, some of our OWA users liked the Threading view, these users love the conversations view, especially with the cross-mailbox threading. The problem is that some of these users are now complaining that old e-mails that they got from systems like bugzilla were migrated across correctly threaded, but the new e-mails are not being correctly threaded. If I look at the mail source using an IMAP client then I can see that all of the mails that are turning up as being threaded have the (Microsoft specific) Thread-Index header, and the mails that are not getting grouped do not have this header. The question is, how can I make the webmail client respect the normal threading? That is, is there a way to Make the OWA client use the standard In-Reply-To and References headers, or Make Exchange generate the Thread-Index headers for Outlook and OWA to use?

    Read the article

  • What are "Missing thread record" errors when running fsck -fy?

    - by ohho
    There is some error reported when I run Disk Utility and verify the root volume on my OS X MacBook. So I boot with CMD-S into single-user mode and run /sbin/fsck -fy. The errors look like: ** Checking catalog file. Missing thread record (id = ...) Incorrect number of thread records ** Checking catalog hierarchy. Invalid volume file count (It should be ... instead of ...) ** Repairing Volume Missing directory record (id = ...) I'd like to know what causes these errors, so that hopefully I can be more careful in the future and prevent them from happening again. p.s. I am using an SSD, so I assume a mechanical hard disk error is less likely. Thanks!

    Read the article

  • UML Diagrams of Multi-Threaded Applications

    - by PersonalNexus
    For single-threaded applications I like to use class diagrams to get an overview of the architecture of that application. This type of diagram, however, hasn’t been very helpful when trying to understand heavily multi-threaded/concurrent applications, for instance because different instances of a class "live" on different threads (meaning accessing an instance is save only from the one thread it lives on). Consequently, associations between classes don’t necessarily mean that I can call methods on those objects, but instead I have to make that call on the target object's thread. Most literature I have dug up on the topic such as Designing Concurrent, Distributed, and Real-Time Applications with UML by Hassan Gomaa had some nice ideas, such as drawing thread boundaries into object diagrams, but overall seemed a bit too academic and wordy to be really useful. I don’t want to use these diagrams as a high-level view of the problem domain, but rather as a detailed description of my classes/objects, their interactions and the limitations due to thread-boundaries I mentioned above. I would therefore like to know: What types of diagrams have you found to be most helpful in understanding multi-threaded applications? Are there any extensions to classic UML that take into account the peculiarities of multi-threaded applications, e.g. through annotations illustrating that some objects might live in a certain thread while others have no thread-affinity; some fields of an object may be read from any thread, but written to only from one; some methods are synchronous and return a result while others are asynchronous that get requests queued up and return results for instance via a callback on a different thread.

    Read the article

  • How is thread synchronization implemented at the assembly language level?

    - by Martin
    While I'm familiar with concurrent programming concepts such as mutexes and semaphores, I have never understood how they are implemented at the assembly language level. I imagine there being a set of memory "flags" saying: lock A is held by thread 1 lock B is held by thread 3 lock C is not held by any thread etc But how is access to these flags synchronized between threads? Something like this naive example would only create a race condition: mov edx, [myThreadId] wait: cmp [lock], 0 jne wait mov [lock], edx ; I wanted an exclusive lock but the above ; three instructions are not an atomic operation :(
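    What closes the race in the snippet is an atomic read-modify-write instruction rather than a separate compare and move: on x86 that is typically xchg or lock cmpxchg, and runtimes expose the same primitive as compare-and-swap. As an illustration in Java rather than assembly, here is a toy spinlock built on AtomicBoolean.compareAndSet(), which the JVM implements with the processor's atomic compare-and-exchange instruction:

    import java.util.concurrent.atomic.AtomicBoolean;

    // A toy spinlock: lock() succeeds only for the thread whose compare-and-set
    // atomically flips the flag from false to true, so two threads can never
    // both observe "unlocked" and then both take the lock.
    public class SpinLock {
        private final AtomicBoolean locked = new AtomicBoolean(false);

        public void lock() {
            while (!locked.compareAndSet(false, true)) {
                // busy-wait; a real lock would back off or park the thread
            }
        }

        public void unlock() {
            locked.set(false);
        }
    }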

    Read the article

  • Is a signal sent with kill to a parent thread guaranteed to be processed before the next statement?

    - by Jonathan M Davis
    Okay, so if I'm running in a child thread on linux (using pthreads if that matters), and I run the following command kill(getpid(), someSignal); it will send the given signal to the parent of the current thread. My question: Is it guaranteed that the parent will then immediately get the CPU and process the signal (killing the app if it's a SIGKILL or doing whatever else if it's some other signal) before the statement following kill() is run? Or is it possible - even probable - that whatever command follows kill() will run before the signal is processed by the parent thread?

    Read the article

  • How do you call a generic method on a thread?

    - by cw
    How would I call a method with the below header on a thread? public void ReadObjectAsync<T>(string filename) { // Can't use T in a delegate, moved it to a parameter. ThreadStart ts = delegate() { ReadObjectAcync(filename, typeof(T)); }; Thread th = new Thread(ts); th.IsBackground = true; th.Start(); } private void ReadObjectAcync(string filename, Type t) { // HOW? } public T ReadObject<T>(string filename) { // Deserializes a file to a type. }

    Read the article
