Search Results

Search found 8166 results on 327 pages for 'thread syncronization'.

Page 50/327 | < Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >

  • Are indivisible operations still indivisible on multiprocessor and multicore systems?

    - by Steve314
    As per the title, plus: what are the limitations and gotchas? For example, on x86 processors, alignment for most data types is optional - an optimisation rather than a requirement. That means a pointer may be stored at an unaligned address, which in turn means that pointer might be split over a cache page boundary. Obviously you could achieve a split write on any processor if you work hard enough (picking out particular bytes etc.), but not in a way where you'd still expect the write operation to be indivisible. I seriously doubt that a multicore processor can guarantee that other cores see a consistent all-before or all-after view of a written pointer in this unaligned-write-crossing-a-page-boundary situation. Am I right? And are there any similar gotchas I haven't thought of?

    Read the article

  • Returning values from Swing using invokeAndWait

    - by Joonas Pulakka
    I've been using the following approach to create components and return values to/from Swing from outside the EDT. For instance, the following method could be an extension to JFrame, to create a JPanel and add it to the parent JFrame:

        public JPanel threadSafeAddPanel() {
            final JPanel[] jPanel = new JPanel[1];
            try {
                EventQueue.invokeAndWait(new Runnable() {
                    public void run() {
                        jPanel[0] = new JPanel();
                        add(jPanel[0]);
                    }
                });
            } catch (InterruptedException ex) {
            } catch (InvocationTargetException ex) {
            }
            return jPanel[0];
        }

    The local one-element array is used to transfer the "result" out of the Runnable, which is invoked on the EDT. It looks "a bit" hacky, so my questions are: Does this make sense? Is anybody else doing something like this? Is the one-element array a good way to transfer the result? Is there an easier way to do this?
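
    One way to avoid the one-element array is to let a FutureTask carry the result out of the EDT; a minimal sketch, assuming the method lives in a JFrame subclass as in the question (the class name here is invented, and the method must be called from outside the EDT or get() will block forever):

        import java.awt.EventQueue;
        import java.util.concurrent.Callable;
        import java.util.concurrent.FutureTask;
        import javax.swing.JFrame;
        import javax.swing.JPanel;

        public class FutureTaskFrame extends JFrame {
            // Same idea as threadSafeAddPanel(), but the FutureTask carries the result out.
            public JPanel threadSafeAddPanel() throws Exception {
                FutureTask<JPanel> task = new FutureTask<JPanel>(new Callable<JPanel>() {
                    public JPanel call() {
                        JPanel panel = new JPanel();
                        add(panel);               // runs on the EDT
                        return panel;
                    }
                });
                EventQueue.invokeLater(task);     // queue the task on the EDT
                return task.get();                // block the caller until the EDT has run it
            }
        }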

    Read the article

  • How do I know how many clients are calling my WCF service function?

    - by ZhengZhiren
    I am writing a program to test WCF service performance under high concurrency. On the client side, I start many threads that call a WCF service function which returns a long list of data objects. On the server side, inside that function, I need to know how many clients are currently calling it. To do that, I set up a counter variable: I increment it at the beginning of the function, but how can I decrement it after the function has returned the result?

        int clientCount = 0;

        public DataObject[] GetData()
        {
            Interlocked.Increment(ref clientCount);
            List<DataObject> result = MockDb.GetData();
            return result.ToArray();
            Interlocked.Decrement(ref clientCount); // can't run to here...
        }

    I have seen a way to do this in C++: create a counter class that increments the variable in its constructor and decrements it in its destructor, then declare a counter object in the function so the constructor runs on entry and the destructor runs when the function returns. Like this:

        class counter
        {
        public:
            counter()  { ++clientCount; /* not simply like this, needs to be atomic */ }
            ~counter() { --clientCount; /* not simply like this, needs to be atomic */ }
        };
        ...
        myfunction()
        {
            counter c;
            // do something
            return something;
        }

    In C# I think I can do the same with the following code, but I am not sure:

        public class Service1 : IService1
        {
            static int clientCount = 0;

            private class ClientCounter : IDisposable
            {
                public ClientCounter()
                {
                    Interlocked.Increment(ref clientCount);
                }

                public void Dispose()
                {
                    Interlocked.Decrement(ref clientCount);
                }
            }

            public DataObject[] GetData()
            {
                using (ClientCounter counter = new ClientCounter())
                {
                    List<DataObject> result = MockDb.GetData();
                    return result.ToArray();
                }
            }
        }

    I wrote a counter class that implements the IDisposable interface and put my function body inside a using block. But it doesn't seem to work very well: no matter how many threads I start, the clientCount variable never goes above 3. Any advice would be appreciated.
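
    The question is about C# and WCF; for comparison, the same enter/leave bookkeeping can be written in Java with an AtomicInteger and a try/finally block, which plays the role of the using/IDisposable pair. A minimal sketch (class and method names are made up for illustration):

        import java.util.concurrent.atomic.AtomicInteger;

        public class DataService {
            // Number of calls currently executing.
            private static final AtomicInteger activeCalls = new AtomicInteger();

            public Object[] getData() {
                activeCalls.incrementAndGet();      // entering the call
                try {
                    return fetchFromDb();           // placeholder for the real work
                } finally {
                    activeCalls.decrementAndGet();  // always runs, even after return or exception
                }
            }

            private Object[] fetchFromDb() {
                return new Object[0];               // stand-in for MockDb.GetData()
            }
        }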

    Read the article

  • What is wrong with locking non-static fields? What is the correct way to lock a particular instance?

    - by smartcaveman
    Why is it considered bad practice to lock non-static fields? And if I am not supposed to lock non-static fields, how do I lock an instance method without also locking the same method on every other instance of the same or a derived class? I wrote an example to make my question clearer.

        public abstract class BaseClass
        {
            private readonly object NonStaticLockObject = new object();
            private static readonly object StaticLockObject = new object();

            protected void DoThreadSafeAction<T>(Action<T> action) where T : BaseClass
            {
                var derived = this as T;
                if (derived == null)
                {
                    throw new Exception();
                }
                lock (NonStaticLockObject)
                {
                    action(derived);
                }
            }
        }

        public class DerivedClass : BaseClass
        {
            private readonly Queue<object> _queue;

            public void Enqueue(object obj)
            {
                DoThreadSafeAction<DerivedClass>(x => x._queue.Enqueue(obj));
            }
        }

    If I lock on StaticLockObject, then DoThreadSafeAction is locked across all instances of all classes that derive from BaseClass, and that is not what I want. I want to make sure that no other thread can call a method on a particular instance of an object while it is locked.
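
    The question is about C#, but the per-instance-lock idea translates directly to Java: each object owns its own private lock, so locking one instance does not serialize the others. A minimal sketch with invented names:

        import java.util.ArrayDeque;
        import java.util.Queue;

        public class PerInstanceLocking {
            // One private lock per instance: locking this instance does not block other instances.
            private final Object lock = new Object();
            private final Queue<Object> queue = new ArrayDeque<Object>();

            public void enqueue(Object item) {
                synchronized (lock) {          // guards only this instance's state
                    queue.add(item);
                }
            }

            public Object dequeue() {
                synchronized (lock) {
                    return queue.poll();       // null if empty
                }
            }
        }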

    Read the article

  • How to use Multiple Variables for a lock Scope in C#

    - by Gunner
    I have a situation where a block of code should be executed only when two locker objects are both free. I was hoping there would be something like:

        lock (a, b)
        {
            // this scope is in the critical region
        }

    However, there seems to be nothing like that. So does that mean the only way to do this is:

        lock (a)
        {
            lock (b)
            {
                // this scope is in the critical region
            }
        }

    Will this even work as expected? The code compiles, but I am not sure whether it achieves what I am expecting.
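
    The question is about C#'s lock statement; as a Java sketch of the same idea, nested locking does work, but only if every code path acquires the two locks in the same order, otherwise two threads can deadlock by each holding one lock and waiting for the other. A minimal illustration (names are invented):

        public class TwoLockExample {
            private final Object lockA = new Object();
            private final Object lockB = new Object();

            public void doCriticalWork(Runnable work) {
                // Always acquire lockA before lockB, everywhere in the program,
                // so no two threads can each hold one lock while waiting for the other.
                synchronized (lockA) {
                    synchronized (lockB) {
                        work.run();    // both locks are held here
                    }
                }
            }
        }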

    Read the article

  • DOM Storage and locks

    - by user535759
    Since DOM storage and its equivalents persist between tabs and windows, I've thought about using it for message passing. The problem is that fetch and store are separate operations, and therefore not atomic. I have models that rely on UUID generation, conflict resolution, and beaconing to do the small subset of what I need, but my real question is this: since local storage is a shared memory resource, what locking mechanisms are available for mutual access?

    Read the article

  • What's a good algorithm for searching arrays N and M, in order to find elements in N that also exist

    - by GenTiradentes
    I have two arrays, N and M. They are both arbitrarily sized, though N is usually smaller than M. I want to find out which elements in N also exist in M, in the fastest way possible. To give an example of one possible instance of the problem, N is an array 12 units in size and M is an array 1,000 units in size, and I want to find which elements of N also exist in M (there may not be any matches). The more parallel the solution, the better. I used to use a hash map for this, but it's not quite as efficient as I'd like it to be. Typing this out, I just thought of running a binary search of M on sizeof(N) independent threads (using CUDA). I'll see how this works, though other suggestions are welcome.
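
    One CPU-side way to parallelise the binary-search idea is to sort M once and then probe it from a parallel stream, one independent lookup per element of N; this is only a Java sketch, not the CUDA version the question mentions:

        import java.util.Arrays;
        import java.util.stream.IntStream;

        public class IntersectSmallIntoLarge {
            // Returns the elements of n that also occur in m.
            public static int[] elementsOfNInM(int[] n, int[] m) {
                int[] sortedM = m.clone();
                Arrays.sort(sortedM);                         // O(|M| log |M|), done once
                return IntStream.of(n)
                        .parallel()                           // each probe is independent
                        .filter(x -> Arrays.binarySearch(sortedM, x) >= 0)
                        .toArray();
            }

            public static void main(String[] args) {
                int[] n = {3, 7, 42, 100};
                int[] m = {1, 2, 3, 5, 8, 13, 42};
                System.out.println(Arrays.toString(elementsOfNInM(n, m))); // [3, 42]
            }
        }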

    Read the article

  • Execute a block of database queries

    - by Nightmare
    I have the following task to complete: in my program I have a block of database queries, or "questions". I want to execute these questions and wait for the results of all of them, or catch an error if any one question fails. My question object looks like this (simplified):

        public class DbQuestion(String sql)
        {
            [...]
        }

        [...]

        // The answer is just a holder for custom data...
        public void SetAnswer(DbAnswer answer)
        {
            // Store the answer in the question and fire an event to the listeners
            this.OnAnswered(EventArgs.Empty);
        }

        [...]

        public void SetError()
        {
            // Signal an error in this query!
            this.OnError(EventArgs.Empty);
        }

    So every question fired at the database has a listener that waits for the parsed result. Now I want to fire some questions at the database asynchronously (max. 5 or so) and raise an event with the data from all questions, or an error if even one question throws one. What is the best, or at least a good, way to accomplish this task? Can I really execute more than one question in parallel and stop all my work when one question throws an error? I think I need some inspiration on this... Just a note: I'm working with .NET Framework 2.0.
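
    The question targets .NET 2.0; purely as an illustration of the "wait for all, fail if any" shape, here is a minimal Java sketch built from ExecutorService.invokeAll, which blocks until every task has finished and surfaces the first failure when its Future is read:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class QueryBatch {
            // Runs all queries on a small pool and returns their results,
            // or rethrows a failure once every task has completed.
            public static List<String> runAll(List<Callable<String>> queries)
                    throws InterruptedException, ExecutionException {
                ExecutorService pool = Executors.newFixedThreadPool(5);
                try {
                    List<Future<String>> futures = pool.invokeAll(queries); // waits for all
                    List<String> results = new ArrayList<String>();
                    for (Future<String> f : futures) {
                        results.add(f.get()); // throws ExecutionException if that query failed
                    }
                    return results;
                } finally {
                    pool.shutdown();
                }
            }
        }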

    Read the article

  • Java: Allowing the child thread to kill itself on InterruptedException?

    - by Zombies
    I am using a thread pool via ExecutorService. Calling shutdownNow() interrupts all running threads in the pool. When this happens I want these threads to give up their resources (socket and DB connections) and simply die, without running any more logic, e.g. without inserting anything into the DB. What is the simplest way to achieve this? Below is some sample code:

        public void threadTest() {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(999999);
                    } catch (InterruptedException e) {
                        // invoke thread suicide logic here
                    }
                }
            });
            t.start();
            t.interrupt();
            try {
                Thread.sleep(4000);
            } catch (InterruptedException e) {
            }
        }
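
    A common shape for the "die quietly on interrupt" requirement: treat InterruptedException as the shutdown signal, restore the interrupt flag, skip any remaining logic, and release resources in a finally block. A minimal sketch in which the resource is just a placeholder Closeable:

        import java.io.Closeable;
        import java.io.IOException;

        public class CancellableTask implements Runnable {
            private final Closeable connection;   // stand-in for a socket or DB connection

            public CancellableTask(Closeable connection) {
                this.connection = connection;
            }

            public void run() {
                try {
                    doWork();                           // blocking calls throw InterruptedException
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // keep the interrupt status visible
                    // fall through: no DB insert, no further logic
                } finally {
                    try {
                        connection.close();             // always release the resource
                    } catch (IOException ignored) {
                    }
                }
            }

            private void doWork() throws InterruptedException {
                Thread.sleep(999999);                   // placeholder for the real blocking work
            }
        }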

    Read the article

  • Do I really need a lock on a shared counter with multiple threads on one CPU core?

    - by MrROY
    If I have some code that looks like this (please ignore the syntax; I want to understand it without reference to a specific language):

        count = 0

        def countDown():
            count += 1

        if __name__ == '__main__':
            thread1(countDown)
            thread2(countDown)
            thread3(countDown)

    Here I have a CPU with only one core. Do I really need a lock on the variable count, in case it gets overwritten by other threads? I don't know, but if the answer depends on the language, please explain it for Java, C and Python. Many thanks.
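
    Yes: even on a single core the increment is a read-modify-write, so the scheduler can switch threads between the read and the write and one update is lost. A minimal Java sketch contrasting a plain counter with an AtomicInteger (the plain count will vary from run to run):

        import java.util.concurrent.atomic.AtomicInteger;

        public class CounterDemo {
            static int plainCount = 0;                            // unsafe: ++ is not atomic
            static final AtomicInteger safeCount = new AtomicInteger();

            public static void main(String[] args) throws InterruptedException {
                Runnable work = () -> {
                    for (int i = 0; i < 100_000; i++) {
                        plainCount++;                             // may lose updates
                        safeCount.incrementAndGet();              // never loses updates
                    }
                };
                Thread t1 = new Thread(work), t2 = new Thread(work), t3 = new Thread(work);
                t1.start(); t2.start(); t3.start();
                t1.join();  t2.join();  t3.join();
                System.out.println("plain: " + plainCount);       // often less than 300000
                System.out.println("atomic: " + safeCount.get()); // always 300000
            }
        }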

    Read the article

  • Why isn't it possible to update an ObservableCollection from a different thread?

    - by MainMa
    In a multi-threaded WPF application, it is not possible to update an ObservableCollection from a thread other than the WPF window thread. I know there are workarounds, so my question is not how to avoid the "This type of CollectionView does not support changes to its SourceCollection from a thread different from the Dispatcher thread" exception. My question is: why is there such an exception? Why wasn't it possible to allow collection updates from any thread? Personally, I don't see any reason to block UI updates when an ObservableCollection is changed from another thread. If two threads (including parallel ones) access the same object, one listening for changes to the object's properties through events and the other one making changes, it will always work, at least if locks are used properly. So, what are the reasons?

    Read the article

  • How is a SendMessage from a different thread executed?

    - by Lorenzo
    When we send a message, "if the specified window was created by the calling thread, the window procedure is called immediately as a subroutine". But "if the specified window was created by a different thread, the system switches to that thread and calls the appropriate window procedure. Messages sent between threads are processed only when the receiving thread executes message retrieval code." (taken from the MSDN documentation for SendMessage). Now, I don't understand how (or, more appropriately, when) the target window procedure is called. Of course the target thread will not be preempted (its program counter is not changed). I presume that the call happens during some wait function (like GetMessage or PeekMessage); is that true? Is that process documented in detail somewhere?

    Read the article

  • How can I submit client-side answers (to a question) to the server using Java?

    - by mdrafi
    How can I submit a client-side user's answers (to a multiple-choice question) to the server using Java? I have a centralized server and about 1000 client systems. On these 1000 systems, students take a multiple-choice quiz at the same time (within roughly 2 hours). I have to send each student's answers to the server through an asynchronous, threaded queue as the student answers each question (for all 1000 students). The client also has to cope with a failed server connection: in that case students should be able to continue taking the quiz/exam, and when the connection comes back the queued answers should be submitted to the server. How can I solve this problem? Please suggest an approach.
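
    One common shape for this is a producer/consumer queue on each client: the quiz UI enqueues every answer immediately, and a background thread drains the queue and retries while the connection is down, so the student is never blocked. A minimal Java sketch; sendToServer() is a placeholder, not a real API:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class AnswerUploader implements Runnable {
            private final BlockingQueue<String> pending = new LinkedBlockingQueue<String>();

            // Called from the quiz UI; never blocks the student.
            public void submit(String answer) {
                pending.add(answer);
            }

            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        String answer = pending.take();        // wait for the next answer
                        while (!sendToServer(answer)) {        // retry until the server accepts it
                            Thread.sleep(5000);                // back off while the connection is down
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();    // shut down the uploader
                    }
                }
            }

            // Placeholder: return true on success, false if the connection failed.
            private boolean sendToServer(String answer) {
                return true;
            }
        }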

    Read the article

  • VB.net SyncLock Object

    - by Budius
    I have always seen SyncLock examples where people use:

        Private Lock1 As New Object ' declaration
        SyncLock Lock1              ' usage

    but why? In my specific case I'm locking a Queue to avoid problems with multi-threaded Enqueue and Dequeue calls on my data. Can I lock the Queue object itself, like this?

        Private cmdQueue As New Queue(Of QueueItem) ' declaration
        SyncLock cmdQueue                           ' usage

    Any help is appreciated. Thanks.
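
    The question is VB.NET, but the trade-off is the same as in Java: locking on the collection itself works as long as every access uses that same lock and the collection is never handed out to other code, while a dedicated private lock object additionally guarantees that nothing outside the class can ever contend for the lock. A minimal Java sketch of locking on the queue itself (names invented):

        import java.util.ArrayDeque;
        import java.util.Queue;

        public class CommandQueue {
            private final Queue<String> cmdQueue = new ArrayDeque<String>();

            // Every method that touches cmdQueue locks on the queue object itself.
            // Safe as long as the queue reference is never exposed, because outside
            // code could otherwise synchronize on it and block these methods.
            public void enqueue(String cmd) {
                synchronized (cmdQueue) {
                    cmdQueue.add(cmd);
                }
            }

            public String dequeue() {
                synchronized (cmdQueue) {
                    return cmdQueue.poll();   // null if empty
                }
            }
        }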

    Read the article

  • ConcurrentModificationException in Java HashMap

    - by Bear
    Suppose I have two methods in my class, writeToMap() and processKey(), and both methods are called by multiple threads. writeToMap() writes something into a HashMap, and processKey() does something based on the key set of that HashMap. Inside processKey(), I first copy the original map before getting the key set:

        new HashMap<String, Map<String, String>>(originalMap).get("xx").keySet();

    But I am still getting a ConcurrentModificationException even though I always copy the map. What's the problem?
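
    One likely cause is that the HashMap copy constructor itself iterates over originalMap, so a write happening during the copy can still throw. The usual fixes are to guard both methods with the same lock or to switch to ConcurrentHashMap, whose iterators are weakly consistent; a minimal sketch of the latter (field and method names are illustrative):

        import java.util.Collections;
        import java.util.Map;
        import java.util.Set;
        import java.util.concurrent.ConcurrentHashMap;

        public class KeyProcessor {
            // Concurrent map: iteration never throws ConcurrentModificationException.
            private final Map<String, Map<String, String>> originalMap =
                    new ConcurrentHashMap<String, Map<String, String>>();

            public void writeToMap(String outerKey, String innerKey, String value) {
                originalMap
                        .computeIfAbsent(outerKey, k -> new ConcurrentHashMap<String, String>())
                        .put(innerKey, value);
            }

            public Set<String> processKey(String outerKey) {
                Map<String, String> inner = originalMap.get(outerKey);
                // The returned key set is a live, weakly consistent view; safe to iterate.
                return inner == null ? Collections.<String>emptySet() : inner.keySet();
            }
        }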

    Read the article

  • Why do people run Java GUIs on the Event Queue?

    - by asmo
    In Java, to create and show a new JFrame, I simply do this:

        public static void main(String[] args) {
            new JFrame().setVisible(true);
        }

    However, I have seen many people doing it like this:

        public static void main(String[] args) {
            EventQueue.invokeLater(new Runnable() {
                public void run() {
                    new JFrame().setVisible(true);
                }
            });
        }

    Why? Are there any advantages?

    Read the article

  • Threading and iterating through changing collections

    - by adamjellyit
    In C# (a console app) I want to hold a collection of objects, all of the same type. I want to iterate through the collection calling a method on each object, and then repeat the process continuously. However, during iteration objects can be added to or removed from the list (the objects themselves will not be destroyed, just removed from the list). I'm not sure what would happen with a foreach loop or a similar method. This must have been done a thousand times before; can you recommend a solid approach?
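
    The question is about C#; as a Java sketch of one common answer, CopyOnWriteArrayList lets one thread iterate a stable snapshot while other threads add and remove, at the cost of copying the array on every write. A minimal illustration (class name invented):

        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;

        public class Poller implements Runnable {
            // Iterators over this list see a snapshot; concurrent add/remove is safe.
            private final List<Runnable> tasks = new CopyOnWriteArrayList<Runnable>();

            public void add(Runnable task)    { tasks.add(task); }
            public void remove(Runnable task) { tasks.remove(task); }

            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    for (Runnable task : tasks) {   // safe even if the list changes meanwhile
                        task.run();
                    }
                }
            }
        }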

    Read the article

  • Problem specifying a ThreadPoolExecutor

    - by Sarmun
    Is there any way to create an Executor that always has at least 5 threads, at most 20 threads, and an unbounded queue for tasks (meaning no task is rejected)? I tried

        new ThreadPoolExecutor(5, 20, 60L, TimeUnit.SECONDS, queue)

    with every kind of queue I could think of:

        new LinkedBlockingQueue()         // never runs more than 5 threads
        new LinkedBlockingQueue(1000000)  // runs more than 5 threads only when more than 1000000 tasks are waiting
        new ArrayBlockingQueue(1000000)   // runs more than 5 threads only when more than 1000000 tasks are waiting
        new SynchronousQueue()            // no task can wait; after 20 threads, tasks are rejected

    and none of them behaved as I wanted.
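
    One frequently used workaround: ThreadPoolExecutor only grows past corePoolSize when the queue refuses an offer, so with an unbounded queue the pool is effectively capped at the core size. Making the core size the real maximum and prestarting a few threads gives "at least 5, at most 20, nothing rejected", at the cost of keeping idle threads around; a minimal sketch of that trade-off:

        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.ThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        public class PoolFactory {
            // With an unbounded queue, threads beyond corePoolSize are never created,
            // so the core size becomes the real maximum (20) and a floor is prestarted.
            public static ThreadPoolExecutor newPool() {
                ThreadPoolExecutor pool = new ThreadPoolExecutor(
                        20, 20,                                // core == max: up to 20 workers
                        60L, TimeUnit.SECONDS,
                        new LinkedBlockingQueue<Runnable>());  // unbounded: no task is rejected
                for (int i = 0; i < 5; i++) {
                    pool.prestartCoreThread();                 // guarantee at least 5 threads up front
                }
                return pool;
            }
        }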

    Read the article

  • Singleton pattern in Java - lazy initialization

    - by flash
        public static MySingleton getInstance() {
            if (_instance == null) {
                synchronized (MySingleton.class) {
                    _instance = new MySingleton();
                }
            }
            return _instance;
        }

    1. Is there a flaw in the above implementation of the getInstance method?
    2. What is the difference between the two implementations?

        public static synchronized MySingleton getInstance() {
            if (_instance == null) {
                _instance = new MySingleton();
            }
            return _instance;
        }

    I have seen a lot of answers about the singleton pattern on Stack Overflow, but I posted this question mainly to understand the difference between synchronizing at the method level and at the block level in this particular case.
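
    For reference, the usual correct form of the block-level version declares the field volatile and re-checks inside the synchronized block (double-checked locking); a minimal sketch, with the lazy-holder idiom being a simpler alternative:

        public class MySingleton {
            // volatile is required so a fully constructed instance is visible to all threads.
            private static volatile MySingleton _instance;

            private MySingleton() { }

            public static MySingleton getInstance() {
                if (_instance == null) {                    // first check, no locking
                    synchronized (MySingleton.class) {
                        if (_instance == null) {            // second check, under the lock
                            _instance = new MySingleton();
                        }
                    }
                }
                return _instance;
            }
        }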

    Read the article

  • Open a file for reading multiple times

    - by alaamh
    I want to use dup2 to read from an input file and redirect it to the input of an exec function. My problem is that I have three running processes, all of which have to open the same input file, but they do different jobs. What would you suggest in such a case? I don't know whether it is possible to use "cat data.txt" to feed the input of the three other processes, and I don't know how to do that.

    Read the article
