Search Results

Search found 8158 results on 327 pages for 'deadlocked thread'.

Page 16/327

  • Read -> change -> save. Thread safe.

    - by Pavel Alexeev
    This code should automatically connect players when they enter a game. The problem is when two users try to connect at the same time - in that case the 2nd user can easily overwrite changes made by the 1st user (the 'room_1' variable). How could I make it thread-safe?

        def join(userId):
            users = memcache.get('room_1')
            users.append(userId)
            memcache.set('room_1', users)
            return users

    I'm using Google App Engine (Python) and am going to implement a simple game server for exchanging peers given by Adobe Stratus.
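
    One way to close this race is optimistic concurrency with memcache compare-and-set; a minimal retry-loop sketch, assuming the gets()/cas() pair on google.appengine.api.memcache.Client (which may require a newer SDK than the plain get()/set() calls above):

        from google.appengine.api import memcache

        def join(userId, max_retries=10):
            client = memcache.Client()              # gets()/cas() need a Client instance
            for _ in range(max_retries):
                users = client.gets('room_1')
                if users is None:
                    # first writer creates the list; add() fails if someone beat us to it
                    if client.add('room_1', [userId]):
                        return [userId]
                    continue
                users.append(userId)
                # cas() only succeeds if 'room_1' has not changed since gets()
                if client.cas('room_1', users):
                    return users
            return None                             # give up after too many collisions

    Another option on App Engine is to keep the room membership in the datastore and update it inside a transaction, which gives the same read-modify-write atomicity.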

    Read the article

  • Sucky MSTest and the "WaitAll for multiple handles on a STA thread is not supported" Error

    - by Anne Bougie
    If you are doing any multi-threading and are using MSTest, you will probably run across this error. For some reason, MSTest by default runs in STA threading mode. WTF, Microsoft! Why so stuck in the old COM world?  When I run the same test using NUnit, I don't have this problem. Unfortunately, my company has chosen MSTest, so I have a lot of testing problems. NUnit is so much better, IMO. After determining that I wasn't referencing any unmanaged code that would flip the thread into STA, which can also cause this error, the only thing left was the testing suite I was using. I dug around a little and found this obscure setting for the Test Run Config settings file that you can't set using its interface. You have to open it up as a text file and add the following setting:  <ExecutionThread apartmentState="MTA" /> This didn't break any other tests, so I'm not sure why it's not the default, or why there is nothing in the test run configuration app to change this setting. Here is the code I was testing:  public void ProcessTest(ProcessInfo[] infos) {    WaitHandle[] waits = new WaitHandle[infos.Length];    int i = 0;    foreach (ProcessInfo info in infos)    {       AutoResetEvent are = new AutoResetEvent(false);       info.Are = are;       waits[i++] = are;         Processor pr = new Processor();       WaitCallback callback = pr.ProcessTest;       ThreadPool.QueueUserWorkItem(callback, info);    }      WaitHandle.WaitAll(waits); }
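
    If editing the .testrunconfig by hand is not an option, another workaround is to skip WaitHandle.WaitAll altogether, since the STA restriction only applies to waiting on multiple handles at once; waiting on each handle in turn is allowed. A rough sketch of a drop-in helper (the helper name and per-handle timeout are mine):

        static void WaitAllOneByOne(WaitHandle[] handles, TimeSpan timeoutPerHandle)
        {
            // Waiting on one handle at a time is permitted on an STA thread, and the
            // overall effect is the same: return only once every handle has signaled.
            foreach (WaitHandle handle in handles)
            {
                if (!handle.WaitOne(timeoutPerHandle))
                    throw new TimeoutException("A worker item did not signal in time.");
            }
        }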

    Read the article

  • Can't get past 2542 Threads in Java on 4GB iMac OSX 10.6.3 Snow Leopard (32bit)

    - by fuzzy lollipop
    I am running the following program trying to figure out how to configure my JVM to get the maximum number of threads my machine can support. For those that might not know, Snow Leopard ships with Java 6. I tried starting it with defaults and with the following command lines; I always get the Out of Memory Error at Thread 2542, no matter what the JVM options are set to. java TestThreadStackSizes 100000 java -Xss1024 TestThreadStackSizes 100000 java -Xmx128m -Xss1024 TestThreadStackSizes 100000 java -Xmx2048m -Xss1024 TestThreadStackSizes 100000 java -Xmx2048m -Xms2048m -Xss1024 TestThreadStackSizes 100000 No matter what I pass it, I get the same result: an Out of Memory Error at 2542. public class TestThreadStackSizes { public static void main(final String[] args) { Thread.currentThread().setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() { public void uncaughtException(final Thread t, final Throwable e) { System.err.println(e.getMessage()); System.exit(1); } }); int numThreads = 1000; if (args.length == 1) { numThreads = Integer.parseInt(args[0]); } for (int i = 0; i < numThreads; i++) { try { Thread t = new Thread(new SleeperThread(i)); t.start(); } catch (final OutOfMemoryError e) { throw new RuntimeException(String.format("Out of Memory Error on Thread %d", i), e); } } } private static class SleeperThread implements Runnable { private final int i; private SleeperThread(final int i) { this.i = i; } public void run() { try { System.out.format("Thread %d about to sleep\n", this.i); Thread.sleep(1000 * 60 * 60); } catch (final InterruptedException e) { throw new RuntimeException(e); } } } } Any ideas on how I can affect these results?
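
    Two things worth noting (assumptions about a 32-bit HotSpot JVM rather than anything verified on this exact machine): every thread's stack is carved out of the same roughly 4 GB address space as the Java heap, so a large -Xmx leaves less room for stacks, and -Xss takes a size with a unit, so -Xss1024 asks for 1024 bytes rather than kilobytes. Shrinking both (for example -Xmx64m -Xss64k) usually raises the thread ceiling. The stack size can also be requested per thread in code:

        // Sketch: the four-argument Thread constructor takes a requested stack size in
        // bytes; the JVM treats it only as a hint and may ignore or round it.
        Thread t = new Thread(null, new SleeperThread(i), "sleeper-" + i, 64 * 1024);
        t.start();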

    Read the article

  • Change classloader

    - by Chris
    I'm trying to switch the class loader at runtime: public class Test { public static void main(String[] args) throws Exception { final InjectingClassLoader classLoader = new InjectingClassLoader(); Thread.currentThread().setContextClassLoader(classLoader); Thread thread = new Thread("test") { public void run() { System.out.println("running..."); // approach 1 ClassLoader cl = TestProxy.class.getClassLoader(); try { Class c = classLoader.loadClass("classloader.TestProxy"); Object o = c.newInstance(); c.getMethod("test", new Class[] {}).invoke(o); } catch (Exception e) { e.printStackTrace(); } // approach 2 new TestProxy().test(); }; }; thread.setContextClassLoader(classLoader); thread.start(); } } and: public class TestProxy { public void test() { ClassLoader tcl = Thread.currentThread().getContextClassLoader(); ClassLoader ccl = ClassToLoad.class.getClassLoader(); ClassToLoad classToLoad = new ClassToLoad(); } } (It is not relevant what the InjectingClassLoader is.) I'd like to make the results of "approach 1" and "approach 2" exactly the same, but it looks like thread.setContextClassLoader(classLoader) does nothing and "approach 2" always uses the system classloader (as can be determined by comparing the tcl and ccl variables while debugging). Is it possible to make all classes loaded by the new thread use the given classloader?
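
    The short explanation (a general JVM rule, not something specific to InjectingClassLoader): a plain new TestProxy() is resolved by the class loader that defined the calling class, and the context class loader is never consulted unless code asks for it explicitly. So approach 2 can only go through the custom loader if the calling class itself was loaded by it; otherwise loading explicitly via the context loader is the way, roughly:

        // Sketch: route loading through the thread's context class loader explicitly.
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        Class<?> c = Class.forName("classloader.TestProxy", true, cl);
        Object proxy = c.getDeclaredConstructor().newInstance();
        c.getMethod("test").invoke(proxy);

    An alternative is to load the entry-point class itself through the InjectingClassLoader (for example by reflection from a small bootstrap class), so that every class it references is then resolved by that loader automatically.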

    Read the article

  • Microsoft Enterprise Library Caching Application Block not thread safe?!

    - by AlanR
    Good afternoon, I created a super simple console app to test out the Enterprise Library Caching Application Block, and the behavior is baffling. I'm hoping I screwed up something that's easy to fix in the setup. I have each item expire after 5 seconds for testing purposes. Basic setup -- "every second, pick a number between 0 and 2. If the cache doesn't already have it, put it in there -- otherwise just grab it from the cache. Do this inside a LOCK statement to ensure thread safety." APP.CONFIG: <configuration> <configSections> <section name="cachingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </configSections> <cachingConfiguration defaultCacheManager="Cache Manager"> <cacheManagers> <add expirationPollFrequencyInSeconds="1" maximumElementsInCacheBeforeScavenging="1000" numberToRemoveWhenScavenging="10" backingStoreName="Null Storage" type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Cache Manager" /> </cacheManagers> <backingStores> <add encryptionProviderName="" type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Null Storage" /> </backingStores> </cachingConfiguration> </configuration> C#: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Practices.EnterpriseLibrary.Common; using Microsoft.Practices.EnterpriseLibrary.Caching; using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations; namespace ConsoleApplication1 { class Program { public static ICacheManager cache = CacheFactory.GetCacheManager("Cache Manager"); static void Main(string[] args) { while (true) { System.Threading.Thread.Sleep(1000); // sleep for one second. var key = new Random().Next(3).ToString(); string value; lock (cache) { if (!cache.Contains(key)) { cache.Add(key, key, CacheItemPriority.Normal, null, new SlidingTime(TimeSpan.FromSeconds(5))); } value = (string)cache.GetData(key); } Console.WriteLine("{0} --> '{1}'", key, value); //if (null == value) throw new Exception(); } } } } OUTPUT -- How can I prevent the cache from returning nulls? 2 --> '2' 1 --> '1' 2 --> '2' 0 --> '0' 2 --> '2' 0 --> '0' 1 --> '' 0 --> '0' 1 --> '1' 2 --> '' 0 --> '0' 2 --> '2' 0 --> '0' 1 --> '' 2 --> '2' 1 --> '1' Press any key to continue . . . Thanks in advance, -Alan.
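
    One likely explanation (an assumption based on how the Caching Application Block behaves, not something provable from the snippet alone): expiration is enforced by the block's own background thread, which neither knows nor cares about the lock around the cache object, so an item can expire between Contains() and GetData() even inside the lock. Treating GetData() as the single source of truth avoids that race; a sketch using the same API calls as above:

        lock (cache)
        {
            value = (string)cache.GetData(key);
            if (value == null)
            {
                // miss (or just expired): rebuild the value and re-add it
                value = key;
                cache.Add(key, value, CacheItemPriority.Normal, null,
                          new SlidingTime(TimeSpan.FromSeconds(5)));
            }
        }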

    Read the article

  • The application called an interface that was marshalled for a different thread

    - by X-Ray
    I'm writing a Delphi app that communicates with Excel. One thing I noticed is that if I call the Save method on the Excel workbook object, it can appear to hang because Excel has a dialog box open for the user. I'm using late binding. I'd like for my app to be able to notice when Save takes several seconds and then take some kind of action, like showing a dialog box telling the user what's happening. I figured this'd be fairly easy: all I'd need to do is create a thread that calls Excel's Save routine; if it takes too long, I can take some action. procedure TOfficeConnect.Save; var Thread:TOfficeSaveThread; begin // spin off as thread so we can control timeout Thread:=TOfficeSaveThread.Create(m_vExcelWorkbook); if WaitForSingleObject(Thread.Handle, 5 {s} * 1000 {ms/s})=WAIT_TIMEOUT then begin Thread.FreeOnTerminate:=true; raise Exception.Create(_('The Office spreadsheet program seems to be busy.')); end; Thread.Free; end; TOfficeSaveThread = class(TThread) private { Private declarations } m_vExcelWorkbook:variant; protected procedure Execute; override; procedure DoSave; public constructor Create(vExcelWorkbook:variant); end; { TOfficeSaveThread } constructor TOfficeSaveThread.Create(vExcelWorkbook:variant); begin inherited Create(true); m_vExcelWorkbook:=vExcelWorkbook; Resume; end; procedure TOfficeSaveThread.Execute; begin m_vExcelWorkbook.Save; end; I understand this problem happens because the OLE object was created from another thread (absolutely). How can I get around this problem? Most likely I'll need to "re-marshal" for this call somehow... any ideas? Thank you!

    Read the article

  • Java ThreadPoolExecutor getting stuck while using ArrayBlockingQueue

    - by Ravi Rao
    Hi, I'm working on an application and using ThreadPoolExecutor to handle various tasks. The ThreadPoolExecutor is getting stuck after some duration. To simulate this in a simpler environment, I've written some simple code that reproduces the issue. import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.RejectedExecutionHandler; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; public class MyThreadPoolExecutor { private int poolSize = 10; private int maxPoolSize = 50; private long keepAliveTime = 10; private ThreadPoolExecutor threadPool = null; private final ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>( 100000); public MyThreadPoolExecutor() { threadPool = new ThreadPoolExecutor(poolSize, maxPoolSize, keepAliveTime, TimeUnit.SECONDS, queue); threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() { @Override public void rejectedExecution(Runnable runnable, ThreadPoolExecutor threadPoolExecutor) { System.out .println("Execution rejected. Please try restarting the application."); } }); } public void runTask(Runnable task) { threadPool.execute(task); } public void shutDown() { threadPool.shutdownNow(); } public ThreadPoolExecutor getThreadPool() { return threadPool; } public void setThreadPool(ThreadPoolExecutor threadPool) { this.threadPool = threadPool; } public static void main(String[] args) { MyThreadPoolExecutor mtpe = new MyThreadPoolExecutor(); for (int i = 0; i < 1000; i++) { final int j = i; mtpe.runTask(new Runnable() { @Override public void run() { System.out.println(j); } }); } } } Try executing this code a few times. It normally prints out the numbers on the console and, when all threads end, it exits. But at times it finishes all the tasks and then does not terminate.
The thread dump is as follows: MyThreadPoolExecutor [Java Application] MyThreadPoolExecutor at localhost:2619 (Suspended) Daemon System Thread [Attach Listener] (Suspended) Daemon System Thread [Signal Dispatcher] (Suspended) Daemon System Thread [Finalizer] (Suspended) Object.wait(long) line: not available [native method] ReferenceQueue<T>.remove(long) line: not available ReferenceQueue<T>.remove() line: not available Finalizer$FinalizerThread.run() line: not available Daemon System Thread [Reference Handler] (Suspended) Object.wait(long) line: not available [native method] Reference$Lock(Object).wait() line: 485 Reference$ReferenceHandler.run() line: not available Thread [pool-1-thread-1] (Suspended) Unsafe.park(boolean, long) line: not available [native method] LockSupport.park(Object) line: not available AbstractQueuedSynchronizer$ConditionObject.await() line: not available ArrayBlockingQueue<E>.take() line: not available ThreadPoolExecutor.getTask() line: not available ThreadPoolExecutor$Worker.run() line: not available Thread.run() line: not available Thread [pool-1-thread-2] (Suspended) Unsafe.park(boolean, long) line: not available [native method] LockSupport.park(Object) line: not available AbstractQueuedSynchronizer$ConditionObject.await() line: not available ArrayBlockingQueue<E>.take() line: not available ThreadPoolExecutor.getTask() line: not available ThreadPoolExecutor$Worker.run() line: not available Thread.run() line: not available Thread [pool-1-thread-3] (Suspended) Unsafe.park(boolean, long) line: not available [native method] LockSupport.park(Object) line: not available AbstractQueuedSynchronizer$ConditionObject.await() line: not available ArrayBlockingQueue<E>.take() line: not available ThreadPoolExecutor.getTask() line: not available ThreadPoolExecutor$Worker.run() line: not available Thread.run() line: not available Thread [pool-1-thread-4] (Suspended) Unsafe.park(boolean, long) line: not available [native method] LockSupport.park(Object) line: not available AbstractQueuedSynchronizer$ConditionObject.await() line: not available ArrayBlockingQueue<E>.take() line: not available ThreadPoolExecutor.getTask() line: not available ThreadPoolExecutor$Worker.run() line: not available Thread.run() line: not available Thread [pool-1-thread-6] (Suspended) Unsafe.park(boolean, long) line: not available [native method] LockSupport.park(Object) line: not available AbstractQueuedSynchronizer$ConditionObject.await() line: not available ArrayBlockingQueue<E>.take() line: not available ThreadPoolExecutor.getTask() line: not available ThreadPoolExecutor$Worker.run() line: not available Thread.run() line: not available Thread [pool-1-thread-8] (Suspended) Unsafe.park(boolean, long) line: not available [native method] LockSupport.park(Object) line: not available AbstractQueuedSynchronizer$ConditionObject.await() line: not available ArrayBlockingQueue<E>.take() line: not available ThreadPoolExecutor.getTask() line: not available ThreadPoolExecutor$Worker.run() line: not available Thread.run() line: not available Thread [pool-1-thread-5] (Suspended) Unsafe.park(boolean, long) line: not available [native method] LockSupport.park(Object) line: not available AbstractQueuedSynchronizer$ConditionObject.await() line: not available ArrayBlockingQueue<E>.take() line: not available ThreadPoolExecutor.getTask() line: not available ThreadPoolExecutor$Worker.run() line: not available Thread.run() line: not available Thread [pool-1-thread-10] (Suspended) Unsafe.park(boolean, long) line: not available [native method] LockSupport.park(Object) line: not available AbstractQueuedSynchronizer$ConditionObject.await() line: not available ArrayBlockingQueue<E>.take() line: not available ThreadPoolExecutor.getTask() line: not available ThreadPoolExecutor$Worker.run() line: not available Thread.run() line: not available Thread [pool-1-thread-9] (Suspended) Unsafe.park(boolean, long) line: not available [native method] LockSupport.park(Object) line: not available AbstractQueuedSynchronizer$ConditionObject.await() line: not available ArrayBlockingQueue<E>.take() line: not available ThreadPoolExecutor.getTask() line: not available ThreadPoolExecutor$Worker.run() line: not available Thread.run() line: not available Thread [pool-1-thread-7] (Suspended) Unsafe.park(boolean, long) line: not available [native method] LockSupport.park(Object) line: not available AbstractQueuedSynchronizer$ConditionObject.await() line: not available ArrayBlockingQueue<E>.take() line: not available ThreadPoolExecutor.getTask() line: not available ThreadPoolExecutor$Worker.run() line: not available Thread.run() line: not available Thread [DestroyJavaVM] (Suspended) C:\Program Files\Java\jre1.6.0_07\bin\javaw.exe (Jun 17, 2010 10:42:33 AM) In my actual application, ThreadPoolExecutor threads go into this state and then it stops responding. Regards, Ravi Rao
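
    The dump itself suggests this is not a deadlock: every pool worker is parked in ArrayBlockingQueue.take() inside getTask(), i.e. idle and waiting for more work. By default a ThreadPoolExecutor's core threads are non-daemon and never time out, so the JVM simply cannot exit until the pool is shut down. A sketch of the usual fix at the end of main(), using the accessors from the code above:

        // Either shut the pool down once all work has been submitted...
        mtpe.getThreadPool().shutdown();
        mtpe.getThreadPool().awaitTermination(1, TimeUnit.MINUTES);   // throws InterruptedException

        // ...or let idle core threads expire so the pool drains itself (Java 6+):
        // mtpe.getThreadPool().allowCoreThreadTimeOut(true);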

    Read the article

  • What is the impact of Thread.Sleep(1) in C#?

    - by Justin Tanner
    In a Windows Forms application, what is the impact of calling Thread.Sleep(1), as illustrated in the following code: public Constructor() { Thread thread = new Thread(Task); thread.IsBackground = true; thread.Start(); } private void Task() { while (true) { // do something Thread.Sleep(1); } } Will this thread hog all of the available CPU? What profiling techniques can I use to measure this thread's CPU usage (other than Task Manager)?
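
    Sleep(1) gives up the rest of the thread's time slice, so the loop will not spin a core at 100%; in practice the sleep often lasts closer to the system timer resolution (roughly 15 ms by default on Windows) than 1 ms. One crude way to measure it without Task Manager is to sample the CPU time the process has consumed over a known interval; a sketch (for per-thread numbers the same idea works with ProcessThread.TotalProcessorTime):

        // Needs System.Diagnostics and System.Threading.
        TimeSpan before = Process.GetCurrentProcess().TotalProcessorTime;
        Thread.Sleep(TimeSpan.FromSeconds(10));      // let the worker loop run for a while
        TimeSpan after = Process.GetCurrentProcess().TotalProcessorTime;
        Console.WriteLine("CPU consumed in 10 s: {0:F0} ms", (after - before).TotalMilliseconds);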

    Read the article

  • Using Parallel Extensions with ThreadStatic attribute. Could it leak memory?

    - by the-locster
    I'm using Parallel Extensions fairly heavily and I've just now encountered a case where using thread-local storage might be sensible to allow re-use of objects by worker threads. As such I was looking at the ThreadStatic attribute, which marks a static field/variable as having a unique value per thread. It seems to me that it would be unwise to use PE with the ThreadStatic attribute without any guarantee of thread re-use by PE. That is, if threads are created and destroyed to some degree, would the variables (and thus the objects they point to) remain in thread-local storage for some indeterminate amount of time, thus causing a memory leak? Or perhaps the thread storage is tied to the threads and disposed of when the threads are disposed? But then you still potentially have threads in a pool that are long-lived and that accumulate thread-local storage from the various pieces of code the threads are used for. Is there a better approach to obtaining thread-local storage with PE? Thank you.
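
    With .NET 4's Parallel Extensions there are two alternatives that avoid [ThreadStatic] data lingering on pooled threads: ThreadLocal<T>, which is disposable, and the Parallel loop overloads that take explicit thread-local state, where the per-thread object only lives for the duration of the loop. A sketch of the latter (ExpensiveObject and its members are placeholders):

        Parallel.ForEach(
            items,
            () => new ExpensiveObject(),              // localInit: one instance per worker thread
            (item, loopState, local) =>
            {
                local.Process(item);                  // reuse the per-thread instance
                return local;                         // handed back to the next iteration
            },
            local => local.Dispose());                // localFinally: runs once per thread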

    Read the article

  • Sending message from working non-gui thread to the main window

    - by bartek
    I'm using the WinAPI. Is SendMessage/PostMessage a good, thread-safe method of communicating with the main window? Suppose the working thread is creating a bitmap that must be displayed on the screen. The working thread allocates a bitmap, sends a message with a pointer to this bitmap and waits until the GUI thread processes it (for example using SendMessage). The working thread shares no data with other threads. Am I running into trouble with such a design? Are there any other possibilities that do not introduce thread synchronization, locking, etc.?
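
    SendMessage from a worker thread is thread-safe but blocks the worker until the GUI thread has handled the message (and risks a deadlock if the GUI thread is in turn waiting on the worker). PostMessage avoids that by handing over ownership of the bitmap instead of waiting. A rough sketch; WM_NEW_BITMAP, Bitmap, RenderBitmap and DrawBitmap are placeholders, not real Win32 names:

        #define WM_NEW_BITMAP (WM_APP + 1)

        // worker thread: allocate, post the pointer, and never touch it again
        void WorkerThread(HWND hMainWnd)
        {
            Bitmap* bmp = RenderBitmap();
            PostMessage(hMainWnd, WM_NEW_BITMAP, 0, reinterpret_cast<LPARAM>(bmp));
        }

        // GUI thread: the window procedure takes ownership, draws, and frees
        LRESULT CALLBACK MainWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
        {
            if (msg == WM_NEW_BITMAP)
            {
                Bitmap* bmp = reinterpret_cast<Bitmap*>(lParam);
                DrawBitmap(hwnd, bmp);
                delete bmp;                 // the GUI thread owns it now
                return 0;
            }
            return DefWindowProc(hwnd, msg, wParam, lParam);
        }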

    Read the article

  • Thread scheduling C

    - by MRP
    #include <pthread.h> #include <stdio.h> #include <stdlib.h> #define NUM_THREADS 4 #define TCOUNT 5 #define COUNT_LIMIT 13 int done = 0; int count = 0; int thread_ids[4] = {0,1,2,3}; int thread_runtime[4] = {0,5,4,1}; pthread_mutex_t count_mutex; pthread_cond_t count_threshold_cv; void *inc_count(void *t) { int i; long my_id = (long)t; long run_time = thread_runtime[my_id]; if (my_id==2 && done ==0) { for(i=0; i< 5 ; i++) { if( i==4 ){done =1;} pthread_mutex_lock(&count_mutex); count++; if (count == COUNT_LIMIT) { pthread_cond_signal(&count_threshold_cv); printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count); } printf("inc_count(): thread %ld, count = %d, unlocking mutex\n", my_id, count); pthread_mutex_unlock(&count_mutex); } } if (my_id==3 && done==1) { for(i=0; i< 4 ; i++) { if(i == 3 ){ done = 2;} pthread_mutex_lock(&count_mutex); count++; if (count == COUNT_LIMIT) { pthread_cond_signal(&count_threshold_cv); printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count); } printf("inc_count(): thread %ld, count = %d, unlocking mutex\n", my_id, count); pthread_mutex_unlock(&count_mutex); } } if (my_id==4&& done == 2) { for(i=0; i< 8 ; i++) { pthread_mutex_lock(&count_mutex); count++; if (count == COUNT_LIMIT) { pthread_cond_signal(&count_threshold_cv); printf("inc_count(): thread %ld, count = %d Threshold reached.\n",my_id, count); } printf("inc_count(): thread %ld, count = %d, unlocking mutex\n", my_id, count); pthread_mutex_unlock(&count_mutex); } } pthread_exit(NULL); } void *watch_count(void *t) { long my_id = (long)t; printf("Starting watch_count(): thread %ld\n", my_id); pthread_mutex_lock(&count_mutex); if (count<COUNT_LIMIT) { pthread_cond_wait(&count_threshold_cv, &count_mutex); printf("watch_count(): thread %ld Condition signal received.\n", my_id); count += 125; printf("watch_count(): thread %ld count now = %d.\n", my_id, count); } pthread_mutex_unlock(&count_mutex); pthread_exit(NULL); } int main (int argc, char *argv[]) { int i, rc; long t1=1, t2=2, t3=3, t4=4; pthread_t threads[4]; pthread_attr_t attr; pthread_mutex_init(&count_mutex, NULL); pthread_cond_init (&count_threshold_cv, NULL); pthread_attr_init(&attr); pthread_attr_setdetachstate(&attr,PTHREAD_CREATE_JOINABLE); pthread_create(&threads[0], &attr, watch_count, (void *)t1); pthread_create(&threads[1], &attr, inc_count, (void *)t2); pthread_create(&threads[2], &attr, inc_count, (void *)t3); pthread_create(&threads[3], &attr, inc_count, (void *)t4); for (i=0; i<NUM_THREADS; i++) { pthread_join(threads[i], NULL); } printf ("Main(): Waited on %d threads. Done.\n", NUM_THREADS); pthread_attr_destroy(&attr); pthread_mutex_destroy(&count_mutex); pthread_cond_destroy(&count_threshold_cv); pthread_exit(NULL); } So this code creates 4 threads. Thread 1 keeps track of the count value while the other 3 increment the count value. The run time is the number of times a thread will increment the count value. I have a done value that allows the first thread to increment the count value first until its run time is up, so it's like first come, first served. My question is: is there a better way of implementing this? I have read about SCHED_FIFO and SCHED_RR, but I don't know how to implement them in this code, or whether it can be done.
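
    If the goal really is strict ordering by priority rather than hand-rolled done flags, the POSIX route is to set a real-time scheduling policy and priority on the thread attributes before pthread_create. A sketch (SCHED_FIFO generally requires elevated privileges, and the exact behaviour is OS-dependent):

        #include <pthread.h>
        #include <sched.h>

        pthread_attr_t attr;
        struct sched_param param;

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED); /* don't inherit main's policy */
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);              /* run-to-completion FIFO      */
        param.sched_priority = 10;                                   /* higher = more urgent        */
        pthread_attr_setschedparam(&attr, &param);

        pthread_create(&threads[1], &attr, inc_count, (void *)t2);

    For ordering a handful of cooperating threads, though, a condition variable with a "whose turn is it" counter is usually simpler and needs no special privileges.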

    Read the article

  • Ruby Multithreading: making one thread wait for a signal from another

    - by Peter
    In Ruby, I want to have two threads running at the same time, and want the background thread to periodically signal the foreground thread. How do I get the foreground thread to block until the background thread says 'go'? I can think of a few ways to do it, but am after the most appropriate, idiomatic Ruby method. In code: loop do # background, thread 1 sleep 3 receive_input # tell foreground input is ready # <-- how do I do this? end and loop do # foreground, thread 2 wait_for_signal_from_background # <-- how do I do this? do_something end
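
    Probably the most idiomatic Ruby tool for this is a Queue (the thread-safe blocking queue in the standard library): the foreground thread blocks in pop until the background thread pushes a token. A sketch using the method names from the question:

        require 'thread'      # Queue lives here on older Rubies

        signals = Queue.new

        Thread.new do         # background, thread 1
          loop do
            sleep 3
            receive_input
            signals << :go    # tell the foreground input is ready
          end
        end

        loop do               # foreground, thread 2
          signals.pop         # blocks until the background pushes :go
          do_something
        end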

    Read the article

  • Issue with maxWorkerThreads and thread count

    - by Kartik M
    I have created an ASP.NET application which creates threads in an infinite loop. I set maxWorkerThreads to 20 in processModel in machine.config. When I checked the thread count in perfmon, there were around 7,000 threads created in the worker process. In PageLoad() I have: using System.Threading; ... int count = 0; var threadList = new System.Collections.Generic.List<System.Threading.Thread>(); try { while (true) { Thread newThread = new Thread(new ThreadStart(DummyCall), 1024); newThread.Start(); threadList.Add(newThread); count++; } } catch (Exception ex) { Response.Write(count + " : " + ex.ToString()); } Function: void DummyCall() { System.Threading.Thread.Sleep(1000000000); } How do I restrict thread creation in ASP.NET with IIS6/7?
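
    maxWorkerThreads only caps the CLR ThreadPool that ASP.NET uses to service requests; threads created explicitly with new Thread() are not counted against it, which is why perfmon shows thousands. To keep the count bounded, either queue the work to the pool or gate manual thread creation yourself; a rough sketch of both (the limit of 20 is just an example):

        // Option 1: let the ThreadPool (which maxWorkerThreads does govern) run the work.
        ThreadPool.QueueUserWorkItem(delegate { DummyCall(); });

        // Option 2: if real Thread objects are required, gate their creation with a semaphore.
        Semaphore gate = new Semaphore(20, 20);        // at most 20 workers alive at once
        gate.WaitOne();                                // blocks once the limit is reached
        new Thread(delegate() { try { DummyCall(); } finally { gate.Release(); } }).Start();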

    Read the article

  • dealloc on Background Thread

    - by Mark Brackett
    Is it an error to call dealloc on a UIViewController from a background thread? It seems that UITextView can eventually call _WebTryThreadLock, which results in: bool _WebTryThreadLock(bool): Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Background: I have a subclassed NSOperation that takes a selector and a target object to notify. -(id)initWithTarget:(id)target { if (self = [super init]) { _target = [target retain]; } return self; } -(void)dealloc { [_target release]; [super dealloc]; } If the UIViewController has already been dismissed when the NSOperation gets around to running, then the call to release triggers its dealloc on a background thread.
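
    Deallocating UIKit objects off the main thread is indeed unsafe, which is what the _WebTryThreadLock message is warning about. In a pre-ARC setup like this one, a common workaround is to make sure the last release of the target happens on the main thread; a sketch (one possible approach, not the only one):

        - (void)dealloc {
            // Hand the final release of the target back to the main thread so a
            // UIViewController is never torn down from a background thread.
            [_target performSelectorOnMainThread:@selector(release)
                                      withObject:nil
                                   waitUntilDone:NO];
            _target = nil;
            [super dealloc];
        }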

    Read the article

  • WCF Service with callbacks coming from background thread?

    - by Mark Struzinski
    Here is my situation. I have written a WCF service which calls into one of our vendor's code bases to perform operations, such as Login, Logout, etc. A requirement of this operation is that we have a background thread to receive events as a result of that action. For example, the Login action is sent on the main thread. Then, several events are received back from the vendor service as a result of the login. There can be 1, 2, or several events received. The background thread, which runs on a timer, receives these events and fires an event in the wcf service to notify that a new event has arrived. I have implemented the WCF service in Duplex mode, and planned to use callbacks to notify the UI that events have arrived. Here is my question: How do I send new events from the background thread to the thread which is executing the service? Right now, when I call OperationContext.Current.GetCallbackChannel<IMyCallback>(), the OperationContext is null. Is there a standard pattern to get around this? I am using PerSession as my SessionMode on the ServiceContract. UPDATE: I thought I'd make my exact scenario clearer by demonstrating how I'm receiving events from the vendor code. My library receives each event, determines what the event is, and fires off an event for that particular occurrence. I have another project which is a class library specifically for connecting to the vendor service. I'll post the entire implementation of the service to give a clearer picture: [ServiceBehavior( InstanceContextMode = InstanceContextMode.PerSession )] public class VendorServer:IVendorServer { private IVendorService _vendorService; // This is the reference to my class library public VendorServer() { _vendorServer = new VendorServer(); _vendorServer.AgentManager.AgentLoggedIn += AgentManager_AgentLoggedIn; // This is the eventhandler for the event which arrives from a background thread } public void Login(string userName, string password, string stationId) { _vendorService.Login(userName, password, stationId); // This is a direct call from the main thread to the vendor service to log in } private void AgentManager_AgentLoggedIn(object sender, EventArgs e) { var agentEvent = new AgentEvent { AgentEventType = AgentEventType.Login, EventArgs = e }; } } The AgentEvent object contains the callback as one of its properties, and I was thinking I'd perform the callback like this: agentEvent.Callback = OperationContext.Current.GetCallbackChannel<ICallback>(); How would I pass the OperationContext.Current instance from the main thread into the background thread?
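
    OperationContext.Current is only populated on the thread that is servicing a WCF operation, which is why it is null on the timer/background thread. The usual pattern is to capture the callback channel while an operation such as Login is executing (on the WCF thread), store it on the PerSession service instance, and let the background event handler use the stored reference. A sketch against the service above (ICallback's OnAgentEvent method is a placeholder for the real callback contract):

        private ICallback _callback;   // one per session, since InstanceContextMode is PerSession

        public void Login(string userName, string password, string stationId)
        {
            // Capture the channel here, on the operation thread, where Current is valid.
            _callback = OperationContext.Current.GetCallbackChannel<ICallback>();
            _vendorService.Login(userName, password, stationId);
        }

        private void AgentManager_AgentLoggedIn(object sender, EventArgs e)
        {
            var agentEvent = new AgentEvent { AgentEventType = AgentEventType.Login, EventArgs = e };
            _callback.OnAgentEvent(agentEvent);   // background thread reuses the captured channel
        }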

    Read the article

  • How does the event dispatch thread work?

    - by Roman
    With the help of people on stackoverflow I was able to get the following working code for the simplest GUI countdown (it just displays a window counting down seconds). My main problem with this code is the invokeLater stuff. As far as I understand, invokeLater sends a task to the event dispatching thread (EDT) and then the EDT executes this task whenever it "can" (whatever that means). Is that right? To my understanding, the code works like this: In the main method we use invokeLater to show the window (showGUI method). In other words, the code displaying the window will be executed in the EDT. In the main method we also start the counter, and the counter (by construction) is executed in another thread (so it is not in the event dispatching thread). Right? The counter is executed in a separate thread and periodically it calls updateGUI. The updateGUI method is supposed to update the GUI, and the GUI is working in the EDT, so updateGUI should also be executed in the EDT. That is why the code for updateGUI is enclosed in invokeLater. Is that right? What is not clear to me is why we call the counter from the EDT. It is not actually executed in the EDT anyway; it immediately starts a new thread and the counter is executed there. So why can't we call the counter in the main method after the invokeLater block? import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.SwingUtilities; public class CountdownNew { static JLabel label; // Method which defines the appearance of the window. public static void showGUI() { JFrame frame = new JFrame("Simple Countdown"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); label = new JLabel("Some Text"); frame.add(label); frame.pack(); frame.setVisible(true); } // Define a new thread in which the countdown is counting down. public static Thread counter = new Thread() { public void run() { for (int i=10; i>0; i=i-1) { updateGUI(i,label); try {Thread.sleep(1000);} catch(InterruptedException e) {}; } } }; // A method which updates GUI (sets a new value of JLabel). private static void updateGUI(final int i, final JLabel label) { SwingUtilities.invokeLater( new Runnable() { public void run() { label.setText("You have " + i + " seconds."); } } ); } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { public void run() { showGUI(); counter.start(); } }); } }
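
    Starting the counter from the EDT is not strictly required (Thread.start() may be called from any thread); doing it inside the same invokeLater mainly guarantees the label already exists before the counter reads it. A side note: for a once-per-second countdown, javax.swing.Timer removes both the extra thread and the invokeLater plumbing, because its ActionListener always runs on the EDT. A sketch in the style of the code above:

        final int[] secondsLeft = { 10 };
        final javax.swing.Timer timer = new javax.swing.Timer(1000, new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent e) {
                // Runs on the EDT, so the label can be updated directly.
                label.setText("You have " + secondsLeft[0] + " seconds.");
                if (--secondsLeft[0] < 0) {
                    ((javax.swing.Timer) e.getSource()).stop();
                }
            }
        });
        timer.start();   // called from showGUI(), i.e. on the EDT, after the label is created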

    Read the article

  • Facebook API returning wrong unread thread count

    - by houbysoft
    I'm trying to query the thread FQL table to get all unread messages, and also the count of unread items in the thread. This is how I query the table: SELECT thread_id,updated_time,snippet,snippet_author,unread FROM thread WHERE folder_id=0 AND unread!=0 From reading the doc to which I linked above, it seems to me that unread should include the count of unread messages in the thread. However, I just tested the above call and Facebook gives me back a value of unread=1, despite the thread in question having 4 unread items. This is how the thread looks on facebook.com (notice the (4), showing that unread should be 4): This is what the API returns to me, which is wrong (notice the "unread":1): { "data":[ { "name":"messages", "fql_result_set":[ { "thread_id":"BLAH BLAH BLAH", "updated_time":1333317140, "snippet":"BLAH BLAH BLAH", "snippet_author":BLAH, "unread":1 } ] } ] } Am I doing something wrong, or is this a bug?

    Read the article

  • How do I refactor this IEnumerable<T> to be thread-safe?

    - by DayOne
    I am looking at Skeet's AtomicEnumerable but I'm not sure how to integrate it into my current IEnumerable example below (http://msmvps.com/blogs/jon_skeet/archive/2009/10/23/iterating-atomically.aspx). Basically I want to foreach my Blahs type in a thread-safe way. Thanks. public sealed class Blahs : IEnumerable<string> { private readonly IList<string> _data = new List<string>() { "blah1", "blah2", "blah3" }; public IEnumerator<string> GetEnumerator() { return _data.GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } }
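
    If full "hold the lock for the entire iteration" semantics (what the AtomicEnumerable post provides) are not strictly required, the simplest thread-safe foreach is snapshot enumeration: copy the list while holding a lock and enumerate the copy. A sketch against the class above (the _sync field is an addition):

        private readonly object _sync = new object();

        public IEnumerator<string> GetEnumerator()
        {
            List<string> snapshot;
            lock (_sync)                       // writers must take the same lock
            {
                snapshot = new List<string>(_data);
            }
            return snapshot.GetEnumerator();   // iteration happens outside the lock
        }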

    Read the article

  • iPhone: One Object, One Thread

    - by GingerBreadMane
    On the iPhone, I would like to do some operations on an image in a separate thread. Rather than dealing with semaphores, locking, etc., I'd like to use the 'One Object, One Thread' method of safely writing this concurrent operation. I'm not sure what the correct way is to copy my object into a new thread so that the object is not accessed in the main thread. Do I use the 'copy' method? If so, do I do this before the thread or inside the thread? ... -(void)someMethod{ UIImage *myImage; [NSThread detachNewThreadSelector:@selector(getRotatedImage:) toTarget:self withObject:myImage]; } -(void)getRotatedImage:(UIImage *)image{ ... ... UIImage *copiedImage = [image copy]; ... ... }

    Read the article

  • How does one implement a truly asynchronous java thread

    - by Ritesh M Nayak
    I have a function that needs to perform two operations, one which finishes fast and one which takes a long time to run. I want to be able to delegate the long-running operation to a thread, and I don't care when the thread finishes, but the thread needs to complete. I implemented this as shown below, but my second operation never gets done as the function exits after the start() call. How can I ensure that the function returns but the second operation thread finishes its execution as well and is not dependent on the parent thread? public void someFunction(String data) { smallOperation(); SecondOperation a = new SecondOperation(); Thread th = new Thread(a); th.start(); } class SecondOperation implements Runnable { public void run(){ // doSomething long running } }
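
    A thread started with start() already keeps running after someFunction() returns: a plain (non-daemon) thread does not die with the method or with the thread that created it, only when its run() method finishes or the JVM exits. So the sketch below (essentially the code from the question with the compile errors fixed) is enough, provided the JVM itself stays alive long enough:

        public void someFunction(String data) {
            smallOperation();                             // quick part, done synchronously
            Thread worker = new Thread(new SecondOperation());
            worker.setDaemon(false);                      // default; a non-daemon thread keeps the JVM alive
            worker.start();                               // returns immediately; run() continues on its own
        }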

    Read the article

  • How can I synchronize database access between a write-thread and a read-thread?

    - by Runcible
    My program has two threads:
    1. The main execution thread, which handles user input and queues up database writes
    2. A utility thread that wakes up every second and flushes the writes to the database
    Inside the main thread, I occasionally need to make reads on the database. When this happens, performance is not important, but correctness is. (In a perfect world, I would be reading from a cache, not making a round-trip to the database - but let's put that aside for the sake of discussion.) How do I make sure that the main thread sees a correct / quiescent database? A standard mutex won't work, since I run the risk of having the main thread grab the mutex before the data gets flushed to the database. This would be a big race condition. What I really want is some sort of mutex that lets the main thread of execution proceed only AFTER the mutex has been grabbed and released once. Does such a thing exist? What's the best way to solve this problem?
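
    What is being described is really an event/condition rather than a mutex: the reader wants to block until the writer has completed a flush that started after the request. A sketch in C++11 terms, since the question does not name a language (the flush function is a placeholder; the same shape works with any condition-variable API):

        #include <condition_variable>
        #include <cstdint>
        #include <mutex>

        void flushQueuedWrites();                 // placeholder: pushes queued writes to the database

        std::mutex m;
        std::condition_variable flushed;
        std::uint64_t flushGeneration = 0;        // bumped by the writer after every flush

        void writerTick() {                       // utility thread, once a second
            flushQueuedWrites();
            { std::lock_guard<std::mutex> lk(m); ++flushGeneration; }
            flushed.notify_all();
        }

        void waitForQuiescentDb() {               // main thread, before a correctness-critical read
            std::unique_lock<std::mutex> lk(m);
            std::uint64_t target = flushGeneration + 1;
            flushed.wait(lk, [&] { return flushGeneration >= target; });
        }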

    Read the article

  • Can the lock function be used to implement thread-safe enumeration?

    - by Daniel
    I'm working on a thread-safe collection that uses Dictionary as a backing store. In C# you can do the following: private IEnumerable<KeyValuePair<K, V>> Enumerate() { if (_synchronize) { lock (_locker) { foreach (var entry in _dict) yield return entry; } } else { foreach (var entry in _dict) yield return entry; } } The only way I've found to do this in F# is using Monitor, e.g.: let enumerate() = if synchronize then seq { System.Threading.Monitor.Enter(locker) try for entry in dict -> entry finally System.Threading.Monitor.Exit(locker) } else seq { for entry in dict -> entry } Can this be done using the lock function? Or, is there a better way to do this in general? I don't think returning a copy of the collection for iteration will work because I need absolute synchronization.

    Read the article

  • Is this a valid, lazy, thread-safe Singleton implementation for C#?

    - by Matthew
    I implemented a Singleton pattern like this: public sealed class MyClass { ... public static MyClass Instance { get { return SingletonHolder.instance; } } ... static class SingletonHolder { public static MyClass instance = new MyClass (); } } From Googling around for C# Singleton implementations, it doesn't seem like this is a common way to do things in C#. I found one similar implementation, but the SingletonHolder class wasn't static, and included an explicit (empty) static constructor. Is this a valid, lazy, thread-safe way to implement the Singleton pattern? Or is there something I'm missing?
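
    This is essentially the nested-holder variant of the lazy singleton and it is thread-safe: the CLR guarantees the static field initializer runs exactly once, and in practice it does not run until the holder type is first used, i.e. on the first Instance access. The empty static constructor seen in other implementations exists to suppress the compiler's beforefieldinit optimization, which is what makes the laziness a hard guarantee rather than a likely behaviour. The more commonly seen C# form folds this into a single class; a sketch:

        public sealed class MyClass
        {
            private static readonly MyClass instance = new MyClass();

            // The explicit (empty) static constructor keeps the type out of
            // beforefieldinit, so 'instance' is created lazily on first use.
            static MyClass() { }
            private MyClass() { }

            public static MyClass Instance
            {
                get { return instance; }
            }
        }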

    Read the article
