Search Results

Search found 8567 results on 343 pages for 'thread safety'.

Page 71/343 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • Apache crashes a few seconds after the start.

    - by Nacho
    Hi, I've got a problem with Apache. When I try to start it (/etc/init.d/apache2 start) it dies after a few seconds. It shows up in "ps aux" consuming a lot of memory and then dies. I don't know what could be causing Apache to consume this amount of memory:

        USER     PID   %CPU %MEM VSZ    RSS  TTY STAT START TIME COMMAND
        root     13379 1.0  0.3  14376  3908 ?   Ss   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13383 0.0  0.4  197316 4196 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13390 0.0  0.3  172728 4172 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13396 0.0  0.3  156336 4160 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13400 0.0  0.3  148140 4156 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13403 0.0  0.3  131748 4148 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start

    Here is an htop screenshot: http://i.imgur.com/N4Chh.png

    It happened suddenly; no change had been made to the server config, so I don't know what's causing it. The error log of my virtual servers shows this:

        [Sun Jan 30 22:19:50 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=9685): Couldn't create worker thread 11 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:19:55 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=9685): Couldn't create worker thread 19 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:29:40 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=12009): Couldn't create worker thread 18 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:31:06 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=13396): Couldn't create worker thread 15 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:35:02 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=14009): Couldn't create worker thread 16 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:35:07 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=14009): Couldn't create worker thread 17 in daemon process 'fb.ebookmetafinder.com'.

    I'm on an Ubuntu server VPS and I use mod_wsgi with Django. Thanks.

    Read the article

  • Account Preferences Crashes

    - by Vivek Sundaram
    When I click on System Preferences Accounts, I get a crash [every single time]. Here are a few interesting snippets from the "Problem Report". Any ideas on how to tackle this?

        Process:         System Preferences [607]
        Path:            /Applications/System Preferences.app/Contents/MacOS/System Preferences
        Identifier:      com.apple.systempreferences
        Version:         7.0 (7.0)
        Build Info:      SystemPrefsApp-1750100~5
        Code Type:       X86-64 (Native)
        Parent Process:  launchd [184]
        OS Version:      Mac OS X 10.6.5 (10H574)

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x0000000117547860
        Crashed Thread:  0  Dispatch queue: com.apple.main-thread

        Application Specific Information:
        objc_msgSend() selector name: willSelect
        objc[607]: garbage collection is ON

        Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
        0   libobjc.A.dylib               0x00007fff80fd211c objc_msgSend + 40
        1   com.apple.systempreferences   0x0000000100008426 0x100000000 + 33830
        2   com.apple.systempreferences   0x0000000100006fb8 0x100000000 + 28600
        3   com.apple.Foundation          0x00007fff84ede23c __NSFireDelayedPerform + 404
        4   com.apple.CoreFoundation      0x00007fff824acbe8 __CFRunLoopRun + 6488
        5   com.apple.CoreFoundation      0x00007fff824aadbf CFRunLoopRunSpecific + 575
        6   com.apple.HIToolbox           0x00007fff82ec691a RunCurrentEventLoopInMode + 333
        7   com.apple.HIToolbox           0x00007fff82ec671f ReceiveNextEventCommon + 310
        8   com.apple.HIToolbox           0x00007fff82ec65d8 BlockUntilNextEventMatchingListInMode + 59
        9   com.apple.AppKit              0x00007fff866c0e64 _DPSNextEvent + 718
        10  com.apple.AppKit              0x00007fff866c07a9 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 155
        11  com.apple.AppKit              0x00007fff8668648b -[NSApplication run] + 395
        12  com.apple.AppKit              0x00007fff8667f1a8 NSApplicationMain + 364
        13  com.apple.systempreferences   0x0000000100001cf4 0x100000000 + 7412
        ...

        Thread 0 crashed with X86 Thread State (64-bit):
        rax: 0x0000000000000001  rbx: 0x0000000200037840  rcx: 0x0000000200058031  rdx: 0x00007fff5fbfe2c0
        ...

        Binary Images:
        0x100000000 - 0x10001eff7  com.apple.systempreferences 7.0 (7.0) <30C04F1A-7711-1359-8A0E-D707B8BF2EB4> /Applications/System Preferences.app/Contents/MacOS/System Preferences
        0x100758000 - 0x10076dfff  com.apple.frameworks.opendirectoryconfigui 10.6.4 (10.6.4) <4711F2E8-DFA5-4C81-BB2A-B1E39D5B1B91> /System/Library/PrivateFrameworks/OpenDirectoryConfigUI.framework/Versions/A/OpenDirectoryConfigUI
        ...

    Read the article

  • WebSphere hung threads, how can I track them down?

    - by Puzzled
    We have an application running on WebSphere (unfortunately it is 6.1, which is no longer supported; it has not yet been migrated in production to a later version) which becomes entirely unresponsive because of hung threads. As far as I can tell, we entirely exhaust one of the thread pools. I have activated hung thread detection and I get a core/thread dump when hung threads are detected. The server can run for several days without problems, but it has crashed twice this week.

    When I load the core/thread dump in "IBM Thread and Monitor Dump Analyzer for Java", it tells me that there are a certain number of hung threads (this time it was 2, last time 11) and multiple (usually around 40) threads "waiting on condition", plus some running threads. I believe one of the thread pools has around that size (50). What I see in there are threads waiting for locks, holding locks, or in wait. Most of them show a stack trace which always ends like this:

        at java/lang/Object.wait(Native Method)
        at java/lang/Object.wait(Object.java:231)

    Now, how can I track this down to either a server configuration problem, an application issue, a WebSphere problem, or something else? How is this supposed to help me track down the problem when almost everything in there refers to IBM code? I cannot ask for IBM's help, as 6.1 is now an unsupported version of WebSphere, and while work has been done to make the application run under WebSphere 7, we are not yet ready to switch to it in production.
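
    A complementary way to look at the same data, independent of the analyzer, is to query the JVM itself through the standard java.lang.management API. The sketch below is not WebSphere-specific and the class name is made up for illustration; it lists every live thread with its state, the lock it is waiting on, and its top stack frames, and reports monitor deadlocks. It uses only methods available since Java 5, so it should also run on the JVM that WebSphere 6.1 ships with.

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadInfo;
        import java.lang.management.ThreadMXBean;

        public class ThreadDumpProbe {
            public static void main(String[] args) {
                ThreadMXBean mx = ManagementFactory.getThreadMXBean();

                // Threads deadlocked on object monitors (available since Java 5).
                long[] deadlocked = mx.findMonitorDeadlockedThreads();
                if (deadlocked != null) {
                    System.out.println("Monitor-deadlocked threads: " + deadlocked.length);
                }

                // Dump the state and top stack frames of every live thread.
                ThreadInfo[] infos = mx.getThreadInfo(mx.getAllThreadIds(), 8);
                for (ThreadInfo info : infos) {
                    if (info == null) continue;   // thread died in the meantime
                    System.out.println(info.getThreadName() + " - " + info.getThreadState()
                            + (info.getLockName() != null ? " waiting on " + info.getLockName() : ""));
                    for (StackTraceElement frame : info.getStackTrace()) {
                        System.out.println("    at " + frame);
                    }
                }
            }
        }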

    Read the article

  • .NET 4 SpinLock

    - by Jon Harrop
    The following test code (F#) is not returning the result I'd expect:

        let safeCount() =
            let n = 1000000
            let counter = ref 0
            let spinlock = ref <| SpinLock(false)
            let run i0 i1 () =
                for i=i0 to i1-1 do
                    let locked = ref false
                    try
                        (!spinlock).Enter locked
                        if !locked then
                            counter := !counter + 1
                    finally
                        if !locked then
                            (!spinlock).Exit()
            let thread = System.Threading.Thread(run 0 (n/2))
            thread.Start()
            run (n/2) n ()
            thread.Join()
            !counter

    I'd expect the SpinLock to mutually exclude the counter and, therefore, for it to return counts of 1,000,000 but, instead, it returns smaller values as if no mutual exclusion is occurring. Any ideas what's wrong?

    Read the article

  • What is the fastest cyclic synchronization in Java (ExecutorService vs. CyclicBarrier vs. X)?

    - by Alex Dunlop
    Which Java synchronization construct is likely to provide the best performance for a concurrent, iterative processing scenario with a fixed number of threads like the one outlined below? After experimenting on my own for a while (using ExecutorService and CyclicBarrier) and being somewhat surprised by the results, I would be grateful for some expert advice and maybe some new ideas. Existing questions here do not seem to focus primarily on performance, hence this new one. Thanks in advance!

    The core of the app is a simple iterative data processing algorithm, parallelized to spread the computational load across 8 cores on a Mac Pro, running OS X 10.6 and Java 1.6.0_07. The data to be processed is split into 8 blocks and each block is fed to a Runnable to be executed by one of a fixed number of threads. Parallelizing the algorithm was fairly straightforward, and it functionally works as desired, but its performance is not yet what I think it could be. The app seems to spend a lot of time in system calls synchronizing, so after some profiling I wonder whether I selected the most appropriate synchronization mechanism(s).

    A key requirement of the algorithm is that it needs to proceed in stages, so the threads need to sync up at the end of each stage. The main thread prepares the work (very low overhead), passes it to the threads, lets them work on it, then proceeds when all threads are done, rearranges the work (again very low overhead) and repeats the cycle. The machine is dedicated to this task, garbage collection is minimized by using per-thread pools of pre-allocated items, and the number of threads can be fixed (no incoming requests or the like, just one thread per CPU core).

    V1 - ExecutorService

    My first implementation used an ExecutorService with 8 worker threads. The program creates 8 tasks holding the work and then lets them work on it, roughly like this:

        // create one thread per CPU
        executorService = Executors.newFixedThreadPool( 8 );
        ...
        // now process data in cycles
        while( ... ) {
            // package data into 8 work items
            ...
            // create one Callable task per work item
            ...
            // submit the Callables to the worker threads
            executorService.invokeAll( taskList );
        }

    This works well functionally (it does what it should), and for very large work items indeed all 8 CPUs become highly loaded, as much as the processing algorithm would be expected to allow (some work items will finish faster than others, then idle). However, as the work items become smaller (and this is not really under the program's control), the user CPU load shrinks dramatically:

        blocksize | system | user  | cycles/sec
        256k        1.8%     85%     1.30
        64k         2.5%     77%     5.6
        16k         4%       64%     22.5
        4096        8%       56%     86
        1024       13%       38%     227
        256        17%       19%     420
        64         19%       17%     948
        16         19%       13%     1626

    Legend:
        - block size = size of the work item (= computational steps)
        - system = system load, as shown in OS X Activity Monitor (red bar)
        - user = user load, as shown in OS X Activity Monitor (green bar)
        - cycles/sec = iterations through the main while loop, more is better

    The primary area of concern here is the high percentage of time spent in the system, which appears to be driven by thread synchronization calls. As expected, for smaller work items, ExecutorService.invokeAll() will require relatively more effort to sync up the threads versus the amount of work being performed in each thread. But since ExecutorService is more generic than it would need to be for this use case (it can queue tasks for threads if there are more tasks than cores), I thought maybe there would be a leaner synchronization construct.

    V2 - CyclicBarrier

    The next implementation used a CyclicBarrier to sync up the threads before receiving work and after completing it, roughly as follows:

        main() {
            // create the barrier
            barrier = new CyclicBarrier( 8 + 1 );

            // create Runnable for thread, tell it about the barrier
            Runnable task = new WorkerThreadRunnable( barrier );

            // start the threads
            for( int i = 0; i < 8; i++ ) {
                // create one thread per core
                new Thread( task ).start();
            }

            while( ... ) {
                // tell threads about the work
                ...
                // N threads + this will call await(), then system proceeds
                barrier.await();

                // ... now worker threads work on the work...

                // wait for worker threads to finish
                barrier.await();
            }
        }

        class WorkerThreadRunnable implements Runnable {
            CyclicBarrier barrier;

            WorkerThreadRunnable( CyclicBarrier barrier ) {
                this.barrier = barrier;
            }

            public void run() {
                while( true ) {
                    // wait for work
                    barrier.await();

                    // do the work
                    ...

                    // wait for everyone else to finish
                    barrier.await();
                }
            }
        }

    Again, this works well functionally (it does what it should), and for very large work items indeed all 8 CPUs become highly loaded, as before. However, as the work items become smaller, the load still shrinks dramatically:

        blocksize | system | user  | cycles/sec
        256k        1.9%     85%     1.30
        64k         2.7%     78%     6.1
        16k         5.5%     52%     25
        4096        9%       29%     64
        1024       11%       15%     117
        256        12%        8%     169
        64         12%      6.5%     285
        16         12%        6%     377

    For large work items, synchronization is negligible and the performance is identical to V1. But unexpectedly, the results of the (highly specialized) CyclicBarrier seem MUCH WORSE than those for the (generic) ExecutorService: throughput (cycles/sec) is only about 1/4th of V1. A preliminary conclusion would be that even though this seems to be the advertised ideal use case for CyclicBarrier, it performs much worse than the generic ExecutorService.

    V3 - Wait/Notify + CyclicBarrier

    It seemed worth a try to replace the first cyclic barrier await() with a simple wait/notify mechanism:

        main() {
            // create the barrier
            // create Runnable for thread, tell it about the barrier
            // start the threads

            while( ... ) {
                // tell threads about the work
                // for each: workerThreadRunnable.setWorkItem( ... );

                // ... now worker threads work on the work...

                // wait for worker threads to finish
                barrier.await();
            }
        }

        class WorkerThreadRunnable implements Runnable {
            CyclicBarrier barrier;
            @NotNull volatile private Callable<Integer> workItem;

            WorkerThreadRunnable( CyclicBarrier barrier ) {
                this.barrier = barrier;
                this.workItem = NO_WORK;
            }

            final protected void setWorkItem( @NotNull final Callable<Integer> callable ) {
                synchronized( this ) {
                    workItem = callable;
                    notify();
                }
            }

            public void run() {
                while( true ) {
                    // wait for work
                    while( true ) {
                        synchronized( this ) {
                            if( workItem != NO_WORK ) break;
                            try {
                                wait();
                            } catch( InterruptedException e ) {
                                e.printStackTrace();
                            }
                        }
                    }

                    // do the work
                    ...

                    // wait for everyone else to finish
                    barrier.await();
                }
            }
        }

    Again, this works well functionally (it does what it should).

        blocksize | system | user  | cycles/sec
        256k        1.9%     85%     1.30
        64k         2.4%     80%     6.3
        16k         4.6%     60%     30.1
        4096        8.6%     41%     98.5
        1024       12%       23%     202
        256        14%      11.6%    299
        64         14%      10.0%    518
        16         14.8%     8.7%    679

    The throughput for small work items is still much worse than that of the ExecutorService, but about 2x that of the CyclicBarrier. Eliminating one CyclicBarrier eliminates half of the gap.

    V4 - Busy wait instead of wait/notify

    Since this app is the primary one running on the system and the cores idle anyway if they're not busy with a work item, why not try a busy wait for work items in each thread, even if that spins the CPU needlessly. The worker thread code changes as follows:

        class WorkerThreadRunnable implements Runnable {
            // as before

            final protected void setWorkItem( @NotNull final Callable<Integer> callable ) {
                workItem = callable;
            }

            public void run() {
                while( true ) {
                    // busy-wait for work
                    while( true ) {
                        if( workItem != NO_WORK ) break;
                    }

                    // do the work
                    ...

                    // wait for everyone else to finish
                    barrier.await();
                }
            }
        }

    This also works well functionally (it does what it should).

        blocksize | system | user  | cycles/sec
        256k        1.9%     85%     1.30
        64k         2.2%     81%     6.3
        16k         4.2%     62%     33
        4096        7.5%     40%     107
        1024       10.4%     23%     210
        256        12.0%    12.0%    310
        64         11.9%    10.2%    550
        16         12.2%     8.6%    741

    For small work items, this increases throughput by a further 10% over the CyclicBarrier + wait/notify variant, which is not insignificant. But it is still much lower-throughput than V1 with the ExecutorService.

    V5 - ?

    So what is the best synchronization mechanism for such a (presumably not uncommon) problem? I am wary of writing my own sync mechanism to completely replace ExecutorService (assuming that it is too generic and there has to be something that can still be taken out to make it more efficient). It is not my area of expertise and I'm concerned that I'd spend a lot of time debugging it (since I'm not even sure my wait/notify and busy wait variants are correct) for uncertain gain. Any advice would be greatly appreciated.
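
    For anyone who wants to reproduce numbers like the ones above, a self-contained harness for the V1 pattern can be put together in a few lines. This is only a sketch of the structure described in the question (fixed 8-thread pool, one invokeAll() per cycle, cycles/sec readout); the class name, the dummy workload, and the constants THREADS, BLOCK_SIZE and the 5-second run length are placeholders, not part of the original code.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class InvokeAllHarness {
            static final int THREADS = 8;        // assumed core count, as in the question
            static final int BLOCK_SIZE = 4096;  // arbitrary work-item size for illustration

            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(THREADS);

                // One Callable per block; each burns BLOCK_SIZE steps of fake work.
                List<Callable<Long>> tasks = new ArrayList<Callable<Long>>();
                for (int t = 0; t < THREADS; t++) {
                    tasks.add(new Callable<Long>() {
                        public Long call() {
                            long acc = 0;
                            for (int i = 0; i < BLOCK_SIZE; i++) acc += i;
                            return acc;
                        }
                    });
                }

                long start = System.nanoTime();
                long runFor = 5L * 1000 * 1000 * 1000;   // run for roughly 5 seconds
                int cycles = 0;
                while (System.nanoTime() - start < runFor) {
                    pool.invokeAll(tasks);   // blocks until every task of this cycle is done
                    cycles++;
                }
                double seconds = (System.nanoTime() - start) / 1e9;
                System.out.printf("%d cycles in %.1f s = %.1f cycles/sec%n",
                        cycles, seconds, cycles / seconds);

                pool.shutdown();
            }
        }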

    Read the article

  • How to marshall COM object on the server side in visual c++?

    - by dos
    I have an out-of-process COM server with an ATL Simple Object which creates another thread. The new thread will need to make calls to the ATL Simple Object. Since the ATL Simple Object and the new thread are created in different apartments, the ATL Simple Object needs to be marshalled into the new thread, otherwise error 0x8001010E is generated. How do I marshal a COM object on the server side, or am I missing something? Many thanks.

    Read the article

  • Benefit of using multiple SIMD instruction sets simultaneously

    - by GenTiradentes
    I'm writing a highly parallel application that's multithreaded. I've already got an SSE accelerated thread class written. If I were to write an MMX accelerated thread class, then run both at the same time (one SSE thread and one MMX thread per core) would the performance improve noticeably? I would think that this setup would help hide memory latency, but I'd like to be sure before I start pouring time into it.

    Read the article

  • Django - Threading in views without hanging the server

    - by bobthabuilda
    One of my applications in my Django project requires each request/visitor to that instance to have its own thread. This might sound confusing, so I'll describe what I'm looking to accomplish as a case-based scenario, with steps:

        1. User visits application
        2. Thread starts
        3. Until the thread finishes, that user's server instance hangs
        4. Once the thread completes, a response is delivered to the user
        5. Other visitors to the site should not be affected by any other users using the application

    How can I accomplish something like this? If possible, I'd like to find a lightweight solution.

    Read the article

  • C# threading question

    - by MusiGenesis
    Is there any essential difference between this code:

        ThreadStart starter = new ThreadStart(SomeMethod);
        starter.Invoke();

    and this?

        ThreadStart starter = new ThreadStart(SomeMethod);
        Thread th = new Thread(starter);
        th.Start();

    Or does the first invoke the method on the current thread while the second invokes it on a new thread?

    Read the article

  • How can Swing dialogs even work?

    - by Bart van Heukelom
    If you open a dialog in Swing, for example a JFileChooser, it goes somewhat like this pseudocode:

        swing event thread {
            create dialog
            add listener to dialog close event {
                returnValue = somethingFromDialog
            }
            show dialog (wait until it is closed)
            return returnValue
        }

    My question is: how can this possibly work? As you can see, the thread waits to return until the dialog is closed. This means the Swing event thread is blocked. Yet one can interact with the dialog, which AFAIK requires this thread to run. So how does that work?
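
    The behaviour can be made visible with a small experiment (a sketch, not from the original post): code placed after a modal show call only resumes once the dialog is dismissed, yet the dialog stays responsive, because the toolkit runs a nested event-dispatch loop inside the show call on the same thread.

        import javax.swing.JOptionPane;
        import javax.swing.SwingUtilities;

        public class NestedEventLoopDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        System.out.println("before dialog, on EDT: "
                                + SwingUtilities.isEventDispatchThread());

                        // Blocks this call until the user closes the dialog, but the GUI
                        // stays live because a nested event loop keeps dispatching events.
                        int choice = JOptionPane.showConfirmDialog(null, "Close me");

                        System.out.println("after dialog, choice = " + choice);
                        System.exit(0);
                    }
                });
            }
        }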

    Read the article

  • WPF Dispatcher.BeginInvoke crash on Windows XP

    - by amr-ne
    Hi, I have a WPF application and a worker thread. The worker thread invokes a callback on the UI thread, which opens a new dialog. This works fine on Win7, but on XP it crashes. In the worker thread:

        someWpfDialogInstance.Dispatcher.BeginInvoke(openSomeDialogCallback, null);

    Does someone know a solution or a workaround for this problem?

    Read the article

  • junit annotation

    - by lamisse
    I wish to launch the GUI application two times from a Java test. How should I use an @annotation in this case? Example:

        @BeforeClass
        public static void setupOnce() {
            final Thread thread = new Thread() {
                public void run() {
                    // launch the application
                }
            };
            try {
                thread.start();

    When I call this function from a test:

        @Test
        public void test() {
            setupOnce();
        }

    to launch it a second time, which annotation should I use? @AfterClass? Thanks.
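
    For what it's worth, one way to get two launches without a second lifecycle annotation is to keep the launch in a plain helper and call it both from @BeforeClass and from the test itself. This is only a sketch assuming JUnit 4; launchApplication() is a hypothetical stand-in for the real start-up code.

        import org.junit.BeforeClass;
        import org.junit.Test;

        public class GuiLaunchTest {

            // Hypothetical helper; replace the body with the real application launch.
            private static void launchApplication() {
                Thread thread = new Thread() {
                    public void run() {
                        // start the GUI here
                    }
                };
                thread.start();
            }

            @BeforeClass
            public static void setupOnce() {
                launchApplication();   // first launch, once for the whole class
            }

            @Test
            public void launchesASecondTime() {
                launchApplication();   // second launch, explicit in the test itself
            }
        }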

    Read the article

  • .NET threading question

    - by MusiGenesis
    Is there any essential difference between this code:

        ThreadStart starter = new ThreadStart(SomeMethod);
        starter.Invoke();

    and this?

        ThreadStart starter = new ThreadStart(SomeMethod);
        Thread th = new Thread(starter);
        th.Start();

    Or does the first invoke the method on the current thread while the second invokes it on a new thread?

    Read the article

  • iPhone App is leaking memory; Instruments and Clang cannot find the leak

    - by Norbert
    Hi, I've developed an iPhone program which is kind of an image manipulation program: the user gets a UIImagePickerController and selects an image. Then the program does some heavy calculation in a new thread (for responsiveness of the application). The thread has, of course, its own autorelease pool. When the calculation is done, the separate thread signals the main thread that the result can be presented. The app creates a new view controller and pushes it onto the navigation controller. In short:

        1. UIImagePickerController
        2. new thread (autorelease pool) does some heavy calculation with the image data
        3. signal to main thread that it's done
        4. main thread creates view controller and pushes it onto the navigation controller
        5. view controller presents the image result

    My program works well, but if I dismiss the navigation controller's top view controller by tapping the back button and repeat the whole process several times, my app crashes. But only on the device! Instruments cannot find any leaks (except for some minor ones which I don't feel responsible for: thread creation, NSCFString; overall about 10 kB). Even the Clang static analyzer tells me that my code seems to be all right.

    I know that the UIImage class can cache images and that objects returned from convenience methods get freed only when their autorelease pool gets drained. But most of the time I work with CGImageRef, and I use UIImage's alloc, init & release methods to free memory as soon as possible. Currently, I don't know how to isolate the problem. How would you approach this problem?

    Crash Log:

        Incident Identifier: F4C202C9-1338-48FC-80AD-46248E6C7154
        CrashReporter Key:   bb6f526d8b9bb680f25ea8e93bb071566ccf1776
        OS Version:          iPhone OS 3.1.1 (7C145)
        Date:                2009-09-26 14:18:57 +0200

        Free pages:        372
        Wired pages:       7754
        Purgeable pages:   0
        Largest process:   _MY_APP_

        Processes
        Name              UUID                                 Count resident pages
        _MY_APP_          <032690e5a9b396058418d183480a9ab3>   17766 (jettisoned) (active)
        debugserver       <ec29691560aa0e2994f82f822181bffd>   107
        syslog_relay      <21e13fa2b777218bdb93982e23fb65d3>   62
        notification_pro  <8a7725017106a28b545fd13ed58bf98c>   64
        notification_pro  <8a7725017106a28b545fd13ed58bf98c>   64
        afcd              <98b45027fbb1350977bf1ca313dee527>   65
        mediaserverd      <eb8fe997a752407bea573cd3adf568d3>   319
        ptpd              <b17af9cf6c4ad16a557d6377378e8a1e>   142
        syslogd           <ec8a5bc4483638539fa1266363dee8b8>   68
        BTServer          <1bb74831f93b1d07c48fb46cc31c15da>   119
        apsd              <a639ba83e666cc1d539223923ce59581>   165
        notifyd           <2ed3a1166da84d8d8868e64d549cae9d>   101
        CommCenter        <f4239480a623fb1c35fa6c725f75b166>   161
        SpringBoard       <8919df8091fdfab94d9ae05f513c0ce5>   2681 (active)
        accessoryd        <b66bcf6e77c3ee740c6a017f54226200>   90
        configd           <41e9d763e71dc0eda19b0afec1daee1d>   275
        fairplayd         <cdce5393153c3d69d23c05de1d492bd4>   108
        mDNSResponder     <f3ef7a6b24d4f203ed147f476385ec53>   103
        lockdownd         <6543492543ad16ff0707a46e512944ff>   297
        launchd           <73ce695fee09fc37dd70b1378af1c818>   71
        **End**

    Read the article

  • Threading questions

    - by JK
    If I spawn a secondary thread and the threaded method calls other methods, are those methods run in the secondary thread or the main thread? Is there a way to determine on which thread a specified piece of code is being run?
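
    A short illustration of both points (shown in Java here; the idea is the same in most languages): a method runs on whichever thread called it, and Thread.currentThread() tells a piece of code which thread that is.

        public class WhichThread {

            static void helper() {
                // Runs on whatever thread called it; being "a method" does not
                // move execution anywhere else.
                System.out.println("helper() running on: " + Thread.currentThread().getName());
            }

            public static void main(String[] args) throws InterruptedException {
                helper();                                     // prints "main"

                Thread worker = new Thread(new Runnable() {
                    public void run() {
                        helper();                             // prints "secondary"
                    }
                }, "secondary");
                worker.start();
                worker.join();
            }
        }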

    Read the article

  • How to do the processing and keep GUI refreshed using databinding?

    - by macias
    History of the problem: this is a continuation of my previous question, "How to start a thread to keep GUI refreshed?", but since Jon shed new light on the problem, I would have had to completely rewrite the original question, which would make that topic unreadable. So: a new, very specific question.

    The problem. Two pieces:

        - CPU-hungry, heavy-weight processing as a library (back-end)
        - WPF GUI with databinding which serves as a monitor for the processing (front-end)

    Current situation -- the library sends so many notifications about data changes that, despite working within its own thread, it completely jams the WPF data binding mechanism. As a result, not only does monitoring the data not work (it is not refreshed), but the entire GUI is frozen while processing the data.

    The aim -- a well-designed, polished way to keep the GUI up to date -- I am not saying it should display the data immediately (it can even skip some changes), but it cannot freeze while doing the computation.

    Example. This is a simplified example, but it shows the problem.

    XAML part:

        <StackPanel Orientation="Vertical">
            <Button Click="Button_Click">Start</Button>
            <TextBlock Text="{Binding Path=Counter}"/>
        </StackPanel>

    C# part (please NOTE this is one piece of code, but there are two sections of it):

        public partial class MainWindow : Window, INotifyPropertyChanged
        {
            // GUI part
            public MainWindow()
            {
                InitializeComponent();
                DataContext = this;
            }

            private void Button_Click(object sender, RoutedEventArgs e)
            {
                var thread = new Thread(doProcessing);
                thread.IsBackground = true;
                thread.Start();
            }

            // this is non-GUI part -- do not mess with GUI here
            public event PropertyChangedEventHandler PropertyChanged;

            public void OnPropertyChanged(string property_name)
            {
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs(property_name));
            }

            long counter;
            public long Counter
            {
                get { return counter; }
                set
                {
                    if (counter != value)
                    {
                        counter = value;
                        OnPropertyChanged("Counter");
                    }
                }
            }

            void doProcessing()
            {
                var tmp = 10000.0;
                for (Counter = 0; Counter < 10000000; ++Counter)
                {
                    if (Counter % 2 == 0)
                        tmp = Math.Sqrt(tmp);
                    else
                        tmp = Math.Pow(tmp, 2.0);
                }
            }
        }

    Known workarounds (please do not repost them as answers). The first two are based on Jon's ideas:

        - Pass the GUI dispatcher to the library and use it for sending notifications -- why is it ugly? Because there could be no GUI at all.
        - Give up on data binding COMPLETELY (one widget with databinding is enough for jamming), and instead check the data from time to time and update the GUI manually -- well, I didn't learn WPF just to give up on it now ;-)
        - And this one is mine; it is ugly, but its simplicity kills -- before sending a notification, freeze the thread -- Thread.Sleep(1) -- to let the potential receiver "breathe". It works, it is minimalistic, it is ugly though, and it ALWAYS slows down the computation even if no GUI is there.

    So... I am all ears for real solutions, not some tricks.

    Read the article

  • How to interpret Objective-C errors?

    - by Greg Maletic
    I'm getting the following error:

        2010-05-11 17:46:28.475 MyApp[54112:5e1b] bool _WebTryThreadLock(bool), 0x140faa0: Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now...

    Is there any way for me to figure out where [54112:5e1b] is in my code, so I can try to narrow down the error? Thanks.

    Read the article

  • Problem while adding a new value to a hashtable when it is enumerated

    - by karthik
    Hi, I am doing some simple synchronous socket programming, in which I employed two threads: one accepts clients and puts the socket objects into a collection, and the other thread loops through the collection and sends a message to each client through its socket object. The problem is:

        1. I connect two clients to the server and start sending messages.
        2. Now I want to connect a new client. While doing this I can't update the collection and add the new client to my hashtable; it raises the exception "Collection was modified; enumeration operation may not execute."

    How can I add a NEW value to the hashtable without running into this problem?

        private void Listen()
        {
            try
            {
                //lblStatus.Text = "Server Started Listening";
                while (true)
                {
                    Socket ReceiveSock = ServerSock.Accept();
                    //keys.Clear();
                    ConnectedClients = new ListViewItem();
                    ConnectedClients.Text = ReceiveSock.RemoteEndPoint.ToString();
                    ConnectedClients.SubItems.Add("Connected");
                    ConnectedList.Items.Add(ConnectedClients);
                    ClientTable.Add(ReceiveSock.RemoteEndPoint.ToString(), ReceiveSock);
                    //foreach (System.Collections.DictionaryEntry de in ClientTable)
                    //{
                    //    keys.Add(de.Key.ToString());
                    //}
                    //ClientTab.Add(
                    //keys.Add(
                }
                //lblStatus.Text = "Client Connected Successfully.";
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        private void btn_receive_Click(object sender, EventArgs e)
        {
            Thread receiveThread = new Thread(new ThreadStart(Receive));
            receiveThread.IsBackground = true;
            receiveThread.Start();
        }

        private void Receive()
        {
            while (true)
            {
                //lblMsg.Text = "";
                byte[] Byt = new byte[2048];
                //ReceiveSock.Receive(Byt);
                lblMsg.Text = Encoding.ASCII.GetString(Byt);
            }
        }

        private void btn_Send_Click(object sender, EventArgs e)
        {
            Thread SendThread = new Thread(new ThreadStart(SendMsg));
            SendThread.IsBackground = true;
            SendThread.Start();
        }

        private void btnlist_Click(object sender, EventArgs e)
        {
            //Thread ListThread = new Thread(new ThreadStart(Configure));
            //ListThread.IsBackground = true;
            //ListThread.Start();
        }

        private void SendMsg()
        {
            while (true)
            {
                try
                {
                    foreach (object SockObj in ClientTable.Keys)
                    {
                        byte[] Tosend = new byte[2048];
                        Socket s = (Socket)ClientTable[SockObj];
                        Tosend = Encoding.ASCII.GetBytes("FirstValue&" + GenerateRandom.Next(6, 10).ToString());
                        s.Send(Tosend);
                        //ReceiveSock.Send(Tosend);
                        Thread.Sleep(300);
                    }
                }
                catch (Exception ex)
                {
                    MessageBox.Show(ex.Message);
                }
            }
        }

    Read the article

  • Does add() on LinkedBlockingQueue notify waiting threads?

    - by obvio171
    I have a consumer thread taking elements from a LinkedBlockingQueue, and I make it sleep manually when it's empty. I use peek() to see if the queue is empty because I have to do some work before sending the thread to sleep, and I do that with queue.wait(). So, when I'm in another thread and add() an element to the queue, does that automatically notify the thread that was wait()ing on the queue?
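
    A side note on the pattern itself: the blocking methods of LinkedBlockingQueue already provide the wake-up the question is after, so the manual peek()/wait() dance can usually be dropped. A minimal sketch of that shape, where take() parks the consumer while the queue is empty and returns as soon as another thread add()s an element:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class TakeInsteadOfWait {
            public static void main(String[] args) throws InterruptedException {
                final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

                Thread consumer = new Thread(new Runnable() {
                    public void run() {
                        try {
                            // take() blocks while the queue is empty and wakes up as soon
                            // as a producer inserts an element -- no explicit wait()/notify().
                            String item = queue.take();
                            System.out.println("consumed: " + item);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
                consumer.start();

                Thread.sleep(500);    // consumer is now parked inside take()
                queue.add("hello");   // releases the consumer
                consumer.join();
            }
        }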

    Read the article

  • java double buffering problem

    - by russell
    What's wrong with my applet code? It does not render double buffering correctly. I have been trying and trying but failed to find a solution. Please, can someone tell me what's wrong with my code?

        import java.applet.*;
        import java.awt.*;
        import java.awt.event.*;

        public class Ball extends Applet implements Runnable {
            // initialize the variables
            int x_pos = 10;    // x position of the ball
            int y_pos = 100;   // y position of the ball
            int radius = 20;   // radius of the ball
            Image buffer = null;
            //Graphics graphic=null;
            int w, h;

            public void init() {
                Dimension d = getSize();
                w = d.width;
                h = d.height;
                buffer = createImage(w, h);
                //graphic=buffer.getGraphics();
                setBackground(Color.black);
            }

            public void start() {
                // create a new thread in which the game runs
                Thread th = new Thread(this);
                // start the thread
                th.start();
            }

            public void stop() {
            }

            public void destroy() {
            }

            public void run() {
                // lower the thread priority to make drawing easier
                Thread.currentThread().setPriority(Thread.MIN_PRIORITY);
                // as long as this is true, the thread keeps running
                while (true) {
                    // change the coordinates
                    repaint();
                    x_pos++;
                    y_pos++;
                    //x2--;
                    //y2--;
                    // repaint the applet
                    if (x_pos > 410) x_pos = 20;
                    if (y_pos > 410) y_pos = 20;
                    try {
                        Thread.sleep(30);
                    } catch (InterruptedException ex) {
                        // do nothing
                    }
                    Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
                }
            }

            public void paint(Graphics g) {
                Graphics screen = null;
                screen = g;
                g = buffer.getGraphics();
                g.setColor(Color.red);
                g.fillOval(x_pos - radius, y_pos - radius, 2 * radius, 2 * radius);
                g.setColor(Color.green);
                screen.drawImage(buffer, 0, 0, this);
            }

            public void update(Graphics g) {
                paint(g);
            }
        }

    What change should I make? When the offscreen image is drawn, the previous image also remains on screen. How do I erase the previous image from the screen?
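
    For reference, a trail of old frames is typically what you get when the offscreen buffer is never cleared, so each frame is painted on top of the previous one. Below is a hedged sketch of a paint() method that erases the buffer first; it is meant as a drop-in replacement for the paint() in the Ball class above, with everything else unchanged.

        public void paint(Graphics g) {
            Graphics off = buffer.getGraphics();

            // erase the previous frame from the offscreen image
            off.setColor(Color.black);
            off.fillRect(0, 0, w, h);

            // draw the current frame into the buffer
            off.setColor(Color.red);
            off.fillOval(x_pos - radius, y_pos - radius, 2 * radius, 2 * radius);

            // copy the finished buffer to the screen in one step
            g.drawImage(buffer, 0, 0, this);
        }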

    Read the article

  • WinForm-style Invoke() in unmanaged C++

    - by Matt Green
    I've been playing with a DataBus-type design for a hobby project, and I ran into an issue. Back-end components need to notify the UI that something has happened. My implementation of the bus delivers the messages synchronously with respect to the sender. In other words, when you call Send(), the method blocks until all the handlers have been called. (This allows callers to use stack memory management for event objects.) However, consider the case where an event handler updates the GUI in response to an event. If the handler is called, and the message sender lives on another thread, then the handler cannot update the GUI due to Win32's GUI elements having thread affinity. More dynamic platforms such as .NET allow you to handle this by calling a special Invoke() method to move the method call (and the arguments) to the UI thread. I'm guessing they use the .NET parking window or the like for these sorts of things. A morbid curiosity was born: can we do this in C++, even if we limit the scope of the problem? Can we make it nicer than existing solutions? I know Qt does something similar with the moveToThread() function.

    By nicer, I'll mention that I'm specifically trying to avoid code of the following form at the top of every UI method:

        if( !this->IsUIThread() )
        {
            Invoke( MainWindowPresenter::OnTracksAdded, e );
            return;
        }

    This dance was common in WinForms when dealing with this issue. I think this sort of concern should be isolated from the domain-specific code and a wrapper object made to deal with it.

    My implementation consists of:

        DeferredFunction - functor that stores the target method in a FastDelegate, and deep copies the single event argument. This is the object that is sent across thread boundaries.
        UIEventHandler - responsible for dispatching a single event from the bus. When the Execute() method is called, it checks the thread ID. If it does not match the UI thread ID (set at construction time), a DeferredFunction is allocated on the heap with the instance, method, and event argument. A pointer to it is sent to the UI thread via PostThreadMessage(). Finally, a hook function for the thread's message pump is used to call the DeferredFunction and de-allocate it. Alternatively, I can use a message loop filter, since my UI framework (WTL) supports them.

    Ultimately, is this a good idea? The whole message hooking thing makes me leery. The intent is certainly noble, but are there any pitfalls I should know about? Or is there an easier way to do this?

    Read the article

  • Ruby: Alter class static method in a code block

    - by Phuong Nguyễn
    Given the Thread class with its current method. Now, inside a test, I want to do this:

        def test_alter_current_thread
          Thread.current = a_stubbed_method
          # do something that involves the work of Thread.current
          Thread.current = default_thread_current
        end

    Basically, I want to alter a method of a class inside a test method and restore it afterwards. I know it sounds complex for another language like Java or C# (in Java, only a powerful mock framework can do it). But this is Ruby, and I hope such nasty stuff is available.

    Read the article

  • What are common uses of condition variables in C++?

    - by jasonline
    I'm trying to learn about condition variables. I would like to know what are the common situations where condition variables are used. One example is in a blocking queue, where two threads access the queue - the producer thread pushes an item into the queue, while the consumer thread pops an item from the queue. If the queue is empty, the consumer thread is waiting until a signal is sent by the producer thread. What are other design situations where you need a condition variable to be used?
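
    The blocking-queue situation described above is the canonical shape of condition-variable code in any language. A compact sketch (written here in Java's java.util.concurrent.locks API, where await/signal correspond to waiting on and signalling the condition) of a queue whose pop() blocks until a producer pushes an item:

        import java.util.LinkedList;
        import java.util.Queue;
        import java.util.concurrent.locks.Condition;
        import java.util.concurrent.locks.Lock;
        import java.util.concurrent.locks.ReentrantLock;

        // Minimal blocking queue built directly on a lock + condition variable,
        // mirroring the producer/consumer example described in the question.
        class SimpleBlockingQueue<T> {
            private final Queue<T> items = new LinkedList<T>();
            private final Lock lock = new ReentrantLock();
            private final Condition notEmpty = lock.newCondition();

            public void push(T item) {
                lock.lock();
                try {
                    items.add(item);
                    notEmpty.signal();          // wake one consumer waiting on the condition
                } finally {
                    lock.unlock();
                }
            }

            public T pop() throws InterruptedException {
                lock.lock();
                try {
                    while (items.isEmpty()) {   // guard against spurious wakeups
                        notEmpty.await();       // releases the lock while waiting
                    }
                    return items.remove();
                } finally {
                    lock.unlock();
                }
            }
        }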

    Read the article

  • Python cannot go over internet network

    - by user1642826
    I am currently trying to work with Python networking and I have reached a bit of a roadblock. I am not able to network with any computer but localhost, which is kind of useless where networking is concerned. I have tried on my local network, from one computer to another, and I have tried over the internet; both fail. The only time I can make it work is if (when running on the server's computer) the IP is set as 'localhost' or '192.168.2.129' (the computer's IP). I have spent hours going over opening ports with my ISP and have gotten nowhere, so I decided to try this forum. I have the Windows firewall down and I have included some pictures of important screenshots. I have no idea what the problem is, and this has spanned almost a year of calls to my ISP. The computer, modem, and router have all been replaced in that time.

    Screen shots: (not reproduced here)

        import socket
        import threading
        import socketserver

        class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):

            def handle(self):
                data = self.request.recv(1024)
                cur_thread = threading.current_thread()
                response = "{}: {}".format(cur_thread.name, data)
                self.request.sendall(b'worked')

        class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
            pass

        def client(ip, port, message):
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.connect((ip, port))
            try:
                sock.sendall(message)
                response = sock.recv(1024)
                print("Received: {}".format(response))
            finally:
                sock.close()

        if __name__ == "__main__":
            # Port 0 means to select an arbitrary unused port
            HOST, PORT = "192.168.2.129", 9000

            server = ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler)
            ip, port = server.server_address

            # Start a thread with the server -- that thread will then start one
            # more thread for each request
            server_thread = threading.Thread(target=server.serve_forever)
            # Exit the server thread when the main thread terminates
            server_thread.daemon = True
            server_thread.start()
            print("Server loop running in thread:", server_thread.name)

            ip = '12.34.56.789'
            print(ip, port)
            client(ip, port, b'Hello World 1')
            client(ip, port, b'Hello World 2')
            client(ip, port, b'Hello World 3')

            server.shutdown()

    I do not know where the error is occurring. I get this error:

        Traceback (most recent call last):
          File "C:\Users\Dr.Frev\Desktop\serverTest.py", line 43, in <module>
            client(ip, port, b'Hello World 1')
          File "C:\Users\Dr.Frev\Desktop\serverTest.py", line 18, in client
            sock.connect((ip, port))
        socket.error: [Errno 10061] No connection could be made because the target machine actively refused it

    Any help will be greatly appreciated. (If this isn't a proper forum for this, could someone direct me to a more appropriate one?)

    Read the article
