Search Results

Search found 5026 results on 202 pages for 'blocked threads'.

Page 174/202

  • Delphi - Help calling threaded dll function from another thread

    - by cloudstrif3
    I'm using Delphi 2006 and have a problem with an application I'm developing. I have a form that creates a thread which calls a function that performs a lengthy operation; let's call it LengthyProcess. Inside LengthyProcess we also call several DLL functions that create threads of their own. The problem is that if I don't use my thread's Synchronize function to call LengthyProcess, the thread stops responding (the main thread still responds fine). I don't want to use Synchronize, because then the main thread waits for LengthyProcess to finish, which defeats the purpose of creating a separate thread. I have tracked the problem down to a function inside the DLL that creates a thread and then calls WaitFor; this is all done using TThread, by the way. WaitFor checks whether CurrentThreadID equals MainThreadID, and if it does it calls CheckSynchronize, and all is fine. So if we use Synchronize, CurrentThreadID equals MainThreadID; if we don't, then of course CurrentThreadID <> MainThreadID, and when that happens WaitFor simply tells the current thread (the thread I created) to wait for the thread created by the DLL. CheckSynchronize never gets called, and my thread ends up waiting forever for the thread created in the DLL. I hope this makes sense; sorry, I don't know a better way to explain it. Has anyone else had this issue, and do you know how to solve it?

    Read the article

  • Pump Messages During Long Operations + C# (it is urgent)

    - by Newbie
    Hi, I have a web service that does a huge computation and takes more than a minute. I have generated the proxy for the web service, and from my client end I am using the DLL (the proxy DLL I generated). My client-side code is: TimeSeries3D t = new TimeSeries3D(); int portfolioId = 4387919; string[] str = new string[2]; str[0] = "MKT_CAP"; DateRange dr = new DateRange(); dr.mStartDate = DateTime.Today; dr.mEndDate = DateTime.Today; Service1 sc = new Service1(); t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr); But since the server takes too much time to compute, after one minute I receive an error message: The CLR has been unable to transition from COM context 0x33caf30 to COM context 0x33cb0a0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations. Kindly guide me on what to do? It is very urgent. Thanks

    Read the article

  • Python's asyncore to periodically send data using a variable timeout. Is there a better way?

    - by Nick Sonneveld
    I wanted to write a server that a client could connect to and receive periodic updates without having to poll. The problem I have experienced with asyncore is that if you do not return true when dispatcher.writable() is called, you have to wait until after the asyncore.loop has timed out (default is 30s). The two ways I have tried to work around this is 1) reduce timeout to a low value or 2) query connections for when they will next update and generate an adequate timeout value. However if you refer to 'Select Law' in 'man 2 select_tut', it states, "You should always try to use select() without a timeout." Is there a better way to do this? Twisted maybe? I wanted to try and avoid extra threads. I'll include the variable timeout example here: #!/usr/bin/python import time import socket import asyncore # in seconds UPDATE_PERIOD = 4.0 class Channel(asyncore.dispatcher): def __init__(self, sock, sck_map): asyncore.dispatcher.__init__(self, sock=sock, map=sck_map) self.last_update = 0.0 # should update immediately self.send_buf = '' self.recv_buf = '' def writable(self): return len(self.send_buf) > 0 def handle_write(self): nbytes = self.send(self.send_buf) self.send_buf = self.send_buf[nbytes:] def handle_read(self): print 'read' print 'recv:', self.recv(4096) def handle_close(self): print 'close' self.close() # added for variable timeout def update(self): if time.time() >= self.next_update(): self.send_buf += 'hello %f\n'%(time.time()) self.last_update = time.time() def next_update(self): return self.last_update + UPDATE_PERIOD class Server(asyncore.dispatcher): def __init__(self, port, sck_map): asyncore.dispatcher.__init__(self, map=sck_map) self.port = port self.sck_map = sck_map self.create_socket(socket.AF_INET, socket.SOCK_STREAM) self.bind( ("", port)) self.listen(16) print "listening on port", self.port def handle_accept(self): (conn, addr) = self.accept() Channel(sock=conn, sck_map=self.sck_map) # added for variable timeout def update(self): pass def next_update(self): return None sck_map = {} server = Server(9090, sck_map) while True: next_update = time.time() + 30.0 for c in sck_map.values(): c.update() # <-- fill write buffers n = c.next_update() #print 'n:',n if n is not None: next_update = min(next_update, n) _timeout = max(0.1, next_update - time.time()) asyncore.loop(timeout=_timeout, count=1, map=sck_map)

    Read the article

  • SQLite datatype lengths?

    - by XF
    I'm completely new to SQLite (as of about five minutes ago), but I know the Oracle and MySQL backends somewhat. The question: I'm trying to find the lengths of each of the datatypes supported by SQLite, such as the difference between a bigint and a smallint. I've searched the SQLite documentation (it only talks about affinity; is that all that matters?), SO threads, Google... and found nothing. My guess: I've only skimmed the SQL92 specification, which talks about datatypes and their relations but not their lengths, which I suppose is to be expected. Yet I've come across the Oracle and MySQL datatype specs, and the specified lengths are mostly identical, for integers at least. Should I assume SQLite uses the same lengths? Aside question: have I missed something in the SQLite docs, or about SQL in general? I ask because I can't understand why the SQLite docs don't specify something as basic as datatype lengths; it just doesn't make sense to me. Although I'm sure there is a simple command to discover the lengths, why not write them in the docs? Thank you!

    Read the article

  • gpg error - connection already closed?

    - by OopsForgotMyOtherUserName
    omg... hope someone can help me because I am so lost as to what to try next.... I don't know what is causing the error to happen, and I don't see how to figure it out... Keep going between the pgloader.conf examples and what I have, and I don't understand why I keep getting the 'connection already closed' error? The first few lines of my fr.conf is at the very end... I'd really appreciate / love some guidance here... Been trying to get this thing going all morning, and am even getting stuck just on this part... Running this command at the command line: /usr/bin/pgloader -c /var/mybin/pgconfs/fr.conf Yields this in the pgloader.log (with the process just hanging) more /tmp/pgloader.log 27-03-2010 12:22:53 pgloader INFO Logger initialized 27-03-2010 12:22:53 pgloader INFO Reformat path is ['/usr/share/python-support/pgloader/reformat'] 27-03-2010 12:22:53 pgloader INFO Will consider following sections: 27-03-2010 12:22:53 pgloader INFO fixed 27-03-2010 12:22:54 fixed INFO fixed processing 27-03-2010 12:22:54 pgloader INFO All threads are started, wait for them to terminate 27-03-2010 12:22:57 fixed ERROR connection already closed 27-03-2010 12:22:57 fixed INFO closing current database connection [pgsql] host = localhost port = 5432 base = frdb user = username pass = password [fixed] table = fr format = fixed filename = /var/www/fr.txt ...

    Read the article

  • C++ Switch won't compile with externally defined variable used as case

    - by C Nielsen
    I'm writing C++ using the MinGW GNU compiler and the problem occurs when I try to use an externally defined integer variable as a case in a switch statement. I get the following compiler error: "case label does not reduce to an integer constant". Because I've defined the integer variable as extern I believe that it should compile, does anyone know what the problem may be? Below is an example: test.cpp #include <iostream> #include "x_def.h" int main() { std::cout << "Main Entered" << std::endl; switch(0) { case test_int: std::cout << "Case X" << std::endl; break; default: std::cout << "Case Default" << std::endl; break; } return 0; } x_def.h extern const int test_int; x_def.cpp const int test_int = 0; This code will compile correctly on Visual C++ 2008. Furthermore a Montanan friend of mine checked the ISO C++ standard and it appears that any const-integer expression should work. Is this possibly a compiler bug or have I missed something obvious? Here's my compiler version information: Reading specs from C:/MinGW/bin/../lib/gcc/mingw32/3.4.5/specs Configured with: ../gcc-3.4.5-20060117-3/configure --with-gcc --with-gnu-ld --with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-synchronization --enable-libstdcxx-debug Thread model: win32 gcc version 3.4.5 (mingw-vista special r3)
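    A minimal sketch of the usual explanation and workaround (names taken from the question; this is illustrative, not the only fix): a case label must be an integral constant expression, and an extern const int whose initializer lives in another translation unit is not one for this compiler, so moving the definition into the header lets the compiler see the value.

    ```cpp
    // x_def.h: define the constant where every translation unit can see its value.
    const int test_int = 0;   // const at namespace scope has internal linkage in C++

    // test.cpp
    #include <iostream>
    #include "x_def.h"

    int main()
    {
        switch (0)
        {
            case test_int:   // now a genuine integral constant expression
                std::cout << "Case X" << std::endl;
                break;
            default:
                std::cout << "Case Default" << std::endl;
                break;
        }
        return 0;
    }
    ```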

    Read the article

  • Can I get rid of this read lock?

    - by Pieter
    I have the following helper class (simplified): public static class Cache { private static readonly object _syncRoot = new object(); private static Dictionary<Type, string> _lookup = new Dictionary<Type, string>(); public static void Add(Type type, string value) { lock (_syncRoot) { _lookup.Add(type, value); } } public static string Lookup(Type type) { string result; lock (_syncRoot) { _lookup.TryGetValue(type, out result); } return result; } } Add will be called roughly 10 to 100 times in the application, and Lookup will be called by many threads, many thousands of times. What I would like is to get rid of the read lock. How do you normally get rid of the read lock in this situation? I have the following ideas: 1) Require that _lookup is stable before the application starts operating. It could be built up from an attribute; this happens automatically through the static constructor of the type the attribute is assigned to. Requiring that would mean going through all types that could have the attribute and calling RuntimeHelpers.RunClassConstructor, which is an expensive operation. 2) Move to COW semantics: public static void Add(Type type, string value) { lock (_syncRoot) { var lookup = new Dictionary<Type, string>(_lookup); lookup.Add(type, value); _lookup = lookup; } } (With the lock (_syncRoot) removed in the Lookup method.) The problem is that this uses an unnecessary amount of memory (which might not be a problem), and I would probably have to make _lookup volatile, but I'm not sure how that should be applied. (Jon Skeet's comment here gives me pause.) 3) Use ReaderWriterLock. I believe this would make things worse, since the region being locked is small. Suggestions are very welcome.

    Read the article

  • pthreads: reader/writer locks, upgrading read lock to write lock

    - by ScaryAardvark
    I'm using read/write locks on Linux and I've found that trying to upgrade a read-locked object to a write lock deadlocks, i.e. // acquire the read lock in thread 1. pthread_rwlock_rdlock( &lock ); // decide to upgrade the lock in thread 1. pthread_rwlock_wrlock( &lock ); // this deadlocks, as we already hold the read lock. I've read the man page and it's quite specific: "The calling thread may deadlock if at the time the call is made it holds the read-write lock (whether a read or write lock)." What is the best way to upgrade a read lock to a write lock in these circumstances? I don't want to introduce a race on the variable I'm protecting. Presumably I can create another mutex to encompass releasing the read lock and acquiring the write lock, but then I don't really see the point of read/write locks; I might as well simply use a normal mutex. Thanks
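    A sketch of the usual release-and-revalidate pattern (the shared state here is hypothetical, purely for illustration): drop the read lock, take the write lock, then re-check the condition, because another writer may have run in the gap between the two calls.

    ```cpp
    #include <pthread.h>

    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
    static int shared_value = 0;   // stand-in for the protected variable

    void bump_if_below(int limit)
    {
        pthread_rwlock_rdlock(&lock);
        bool below = (shared_value < limit);   // read under the read lock
        pthread_rwlock_unlock(&lock);          // release before "upgrading"

        if (below)
        {
            pthread_rwlock_wrlock(&lock);
            if (shared_value < limit)          // re-validate: state may have changed
                ++shared_value;
            pthread_rwlock_unlock(&lock);
        }
    }
    ```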

    Read the article

  • The CLR has been unable to transition from COM context [...] for 60 seconds

    - by BlueRaja The Green Unicorn
    I am getting this error on code that used to work. I have not changed the code. Here is the full error: The CLR has been unable to transition from COM context 0x3322d98 to COM context 0x3322f08 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations. And here is the code that caused it: var openFileDialog1 = new System.Windows.Forms.OpenFileDialog(); openFileDialog1.DefaultExt = "mdb"; openFileDialog1.Filter = "Management Database (manage.mdb)|manage.mdb"; //Stalls indefinitely on the following line, then gives the CLR error //one minute later. The dialog never opens. if(openFileDialog1.ShowDialog() == DialogResult.OK) { .... } Yes, I am sure the dialog is not open in the background, and no, I don't have any explicit COM code or unmanaged marshalling or multithreading. I have no idea why the OpenFileDialog won't open - any ideas?

    Read the article

  • Java semaphore to synchronize printing to screen

    - by Travis Griswald
    I'm currently stuck on a bit of homework and was wondering if anyone could help - I have to use semaphores in java to syncronize printing letters from 2 threads - one printing "A" and one printing "B". I cannot print out more than 2 of the same character in a row, so output should look like AABABABABABBABABABABAABBAABBABABA At the moment I have 3 semaphores, a binary mutex set to 1, and a counting semaphore, and my thread classes look something like this - public void run() { while (true) { Time.delay(RandomGenerator.integer(0,20)); Semaphores.mutex.down (); System.out.println (produce()); if (printCount > 1) { printCount = 0; Semaphores.mutex.up (); Semaphores.printB.up(); } } } public String produce() { printCount++; return "A"; } public void run() { while (true) { Time.delay(RandomGenerator.integer(0,20)); Semaphores.mutex.down (); System.out.println (produce()); if (printCount > 1) { printCount = 0; Semaphores.mutex.up (); Semaphores.printA.up(); } } } public String produce() { printCount++; return "B"; } Yet whatever I try it either deadlocks, or it seems to be working only printing 2 in a row at most, but always seems to print 3 in a row every now and again! Any help is much appreciated, not looking code or anything just a few pointers if possible :)

    Read the article

  • How do I write tasks? (parallel code)

    - by acidzombie24
    I am impressed with Intel Threading Building Blocks. I like that I should write tasks rather than thread code, and I like how it works under the hood, as far as my limited understanding goes (tasks sit in a pool, there won't be 100 threads on 4 cores, and a task is not guaranteed to run because it isn't on its own thread and may be far back in the pool; but it may be run with another related task, so you can't do bad things like typical thread-unsafe code). I wanted to know more about writing tasks. I like the 'Task-based Multithreading - How to Program for 100 cores' video here: http://www.gdcvault.com/sponsor.php?sponsor_id=1 (currently the second-to-last link; warning, it isn't great). My favourite part was 'solving the maze is better done in parallel', which is around the 48-minute mark (you can click the link on the left side; that part is really all you need to watch, if any). However, I would like to see more code examples and some of the API for writing tasks. Does anyone have a good resource? I have no idea what a class or a piece of code might look like after pushing it onto a pool, how weird the code might look when you need to make a copy of everything, or how much of everything gets pushed onto a pool.
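    For a flavour of what task code can look like, here is a small sketch (assuming the TBB headers and a C++11-capable compiler; the function is made up): you describe the work as a loop body, and the scheduler chops the range into tasks and maps them onto the cores. The lambda (or functor) is the thing that gets copied into the pool, so "what gets copied" mostly comes down to what that object captures.

    ```cpp
    #include <tbb/parallel_for.h>
    #include <tbb/blocked_range.h>
    #include <vector>

    void scale_all(std::vector<float>& v, float k)
    {
        tbb::parallel_for(
            tbb::blocked_range<std::size_t>(0, v.size()),
            [&](const tbb::blocked_range<std::size_t>& r) {
                for (std::size_t i = r.begin(); i != r.end(); ++i)
                    v[i] *= k;   // each chunk of the range becomes a task in the pool
            });
    }
    ```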

    Read the article

  • Determine if the current thread has low I/O priority

    - by Magnus Hoff
    I have a background thread that does some I/O-intensive background type work. To please the other threads and processes running, I set the thread priority to "background mode" using SetThreadPriority, like this: SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN); However, THREAD_MODE_BACKGROUND_BEGIN is only available in Windows Server 2008 or newer, as well as Windows Vista and newer, but the program needs to work well on Windows Server 2003 and XP as well. So the real code is more like this: if (!SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN)) { SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST); } The problem with this is that on Windows XP it will totally disrupt the system by using too much I/O. I have a plan for a ugly and shameful way of mitigating this problem, but that depends on me being able to determine if the current thread has low I/O priority or not. Now, I know I can store which thread priority I ended up setting, but the control flow in the program is not really well suited for this. I would rather like to be able to test later whether or not the current thread has low I/O priority -- if it is in "background mode". GetThreadPriority does not seem to give me this information. Is there any way to determine if the current thread has low I/O priority?
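    One pragmatic sketch, offered as an assumption rather than the answer (GetThreadPriority does not report I/O priority, and this presumes an SDK that defines THREAD_MODE_BACKGROUND_BEGIN): record the decision in a per-thread flag at the one place the mode is set, so later code can ask "am I in background mode?" without threading the answer through the call chain.

    ```cpp
    #include <windows.h>

    namespace {
        __declspec(thread) bool g_low_io_priority = false;   // per-thread flag
    }

    void EnterBackgroundMode()
    {
        // Vista / Server 2008 and later: real background (low I/O) mode.
        if (SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN))
            g_low_io_priority = true;
        else
            // XP / Server 2003 fallback: only the CPU priority is lowered.
            SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
    }

    bool CurrentThreadHasLowIoPriority()
    {
        return g_low_io_priority;
    }
    ```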

    Read the article

  • Thread Safety of C# List<T> for readers

    - by ILIA BROUDNO
    I am planning to create the list once in a static constructor and then have multiple instances of that class read it (and enumerate through it) concurrently without doing any locking. In this article http://msdn.microsoft.com/en-us/library/6sh2ey19.aspx MS describes the issue of thread safety as follows: Public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe. A List can support multiple readers concurrently, as long as the collection is not modified. Enumerating through a collection is intrinsically not a thread-safe procedure. In the rare case where an enumeration contends with one or more write accesses, the only way to ensure thread safety is to lock the collection during the entire enumeration. To allow the collection to be accessed by multiple threads for reading and writing, you must implement your own synchronization. The "Enumerating through a collection is intrinsically not a thread-safe procedure." Statement is what worries me. Does this mean that it is thread safe for readers only scenario, but as long as you do not use enumeration? Or is it safe for my scenario?

    Read the article

  • How to interrupt a thread performing a blocking socket connect?

    - by Jason R
    I have some code that spawns a pthread that attempts to maintain a socket connection to a remote host. If the connection is ever lost, it attempts to reconnect using a blocking connect() call on its socket. Since the code runs in a separate thread, I don't really care about the fact that it uses the synchronous socket API. That is, until it comes time for my application to exit. I would like to perform some semblance of an orderly shutdown, so I use thread synchronization primitives to wake up the thread and signal for it to exit, then perform a pthread_join() on the thread to wait for it to complete. This works great, unless the thread is in the middle of a connect() call when I command the shutdown. In that case, I have to wait for the connect to time out, which could be a long time. This makes the application appear to take a long time to shut down. What I would like to do is to interrupt the call to connect() in some way. After the call returns, the thread will notice my exit signal and shut down cleanly. Since connect() is a system call, I thought that I might be able to intentionally interrupt it using a signal (thus making the call return EINTR), but I'm not sure if this is a robust method in a POSIX threads environment. Does anyone have any recommendations on how to do this, either using signals or via some other method? As a note, the connect() call is down in some library code that I cannot modify, so changing to a non-blocking socket is not an option.
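    A sketch of the signal approach mentioned above (the names are mine; treat it as an assumption to verify on your platform): install a do-nothing handler with sigaction and without SA_RESTART, then pthread_kill the worker; the blocking connect() should return -1 with errno set to EINTR, after which the thread can notice the exit flag and shut down.

    ```cpp
    #include <signal.h>
    #include <pthread.h>

    static void wake_handler(int) { /* deliberately empty */ }

    void install_wake_signal()
    {
        struct sigaction sa;
        sa.sa_handler = wake_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;               // no SA_RESTART, so connect() fails with EINTR
        sigaction(SIGUSR1, &sa, 0);
    }

    void interrupt_worker(pthread_t worker)
    {
        pthread_kill(worker, SIGUSR1); // interrupts a connect() in progress on that thread
    }
    ```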

    Read the article

  • Impossible to be const-correct when combining data and its lock?

    - by Graeme
    I've been looking at ways to combine a piece of data which will be accessed by multiple threads alongside the lock provisioned for thread-safety. I think I've got to a point where I don't think its possible to do this whilst maintaining const-correctness. Take the following class for example: template <typename TType, typename TMutex> class basic_lockable_type { public: typedef TMutex lock_type; public: template <typename... TArgs> explicit basic_lockable_type(TArgs&&... args) : TType(std::forward<TArgs...>(args)...) {} TType& data() { return data_; } const TType& data() const { return data_; } void lock() { mutex_.lock(); } void unlock() { mutex_.unlock(); } private: TType data_; mutable TMutex mutex_; }; typedef basic_lockable_type<std::vector<int>, std::mutex> vector_with_lock; In this I try to combine the data and lock, marking mutex_ as mutable. Unfortunately this isn't enough as I see it because when used, vector_with_lock would have to be marked as mutable in order for a read operation to be performed from a const function which isn't entirely correct (data_ should be mutable from a const). void print_values() const { std::lock_guard<vector_with_lock>(values_); for(const int val : values_) { std::cout << val << std::endl; } } vector_with_lock values_; Can anyone see anyway around this such that const-correctness is maintained whilst combining data and lock? Also, have I made any incorrect assumptions here?
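    One way out, sketched below with simplified names (so an assumption rather than a finished design): keep data_ non-mutable, mark only the mutex mutable, and make lock()/unlock() const, so a const object can still be locked. Note also that the guard needs a name; otherwise it is destroyed, and the lock released, at the end of the statement.

    ```cpp
    #include <iostream>
    #include <mutex>
    #include <vector>

    class lockable_vector   // simplified stand-in for basic_lockable_type
    {
    public:
        void lock() const   { mutex_.lock(); }     // const: locking is not a logical mutation
        void unlock() const { mutex_.unlock(); }
        std::vector<int>&       data()       { return data_; }
        const std::vector<int>& data() const { return data_; }
    private:
        std::vector<int> data_;
        mutable std::mutex mutex_;                 // only the mutex is mutable
    };

    void print_values(const lockable_vector& values)
    {
        std::lock_guard<const lockable_vector> guard(values);  // named guard, held for the loop
        for (int v : values.data())
            std::cout << v << '\n';
    }
    ```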

    Read the article

  • pure/const functions in C++

    - by Albert
    Hi, I'm thinking of using pure/const functions more heavily in my C++ code. (pure/const attribute in GCC) However, I am curious how strict I should be about it and what could possibly break. The most obvious case are debug outputs (in whatever form, could be on cout, in some file or in some custom debug class). I probably will have a lot of functions, which don't have any side effects despite this sort of debug output. No matter if the debug output is made or not, this will absolutely have no effect on the rest of my application. Or another case I'm thinking of is the use of my own SmartPointer class. In debug mode, my SmartPointer class has some global register where it does some extra checks. If I use such an object in a pure/const function, it does have some slight side effects (in the sense that some memory probably will be different) which should not have any real side effects though (in the sense that the behaviour is in any way different). Similar also for mutexes and other stuff. I can think of many complex cases where it has some side effects (in the sense of that some memory will be different, maybe even some threads are created, some filesystem manipulation is made, etc) but has no computational difference (all those side effects could very well be left out and I would even prefer that). How does it work out in practice? If I mark such functions as pure/const, could it break anything (considering that the code is all correct)?
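    For concreteness, a tiny sketch of the two attributes under discussion (assuming GCC/g++; the functions are made up): the point is that the compiler may merge, reorder, or drop calls it can prove redundant, so any debug output inside such a function can silently disappear.

    ```cpp
    // 'const': result depends only on the arguments, no reads of global state.
    __attribute__((const)) int square(int x)
    {
        return x * x;
    }

    static int table[256];

    // 'pure': may read (but not write) globals; repeated calls with the same
    // argument and unchanged globals can be folded into one.
    __attribute__((pure)) int lookup(int i)
    {
        return table[i & 0xff];
    }

    int twice_squared(int x)
    {
        // The compiler is free to evaluate square(x) only once here.
        return square(x) + square(x);
    }
    ```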

    Read the article

  • A Few Basic Questions on Overriding

    - by Dahlia
    I have few problems with my basic and would be thankful if someone can clear this. What does it mean when I say base *b = new derived; Why would one go for this? We very well separately can create objects for class base and class derived and then call the functions accordingly. I know that this base *b = new derived; is called as Object Slicing but why and when would one go for this? I know why it is not advisable to convert the base class object to derived class object (because base class is not aware of the derived class members and methods). I even read in other StackOverflow threads that if this is gonna be the case then we have to change/re-visit our design. I understand all that, however, I am just curious, Is there any way to do this? class base { public: void f(){cout << "In Base";} }; class derived:public base { public: void f(){cout << "In Derived";} }; int _tmain(int argc, _TCHAR* argv[]) { base b1, b2; derived d1, d2; b2 = d1; d2 = reinterpret_cast<derived*>(b1); //gives error C2440 b1.f(); // Prints In Base d1.f(); // Prints In Derived b2.f(); // Prints In Base d1.base::f(); //Prints In Base d2.f(); getch(); return 0; } In case of my above example, is there any way I could call the base class f() using derived class object? I used d1.base()::f() I just want to know if there any way without using scope resolution operator? Thanks a lot for your time in helping me out!
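    A short sketch that may help with the first question (illustrative code, not the poster's): the usual reason for writing base *b = new derived; is dynamic dispatch through virtual functions, and the base version stays reachable through the scope operator.

    ```cpp
    #include <iostream>

    struct base
    {
        virtual void f() const { std::cout << "In Base" << std::endl; }
        virtual ~base() {}
    };

    struct derived : base
    {
        void f() const { std::cout << "In Derived" << std::endl; }  // overrides base::f
    };

    int main()
    {
        base* b = new derived;
        b->f();          // prints "In Derived": the call is dispatched at run time
        b->base::f();    // prints "In Base": explicit, statically bound call
        delete b;
        return 0;
    }
    ```

    Note that in the question's own code it is the assignment b2 = d1; where slicing actually happens: the derived part is cut off when copying into a base object.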

    Read the article

  • What does the windbg command "kd" do?

    - by Oskar
    I ran kd by mistake and got some output that inteerested me, a reference to a line of code in my module that I can't see on the call stack of any thread. The lines weren't the beginnning of the method so I don't think the reference is to a function pointer, but possibly the result of an exception being stored in memory??? Of course, that happens to be what I'm looking for... Update: The stack trace of the exception is: 0:000> kb *** Stack trace for last set context - .thread/.cxr resets it ChildEBP RetAddr Args to Child 0174f168 734ea84f 2cb9e950 00000000 2cb9e950 kernel32!LoadTimeZoneInformation+0x2b 0174f1c4 734ead92 00000022 00000001 000685d0 msvbvm60! RUN_INSTMGR::ExecuteInitTerm+0x178 0174f1f8 734ea9ee 00000000 0000002f 2dbc2abc msvbvm60! RUN_INSTMGR::CreateObjInstanceWithParts+0x1e4 0174f278 7350414e 2cb9e96c 00000000 0174f2f0 msvbvm60! RUN_INSTMGR::CreateObjInstance+0x14d 0174f2e4 734fa071 00000000 2cb9e96c 0174f2fc msvbvm60!RcmConstructObjectInstance+0x75 0174f31c 00976ef1 2cb9e950 00591bc0 0174fddc msvbvm60!__vbaNew+0x21 and into our code (create a new Form derived class) the dds output: 0:000> dds esp-0x40 esp+0x100 0174f05c 00000000 0174f060 00000000 0174f064 00000000 0174f068 00000000 0174f06c 00000000 0174f070 00000000 0174f074 00000000 0174f078 00000000 0174f07c 00000000 0174f080 00000000 0174f084 00000000 0174f088 00000000 0174f08c 00000000 0174f090 00000000 0174f094 00000000 0174f098 00000000 0174f09c 007f4f9b ourDll!formDerivedClass::Form_Initialize+0x10b [C:\Buildbox\formDerivedClass.frm @ 1452] etc which seems to indicate that Initialize is being called even though it isn't on the stack trace of either this exception or any of the threads. As suggested, it might all be a mismatch between pdbs and dlls, but it seems a coincidence that we end up in the right classes and methods

    Read the article

  • NSTimer as a timeout mechanism

    - by alexantd
    I'm pretty sure this is really simple, and I'm just missing something obvious. I have an app that needs to download data from a web service for display in a UITableView, and I want to display a UIAlertView if the operation takes more than X seconds to complete. So this is what I've got (simplified for brevity): MyViewController.h @interface MyViewController : UIViewController <UITableViewDelegate, UITableViewDataSource> { NSTimer *timer; } @property (nonatomic, retain) NSTimer *timer; MyViewController.m @implementation MyViewController @synthesize timer; - (void)viewDidLoad { timer = [NSTimer scheduledTimerWithTimeInterval:20 target:self selector:@selector(initializationTimedOut:) userInfo:nil repeats:NO]; [self doSomethingThatTakesALongTime]; [timer invalidate]; } - (void)doSomethingThatTakesALongTime { sleep(30); // for testing only // web service calls etc. go here } - (void)initializationTimedOut:(NSTimer *)theTimer { // show the alert view } My problem is that I'm expecting the [self doSomethingThatTakesALongTime] call to block while the timer keeps counting, and I'm thinking that if it finishes before the timer is done counting down, it will return control of the thread to viewDidLoad where [timer invalidate] will proceed to cancel the timer. Obviously my understanding of how timers/threads work is flawed here because the way the code is written, the timer never goes off. However, if I remove the [timer invalidate], it does.

    Read the article

  • C++ VB6 interfacing problem

    - by Roshan
    Hi, I'm tearing my hair out trying to solve this one, any insights will be much appreciated: I have a C++ exe which acquires data from some hardware in the main thread and processes it in another thread (thread 2). I use a c++ dll to supply some data processing functions which are called from thread 2. I have a requirement to make another set of data processing functions in VB6. I have thus created a VB6 dll, using the add-in vbAdvance to create a standard dll. When I call functions from within this VB6 dll from the main thread, everything works exactly as expected. When I call functions from this VB6 dll in thread 2, I get an access violation. I've traced the error to the CopyMemory command, it would seem that if this is used within the call from the main thread, it's fine but in a call from the process thread, it causes an exception. Why should this be so? As far as I understand, threads share the same address space. Here is the code from my VB dll Public Sub UserFunInterface(ByVal in1ptr As Long, ByVal out1ptr As Long, ByRef nsamples As Long) Dim myarray1() As Single Dim myarray2() As Single Dim i As Integer ReDim myarray1(0 To nsamples - 1) As Single ReDim myarray2(0 To nsamples - 1) As Single With tsa1din(0) ' defined as safearray1d in a global definitions module .cDims = 1 .cbElements = 4 .cElements = nsamples .pvData = in1ptr End With With tsa1dout .cDims = 1 .cbElements = 4 .cElements = nsamples .pvData = out1ptr End With CopyMemory ByVal VarPtrArray(myarray1), VarPtr(tsa1din(0)), 4 CopyMemory ByVal VarPtrArray(myarray2), VarPtr(tsa1dout), 4 For i = 0 To nsamples - 1 myarray2(i) = myarray1(i) * 2 Next i ZeroMemory ByVal VarPtrArray(myarray1), 4 ZeroMemory ByVal VarPtrArray(myarray2), 4 End Sub

    Read the article

  • Alternatives to the Entity Framework for Serving/Consuming an OData Interface

    - by Egahn
    I'm researching how to set up an OData interface to our database. I would like to be able to pull/query data from our DB into Excel, as a start. Eventually I would like to have Excel run queries and pull data over HTTP from a remote client, including authentication, etc. I've set up a working (rickety) prototype so far, using the ADO.NET Entity Data Model wizard in Visual Studio, and VSTO to create a test Excel worksheet with a button to pull from that ADO.NET interface. This works OK so far, and I can query the DB using Linq through the entities/objects that are created by the ADO.NET EDM wizard. However, I have started to run into some problems with this approach. I've been finding the Entity Framework difficult to work with (and in fact, also difficult to research solutions to, as there's a lot of chaff out there regarding it and older versions of it). An example of this is my being unable to figure out how to set the SQL command timeout (as opposed to the HTTP request timeout) on the DataServiceContext object that the wizard generates for my schema, but that's not the point of my question. The real question I have is, if I want to use OData as my interface standard, am I stuck with the Entity Framework? Are there any other solutions out there (preferably open source) which can set up, serve and consume an OData interface, and are easier to work with and less bloated than the Entity Framework? I have seen mention of NHibernate as an alternative, but most of the comparison threads I've seen are a few years old. Are there any other alternatives out there now? Thanks very much!

    Read the article

  • If a nonblocking recv with MSG_PEEK succeeds, will a subsequent recv without MSG_PEEK also succeed?

    - by Michael Wolf
    Here's a simplified version of some code I'm working on: void stuff(int fd) { int ret1, ret2; char buffer[32]; ret1 = recv(fd, buffer, 32, MSG_PEEK | MSG_DONTWAIT); /* Error handling -- and EAGAIN handling -- would go here. Bail if necessary. Otherwise, keep going. */ /* Can this call to recv fail, setting errno to EAGAIN? */ ret2 = recv(fd, buffer, ret1, 0); } If we assume that the first call to recv succeeds, returning a value between 1 and 32, is it safe to assume that the second call will also succeed? Can ret2 ever be less than ret1? In which cases? (For clarity's sake, assume that there are no other error conditions during the second call to recv: that no signal is delivered, that it won't set ENOMEM, etc. Also assume that no other threads will look at fd. I'm on Linux, but MSG_DONTWAIT is, I believe, the only Linux-specific thing here. Assume that the right fnctl was set previously on other platforms.)
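    A defensive sketch of the second call (my own wrapper, not from the question): for a TCP socket the peeked bytes normally stay queued, so the follow-up recv() is expected to succeed, but it is cheap to handle EINTR, EAGAIN, and a short read anyway rather than assume ret2 == ret1.

    ```cpp
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <errno.h>

    ssize_t read_after_peek(int fd, char* buf, size_t peeked)
    {
        for (;;)
        {
            ssize_t n = recv(fd, buf, peeked, 0);
            if (n >= 0)
                return n;                              /* usually == peeked, but check */
            if (errno == EINTR)
                continue;                              /* interrupted by a signal: retry */
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return 0;                              /* nothing there after all */
            return -1;                                 /* genuine error */
        }
    }
    ```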

    Read the article

  • How to optimize Conway's game of life for CUDA?

    - by nlight
    I've written this CUDA kernel for Conway's Game of Life: __global__ void gameOfLife(float* returnBuffer, int width, int height) { unsigned int x = blockIdx.x*blockDim.x + threadIdx.x; unsigned int y = blockIdx.y*blockDim.y + threadIdx.y; float p = tex2D(inputTex, x, y); float neighbors = 0; neighbors += tex2D(inputTex, x+1, y); neighbors += tex2D(inputTex, x-1, y); neighbors += tex2D(inputTex, x, y+1); neighbors += tex2D(inputTex, x, y-1); neighbors += tex2D(inputTex, x+1, y+1); neighbors += tex2D(inputTex, x-1, y-1); neighbors += tex2D(inputTex, x-1, y+1); neighbors += tex2D(inputTex, x+1, y-1); __syncthreads(); float final = 0; if(neighbors < 2) final = 0; else if(neighbors > 3) final = 0; else if(p != 0) final = 1; else if(neighbors == 3) final = 1; __syncthreads(); returnBuffer[x + y*width] = final; } I am looking for errors/optimizations. Parallel programming is quite new to me and I am not sure I am doing it right. The rest of the app is: memcpy the input array to a 2D texture inputTex stored in a CUDA array; the output is memcpy'd from global memory to the host and then dealt with. As you can see, a thread deals with a single pixel. I am unsure whether that is the fastest way, as some sources suggest doing a row or more per thread. If I understand correctly, NVIDIA themselves say the more threads, the better. I would love advice on this from someone with practical experience.

    Read the article

  • Using boost asio for pub/sub style tcp in a game loop

    - by unohoo
    I have been reading through the boost asio documentation for a couple of hours now, and while I think the documentation is really great, I am still left a bit confused on how to implement the system that I need. I have to stream info, from a game engine, to a list of computers over tcp. One snag is that, unlike traditional pub/sub, the computer that does the distribution of info is actually the computer that has to connect to the subscribers as well (instead of the subscribers registering with the publisher). This is done via a config file - a list of ip's/ports along with the data that they each require. The subscribers listen, but do not know the ip of the publisher. (As a side note, I'm quite new to network programming, so maybe I'm missing something .. but it's strange that I do not find much information regarding this style of "inverted" client-server model..) I am looking for suggestions for the implementation of such a system using boost asio. Of course I have to integrate the networking into an already existing engine, so with regards to that: What would be a good way to handle messages being sent to multiple computers every frame? Use async_write, call io_service.run and then reset every frame? Would having io_service.run have its own thread be better? Or should I just use threads and use blocking writes?
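    A rough sketch of one per-frame approach (names and structure are mine, assuming Boost.Asio roughly as of the 1.4x releases): keep a socket per subscriber that the publisher itself connected, queue an async_write for each one every frame, and pump completions with io_service::poll() so the game loop never blocks. One simplification to note: a real version must not start a new async_write on a socket before the previous one has completed.

    ```cpp
    #include <boost/asio.hpp>
    #include <boost/shared_ptr.hpp>
    #include <string>
    #include <vector>

    using boost::asio::ip::tcp;

    struct Subscriber
    {
        boost::shared_ptr<tcp::socket> socket;
        std::string outgoing;   // must stay alive until the async_write completes
    };

    void on_write(const boost::system::error_code& ec, std::size_t /*bytes*/)
    {
        if (ec) { /* a real version would drop or reconnect this subscriber */ }
    }

    void publish_frame(boost::asio::io_service& io,
                       std::vector<Subscriber>& subs,
                       const std::string& frame_data)
    {
        for (std::size_t i = 0; i < subs.size(); ++i)
        {
            subs[i].outgoing = frame_data;
            boost::asio::async_write(*subs[i].socket,
                                     boost::asio::buffer(subs[i].outgoing),
                                     &on_write);
        }
        io.poll();   // run whatever handlers are ready, then return to the game loop
    }
    ```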

    Read the article
