Search Results

Search found 1638 results on 66 pages for 'multithreading'.

Page 42/66 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • Timeout for a function that waits indefinitely (like listen())

    - by Fantastic Fourier
    Hello, I'm not quite sure if it's possible to do what I'm about to ask, so I thought I'd ask. I have a multi-threaded program where threads share a memory block to communicate the necessary information. One piece of that information is thread termination: threads constantly check this value, and when it changes they know it's time for pthread_exit(). One of the threads contains a listen() call, and it seems to wait indefinitely. This is problematic when nobody wants to make a connection and the thread needs to exit: it can't check whether it should terminate because it's stuck on listen() and can't move beyond it. while(1) { listen(); ... if(value == 1) pthread_exit(NULL); } My logic is something like that, if it helps illustrate my point. What I thought would solve the problem is to let listen() wait for a limited duration and, if nothing happens, move on to the next statement. Unfortunately, neither of listen()'s two arguments involves a time limit. I'm not even sure I'm going about multi-threaded programming the right way; I'm not very experienced. So is this a good approach? Perhaps there is a better way to go about it? Thanks for any insightful comments.
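
    A possible direction (not part of the original post, and assuming a standard POSIX sockets setup where the blocking call is accept() on the listening socket): select() can wait on the socket with a timeout, so the loop gets a chance to check the shutdown flag between waits.

        #include <sys/select.h>
        #include <sys/socket.h>
        #include <pthread.h>

        /* minimal sketch: poll the listening socket with a 1-second timeout */
        for (;;) {
            fd_set readfds;
            struct timeval tv = {1, 0};              /* 1 second, 0 microseconds */
            FD_ZERO(&readfds);
            FD_SET(listen_fd, &readfds);             /* listen_fd: assumed listening socket */

            int ready = select(listen_fd + 1, &readfds, NULL, NULL, &tv);
            if (ready > 0) {
                int client = accept(listen_fd, NULL, NULL);  /* will not block: a connection is pending */
                /* ... hand the client off to the rest of the program ... */
            }
            if (value == 1)                          /* the shared shutdown flag from the question */
                pthread_exit(NULL);
        }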

    Read the article

  • Is memory allocation in Linux non-blocking?

    - by Mark
    I am curious to know whether allocating memory using the default new operator is a non-blocking operation. e.g. struct Node { int a,b; }; ... Node *foo = new Node(); If multiple threads tried to create a new Node and one of them was suspended by the OS in the middle of the allocation, would it block the other threads from making progress? The reason I ask is that I had a concurrent data structure that created new nodes. I then modified the algorithm to recycle the nodes. The throughput of the two algorithms was virtually identical on a 24-core machine. However, I then created an interference program that ran on all the system cores in order to cause as much OS pre-emption as possible. The throughput of the algorithm that created new nodes decreased by a factor of 5 relative to the algorithm that recycled nodes. I'm curious to know why this would occur. Thanks. Edit: pointing me to the code for the C++ memory allocator for Linux would be helpful as well. I tried looking before posting this question, but had trouble finding it.
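
    One way to picture the recycling approach mentioned in the question (an illustrative sketch, not the poster's code; the names and the thread_local free list are assumptions): each thread keeps a private list of returned nodes, so reuse never has to enter the global allocator, whose internal locking is where a suspended thread can hold everyone else up.

        struct Node { int a, b; Node* next; };      // 'next' added here only for the free list

        thread_local Node* free_list = nullptr;      // one list per thread: no sharing, no lock

        Node* acquire_node() {
            if (free_list) {                         // fast path: reuse without touching the heap
                Node* n = free_list;
                free_list = n->next;
                return n;
            }
            return new Node();                       // slow path: may contend inside the allocator
        }

        void release_node(Node* n) {
            n->next = free_list;                     // recycle locally instead of calling delete
            free_list = n;
        }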

    Read the article

  • Race condition for thread startup

    - by Ozzah
    A similar question was asked here, but the answers generally all seem to relate to the lambda notation. I get a similar result without the lambda, so I thought I'd ask for some clarification: Say I have something like this: for (int i = 0; i < 5; i++) (new Thread(new ThreadStart(delegate() { Console.WriteLine("Thread " + i); }))).Start(); One would expect the following output: Thread 0 Thread 1 Thread 2 Thread 3 Thread 4 Now I realise that the threads aren't started in any particular order, so let's just assume that the above lines can come out in any order. But that is not what happens. What happens instead is: Thread 3 Thread 4 Thread 4 Thread 4 Thread 4 or something similar, which leads me to believe that rather than passing the value of i, it is passing the reference. (Which is weird, since an int is a value type.) Doing something like this: for (int i = 0; i < 5; i++) (new Thread(new ThreadStart(delegate() { int j = i; Console.WriteLine("Thread " + j); }))).Start(); does not help either, even though we have made a copy of i. I am assuming the reason is that it hasn't made a copy of i in time. Doing something like this: for (int i = 0; i < 5; i++) { (new Thread(new ThreadStart(delegate() { Console.WriteLine("Thread " + i); }))).Start(); Thread.Sleep(50); } seems to fix the problem; however, it is extremely undesirable, as we're wasting 50ms on each iteration, not to mention the fact that if the computer is heavily loaded then 50ms may not be enough. Here is a sample with my current, specific problem: Thread t = new Thread(new ThreadStart(delegate() { threadLogic(param1, param2, param3, param4); })); t.Start(); param1 = param2 = param3 = param4 = null; with: void threadLogic(object param1, object param2, object param3, object param4) { // Do some stuff here... } I want threadLogic() to run in its own thread, however the above code gives a null reference exception. I assume this is because the values are set to null before the thread has had a chance to start. Again, putting in a Thread.Sleep(100) works, but it is an awful solution from every aspect. What do you guys recommend for this particular type of race condition?
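
    For reference, a sketch of the usual fix (not from the original post): copy everything the delegate needs into locals declared inside the loop body, before the delegate is created. The copy inside the delegate didn't help because i is only read when the delegate finally runs; the copy has to happen on the main thread, before Start().

        for (int i = 0; i < 5; i++)
        {
            int captured = i;              // a fresh variable per iteration, copied on the main thread
            new Thread(new ThreadStart(delegate() { Console.WriteLine("Thread " + captured); })).Start();
        }

        // Same idea for the specific case: give the thread its own copies before Start().
        object a1 = param1, a2 = param2, a3 = param3, a4 = param4;
        Thread t = new Thread(new ThreadStart(delegate() { threadLogic(a1, a2, a3, a4); }));
        t.Start();
        param1 = param2 = param3 = param4 = null;   // safe: the closure holds a1..a4, not these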

    Read the article

  • JNI String Corruption

    - by Chris Dennett
    Hi everyone, I'm getting weird string corruption across JNI calls which is causing problems on the the Java side. Every so often, I'll get a corrupted string in the passed array, which sometimes has existing parts of the original non-corrupted string. The C++ code is supposed to set the first index of the array to the address, it's a nasty hack to get around method call limitations. Additionally, the application is multi-threaded. remoteaddress[0]: 10.1.1.2:49153 remoteaddress[0]: 10.1.4.2:49153 remoteaddress[0]: 10.1.6.2:49153 remoteaddress[0]: 10.1.2.2:49153 remoteaddress[0]: 10.1.9.2:49153 remoteaddress[0]: {garbage here} java.lang.NullPointerException at kokuks.KKSAddress.<init>(KKSAddress.java:139) at kokuks.KKSAddress.createAddress(KKSAddress.java:48) at kokuks.KKSSocket._recvFrom(KKSSocket.java:963) at kokuks.scheduler.RecvOperation$1.execute(RecvOperation.java:144) at kokuks.scheduler.RecvOperation$1.execute(RecvOperation.java:1) at kokuks.KKSEvent.run(KKSEvent.java:58) at kokuks.KokuKS.handleJNIEventExpiry(KokuKS.java:872) at kokuks.KokuKS.handleJNIEventExpiry_fjni(KokuKS.java:880) at kokuks.KokuKS.runSimulator_jni(Native Method) at kokuks.KokuKS$1.run(KokuKS.java:773) at java.lang.Thread.run(Thread.java:717) remoteaddress[0]: 10.1.7.2:49153 The null pointer exception comes from trying to use the corrupt string. In C++, the address prints to standard out normally, but doing this reduces the rate of errors, from what I can see. The C++ code (if it helps): /* * Class: kokuks_KKSSocket * Method: recvFrom_jni * Signature: (Ljava/lang/String;[Ljava/lang/String;Ljava/nio/ByteBuffer;IIJ)I */ JNIEXPORT jint JNICALL Java_kokuks_KKSSocket_recvFrom_1jni (JNIEnv *env, jobject obj, jstring sockpath, jobjectArray addrarr, jobject buf, jint position, jint limit, jlong flags) { if (addrarr && env->GetArrayLength(addrarr) > 0) { env->SetObjectArrayElement(addrarr, 0, NULL); } jboolean iscopy; const char* cstr = env->GetStringUTFChars(sockpath, &iscopy); std::string spath = std::string(cstr); env->ReleaseStringUTFChars(sockpath, cstr); // release me! if (KKS_DEBUG) { std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__ << std::endl; } ns3::Ptr<ns3::Socket> socket = ns3::Names::Find<ns3::Socket>(spath); if (!socket) { std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__ << " socket not found for path!!" << std::endl; return -1; // not found } if (!addrarr) { std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__ << " array to set sender is null" << std::endl; return -1; } jsize arrsize = env->GetArrayLength(addrarr); if (arrsize < 1) { std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__ << " array too small to set sender!" 
<< std::endl; return -1; } uint8_t* bufaddr = (uint8_t*)env->GetDirectBufferAddress(buf); long bufcap = env->GetDirectBufferCapacity(buf); uint8_t* realbufaddr = bufaddr + position; uint32_t remaining = limit - position; if (KKS_DEBUG) { std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__ << " bufaddr: " << bufaddr << ", cap: " << bufcap << std::endl; } ns3::Address aaddr; uint32_t mflags = flags; int ret = socket->RecvFrom(realbufaddr, remaining, mflags, aaddr); if (ret > 0) { if (KKS_DEBUG) std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__ << " addr: " << aaddr << std::endl; ns3::InetSocketAddress insa = ns3::InetSocketAddress::ConvertFrom(aaddr); std::stringstream ss; insa.GetIpv4().Print(ss); ss << ":" << insa.GetPort() << std::ends; if (KKS_DEBUG) std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__ << " addr: " << ss.str() << std::endl; jsize index = 0; const char *cstr = ss.str().c_str(); jstring jaddr = env->NewStringUTF(cstr); if (jaddr == NULL) std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__ << " jaddr is null!!" << std::endl; //jaddr = (jstring)env->NewGlobalRef(jaddr); env->SetObjectArrayElement(addrarr, index, jaddr); //if (env->ExceptionOccurred()) { // env->ExceptionDescribe(); //} } jint jret = ret; return jret; } The Java code (if it helps): /** * Pass an array of size 1 into remote address, and this will be set with * the sender of the packet (hax). This emulates C++ references. * * @param remoteaddress * @param buf * @param flags * @return */ public int _recvFrom(final KKSAddress remoteaddress[], ByteBuffer buf, long flags) { if (!kks.isCurrentlyThreadSafe()) throw new RuntimeException( "Not currently thread safe for ns-3 functions!" ); //lock.lock(); try { if (!buf.isDirect()) return -6; // not direct!! final String[] remoteAddrStr = new String[1]; int ret = 0; ret = recvFrom_jni( path.toPortableString(), remoteAddrStr, buf, buf.position(), buf.limit(), flags ); if (ret > 0) { System.out.println("remoteaddress[0]: " + remoteAddrStr[0]); remoteaddress[0] = KKSAddress.createAddress(remoteAddrStr[0]); buf.position(buf.position() + ret); } return ret; } finally { errNo = _getErrNo(); //lock.unlock(); } } public int recvFrom(KKSAddress[] fromaddress, final ByteBuffer bytes, long flags, long timeoutMS) { if (KokuKS.DEBUG_MODE) printMessage("public synchronized int recvFrom(KKSAddress[] fromaddress, final ByteBuffer bytes, long flags, long timeoutMS)"); if (kks.isCurrentlyThreadSafe()) { return _recvFrom(fromaddress, bytes, flags); // avoid event } fromaddress[0] = null; RecvOperation ro = new RecvOperation( kks, this, flags, true, bytes, timeoutMS ); ro.start(); fromaddress[0] = ro.getFrom(); return ro.getRetCode(); }
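
    An observation on the posted C++ (not from the original question, but worth checking): ss.str() returns a temporary std::string, so the pointer obtained from ss.str().c_str() dangles by the time NewStringUTF() reads it, which could produce exactly this kind of intermittent garbage. A minimal sketch of the safer form:

        // keep the std::string alive for as long as the char* is used
        std::string addr_str = ss.str();
        jstring jaddr = env->NewStringUTF(addr_str.c_str());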

    Read the article

  • WinForm-style Invoke() in unmanaged C++

    - by Matt Green
    I've been playing with a DataBus-type design for a hobby project, and I ran into an issue. Back-end components need to notify the UI that something has happened. My implementation of the bus delivers the messages synchronously with respect to the sender. In other words, when you call Send(), the method blocks until all the handlers have been called. (This allows callers to use stack memory management for event objects.) However, consider the case where an event handler updates the GUI in response to an event. If the handler is called, and the message sender lives on another thread, then the handler cannot update the GUI due to Win32's GUI elements having thread affinity. More dynamic platforms such as .NET allow you to handle this by calling a special Invoke() method to move the method call (and the arguments) to the UI thread. I'm guessing they use the .NET parking window or the like for these sorts of things. A morbid curiosity was born: can we do this in C++, even if we limit the scope of the problem? Can we make it nicer than existing solutions? I know Qt does something similar with the moveToThread() function. By nicer, I'll mention that I'm specifically trying to avoid code of the following form: if(! this->IsUIThread()) { Invoke(MainWindowPresenter::OnTracksAdded, e); return; } being at the top of every UI method. This dance was common in WinForms when dealing with this issue. I think this sort of concern should be isolated from the domain-specific code, with a wrapper object made to deal with it. My implementation consists of: DeferredFunction - a functor that stores the target method in a FastDelegate and deep-copies the single event argument. This is the object that is sent across thread boundaries. UIEventHandler - responsible for dispatching a single event from the bus. When the Execute() method is called, it checks the thread ID. If it does not match the UI thread ID (set at construction time), a DeferredFunction is allocated on the heap with the instance, method, and event argument. A pointer to it is sent to the UI thread via PostThreadMessage(). Finally, a hook function for the thread's message pump is used to call the DeferredFunction and de-allocate it. Alternatively, I can use a message loop filter, since my UI framework (WTL) supports them. Ultimately, is this a good idea? The whole message hooking thing makes me leery. The intent is certainly noble, but are there any pitfalls I should know about? Or is there an easier way to do this?
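
    For comparison, a minimal sketch of a window-message variant of the same idea (illustrative only; WM_INVOKE, the window handle, and the handler wiring are assumptions, not part of the original design): the sender heap-allocates a callable, posts its address to a window owned by the UI thread, and the UI thread's window procedure runs and frees it, so no message-pump hook is needed.

        #include <windows.h>
        #include <functional>

        const UINT WM_INVOKE = WM_APP + 1;          // assumed private message id

        // called from any thread; hwnd must belong to the UI thread
        void InvokeOnUiThread(HWND hwnd, std::function<void()> fn)
        {
            auto* call = new std::function<void()>(std::move(fn));
            ::PostMessage(hwnd, WM_INVOKE, 0, reinterpret_cast<LPARAM>(call));
        }

        // inside the UI thread's window procedure:
        case WM_INVOKE:
        {
            auto* call = reinterpret_cast<std::function<void()>*>(lParam);
            (*call)();                              // runs on the UI thread
            delete call;
            return 0;
        }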

    Read the article

  • C++ - how do Sleep() and cin work?

    - by quano
    Just curious. How does the function Sleep() (declared in windows.h) actually work? Maybe not just that implementation, but any of them. By that I mean: how is it implemented? How can it make the code "stop" for a specific time? I'm also curious about how cin and the like actually work. What do they do exactly? The only way I know to "block" something from continuing to run is with a while loop, but considering that that takes a huge amount of processing power compared to what happens when you invoke methods that read from stdin (just compare a while (true) to a read from stdin), I'm guessing that isn't what they do.
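
    A small illustration of the difference being asked about (a sketch, not from the original post): a blocking sleep asks the scheduler to park the thread until a deadline, so it uses essentially no CPU, while a spin loop keeps the thread runnable the whole time.

        #include <chrono>
        #include <thread>

        // blocking: the OS marks the thread as waiting and wakes it after the deadline
        std::this_thread::sleep_for(std::chrono::seconds(5));

        // busy-waiting: the thread stays runnable and burns a core for the whole interval
        auto deadline = std::chrono::steady_clock::now() + std::chrono::seconds(5);
        while (std::chrono::steady_clock::now() < deadline) {
            // spin
        }

    Blocking reads such as cin work the same way: the thread is parked on the input source and only made runnable again when data arrives.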

    Read the article

  • How to detect whether an EventWaitHandle is waiting?

    - by AngryHacker
    I have a fairly heavily multi-threaded WinForms app that employs the EventWaitHandle in a number of places to synchronize access. So I have code similar to this: List<int> _revTypes; EventWaitHandle _ewh = new EventWaitHandle(false, EventResetMode.ManualReset); void StartBackgroundTask() { _ewh.Reset(); Thread t = new Thread(new ThreadStart(LoadStuff)); t.Start(); } void LoadStuff() { _revTypes = WebServiceCall.GetRevTypes() // ...bunch of other calls fetching data from all over the place // using the same pattern _ewh.Set(); } List<int> RevTypes { get { _ewh.WaitOne(); return _revTypes; } } Then I just call .RevTypes somewhere from the UI and it will return data to me when LoadStuff has finished executing. All this works perfectly correctly; however, RevTypes is just one property - there are actually several dozen of these. And one or several of these properties are holding up the UI from loading quickly. Short of placing benchmark code into each property, is there a way to see which property is keeping the UI from loading? Is there a way to see whether the EventWaitHandle is forced to actually wait?
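
    One low-ceremony way to answer the last question (a sketch, not from the original post): WaitOne(0) tests the event without blocking, so a property getter can detect and log the case where it is actually forced to wait.

        List<int> RevTypes
        {
            get
            {
                if (!_ewh.WaitOne(0))                  // false: the event is not set yet, so we would block
                {
                    var sw = System.Diagnostics.Stopwatch.StartNew();
                    _ewh.WaitOne();                    // now wait for real
                    Console.WriteLine("RevTypes blocked for " + sw.ElapsedMilliseconds + " ms");
                }
                return _revTypes;
            }
        }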

    Read the article

  • Caveats to be aware of when using threading in Python?

    - by knorv
    I'm quite new to threading in Python and have a couple of beginner questions. When starting more than say fifty threads using the Python threading module I start getting MemoryError. The threads themselves are very slim and not very memory hungry, so it seems like it is the overhead of the threading that causes the memory issues. Is there something I can do to increase the memory capacity or otherwise make Python allow for a larger number of threads? What is the maximum number of threads you've been able to run in your Python code using the threading module? Did you do any tricks to achieve that number? Are there any other caveats to be aware of when using the threading module?
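
    On the MemoryError specifically (a hedged suggestion, not from the original question): each thread reserves its own stack, and on 32-bit interpreters that reservation is often what runs out first; threading.stack_size() lets you shrink it before the threads are created.

        import threading

        # request a smaller per-thread stack (must be set before the threads are started;
        # the minimum and granularity are platform-dependent)
        threading.stack_size(256 * 1024)

        def worker():
            pass  # the real per-thread work goes here

        threads = [threading.Thread(target=worker) for _ in range(200)]
        for t in threads:
            t.start()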

    Read the article

  • Silverlight 4 multithreading

    - by DavyMac23
    I'm trying to update my Silverlight 4 UI approximately every half second with new data. I've hooked into a WCF service using net.tcp binding and am issuing callbacks from the server. To make sure I get the data from the service as quickly as possible, I've started up my proxy on a background worker inside my Silverlight app. My question is: how do I get the results from the callback and update the ObservableCollection that is bound to a datagrid? I've tried a number of different ways and keep getting the dreaded cross-thread error.
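
    A minimal sketch of the usual marshalling step (the collection and payload names are assumptions): the background callback hands the update to the UI thread through the Dispatcher, and only that delegate touches the bound ObservableCollection.

        // running on the WCF callback (background) thread
        void OnServerCallback(MyItem item)
        {
            Deployment.Current.Dispatcher.BeginInvoke(() =>
            {
                // now on the UI thread, so the bound collection can be modified safely
                BoundItems.Add(item);
            });
        }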

    Read the article

  • Need help understanding the .NET ThreadPool

    - by Meredith
    I am trying to understand what ThreadPool does. I have this .NET example: class Program { static void Main() { int c = 2; // Use AutoResetEvent for thread management AutoResetEvent[] arr = new AutoResetEvent[50]; for (int i = 0; i < arr.Length; ++i) { arr[i] = new AutoResetEvent(false); } // Set the number of minimum threads ThreadPool.SetMinThreads(c, 4); // Enqueue 50 work items that run the code in this delegate function for (int i = 0; i < arr.Length; i++) { ThreadPool.QueueUserWorkItem(delegate(object o) { Thread.Sleep(100); arr[(int)o].Set(); // Signals completion }, i); } // Wait for all tasks to complete WaitHandle.WaitAll(arr); } } Does this run 50 "tasks" in groups of 2 (int c) until they all finish? Or am I misunderstanding what it really does?
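
    One way to see what the pool actually does (a sketch, not part of the original example): count the workers that are in flight at the same time. SetMinThreads only sets a floor, not a ceiling, so the observed concurrency will usually grow well past 2.

        static int running;

        // inside the loop, in place of the original work item:
        ThreadPool.QueueUserWorkItem(delegate(object o)
        {
            int now = Interlocked.Increment(ref running);
            Console.WriteLine("concurrent workers: " + now);
            Thread.Sleep(100);
            Interlocked.Decrement(ref running);
            arr[(int)o].Set();
        }, i);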

    Read the article

  • Java: will this threading setup work, or what could I be doing wrong?

    - by Erik
    Im a bit unsure and have to get advice. I have the: public class MyApp extends JFrame{ And from there i do; MyServer = new MyServer (this); MyServer.execute(); MyServer is a: public class MyServer extends SwingWorker<String, Object> { MyServer is doing listen_socket.accept() in the doInBackground() and on connection it create a new class Connection implements Runnable { I have the belove DbHelper that are a singleton. It holds an Sqlite connected. Im initiating it in the above MyApp and passing references all the way in to my runnable: class Connection implements Runnable { My question is what will happen if there are two simultaneous read or `write? My thought here was the all methods in the singleton are synchronized and would put all calls in the queue waiting to get a lock on the synchronized method. Will this work or what can i change? public final class DbHelper { private boolean initalized = false; private String HomePath = ""; private File DBFile; private static final String SYSTEM_TABLE = "systemtable"; Connection con = null; private Statement stmt; private static final ContentProviderHelper instance = new ContentProviderHelper (); public static ContentProviderHelper getInstance() { return instance; } private DbHelper () { if (!initalized) { initDB(); initalized = true; } } private void initDB() { DBFile = locateDBFile(); try { Class.forName("org.sqlite.JDBC"); // create a database connection con = DriverManager.getConnection("jdbc:sqlite:J:/workspace/workComputer/user_ptpp"); } catch (SQLException e) { e.printStackTrace(); } catch (ClassNotFoundException e) { e.printStackTrace(); } } private File locateDBFile() { File f = null; try{ HomePath = System.getProperty("user.dir"); System.out.println("HomePath: " + HomePath); f = new File(HomePath + "/user_ptpp"); if (f.canRead()) return f; else { boolean success = f.createNewFile(); if (success) { System.out.println("File did not exist and was created " + HomePath); // File did not exist and was created } else { System.out.println("File already exists " + HomePath); // File already exists } } } catch (IOException e) { System.out.println("Maybe try a new directory. " + HomePath); //Maybe try a new directory. } return f; } public String getHomePath() { return HomePath; } private synchronized String getDate(){ SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); Date date = new Date(); return dateFormat.format(date); } public synchronized String getSelectedSystemTableColumn( String column) { String query = "select "+ column + " from " + SYSTEM_TABLE ; try { stmt = con.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY); ResultSet rs = stmt.executeQuery(query); while (rs.next()) { String value = rs.getString(column); if(value == null || value == "") return ""; else return value; } } catch (SQLException e ) { e.printStackTrace(); return ""; } finally { } return ""; } }
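
    Two things worth noting about the posted singleton (observations, not from the original post): the static instance field is typed ContentProviderHelper while the class is DbHelper, which will not compile as pasted, and the synchronized methods only serialize callers because every Connection runnable goes through the same getInstance() object. A compact sketch of the intended shape:

        public final class DbHelper {
            private static final DbHelper INSTANCE = new DbHelper();   // one shared instance

            public static DbHelper getInstance() {
                return INSTANCE;
            }

            private DbHelper() {
                initDB();                    // open the single SQLite connection once
            }

            // every caller blocks on the instance monitor, so reads and writes
            // against the shared Connection run one at a time
            public synchronized String getSelectedSystemTableColumn(String column) {
                // ... query using the shared Connection, as in the original code ...
                return "";
            }

            private void initDB() { /* as in the original code */ }
        }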

    Read the article

  • C - How to use both aio_read() and aio_write().

    - by Slav
    I implement game server where I need to both read and write. So I accept incoming connection and start reading from it using aio_read() but when I need to send something, I stop reading using aio_cancel() and then use aio_write(). Within write's callback I resume reading. So, I do read all the time but when I need to send something - I pause reading. It works for ~20% of time - in other case call to aio_cancel() fails with "Operation now in progress" - and I cannot cancel it (even within permanent while cycle). So, my added write operation never happens. How to use these functions well? What did I missed? EDIT: Used under Linux 2.6.35. Ubuntu 10 - 32 bit. Example code: void handle_read(union sigval sigev_value) { /* handle data or disconnection */ } void handle_write(union sigval sigev_value) { /* free writing buffer memory */ } void start() { const int acceptorSocket = socket(AF_INET, SOCK_STREAM, 0); struct sockaddr_in addr; memset(&addr, 0, sizeof(struct sockaddr_in)); addr.sin_family = AF_INET; addr.sin_addr.s_addr = INADDR_ANY; addr.sin_port = htons(port); bind(acceptorSocket, (struct sockaddr*)&addr, sizeof(struct sockaddr_in)); listen(acceptorSocket, SOMAXCONN); struct sockaddr_in address; socklen_t addressLen = sizeof(struct sockaddr_in); for(;;) { const int incomingSocket = accept(acceptorSocket, (struct sockaddr*)&address, &addressLen); if(incomingSocket == -1) { /* handle error ... */} else { //say socket to append outcoming messages at writing: const int currentFlags = fcntl(incomingSocket, F_GETFL, 0); if(currentFlags < 0) { /* handle error ... */ } if(fcntl(incomingSocket, F_SETFL, currentFlags | O_APPEND) == -1) { /* handle another error ... */ } //start reading: struct aiocb* readingAiocb = new struct aiocb; memset(readingAiocb, 0, sizeof(struct aiocb)); readingAiocb->aio_nbytes = MY_SOME_BUFFER_SIZE; readingAiocb->aio_fildes = socketDesc; readingAiocb->aio_buf = mySomeReadBuffer; readingAiocb->aio_sigevent.sigev_notify = SIGEV_THREAD; readingAiocb->aio_sigevent.sigev_value.sival_ptr = (void*)mySomeData; readingAiocb->aio_sigevent.sigev_notify_function = handle_read; if(aio_read(readingAiocb) != 0) { /* handle error ... */ } } } } //called at any time from server side: send(void* data, const size_t dataLength) { //... some thread-safety precautions not needed here ... const int cancellingResult = aio_cancel(socketDesc, readingAiocb); if(cancellingResult != AIO_CANCELED) { //this one happens ~80% of the time - embracing previous call to permanent while cycle does not help: if(cancellingResult == AIO_NOTCANCELED) { puts(strerror(aio_return(readingAiocb))); // "Operation now in progress" /* don't know what to do... */ } } //otherwise it's okay to send: else { aio_write(...); } }
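
    One direction that avoids the cancellation problem altogether (a hedged sketch, not from the original post): a pending aio_read does not have to be cancelled before writing, because each operation has its own control block; the write can simply be issued alongside it.

        #include <aio.h>
        #include <stdlib.h>

        /* sketch of send(): issue the write with its own aiocb and leave the read pending */
        void send_data(void* data, const size_t dataLength)
        {
            struct aiocb* writingAiocb = (struct aiocb*)calloc(1, sizeof(struct aiocb));
            writingAiocb->aio_fildes = socketDesc;
            writingAiocb->aio_buf = data;
            writingAiocb->aio_nbytes = dataLength;
            writingAiocb->aio_sigevent.sigev_notify = SIGEV_THREAD;
            writingAiocb->aio_sigevent.sigev_value.sival_ptr = writingAiocb;  /* so the callback can free it */
            writingAiocb->aio_sigevent.sigev_notify_function = handle_write;
            if (aio_write(writingAiocb) != 0) { /* handle error */ }
        }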

    Read the article

  • POSIX Threads and signal masks

    - by Max
    Is there a way to change the signal mask of a thread from another thread? I am supposed to write a multithreaded C application that doesn't use mutexes, semaphores or condition variables, only signals. It would look something like this: the main thread sends SIGUSR1 to its process, and one of the two threads (not including the main thread) responds to the signal, blocks SIGUSR1 in its sigmask and sleeps. Then the main thread sends SIGUSR1 again; the other thread responds, blocks SIGUSR1 in its own sigmask and unblocks SIGUSR1 in the first thread's sigmask, so that thread will respond to SIGUSR1 again. So essentially, whenever the main thread sends SIGUSR1, the two other threads swap between each other. Can somebody help?
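
    For what it's worth (a sketch, not from the original question): pthread_sigmask() only affects the calling thread, so one thread cannot portably rewrite another thread's mask. Two common workarounds are for each thread to adjust its own mask when it takes its turn, or for the sender to target a specific thread with pthread_kill() instead of signalling the whole process.

        #include <pthread.h>
        #include <signal.h>

        /* pthread_sigmask() changes the mask of the calling thread only */
        static void block_sigusr1_in_this_thread(void)
        {
            sigset_t set;
            sigemptyset(&set);
            sigaddset(&set, SIGUSR1);
            pthread_sigmask(SIG_BLOCK, &set, NULL);
        }

        /* if the goal is simply "this signal should go to that thread",
           the sender can deliver it to a specific thread directly */
        void poke(pthread_t target)
        {
            pthread_kill(target, SIGUSR1);
        }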

    Read the article

  • How to prevent other threads from accessing a method while one thread is using it?

    - by geeta
    I want to search for a string in 10 files and write the matching lines to a single file. I wrote the matching lines from each file to 10 output files (o/p file1, o/p file2, ...) and then copied those to a single file using 10 threads. But the single output file has mixed output (one line from o/p file1, another line from o/p file2, etc.) because it's accessed simultaneously by many threads. If I wait for all threads to complete and then write the single file, it will be much slower. I want the output file to be written by one thread at a time. What should I do? My source code (only the method that writes to the single file): public void WriteSingle(File output_file,File final_output) throws IOException { synchronized(output_file){ System.out.println("Writing Single file"); FileOutputStream fo = new FileOutputStream(final_output,true); FileChannel fi = fo.getChannel(); FileInputStream fs = new FileInputStream(output_file); FileChannel fc = fs.getChannel(); int maxCount = (64 * 1024 * 1024) - (32 * 1024); long size = fc.size(); long position = 0; while (position < size) { position += fc.transferTo(position, maxCount, fi); } } }
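
    A sketch of one way to serialize the copy step (not from the original post; it assumes all threads call the same helper): the posted code synchronizes on output_file, which is a different object in every thread, so it never actually excludes anyone. Locking on one shared object does.

        private static final Object WRITE_LOCK = new Object();   // one monitor shared by all threads

        public void writeSingle(File outputFile, File finalOutput) throws IOException {
            synchronized (WRITE_LOCK) {                           // only one thread appends at a time
                try (FileChannel in = new FileInputStream(outputFile).getChannel();
                     FileChannel out = new FileOutputStream(finalOutput, true).getChannel()) {
                    long size = in.size();
                    long position = 0;
                    while (position < size) {
                        position += in.transferTo(position, size - position, out);
                    }
                }
            }
        }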

    Read the article

  • Java, Massive message processing with queue manager (trading)

    - by Ronny
    Hello, I would like to design a simple application (without J2EE and JMS) that can process a massive amount of messages (like in trading systems). I have created a service that can receive messages and place them in a queue so that the system won't get stuck when overloaded. Then I created a service (QueueService) that wraps the queue and has a pop method that pops a message off the queue and returns null if there are no messages; this method is marked as "synchronized" for the next step. I have created a class that knows how to process the message (MessageHandler) and another class that can "listen" for messages in a new thread (MessageListener). The thread has a "while(true)" and tries to pop a message all the time. If a message was returned, the thread calls the MessageHandler class and, when it's done, asks for another message. Now, I have configured the application to open 10 MessageListeners to allow multi-message processing. I now have 10 threads that are in a loop all the time. Is that a good design? Can anyone point me to some books or sites on how to handle such a scenario? Thanks, Ronny
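
    A small sketch of the standard-library alternative to the busy loop (an assumption about the design, not from the original post; Message and messageHandler stand in for the poster's classes): a BlockingQueue lets idle listeners sleep inside take() instead of spinning on a null-returning pop().

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        BlockingQueue<Message> queue = new LinkedBlockingQueue<Message>();
        // the receiving service adds messages with queue.put(msg)

        // each MessageListener thread:
        public void run() {
            try {
                while (true) {
                    Message m = queue.take();   // blocks while the queue is empty, no spinning
                    messageHandler.handle(m);   // process, then loop back for the next one
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // allow a clean shutdown
            }
        }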

    Read the article

  • How can I kill a Perl system call after a timeout?

    - by Fergal
    I've got a Perl script I'm using to run a file processing tool, which is started using backticks. The problem is that occasionally the tool hangs and it needs to be killed in order for the rest of the files to be processed. What's the best way to apply a timeout after which the parent script will kill the hung process? At the moment I'm using: foreach $file (@FILES) { $runResult = `mytool $file >> $file.log`; } But when mytool hangs, after n seconds I'd like to be able to kill it and continue to the next file.
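
    A sketch of the usual alarm-based pattern (not from the original post; the 300-second limit is an assumed value): running the tool through a piped open exposes its pid, an ALRM handler turns a hang into an exception, and the catch block kills the child before moving on.

        foreach my $file (@FILES) {
            my $pid = open(my $out, '-|', "mytool $file") or die "cannot start mytool: $!";
            eval {
                local $SIG{ALRM} = sub { die "timeout\n" };
                alarm(300);                                   # assumed per-file limit in seconds
                open(my $log, '>>', "$file.log") or die "cannot open log: $!";
                print {$log} $_ while <$out>;
                close($log);
                alarm(0);
            };
            if ($@ && $@ eq "timeout\n") {
                kill 'KILL', $pid;                            # kill the hung tool, then carry on
            }
            close($out);
        }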

    Read the article

  • Why does every thread in my application use a different Hibernate session?

    - by Ittai
    Hi, I have a web-application which uses hibernate and for some reason every thread (httprequest or other threads related to queueing) uses a different session. I've implemented a HibernateSessionFactory class which looks like this: public class HibernateSessionFactory { private static final ThreadLocal<Session> threadLocal = new ThreadLocal<Session>(); private static Configuration configuration = new AnnotationConfiguration(); private static org.hibernate.SessionFactory sessionFactory; static { try { configuration.configure(configFile); sessionFactory = configuration.buildSessionFactory(); } catch (Exception e) {} } private HibernateSessionFactory() {} public static Session getSession() throws HibernateException { Session session = (Session) threadLocal.get(); if (session == null || !session.isOpen()) { if (sessionFactory == null) { rebuildSessionFactory();//This method basically does what the static init block does } session = (sessionFactory != null) ? sessionFactory.openSession(): null; threadLocal.set(session); } return session; } //More non relevant methods here. Now from my testing it seems that the threadLocal member is indeed initialized only once when the class is first loaded by the JVM but for some reason when different threads access the getSession() method they use different sessions. When a thread first accesses this class (Session) threadLocal.get(); will return null but as expected all other access requests will yeild the same session. I'm not sure how this can be happening as the threadLocal variable is final and the method threadLocal.set(session) is only used in the above context (which I'm 99.9% sure has to yeild a non null session as I would have encountered a NullPointerException at a different part of my app). I'm not sure this is relevant but these are the main parts of my hibernate.cfg.xml file: <hibernate-configuration> <session-factory> <property name="connection.url">someURL</property> <property name="connection.driver_class"> com.microsoft.sqlserver.jdbc.SQLServerDriver</property> <property name="dialect">org.hibernate.dialect.SQLServerDialect</property> <property name="hibernate.connection.isolation">1</property> <property name="hibernate.connection.username">User</property> <property name="hibernate.connection.password">Password</property> <property name="hibernate.connection.pool_size">10</property> <property name="show_sql">false</property> <property name="current_session_context_class">thread</property> <property name="hibernate.hbm2ddl.auto">update</property> <property name="hibernate.cache.use_second_level_cache">false</property> <property name="hibernate.cache.provider_class">org.hibernate.cache.NoCacheProvider</property> <!-- Mapping files --> I'd appreciate any help granted and of course if anyone has any questions I'd be happy to clarify. Ittai

    Read the article

  • Asynchronous database update in Django?

    - by Mark
    I have a big form on my site. When the users fill it out and submit it, most of the data just gets dumped to the database, and then they get redirected to a new page. However, I'd also like to use the data to query another site, and then parse the results. That might take a bit longer. It's not essential that the user sees these results right away, so I was wondering if it's possible to asynchronously call a function that will handle this, and then return an HttpResponse from my view like usual without making them wait? If so... how? Any particular libraries I should look at?
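
    A minimal way to sketch the fire-and-forget idea (hedged: the helper names are placeholders, and for anything production-grade a task queue such as Celery is the more usual answer): save the form data, kick off the slow remote query on a separate thread, and return the redirect immediately.

        import threading
        from django.http import HttpResponseRedirect

        def handle_big_form(request):
            data = parse_form(request)              # placeholder for the existing form handling
            save_to_database(data)                  # placeholder: the quick part stays synchronous

            worker = threading.Thread(target=query_other_site_and_store, args=(data,))
            worker.daemon = True                    # don't keep the process alive just for this
            worker.start()                          # the user is not kept waiting on it

            return HttpResponseRedirect('/thanks/')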

    Read the article

  • Is there any class in the .NET Framework to represent a holding container for objects?

    - by Charles Prakash Dasari
    I am looking for a class that defines a holding structure for an object. The value for this object could be set at a later time than when this container is created. It is useful to pass such a structure in lambdas or in callback functions etc. Say: class HoldObject<T> { public T Value { get; set; } public bool IsValueSet(); public void WaitUntilHasValue(); } // and then we could use it like so ... HoldObject<byte[]> downloadedBytes = new HoldObject<byte[]>(); DownloadBytes("http://www.stackoverflow.com", sender => downloadedBytes.Value = sender.GetBytes()); It is rather easy to define this structure, but I am trying to see if one is available in FCL. I also want this to be an efficient structure that has all needed features like thread safety, efficient waiting etc. Any help is greatly appreciated.
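
    In .NET 4 specifically, TaskCompletionSource<T> already behaves much like the HoldObject sketch (this is an observation, not from the original question; DownloadBytes is the poster's hypothetical callback): the Task it exposes reports whether the value has been set and can block until it is.

        var downloadedBytes = new TaskCompletionSource<byte[]>();

        DownloadBytes("http://www.stackoverflow.com",
                      sender => downloadedBytes.SetResult(sender.GetBytes()));

        bool hasValue = downloadedBytes.Task.IsCompleted;   // IsValueSet()
        byte[] value  = downloadedBytes.Task.Result;        // WaitUntilHasValue() + Value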

    Read the article

  • Problems with Threading in Python 2.5, KeyError: 51, Help debugging?

    - by vignesh-k
    I have a python script which runs a particular script large number of times (for monte carlo purpose) and the way I have scripted it is that, I queue up the script the desired number of times it should be run then I spawn threads and each thread runs the script once and again when its done. Once the script in a particular thread is finished, the output is written to a file by accessing a lock (so my guess was that only one thread accesses the lock at a given time). Once the lock is released by one thread, the next thread accesses it and adds its output to the previously written file and rewrites it. I am not facing a problem when the number of iterations is small like 10 or 20 but when its large like 50 or 150, python returns a KeyError: 51 telling me element doesn't exist and the error it points out to is within the lock which puzzles me since only one thread should access the lock at once and I do not expect an error. This is the class I use: class errorclass(threading.Thread): def __init__(self, queue): self.__queue=queue threading.Thread.__init__(self) def run(self): while 1: item = self.__queue.get() if item is None: break result = myfunction() lock = threading.RLock() lock.acquire() ADD entries from current thread to entries in file and REWRITE FILE lock.release() queue = Queue.Queue() for i in range(threads): errorclass(queue).start() for i in range(desired iterations): queue.put(i) for i in range(threads): queue.put(None) Python returns with KeyError: 51 for large number of desired iterations during the adding/write file operation after lock access, I am wondering if this is the correct way to use the lock since every thread has a lock operation rather than every thread accessing a shared lock? What would be the way to rectify this?
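
    One detail that stands out in the posted worker (an observation plus a sketch, not from the original post): each pass through run() builds a brand-new RLock, so no two threads ever contend on the same lock and the file rewrite is not actually serialized. Sharing a single module-level lock changes that; myfunction and the merge step are kept as placeholders for the poster's code.

        import threading

        write_lock = threading.Lock()              # one lock shared by every worker thread

        class errorclass(threading.Thread):
            def __init__(self, queue):
                threading.Thread.__init__(self)
                self.__queue = queue

            def run(self):
                while 1:
                    item = self.__queue.get()
                    if item is None:
                        break
                    result = myfunction()          # the per-run work, as in the original script
                    with write_lock:               # every thread waits on the same lock here
                        merge_result_into_file(result)   # placeholder for the add-entries-and-rewrite step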

    Read the article

  • ThreadPoolExecutor fixed thread pool with custom behaviour

    - by Simone Margaritelli
    I'm new to this topic ... I'm using a ThreadPoolExecutor created with Executors.newFixedThreadPool( 10 ), and after the pool is full I'm starting to get a RejectedExecutionException. Is there a way to "force" the executor to put the new task in a "wait" state instead of rejecting it, and to start it when the pool is freed? Thanks. Issue regarding this: https://github.com/evilsocket/dsploit/issues/159 Line of code involved: https://github.com/evilsocket/dsploit/blob/master/src/it/evilsocket/dsploit/net/NetworkDiscovery.java#L150
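
    For reference, a sketch of the explicit constructor form (not taken from the linked code): a pool whose work queue can absorb extra submissions makes them wait instead of rejecting them, and CallerRunsPolicy is a common fallback when a bounded queue does fill up. Note that newFixedThreadPool already queues without bound, so a rejection there usually means the executor has been shut down.

        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.ThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                10, 10,                                        // core and maximum pool size
                0L, TimeUnit.MILLISECONDS,                     // keep-alive for idle extra threads
                new LinkedBlockingQueue<Runnable>(),           // unbounded queue: extra tasks wait here
                new ThreadPoolExecutor.CallerRunsPolicy());    // fallback if a bounded queue were used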

    Read the article

  • VB.NET Two different approaches to generic cross-threaded operations; which is better?

    - by BASnappl
    VB.NET 2010, .NET 4 Hello, I recently read about using SynchronizationContext objects to control the execution thread for some code. I have been using a generic subroutine to handle (possibly) cross-thread calls for things like updating UI controls that utilizes Invoke. I'm an amateur and have a hard time understanding the pros and cons of any particular approach. I am looking for some insight on which approach might be preferable and why. Update: This question is motivated, in part, by statements such as the following from the MSDN page on Control.InvokeRequired. An even better solution is to use the SynchronizationContext returned by SynchronizationContext rather than a control for cross-thread marshaling. Method 1: Public Sub InvokeControl(Of T As Control)(ByVal Control As T, ByVal Action As Action(Of T)) If Control.InvokeRequired Then Control.Invoke(New Action(Of T, Action(Of T))(AddressOf InvokeControl), New Object() {Control, Action}) Else Action(Control) End If End Sub Method 2: Public Sub UIAction(Of T As Control)(ByVal Control As T, ByVal Action As Action(Of Control)) SyncContext.Send(New Threading.SendOrPostCallback(Sub() Action(Control)), Nothing) End Sub Where SyncContext is a Threading.SynchronizationContext object defined in the constructor of my UI form: Public Sub New() InitializeComponent() SyncContext = WindowsFormsSynchronizationContext.Current End Sub Then, if I wanted to update a control (e.g., Label1) on the UI form, I would do: InvokeControl(Label1, Sub(x) x.Text = "hello") or UIAction(Label1, Sub(x) x.Text = "hello") So, what do y'all think? Is one way preferred or does it depend on the context? If you have the time, verbosity would be appreciated! Thanks in advance, Brian

    Read the article
