Search Results

Search found 1638 results on 66 pages for 'multithreading'.

Page 36 of 66

  • Java Non-Blocking HTTP Server

    - by Marcus
    I have written an application using embedded Jetty that makes network calls to other services. I presume that the serving threads are idle whilst waiting for the network calls to complete. Is there any way to have a worker thread that switches between requests, performing whatever work can be done at the current time, and then also handles the network calls when they return? A request would be returned once all the work for it has been completed. I know this is a common paradigm, and I have used it for non-blocking TCP networking, but I'm unsure how to achieve it on a Java HTTP server whilst also waiting on external results. Any links or explanations are appreciated. Thanks

    Read the article

  • Code runs 6 times slower with 2 threads than with 1

    - by Edward Bird
    So I have written some code to experiment with threads and do some testing. The code should create some numbers and then find the mean of those numbers. I think it is just easier to show you what I have so far. I was expecting that with two threads the code would run about twice as fast; measuring it with a stopwatch, I think it runs about 6 times slower!

        #include <cstdint>
        #include <cstdlib>
        #include <iostream>
        #include <thread>
        #include <vector>

        void findmean(std::vector<double>*, std::size_t, std::size_t, double*);

        int main(int argn, char** argv)
        {
            // Program entry point
            std::cout << "Generating data..." << std::endl;

            // Create a vector containing many variables
            std::vector<double> data;
            for(uint32_t i = 1; i <= 1024 * 1024 * 128; i++)
                data.push_back(i);

            // Calculate mean using 1 core
            double mean = 0;
            std::cout << "Calculating mean, 1 Thread..." << std::endl;
            findmean(&data, 0, data.size(), &mean);
            mean /= (double)data.size();

            // Print result
            std::cout << " Mean=" << mean << std::endl;

            // Repeat, using two threads
            std::vector<std::thread> thread;
            std::vector<double> result;
            result.push_back(0.0);
            result.push_back(0.0);
            std::cout << "Calculating mean, 2 Threads..." << std::endl;

            // Run threads
            uint32_t halfsize = data.size() / 2;
            uint32_t A = 0;
            uint32_t B, C, D;

            // Split the data into two blocks
            if(data.size() % 2 == 0)
            {
                B = C = D = halfsize;
            }
            else if(data.size() % 2 == 1)
            {
                B = C = halfsize;
                D = halfsize + 1;
            }

            // Run with two threads
            thread.push_back(std::thread(findmean, &data, A, B, &(result[0])));
            thread.push_back(std::thread(findmean, &data, C, D, &(result[1])));

            // Join threads
            thread[0].join();
            thread[1].join();

            // Calculate result
            mean = result[0] + result[1];
            mean /= (double)data.size();

            // Print result
            std::cout << " Mean=" << mean << std::endl;

            // Return
            return EXIT_SUCCESS;
        }

        void findmean(std::vector<double>* datavec, std::size_t start, std::size_t length, double* result)
        {
            for(uint32_t i = 0; i < length; i++)
            {
                *result += (*datavec).at(start + i);
            }
        }

    I don't think this code is exactly wonderful; if you could suggest ways of improving it, I would be grateful for that also.

    Read the article

  • Is ReaderWriterLockSlim.EnterUpgradeableReadLock() essentially the same as Monitor.Enter()?

    - by Neil Barnwell
    So I have a situation where I may have many, many reads and only the occasional write to a resource shared between multiple threads. A long time ago I read about ReaderWriterLock, and have read about ReaderWriterGate, which attempts to mitigate the issue where many incoming writes trump reads and hurt performance. However, now I've become aware of ReaderWriterLockSlim... From the docs, I believe that there can only be one thread in "upgradeable mode" at any one time. In a situation where the only access I'm using is EnterUpgradeableReadLock() (which is appropriate for my scenario), is there much difference from just sticking with lock(){}? Here's the excerpt: "A thread that tries to enter upgradeable mode blocks if there is already a thread in upgradeable mode, if there are threads waiting to enter write mode, or if there is a single thread in write mode." Or does the recursion policy make any difference to this?
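
    For comparison, here is a minimal sketch (the class and field names are illustrative, not from the question) of the read-mostly pattern using plain read locks plus a write lock; unlike upgradeable mode, and unlike lock(){}, this lets many readers run concurrently:

        using System.Threading;

        public class SharedCounter
        {
            private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
            private int _value;   // the rarely-written shared state (illustrative)

            public int Read()
            {
                _lock.EnterReadLock();            // any number of readers may hold this at once
                try { return _value; }
                finally { _lock.ExitReadLock(); }
            }

            public void Write(int newValue)
            {
                _lock.EnterWriteLock();           // exclusive, comparable to Monitor.Enter
                try { _value = newValue; }
                finally { _lock.ExitWriteLock(); }
            }
        }

    Upgradeable read is mainly useful when a reader sometimes has to become a writer without releasing the lock in between; if every access enters upgradeable mode, readers are serialized much as they would be with a plain monitor.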

    Read the article

  • Why do AutoResetEvent and ManualResetEvent not support a name in the constructor?

    - by Ikaso
    On .NET Framework 2.0, AutoResetEvent and ManualResetEvent inherit from EventWaitHandle. The EventWaitHandle class has 4 different constructors, 3 of which support giving a name to the event. On the other hand, both ManualResetEvent and AutoResetEvent do not support naming and provide a single constructor that receives the initial state. I could simply inherit from EventWaitHandle and write my own implementation of those classes that supports all the constructor overloads, but I don't like to reinvent the wheel if I don't have to. My questions are: Is there a special problem in naming events? Do you have any idea why Microsoft did not support it? Do you have a proposal better than inheriting from the EventWaitHandle class and calling the appropriate constructor, as in the following example?

        public class MyAutoResetEvent : EventWaitHandle
        {
            public MyAutoResetEvent(bool initialState)
                : base(initialState, EventResetMode.AutoReset)
            { }

            public MyAutoResetEvent(bool initialState, string name)
                : base(initialState, EventResetMode.AutoReset, name)
            { }

            public MyAutoResetEvent(bool initialState, string name, out bool createdNew)
                : base(initialState, EventResetMode.AutoReset, name, out createdNew)
            { }

            public MyAutoResetEvent(bool initialState, string name, out bool createdNew,
                                    EventWaitHandleSecurity eventSecurity)
                : base(initialState, EventResetMode.AutoReset, name, out createdNew, eventSecurity)
            { }
        }
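
    For what it's worth, a minimal sketch (the event name here is made up) showing that EventWaitHandle can also be used directly wherever a named auto-reset event is needed, since AutoResetEvent adds nothing beyond fixing the reset mode:

        using System.Threading;

        class NamedEventDemo
        {
            static void Main()
            {
                bool createdNew;
                // Equivalent to a named AutoResetEvent: auto-reset mode, initially unsignaled.
                using (var wakeUp = new EventWaitHandle(
                           false, EventResetMode.AutoReset, @"Global\MyAppWakeUp", out createdNew))
                {
                    if (createdNew)
                        wakeUp.WaitOne();   // first process in: wait for a signal
                    else
                        wakeUp.Set();       // the event already existed: signal the waiting process
                }
            }
        }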

    Read the article

  • Multithreading in WCF RIA Services

    - by synergetic
    I use WCF RIA Services to update a customer database. In the domain service:

        public void UpdateCustomer(Customer customer)
        {
            this.ObjectContext.Customers.AttachAsModified(customer);
            syncCustomer(customer);
        }

    After the update, a database trigger fires and, depending on the columns updated, it may insert a new record into the CustomerChange table. The syncCustomer(customer) method is executed to check for a new record in the CustomerChange table; if one is found, it creates a text file containing the customer information and forwards that file to an external system for import. This synchronization may take some time, so I wanted to execute it on a different thread. So:

        private void syncCustomer(Customer customer)
        {
            this.ObjectContext.SaveChanges();
            new Thread(() => syncCustomerInfo(customer.CustomerID)) { IsBackground = true }.Start();
        }

        private void syncCustomerInfo(int customerID)
        {
            //Thread.Sleep(2000);
            //does real job here
            ...
        }

    The problem is that in most cases the syncCustomerInfo method cannot find any new CustomerChange record, even though one was definitely there. If I force the thread to sleep first, it finds the new record. I also looked at the Entity Framework events, but the only event provided by the object context is SavingChanges, which occurs before the changes are saved. Please suggest what else I could try.
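
    As a diagnostic sketch only, not a root-cause fix: one way to confirm this is a timing/transaction-visibility problem is to retry the lookup briefly in the background thread instead of relying on a fixed Thread.Sleep. FindCustomerChange and ExportCustomerFile below are hypothetical helpers standing in for the real lookup and file export:

        private void syncCustomerInfo(int customerID)
        {
            // The trigger's insert may not be visible to this connection until the
            // saving transaction has committed, so poll for a short while.
            for (int attempt = 0; attempt < 10; attempt++)
            {
                var change = FindCustomerChange(customerID);   // hypothetical lookup against CustomerChange
                if (change != null)
                {
                    ExportCustomerFile(change);                // hypothetical text-file export
                    return;
                }
                System.Threading.Thread.Sleep(500);            // back off before retrying
            }
            // Log and give up if nothing appeared within ~5 seconds.
        }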

    Read the article

  • Difference between BackgroundWorker.ReportProgress() and Control.Invoke()

    - by ohadsc
    What is the difference between options 1 and 2 in the following?

        private void BGW_DoWork(object sender, DoWorkEventArgs e)
        {
            for (int i = 1; i <= 100; i++)
            {
                string txt = i.ToString();

                if (Test_Check.Checked)
                    //OPTION 1
                    Test_BackgroundWorker.ReportProgress(i, txt);
                else
                    //OPTION 2
                    this.Invoke((Action<int, string>)UpdateGUI, new object[] { i, txt });
            }
        }

        private void BGW_ProgressChanged(object sender, ProgressChangedEventArgs e)
        {
            UpdateGUI(e.ProgressPercentage, (string)e.UserState);
        }

        private void UpdateGUI(int percent, string txt)
        {
            Test_ProgressBar.Value = percent;
            Test_RichTextBox.AppendText(txt + Environment.NewLine);
        }

    Looking at Reflector, Control.Invoke() appears to use:

        this.FindMarshalingControl().MarshaledInvoke(this, method, args, 1);

    whereas BackgroundWorker.ReportProgress() appears to use:

        this.asyncOperation.Post(this.progressReporter, args);

    (I'm just guessing these are the relevant function calls.) If I understand correctly, the BackgroundWorker posts its progress-report request to the WinForms window, whereas Control.Invoke uses a CLR mechanism to invoke on the right thread. Am I close? And if so, what are the repercussions of using either? Thanks
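
    One practical difference worth keeping in mind: Control.Invoke blocks the worker thread until the UI thread has executed the delegate, while ReportProgress posts asynchronously through the SynchronizationContext captured when RunWorkerAsync was called. A rough sketch of a non-blocking variant of option 2, reusing the names from the question and sitting in the same loop inside BGW_DoWork:

        //OPTION 2b: like option 2, but asynchronous -- the worker thread does not
        //wait for the UI thread to finish running UpdateGUI before continuing.
        this.BeginInvoke((Action<int, string>)UpdateGUI, new object[] { i, txt });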

    Read the article

  • Thread locking issue with FileHelpers between calling the engine.ReadNext() method and reading engine.LineNumber

    - by Rad
    I use the producer/consumer pattern with the FileHelpers library to import data from one file (which can be huge) using multiple threads. Each thread is supposed to import a chunk of that file, and I would like to use the LineNumber property of the FileHelperAsyncEngine instance that is reading the file as the primary key for the imported rows. FileHelperAsyncEngine internally has an IEnumerator IEnumerable.GetEnumerator(), which is iterated over using the engine.ReadNext() method. That internally sets the LineNumber property (which, it seems, is not thread-safe). Consumers have Producers associated with them that supply DataTables to the Consumers, which consume them via the SqlBulkCopy class, which uses an IDataReader implementation that iterates over a collection of DataTables internal to a Consumer instance. Each Consumer instance has one SqlBulkCopy instance associated with it.

    I have a thread locking issue. Below is how I create multiple Producer threads; I start each thread afterwards. The Produce method on a producer instance is called to determine which chunk of the input file will be processed. It seems that engine.LineNumber is not thread-safe and the proper LineNumber does not get imported into the database: by the time engine.LineNumber is read, some other thread has called engine.ReadNext() and changed the engine.LineNumber property. I don't want to lock the loop that is supposed to process a chunk of the input file because I lose parallelism. How can I reorganize the code to solve this threading issue? Thanks, Rad

        for (int i = 0; i < numberOfProducerThreads; i++)
        {
            DataConsumer consumer = dataConsumers[i];

            //create a new producer; the consumer has already been created
            DataProducer producer = new DataProducer();
            consumer.Subscribe(producer);

            FileHelperAsyncEngine orderDetailEngine = new FileHelperAsyncEngine(recordType);
            orderDetailEngine.Options.RecordCondition.Condition = RecordCondition.ExcludeIfBegins;
            orderDetailEngine.Options.RecordCondition.Selector = STR_ORDR;

            int skipLines = i * numberOfBufferTablesToProcess * DataBuffer.MaxBufferRowCount;

            Thread newThread = new Thread(() =>
            {
                producer.Produce(consumer, inputFilePath, lineNumberFieldName, dict,
                                 orderDetailEngine, skipLines, numberOfBufferTablesToProcess);
                consumer.SetEndOfData(producer);
            });
            producerThreads.Add(newThread);
            newThread.Start();
        }

        public void Produce(DataConsumer consumer, string inputFilePath, string lineNumberFieldName,
                            Dictionary<string, object> dict, FileHelperAsyncEngine engine,
                            int skipLines, int numberOfBufferTablesToProcess)
        {
            lock (this)
            {
                engine.Options.IgnoreFirstLines = skipLines;
                engine.BeginReadFile(inputFilePath);
            }

            int rowCount = 1;
            DataTable buffer = consumer.BufferDataTable;

            while (engine.ReadNext() != null)
            {
                lock (this)
                {
                    dict[lineNumberFieldName] = engine.LineNumber;
                    buffer.Rows.Add(ObjectFieldsDataRowMapper.MapObjectFieldsToDataRow(engine.LastRecord, dict, buffer));

                    if (rowCount % DataBuffer.MaxBufferRowCount == 0)
                    {
                        consumer.AddBufferDataTable(buffer);
                        buffer = consumer.BufferDataTable;
                    }
                    if (rowCount % (numberOfBufferTablesToProcess * DataBuffer.MaxBufferRowCount) == 0)
                    {
                        break;
                    }
                    rowCount++;
                }
            }

            if (buffer.Rows.Count > 0)
            {
                consumer.AddBufferDataTable(buffer);
            }
            engine.Close();
        }
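
    One hedged alternative, since each producer already knows how many lines it skipped: derive the key from skipLines plus a local counter instead of reading engine.LineNumber after the fact, so nothing depends on engine state that a later call may have advanced. A sketch reusing the names from the question (whether header or skipped lines should count toward the stored number is an assumption to verify):

        int localLine = skipLines;                  // lines this producer skipped before its chunk
        while (engine.ReadNext() != null)
        {
            localLine++;                            // line number of the record just read, computed locally
            dict[lineNumberFieldName] = localLine;  // no read of engine.LineNumber here
            buffer.Rows.Add(ObjectFieldsDataRowMapper.MapObjectFieldsToDataRow(engine.LastRecord, dict, buffer));
            // ... chunking and buffer hand-off exactly as in the question ...
        }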

    Read the article

  • Lockless queue implementation ends up having a loop under stress

    - by Fozi
    I have lockless queues written in C in the form of a linked list that contains requests from several threads, posted to and handled in a single thread. After a few hours of stress I end up with the last request's next pointer pointing to itself, which creates an endless loop and locks up the handling thread. The application runs (and fails) on both Linux and Windows. I'm debugging on Windows, where my COMPARE_EXCHANGE_PTR maps to InterlockedCompareExchangePointer.

    This is the code that pushes a request to the head of the list, and is called from several threads:

        void push_request(struct request * volatile * root, struct request * request)
        {
            assert(request);
            do
            {
                request->next = *root;
            } while(COMPARE_EXCHANGE_PTR(root, request, request->next) != request->next);
        }

    This is the code that gets a request from the end of the list, and is only called by the single thread that is handling them:

        struct request * pop_request(struct request * volatile * root)
        {
            struct request * volatile * p;
            struct request * request;
            do
            {
                p = root;
                while(*p && (*p)->next)
                    p = &(*p)->next;    // <- loops here
                request = *p;
            } while(COMPARE_EXCHANGE_PTR(p, NULL, request) != request);
            assert(request->next == NULL);
            return request;
        }

    Note that I'm not using a tail pointer because I wanted to avoid the complication of having to deal with the tail pointer in push_request. However I suspect that the problem might be in the way I find the end of the list. There are several places that push a request into the queue, but they all look generally like this:

        // device->requests is defined as: struct request * volatile requests;
        struct request * request = malloc(sizeof(struct request));
        if(request)
        {
            // fill out request fields
            push_request(&device->requests, request);
            sem_post(device->request_sem);
        }

    The code that handles the requests is doing more than that, but in essence does this in a loop:

        if(sem_wait_timeout(device->request_sem, timeout) == sem_success)
        {
            struct request * request = pop_request(&device->requests);
            // handle request
            free(request);
        }

    I also just added a function that checks the list for duplicates before and after each operation, but I'm afraid that this check will change the timing so that I will never encounter the point where it fails. (I'm waiting for it to break as I'm writing this.) When I break the hanging program, the handler thread loops in pop_request at the marked position. I have a valid list of one or more requests and the last one's next pointer points to itself. The request queues are usually short; I've never seen more than 10, and only 1 and 3 the two times I could take a look at this failure in the debugger. I thought this through as much as I could and I came to the conclusion that I should never be able to end up with a loop in my list unless I push the same request twice. I'm quite sure that this never happens. I'm also fairly sure (although not completely) that it's not the ABA problem. I know that I might pop more than one request at the same time, but I believe this is irrelevant here, and I've never seen it happening. (I'll fix this as well.) I thought long and hard about how my function could break, but I don't see a way to end up with a loop. So the question is: can someone see a way this can break? Can someone prove that it cannot?

    Eventually I will solve this (maybe by using a tail pointer or some other solution; locking would be a problem because the threads that post should not be locked, though I do have a RW lock at hand), but I would like to make sure that changing the list actually solves my problem (as opposed to just making it less likely because of different timing).

    Read the article

  • Question about how to implement a C# host application with a plugin-like architecture

    - by devoured elysium
    I want to have an application that works as a host to many other small applications. Each of those applications should work as a kind of plugin to this main application. I call them plugins not in the sense that they add something to the main application, but because they can only work with this host application, as they depend on some of its services. My idea was to have each of those plugins run in a different app domain. The problem seems to be that my host application has a set of services that my plugins will want to use, and from what I understand, making data flow in and out of different app domains is not that great. On one hand I'd like them to behave as stand-alone applications (although, as I said, they need to use the host application's services a lot), but on the other hand I'd like that if any of them crashes, my main application wouldn't suffer from it. What is the best (.NET) approach to this kind of situation? Make them all run in the same AppDomain but each one on a different thread? Use different AppDomains, one for each "plugin"? How would I make them communicate with the host application? Any other way of doing this? Although speed is not an issue here, I wouldn't like function calls to be that much slower than they are in a regular .NET application. Thanks
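
    For reference, a minimal sketch of the one-AppDomain-per-plugin route (the assembly, type, and interface names here are illustrative): the plugin type derives from MarshalByRefObject so the host only ever holds a proxy, and a misbehaving plugin can be discarded by unloading its domain. Note that an unhandled exception on a thread started by a plugin can still bring down the whole process, so plugin calls need their own try/catch as well:

        using System;

        // Shared contracts assembly, referenced by both the host and every plugin.
        public interface IHostServices { void Log(string message); }

        public abstract class PluginBase : MarshalByRefObject
        {
            public abstract void Run(IHostServices services);
        }

        class Host
        {
            static void Main()
            {
                AppDomain domain = AppDomain.CreateDomain("PluginDomain");
                try
                {
                    // Creates the plugin inside the other domain; the host receives a transparent proxy.
                    var plugin = (PluginBase)domain.CreateInstanceAndUnwrap(
                        "MyPlugins",                 // illustrative assembly name
                        "MyPlugins.SamplePlugin");   // illustrative plugin type name
                    plugin.Run(new ConsoleServices());
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Plugin failed: " + ex.Message);
                }
                finally
                {
                    AppDomain.Unload(domain);        // the plugin's state goes away with its domain
                }
            }

            // Host-side services must also be MarshalByRefObject (or serializable)
            // so the plugin domain can call back into them.
            class ConsoleServices : MarshalByRefObject, IHostServices
            {
                public void Log(string message) { Console.WriteLine(message); }
            }
        }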

    Read the article

  • How to detect from a Java desktop application that the internet connection has been lost?

    - by Yatendra Goel
    I am developing a Java desktop application that accesses the internet. It is a multi-threaded application; each thread does the same work (each thread is an instance of the same Thread class). Now, as all the threads need the internet connection to be active, there should be some mechanism that detects whether the internet connection is active or not. Q1. How do I detect whether the internet connection is active or not? Q2. Where should I implement this internet-status-check code? Should I start a separate thread that regularly checks the internet status and notifies all the threads when the status changes from one state to another? Or should I let each thread check the internet status itself? Q3. This should be a very common issue, as every application accessing the internet has to deal with this problem. So how do other developers usually deal with it? Q4. If you could give me a reference to a good demo application that addresses this issue, it would greatly help me.

    Read the article

  • pthread_exit and/or pthread_join causing Abort and SegFaults.

    - by MJewkes
    The following code is a simple thread game that switches between threads, causing the timer to decrease. It works fine for 3 threads, causes an Abort (core dumped) for 4 threads, and causes a segfault for 5 or more threads. Does anyone have any idea why this might be happening?

        #include <stdio.h>
        #include <stdlib.h>
        #include <pthread.h>
        #include <errno.h>
        #include <assert.h>

        int volatile num_of_threads;
        int volatile time_per_round;
        int volatile time_left;
        int volatile turn_id;
        int volatile thread_running;
        int volatile can_check;

        void * player (void * id_in)
        {
            int id = (int)id_in;
            while(1)
            {
                if(can_check)
                {
                    if (time_left <= 0)
                    {
                        break;
                    }
                    can_check = 0;
                    if(thread_running)
                    {
                        if(turn_id == id - 1)
                        {
                            turn_id = random() % num_of_threads;
                            time_left--;
                        }
                    }
                    can_check = 1;
                }
            }
            pthread_exit(NULL);
        }

        int main(int argc, char *args[])
        {
            int i;
            int buffer;
            pthread_t * threads = (pthread_t *)malloc(num_of_threads * sizeof(pthread_t));
            thread_running = 0;
            num_of_threads = atoi(args[1]);
            can_check = 0;
            time_per_round = atoi(args[2]);
            time_left = time_per_round;
            srandom(time(NULL));

            //Create Threads
            for (i = 0; i < num_of_threads; i++)
            {
                do
                {
                    buffer = pthread_create(&threads[i], NULL, player, (void *)(i + 1));
                } while(buffer == EAGAIN);
            }

            can_check = 1;
            time_left = time_per_round;
            turn_id = random() % num_of_threads;
            thread_running = 1;

            for (i = 0; i < num_of_threads; i++)
            {
                assert(!pthread_join(threads[i], NULL));
            }
            return 0;
        }

    Read the article

  • .NET Windows Service, threads and garbage collection (possible memory leaks)

    - by Evgeny
    I am developing a .NET Windows service that is creating a couple of threads and then uses these threads to send print jobs to printers (there is a thread for each printer). I have some issues which sometimes can be fixed by restarting the service. Some issues also arise when the service has been running for a while. This makes me suspect a possible memory leak. So, a couple of questions: Would a garbage collector collect an object if it was created inside a thread, or will the object exist until the thread is stopped/terminated? What tools can I use to monitor the amount of memory used by a Windows service and by a thread that I am starting programmatically?
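
    On the tooling side, besides external options (Performance Monitor counters such as ".NET CLR Memory" and "Process", or a memory profiler), a service can also log a rough snapshot of its own footprint; a minimal sketch, with the logging destination left as Console output for illustration:

        using System;
        using System.Diagnostics;

        static class MemoryProbe
        {
            // Call this periodically (for example from a timer) and route the output to the service log.
            public static void LogSnapshot()
            {
                Process me = Process.GetCurrentProcess();

                long managedBytes = GC.GetTotalMemory(false);   // managed heap estimate, no forced collection
                long privateBytes = me.PrivateMemorySize64;     // all private memory of the process
                long workingSet   = me.WorkingSet64;            // physical memory currently in use

                Console.WriteLine(string.Format(
                    "managed={0} KB, private={1} KB, working set={2} KB, threads={3}",
                    managedBytes / 1024, privateBytes / 1024, workingSet / 1024, me.Threads.Count));
            }
        }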

    Read the article

  • Handling Incoming Data from Multiple Sockets in Python

    - by user859434
    Background: I have a current implementation that receives data from about 120 different socket connections in python. In my current implementation, I handle each of these separate socket connections with a dedicated thread for each. Each of these threads parse the data and eventually store it within a shared locked dictionary. These sockets DO NOT have uniform data rates, some sockets get more data than others. Question: Is this the best way to handle incoming data in python, or does python have a better way on handling multiple sockets per thread?

    Read the article

  • What is the JVM scheduling algorithm?

    - by IHawk
    Hello! I am really curious about how the JVM works with threads. In my searches on the internet, I found some material about RTSJ, but I don't know whether that is the right direction for my answers. I also found this topic in Sun's forums, http://forums.sun.com/thread.jspa?forumID=513&threadID=472453, but that's not satisfactory. Can someone give me some directions, material, articles or suggestions about the JVM scheduling algorithm? I am also looking for information about the default configuration of Java threads in the scheduler, for example how long each thread gets in the case of time-slicing, and that kind of thing. I would appreciate any help! Thank you!

    Read the article

  • Different standard streams per POSIX thread

    - by Roman Nikitchenko
    Is there any possibility of achieving different redirections of standard output, like printf(3), for different POSIX threads? What about standard input? I have a lot of code based on standard input/output, and I can only separate this code into different POSIX threads, not processes. Linux operating system, C standard library. I know I could refactor the code to replace printf() with fprintf(), and so on in this style, but in that case I would need to provide some kind of context which the old code doesn't have. Does anybody have a better idea (see the code below)?

        #include <pthread.h>
        #include <stdio.h>

        void* different_thread(void* arg)
        {
            // Something to redirect standard output which doesn't affect the main thread.
            // ...
            // printf() shall go to a different stream.
            printf("subthread test\n");
            return NULL;
        }

        int main()
        {
            pthread_t id;
            pthread_create(&id, NULL, different_thread, NULL);

            // In the main thread things should be printed normally...
            printf("main thread test\n");

            pthread_join(id, NULL);
            return 0;
        }

    Read the article

  • C# multi-threaded file processing

    - by user177883
    There is a folder that contains 1000s of small text files. I aim to parse and process all of them while more files are being added to the folder. My intention is to multithread this operation, as the single-threaded prototype took 6 minutes to process 1000 files. I would like to have reader and writer thread(s) as follows: while the reader thread(s) are reading the files, I'd like the writer thread(s) to process them. Once the reader has started reading a file, I'd like to mark it as being processed, such as by renaming it; once it is read, rename it to completed. How should I approach such a multithreaded application? Is it better to use a distributed hash table or a queue? Which data structure should I use to avoid locks? Do you have a better approach to this scheme that you would like to share?
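
    As one possible shape for this (a sketch assuming .NET 4.5 or later; the folder path is illustrative and ProcessFile stands in for the real parsing work), a BlockingCollection gives a thread-safe queue between a directory-scanning producer and several consumer threads without hand-rolled locks:

        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading.Tasks;

        class FolderPipeline
        {
            static void Main()
            {
                var queue = new BlockingCollection<string>(boundedCapacity: 1000);
                string folder = @"C:\incoming";            // illustrative path

                // Producer: enqueue file paths, renaming each so it is not picked up twice.
                var producer = Task.Run(() =>
                {
                    foreach (string file in Directory.EnumerateFiles(folder, "*.txt"))
                    {
                        string claimed = file + ".processing";
                        File.Move(file, claimed);          // mark as being processed
                        queue.Add(claimed);
                    }
                    queue.CompleteAdding();                // no more files for this pass
                });

                // Consumers: several workers drain the queue in parallel.
                var consumers = new Task[4];
                for (int i = 0; i < consumers.Length; i++)
                {
                    consumers[i] = Task.Run(() =>
                    {
                        foreach (string path in queue.GetConsumingEnumerable())
                        {
                            ProcessFile(path);                       // hypothetical parsing/processing step
                            File.Move(path, path + ".completed");    // mark as done
                        }
                    });
                }

                Task.WaitAll(consumers);
                producer.Wait();
            }

            static void ProcessFile(string path) { /* parse the file here */ }
        }

    A polling loop or a FileSystemWatcher around the producer would cover files that keep arriving after the first pass.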

    Read the article

  • Creating WPF components in a background thread

    - by mizipzor
    I'm working on a reporting system where a series of DocumentPages is to be created through a DocumentPaginator. These documents include a number of WPF components that have to be instantiated so that the paginator includes the correct content when it is later sent to the XpsDocumentWriter (which in turn is sent to the actual printer). My problem is that the DocumentPage instances take quite a while to create (long enough for Windows to mark the application as frozen), so I tried to create them in a background thread, which is problematic since WPF expects their properties to be set from the GUI thread. I would also like to have a progress bar showing up, indicating how many pages have been created so far. Thus, it looks like I'm trying to get two things to happen in parallel on the GUI. The problem is hard to explain and I'm really not sure how to tackle it. In short: create a series of DocumentPages that include WPF components; create them on a background thread, or use some other trick, so the application isn't frozen; and after each page is created, update a WPF ProgressBar. If there is no decent way to do this, alternative solutions and approaches are more than welcome.
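
    One hedged sketch of the "other trick" route: keep page creation on the UI thread but do it one page at a time at Background dispatcher priority, so input and the ProgressBar stay responsive between pages. The createPage and onPageCreated delegates are placeholders the caller supplies for the real page construction and progress update:

        using System;
        using System.Windows.Documents;
        using System.Windows.Threading;

        static class PagedCreation
        {
            // Creates pages one at a time on the UI thread at Background priority,
            // invoking a callback (e.g. add to a list and bump the ProgressBar) after each page.
            public static void CreatePages(
                Dispatcher dispatcher,
                int pageCount,
                Func<int, DocumentPage> createPage,          // builds page i (WPF objects, so UI thread)
                Action<int, DocumentPage> onPageCreated)     // store the page and report progress
            {
                void CreateNext(int index)
                {
                    if (index >= pageCount)
                        return;

                    onPageCreated(index, createPage(index));

                    // Queue the next page behind pending input/render work, keeping the UI live.
                    dispatcher.BeginInvoke(DispatcherPriority.Background,
                                           new Action(() => CreateNext(index + 1)));
                }

                dispatcher.BeginInvoke(DispatcherPriority.Background,
                                       new Action(() => CreateNext(0)));
            }
        }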

    Read the article

  • Windows Service suddenly doing nothing

    - by TB
    Hi, my Windows service uses a Thread (not a timer) which loops forever and sleeps for 1 second on every iteration using: evet.WaitOne(interval); When I start the service it works fine, and I can see in Task Manager that it is running, consuming and releasing memory, consuming processor time, etc.; that is all normal. But after a while (a random amount of time) the service simply stops! It is still there in Task Manager, but it is no longer doing any processor work and its memory consumption is not changing. It has simply died, but it is still there in Task Manager like a zombie. I know that many exceptions might happen while the service is running (it really does many things), but all those exceptions are handled in try/catch blocks, so why does my "always looping" thread stop? This thread also logs every time it loops; when it freezes in this way it is (of course) not logging anything.
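
    For diagnosis, a minimal sketch of a thread procedure with an outermost guard (names are illustrative, not from the service in question), so that anything escaping the inner try/catch blocks, including an exception thrown by the logger itself, gets recorded before the thread dies:

        using System;
        using System.Threading;

        class Worker
        {
            private readonly ManualResetEvent _stopEvent = new ManualResetEvent(false);
            private readonly TimeSpan _interval = TimeSpan.FromSeconds(1);

            public void Run()   // thread entry point
            {
                try
                {
                    while (!_stopEvent.WaitOne(_interval))   // sleeps 1 s per loop, exits when signaled
                    {
                        try
                        {
                            DoWork();                        // the real per-iteration work
                        }
                        catch (Exception ex)
                        {
                            SafeLog("iteration failed: " + ex);
                        }
                    }
                }
                catch (Exception ex)
                {
                    // Last-chance record: if we get here, something escaped the inner handler
                    // and the loop is about to end for good.
                    SafeLog("worker thread exiting unexpectedly: " + ex);
                    throw;
                }
            }

            private void DoWork() { /* ... */ }

            private void SafeLog(string message)
            {
                try { Console.WriteLine(message); } catch { /* never let logging kill the thread */ }
            }
        }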

    Read the article

  • How do I make my ArrayList thread-safe? Another approach to the problem in Java?

    - by thechiman
    I have an ArrayList that I want to use to hold RaceCar objects (which extend the Thread class) as soon as they finish executing. A class called Race handles this ArrayList using a callback method that the RaceCar object calls when it finishes executing. The callback method, addFinisher(RaceCar finisher), adds the RaceCar object to the ArrayList. This is supposed to give the order in which the threads finish executing. I know that ArrayList isn't synchronized and thus isn't thread-safe. I tried using the Collections.synchronizedCollection(Collection c) method by passing in a new ArrayList and assigning the returned Collection to an ArrayList. However, this gives me a compiler error:

        Race.java:41: incompatible types
        found   : java.util.Collection
        required: java.util.ArrayList
        finishingOrder = Collections.synchronizedCollection(new ArrayList(numberOfRaceCars));

    Here is the relevant code:

        public class Race implements RaceListener {
            private Thread[] racers;
            private ArrayList finishingOrder;

            //Make an ArrayList to hold RaceCar objects to determine winners
            finishingOrder = Collections.synchronizedCollection(new ArrayList(numberOfRaceCars));

            //Fill array with RaceCar objects
            for(int i=0; i<numberOfRaceCars; i++) {
                racers[i] = new RaceCar(laps, inputs[i]);

                //Add this as a RaceListener to each RaceCar
                ((RaceCar) racers[i]).addRaceListener(this);
            }

            //Implement the one method in the RaceListener interface
            public void addFinisher(RaceCar finisher) {
                finishingOrder.add(finisher);
            }

    What I need to know is, am I using a correct approach, and if not, what should I use to make my code thread-safe? Thanks for the help!

    Read the article

  • Is System.nanoTime() consistent across threads?

    - by obvio171
    I want to count the time elapsed between two events in nanoseconds. To do that, I can use System.nanoTime() as mentioned here. The problem is that the two events are happening in different threads. Since nanoTime() doesn't return an absolute timestamp but instead can only be used to calculate time differences, I'd like to know if the values I get on the two different threads are consistent with the physical time elapsed between the two events.

    Read the article

  • remote function with pthread

    - by user311130
    Hi all, I wrote some code in C using pthreads (I configured the linker and compiler in the Eclipse IDE first).

        #include <pthread.h>
        #include "starter.h"
        #include "UI.h"

        Page* MM;
        Page* Disk;
        PCB* all_pcb_array;

        void* display_prompt(void *id) {
            printf("Hello111\n");
            return NULL;
        }

        int main(int argc, char** argv) {
            printf("Hello\n");

            pthread_t *thread = (pthread_t*) malloc(sizeof(pthread_t));
            pthread_create(thread, NULL, display_prompt, NULL);

            printf("Hello\n");
            return 1;
        }

    That works fine. However, when I move display_prompt to UI.h, no "Hello111" output is printed. Does anyone know how to solve that? Elad

    Read the article
