Search Results

Search found 9705 results on 389 pages for 'boost thread'.


  • list and explanations of ways to boost this router's signal strength? [closed]

    - by barlop
    Possible Duplicates: Improve Wireless Signal; How to get wireless coverage over my whole house?; What's the best way to increase the range of my 802.11g router?; The back of my house doesn't have WiFi Signal. I'm interested in ways that are both specific to certain routers and generic. By generic I don't necessarily mean one fix that works for many routers; a generic answer covering solutions for different situations is also fine, so not just the one router I mention. Explanations are as important as the list of ways. The one I'm particularly interested in boosting is this wireless router/modem, a Netgear VMDG280, used anywhere in a big house with three floors, and maybe from the garden.

    Read the article

  • Why would delaying a thread response fix view corruption?

    - by Beth S
    6 times out of 10 my very simple iPhone app is getting a corrupted display on launch or crashes randomly. But it behaves fine in the simulator. The display corruption looks like mis-colored fonts, out of place font text, wrong background colors, etc. I've found a strange work-around.. when my thread delays by 2 seconds before calling the "done" notification, everything works swimmingly. The thread reads a web page and the "done" notification loads up a PickerView with strings. So what gives? Can I not safely initiate a threaded task from viewDidLoad? - (void) loadWebPage:(NSString *)urlAddition { NSAutoreleasePool *subPool = [[NSAutoreleasePool alloc] init]; NSString *pageSource; NSError *err; NSString *urlString = [NSString stringWithString:@"http://server/%@", urlAddition]; pageSource = [NSString stringWithContentsOfURL:[NSURL URLWithString: urlString] encoding:NSUTF8StringEncoding error:&err]; [NSThread sleepForTimeInterval:2.0]; // THIS STOPS THE DISPLAY CORRUPTION [[NSNotificationCenter defaultCenter] postNotificationName:@"webDoneNotification" object:nil]; [subPool drain]; } - (void) webDoneNotification: (NSNotification *)pNotification { [mediaArray release]; mediaArray = [[NSArray arrayWithObjects: [NSString stringWithString:@"new pickerview text"], nil] retain]; [mediaPickerView reloadAllComponents]; [mediaPickerView selectRow:0 inComponent:0 animated:NO]; } - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { mediaArray = [[NSArray arrayWithObjects: [NSString stringWithString:@"init pickerview text"], nil] retain]; if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) { // Custom initialization } return self; } - (void)viewDidLoad { [super viewDidLoad]; myWebThread = [[WebThread alloc] initWithDelegate:self]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(webDoneNotification:) name:@"webDoneNotification" object:nil]; [myWebThread performSelectorInBackground:@selector(loadWebPage:) withObject:@""]; } Thanks! Update: Even a delay of 0.1 seconds is enough to completely fix the problem.

    Read the article

  • What is an efficient strategy for multiple threads posting jobs and waiting for response from a single thread?

    - by jakewins
    In Java, what is an efficient solution to the following problem: I have multiple threads (10-20 or so) generating jobs ("job creators"), and a single thread capable of performing them ("the worker"). Once a job creator has posted a job, it should wait for the job to finish, yielding no result other than "it's done", before it keeps going. For sending the jobs to the worker thread, I think a ring buffer or similar standard fan-in setup would be a good approach. But I'm less sure how a job creator should find out that its job has been done. The job creators could sleep and the worker could interrupt them when done, or each job creator could have an atomic boolean that it checks and the worker sets; neither of those feels very nice. I'd like to do it with as few locks as possible (ideally none). To be clear: what I'm looking for is speed, not necessarily simplicity. Does anyone have any suggestions? Links to reading about concurrency strategies would also be very welcome!
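
    One common shape for this, since each job needs nothing back except "it's done", is to pair every job with a future that its creator blocks on; in Java itself that is essentially Executors.newSingleThreadExecutor() plus submit(...).get(). Below is a minimal C++11 sketch of the same hand-off (class and member names are illustrative only); a pre-sized ring buffer or an existing MPSC queue could replace the mutex-protected queue if that lock ever becomes the bottleneck.

      // Sketch: many creators post jobs to one worker and block until their job is done.
      #include <condition_variable>
      #include <functional>
      #include <future>
      #include <mutex>
      #include <queue>

      class JobQueue {
      public:
          // Called by a job creator: enqueue the work, then wait for "it's done".
          void postAndWait(std::function<void()> job) {
              std::packaged_task<void()> task(std::move(job));
              std::future<void> done = task.get_future();
              {
                  std::lock_guard<std::mutex> lock(m_);
                  q_.push(std::move(task));
              }
              cv_.notify_one();
              done.wait();               // yields no result other than "it's done"
          }

          // Runs on the single worker thread.
          void runWorker() {
              for (;;) {
                  std::packaged_task<void()> task;
                  {
                      std::unique_lock<std::mutex> lock(m_);
                      cv_.wait(lock, [this] { return stop_ || !q_.empty(); });
                      if (stop_ && q_.empty()) return;
                      task = std::move(q_.front());
                      q_.pop();
                  }
                  task();                // completes the creator's future
              }
          }

          void stop() {
              { std::lock_guard<std::mutex> lock(m_); stop_ = true; }
              cv_.notify_all();
          }

      private:
          std::mutex m_;
          std::condition_variable cv_;
          std::queue<std::packaged_task<void()>> q_;
          bool stop_ = false;
      };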

    Read the article

  • Why is onCreate() called multiple times when I use a Thread?

    - by RajaReddy PolamReddy
    In my app I faced a problem with threads. I am using native code in my app: I load a library and then call native functions from the Android code. 1. Using a Thread: PjsuaThread pjsuaThread = new PjsuaThread(); pjsuaThread.start(); thread code: class PjsuaThread extends Thread { public void run() { if (pjsua_app.initApp() != 0) { // native function calling return; } else { } pjsua_app.startPjsua(ApjsuaActivity.CFG_FNAME); // native function calling finished = true; } When I use code like this, onCreate() is called multiple times; the library loads and some functions are called properly, but after some seconds onCreate() is called again and because of that it crashes. 2. Using AsyncTask: I also used AsyncTask for this requirement, and it crashes the application (crashing in the lib code) and I am not able to call any functions. class SipTask extends AsyncTask<Void, String, Void> { protected Void doInBackground(Void... args) { if (pjsua_app.initApp() != 0) { return null; } else { } pjsua_app.startPjsua(ApjsuaActivity.CFG_FNAME); finished = true; return null; } @Override protected void onPostExecute(Void result) { super.onPostExecute(result); Log.i(TAG, "On POst "); } } What is confusing is that in most cases it is not a missing library; it crashes somewhere while the lib is being loaded. Does anyone know the reason?

    Read the article

  • How can I get back into my main processing thread?

    - by daveomcd
    I have an app that I'm accessing a remote website with NSURLConnection to run some code and then save out some XML files. I am then accessing those XML Files and parsing through them for information. The process works fine except that my User Interface isn't getting updated properly. I want to keep the user updated through my UILabel. I'm trying to update the text by using setBottomBarToUpdating:. It works the first time when I set it to "Processing Please Wait..."; however, in the connectionDidFinishLoading: it doesn't update. I'm thinking my NSURLConnection is running on a separate thread and my attempt with the dispatch_get_main_queue to update on the main thread isn't working. How can I alter my code to resolve this? Thanks! [If I need to include more information/code just let me know!] myFile.m NSLog(@"Refreshing..."); dispatch_sync( dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ [self getResponse:@"http://mylocation/path/to/file.aspx"]; }); [self setBottomBarToUpdating:@"Processing Please Wait..."]; queue = dispatch_queue_create("updateQueue", DISPATCH_QUEUE_CONCURRENT); connectionDidFinishLoading: if ([response rangeOfString:@"Complete"].location == NSNotFound]) { // failed } else { //success dispatch_async(dispatch_get_main_queue(),^ { [self setBottomBarToUpdating:@"Updating Contacts..."]; }); [self updateFromXMLFile:@"http://thislocation.com/path/to/file.xml"]; dispatch_async(dispatch_get_main_queue(),^ { [self setBottomBarToUpdating:@"Updating Emails..."]; }); [self updateFromXMLFile:@"http://thislocation.com/path/to/file2.xml"]; }

    Read the article

  • Installing PySide - OSX

    - by jeremynealbrown
    Has anyone had success installing and using PySide on OSX? I am following the install instructions on the PySide site, though I'm running into issues building the API Extractor. When I run cmake on the CMakeLists.txt file inside the API Extractor dir, this error is thrown: CMake Error at /Applications/CMake 2.8-0.app/Contents/share/cmake-2.8/Modules/FindBoost.cmake:894 (message): Unable to find the requested Boost libraries. Unable to find the Boost header files. Please set BOOST_ROOT to the root directory containing Boost or BOOST_INCLUDEDIR to the directory containing Boost's headers. Call Stack (most recent call first): CMakeLists.txt:5 (find_package) I am new to building source with cmake and I'm not even really sure what Boost is. Any light you might shed on the setup process would be great. Thanks

    Read the article

  • Singleton code linker errors in vc 9.0. Runs fine in linux compiled with gcc

    - by user306560
    I have a simple logger that is implemented as a singleton. It works like i want when I compile and run it with g++ in linux but when I compile in Visual Studio 9.0 with vc++ I get the following errors. Is there a way to fix this? I don't mind changing the logger class around, but I would like to avoid changing how it is called. 1>Linking... 1>loggerTest.obj : error LNK2005: "public: static class Logger * __cdecl Logger::getInstance(void)" (?getInstance@Logger@@SAPAV1@XZ) already defined in Logger.obj 1>loggerTest.obj : error LNK2005: "public: void __thiscall Logger::log(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (?log@Logger@@QAEXABV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) already defined in Logger.obj 1>loggerTest.obj : error LNK2005: "public: void __thiscall Logger::closeLog(void)" (?closeLog@Logger@@QAEXXZ) already defined in Logger.obj 1>loggerTest.obj : error LNK2005: "private: static class Logger * Logger::_instance" (?_instance@Logger@@0PAV1@A) already defined in Logger.obj 1>Logger.obj : error LNK2001: unresolved external symbol "private: static class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > Logger::_path" (?_path@Logger@@0V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@A) 1>loggerTest.obj : error LNK2001: unresolved external symbol "private: static class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > Logger::_path" (?_path@Logger@@0V?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@A) 1>Logger.obj : error LNK2001: unresolved external symbol "private: static class boost::mutex Logger::_mutex" (?_mutex@Logger@@0Vmutex@boost@@A) 1>loggerTest.obj : error LNK2001: unresolved external symbol "private: static class boost::mutex Logger::_mutex" (?_mutex@Logger@@0Vmutex@boost@@A) 1>Logger.obj : error LNK2001: unresolved external symbol "private: static class std::basic_ofstream<char,struct std::char_traits<char> > Logger::_log" (?_log@Logger@@0V?$basic_ofstream@DU?$char_traits@D@std@@@std@@A) 1>loggerTest.obj : error LNK2001: unresolved external symbol "private: static class std::basic_ofstream<char,struct std::char_traits<char> > Logger::_log" (?_log@Logger@@0V?$basic_ofstream@DU?$char_traits@D@std@@@std@@A) The code, three files Logger.h Logger.cpp test.cpp #ifndef __LOGGER_CPP__ #define __LOGGER_CPP__ #include "Logger.h" Logger* Logger::_instance = 0; //string Logger::_path = "log"; //ofstream Logger::_log; //boost::mutex Logger::_mutex; Logger* Logger::getInstance(){ { boost::mutex::scoped_lock lock(_mutex); if(_instance == 0) { _instance = new Logger; _path = "log"; } } //mutex return _instance; } void Logger::log(const std::string& msg){ { boost::mutex::scoped_lock lock(_mutex); if(!_log.is_open()){ _log.open(_path.c_str()); } if(_log.is_open()){ _log << msg.c_str() << std::endl; } } } void Logger::closeLog(){ Logger::_log.close(); } #endif ` ... 
    Logger.h: #ifndef __LOGGER_H__ #define __LOGGER_H__ #include <iostream> #include <string> #include <fstream> #include <boost/thread/mutex.hpp> #include <boost/thread.hpp> using namespace std; class Logger { public: static Logger* getInstance(); void log(const std::string& msg); void closeLog(); protected: Logger(){} private: static Logger* _instance; static string _path; static bool _logOpen; static ofstream _log; static boost::mutex _mutex; //check mutable }; #endif test.cpp: #include <iostream> #include "Logger.cpp" using namespace std; int main(int argc, char *argv[]) { Logger* log = Logger::getInstance(); log->log("hello world\n"); return 0; }
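
    The LNK2005 duplicates come from test.cpp including Logger.cpp, so every member function definition lands in both Logger.obj and loggerTest.obj; the LNK2001 unresolved symbols are the static data members (_path, _log, _mutex) that are declared in the class but whose definitions are commented out. A sketch of one fix, keeping the calling code unchanged:

      // Logger.cpp - give every static data member exactly one definition here
      #include "Logger.h"

      Logger*       Logger::_instance = 0;
      std::string   Logger::_path     = "log";
      std::ofstream Logger::_log;
      boost::mutex  Logger::_mutex;

      // ... getInstance(), log(), closeLog() definitions as before,
      //     minus the "_path = \"log\";" line inside getInstance() ...

      // test.cpp - include the header, not the implementation file
      #include "Logger.h"          // was: #include "Logger.cpp"

      int main() {
          Logger::getInstance()->log("hello world\n");
          return 0;
      }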

    Read the article

  • How to use autoconf with C++0x features

    - by themis
    What are the best practices for using autoconf in conjunction with shared_ptr and other TR1/Boost C++0x templates so as to maximize portability and maintainability? With autoconf I can determine whether shared_ptr is available as std::tr1::shared_ptr and/or boost::shared_ptr. Given that the same feature has two different names, I have the following questions: In the code, how should shared_ptr be referenced? Should std::tr1::shared_ptr be preferred over boost::shared_ptr? For the first, the code currently uses preprocessor conditionals allowing unqualified references to shared_ptr, a la #if HAVE_STD_TR1_SHARED_PTR using std::tr1::shared_ptr; #elif HAVE_BOOST_SHARED_PTR using boost::shared_ptr; #else #error "No definition for shared_ptr found" #endif Second, the code prefers std::tr1:: over boost:: to minimize dependencies on external libraries (even if the libraries are widely used). Are these two solutions common? Are there better ones?
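
    One way to keep that conditional in a single place is a small project header that hides the configure result behind the project's own namespace, so client code never spells out std::tr1:: or boost:: directly. A sketch, reusing the question's HAVE_* macros; the header name, the mylib namespace, and the <tr1/memory> include are illustrative (the TR1 header location varies by compiler and can itself be probed by configure):

      // mylib/memory.hpp (illustrative name)
      #ifndef MYLIB_MEMORY_HPP
      #define MYLIB_MEMORY_HPP

      #include "config.h"                    // generated by autoconf

      #if defined(HAVE_STD_TR1_SHARED_PTR)
      #  include <tr1/memory>                // GCC's TR1 header; some compilers use <memory>
      namespace mylib { using std::tr1::shared_ptr; }
      #elif defined(HAVE_BOOST_SHARED_PTR)
      #  include <boost/shared_ptr.hpp>
      namespace mylib { using boost::shared_ptr; }
      #else
      #  error "No definition for shared_ptr found"
      #endif

      #endif // MYLIB_MEMORY_HPP

    Client code then writes mylib::shared_ptr<T> everywhere, and switching the preference between TR1 and Boost becomes a one-line change in this header.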

    Read the article

  • QThread - trouble shutting down threads

    - by Bryan Greenway
    For the last few days, I've been trying out the new preferred approach for using QThreads without subclassing QThread. The trouble I'm having is when I try to shutdown a set of threads that I created. I regularly get a "Destroyed while thread is still running" message (if I'm running in Debug mode, I also get a Segmentation Fault dialog). My code is very simple, and I've tried to follow the examples that I've been able to find on the internet. My basic setup is as follows: I've a simple class that I want to run in a separate thread; in fact, I want to run 5 instances of this class, each in a separate thread. I have a simple dialog with a button to start each thread, and a button to stop each thread (10 buttons). When I click one of the "start" buttons, a new instance of the test class is created, a new QThread is created, a movetothread is called to get the test class object to the thread...also, since I have a couple of other members in the test class that need to move to the thread, I call movetothread a few additional times with these other items. Note that one of these items is a QUdpSocket, and although this may not make sense, I wanted to make sure that sockets could be moved to a separate thread in this fashion...I haven't tested the use of the socket in the thread at this point. Starting of the threads all seem to work fine. When I use the linux top command to see if the threads are created and running, they show up as expected. The problem occurs when I begin stopping the threads. I randomly (or it appears to be random) get the error described above. Class that is to run in separate thread: // Declaration class TestClass : public QObject { Q_OBJECT public: explicit TestClass(QObject *parent = 0); QTimer m_workTimer; QUdpSocket m_socket; Q_SIGNALS: void finished(); public Q_SLOTS: void start(); void stop(); void doWork(); }; // Implementation TestClass::TestClass(QObject *parent) : QObject(parent) { } void TestClass::start() { connect(&m_workTimer, SIGNAL(timeout()),this,SLOT(doWork())); m_workTimer.start(50); } void TestClass::stop() { m_workTimer.stop(); emit finished(); } void TestClass::doWork() { int j; for(int i = 0; i<10000; i++) { j = i; } } Inside my main app, code called to start the first thread (similar code exists for each of the other threads): mp_thread1 = new QThread(); mp_testClass1 = new TestClass(); mp_testClass1->moveToThread(mp_thread1); mp_testClass1->m_socket.moveToThread(mp_thread1); mp_testClass1->m_workTimer.moveToThread(mp_thread1); connect(mp_thread1, SIGNAL(started()), mp_testClass1, SLOT(start())); connect(mp_testClass1, SIGNAL(finished()), mp_thread1, SLOT(quit())); connect(mp_testClass1, SIGNAL(finished()), mp_testClass1, SLOT(deleteLater())); connect(mp_testClass1, SIGNAL(finished()), mp_thread1, SLOT(deleteLater())); connect(this,SIGNAL(stop1()),mp_testClass1,SLOT(stop())); mp_thread1->start(); Also inside my main app, this code is called when a stop button is clicked for a specific thread (in this case thread 1): emit stop1(); Sometimes it appears that threads are stopped and destroyed without issue. Other times, I get the error described above. Any guidance would be greatly appreciated. Thanks, Bryan
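
    The random "Destroyed while thread is still running" usually means the QThread object is deleted before its event loop has fully exited; in the wiring above, the thread's deleteLater() is driven by the worker's finished(), which the main thread can process while the thread is still winding down. A sketch of the usual rewiring, using the question's names for thread 1, where only the QThread's own finished() triggers its deletion:

      connect(mp_thread1,    SIGNAL(started()),  mp_testClass1, SLOT(start()));
      connect(this,          SIGNAL(stop1()),    mp_testClass1, SLOT(stop()));
      connect(mp_testClass1, SIGNAL(finished()), mp_thread1,    SLOT(quit()));
      connect(mp_testClass1, SIGNAL(finished()), mp_testClass1, SLOT(deleteLater()));
      connect(mp_thread1,    SIGNAL(finished()), mp_thread1,    SLOT(deleteLater()));
      mp_thread1->start();

      // If the dialog can be torn down while a thread is still running, stop it
      // synchronously first (guarding against an already-deleted pointer):
      //     mp_thread1->quit();
      //     mp_thread1->wait();

    The only change from the code in the question is the last connect: the QThread deletes itself off its own finished() signal, so it can no longer be destroyed while it is still running.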

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel.  Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria.  This can be done using the techniques described in simple data parallelism, however, special care needs to be taken to synchronize the shared data appropriately.  The Task Parallel Library has tools to assist in this synchronization. The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data.  Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset: double min = double.MaxValue; foreach(var item in collection) { double value = item.PerformComputation(); min = System.Math.Min(min, value); } This seems like a good candidate for parallelization, but there is a problem here.  If we just wrap this into a call to Parallel.ForEach, we'll introduce a critical race condition, and get the wrong answer.  Let's look at what happens here: // Buggy code! Do not use! double min = double.MaxValue; Parallel.ForEach(collection, item => { double value = item.PerformComputation(); min = System.Math.Min(min, value); }); This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously.  Two threads may perform the check at the same time, and set the wrong value for min.  Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run.  If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively.  If element 1 happens to set the variable first, then element 2 sets the min variable, we'll detect a min value of 2 instead of 1.  This can lead to wrong answers. Unfortunately, fixing this, with the Parallel.ForEach call we're using, would require adding locking.  We would need to rewrite this like: // Safe, but slow double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach(collection, item => { double value = item.PerformComputation(); lock(syncObject) min = System.Math.Min(min, value); }); This will potentially add a huge amount of overhead to our calculation.  Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation.  The problem is the lock statement – any time you use lock(object), you're almost assuring reduced performance in a parallel situation.
This leads to two observations I'll make: When parallelizing a routine, try to avoid locks. That being said: Always add any and all required synchronization to avoid race conditions. These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible.  Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time. However, this isn't the only way to design this routine to implement this algorithm.  Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE).  Processing Element is the standard term for a hardware element which can process and execute instructions.  This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core.  The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger "chunks" which get processed on different threads via the ThreadPool.  This helps reduce the threading overhead, and helps the overall speed.  In general, the Parallel class will only use one thread per PE in the system. Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design.  We can parallelize our algorithm more effectively by approaching it differently.  Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform this in a given order.  We knew this to be true already – otherwise, we wouldn't have been able to parallelize this routine in the first place.  With this in mind, we can treat each thread's work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, "merge" together the results. This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>.  The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal).  The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item.  Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results. To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one.  The most basic version of this function is declared as: public static ParallelLoopResult ForEach<TSource, TLocal>( IEnumerable<TSource> source, Func<TLocal> localInit, Func<TSource, ParallelLoopState, TLocal, TLocal> body, Action<TLocal> localFinally ) The first delegate (the localInit argument) is defined as Func<TLocal>.  This delegate initializes our local state.  It should return some object we can use to track the results of a single thread's operations. The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.
This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal).  It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed. The third delegate (the localFinally argument) is defined as Action<TLocal>.  This delegate is passed our local state after it's been processed by all of the elements this thread will handle.  This is where you can merge your final results together.  This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you'll only have to synchronize once per thread, which is an ideal situation. Now that I've explained how this works, let's look at the code: // Safe, and fast! double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach( collection, // First, we provide a local state initialization delegate. () => double.MaxValue, // Next, we supply the body, which takes the original item, loop state, // and local state, and returns a new local state (item, loopState, localState) => { double value = item.PerformComputation(); return System.Math.Min(localState, value); }, // Finally, we provide an Action<TLocal>, to "merge" results together localState => { // This requires locking, but it's only once per used thread lock(syncObject) min = System.Math.Min(min, localState); } ); Although this is a bit more complicated than the previous version, it is now both thread-safe, and has minimal locking.  This same approach can be used by Parallel.For, although now, it's Parallel.For<TLocal>.  When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results. Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation.  The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add. By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization.  Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
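
For comparison, the local-state-then-merge shape is not specific to the TPL. Here is a minimal C++11 sketch of the identical idea (names are illustrative), where each thread keeps a private minimum and the lock is taken once per thread rather than once per element:

      #include <algorithm>
      #include <limits>
      #include <mutex>
      #include <thread>
      #include <vector>

      double parallelMin(const std::vector<double>& values, unsigned threads) {
          if (threads == 0) threads = 1;
          double min = std::numeric_limits<double>::max();
          std::mutex mergeLock;
          std::vector<std::thread> pool;
          std::size_t chunk = (values.size() + threads - 1) / threads;

          for (unsigned t = 0; t < threads; ++t) {
              pool.emplace_back([&, t] {
                  std::size_t begin = t * chunk;
                  std::size_t end   = std::min(values.size(), begin + chunk);
                  double local = std::numeric_limits<double>::max();  // "localInit"
                  for (std::size_t i = begin; i < end; ++i)           // "body"
                      local = std::min(local, values[i]);
                  std::lock_guard<std::mutex> g(mergeLock);           // "localFinally"
                  min = std::min(min, local);
              });
          }
          for (auto& th : pool) th.join();
          return min;
      }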

    Read the article

  • .NET BackgroundWorker - Is there no way to let exceptions pass back normally to the main thread?

    - by Greg
    Hi, QUESTION: Regarding use of the .NET BackgroundWorker, is there not a way to let exceptions pass back normally to the main thread? BACKGROUND: Currently in my WinForms application I have generic exception handling that goes along the lines of: (a) if it is a custom app exception, present it to the user but don't exit the program, and (b) if it is any other exception, present it and then exit the application. The above is nice because I can just throw the appropriate exception anywhere in the application and the presentation/handling is dealt with generically.

    Read the article

  • Dynamic changes to thread stack size in Solaris 9 ?

    - by Satya
    Hello, I am looking for a configurable/tunable on Solaris 9 through which I can change the default thread stack size without recompiling the code to use "pthread_attr_setstacksize". For example, on HP-UX 11.11 / 11.23 the environment variable "PTHREAD_DEFAULT_STACK_SIZE" can be exported (available via HP-UX patches PHCO_38307 / PHCO_38955). Is there an equivalent Solaris 9 way to achieve the same? Thanks! Satya
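
    If recompiling ever becomes acceptable, the per-thread attribute route the question already mentions is only a few lines. This is a generic POSIX sketch, not a Solaris-only tunable; the 2 MB figure and the function names around it are just examples:

      #include <pthread.h>

      // Placeholder thread body; real code would do the actual work here.
      void* worker(void*) { return 0; }

      int start_big_stack_thread(pthread_t* tid) {
          pthread_attr_t attr;
          pthread_attr_init(&attr);
          pthread_attr_setstacksize(&attr, 2 * 1024 * 1024);   // e.g. 2 MB stack
          int rc = pthread_create(tid, &attr, worker, 0);
          pthread_attr_destroy(&attr);
          return rc;
      }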

    Read the article

  • Fulltext search for Django: MySQL not so bad? (vs Sphinx, Xapian)

    - by Eric
    I am studying fulltext search engines for django. It must be simple to install, fast indexing, fast index update, not blocking while indexing, fast search. After reading many web pages, I put in short list : Mysql MYISAM fulltext, djapian/python-xapian, and django-sphinx I did not choose lucene because it seems complex, nor haystack as it has less features than djapian/django-sphinx (like fields weighting). Then I made some benchmarks, to do so, I collected many free books on the net to generate a database table with 1 485 000 records (id,title,body), each record is about 600 bytes long. From the database, I also generated a list of 100 000 existing words and shuffled them to create a search list. For the tests, I made 2 runs on my laptop (4Go RAM, Dual core 2.0Ghz): the first one, just after a server reboot to clear all caches, the second is done juste after in order to test how good are cached results. Here are the "home made" benchmark results : 1485000 records with Title (150 bytes) and body (450 bytes) Mysql 5.0.75/Ubuntu 9.04 Fulltext : ========================================================================== Full indexing : 7m14.146s 1 thread, 1000 searchs with single word randomly taken from database : First run : 0:01:11.553524 next run : 0:00:00.168508 Mysql 5.5.4 m3/Ubuntu 9.04 Fulltext : ========================================================================== Full indexing : 6m08.154s 1 thread, 1000 searchs with single word randomly taken from database : First run : 0:01:11.553524 next run : 0:00:00.168508 1 thread, 100000 searchs with single word randomly taken from database : First run : 9m09s next run : 5m38s 1 thread, 10000 random strings (random strings should not be found in database) : just after the 100000 search test : 0:00:15.007353 1 thread, boolean search : 1000 x (+word1 +word2) First run : 0:00:21.205404 next run : 0:00:00.145098 Djapian Fulltext : ========================================================================== Full indexing : 84m7.601s 1 thread, 1000 searchs with single word randomly taken from database with prefetch : First run : 0:02:28.085680 next run : 0:00:14.300236 python-xapian Fulltext : ========================================================================== 1 thread, 1000 searchs with single word randomly taken from database : First run : 0:01:26.402084 next run : 0:00:00.695092 django-sphinx Fulltext : ========================================================================== Full indexing : 1m25.957s 1 thread, 1000 searchs with single word randomly taken from database : First run : 0:01:30.073001 next run : 0:00:05.203294 1 thread, 100000 searchs with single word randomly taken from database : First run : 12m48s next run : 9m45s 1 thread, 10000 random strings (random strings should not be found in database) : just after the 100000 search test : 0:00:23.535319 1 thread, boolean search : 1000 x (word1 word2) First run : 0:00:20.856486 next run : 0:00:03.005416 As you can see, Mysql is not so bad at all for fulltext search. In addition, its query cache is very efficient. Mysql seems to me a good choice as there is nothing to install (I need just to write a small script to synchronize an Innodb production table to a MyISAM search table) and as I do not really need advanced search feature like stemming etc... Here is the question : What do you think about Mysql fulltext search engine vs sphinx and xapian ?

    Read the article

  • Unexpected behavior of IntentService

    - by kknight
    I used IntentService in my code instead of Service because IntentService creates a thread for me in onHandleIntent(Intent intent), so I don't have to create a Thread myself in the code of my service. I expected that two intents sent to the same IntentService would execute in parallel, because a thread is generated in IntentService for each intent. But my code showed that the two intents executed sequentially. This is my IntentService code: public class UpdateService extends IntentService { public static final String TAG = "HelloTestIntentService"; public UpdateService() { super("News UpdateService"); } protected void onHandleIntent(Intent intent) { String userAction = intent .getStringExtra("userAction"); Log.v(TAG, "" + new Date() + ", In onHandleIntent for userAction = " + userAction + ", thread id = " + Thread.currentThread().getId()); if ("1".equals(userAction)) { try { Thread.sleep(20 * 1000); } catch (InterruptedException e) { Log.e(TAG, "error", e); } Log.v(TAG, "" + new Date() + ", This thread is waked up."); } } } And the code that calls the service is below: public class HelloTest extends Activity { //@Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); Intent selectIntent = new Intent(this, UpdateService.class); selectIntent.putExtra("userAction", "1"); this.startService(selectIntent); selectIntent = new Intent(this, UpdateService.class); selectIntent.putExtra("userAction", "2"); this.startService(selectIntent); } } I saw these messages in the log: V/HelloTestIntentService( 848): Wed May 05 14:59:37 PDT 2010, In onHandleIntent for userAction = 1, thread id = 8 D/dalvikvm( 609): GC freed 941 objects / 55672 bytes in 99ms V/HelloTestIntentService( 848): Wed May 05 15:00:00 PDT 2010, This thread is waked up. V/HelloTestIntentService( 848): Wed May 05 15:00:00 PDT 2010, In onHandleIntent for userAction = 2, thread id = 8 I/ActivityManager( 568): Stopping service: com.example.android/.UpdateService The log shows that the second intent waited for the first intent to finish, and they ran on the same thread. Is there anything I have misunderstood about IntentService? To make two service intents execute in parallel, do I have to replace IntentService with Service and start a thread myself in the service code? Thanks.

    Read the article

  • Is it allowed to load Swing classes in non-EDT thread?

    - by ddimitrov
    After the introduction of Java Memory Model, the Swing guidelines were changed to state that any Swing components need to be instantiated on the EDT in order to avoid non-published instance state. What I could not find anywhere is whether the classloading is also mandated to be on the EDT or can we pre-load key Swing classes in a background thread? Is there any official statement from Sun/Oracle on this? Are there any classes that are known to hold non-threadsafe static state, hence need to be loaded on EDT?

    Read the article

  • Is this too much code for a header only library?

    - by Billy ONeal
    It seems like I had to inline quite a bit of code here. I'm wondering if it's bad design practice to leave this entirely in a header file like this: #pragma once #include <string> #include <boost/noncopyable.hpp> #include <boost/make_shared.hpp> #include <boost/iterator/iterator_facade.hpp> #include <Windows.h> #include "../Exception.hpp" namespace WindowsAPI { namespace FileSystem { class FileData; struct AllResults; struct FilesOnly; template <typename Filter_T = AllResults> class DirectoryIterator; namespace detail { class DirectoryIteratorImpl : public boost::noncopyable { WIN32_FIND_DATAW currentData; HANDLE hFind; std::wstring root; public: inline DirectoryIteratorImpl(); inline explicit DirectoryIteratorImpl(const std::wstring& pathSpec); inline void increment(); inline bool equal(const DirectoryIteratorImpl& other) const; inline const std::wstring& GetPathRoot() const; inline const WIN32_FIND_DATAW& GetCurrentFindData() const; inline ~DirectoryIteratorImpl(); }; } class FileData //Serves as a proxy to the WIN32_FIND_DATA struture inside the iterator. { boost::shared_ptr<detail::DirectoryIteratorImpl> iteratorSource; public: FileData(const boost::shared_ptr<detail::DirectoryIteratorImpl>& parent) : iteratorSource(parent) {}; DWORD GetAttributes() const { return iteratorSource->GetCurrentFindData().dwFileAttributes; }; bool IsDirectory() const { return (GetAttributes() | FILE_ATTRIBUTE_DIRECTORY) != 0; }; bool IsFile() const { return !IsDirectory(); }; bool IsArchive() const { return (GetAttributes() | FILE_ATTRIBUTE_ARCHIVE) != 0; }; bool IsReadOnly() const { return (GetAttributes() | FILE_ATTRIBUTE_READONLY) != 0; }; unsigned __int64 GetSize() const { ULARGE_INTEGER intValue; intValue.LowPart = iteratorSource->GetCurrentFindData().nFileSizeLow; intValue.HighPart = iteratorSource->GetCurrentFindData().nFileSizeHigh; return intValue.QuadPart; }; std::wstring GetFolderPath() const { return iteratorSource->GetPathRoot(); }; std::wstring GetFileName() const { return iteratorSource->GetCurrentFindData().cFileName; }; std::wstring GetFullFileName() const { return GetFolderPath() + GetFileName(); }; std::wstring GetShortFileName() const { return iteratorSource->GetCurrentFindData().cAlternateFileName; }; FILETIME GetCreationTime() const { return iteratorSource->GetCurrentFindData().ftCreationTime; }; FILETIME GetLastAccessTime() const { return iteratorSource->GetCurrentFindData().ftLastAccessTime; }; FILETIME GetLastWriteTime() const { return iteratorSource->GetCurrentFindData().ftLastWriteTime; }; }; struct AllResults : public std::unary_function<const FileData&, bool> { bool operator()(const FileData&) { return true; }; }; struct FilesOnly : public std::unary_function<const FileData&, bool> { bool operator()(const FileData& arg) { return arg.IsFile(); }; }; template <typename Filter_T> class DirectoryIterator : public boost::iterator_facade<DirectoryIterator<Filter_T>, const FileData, std::input_iterator_tag> { friend class boost::iterator_core_access; boost::shared_ptr<detail::DirectoryIteratorImpl> impl; FileData current; Filter_T filter; void increment() { do { impl->increment(); } while (! 
filter(current)); }; bool equal(const DirectoryIterator& other) const { return impl->equal(*other.impl); }; const FileData& dereference() const { return current; }; public: DirectoryIterator(Filter_T functor = Filter_T()) : impl(boost::make_shared<detail::DirectoryIteratorImpl>()), current(impl), filter(functor) { }; explicit DirectoryIterator(const std::wstring& pathSpec, Filter_T functor = Filter_T()) : impl(boost::make_shared<detail::DirectoryIteratorImpl>(pathSpec)), current(impl), filter(functor) { }; }; namespace detail { DirectoryIteratorImpl::DirectoryIteratorImpl() : hFind(INVALID_HANDLE_VALUE) { } DirectoryIteratorImpl::DirectoryIteratorImpl(const std::wstring& pathSpec) { std::wstring::const_iterator lastSlash = std::find(pathSpec.rbegin(), pathSpec.rend(), L'\\').base(); root.assign(pathSpec.begin(), lastSlash); hFind = FindFirstFileW(pathSpec.c_str(), &currentData); if (hFind == INVALID_HANDLE_VALUE) WindowsApiException::ThrowFromLastError(); while (!wcscmp(currentData.cFileName, L".") || !wcscmp(currentData.cFileName, L"..")) { increment(); } } void DirectoryIteratorImpl::increment() { BOOL success = FindNextFile(hFind, &currentData); if (success) return; DWORD error = GetLastError(); if (error == ERROR_NO_MORE_FILES) { FindClose(hFind); hFind = INVALID_HANDLE_VALUE; } else { WindowsApiException::Throw(error); } } DirectoryIteratorImpl::~DirectoryIteratorImpl() { if (hFind != INVALID_HANDLE_VALUE) FindClose(hFind); } bool DirectoryIteratorImpl::equal(const DirectoryIteratorImpl& other) const { if (this == &other) return true; return hFind == other.hFind; } const std::wstring& DirectoryIteratorImpl::GetPathRoot() const { return root; } const WIN32_FIND_DATAW& DirectoryIteratorImpl::GetCurrentFindData() const { return currentData; } } }}

    Read the article

  • Call HttpWebRequest on a thread other than the UI thread with the Task class - avoid disposing objects created in the Task scope

    - by John
    I would like to call HttpWebRequest on another thread than the UI thread, because I must make 200 requests to the server and download images. My scenario is that I make a request to the server, create an image and return the image, and I do this on another thread. I use the Task class, but it seems to call Dispose automatically on every object created in the task scope, so I return a null object from this method. public BitmapImage CreateAvatar(Uri imageUri, int sex) { if (imageUri == null) return CreateDefaultAvatar(sex); BitmapImage image = null; new Task(() => { var request = WebRequest.Create(imageUri); var response = request.GetResponse(); using (var stream = response.GetResponseStream()) { Byte[] buffer = new Byte[response.ContentLength]; int offset = 0, actuallyRead = 0; do { actuallyRead = stream.Read(buffer, offset, buffer.Length - offset); offset += actuallyRead; } while (actuallyRead > 0); image = new BitmapImage { CreateOptions = BitmapCreateOptions.None, CacheOption = BitmapCacheOption.OnLoad }; image.BeginInit(); image.StreamSource = new MemoryStream(buffer); image.EndInit(); image.Freeze(); } }).Start(); return image; } How can I avoid this? Thanks. Following Mr. Jon Skeet's suggestion, I tried this: private Stream GetImageStream(Uri imageUri) { Byte[] buffer = null; //new Task(() => //{ var request = WebRequest.Create(imageUri); var response = request.GetResponse(); using (var stream = response.GetResponseStream()) { buffer= new Byte[response.ContentLength]; int offset = 0, actuallyRead = 0; do { actuallyRead = stream.Read(buffer, offset, buffer.Length - offset); offset += actuallyRead; } while (actuallyRead > 0); } //}).Start(); return new MemoryStream(buffer); } It returns an object which is null, and then I tried this: private Stream GetImageStream(Uri imageUri) { Byte[] buffer = null; new Task(() => { var request = WebRequest.Create(imageUri); var response = request.GetResponse(); using (var stream = response.GetResponseStream()) { buffer= new Byte[response.ContentLength]; int offset = 0, actuallyRead = 0; do { actuallyRead = stream.Read(buffer, offset, buffer.Length - offset); offset += actuallyRead; } while (actuallyRead > 0); } }).Start(); return new MemoryStream(buffer); } The method above returns null.
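
    The likely issue is not disposal: the methods return before the background work has filled the variable, so the caller sees null. The work item needs to hand its result back, for example as a Task<Stream> the caller waits on, instead of filling a local that is returned immediately. As a rough cross-language illustration of the same "return a future, not the not-yet-filled object" idea, here is a C++11 sketch; downloadBytes and the example URL are placeholders for whatever HTTP client and address are actually used:

      #include <future>
      #include <string>
      #include <vector>

      // Placeholder: real code would use an actual HTTP client here.
      std::vector<char> downloadBytes(const std::string& url) {
          return std::vector<char>();
      }

      // Start the download asynchronously and give the caller a future to the
      // bytes; the caller decides when to block on it.
      std::future<std::vector<char>> getImageAsync(const std::string& url) {
          return std::async(std::launch::async,
                            [url] { return downloadBytes(url); });
      }

      // Caller:
      //   auto pending = getImageAsync("http://server/avatar");
      //   ... keep the UI responsive ...
      //   std::vector<char> bytes = pending.get();   // blocks only here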

    Read the article

  • Exception in thread "main" java.lang.OutOfMemoryError - how to find and fix it?

    - by or.nomore
    Hey, I'm trying to program a crossword creator using a given dictionary txt file and a given pattern txt file. The basic idea is to use a DFS algorithm. The problem begins when the dictionary file is v-e-r-y big (about 50000 words); then I receive: Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded I know that there is a part of my program that wastes memory, but I don't know where it is, how to find it, or how to fix it.

    Read the article

  • Is it ok to perform DB operation on UI thread?

    - by user648462
    I am using a database to persist the state of a search form. I am using the onPause method to persist the data and the onResume method to restore it. My opinion is that restoring and persisting state should be a blocking operation, so I plan to perform the database operations on the UI thread. I know this is generally discouraged, but the operations should be quick and I think that if they were done asynchronously they could lead to inconsistent UI behaviour. Any advice?

    Read the article

  • how to know when work in a thread is complete?

    - by seinkraft
    I need to create multiple threads when a button is clicked, and I've done that with this: Dim myThread As New Threading.Thread(AddressOf getFile) myThread.IsBackground = True myThread.Start() But I need to update a picture box with the downloaded file; should I set an event in the getFile function and raise it to notify that the file was downloaded, and then update the picturebox?

    Read the article

  • How long is each thread timeslice in Windows XP?

    - by IHawk
    I am trying to find out how long each thread timeslice (quantum) is in Windows, but the only information I have found is about the clock ticks being from 15 to 20ms or 20-30ms. How can I find this information? I think it may vary from OS to OS, but I am not certain. I appreciate any suggestion on this subject. Thank you.
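
    The quantum itself is not exposed by a single simple API call (it is expressed as a multiple of the clock interrupt interval and is affected by edition and foreground-boost settings), but the clock interrupt interval that the 15-20 ms figures refer to can be read programmatically. A small Win32 sketch; note it reports the clock tick length, which the quantum is a multiple of, not the quantum itself:

      #include <windows.h>
      #include <cstdio>

      int main() {
          DWORD adjustment = 0, interval100ns = 0;
          BOOL  disabled   = FALSE;
          // GetSystemTimeAdjustment reports the interval between clock interrupts
          // in 100-nanosecond units (typically around 10-15 ms on Windows XP).
          if (GetSystemTimeAdjustment(&adjustment, &interval100ns, &disabled)) {
              std::printf("clock interrupt interval: %.3f ms\n",
                          interval100ns / 10000.0);
          }
          return 0;
      }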

    Read the article
