Search Results

Search found 9705 results on 389 pages for 'boost thread'.

Page 7/389 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Using Boost.Asio to get "the whole packet"

    - by wowus
    I have a TCP client connecting to my server which is sending raw data packets. How, using Boost.Asio, can I get the "whole" packet every time (asynchronously, of course)? Assume these packets can be any size up to the full size of my memory. Basically, I want to avoid creating a statically sized buffer.
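
    One common answer (not from the post itself) is to frame each packet explicitly, for example with a fixed-size length prefix, so the receiver never needs a statically sized buffer. A minimal sketch, assuming a 32-bit length header on the wire; the Connection class and handler names are made up for illustration:

        // Read a 4-byte length prefix, size the body buffer to match, then
        // read exactly that many bytes. Endianness handling is omitted.
        #include <vector>
        #include <boost/asio.hpp>
        #include <boost/bind.hpp>
        #include <boost/cstdint.hpp>

        using boost::asio::ip::tcp;

        class Connection {
        public:
            explicit Connection(boost::asio::io_service& io) : socket_(io) {}
            tcp::socket& socket() { return socket_; }
            void start() { read_header(); }

        private:
            void read_header() {
                boost::asio::async_read(socket_,
                    boost::asio::buffer(&length_, sizeof(length_)),
                    boost::bind(&Connection::handle_header, this,
                                boost::asio::placeholders::error));
            }
            void handle_header(const boost::system::error_code& ec) {
                if (ec) return;
                body_.resize(length_);          // sized per packet, not statically
                boost::asio::async_read(socket_,
                    boost::asio::buffer(body_),
                    boost::bind(&Connection::handle_body, this,
                                boost::asio::placeholders::error));
            }
            void handle_body(const boost::system::error_code& ec) {
                if (ec) return;
                // body_ now holds exactly one whole packet; hand it off, then re-arm.
                read_header();
            }

            tcp::socket socket_;
            boost::uint32_t length_;
            std::vector<char> body_;
        };

    The sender then has to write the same prefix before each payload; if the wire format cannot change, the fallback is async_read_some into a growing buffer plus an application-level delimiter.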

    Read the article

  • Boost multi_index_container crash in release mode

    - by Zan Lynx
    I have a program that I just changed to using a boost::multi_index_container collection. After I did that and tested my code in debug mode, I was feeling pretty good about myself. However, then I compiled a release build with NDEBUG set, and the code crashed. Not immediately, but sometimes in single-threaded tests and often in multi-threaded tests. The segmentation faults happen deep inside boost insert and rotate functions related to the index updates and they are happening because a node has NULL left and right pointers. My code looks a bit like this: struct Implementation { typedef std::pair<uint32_t, uint32_t> update_pair_type; struct watch {}; struct update {}; typedef boost::multi_index_container< update_pair_type, boost::multi_index::indexed_by< boost::multi_index::ordered_unique< boost::multi_index::tag<watch>, boost::multi_index::member<update_pair_type, uint32_t, &update_pair_type::first> >, boost::multi_index::ordered_non_unique< boost::multi_index::tag<update>, boost::multi_index::member<update_pair_type, uint32_t, &update_pair_type::second> > > > update_map_type; typedef std::vector< update_pair_type > update_list_type; update_map_type update_map; update_map_type::iterator update_hint; void register_update(uint32_t watch, uint32_t update); void do_updates(uint32_t start, uint32_t end); }; void Implementation::register_update(uint32_t watch, uint32_t update) { update_pair_type new_pair( watch_offset, update_offset ); update_hint = update_map.insert(update_hint, new_pair); if( update_hint->second != update_offset ) { bool replaced _unused_ = update_map.replace(update_hint, new_pair); assert(replaced); } }

    Read the article

  • How to reduce the time of clang_complete search through boost

    - by kirill_igum
    I like using clang with vim. The one problem that I always have is that whenever I include boost, clang goes through the boost library every time I put "." after an object name. It takes 5-10 seconds. Since I don't make changes to the boost headers, is there a way to cache the search through boost? If not, is there a way to remove boost from the auto-completion search? Update (1): in response to the answer by adaszko, after :let g:clang_use_library = 1 I type the name of a variable. I press ^N. Vim starts to search through the boost tree. It auto-completes the variable. I press "." and get the following errors: Error detected while processing function ClangComplete: line 35: Traceback (most recent call last): Press ENTER or type command to continue Error detected while processing function ClangComplete: line 35: File "<string>", line 1, in <module> Press ENTER or type command to continue Error detected while processing function ClangComplete: line 35: NameError: name 'vim' is not defined Press ENTER or type command to continue Error detected while processing function ClangComplete: line 40: E121: Undefined variable: l:res Press ENTER or type command to continue Error detected while processing function ClangComplete: line 40: E15: Invalid expression: l:res Press ENTER or type command to continue Error detected while processing function ClangComplete: line 58: E121: Undefined variable: l:res Press ENTER or type command to continue Error detected while processing function ClangComplete: line 58: E15: Invalid expression: l:res Press ENTER or type command to continue ... and there is no auto-completion. Update (2): not sure if clang_complete should take care of the issue with boost; vim without plugins does search through boost. Superuser has an answer suggesting that the search through the boost dirs be commented out with set include=^\\s*#\\s*include\ \\(<boost/\\)\\@!

    Read the article

  • How can a thread signal when it's finished?

    - by Kyle
    #include <iostream> #include <boost/thread.hpp> using std::endl; using std::cout; using namespace boost; mutex running_mutex; struct dostuff { volatile bool running; dostuff() : running(true) {} void operator()(int x) { cout << "dostuff beginning " << x << endl; this_thread::sleep(posix_time::seconds(2)); cout << "dostuff is done doing stuff" << endl; mutex::scoped_lock running_lock(running_mutex); running = false; } }; bool is_running(dostuff& doer) { mutex::scoped_lock running_lock(running_mutex); return doer.running; } int main() { cout << "Begin.." << endl; dostuff doer; thread t(doer, 4); if (is_running(doer)) cout << "Cool, it's running.\n"; this_thread::sleep(posix_time::seconds(3)); if (!is_running(doer)) cout << "Cool, it's done now.\n"; else cout << "still running? why\n"; // This happens! :( return 0; } Why is the output of the above program: Begin.. Cool, it's running. dostuff beginning 4 dostuff is done doing stuff still running? why How can dostuff correctly flag when it is done? I do not want to sit around waiting for it, I just want to be notified when it's done.
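
    A likely explanation (the usual one for this symptom, though not a confirmed diagnosis of the post) is that boost::thread copies the function object it is given, so the worker flips running on its own copy while main() keeps reading the original doer. A sketch of one fix, passing the functor by boost::ref() and signalling completion through a condition variable instead of polling:

        #include <iostream>
        #include <boost/thread.hpp>

        boost::mutex running_mutex;
        boost::condition_variable running_cv;

        struct dostuff {
            bool running;
            dostuff() : running(true) {}
            void operator()(int x) {
                std::cout << "dostuff beginning " << x << std::endl;
                boost::this_thread::sleep(boost::posix_time::seconds(2));
                {
                    boost::lock_guard<boost::mutex> lock(running_mutex);
                    running = false;
                }
                running_cv.notify_one();   // tell whoever is waiting that we are done
            }
        };

        int main() {
            dostuff doer;
            boost::thread t(boost::ref(doer), 4);   // boost::ref: share doer, don't copy it
            boost::unique_lock<boost::mutex> lock(running_mutex);
            while (doer.running)                    // woken by notify_one(), no polling loop
                running_cv.wait(lock);
            std::cout << "Cool, it's done now." << std::endl;
            lock.unlock();
            t.join();
            return 0;
        }

    Alternatives along the same lines are boost::packaged_task with a boost::unique_future, or simply joining the thread at the point where the result is actually needed.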

    Read the article

  • BOOST program_options: parsing multiple argument list.

    - by Arman
    Hello, I would like to pass multiple arguments with positive or negative values. Is it possible to parse them? Currently I have the following initialization: vector<int> IDlist; namespace po = boost::program_options; po::options_description commands("Allowed options"); commands.add_options() ("IDlist",po::value< vector<int> >(&IDlist)->multitoken(), "Which IDs to trace: ex. --IDlist=0 1 200 -2") ("help","print help") ; and I would like to call: ./test_ids.x --IDlist=0 1 200 -2 but get: unknown option -2 So, the program_options assumes that I am passing -2 as another option. Can I configure the program_options in such a way that it can accept negative integer values? Thanks Arman.
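
    One workaround (an assumption on my part, not the library's documented answer) is to keep negative numbers away from the option parser entirely by accepting the list as a single comma-separated token and splitting it afterwards:

        // Accept --IDlist=0,1,200,-2 as one string, then split and convert it.
        #include <cstddef>
        #include <string>
        #include <vector>
        #include <iostream>
        #include <boost/program_options.hpp>
        #include <boost/tokenizer.hpp>
        #include <boost/lexical_cast.hpp>

        namespace po = boost::program_options;

        int main(int argc, char* argv[]) {
            std::string idlist_str;
            po::options_description commands("Allowed options");
            commands.add_options()
                ("IDlist", po::value<std::string>(&idlist_str),
                 "comma-separated IDs, e.g. --IDlist=0,1,200,-2")
                ("help", "print help");

            po::variables_map vm;
            po::store(po::parse_command_line(argc, argv, commands), vm);
            po::notify(vm);

            std::vector<int> IDlist;
            boost::char_separator<char> sep(",");
            boost::tokenizer<boost::char_separator<char> > tok(idlist_str, sep);
            for (boost::tokenizer<boost::char_separator<char> >::iterator it = tok.begin();
                 it != tok.end(); ++it)
                IDlist.push_back(boost::lexical_cast<int>(*it));   // handles "-2" fine

            for (std::size_t i = 0; i < IDlist.size(); ++i)
                std::cout << IDlist[i] << "\n";
        }

    Another route that should work is requiring the --IDlist=-2 form for each value, since a token attached with "=" is taken as the option's value rather than re-scanned as an option.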

    Read the article

  • Boost interprocess cached pools

    - by porgarmingduod
    I'm trying to figure out if my reading of the docs for boost interprocess allocators is correct. When using cached_adaptive_pool to allocate memory: typedef cached_adaptive_pool<int, managed_shared_memory::segment_manager> pool_allocator_t; pool_allocator_t pool_allocator(segment.get_segment_manager()); // Allocate an integer in the shared memory segment pool_allocator_t::pointer pool_allocator.allocate_one(); My understanding is that with multiple processes one can allocate and deallocate freely: That is, if I have a cached pool allocator for integers in one process, then it can deallocate integers allocated by similar pools in other processes (provided, of course, that they are working on the same shared memory segment). It may be a stupid question, but working with multiple processes and shared memory is hard enough, so I'd like to know 100% whether I got the basics right.

    Read the article

  • boost for each problem

    - by areslp
    std::map< std::string , std::string > matrix_int; typedef std::pair< std::string , std::string > lp_type; BOOST_FOREACH( lp_type &row, matrix_int ){ } This does not compile: error C2440: 'initializing' : cannot convert from 'std::pair<_Ty1,_Ty2>' to 'lp_type &' When I have ',' in the element type, the boost docs say I can use a typedef or predefine a var; but what should I do when I want to get a reference?
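
    For reference, a sketch of the usual fix: a map's value_type is std::pair<const Key, T>, so the reference has to bind to that exact type, and hiding it behind a typedef keeps the comma out of the macro:

        #include <map>
        #include <string>
        #include <boost/foreach.hpp>

        typedef std::map<std::string, std::string> map_type;
        typedef map_type::value_type row_type;   // std::pair<const std::string, std::string>

        int main() {
            map_type matrix_int;
            matrix_int["a"] = "1";
            BOOST_FOREACH(row_type& row, matrix_int) {
                row.second += "!";                // the key stays const, the value is mutable
            }
        }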

    Read the article

  • Implementing Qt File Dialog with a Different File System Library (boost)

    - by knight
    Hi, I am writing an application which requires me to use other file system and file engine handlers, not Qt's default ones. Basically what I want to be able to do is to use Qt's file dialog but have an underlying file system handler of mine (for example, built using the boost filesystem library) handling all file and directory operations within that dialog. I have already written a custom file engine which handles some of the operations, but I am now stuck with Qt's file system model and the file system watcher engine, as I need to have the signals transmitted for this custom file engine. Seems like I have a daunting task ahead. Am I heading in the right direction? Is there any other simpler way that I could implement this? Can anyone give me an idea of how to proceed? I was thinking of looking into proxy models but not sure if that would work. Thanks in advance for any help.

    Read the article

  • Boost::asio bug in MSVC10 - Failing BOOST_WORKAROUND in ~buffer_debug_check() in buffer.hpp

    - by shaz
    A straight compilation of the example at http://www.boost.org/doc/libs/1_43_0/doc/html/boost_asio/tutorial/tutdaytime3/src.html results in a runtime null pointer exception. The stack trace points to the buffer_debug_check destructor which contains this comment: // MSVC's string iterator checking may crash in a std::string::iterator // object's destructor when the iterator points to an already-destroyed // std::string object, unless the iterator is cleared first. The test #if BOOST_WORKAROUND(BOOST_MSVC, >= 1400) succeeds in MSVC10 but results in a null pointer exception in c:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\include\xutility line 123 _Iterator_base12& operator=(const _Iterator_base12& _Right) { // assign an iterator if (_Myproxy != _Right._Myproxy) _Adopt(_Right._Myproxy->_Mycont); return (*this); } _Right._Myproxy is NULL

    Read the article

  • boost::python string-convertible properties

    - by Checkers
    I have a C++ class, which has the following methods: class Bar { ... const Foo& getFoo() const; void setFoo(const Foo&); }; where class Foo is convertible to std::string (it has an implicit constructor from std::string and an std::string cast operator). I define a Boost.Python wrapper class, which, among other things, defines a property based on the previous two functions: class_<Bar>("Bar") ... .add_property( "foo", make_function( &Bar::getFoo, return_value_policy<return_by_value>()), &Bar::setFoo) ... I also mark the class as convertible to/from std::string. implicitly_convertible<std::string, Foo>(); implicitly_convertible<Foo, std::string>(); But at runtime I still get a conversion error trying to access this property: TypeError: No to_python (by-value) converter found for C++ type: Foo How can I achieve the conversion without too much wrapper-function boilerplate? (I already have all the conversion functions in class Foo, so duplication is undesirable.)
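
    implicitly_convertible<> only registers from-Python conversions, so the Foo-to-Python direction still needs a to_python converter. A sketch of the usual pattern (Foo below is a minimal stand-in for the real class in the post; only the std::string conversion matters):

        #include <string>
        #include <boost/python.hpp>

        struct Foo {
            Foo(const std::string& s) : value(s) {}
            operator std::string() const { return value; }
            std::string value;
        };

        // Converts a C++ Foo to a Python string by going through std::string.
        struct foo_to_python_str {
            static PyObject* convert(const Foo& f) {
                return boost::python::incref(
                    boost::python::object(static_cast<std::string>(f)).ptr());
            }
        };

        BOOST_PYTHON_MODULE(example) {
            boost::python::to_python_converter<Foo, foo_to_python_str>();
            boost::python::implicitly_convertible<std::string, Foo>();
            // ... class_<Bar> registration with the "foo" property as in the post ...
        }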

    Read the article

  • Building MySQL with boost on windows

    - by user13177919
    As you've probably heard already, MySQL needs boost to build. However, in the good ol' MySQL tradition, the above link only gives you the instructions on how to build it on Linux, and completely ignores the fact that there are other OSes too that people develop on. To fill in that gap, I've compiled a small step-by-step guide on how to do it on Windows. Note that I always, as a principle, build out-of-source. The typical setup I have is: bzr clone lp:~mysql/mysql-server/5.7 mysql-trunk; cd mysql-trunk; mkdir bld; cd bld; cmake -DWITH_DEBUG=1 -DMYSQL_PROJECT_NAME=mysql-trunk ..; devenv /build debug mysql-trunk.sln. This has been tested to work on a 32-bit compile using VS2013 on a Windows 7 64-bit build. Note that you'll need other things too (bison, eventually openssl, etc.) that I will assume you already have set up. Steps: Download Boost 1.55.0. It's the *only* version that is known to work currently. Extract boost_1_55_0/ from the zip to c:\boost\boost_1_55_0. Go to Control Panel/System/Environment variables and set WITH_BOOST=C:\boost\boost_1_55_0 in User variables. Make sure you restart your open command line terminal windows after this! If you're upgrading from a non-boost build, remove your bld/ directory and create a new one. Run cmake as you'd typically do. You should get: -- Local boost dir C:/boost/boost_1_55_0 -- Local boost zip LOCAL_BOOST_ZIP-NOTFOUND -- BOOST_VERSION_NUMBER is #define BOOST_VERSION 105500 -- BOOST_INCLUDE_DIR C:/boost/boost_1_55_0 Build as normal (devenv /build debug ...). It should work.

    Read the article

  • Java thread dump where main thread has no call stack? (jsvc)

    - by dwhsix
    We have a java process running as a daemon (under jsvc). Every several days it just stops doing any work; output to the logfile stops (it is pretty verbose, on 5-minute intervals) and it consumes no CPU or IO. There are no exceptions logged in the logfile nor in syserr or sysout. The last log statement is just prior to a db commit being done, but there is no open connection on the db server (MySQL) and reviewing the code, there should always be additional log output after that, even if it had encountered an exception that was going to bubble up. The most curious thing I find is that in the thread dump (included below), there's no thread in our code at all, and the main thread seems to have no context whatsoever: "main" prio=10 tid=0x0000000000614000 nid=0x445d runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE As noted earlier, this is a daemon process running using jsvc, but I don't know if that has anything to do with it (I can restructure the code to also allow running it directly, to test). Any suggestions on what might be happening here? Thanks... dwh Full thread dump: Full thread dump Java HotSpot(TM) 64-Bit Server VM (14.2-b01 mixed mode): "MySQL Statement Cancellation Timer" daemon prio=10 tid=0x00002aaaf81b8800 nid=0x447b in Object.wait() [0x00002aaaf6a22000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002aaab5556d50> (a java.util.TaskQueue) at java.lang.Object.wait(Object.java:485) at java.util.TimerThread.mainLoop(Timer.java:483) - locked <0x00002aaab5556d50> (a java.util.TaskQueue) at java.util.TimerThread.run(Timer.java:462) "Low Memory Detector" daemon prio=10 tid=0x00000000006a4000 nid=0x4479 runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE "CompilerThread1" daemon prio=10 tid=0x00000000006a1000 nid=0x4477 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "CompilerThread0" daemon prio=10 tid=0x000000000069d000 nid=0x4476 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "Signal Dispatcher" daemon prio=10 tid=0x000000000069b000 nid=0x4465 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "Finalizer" daemon prio=10 tid=0x0000000000678800 nid=0x4464 in Object.wait() [0x00002aaaf61d6000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002aaab54a1cb8> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118) - locked <0x00002aaab54a1cb8> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134) at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159) "Reference Handler" daemon prio=10 tid=0x0000000000676800 nid=0x4463 in Object.wait() [0x00002aaaf60d5000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002aaab54a1cf0> (a java.lang.ref.Reference$Lock) at java.lang.Object.wait(Object.java:485) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116) - locked <0x00002aaab54a1cf0> (a java.lang.ref.Reference$Lock) "main" prio=10 tid=0x0000000000614000 nid=0x445d runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE "VM Thread" prio=10 tid=0x0000000000670000 nid=0x4462 runnable "GC task thread#0 (ParallelGC)" prio=10 tid=0x000000000061e000 nid=0x445e runnable "GC task thread#1 (ParallelGC)" prio=10 tid=0x0000000000620000 nid=0x445f runnable "GC task thread#2 (ParallelGC)" prio=10 
tid=0x0000000000622000 nid=0x4460 runnable "GC task thread#3 (ParallelGC)" prio=10 tid=0x0000000000623800 nid=0x4461 runnable "VM Periodic Task Thread" prio=10 tid=0x00000000006a6800 nid=0x447a waiting on condition JNI global references: 797 Heap PSYoungGen total 162944K, used 48388K [0x00002aaadff40000, 0x00002aaaf2ab0000, 0x00002aaaf5490000) eden space 102784K, 47% used [0x00002aaadff40000,0x00002aaae2e81170,0x00002aaae63a0000) from space 60160K, 0% used [0x00002aaaeb850000,0x00002aaaeb850000,0x00002aaaef310000) to space 86720K, 0% used [0x00002aaae63a0000,0x00002aaae63a0000,0x00002aaaeb850000) PSOldGen total 699072K, used 699072K [0x00002aaab5490000, 0x00002aaadff40000, 0x00002aaadff40000) object space 699072K, 100% used [0x00002aaab5490000,0x00002aaadff40000,0x00002aaadff40000) PSPermGen total 21248K, used 9252K [0x00002aaab0090000, 0x00002aaab1550000, 0x00002aaab5490000) object space 21248K, 43% used [0x00002aaab0090000,0x00002aaab09993e8,0x00002aaab1550000)

    Read the article

  • trouble with boost::filesystem::wrecursive_directory_iterator

    - by Dogmatixed
    I'm trying to write a program to help me manage my iTunes library, including removing duplicates and cataloging certain things. At this point I'm still just trying to get it to walk through all the folders, and have run into a problem: I have a small amount of Japanese music, where the artist and/or album is written in Japanese characters. Because of how iTunes arranges things in its library the directories contain these characters. "shouldn't be a problem, though." I thought, because the boost::filesystem library has a wide character version of its recursive iterator. but when I actually try to use it, it seems to completely stop when it hits the first Japanese char. complete stop as in it doesn't finish printing the line, no carriage return or anything. now, I'm still pretty new to programming, so I'm assuming it's my mistake, anyone know why this is happening? here's what I think is the relevant code: fs::wrecursive_directory_iterator end_it; int i; try { for(fs::wrecursive_directory_iterator rec_it(full_path); rec_it != end_it; ++rec_it) { for(i = 0; i < rec_it.level(); i++) { out << "\t"; } out << rec_it->string() << std::endl; } } catch(std::exception e) { out << "something went wrong: " << e.what(); } and from my output file, minus some of the path: /Test Libs/Combine /Test Libs/Lib1 /Test Libs/Lib1/02 Too Long.m4a /Test Libs/Lib1/03 Like a Hitman, Like a Dancer.mp3 /Test Libs/Lib1/A Certain Ratio /Test Libs/Lib1/A Certain Ratio/Beyond Punk! /Test Libs/Lib1/A Certain Ratio/Unknown Album /Test Libs/Lib1/A Certain Ratio/Unknown Album/Do The Du.mp3 /Test Libs/Lib1/A Certain Ratio/Unknown Album/Shack Up.mp3 /Test Libs/Lib1/ finally, what I expect: /Test Libs/Combine /Test Libs/Lib1 /Test Libs/Lib1/02 Too Long.m4a /Test Libs/Lib1/03 Like a Hitman, Like a Dancer.mp3 /Test Libs/Lib1/A Certain Ratio /Test Libs/Lib1/A Certain Ratio/Beyond Punk! /Test Libs/Lib1/A Certain Ratio/Unknown Album /Test Libs/Lib1/A Certain Ratio/Unknown Album/Do The Du.mp3 /Test Libs/Lib1/A Certain Ratio/Unknown Album/Shack Up.mp3 /Test Libs/Lib1/??? /Test Libs/Lib1/Bring it on /Test Libs/Lib1/04 Bring it on.mp3 any thoughts? Thanks.

    Read the article

  • linking to boost regex in gcc

    - by rboorgapally
    i am trying to compile my program which uses regex on linux. I built the boost library in the libs/regex/build by typing make -fgcc.mak which created a directory gcc which contains the following four files boost_regex-gcc-1_35 boost_regex-gcc-d-1_35 libboost_regex-gcc-1_35.a libboost_regex-gcc-d-1_35.a Now I want to use regex from my program which is in some arbitrary directory. I #included boost/regex.hpp I got the error which stated that regex.hpp is not found. Then I gave the -I option in the g++ compiler. I didn't get that error. But I get the following error undefined reference to `boost::re_detail::perl_matcher<__gnu_cxx::__normal_iterator<char const*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<boost::sub_match<__gnu_cxx::__normal_iterator<char const*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >, boost::regex_traits<char, boost::cpp_regex_traits<char> > >::construct_init(boost::basic_regex<char, boost::regex_traits<char, boost::cpp_regex_traits<char> > > const&, boost::regex_constants::_match_flags)' I googled and found that I need to somehow link one of the above 4 libraries to my program. How can I do it. Which one should I link and why?
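
    For what it's worth, a typical link line looks like the sketch below (paths are illustrative, not taken from the post). -I points at the boost headers, -L at the directory the gcc makefile created, and -lboost_regex-gcc-1_35 picks up libboost_regex-gcc-1_35.a; the "-d" variants are the debug builds.

        g++ -I /path/to/boost_1_35_0 myprog.cpp \
            -L /path/to/boost_1_35_0/libs/regex/build/gcc \
            -lboost_regex-gcc-1_35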

    Read the article

  • How to use boost normal distribution classes?

    - by David Alfonso
    Hi all, I'm trying to use boost::normal_distribution in order to generate a normal distribution with mean 0 and sigma 1. The following code uses the boost normal distribution classes. Am I using them correctly? #include <boost/random.hpp> #include <boost/random/normal_distribution.hpp> int main() { boost::mt19937 rng; // I don't seed it on purpose (it's not relevant) boost::normal_distribution<> nd(0.0, 1.0); boost::variate_generator<boost::mt19937&, boost::normal_distribution<> > var_nor(rng, nd); int i = 0; for (; i < 10; ++i) { double d = var_nor(); std::cout << d << std::endl; } } The result on my machine is: 0.213436 -0.49558 1.57538 -1.0592 1.83927 1.88577 0.604675 -0.365983 -0.578264 -0.634376 As you can see, not all of the values are between -1 and 1. Thank you all in advance!
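
    The usage itself looks right; with mean 0 and sigma 1 the draws are unbounded, and only about 68% of them are expected to land in [-1, 1]. A small sketch (illustrative only) that checks that fraction over many samples:

        #include <iostream>
        #include <boost/random.hpp>
        #include <boost/random/normal_distribution.hpp>

        int main() {
            boost::mt19937 rng(42u);                      // seeded for repeatability
            boost::normal_distribution<> nd(0.0, 1.0);
            boost::variate_generator<boost::mt19937&, boost::normal_distribution<> >
                var_nor(rng, nd);

            const int n = 100000;
            int inside = 0;
            for (int i = 0; i < n; ++i) {
                double d = var_nor();
                if (d >= -1.0 && d <= 1.0) ++inside;
            }
            // Prints roughly 0.68, the one-sigma mass of a standard normal.
            std::cout << "fraction within one sigma: "
                      << static_cast<double>(inside) / n << std::endl;
        }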

    Read the article

  • [C++][Boost] Acceptor and Problems with Async_Accept

    - by bobber205
    See code. :P I am able to receive new connections before async_accept() has been called. My delegate function is also never called so I can't manage any connections I receive, rendering the new connections useless. ;) So here's my question. Is there a way to prevent the Boost ASIO acceptor from getting new connections on its own and only getting connections from async_accept()? Thanks! bool AlexSocket::StartListening(int port) { bool didStart = false; if (!this->listening) { //try to listen acceptor = new tcp::acceptor(this->myService); boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), port); acceptor->open(endpoint.protocol()); acceptor->set_option(boost::asio::ip::tcp::acceptor::reuse_address(true)); acceptor->bind(endpoint); //CAN GET NEW CONNECTIONS HERE (before async_accept is called) acceptor->listen(); didStart = true; //probably change? tcp::socket* tempNewSocket = new tcp::socket(this->myService); acceptor->async_accept(*tempNewSocket, boost::bind(&AlexSocket::NewConnection, this, tempNewSocket, boost::asio::placeholders::error) ); } else //already started! return false; this->listening = didStart; return didStart; } //this function is never called :( void AlexSocket::NewConnection(tcp::socket* s, const boost::system::error_code& error) { cout << "New Connection Made" << endl; //Start new accept async tcp::socket* tempNewSocket = new tcp::socket(this->myService); acceptor->async_accept(*tempNewSocket, boost::bind(&AlexSocket::NewConnection, this, tempNewSocket, boost::asio::placeholders::error) ); }

    Read the article

  • Why don't C++ Game Developers use the boost library?

    - by James
    So if you spend any time viewing / answering questions over on Stack Overflow under the C++ tag, you will quickly notice that just about everybody uses the boost library; some would even say that if you aren't using it, you're not writing "real" C++ (I disagree, but that's not the point). But then there is the game industry, which is well known for using C++ and not using boost. I can't help but wonder why that is. I don't care to use boost because I write games (now) as a hobby, and part of that hobby is implementing what I need when I am able to and using off-the-shelf libraries when I can't. But that is just me. Why don't game developers, in general, use the boost library? Is it performance or memory concerns? Style? Something else? I was about to ask this on Stack Overflow, but I figured the question is better asked here. EDIT: I realize I can't speak for all game programmers and I haven't seen all game projects, so I can't say game developers never use boost; this is simply my experience. Allow me to edit my question to also ask, if you do use boost, why did you choose to use it?

    Read the article

  • Silverlight 4 RC threading - can a new Thread return the UI Thread?

    - by Darko Z
    Hi all, Let's say I have a situation in Silverlight where there is a background thread (guaranteed to NOT be the UI thread) doing some work and it needs to create a new thread. Something like this: //running in a background thread Thread t = new Thread(new ThreadStart(delegate {})); t.Start(); Let's also say that the UI thread at this particular time is just hanging around doing nothing. Keeping in mind that I am not that knowledgeable about the Silverlight threading model, is there any danger of the new Thread() call giving me the UI thread? The motivation or what I am trying to achieve is not important - I do not want modification to the existing code. I just want to know if there is a possibility of getting the UI thread back unexpectedly. Cheers

    Read the article

  • Unlocking a mutex from a different thread (C++)

    - by dan
    I'm using the C++ boost::thread library, which in my case means I'm using pthreads. Officially, a mutex must be unlocked from the same thread which locks it, and I want the effect of being able to lock in one thread and then unlock in another. There are many ways to accomplish this. One possibility would be to write a new mutex class which allows this behavior. For example: class inter_thread_mutex{ bool locked; boost::mutex mx; boost::condition_variable cv; public: void lock(){ boost::unique_lock<boost::mutex> lck(mx); while(locked) cv.wait(lck); locked=true; } void unlock(){ { boost::lock_guard<boost::mutex> lck(mx); if(!locked) error(); locked=false; } cv.notify_one(); } // bool try_lock(); void error(); etc. } I should point out that the above code doesn't guarantee FIFO access, since if one thread calls lock() while another calls unlock(), this first thread may acquire the lock ahead of other threads which are waiting. (Come to think of it, the boost::thread documentation doesn't appear to make any explicit scheduling guarantees for either mutexes or condition variables). But let's just ignore that (and any other bugs) for now. My question is, if I decide to go this route, would I be able to use such a mutex as a model for the boost Lockable concept. For example, would anything go wrong if I use a boost::unique_lock< inter_thread_mutex for RAII-style access, and then pass this lock to boost::condition_variable_any.wait(), etc. On one hand I don't see why not. On the other hand, "I don't see why not" is usually a very bad way of determining whether something will work. The reason I ask is that if it turns out that I have to write wrapper classes for RAII locks and condition variables and whatever else, then I'd rather just find some other way to achieve the same effect.

    Read the article

  • Thread sleep and thread join.

    - by Dhruv Gairola
    hi guys, if i put a thread to sleep in a loop, netbeans gives me a caution saying Invoking Thread.sleep in loop can cause performance problems. However, if i were to replace the sleep with join, no such caution is given. Both versions compile and work fine tho. My code is below (check the last few lines for "Thread.sleep() vs t.join()"). public class Test{ //Display a message, preceded by the name of the current thread static void threadMessage(String message) { String threadName = Thread.currentThread().getName(); System.out.format("%s: %s%n", threadName, message); } private static class MessageLoop implements Runnable { public void run() { String importantInfo[] = { "Mares eat oats", "Does eat oats", "Little lambs eat ivy", "A kid will eat ivy too" }; try { for (int i = 0; i < importantInfo.length; i++) { //Pause for 4 seconds Thread.sleep(4000); //Print a message threadMessage(importantInfo[i]); } } catch (InterruptedException e) { threadMessage("I wasn't done!"); } } } public static void main(String args[]) throws InterruptedException { //Delay, in milliseconds before we interrupt MessageLoop //thread (default one hour). long patience = 1000 * 60 * 60; //If command line argument present, gives patience in seconds. if (args.length > 0) { try { patience = Long.parseLong(args[0]) * 1000; } catch (NumberFormatException e) { System.err.println("Argument must be an integer."); System.exit(1); } } threadMessage("Starting MessageLoop thread"); long startTime = System.currentTimeMillis(); Thread t = new Thread(new MessageLoop()); t.start(); threadMessage("Waiting for MessageLoop thread to finish"); //loop until MessageLoop thread exits while (t.isAlive()) { threadMessage("Still waiting..."); //Wait maximum of 1 second for MessageLoop thread to //finish. /*******LOOK HERE**********************/ Thread.sleep(1000);//issues caution unlike t.join(1000) /**************************************/ if (((System.currentTimeMillis() - startTime) > patience) && t.isAlive()) { threadMessage("Tired of waiting!"); t.interrupt(); //Shouldn't be long now -- wait indefinitely t.join(); } } threadMessage("Finally!"); } } As i understand it, join waits for the other thread to complete, but in this case, arent both sleep and join doing the same thing? Then why does netbeans throw the caution?

    Read the article

  • Erasing and modifying elements in Boost MultiIndex Container

    - by Sarah
    I'm trying to use a Boost MultiIndex container in my simulation. My knowledge of C++ syntax is very weak, and I'm concerned I'm not properly removing an element from the container or deleting it from memory. I also need to modify elements, and I was hoping to confirm the syntax and basic philosophy here too. // main.cpp ... #include <boost/multi_index_container.hpp> #include <boost/multi_index/hashed_index.hpp> #include <boost/multi_index/member.hpp> #include <boost/multi_index/ordered_index.hpp> #include <boost/multi_index/mem_fun.hpp> #include <boost/tokenizer.hpp> #include <boost/shared_ptr.hpp> ... #include "Host.h" // class Host, all members private, using get fxns to access using boost::multi_index_container; using namespace boost::multi_index; typedef multi_index_container< boost::shared_ptr< Host >, indexed_by< hashed_unique< const_mem_fun<Host,int,&Host::getID> > // ordered_non_unique< BOOST_MULTI_INDEX_MEM_FUN(Host,int,&Host::getAge) > > // end indexed_by > HostContainer; typedef HostContainer::nth_index<0>::type HostsByID; int main() { ... HostContainer allHosts; Host * newHostPtr; newHostPtr = new Host( t, DOB, idCtr, 0, currentEvents ); allHosts.insert( boost::shared_ptr<Host>(newHostPtr) ); // allHosts gets filled up int randomHostID = 4; int newAge = 50; modifyHost( randomHostID, allHosts, newAge ); killHost( randomHostID, allHosts ); } void killHost( int id, HostContainer & hmap ){ HostsByID::iterator it = hmap.find( id ); cout << "Found host id " << (*it)->getID() << "Attempting to kill. hmap.size() before is " << hmap.size() << " and "; hmap.erase( it ); // Is this really erasing (freeing from mem) the underlying Host object? cout << hmap.size() << " after." << endl; } void modifyHost( int id, HostContainer & hmap, int newAge ){ HostsByID::iterator it = hmap.find( id ); (*it) -> setAge( newAge ); // Not actually the "modify" function for MultiIndex... } My questions are In the MultiIndex container allHosts of shared_ptrs to Host objects, is calling allHosts.erase( it ) on an iterator to the object's shared_ptr enough to delete the object permanently and free it from memory? It appears to be removing the shared_ptr from the container. The allhosts container currently has one functioning index that relies on the host's ID. If I introduce an ordered second index that calls on a member function (Host::getAge()), where the age changes over the course of the simulation, is the index always going to be updated when I refer to it? What is the difference between using the MultiIndex's modify to modify the age of the underlying object versus the approach I show above? I'm vaguely confused about what is assumed/required to be constant in MultiIndex. Thanks in advance. Update Here's my attempt to get the modify syntax working, based on what I see in a related Boost example. struct update_age { update_age():(){} // have no idea what this really does... elicits error void operator() (boost::shared_ptr<Host> ptr) { ptr->incrementAge(); // incrementAge() is a member function of class Host } }; and then in modifyHost, I'd have hmap.modify(it,update_age). Even if by some miracle this turns out to be right, I'd love some kind of explanation of what's going on.
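
    As a sketch of the modify() syntax (update_age below is illustrative and assumes Host::incrementAge() exists, as in the post), the functor only needs an operator() taking a reference to the container's value_type; no special constructor is required:

        // Reuses the post's Host class and boost::shared_ptr value type.
        struct update_age {
            void operator()(boost::shared_ptr<Host>& ptr) const {
                ptr->incrementAge();   // mutate the pointee; the ID key is untouched
            }
        };

        // inside modifyHost():
        //     hmap.modify(it, update_age());

    On the other two points, hedged rather than definitive: erase() removes the shared_ptr from the container, and the Host object itself is freed only when the last shared_ptr referring to it goes away; and once an ordered index on getAge() is added, age changes need to go through modify() (or replace()) so that index has a chance to re-sort itself.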

    Read the article

  • Modelling boost::Lockable with semaphore rather than mutex (previously titled: Unlocking a mutex from a different thread)

    - by dan
    I'm using the C++ boost::thread library, which in my case means I'm using pthreads. Officially, a mutex must be unlocked from the same thread which locks it, and I want the effect of being able to lock in one thread and then unlock in another. There are many ways to accomplish this. One possibility would be to write a new mutex class which allows this behavior. For example: class inter_thread_mutex{ bool locked; boost::mutex mx; boost::condition_variable cv; public: void lock(){ boost::unique_lock<boost::mutex> lck(mx); while(locked) cv.wait(lck); locked=true; } void unlock(){ { boost::lock_guard<boost::mutex> lck(mx); if(!locked) error(); locked=false; } cv.notify_one(); } // bool try_lock(); void error(); etc. } I should point out that the above code doesn't guarantee FIFO access, since if one thread calls lock() while another calls unlock(), this first thread may acquire the lock ahead of other threads which are waiting. (Come to think of it, the boost::thread documentation doesn't appear to make any explicit scheduling guarantees for either mutexes or condition variables). But let's just ignore that (and any other bugs) for now. My question is, if I decide to go this route, would I be able to use such a mutex as a model for the boost Lockable concept? For example, would anything go wrong if I use a boost::unique_lock<inter_thread_mutex> for RAII-style access, and then pass this lock to boost::condition_variable_any.wait(), etc. On one hand I don't see why not. On the other hand, "I don't see why not" is usually a very bad way of determining whether something will work. The reason I ask is that if it turns out that I have to write wrapper classes for RAII locks and condition variables and whatever else, then I'd rather just find some other way to achieve the same effect. EDIT: The kind of behavior I want is basically as follows. I have an object, and it needs to be locked whenever it is modified. I want to lock the object from one thread, and do some work on it. Then I want to keep the object locked while I tell another worker thread to complete the work. So the first thread can go on and do something else while the worker thread finishes up. When the worker thread gets done, it unlocks the mutex. And I want the transition to be seamless so nobody else can get the mutex lock in between when thread 1 starts the work and thread 2 completes it. Something like inter_thread_mutex seems like it would work, and it would also allow the program to interact with it as if it were an ordinary mutex. So it seems like a clean solution. If there's a better solution, I'd be happy to hear that also. EDIT AGAIN: The reason I need locks to begin with is that there are multiple master threads, and the locks are there to prevent them from accessing shared objects concurrently in invalid ways. So the code already uses loop-level lock-free sequencing of operations at the master thread level. Also, in the original implementation, there were no worker threads, and the mutexes were ordinary kosher mutexes. The inter_thread_thingy came up as an optimization, primarily to improve response time. In many cases, it was sufficient to guarantee that the "first part" of operation A occurs before the "first part" of operation B. As a dumb example, say I punch object 1 and give it a black eye. Then I tell object 1 to change its internal structure to reflect all the tissue damage. I don't want to wait around for the tissue damage before I move on to punch object 2.
However, I do want the tissue damage to occur as part of the same operation; for example, in the interim, I don't want any other thread to reconfigure the object in such a way that would make tissue damage an invalid operation. (yes, this example is imperfect in many ways, and no I'm not working on a game) So we made the change to a model where ownership of an object can be passed to a worker thread to complete an operation, and it actually works quite nicely; each master thread is able to get a lot more operations done because it doesn't need to wait for them all to complete. And, since the event sequencing at the master thread level is still loop-based, it is easy to write high-level master-thread operations, as they can be based on the assumption that an operation is complete when the corresponding function call returns. Finally, I thought it would be nice to use inter_thread mutex/semaphore thingies using RAII with boost locks to encapsulate the necessary synchronization that is required to make the whole thing work.

    Read the article

  • How to use boost::transform_iterator to iterate over modified std::map values?

    - by Frank
    I have an std::map, and I would like to define an iterator that returns modified values. Typically, a std::map<int,double>::iterator iterates over std::pair<int,double>, and I would like the same behavior, just the double value is multiplied by a constant. I tried it with boost::transform_iterator, but it doesn't compile: #include <map> #include <boost/iterator/transform_iterator.hpp> #include <boost/functional.hpp> typedef std::map<int,double> Map; Map m; m[100] = 2.24; typedef boost::binder2nd< std::multiplies<double> > Function; typedef boost::transform_iterator<Function, Map::value_type*> MultiplyIter; MultiplyIter begin = boost::make_transform_iterator(m.begin(), Function(std::multiplies<double>(), 4)); // now want to similarly create an end iterator // and then iterate over the modified map The error is: error: conversion from 'boost ::transform_iterator< boost::binder2nd<multiplies<double> >, gen_map<int, double>::iterator , boost::use_default, boost::use_default >' to non-scalar type 'boost::transform_iterator< boost::binder2nd<multiplies<double> >, pair<const int, double> * , boost::use_default, boost::use_default >' requested What is gen_map and do I really need it? I adapted the transform_iterator tutorial code from here to write this code ...

    Read the article
