Search Results

Search found 9705 results on 389 pages for 'boost thread'.

Page 22/389 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Boost.Test: Looking for a working non-Trivial Test Suite Example / Tutorial

    - by Robert S. Barnes
    The Boost.Test documentation and examples don't really seem to contain any non-trivial examples, and so far the two tutorials I've found here and here, while helpful, are both fairly basic. I would like to have a master test suite for the entire project, while maintaining per-module suites of unit tests and fixtures that can be run independently. I'll also be using a mock server to test various networking edge cases. I'm on Ubuntu 8.04, but I'll take any example, Linux or Windows, since I'm writing my own makefiles anyway.

    Read the article
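
    One possible layout, sketched here rather than taken from the article: a single master module that registers per-module suites, with a fixture per suite. The suite names, the fixture, and the mock-server comments below are made up for illustration; this is the single-file, header-only variant of the framework.

        // master test module -- minimal single-translation-unit sketch
        #define BOOST_TEST_MODULE ProjectMasterSuite
        #include <boost/test/included/unit_test.hpp>

        // Hypothetical per-module fixture: set up and tear down a mock server.
        struct NetFixture {
            NetFixture() : port(9999) { /* start mock server here */ }
            ~NetFixture()             { /* stop mock server here  */ }
            int port;
        };

        // Each module gets its own suite; all suites hang off the master suite.
        BOOST_FIXTURE_TEST_SUITE(networking_suite, NetFixture)

        BOOST_AUTO_TEST_CASE(connect_succeeds)
        {
            BOOST_CHECK_EQUAL(port, 9999);   // fixture members are visible here
        }

        BOOST_AUTO_TEST_SUITE_END()

        BOOST_AUTO_TEST_SUITE(parser_suite)

        BOOST_AUTO_TEST_CASE(handles_empty_input)
        {
            BOOST_CHECK(true);
        }

        BOOST_AUTO_TEST_SUITE_END()

    A single suite can then be selected at run time, e.g. ./tests --run_test=networking_suite; splitting the suites into separate .cpp files linked against the Unit Test Framework library gives the per-module layout the question asks about.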

  • Pretty printing boost::unordered_map on gdb

    - by scooterman
    Hi all, recently I've started to use the excellent boost::unordered_map on my system, but hit one drawback: I can't figure out how to inspect its contents. Printing it in gdb gives me a table_ and a buckets_, but I haven't found where the items are. Does anyone have a clue about this?

    Read the article
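
    Short of writing a full gdb pretty-printer, one workaround (a sketch, not from the article; the key/value types are placeholders) is to compile a small dump helper into the binary and call it from the debugger:

        #include <boost/unordered_map.hpp>
        #include <cstdio>
        #include <string>

        // Debug helper: adjust the key/value types to match the map being inspected.
        void dump_map(const boost::unordered_map<std::string, int>& m)
        {
            boost::unordered_map<std::string, int>::const_iterator it = m.begin();
            for (; it != m.end(); ++it)
                std::printf("%s -> %d\n", it->first.c_str(), it->second);
        }

    From a breakpoint, (gdb) call dump_map(my_map) then prints every element, which sidesteps decoding table_ and buckets_ by hand.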

  • Why isn't the boost::shared_ptr -> operator inlined?

    - by Alan
    Since boost::shared_ptr's -> operator could be called very frequently and simply returns a pointer, isn't it a good candidate for being inlined?

        T * operator-> () const // never throws
        {
            BOOST_ASSERT(px != 0);
            return px;
        }

    Would a good compiler automatically inline this anyway? Should I lose any sleep over this? :-)

    Read the article

  • Boost multi_index ordered_unique Median Value

    - by Autopulated
    I would like to quickly retrieve the median value from a boost multi_index container with an ordered_unique index, however the index iterators aren't random access (I don't understand why they can't be, though this is consistent with std::set...). Is there a faster/neater way to do this other than incrementing an iterator container.size() / 2 times?

    Read the article
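
    With a plain ordered index, std::advance(it, container.size() / 2) is equivalent to the manual loop and still linear. Later Boost releases (1.59 and up, if memory serves) add ranked indices, which keep the same ordering but expose positional lookup in logarithmic time; a sketch, not taken from the article, with a made-up element type:

        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/ranked_index.hpp>
        #include <boost/multi_index/identity.hpp>
        #include <iostream>

        namespace bmi = boost::multi_index;

        typedef bmi::multi_index_container<
            double,
            bmi::indexed_by<
                bmi::ranked_unique<bmi::identity<double> >
            >
        > Samples;

        int main()
        {
            Samples s;
            s.insert(3.0); s.insert(1.0); s.insert(7.0); s.insert(5.0); s.insert(9.0);

            // nth() is O(log n), so no size()/2 increments are needed.
            Samples::const_iterator mid = s.nth(s.size() / 2);
            std::cout << "median = " << *mid << std::endl;   // prints 5
            return 0;
        }

    If upgrading Boost is not an option, the std::advance approach over the ordered index appears to be the practical fallback.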

  • Direct boost serialization to char array

    - by scooterman
    Hi all, the Boost serialization docs state that the way to serialize/deserialize items is to use a binary/text archive with a stream over the underlying structure. This works fine if I want to use the serialized data as an std::string, but my intention is to write it directly to a char* buffer. How can I achieve this without creating a temporary string?

    Read the article
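
    One commonly suggested route (sketched here under the assumption that a binary archive is acceptable; the Item type and function name are invented for illustration) is to wrap the caller's buffer in a Boost.Iostreams array sink, so the archive writes straight into the char array with no intermediate std::string:

        #include <boost/archive/binary_oarchive.hpp>
        #include <boost/iostreams/device/array.hpp>
        #include <boost/iostreams/stream.hpp>
        #include <cstddef>

        // Hypothetical payload type used only for illustration.
        struct Item {
            int id;
            double value;
            template <class Archive>
            void serialize(Archive& ar, const unsigned int /*version*/) {
                ar & id;
                ar & value;
            }
        };

        std::size_t serialize_to_buffer(const Item& item, char* buf, std::size_t len)
        {
            namespace io = boost::iostreams;
            io::stream<io::basic_array_sink<char> > os(buf, buf + len);
            {
                boost::archive::binary_oarchive oa(os);
                oa << item;                       // serializes directly into buf
            }                                     // archive flushed on destruction
            // array devices are seekable, so tellp() reports bytes written
            return static_cast<std::size_t>(os.tellp());
        }

    The caller owns buf, and an overrun throws from the sink rather than silently truncating, which is usually the behaviour one wants here.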

  • C++ boost wave, scoped macro

    - by aaa
    Hello. Is it possible to have scoped macros using custom-defined macros through Boost Wave? I know it should be possible with C++0x; however, I am working with regular C++. If it is possible, can you provide a link or reference on how to accomplish this? Thanks

    Read the article

  • any stl/boost functors to call operator()

    - by Voivoid
        template <typename T>
        struct Foo
        {
            void operator()(T& t) { t(); }
        };

    Is there any standard or boost functor with a similar implementation? I need it to iterate over a container of functors:

        std::for_each(beginIter, endIter, Foo<Bar>());

    Or maybe there is another way to do it?

    Read the article
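
    There is no dedicated "call operator()" functor that I know of, but boost::mem_fn (std::mem_fn in C++11) can be pointed at operator() directly, which makes the hand-written Foo unnecessary. A sketch, with Bar invented for illustration:

        #include <algorithm>
        #include <iostream>
        #include <vector>
        #include <boost/mem_fn.hpp>

        struct Bar {
            void operator()() { std::cout << "Bar called" << std::endl; }
        };

        int main()
        {
            std::vector<Bar> bars(3);
            // mem_fn adapts the member call; for_each passes each Bar by reference.
            std::for_each(bars.begin(), bars.end(), boost::mem_fn(&Bar::operator()));
            return 0;
        }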

  • Mixing C and C++, raw pointers and (boost) shared pointers

    - by oompahloompah
    I am working in C++ with some legacy C code. I have a data structure that (during initialisation) makes a copy of the structure pointed to by a pointer passed to its initialisation function. Here is a simplification of what I am trying to do; hopefully, no important detail has been lost in the "simplification":

        /* C code */
        typedef struct MyData {
            double * elems;
            unsigned int len;
        };

        int NEW_mydata(MyData* data, unsigned int len)
        {
            // no error checking
            data->elems = (double *)calloc(len, sizeof(double));
            return 0;
        }

        typedef struct Foo {
            MyData data data_;
        };

        void InitFoo(Foo * foo, const MyData * the_data)
        {
            //alloc mem etc ... then assign the STRUCTURE
            foo.data_ = *thedata ;
        }

        // C++ code
        typedef boost::shared_ptr<MyData> MyDataPtr;
        typedef std::map<std::string, MyDataPtr> Datamap;

        class FooWrapper
        {
        public:
            FooWrapper(const std::string& key)
            {
                MyDataPtr mdp = dmap[key];
                InitFoo(&m_foo, const_cast<MyData*>((*mdp.get())));
            }
            ~FooWrapper();

            double get_element(unsigned int index ) const
            {
                return m_foo.elems[index];
            }

        private:
            // non copyable, non-assignable
            FooWrapper(const FooWrapper&);
            FooWrapper& operator= (const FooWrapper&);
            Foo m_foo;
        };

        int main(int argc, char *argv[])
        {
            MyData data1, data2;
            Datamap dmap;

            NEW_mydata(&data1, 10);
            data1->elems[0] = static_cast<double>(22/7);

            NEW_mydata(&data2, 42);
            data2->elems[0] = static_cast<double>(13/21);

            boost::shared_ptr d1(&data1), d2(&data2);

            dmap["data1"] = d1;
            dmap["data2"] = d2;

            FooWrapper fw("data1");

            //expect 22/7, get something else (random number?)
            double ret fw.get_element(0);
        }

    Essentially, what I want to know is this: is there any reason why the data retrieved from the map is different from the one stored in the map?

    Read the article
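
    One detail worth flagging, separate from whatever the article concludes: boost::shared_ptr d1(&data1) hands ownership of a stack object to the smart pointer, which will try to delete it when the last reference goes away. The usual pattern for C-allocated structs is a custom deleter; a sketch (the helper names mirror the snippet but are otherwise hypothetical):

        #include <boost/shared_ptr.hpp>
        #include <cstdlib>

        extern "C" {
            typedef struct MyData {
                double* elems;
                unsigned int len;
            } MyData;
        }

        // Hypothetical deleter that releases both the element array and the struct.
        static void free_mydata(MyData* d)
        {
            if (d) {
                std::free(d->elems);
                std::free(d);
            }
        }

        boost::shared_ptr<MyData> make_mydata(unsigned int len)
        {
            MyData* d = static_cast<MyData*>(std::malloc(sizeof(MyData)));
            d->elems = static_cast<double*>(std::calloc(len, sizeof(double)));
            d->len = len;
            return boost::shared_ptr<MyData>(d, &free_mydata);  // deleter runs on release
        }

    Separately, static_cast<double>(22/7) divides two ints before the cast, so the value stored is 3.0 rather than 22/7.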

  • Boost::asio::endpoint::size() and resize()

    - by p00ya
    Hi. I was reading the Boost endpoint documentation and saw the size() and resize() member functions. The documentation says: "Gets the underlying size of the endpoint in the native type." What does this size represent, and where would it be used/resized? Thanks.

    Read the article
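
    As far as I understand it, size() reports the length of the underlying sockaddr, and resize() matters mainly when the endpoint's storage is filled in by a native call rather than by Asio itself. A hedged, POSIX-only sketch of typical usage (native_fd, buf and the function name are placeholders, not from the article):

        #include <boost/asio.hpp>
        #include <cstddef>
        #include <sys/types.h>
        #include <sys/socket.h>

        // Fill an endpoint from a raw ::recvfrom call on a native socket descriptor.
        // endpoint.data() exposes the sockaddr storage; afterwards resize() records
        // how many bytes of that storage the kernel actually filled in.
        std::size_t receive_raw(int native_fd,
                                char* buf, std::size_t len,
                                boost::asio::ip::udp::endpoint& sender)
        {
            socklen_t addr_len = static_cast<socklen_t>(sender.capacity());
            ssize_t n = ::recvfrom(native_fd, buf, len, 0,
                                   sender.data(), &addr_len);
            if (n >= 0)
                sender.resize(addr_len);   // shrink/grow to the real address size
            return n < 0 ? 0 : static_cast<std::size_t>(n);
        }

    In ordinary Asio code (socket.receive_from and friends) the library does this bookkeeping itself, which is presumably why the docs say so little about it.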

  • Custom InputIterator for Boost graph (BGL)

    - by Shadow
    Hi, I have a graph with custom properties on the vertices and edges. I now want to create a copy of this graph, but I don't want the vertices to be as complex as in the original. By this I mean that it would suffice for the vertices to have the same indices (vertex_index_t) as they do in the original graph. Instead of doing the copying by hand I wanted to use the copy functionality of boost::adjacency_list (see http://www.boost.org/doc/libs/1_37_0/libs/graph/doc/adjacency_list.html):

        template <class EdgeIterator>
        adjacency_list(EdgeIterator first, EdgeIterator last,
                       vertices_size_type n,
                       edges_size_type m = 0,
                       const GraphProperty& p = GraphProperty())

    The description there says: "The EdgeIterator must be a model of InputIterator. The value type of the EdgeIterator must be a std::pair, where the type in the pair is an integer type. The integers will correspond to vertices, and they must all fall in the range of [0, n)." Unfortunately I have to admit that I don't quite get how to define an EdgeIterator that is a model of InputIterator. Here's what I've managed so far:

        template< class EdgeIterator, class Edge >
        class MyEdgeIterator// : public input_iterator< std::pair<int, int> >
        {
        public:
            MyEdgeIterator() {};
            MyEdgeIterator(EdgeIterator& rhs) : actual_edge_it_(rhs) {};
            MyEdgeIterator(const MyEdgeIterator& to_copy) {};

            bool operator==(const MyEdgeIterator& to_compare)
            {
                return actual_edge_it_ == to_compare.actual_edge_it_;
            }
            bool operator!=(const MyEdgeIterator& to_compare)
            {
                return !(*this == to_compare);
            }
            Edge operator*() const
            {
                return *actual_edge_it_;
            }
            const MyEdgeIterator* operator->() const;
            MyEdgeIterator& operator ++()
            {
                ++actual_edge_it_;
                return *this;
            }
            MyEdgeIterator operator ++(int)
            {
                MyEdgeIterator<EdgeIterator, Edge> tmp = *this;
                ++*this;
                return tmp;
            }

        private:
            EdgeIterator& actual_edge_it_;
        }

    However, this doesn't work as it is supposed to, and I ran out of clues. So, how do I define the appropriate InputIterator?

    Read the article
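
    Unless the edges really must be streamed lazily, the simplest model of InputIterator is a plain container iterator: copy the original graph's edges into a std::vector of index pairs and hand its begin/end to the adjacency_list constructor, so no iterator class is needed at all. A sketch (the graph typedefs are assumptions, not from the question, and it presumes vecS vertex storage so descriptors are already indices in [0, n)):

        #include <boost/graph/adjacency_list.hpp>
        #include <boost/graph/graph_traits.hpp>
        #include <boost/tuple/tuple.hpp>
        #include <utility>
        #include <vector>

        typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS>
                SimpleGraph;

        // Build a stripped-down copy that keeps only the vertex indices of the
        // original graph.
        template <class OrigGraph>
        SimpleGraph strip_copy(const OrigGraph& g)
        {
            std::vector<std::pair<std::size_t, std::size_t> > edges;
            typename boost::graph_traits<OrigGraph>::edge_iterator ei, ei_end;
            for (boost::tie(ei, ei_end) = boost::edges(g); ei != ei_end; ++ei)
                edges.push_back(std::make_pair(boost::source(*ei, g),
                                               boost::target(*ei, g)));

            // vector iterators model InputIterator, which is all the ctor requires
            return SimpleGraph(edges.begin(), edges.end(), boost::num_vertices(g));
        }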

  • Identify high CPU consumed thread for Java app

    - by Vincent Ma
    The following Java code emulates a busy thread and an idle thread and starts both:

        import java.util.concurrent.*;

        public class ThreadTest {
            public static void main(String[] args) {
                new Thread(new Idle(), "Idle").start();
                new Thread(new Busy(), "Busy").start();
            }
        }

        class Idle implements Runnable {
            @Override
            public void run() {
                try {
                    TimeUnit.HOURS.sleep(1);
                } catch (InterruptedException e) {
                }
            }
        }

        class Busy implements Runnable {
            @Override
            public void run() {
                while (true) {
                    "Test".matches("T.*");
                }
            }
        }

    Use Process Explorer to find the busy Java process and the thread ID that is consuming the CPU (screenshot in the original post). Converting the thread ID 4044 to hexadecimal gives 0xfcc. Then use VisualVM to take a thread dump and locate that thread by its hex ID (second screenshot in the original post). On Linux you can use top -H to get per-thread information. That's it! Any questions, let me know. Thanks

    Read the article

  • How to pass user-defined structs using boost mpi

    - by lava
    I am trying to send a user-defined structure named ABC using the boost::mpi::send() call. The given struct contains a vector "data" whose size is determined at runtime. Objects of struct ABC are sent by the master to the slaves. But the slaves need to know the size of vector "data" so that a sufficient buffer is available on the slave to receive this data. I can work around it by sending the size first and initializing a sufficient buffer on the slave before receiving the objects of struct ABC, but that defeats the whole purpose of using STL containers. Does anyone know of a better way to handle this? Any suggestions are greatly appreciated. Here is a sample code that describes the intent of my program. This code fails at runtime due to the above-mentioned reason.

        struct ABC
        {
            double cur_stock_price;
            double strike_price;
            double risk_free_rate;
            double option_price;
            std::vector <char> data;
        };

        namespace boost {
        namespace serialization {

        template<class Archive>
        void serialize (Archive &ar, struct ABC &abc, unsigned int version)
        {
            ar & abc.cur_stock_price;
            ar & abc.strike_price;
            ar & abc.risk_free_rate;
            ar & abc.option_price;
            ar & abc.data;
        }

        }
        }

        BOOST_IS_MPI_DATATYPE (ABC);

        int main(int argc, char* argv[])
        {
            mpi::environment env (argc, argv);
            mpi::communicator world;

            if (world.rank () == 0) {
                ABC abc_obj;
                abc_obj.cur_stock_price = 1.0;
                abc_obj.strike_price = 5.0;
                abc_obj.risk_free_rate = 2.5;
                abc_obj.option_price = 3.0;
                abc_obj.data.push_back ('a');
                abc_obj.data.push_back ('b');
                world.send (1, ANY_TAG, abc_obj);
                std::cout << "Rank 0 OK!" << std::endl;
            } else if (world.rank () == 1) {
                ABC abc_obj;
                // Fails here because abc_obj is not big enough
                world.recv (0, ANY_TAG, abc_obj);
                std::cout << "Rank 1 OK!" << std::endl;
                for (int i = 0; i < abc_obj.data.size(); i++)
                    std::cout << i << "=" << abc_obj.data[i] << std::endl;
            }
            MPI_Finalize();
            return 0;
        }

    Read the article
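
    For what it's worth, the failure described above usually traces back to BOOST_IS_MPI_DATATYPE, which tells Boost.MPI the type has a fixed memory layout; a type holding a std::vector does not. A sketch of the plain serialization path (field names are illustrative, not the asker's), where the receiver does not pre-size anything because the vector is resized during deserialization:

        #include <boost/mpi.hpp>
        #include <boost/serialization/vector.hpp>
        #include <iostream>
        #include <vector>

        namespace mpi = boost::mpi;

        struct Payload {
            double price;
            std::vector<char> data;

            template <class Archive>
            void serialize(Archive& ar, const unsigned int /*version*/) {
                ar & price;
                ar & data;
            }
        };

        int main(int argc, char* argv[])
        {
            mpi::environment env(argc, argv);   // finalizes MPI in its destructor
            mpi::communicator world;

            if (world.rank() == 0) {
                Payload p;
                p.price = 3.14;
                p.data.push_back('a');
                p.data.push_back('b');
                world.send(1, 0, p);            // tag 0
            } else if (world.rank() == 1) {
                Payload p;
                world.recv(0, 0, p);            // vector grows to fit automatically
                std::cout << "received " << p.data.size() << " bytes" << std::endl;
            }
            return 0;
        }

    The trade-off is that the serialized path is slower than a true MPI datatype, but it is the one that handles variable-length members transparently.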

  • How to implement tokenizer.rbegin() and rend() for boost::tokenizer ?

    - by Chan
    Hello everyone, I'm playing around with boost::tokenizer; however, I realize that it does not support rbegin() and rend(). I would like to ask how I can add these two functions to the existing class. This is from the Boost site:

        #include <iostream>
        #include <string>
        #include <boost/tokenizer.hpp>

        using namespace std;
        using namespace boost;

        int main()
        {
            string str( "12/12/1986" );
            typedef boost::tokenizer<boost::char_separator<char>> tokenizer;
            boost::char_separator<char> sep( "/" );
            tokenizer tokens( str, sep );

            cout << *tokens.begin() << endl;
            // cout << *tokens.rbegin() << endl;   How could I implement this?

            return 0;
        }

    Read the article

  • NSInvocationOperation and main thread

    - by kpower
    Imagine that I have a view with some UIKit object as its subview (for example, UIActivityIndicatorView; the exact class doesn't matter). This view also has a selector, called doSomething, which somehow manipulates the UIKit object (in our example it can start or stop the indicator view). From the view's code I create an NSInvocationOperation with initWithTarget:self selector:@selector(doSomething) object:nil, then add it to an NSOperationQueue. And everything works fine. How?! That should be a new thread touching a non-thread-safe UIKit object! Why is no error raised (and no crash happens)?

    Read the article

  • How to use thread in Django

    - by zomboid
    I want to check users' subscription dates for a certain period and send mail to users whose subscription is about to end (e.g. two days remaining). I think the best way is to use a thread and a timer to check the dates, but I have no idea where to call this function from. I don't want to make a separate program or shell script; I want to combine this procedure with my Django code. I tried calling the function in my settings.py file, but that doesn't seem to be a good idea: it calls the function and creates a thread every time settings is imported.

    Read the article

  • Thread vs ThreadPool - .Net 2.0

    - by NLV
    Hello, I'm not able to understand the difference between Thread and ThreadPool. Suppose I have to process 50,000 records using threads. With plain threads I need to predefine either the number of threads or the number of records per thread; one of them has to be constant. With a thread pool, in theory, I don't need to set either of them. But in practice I still need to assign the number of records per thread, because the number of threads may grow extremely large if the number of input records is huge. Any insights on this?

    Read the article

  • Adding row to DataGridView from Thread

    - by she hates me
    Hello, I would like to add rows to a DataGridView from two separate threads. I tried something with delegates and BeginInvoke but it doesn't work. Here is my row updater function, which is called from another function running in a thread:

        public delegate void GRIDLOGDelegate(string ulke, string url, string ip = "");

        private void GRIDLOG(string ulke, string url, string ip = "")
        {
            if (this.InvokeRequired)
            {
                // Pass the same function to BeginInvoke, but the call will come
                // in on the correct thread and InvokeRequired will be false.
                object[] myArray = new object[3];
                myArray[0] = ulke;
                myArray[1] = url;
                myArray[2] = ip;
                this.BeginInvoke(new GRIDLOGDelegate(GRIDLOG), new object[] { myArray });
                return;
            }

            // Create one more row
            string[] newRow = new string[] { ulke, url, ip };
            dgLogGrid.Rows.Add(newRow);
        }

    Read the article

  • localtime_r supposed to be thread safe, but causing errors in Valgrind DRD

    - by Nik
    I searched Google as much as I could but couldn't find any good answers to this. localtime_r is supposed to be a thread-safe function for getting the system time. However, when checking my application with Valgrind --tool=drd, it consistently tells me that there is a data race on this function. Are the common search results lying to me, or am I just missing something? It doesn't seem efficient to surround each localtime_r call with a mutex, especially if it is supposed to be thread-safe in the first place. Here is how I'm using it:

        timeval handlerTime;
        gettimeofday(&handlerTime, NULL);
        tm handlerTm;
        localtime_r(&handlerTime.tv_sec, &handlerTm);

    Any ideas?

    Read the article

  • [WPF] The calling thread cannot access this object because a different thread owns it.

    - by zunyite
    Why can't I create a CroppedBitmap in the following code? I get an exception: "The calling thread cannot access this object because a different thread owns it."

        public MainWindow()
        {
            InitializeComponent();
            ThreadPool.QueueUserWorkItem((o) =>
            {
                // load a large image file
                var bf = BitmapFrame.Create(
                    new Uri("D:\\1172735642.jpg"),
                    BitmapCreateOptions.DelayCreation | BitmapCreateOptions.IgnoreColorProfile,
                    BitmapCacheOption.None);
                bf.Freeze();

                Dispatcher.BeginInvoke(
                    new Action(() =>
                    {
                        CroppedBitmap cb = new CroppedBitmap(bf, new Int32Rect(1, 1, 5, 5));
                        cb.Freeze();
                        // set Image's source to cb ...
                    }),
                    System.Windows.Threading.DispatcherPriority.ApplicationIdle);
            });
        }

    Read the article

  • Java Thread - Synchronization issue

    - by Yatendra Goel
    From Sun's tutorial: Synchronized methods enable a simple strategy for preventing thread interference and memory consistency errors: if an object is visible to more than one thread, all reads or writes to that object's variables are done through synchronized methods. (An important exception: final fields, which cannot be modified after the object is constructed, can be safely read through non-synchronized methods, once the object is constructed.) This strategy is effective, but can present problems with liveness, as we'll see later in this lesson. Q1. Do the above statements mean that if an object of a class is going to be shared among multiple threads, then all instance methods of that class (except getters of final fields) should be made synchronized, since instance methods process instance variables?

    Read the article

  • Python Threading, loading one thread after another

    - by Michael
    Hi, I'm working on a media player and am able to load a single .wav file and play it, as seen in the code below.

        foo = wx.FileDialog(self, message="Open a .wav file...",
                            defaultDir=os.getcwd(), defaultFile="",
                            style=wx.FD_MULTIPLE)
        foo.ShowModal()
        queue = foo.GetPaths()
        self.playing_thread = threading.Thread(target=self.playFile,
                                               args=(queue[0], 'msg'))
        self.playing_thread.start()

    The problem comes when I try to turn the above code into a loop over multiple .wav files: while playing_thread.isActive == True, create and .start() the thread; then when .isActive == False, pop queue[0] and load the next .wav file. When I do that, my UI locks up and I have to terminate the program. Any ideas would be appreciated.

    Read the article
