Search Results

Search found 1764 results on 71 pages for 'boost interprocess'.

Page 4/71

  • How to use/manipulate return value from nested boost::bind

    - by JQ
    I have two functions: 1. A & DataSource(); 2. void DataConsumer( A * ); What I want to achieve: use one statement to assemble them into a single functor. I have tried: 1. boost::function< void() > func( boost::bind( DataConsumer, & boost::bind( DataSource ) ) ); which didn't work; the compiler says it cannot convert 'boost::_bi::bind_t' to 'A *'. 2. boost::function< void() > func( boost::bind( DataConsumer, boost::addressof( boost::bind( DataSource ) ) )); the compiler says it cannot convert parameter 1 from 'boost::_bi::bind_t' to 'A &'. Question: how do I use the return value of the nested boost::bind? An answer using boost::lambda::bind would also be welcome.
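
    A workaround sketch (not the author's code): the inner bind returns a temporary bind expression rather than an A*, so wrap the reference-to-pointer conversion in a small helper and bind that instead. The DataSource/DataConsumer definitions below are stand-ins so the snippet links.

        #include <boost/function.hpp>

        struct A { int value; };

        A  g_a = { 42 };
        A& DataSource()           { return g_a; }   // stand-in for the real source
        void DataConsumer( A* a ) { (void)a; }       // stand-in for the real consumer

        // adapts the reference-returning source to the pointer-taking consumer
        void ConsumeFromSource() { DataConsumer( &DataSource() ); }

        int main()
        {
            boost::function< void() > func( &ConsumeFromSource );
            func();   // equivalent to DataConsumer( &DataSource() )
            return 0;
        }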

    Read the article

  • Boost::Asio : io_service.run() vs poll() or how do I integrate boost::asio in mainloop

    - by user300713
    Hi, I am currently trying to use boost::asio for some simple TCP networking for the first time, and I already came across something I am not really sure how to deal with. As far as I understand, the io_service.run() method is basically a loop which runs until there is nothing left to do, which means it will run until I release my little server object. Since I already have some sort of main loop set up, I would rather update the networking loop manually from there, just for the sake of simplicity, and I think io_service.poll() would do what I want, sort of like this: void myApplication::update() { myIoService.poll(); //do other stuff } This seems to work, but I am still wondering if there is a drawback to this method, since it does not seem to be the common way of dealing with boost::asio's io services. Is this a valid approach, or should I rather use io_service.run() in a non-blocking extra thread?
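
    For comparison, a minimal sketch (assumptions: plain Boost.Asio and Boost.Thread, nothing from the question's server class) of the other option mentioned, run() on an extra thread: an io_service::work object keeps run() from returning while the main loop does its own thing, at the cost of handlers executing on the second thread.

        #include <boost/asio.hpp>
        #include <boost/bind.hpp>
        #include <boost/thread.hpp>

        // wrapper; binding &io_service::run directly can be ambiguous because run() is overloaded
        void run_service(boost::asio::io_service* io) { io->run(); }

        int main()
        {
            boost::asio::io_service io;
            boost::asio::io_service::work work(io);   // keeps run() busy even with no pending handlers

            boost::thread net_thread(boost::bind(&run_service, &io));

            // ... the application's own main loop would live here ...

            io.stop();            // let run() return
            net_thread.join();
            return 0;
        }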

    Read the article

  • Boost.Thread throws bad_alloc exception in VS2010

    - by the_drow
    Upon including <boost/thread.hpp> I get this exception: First-chance exception at 0x7c812afb in CSF.exe: Microsoft C++ exception: boost::exception_detail::clone_impl<boost::exception_detail::bad_alloc_> at memory location 0x0012fc3c.. First-chance exception at 0x7c812afb in CSF.exe: Microsoft C++ exception: [rethrow] at memory location 0x00000000.. I can't catch it; breaking at the memory location brings me to kernel32.dll, and at this point I cannot say what's going on, but it appears that the exception is thrown after the program ends and VS is capable of catching it. The test case: #include <boost/thread.hpp> int main() { return 0; }

    Read the article

  • Boost.Python wrapping hierarchies avoiding diamond inheritance

    - by stbuton
    I'm having some trouble seeing what the best way is to wrap a series of classes with Boost.Python while avoiding messy inheritance problems. Say I have the classes A, B, and C with the following structure: struct A { virtual void foo(); virtual void bar(); virtual void baz(); }; struct B : public A { virtual void quux(); }; struct C : public A { virtual void foobar(); }; I want to wrap all classes A, B, and C such that they are extendable from Python. The normal method for accomplishing this would be along the lines of: struct A_Wrapper : public A, boost::python::wrapper<A> { //dispatch logic for virtual functions }; Now for classes B and C which extend from A I would like to be able to inherit and share the wrapping implementation for A. So I'd like to be able to do something along the lines of: struct B_Wrapper : public B, public A_Wrapper, public boost::python::wrapper<B> { //dispatch logic specific for B }; struct C_Wrapper : public C, public A_Wrapper, public boost::python::wrapper<C> { //dispatch logic specific for C }; However, it seems like that would introduce all manner of nastiness with the double inheritance of the boost wrapper base and the double inheritance of A in the B_Wrapper and C_Wrapper objects. Is there a common way of solving this that I'm missing? Thanks.
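
    One commonly suggested direction (a sketch under that assumption, not from the Boost.Python docs; the class_<> registrations are omitted): avoid the diamond entirely by templating the shared dispatch logic on the concrete class being wrapped, so each wrapper has exactly one A subobject and one wrapper<> base.

        #include <boost/python.hpp>

        // the question's classes, given empty bodies so the sketch compiles
        struct A { virtual void foo() {} virtual void bar() {} virtual void baz() {} virtual ~A() {} };
        struct B : public A { virtual void quux() {} };

        // shared dispatch logic, parameterized on the concrete class
        template <class BaseT>
        struct A_dispatch : BaseT, boost::python::wrapper<BaseT>
        {
            void foo()
            {
                if (boost::python::override f = this->get_override("foo")) f();
                else BaseT::foo();
            }
            // bar() and baz() would follow the same pattern
        };

        struct B_Wrapper : A_dispatch<B>
        {
            void quux()
            {
                if (boost::python::override f = this->get_override("quux")) f();
                else B::quux();
            }
        };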

    Read the article

  • Iterator for boost::variant

    - by Ivan
    Hi there, I'm trying to adapt some existing code to boost::variant. The idea is to use boost::variant for a heterogeneous vector. The problem is that the rest of the code uses iterators to access the elements of the vector. Is there a way to use boost::variant with iterators? I've tried typedef boost::variant<Foo, Bar> Variant; std::vector<Variant> bag; std::vector<Variant>::iterator it; for(it= bag.begin(); it != bag.end(); ++it){ cout<<(*it)<<endl; } But it didn't work.
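
    A minimal sketch of the usual fix (Foo and Bar below are stand-ins for the question's types): keep the iterator loop, but print through a static_visitor with boost::apply_visitor instead of streaming the variant directly.

        #include <boost/variant.hpp>
        #include <iostream>
        #include <vector>

        struct Foo { };
        struct Bar { };
        std::ostream& operator<<(std::ostream& os, const Foo&) { return os << "Foo"; }
        std::ostream& operator<<(std::ostream& os, const Bar&) { return os << "Bar"; }

        // prints whichever type the variant currently holds
        struct print_visitor : boost::static_visitor<void>
        {
            template <typename T>
            void operator()(const T& value) const { std::cout << value << std::endl; }
        };

        int main()
        {
            typedef boost::variant<Foo, Bar> Variant;
            std::vector<Variant> bag;
            bag.push_back(Foo());
            bag.push_back(Bar());

            print_visitor printer;
            for (std::vector<Variant>::iterator it = bag.begin(); it != bag.end(); ++it)
                boost::apply_visitor(printer, *it);
            return 0;
        }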

    Read the article

  • Using boost::iterator

    - by Neil G
    I wrote a sparse vector class (see #1, #2.) I would like to provide two kinds of iterators: The first set, the regular iterators, can point to any element, whether set or unset. If they are read from, they return either the set value or value_type(); if they are written to, they create the element and return the lvalue reference. Thus, they are: Random Access Traversal Iterator and Readable and Writable Iterator. The second set, the sparse iterators, iterate over only the set elements. Since they don't need to lazily create elements that are written to, they are: Random Access Traversal Iterator and Readable and Writable and Lvalue Iterator. I also need const versions of both, which are not writable. I can fill in the blanks, but I'm not sure how to use boost::iterator_adaptor to start out. Here's what I have so far: template<typename T> class sparse_vector { public: typedef size_t size_type; typedef T value_type; private: typedef T& true_reference; typedef const T* const_pointer; typedef sparse_vector<T> self_type; struct ElementType { ElementType(size_type i, T const& t): index(i), value(t) {} ElementType(size_type i, T&& t): index(i), value(t) {} ElementType(size_type i): index(i) {} ElementType(ElementType const&) = default; size_type index; value_type value; }; typedef vector<ElementType> array_type; public: typedef T* pointer; typedef T& reference; typedef const T& const_reference; private: size_type size_; mutable typename array_type::size_type sorted_filled_; mutable array_type data_; // lots of code for various algorithms... public: class sparse_iterator : public boost::iterator_adaptor< sparse_iterator // Derived , array_type::iterator // Base (the internal array) (this parameter does not compile! -- says expected a type, got 'std::vector::iterator'???) , boost::use_default // Value , boost::random_access_traversal_tag? // CategoryOrTraversal > class iterator_proxy { ??? }; class iterator : public boost::iterator_facade< iterator // Derived , ????? // Base , ????? // Value , boost::?????? // CategoryOrTraversal > { }; };
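
    The "expected a type" error on the Base parameter is most likely just a missing typename: inside a class template, array_type::iterator is a dependent name. A minimal, self-contained sketch of that single fix (everything else from the question stripped out):

        #include <cstddef>
        #include <vector>
        #include <boost/iterator/iterator_adaptor.hpp>

        template <typename T>
        class sparse_vector
        {
            struct ElementType { std::size_t index; T value; };
            typedef std::vector<ElementType> array_type;

        public:
            class sparse_iterator
                : public boost::iterator_adaptor<
                      sparse_iterator                        // Derived
                    , typename array_type::iterator          // Base -- 'typename' is required here
                    , boost::use_default                     // Value
                    , boost::random_access_traversal_tag     // CategoryOrTraversal
                  >
            {
            public:
                sparse_iterator() : sparse_iterator::iterator_adaptor_() {}
                explicit sparse_iterator(typename array_type::iterator it)
                    : sparse_iterator::iterator_adaptor_(it) {}
            };
        };

        int main() { sparse_vector<double> v; (void)v; return 0; }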

    Read the article

  • buffer overflow with boost::program_options

    - by f4
    Hello, I have a problem using boost::program_options: this simple program, copy-pasted from Boost's documentation: #include <boost/program_options.hpp> int main( int argc, char** argv ) { namespace po = boost::program_options; po::options_description desc("Allowed options"); desc.add_options() ("help", "produce help message") ("compression", po::value<int>(), "set compression level") ; return 0; } fails with a buffer overflow. I have activated the "buffer security switch", and when I run it I get an "unknown exception (0xc0000409)" when I step over the line desc.add_options()... I use Visual Studio 2005 and boost 1.43.0. By the way, it does run if I deactivate the switch, but I don't feel comfortable doing so... unless it's possible to deactivate it locally. So do you have a solution to this problem? EDIT: I found the problem: I was linking against libboost_program_options-vc80-mt.lib, which wasn't the right library.

    Read the article

  • Scalability of Boost.Asio

    - by samm
    I'm curious how far others have pushed Boost.Asio in terms of scalability. I am writing an application that may use close to 1000 socket objects, a handful of acceptor objects, and many thousand timer objects. I've configured it such that there's a thread pool invoking io_service::run and use strands in the appropriate places to ensure my handlers do not stomp on each other. My platform is Red Hat Enterprise Linux with Boost 1.39, though I'm not opposed to upgrading to a more recent version of boost.

    Read the article

  • How to pass parameters to managed_shared_memory.construct() in Boost.Interprocess

    - by recipriversexclusion
    I've stared at the Boost.Interprocess documentation for hours but still haven't been able to figure this out. In the doc, they have an example of creating a vector in shared memory like so: //Define an STL compatible allocator of ints that allocates from the managed_shared_memory. //This allocator will allow placing containers in the segment typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator; //Alias a vector that uses the previous STL-like allocator so that allocates //its values from the segment typedef vector<int, ShmemAllocator> MyVector; int main(int argc, char *argv[]) { //Create a new segment with given name and size managed_shared_memory segment(create_only, "MySharedMemory", 65536); //Initialize shared memory STL-compatible allocator const ShmemAllocator alloc_inst (segment.get_segment_manager()); //Construct a vector named "MyVector" in shared memory with argument alloc_inst MyVector *myvector = segment.construct<MyVector>("MyVector")(alloc_inst); Now, I understand this. What I'm stuck on is how to pass a second parameter to segment.construct() to specify the number of elements. The interprocess document gives the prototype for construct() as MyType *ptr = managed_memory_segment.construct<MyType>("Name") (par1, par2...); but when I try MyVector *myvector = segment.construct<MyVector>("MyVector")(100, alloc_inst); I get compilation errors. My questions are: Who actually gets passed the parameters par1, par2 from segment.construct? The constructor of the object, e.g. the vector? My understanding is that it is the template allocator parameter that is being passed. Is that correct? How can I add another parameter, in addition to alloc_inst, that is required by the constructor of the object being created in shared memory? There's very little information other than the terse Boost docs on this.
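
    As to the questions: the parameters in the second set of parentheses are forwarded to the constructor of the type being constructed (MyVector here), not to the allocator, so they have to match one of vector's constructor overloads. A hedged sketch, assuming the (count, value, allocator) overload and the typedefs from the question:

        // 100 elements, each initialized to 0, allocated from the segment
        MyVector *myvector = segment.construct<MyVector>("MyVector")(100, 0, alloc_inst);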

    Read the article

  • Boost timed_wait leap seconds problem

    - by Isac
    Hi, I am using timed_wait from the Boost C++ library and I am getting a problem with leap seconds. Here is a quick example from Boost's documentation: boost::system_time const timeout=boost::get_system_time() + boost::posix_time::milliseconds(500); extern bool done; extern boost::mutex m; extern boost::condition_variable cond; boost::unique_lock<boost::mutex> lk(m); while(!done) { if(!cond.timed_wait(lk,timeout)) { throw "timed out"; } } The timed_wait function is returning 24 seconds earlier than it should. 24 seconds is the current amount of leap seconds in UTC. So, boost is widely used but I could not find any info about this particular problem. Has anyone else experienced this problem? What are the possible causes and solutions? Notes: I am using boost 1.38 on a Linux system. I've heard that this problem doesn't happen on MacOS.

    Read the article

  • boost timer usage question

    - by stefita
    I have a really simple question, yet I can't find an answer for it. I guess I am missing something in the usage of boost/timer.hpp. Here is my code, which unfortunately gives me error messages: #include <boost/timer.hpp> int main() { boost::timer t; } And the error messages are as follows: /usr/include/boost/timer.hpp: In member function ‘double boost::timer::elapsed_max() const’: /usr/include/boost/timer.hpp:59: error: ‘numeric_limits’ is not a member of ‘std’ /usr/include/boost/timer.hpp:59: error: ‘::max’ has not been declared /usr/include/boost/timer.hpp:59: error: expected primary-expression before ‘double’ /usr/include/boost/timer.hpp:59: error: expected `)' before ‘double’ The used library is boost 1.36 (SUSE 11.1). Thanks in advance!
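
    The errors all point at std::numeric_limits not being visible inside boost/timer.hpp. A common workaround on toolchains where <limits> is not pulled in indirectly (an assumption about the cause here, not a verified fix for this exact GCC/Boost combination) is to include it first:

        #include <limits>            // brings std::numeric_limits into scope
        #include <boost/timer.hpp>

        int main()
        {
            boost::timer t;          // starts timing on construction
            return 0;
        }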

    Read the article

  • Accomplishing boost::shared_from_this() in constructor via boost::shared_from_raw(this)

    - by Kyle
    Googling and poking around the boost code, it appears that it's now possible to construct a shared_ptr to this in a constructor, by inheriting from enable_shared_from_raw and calling shared_from_raw(this) Is there any documentation or examples of this? I'm finding nothing with google. Why am I not finding any useful buzz on this on google? I would have thought using shared_from_this in a constructor would be a hot/desirable item. Should I be inheriting from both enable_shared_from_raw and enable_shared_from_this, and restricting my usage of enable_shared_from_raw when I have to? If so, why? Is there a performance hit with shared_from_raw?

    Read the article

  • Win32 reset event like synchronization class with boost C++

    - by fgungor
    I need some mechanism reminiscent of Win32 reset events that I can check via functions having the same semantics with WaitForSingleObject() and WaitForMultipleObjects() (Only need the ..SingleObject() version for the moment) . But I am targeting multiple platforms so all I have is boost::threads (AFAIK) . I came up with the following class and wanted to ask about the potential problems and whether it is up to the task or not. Thanks in advance. class reset_event { bool flag, auto_reset; boost::condition_variable cond_var; boost::mutex mx_flag; public: reset_event(bool _auto_reset = false) : flag(false), auto_reset(_auto_reset) { } void wait() { boost::unique_lock<boost::mutex> LOCK(mx_flag); if (flag) return; cond_var.wait(LOCK); if (auto_reset) flag = false; } bool wait(const boost::posix_time::time_duration& dur) { boost::unique_lock<boost::mutex> LOCK(mx_flag); bool ret = cond_var.timed_wait(LOCK, dur) || flag; if (auto_reset && ret) flag = false; return ret; } void set() { boost::lock_guard<boost::mutex> LOCK(mx_flag); flag = true; cond_var.notify_all(); } void reset() { boost::lock_guard<boost::mutex> LOCK(mx_flag); flag = false; } }; Example usage; reset_event terminate_thread; void fn_thread() { while(!terminate_thread.wait(boost::posix_time::milliseconds(10))) { std::cout << "working..." << std::endl; boost::this_thread::sleep(boost::posix_time::milliseconds(1000)); } std::cout << "thread terminated" << std::endl; } int main() { boost::thread worker(fn_thread); boost::this_thread::sleep(boost::posix_time::seconds(1)); terminate_thread.set(); worker.join(); return 0; } EDIT I have fixed the code according to Michael Burr's suggestions. My "very simple" tests indicate no problems. class reset_event { bool flag, auto_reset; boost::condition_variable cond_var; boost::mutex mx_flag; public: explicit reset_event(bool _auto_reset = false) : flag(false), auto_reset(_auto_reset) { } void wait() { boost::unique_lock<boost::mutex> LOCK(mx_flag); if (flag) { if (auto_reset) flag = false; return; } do { cond_var.wait(LOCK); } while(!flag); if (auto_reset) flag = false; } bool wait(const boost::posix_time::time_duration& dur) { boost::unique_lock<boost::mutex> LOCK(mx_flag); if (flag) { if (auto_reset) flag = false; return true; } bool ret = cond_var.timed_wait(LOCK, dur); if (ret && flag) { if (auto_reset) flag = false; return true; } return false; } void set() { boost::lock_guard<boost::mutex> LOCK(mx_flag); flag = true; cond_var.notify_all(); } void reset() { boost::lock_guard<boost::mutex> LOCK(mx_flag); flag = false; } };

    Read the article

  • Composing adaptors in Boost::range

    - by bruno nery
    I started playing with Boost::Range in order to have a pipeline of lazy transforms in C++. My problem now is how to split a pipeline into smaller parts. Suppose I have: int main(){ auto map = boost::adaptors::transformed; // shorten the name auto sink = generate(1) | map([](int x){ return 2*x; }) | map([](int x){ return x+1; }) | map([](int x){ return 3*x; }); for(auto i : sink) std::cout << i << "\n"; } And I want to replace the first two maps with a magic_transform, i.e.: int main(){ auto map = boost::adaptors::transformed; // shorten the name auto sink = generate(1) | magic_transform() | map([](int x){ return 3*x; }); for(auto i : sink) std::cout << i << "\n"; } How would one write magic_transform? I looked up Boost::Range's documentation, but I can't get a good grasp of it.
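
    Not a direct answer to naming a reusable magic_transform (before C++14 that is awkward because the lambdas' types cannot be spelled), but a sketch of the usual workaround: compose the functions instead of the adaptors, which produces the same elements. The vector below stands in for the question's generate(1).

        #include <boost/range/adaptor/transformed.hpp>
        #include <iostream>
        #include <vector>

        int main()
        {
            auto map = boost::adaptors::transformed;   // shorten the name, as in the question

            // fuse the first two steps into a single functor
            auto magic = [](int x) { return 2 * x + 1; };

            std::vector<int> source = {1, 2, 3};       // stands in for generate(1)
            auto sink = source | map(magic)
                               | map([](int x) { return 3 * x; });

            for (auto i : sink) std::cout << i << "\n";
            return 0;
        }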

    Read the article

  • Compare two variant with boost static_visitor

    - by Zozzzzz
    I started to use the boost library a few days ago, so my question may be trivial. I want to compare two variants of the same type with a static_visitor. I tried the following, but it won't compile. struct compare:public boost::static_visitor<bool> { bool operator()(int& a, int& b) const { return a<b; } bool operator()(double& a, double& b) const { return a<b; } }; int main() { boost::variant<double, int > v1, v2; v1 = 3.14; v2 = 5.25; compare vis; bool b = boost::apply_visitor(vis, v1,v2); cout<<b; return 0; } Thank you for any help or suggestion!
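
    A likely cause (a sketch of the fix, not checked against this exact compiler output): binary visitation instantiates operator() for every combination of the bounded types, so the mixed (int, double) and (double, int) cases are missing. A templated overload covers all combinations:

        #include <boost/variant.hpp>
        #include <iostream>

        struct compare : public boost::static_visitor<bool>
        {
            template <typename T, typename U>
            bool operator()(const T& a, const U& b) const { return a < b; }
        };

        int main()
        {
            boost::variant<double, int> v1, v2;
            v1 = 3.14;
            v2 = 5;                                   // holds the int alternative
            compare vis;
            bool b = boost::apply_visitor(vis, v1, v2);
            std::cout << b;
            return 0;
        }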

    Read the article

  • Using boost locks for RAII access to a semaphore

    - by dan
    Suppose I write a C++ semaphore class with an interface that models the boost Lockable concept (i.e. lock(); unlock(); try_lock(); etc.). Is it safe/recommended to use boost locks for RAII access to such an object? In other words, do boost locks (and/or other related parts of the boost thread library) assume that the Lockable concept will only be modeled by mutex-like objects which are locked and unlocked from the same thread? My guess is that it should be OK to use a semaphore as a model for Lockable. I've browsed through some of the boost source and it "seems" OK. The locks don't appear to store explicit references to this_thread or anything like that. Moreover, the Lockable concept doesn't have any function like whichThreadOwnsMe(). It also looks like I should even be able to pass a boost::unique_lock<MySemaphore> reference to boost::condition_variable_any::wait. However, the documentation is not explicitly clear about the requirements. To illustrate what I mean, consider a bare-bones binary semaphore class along these lines: class MySemaphore{ bool locked; boost::mutex mx; boost::condition_variable cv; public: void lock(){ boost::unique_lock<boost::mutex> lck(mx); while(locked) cv.wait(lck); locked=true; } void unlock(){ { boost::lock_guard<boost::mutex> lck(mx); if(!locked) error(); locked=false; } cv.notify_one(); } // bool try_lock(); void error(); etc. } Now suppose that somewhere, either on an object or globally, I have MySemaphore sem; I want to lock and unlock it using RAII. Also I want to be able to "pass" ownership of the lock from one thread to another. For example, in one thread I execute void doTask() { boost::unique_lock<MySemaphore> lock(sem); doSomeWorkWithSharedObject(); signalToSecondThread(); waitForSignalAck(); lock.release(); } While another thread is executing something like { waitForSignalFromFirstThread(); ackSignal(); boost::unique_lock<MySemaphore>(sem,boost::adopt_lock_t()); doMoreWorkWithSameSharedObject(); } The reason I am doing this is that I don't want anyone else to be able to get the lock on sem in between the time that the first thread executes doSomeWorkWithSharedObject() and the time the second executes doMoreWorkWithSameSharedObject(). Basically, I'm splitting one task into two parts. And the reason I'm splitting the task up is because (1) I want the first part of the task to get started as soon as possible, (2) I want to guarantee that the first part is complete before doTask() returns, and (3) I want the second, more time-consuming part of the task to be completed by another thread, possibly chosen from a pool of slave threads that are waiting around to finish tasks that have been started by master threads. NOTE: I recently posted this same question (sort of) here http://stackoverflow.com/questions/2754884/unlocking-a-mutex-from-a-different-thread-c but I confused mutexes with semaphores, and so the question about using boost locks didn't really get addressed.

    Read the article

  • boost::program_options bug or feature?

    - by Dmitriy
    Very simple example: #include <string> #include <boost/program_options.hpp> namespace po = boost::program_options; int main(int argc, char* argv[]) { po::options_description recipients("Recipient(s)"); recipients.add_options() ("csv", po::value<std::string>(), "" ) ("csv_name", po::value<unsigned>(), "" ) ; po::options_description cmdline_options; cmdline_options.add(recipients); po::variables_map vm; po::store(po::command_line_parser(argc, argv).options(cmdline_options).run(), vm); po::notify(vm); return 0; } And some tests: >Test --csv test in option 'csv_name': invalid option value >Test --csv_name test in option 'csv_name': invalid option value >Test --csv_name 0 >Test --csv text in option 'csv_name': invalid option value >Test --csv 0 >Test --csv_name 0 >Test --csv_name 0 --csv text multiple occurrences It looks like boost::program_options treats the parameter "csv" as "csv_name". Is it a feature or a bug?
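
    One guess (an assumption, not a confirmed diagnosis): the default command-line style includes allow_guessing, which lets an option name match another option as a prefix, and "csv" is a prefix of "csv_name". A sketch of parsing with guessing turned off, replacing the store() call from the question:

        // in main(), replacing the original po::store(...) line
        namespace cls = boost::program_options::command_line_style;
        po::store(po::command_line_parser(argc, argv)
                      .options(cmdline_options)
                      .style(cls::default_style & ~cls::allow_guessing)
                      .run(),
                  vm);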

    Read the article

  • Reading from serial port with Boost Asio?

    - by trikri
    Hi! I'm going to check for incoming messages (data packets) on the serial port, using Boost Asio. Each message will start with a header that is one byte long, and will specify which type of message has been sent. Each type of message has its own length. The function I'm about to write should check for new incoming messages continually, and when it finds one it should read it, and then some other function should parse it. I thought that the code might look something like this: void check_for_incoming_messages() { boost::asio::streambuf response; boost::system::error_code error; std::string s1, s2; if (boost::asio::read(port, response, boost::asio::transfer_at_least(0), error)) { s1 = streambuf_to_string(response); int msg_code = s1[0]; if (msg_code < 0 || msg_code >= NUM_MESSAGES) { // Handle error, invalid message header } if (boost::asio::read(port, response, boost::asio::transfer_at_least(message_lengths[msg_code]-s1.length()), error)) { s2 = streambuf_to_string(response); // Handle the content of s1 and s2 } else if (error != boost::asio::error::eof) { throw boost::system::system_error(error); } } else if (error != boost::asio::error::eof) { throw boost::system::system_error(error); } } Is boost::asio::streambuf the right thing to use? And how do I extract the data from it so I can parse the message? I also want to know if I need to have a separate thread which only calls this function, so that it gets called more often. Isn't there otherwise a risk of losing data between two calls to the function, because so much data comes in that it can't be stored in the serial port's buffer? I'm using Qt as a widget toolkit and I don't really know how long it needs to process all its events.
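
    On the separate-thread/data-loss worry, one alternative sketch (assumptions: an already-opened boost::asio::serial_port named port, message parsing left out): let Asio call a completion handler whenever bytes arrive, so nothing depends on how often the main loop polls.

        #include <cstddef>
        #include <boost/asio.hpp>
        #include <boost/bind.hpp>

        boost::asio::io_service io;
        boost::asio::serial_port port(io);      // assumed to be opened elsewhere, as in the question
        char buf[256];

        void start_read();

        void on_read(const boost::system::error_code& ec, std::size_t n)
        {
            if (!ec) {
                // hand buf[0 .. n) to the message parser here
                start_read();                   // immediately queue the next read
            }
        }

        void start_read()
        {
            port.async_read_some(boost::asio::buffer(buf),
                boost::bind(&on_read,
                            boost::asio::placeholders::error,
                            boost::asio::placeholders::bytes_transferred));
        }

        int main()
        {
            start_read();
            io.run();                           // or io.poll() from an existing main loop
            return 0;
        }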

    Read the article

  • Boost::Thread linking error on OSX?

    - by gct
    So I'm going nuts trying to figure this one out. Here's my basic setup: I'm compiling a shared library with a bunch of core functionality that uses a lot of boost stuff. We'll call this library libpf_core.so. It's linked with the boost static libraries, specifically the python, system, filesystem, thread, and program_options libraries. This all goes swimmingly. Now, I have a little test program called test_socketio which is compiled into a shared library (it's loaded as a plugin at runtime). It uses some boost stuff like boost::bind and boost::thread, and it's linked against libpf_core.so (which has the boost libraries included, remember). When I go to compile test_socketio though, out of all my plugins it gives me a linking error: [ Building test_socketio ] g++ -c -pg -g -O0 -I/usr/local/include -I../include test_socketio.cc -o test_socketio.o g++ -shared test_socketio.o -lpy_core -o test_socketio.so Undefined symbols: "boost::lock_error::lock_error()", referenced from: boost::unique_lock<boost::mutex>::lock() in test_socketio.o ld: symbol(s) not found collect2: ld returned 1 exit status And I'm going crazy trying to figure out why this is. I've tried explicitly linking boost::thread into the plugin to no avail, tried ensuring that I'm using the boost headers associated with the libraries linked into libpf_core.so in case there was a conflict there. Is there something OSX-specific regarding boost that I'm missing? In my searching on Google I've seen a number of other people get this error but no one seems to have come up with a satisfactory solution.

    Read the article

  • boost::serialization with mutable members

    - by redmoskito
    Using boost::serialization, what's the "best" way to serialize an object that contains cached, derived values in mutable members, such that cached members aren't serialized, but on deserialization, they are initialized to their appropriate default? A definition of "best" follows later, but first an example: class Example { public: Example(float n) : num(n), sqrt_num(-1.0) {} float get_num() const { return num; } // compute and cache sqrt on first read float get_sqrt() const { if(sqrt_num < 0) sqrt_num = sqrt(num); return sqrt_num; } template <class Archive> void serialize(Archive& ar, unsigned int version) { ... } private: float num; mutable float sqrt_num; }; On serialization, only the "num" member should be saved. On deserialization, the sqrt_num member must be initialized to its sentinel value indicating it needs to be computed. What is the most elegant way to implement this? In my mind, an elegant solution would avoid splitting serialize() into separate save() and load() methods (which introduces maintenance problems). One possible implementation of serialize: template <class Archive> void serialize(Archive& ar, unsigned int version) { ar & num; sqrt_num = -1.0; } This handles the deserialization case, but in the serialization case, the cached value is killed and must be recomputed. Also, I've never seen an example of boost::serialization that explicitly sets members inside of serialize(), so I wonder if this is generally not recommended. Some might suggest that the default constructor handles this, for example: int main() { Example e; { std::ifstream ifs("filename"); boost::archive::text_iarchive ia(ifs); ia >> e; } cout << e.get_sqrt() << endl; return 0; } which works in this case, but I think fails if the object receiving the deserialized data has already been initialized, as in the example below: int main() { Example ex1(4); Example ex2(9); cout << ex1.get_sqrt() << endl; // outputs 2; cout << ex2.get_sqrt() << endl; // outputs 3; // the following two blocks should implement ex2 = ex1; // save ex1 to archive { std::ofstream ofs("filename"); boost::archive::text_oarchive oa(ofs); oa << ex1; } // read it back into ex2 { std::ifstream ifs("filename"); boost::archive::text_iarchive ia(ifs); ia >> ex2; } // these should be equal now, but aren't, // since Example::serialize() doesn't modify sqrt_num cout << ex1.get_sqrt() << endl; // outputs 2; cout << ex2.get_sqrt() << endl; // outputs 3; return 0; } I'm sure this issue has come up with others, but I have struggled to find any documentation on this particular scenario. Thanks!
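
    One way to keep a single serialize() (a sketch; splitting via boost::serialization::split_member() is the other standard route), meant as a drop-in replacement for the serialize() in the question's Example class: only reset the cache when the archive is loading, so saving never clobbers it.

        template <class Archive>
        void serialize(Archive& ar, unsigned int /*version*/)
        {
            ar & num;
            // Archive::is_loading is a compile-time constant; the branch
            // is only taken for input archives
            if (Archive::is_loading::value)
                sqrt_num = -1.0;
        }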

    Read the article

  • Wrapping a pure virtual method with multiple arguments with Boost.Python

    - by fallino
    Hello, I followed the "official" tutorial and others but still don't manage to expose this pure virtual method (getPeptide) : ms_mascotresults.hpp class ms_mascotresults { public: ms_mascotresults(ms_mascotresfile &resfile, const unsigned int flags, double minProbability, int maxHitsToReport, const char * unigeneIndexFile, const char * singleHit = 0); ... virtual ms_peptide getPeptide(const int q, const int p) const = 0; } ms_mascotresults.cpp #include <boost/python.hpp> using namespace boost::python; #include "msparser.hpp" // which includes "ms_mascotresults.hpp" using namespace matrix_science; #include <iostream> #include <sstream> struct ms_mascotresults_wrapper : ms_mascotresults, wrapper<ms_mascotresults> { ms_peptide getPeptide(const int q, const int p) { this->get_override("getPeptide")(q); this->get_override("getPeptide")(p); } }; BOOST_PYTHON_MODULE(ms_mascotresults) { class_<ms_mascotresults_wrapper, boost::noncopyable>("ms_mascotresults") .def("getPeptide", pure_virtual(&ms_mascotresults::getPeptide) ) ; } Here are the bjam's errors : /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:66: error: cannot declare field ‘boost::python::objects::value_holder<ms_mascotresults_wrapper>::m_held’ to be of abstract type ‘ms_mascotresults_wrapper’ ms_mascotresults.cpp:12: note: because the following virtual functions are pure within ‘ms_mascotresults_wrapper’: ... include/ms_mascotresults.hpp:334: note: virtual matrix_science::ms_peptide matrix_science::ms_mascotresults::getPeptide(int, int) const ms_mascotresults.cpp: In constructor ‘ms_mascotresults_wrapper::ms_mascotresults_wrapper()’: ms_mascotresults.cpp:12: error: no matching function for call to ‘matrix_science::ms_mascotresults::ms_mascotresults()’ include/ms_mascotresults.hpp:284: note: candidates are: matrix_science::ms_mascotresults::ms_mascotresults(matrix_science::ms_mascotresfile&, unsigned int, double, int, const char*, const char*) include/ms_mascotresults.hpp:109: note: matrix_science::ms_mascotresults::ms_mascotresults(const matrix_science::ms_mascotresults&) ... 
/usr/local/boost_1_42_0/boost/python/object/value_holder.hpp: In constructor ‘boost::python::objects::value_holder<Value>::value_holder(PyObject*) [with Value = ms_mascotresults_wrapper]’: /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:137: note: synthesized method ‘ms_mascotresults_wrapper::ms_mascotresults_wrapper()’ first required here /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:137: error: cannot allocate an object of abstract type ‘ms_mascotresults_wrapper’ ms_mascotresults.cpp:12: note: since type ‘ms_mascotresults_wrapper’ has pure virtual functions So I tried to change the constructor's signature by : BOOST_PYTHON_MODULE(ms_mascotresults) { //class_<ms_mascotresults_wrapper, boost::noncopyable>("ms_mascotresults") class_<ms_mascotresults_wrapper, boost::noncopyable>("ms_mascotresults", init<ms_mascotresfile &, const unsigned int, double, int, const char *,const char *>()) .def("getPeptide", pure_virtual(&ms_mascotresults::getPeptide) ) Giving these errors : /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:66: error: cannot declare field ‘boost::python::objects::value_holder<ms_mascotresults_wrapper>::m_held’ to be of abstract type ‘ms_mascotresults_wrapper’ ms_mascotresults.cpp:12: note: because the following virtual functions are pure within ‘ms_mascotresults_wrapper’: include/ms_mascotresults.hpp:334: note: virtual matrix_science::ms_peptide matrix_science::ms_mascotresults::getPeptide(int, int) const ... ms_mascotresults.cpp:24: instantiated from here /usr/local/boost_1_42_0/boost/python/object/value_holder.hpp:137: error: no matching function for call to ‘ms_mascotresults_wrapper::ms_mascotresults_wrapper(matrix_science::ms_mascotresfile&, const unsigned int&, const double&, const int&, const char* const&, const char* const&)’ ms_mascotresults.cpp:12: note: candidates are: ms_mascotresults_wrapper::ms_mascotresults_wrapper(const ms_mascotresults_wrapper&) ms_mascotresults.cpp:12: note: ms_mascotresults_wrapper::ms_mascotresults_wrapper() If I comment the virtual function getPeptide in the .hpp, it builds perfectly with this constructor : class_<ms_mascotresults>("ms_mascotresults", init<ms_mascotresfile &, const unsigned int, double, int, const char *,const char *>() ) So I'm a bit lost...
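
    For what it's worth, a sketch of the changes the two error batches point at (assuming the question's msparser.hpp): the override must be const to match the pure virtual (otherwise the wrapper itself stays abstract), it should forward both arguments in a single call and return the result, and the wrapper needs a constructor forwarding to ms_mascotresults' only constructor, exposed with init<...> as in the second attempt.

        #include <boost/python.hpp>
        #include "msparser.hpp"
        using namespace matrix_science;

        struct ms_mascotresults_wrapper
            : ms_mascotresults, boost::python::wrapper<ms_mascotresults>
        {
            // ms_mascotresults has no default constructor, so forward one that matches it
            ms_mascotresults_wrapper(ms_mascotresfile& resfile, unsigned int flags,
                                     double minProbability, int maxHitsToReport,
                                     const char* unigeneIndexFile, const char* singleHit = 0)
                : ms_mascotresults(resfile, flags, minProbability, maxHitsToReport,
                                   unigeneIndexFile, singleHit) {}

            // const, one call, both arguments, and the result is returned
            ms_peptide getPeptide(const int q, const int p) const
            {
                return this->get_override("getPeptide")(q, p);
            }
        };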

    Read the article

  • Boost Serialization Library upgrade

    - by Konstantin
    Hello! How do I know that I can safely upgrade Boost Serialization Library on a production system without breaking compatibility with the existing data ? Is there any test that I should perform in order to be sure that all data stored in the binary format by previous version of the library will be successfully read by the new one ? Does Boost Serialization library itself guarantee some sort of compatibility between versions ?

    Read the article

  • Simplest way to get current time in current timezone using boost::date_time ?

    - by timday
    If I do date +%H-%M-%S on the commandline (Debian/Lenny), I get a user-friendly (not UTC, not DST-less, the time a normal person has on their wristwatch) time printed. What's the simplest way to obtain the same thing with boost::date_time? If I do this: std::ostringstream msg; boost::local_time::local_date_time t = boost::local_time::local_sec_clock::local_time( boost::local_time::time_zone_ptr() ); boost::local_time::local_time_facet* lf( new boost::local_time::local_time_facet("%H-%M-%S") ); msg.imbue(std::locale(msg.getloc(),lf)); msg << t; Then msg.str() is an hour earlier than the time I want to see. I'm not sure whether this is because it's showing UTC or local timezone time without a DST correction (I'm in the UK). What's the simplest way to modify the above to yield the DST-corrected local timezone time? I have an idea it involves boost::date_time::c_local_adjustor but can't figure it out from the examples.
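
    A minimal sketch of the simplest route I know of (plain posix_time, no local_time machinery): second_clock::local_time() already applies the C library's time-zone and DST rules, much like date(1) does; the c_local_adjustor mentioned in the question does the same job when starting from a UTC ptime (boost::date_time::c_local_adjustor<ptime>::utc_to_local).

        #include <boost/date_time/posix_time/posix_time.hpp>
        #include <iostream>
        #include <locale>
        #include <sstream>

        int main()
        {
            namespace pt = boost::posix_time;

            pt::ptime now = pt::second_clock::local_time();   // local wall-clock time, DST included

            std::ostringstream msg;
            pt::time_facet* lf = new pt::time_facet("%H-%M-%S");
            msg.imbue(std::locale(msg.getloc(), lf));
            msg << now;
            std::cout << msg.str() << std::endl;
            return 0;
        }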

    Read the article

  • Parsing string, with Boost Spirit 2, to fill data in user defined struct

    - by Surya
    I'm using the Boost.Spirit that was distributed with Boost 1.42.0, with VS2005. My problem is this: I have a string which is delimited with commas. The first 3 fields are strings and the rest are numbers, like this: String1,String2,String3,12.0,12.1,13.0,13.1,12.4 My rules are like this: qi::rule<string::iterator, qi::skip_type> stringrule = *(char_ - ',') ; qi::rule<string::iterator, qi::skip_type> myrule= repeat(3)[*(char_ - ',') >> ','] >> (double_ % ',') ; I'm trying to store the data in a structure like this. struct MyStruct { vector<string> stringVector ; vector<double> doubleVector ; } ; MyStruct var ; I've wrapped it in BOOST_FUSION_ADAPT_STRUCT to use it with Spirit. BOOST_FUSION_ADAPT_STRUCT (MyStruct, (vector<string>, stringVector) (vector<double>, doubleVector)) My parse function parses the line and returns true, and after qi::phrase_parse (iterBegin, iterEnd, myrule, boost::spirit::ascii::space, var) ; I'm expecting var.stringVector and var.doubleVector to be properly filled, but that is not the case. What is going wrong? Thanks in advance, Surya
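
    One likely culprit (a sketch under that assumption, not a verified diagnosis for this exact setup): the rules never declare a synthesized attribute, so nothing propagates into var, and the declared skipper type has to match the skipper passed to phrase_parse. A self-contained version with explicit attribute types:

        #include <string>
        #include <vector>
        #include <boost/fusion/include/adapt_struct.hpp>
        #include <boost/spirit/include/qi.hpp>

        namespace qi    = boost::spirit::qi;
        namespace ascii = boost::spirit::ascii;

        struct MyStruct
        {
            std::vector<std::string> stringVector;
            std::vector<double>      doubleVector;
        };

        BOOST_FUSION_ADAPT_STRUCT(
            MyStruct,
            (std::vector<std::string>, stringVector)
            (std::vector<double>,      doubleVector))

        int main()
        {
            std::string input = "String1,String2,String3,12.0,12.1,13.0,13.1,12.4";
            std::string::iterator iterBegin = input.begin(), iterEnd = input.end();

            // each rule declares the attribute it synthesizes
            qi::rule<std::string::iterator, std::string(), ascii::space_type> string_field =
                +(qi::char_ - ',');
            qi::rule<std::string::iterator, MyStruct(), ascii::space_type> line =
                qi::repeat(3)[string_field >> ','] >> (qi::double_ % ',');

            MyStruct var;
            bool ok = qi::phrase_parse(iterBegin, iterEnd, line, ascii::space, var);
            (void)ok;   // var.stringVector now has 3 entries, var.doubleVector has 5
            return 0;
        }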

    Read the article
