Search Results

Search found 1988 results on 80 pages for 'boost spirit karma'.


  • How do I count key collisions when using boost::unordered_map?

    - by Nikhil
    I have a data structure with 15 unsigned longs, and I have defined a hash function using hash_combine as follows:

        friend std::size_t hash_value(const TUPLE15& given)
        {
            std::size_t seed = 0;
            boost::hash_combine(seed, val1);
            boost::hash_combine(seed, val2);
            ...
            return seed;
        }

    I insert a large number of values into a boost::unordered_map, but the performance is not good enough. I could probably do better with an alternative hash function. To confirm this, I need to check how many collisions I am getting. How do I do this?
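
    One way to check this without touching the hash function, assuming the populated container is a boost::unordered_map (the template function below and its report format are just an illustration): boost::unordered_map exposes the standard bucket interface, so you can count how many entries end up sharing a bucket.

        #include <cstddef>
        #include <iostream>
        #include <boost/unordered_map.hpp>

        // Counts entries that share a bucket; a rough proxy for hash quality.
        template <typename Map>
        void report_collisions(const Map& m)
        {
            std::size_t used = 0, colliding = 0;
            for (std::size_t b = 0; b < m.bucket_count(); ++b)
            {
                std::size_t s = m.bucket_size(b);
                if (s > 0) ++used;
                if (s > 1) colliding += s - 1;   // extra entries stuck in an occupied bucket
            }
            std::cout << "entries: " << m.size()
                      << ", buckets: " << m.bucket_count()
                      << ", used buckets: " << used
                      << ", colliding entries: " << colliding
                      << ", load factor: " << m.load_factor() << "\n";
        }

    Note that this measures bucket collisions (hash values that collide after being reduced to a bucket index), which is what actually costs you at lookup time; if you want collisions of the raw hash values themselves, you can instead insert hash_value(key) for every key into a boost::unordered_set<std::size_t> and compare its size with the map's.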

    Read the article

  • How do you manually insert options into boost.Program_options?

    - by windfinder
    I have an application that uses Boost.Program_options to store and manage its configuration options. We are currently moving away from configuration files and using database loaded configuration instead. I've written an API that reads configuration options from the database by hostname and instance name. (cool!) However, as far as I can see there is no way to manually insert these options into the boost Program_options. Has anyone used this before, any ideas? The docs from boost seem to indicate the only way to get stuff in that map is by the store function, which either reads from the command line or config file (not what I want). Basically looking for a way to manually insert the DB read values in to the map.
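
    One approach that is commonly suggested for this, sketched below with made-up option names and values: po::variables_map publicly derives from std::map<std::string, po::variable_value>, so you can insert entries yourself and then run the notifiers. Treat it as a sketch rather than an officially blessed API.

        #include <string>
        #include <utility>
        #include <boost/any.hpp>
        #include <boost/program_options.hpp>

        namespace po = boost::program_options;

        void load_options_from_db(po::variables_map& vm)
        {
            // Values already read from the database and converted to the
            // C++ types the rest of the program expects (placeholders here).
            std::string host = "db01.example";
            int port = 5432;

            // variables_map is a std::map underneath; 'false' marks the value
            // as explicitly set rather than defaulted.
            vm.insert(std::make_pair("host", po::variable_value(boost::any(host), false)));
            vm.insert(std::make_pair("port", po::variable_value(boost::any(port), false)));

            po::notify(vm);   // run notifiers, as store() + notify() would have
        }

    If you would rather keep program_options' own parsing, validation and default handling in the loop, another route is to turn the database rows into a std::vector<std::string> of "--name=value" tokens and feed them through po::command_line_parser and po::store as usual.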

    Read the article

  • Serializing network messages

    - by mtsvetkov
    I am writing a network wrapper around boost::asio and was wondering what a good and simple way to serialize my messages is. I have a message factory which can take care of dispatching the data to the correct builder, but I want to know if there are any established solutions for getting the binary data on the sender side and consequently passing the data for deserialization on the receiver end.

    Some options I've explored are: passing a pointer to a char[] to the serialize/deserialize functions (for serialize to write to, and deserialize to read from), but it's difficult to enforce buffer size this way; building on that, I decided to have the serialize function return a boost::asio::mutable_buffer, however ownership of the memory gets blurred between multiple classes, as the network wrapper needs to clean up the memory allocated by the message builder. I have also seen solutions involving streambufs and stringstreams, but manipulating binary data in terms of its string representation is something I want to avoid. Is there some sort of binary stream I can use instead?

    What I am looking for is a solution (preferably using Boost libs) that lets the message builder dictate the amount of memory allocated during serialization, and what that would look like in terms of passing the data around between the wrapper and the message factory/message builders.

    PS. Messages contain almost exclusively built-in types and PODs and form a shallow but wide hierarchy for the sake of going through a factory. Note: a link to examples of using boost::serialization for something like this would be appreciated, as I'm having difficulty figuring out the relation between it and buffers.
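
    On the boost::serialization question specifically, one established pattern (a sketch, not a complete protocol: Msg is a placeholder for one of your message types, and real code still needs framing such as a length prefix, since archives don't delimit messages) is to let boost::asio::streambuf own the memory and act as the binary stream, so whoever serializes decides how much gets allocated:

        #include <cstddef>
        #include <istream>
        #include <ostream>
        #include <boost/asio.hpp>
        #include <boost/archive/binary_iarchive.hpp>
        #include <boost/archive/binary_oarchive.hpp>

        template <typename Msg>
        void send_message(boost::asio::ip::tcp::socket& sock, const Msg& msg)
        {
            boost::asio::streambuf buf;            // grows as the archive writes
            std::ostream os(&buf);
            boost::archive::binary_oarchive oa(os);
            oa << msg;                             // Msg needs a serialize() for Boost.Serialization

            boost::asio::write(sock, buf);         // the streambuf knows its own size
        }

        template <typename Msg>
        void receive_message(boost::asio::ip::tcp::socket& sock, Msg& msg, std::size_t body_size)
        {
            boost::asio::streambuf buf;
            boost::asio::read(sock, buf.prepare(body_size));   // body_size from your length prefix
            buf.commit(body_size);

            std::istream is(&buf);
            boost::archive::binary_iarchive ia(is);
            ia >> msg;
        }

    Be aware that binary archives are not portable across compilers/architectures (endianness, padding), so for a heterogeneous wire format people typically switch to a portable archive or a hand-rolled encoding; the ownership question goes away either way, because the streambuf lives in the sending/receiving scope.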

    Read the article

  • Unlocking a mutex from a different thread (C++)

    - by dan
    I'm using the C++ boost::thread library, which in my case means I'm using pthreads. Officially, a mutex must be unlocked from the same thread which locks it, and I want the effect of being able to lock in one thread and then unlock in another. There are many ways to accomplish this. One possibility would be to write a new mutex class which allows this behavior. For example:

        class inter_thread_mutex {
            bool locked;
            boost::mutex mx;
            boost::condition_variable cv;
        public:
            void lock() {
                boost::unique_lock<boost::mutex> lck(mx);
                while (locked) cv.wait(lck);
                locked = true;
            }
            void unlock() {
                {
                    boost::lock_guard<boost::mutex> lck(mx);
                    if (!locked) error();
                    locked = false;
                }
                cv.notify_one();
            }
            // bool try_lock(); void error(); etc.
        };

    I should point out that the above code doesn't guarantee FIFO access, since if one thread calls lock() while another calls unlock(), the first thread may acquire the lock ahead of other threads which are already waiting. (Come to think of it, the boost::thread documentation doesn't appear to make any explicit scheduling guarantees for either mutexes or condition variables.) But let's just ignore that (and any other bugs) for now.

    My question is: if I decide to go this route, would I be able to use such a mutex as a model for the boost Lockable concept? For example, would anything go wrong if I use a boost::unique_lock<inter_thread_mutex> for RAII-style access, and then pass this lock to boost::condition_variable_any::wait(), etc.? On one hand I don't see why not. On the other hand, "I don't see why not" is usually a very bad way of determining whether something will work. The reason I ask is that if it turns out that I have to write wrapper classes for RAII locks and condition variables and whatever else, then I'd rather just find some other way to achieve the same effect.
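
    For what it's worth, a quick sketch of the usage the question is asking about (hedged: whether this is a good idea is exactly the open question above): boost::unique_lock only needs lock()/unlock() for basic RAII use, and condition_variable_any is specified to work with any lock type that exposes those two calls, so code like this should at least compile against the class sketched above.

        #include <boost/thread.hpp>
        // inter_thread_mutex is the class from the question.

        inter_thread_mutex itm;
        boost::condition_variable_any any_cv;
        bool ready = false;   // example shared state guarded by itm

        void consumer()
        {
            boost::unique_lock<inter_thread_mutex> lk(itm);   // calls itm.lock()/itm.unlock()
            while (!ready)
                any_cv.wait(lk);          // wait() only uses lk.unlock() and lk.lock()
            // ... use the shared state ...
        }                                 // lk unlocks itm on scope exit

    Member functions of unique_lock that are never called (try_lock, the timed variants) are never instantiated, so they only have to exist on inter_thread_mutex if something actually uses them.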

    Read the article

  • Notes from a short presentation on NodeJs

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/05/30/notes-from-a-short-presentation-on-nodejs.aspx

    I volunteered myself to give a short 30 minute presentation at a work lunch and learn on NodeJs. With my limited experience I see using Node as a great tool for build process improvement, scaffolding with yeoman, and running tests with Karma. I haven’t looked into using it as a full server or development stack. I guess I’m too stuck on IIS and Visual Studio :-). Here are my notes; they aren’t very well formatted, but I wanted to share them anyway.

    What is it? "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices."

    Why should you be interested? It's another popular tool that can help you get the job done, you can use the command prompt, and it can be run at build or release time to automate tasks.

    What are some uses?

    - https://www.npmjs.org/ - NuGet for Node packages
    - http://bower.io/ - NuGet for UI JavaScript libraries (jQuery, Bootstrap, Angular, etc)
    - http://yeoman.io/ - "Our workflow is comprised of three tools for improving your productivity and satisfaction when building a web app: yo (the scaffolding tool), grunt (the build tool) and bower (for package management)." Yeoman asks which components you want. Alternative: http://joakimbeng.eu01.aws.af.cm/slush-replacing-yeoman-with-gulp/
    - https://www.npmjs.org/package/generator-cg-angular - phantom js, less
    - git is needed for bower (http://git-scm.com/); run the installer in Windows before you can use bower, select "Run Git from the Windows Command Prompt" in the installer, and reboot (see http://stackoverflow.com/questions/20069297/bower-git-not-in-the-path-error)

        npm install -g git
        npm install -g yo
        npm install -g generator-cg-angular
        mkdir myapp
        cd myapp
        yo cg-angular
        npm install -g bower
        npm install -g grunt-cli
        yo bower
        grunt serve
        grunt test
        grunt build

    There are many generators; generator-angular is another one. I like the NuGet HotTowel-Angular from John Papa myself. IIS Node was needed for Express (prompt from WebMatrix).

    Other uses: Karma (bat to start up Karma - see below); image compression (https://www.npmjs.org/search?q=optimize+images, https://github.com/heldr/node-smushit - do it from the command line); LESS compiling; js and css combine and minification at build with Gulp for requireJS apps; a quick lightweight HTTP server ("Express"); a build pipeline with Grunt or Gulp.

    http://www.johnpapa.net/gulp-and-grunt-at-anglebrackets/ - Gulp is newer and improved over Grunt; it is supposed to be easier to use, but Grunt is more established. https://github.com/johnpapa/ng-demos/tree/master/grunt-gulp https://github.com/assetgraph/assetgraph-builder does a lot of the minimizing, combining, image optimization etc. using Node. Looks interesting.

    More links: http://nodejs.org http://nodeschool.io/ http://sub.watchmecode.net/getting-started-with-nodejs-installing-and-writing-your-first-code/ https://stormpath.com/blog/build-a-killer-node-dot-js-client-for-your-rest-plus-json-api/ https://codio.com/ http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx

    Run unit tests - Karma in msBuild. karma-start.bat:

        @echo off
        cd %~dp0\..
        REM 604800 is to make sure we only update once every 7 days
        call npm install --cache-min 604800 -g grunt-cli
        call npm install --cache-min 604800
        call npm install --cache-min 604800 -g karma-cli
        karma start UnitTests\karma.conf.js
        REM karma start UnitTests\karma.conf.js --single-run
        REM see karma-start.bat and karam.config.js
        REM jsHint comes from Nuget

    Read the article

  • Python bindings for C++ code using OpenCV giving segmentation fault

    - by lightalchemist
    I'm trying to write a Python wrapper for some C++ code that makes use of OpenCV, but I'm having difficulties returning the result, which is an OpenCV C++ Mat object, to the Python interpreter. I've looked at OpenCV's source and found the file cv2.cpp, which has conversion functions to convert between PyObject* and OpenCV's Mat. I made use of those conversion functions but got a segmentation fault when I tried to use them. I basically need some suggestions/sample code/online references on how to interface Python and C++ code that makes use of OpenCV, specifically with the ability to return OpenCV's C++ Mat to the Python interpreter, or perhaps suggestions on how/where to start investigating the cause of the segmentation fault. Currently I'm using Boost.Python to wrap the code. Thanks in advance for any replies.

    The relevant code:

        // This is the function that is giving the segmentation fault.
        PyObject* ABC::doSomething(PyObject* image)
        {
            Mat m;
            pyopencv_to(image, m);   // This line gives segmentation fault.

            // Some code to create cppObj from CPP library that uses OpenCV
            cv::Mat processedImage = cppObj->align(m);

            return pyopencv_from(processedImage);
        }

    The conversion functions taken from OpenCV's source follow. The conversion code gives a segmentation fault at the commented line with "if (!PyArray_Check(o)) ...":

        static int pyopencv_to(const PyObject* o, Mat& m, const char* name = "<unknown>", bool allowND=true)
        {
            if(!o || o == Py_None)
            {
                if( !m.data )
                    m.allocator = &g_numpyAllocator;
                return true;
            }

            if( !PyArray_Check(o) )   // Segmentation fault inside PyArray_Check(o)
            {
                failmsg("%s is not a numpy array", name);
                return false;
            }

            int typenum = PyArray_TYPE(o);
            int type = typenum == NPY_UBYTE ? CV_8U :
                       typenum == NPY_BYTE ? CV_8S :
                       typenum == NPY_USHORT ? CV_16U :
                       typenum == NPY_SHORT ? CV_16S :
                       typenum == NPY_INT || typenum == NPY_LONG ? CV_32S :
                       typenum == NPY_FLOAT ? CV_32F :
                       typenum == NPY_DOUBLE ? CV_64F : -1;

            if( type < 0 )
            {
                failmsg("%s data type = %d is not supported", name, typenum);
                return false;
            }

            int ndims = PyArray_NDIM(o);
            if(ndims >= CV_MAX_DIM)
            {
                failmsg("%s dimensionality (=%d) is too high", name, ndims);
                return false;
            }

            int size[CV_MAX_DIM+1];
            size_t step[CV_MAX_DIM+1], elemsize = CV_ELEM_SIZE1(type);
            const npy_intp* _sizes = PyArray_DIMS(o);
            const npy_intp* _strides = PyArray_STRIDES(o);
            bool transposed = false;

            for(int i = 0; i < ndims; i++)
            {
                size[i] = (int)_sizes[i];
                step[i] = (size_t)_strides[i];
            }

            if( ndims == 0 || step[ndims-1] > elemsize )
            {
                size[ndims] = 1;
                step[ndims] = elemsize;
                ndims++;
            }

            if( ndims >= 2 && step[0] < step[1] )
            {
                std::swap(size[0], size[1]);
                std::swap(step[0], step[1]);
                transposed = true;
            }

            if( ndims == 3 && size[2] <= CV_CN_MAX && step[1] == elemsize*size[2] )
            {
                ndims--;
                type |= CV_MAKETYPE(0, size[2]);
            }

            if( ndims > 2 && !allowND )
            {
                failmsg("%s has more than 2 dimensions", name);
                return false;
            }

            m = Mat(ndims, size, type, PyArray_DATA(o), step);

            if( m.data )
            {
                m.refcount = refcountFromPyObject(o);
                m.addref();   // protect the original numpy array from deallocation
                              // (since Mat destructor will decrement the reference counter)
            };
            m.allocator = &g_numpyAllocator;

            if( transposed )
            {
                Mat tmp;
                tmp.allocator = &g_numpyAllocator;
                transpose(m, tmp);
                m = tmp;
            }
            return true;
        }

        static PyObject* pyopencv_from(const Mat& m)
        {
            if( !m.data )
                Py_RETURN_NONE;
            Mat temp, *p = (Mat*)&m;
            if(!p->refcount || p->allocator != &g_numpyAllocator)
            {
                temp.allocator = &g_numpyAllocator;
                m.copyTo(temp);
                p = &temp;
            }
            p->addref();
            return pyObjectFromRefcount(p->refcount);
        }

    My Python test program:

        import pysomemodule   # My python wrapped library.
        import cv2

        def main():
            myobj = pysomemodule.ABC("faces.train")   # Create python object. This works.
            image = cv2.imread('61.jpg')
            processedImage = myobj.doSomething(image)
            cv2.imshow("test", processedImage)
            cv2.waitKey()

        if __name__ == "__main__":
            main()
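
    One cause of a crash inside PyArray_Check that is worth ruling out first (a hedged guess, not a diagnosis of this exact code): the numpy C API is a table of function pointers that import_array() fills in per module, and if it was never called in your Boost.Python module, the first PyArray_* call dereferences a null pointer. A minimal sketch of doing that initialisation; the module name, the hypothetical ABC.h header and the constructor signature are taken from the test program above and are assumptions:

        #include <boost/python.hpp>
        #include <numpy/arrayobject.h>   // after boost/python.hpp, which pulls in Python.h
        #include "ABC.h"                 // hypothetical header declaring class ABC

        // import_array() is a macro that 'return's on failure, so wrap it.
        #if PY_MAJOR_VERSION >= 3
        static void* init_numpy() { import_array(); return NULL; }
        #else
        static void init_numpy() { import_array(); }
        #endif

        BOOST_PYTHON_MODULE(pysomemodule)
        {
            init_numpy();   // fill in the numpy C-API table before any PyArray_* call

            boost::python::class_<ABC>("ABC", boost::python::init<std::string>())
                .def("doSomething", &ABC::doSomething);   // wrapped as in your existing module
        }

    If import_array() is already being called, the next things to check are that the numpy headers you compile against match the numpy the interpreter loads, and that cv2 and your module are built against the same OpenCV.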

    Read the article

  • Upcoming Webcast: Use Visual Decision Making To Boost the Pace of Product Innovation – October 24, 2013

    - by Gerald Fauteux
    See More, Do More
    Use Visual Decision Making To Boost the Pace of Product Innovation

    Join a Free Webcast hosted by Oracle, featuring QUALCOMM. Click here to register for this webcast.

    Keeping innovation ahead of shrinking product lifecycles continues to be a challenge in today’s fast-paced business environment, but new visualization techniques in the product design and development process are helping businesses widen the gap further. Innovative visualization methods, including Augmented Business Visualization, can be powerful differentiators for business leaders, especially when it comes to accelerating product cycles. Don’t miss this opportunity to discover how visualization tied to PLM can help empower visual decision making and enhance productivity across your organization. See more and do more with the power of Oracle.

    Join solution experts from Oracle and special guest Ravi Sankaran, Sr. Staff Systems Analyst, QUALCOMM, to discuss how visual decision making can help efficiently ramp innovation efforts throughout the product lifecycle:

    - Advance collaboration with universal access across all document types, with robust security measures in place
    - Synthesize product information quickly (cost, quality, compliance, etc.) in a highly visual form, from multiple sources, in a single visual and actionable environment
    - Increase productivity by rendering documents in the appropriate context of specific business processes
    - Drive modern business transformation with new collaboration methods such as Augmented Business Visualization

    Date: Thursday, October 24, 2013
    Time: 10:00 a.m. PDT / 1:00 p.m. EDT

    Click here to register for this FREE event

    Read the article

  • Why isn't the reference counter in boost::shared_ptr volatile?

    - by Johann Gerell
    In the boost::shared_ptr destructor, this is done:

        if(--*pn == 0)
        {
            boost::checked_delete(px);
            delete pn;
        }

    where pn is a pointer to the reference counter, which is typedefed as

        shared_ptr::count_type -> detail::atomic_count -> long

    I would have expected the long to be volatile long, given threaded usage and the non-atomic 0-check-and-deletion in the shared_ptr destructor above. Why isn't it volatile?
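
    As a side note on why volatile would not be the fix (a sketch of the idea, not Boost's actual source): in multithreaded configurations the decrement-and-test is a single atomic read-modify-write, and that atomicity, not volatility, is what makes the zero check safe; volatile guarantees neither atomicity nor inter-thread ordering. The same idiom in present-day C++ looks like:

        #include <atomic>

        struct counted
        {
            std::atomic<long> use_count;
        };

        // Sketch of a release step; px and pn play the same roles as in the question.
        template <typename T>
        void release(T* px, counted* pn)
        {
            // fetch_sub returns the previous value, so "decrement and test for
            // zero" happens as one atomic operation, with acquire/release ordering.
            if (pn->use_count.fetch_sub(1, std::memory_order_acq_rel) == 1)
            {
                delete px;   // we were the last owner
                delete pn;
            }
        }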

    Read the article

  • How to fit a custom graph to the boost graph library template?

    - by Michael
    I'm rusty on C++ templates and I'm using the boost graph library (a fatal combination). I've searched the web and can't find any direct instructions on how to take a custom graph structure and adapt enough of it to the BGL (boost graph library) interface that I can use Boost's graph-traversal algorithms on it. Is anyone familiar enough with the library to help me out?
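
    The general recipe, sketched very loosely below (MyGraph, MyNode and the placeholder iterator are inventions, and the exact set of typedefs and free functions you must supply depends on which BGL concepts the algorithms you call require), is to specialise boost::graph_traits for your type and then provide the matching free functions so the algorithms can see your structure as a graph:

        #include <cstddef>
        #include <utility>
        #include <boost/graph/graph_traits.hpp>

        class MyNode;    // your existing types
        class MyGraph;

        namespace boost {
            template <>
            struct graph_traits<MyGraph>
            {
                // Core associated types.
                typedef MyNode*                     vertex_descriptor;
                typedef std::pair<MyNode*, MyNode*> edge_descriptor;
                typedef directed_tag                directed_category;
                typedef disallow_parallel_edge_tag  edge_parallel_category;
                typedef incidence_graph_tag         traversal_category;   // pick what you can support

                // Needed once you model IncidenceGraph (used by BFS/DFS):
                typedef const edge_descriptor*      out_edge_iterator;    // placeholder iterator type
                typedef std::size_t                 degree_size_type;
            };
        }

        // Plus free functions the algorithms look up, e.g.:
        //   vertex_descriptor source(edge_descriptor e, const MyGraph& g);
        //   vertex_descriptor target(edge_descriptor e, const MyGraph& g);
        //   std::pair<out_edge_iterator, out_edge_iterator> out_edges(vertex_descriptor v, const MyGraph& g);
        //   degree_size_type out_degree(vertex_descriptor v, const MyGraph& g);

    The BGL documentation's "How to Convert Existing Graphs to the BGL" section walks through this kind of adaptation for an existing graph class.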

    Read the article

  • How to condense a path in C++ using Boost?

    - by pdillon3
    Does Boost offer a simple way to condense a path such as /foo/bar/../bar or /foo/../. into the absolute path it refers to?

        /foo/bar/../bar  ->  /foo/bar
        /foo/../.        ->  /

    The goal is to combine base_path and rel_path with boost::filesystem::complete(rel_path, base_path) into a path like /foo/bar/with/no/dots. Thanks.
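
    If your Boost version doesn't have anything built in, a purely lexical cleanup is easy to hand-roll (a sketch, assuming Boost.Filesystem v3: it removes "." and ".." textually and does not resolve symlinks, which is sometimes exactly what you want and sometimes not):

        #include <boost/filesystem.hpp>

        namespace fs = boost::filesystem;

        fs::path condense(const fs::path& p)
        {
            fs::path result;
            for (fs::path::iterator it = p.begin(); it != p.end(); ++it)
            {
                if (*it == ".")
                    continue;                       // drop "." components
                else if (*it == ".." && !result.empty() && result.filename() != "..")
                    result.remove_filename();       // ".." cancels the previous component
                else
                    result /= *it;
            }
            return result;
        }

        // e.g. condense("/foo/bar/../bar") == "/foo/bar", condense("/foo/../.") == "/"

    Depending on how new your Boost is, boost::filesystem::canonical() (which also resolves symlinks but requires the path to exist) or path::lexically_normal() may already do what you need.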

    Read the article

  • C++ boost mpl vector

    - by Gokul
    I understand that the following code won't work, as i is a runtime parameter and not a compile-time parameter, but I want to know whether there is a way to achieve the same effect. I have a list of classes and I need to call a template function with each of these classes.

        void GucTable::refreshSessionParams()
        {
            typedef boost::mpl::vector< SessionXactDetails, SessionSchemaInfo > SessionParams;

            for( int i = 0; i < boost::mpl::size<SessionParams>::value; ++i )
            {
                boost::mpl::at<SessionParams, i>::type* sparam =
                    g_getSessionParam< boost::mpl::at<SessionParams, i>::type >();
                sparam->updateFromGucTable(this);
            }
        }

    Can someone suggest an easy and elegant way to do this? I need to iterate through the mpl::vector, use each type to call a global function, and then use that parameter to do some run-time operations. Thanks in advance, Gokul.

    Working code:

        typedef boost::mpl::vector< SessionXactDetails, SessionSchemaInfo > SessionParams;

        class GucSessionIterator
        {
        private:
            GucTable& m_table;

        public:
            GucSessionIterator(GucTable& table)
                : m_table(table)
            {
            }

            template< typename U >
            void operator()(const U& )
            {
                g_getSessionParam<U>()->updateFromGucTable(m_table);
            }
        };

        void GucTable::refreshSessionParams()
        {
            boost::mpl::for_each< SessionParams >( GucSessionIterator(*this) );
            return;
        }

    Read the article

  • g++: Use ZIP files as input

    - by Notinlist
    We have the Boost library in our source tree. It consists of a huge number of files which never change, and only a tiny portion of it is used. We swap the whole boost directory if we are changing versions. Currently we have the Boost sources in our SVN, file by file, which makes the checkout operations very slow, especially on Windows. It would be nice if there were a notation / plugin to address C++ files inside ZIP files, something like:

        // @ZIPFS ASSIGN 'boost' 'boost.zip/boost'
        #include <boost/smart_ptr/shared_ptr.hpp>

    Is there any support for compiler hooks in g++? Is there any effort regarding ZIP support? Other ideas?

    Read the article

  • VC9 C1083 Cannot open include file: 'boost...' after trying to abstract an include dependency

    - by ronivek
    Hey, so I've been working on a project for the past number of weeks and it uses a number of Boost libraries. In particular, I'm using the boost::dynamic_bitset library quite extensively. I've had zero issues up until now, but tonight I discovered a dependency between some includes which I had to resolve, and I tried to do so by providing an abstract callback class. Effectively I now have the following.

    First include...

        class OtherClassCallback {
        public:
            virtual int someOtherMethod() const = 0;
        };

        class SomeClass {
        public:
            void someMethod(OtherClassCallback *oc) {
                ...
                oc->someOtherMethod();
                ...
            }
        };

    Second include...

        #include "SomeClass.h"

        class SomeOtherClass : public OtherClassCallback {
        public:
            int someOtherMethod() const { return this->someInt; }
        };

    Here is the issue: ever since I implemented this class I'm now getting the following error:

        fatal error C1083: Cannot open include file: 'boost/dynamic_bitset/dynamic_bitset.hpp': No such file or directory

    Now I'm getting no other compiler errors, and it's a pretty substantial project. My include paths and so on are perfect, my files are fully accessible, and removing the changes fixes the issue. Does anyone have any idea what might be going on? I'm compiling to native Windows executables in VS9. I should confess that I'm very inexperienced with C++ in general, so go easy on me if it's something horribly straightforward; I can't figure it out.

    Read the article

  • When does an asio timer go out of scope?

    - by ApplePieIsGood
    What I mean is, let's say you do an async_wait on an asio timer and bind the update to a function that takes a reference to a type T. Let's say you created the T initially on the stack before passing it to async_wait. At the end of that async_wait, it calls async_wait itself, renewing the timer over and over. Does that stack allocated type T stay alive until the first time the timer doesn't renew itself, or after the first invocation of the function will the T go out of scope?
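
    As a general rule (hedged, since the exact code isn't shown here): async_wait never extends the lifetime of anything bound by reference; the handler only keeps alive what it owns, so a stack-allocated T bound by reference is gone as soon as the frame that created it returns, no matter how many times the timer re-arms itself. The usual way to make the lifetime question disappear is to put the state and the timer in one object owned by shared_ptr and bind shared_from_this() into every handler; a sketch using old-style Boost.Asio names (io_service, deadline_timer):

        #include <iostream>
        #include <boost/asio.hpp>
        #include <boost/bind.hpp>
        #include <boost/enable_shared_from_this.hpp>
        #include <boost/make_shared.hpp>

        class ticker : public boost::enable_shared_from_this<ticker>
        {
        public:
            explicit ticker(boost::asio::io_service& io)
                : timer_(io, boost::posix_time::seconds(1)) {}

            void start()
            {
                // Each pending wait holds a shared_ptr to *this, so the object
                // (and any state it owns) stays alive as long as the chain runs.
                timer_.async_wait(boost::bind(&ticker::on_tick, shared_from_this(),
                                              boost::asio::placeholders::error));
            }

        private:
            void on_tick(const boost::system::error_code& ec)
            {
                if (ec) return;   // e.g. operation_aborted on cancellation
                std::cout << "tick\n";
                timer_.expires_at(timer_.expires_at() + boost::posix_time::seconds(1));
                start();          // re-arm; keeps *this alive for the next wait
            }

            boost::asio::deadline_timer timer_;
        };

        int main()
        {
            boost::asio::io_service io;
            boost::make_shared<ticker>(io)->start();   // no named owner needed
            io.run();                                  // runs until interrupted
        }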

    Read the article

  • Cumulative +1/-1 Cointoss crashes on 1000 iterations. Please advise; c++ boost random libraries

    - by user1731972
    following some former advice Multithreaded application, am I doing it right? I think I have a threadsafe number generator using boost, but my program crashes when I input 1000 iterations. The output .csv file when graphed looks right, but I'm not sure why it's crashing. It's using _beginthread, and everyone is telling me I should use the more (convoluted) _beingthreadex, which I'm not familiar with. If someone could recommend an example, I would greatly appreciate it. Also... someone pointed out I should be applying a second parameter to my _beginthread for the array counting start positions, but I have no idea how to pass more than one parameter, other than attempting to use a structure, and I've read structure's and _beginthread don't get along (although, I could just use the boost threads...) #include <process.h> #include <windows.h> #include <iostream> #include <fstream> #include <time.h> #include <random> #include <boost/random.hpp> //for srand48_r(time(NULL), &randBuffer); which doesn't work #include <stdio.h> #include <stdlib.h> //#include <thread> using namespace std; using namespace boost; using namespace boost::random; void myThread0 (void *dummy ); void myThread1 (void *dummy ); void myThread2 (void *dummy ); void myThread3 (void *dummy ); //for random seeds void initialize(); //from http://stackoverflow.com/questions/7114043/random-number-generation-in-c11-how-to-generate-how-do-they-work uniform_int_distribution<> two(1,2); typedef std::mt19937 MyRNG; // the Mersenne Twister with a popular choice of parameters uint32_t seed_val; // populate somehow MyRNG rng1; // e.g. keep one global instance (per thread) MyRNG rng2; // e.g. keep one global instance (per thread) MyRNG rng3; // e.g. keep one global instance (per thread) MyRNG rng4; // e.g. keep one global instance (per thread) //only needed for shared variables //CRITICAL_SECTION cs1,cs2,cs3,cs4; // global int main() { ofstream myfile; myfile.open ("coinToss.csv"); int rNum; long numRuns; long count = 0; int divisor = 1; float fHolder = 0; long counter = 0; float percent = 0.0; //? //unsigned threadID; //HANDLE hThread; initialize(); HANDLE hThread[4]; const int size = 100000; int array[size]; printf ("Runs (uses multiple of 100,000) "); cin >> numRuns; for (int a = 0; a < numRuns; a++) { hThread[0] = (HANDLE)_beginthread( myThread0, 0, (void*)(array) ); hThread[1] = (HANDLE)_beginthread( myThread1, 0, (void*)(array) ); hThread[2] = (HANDLE)_beginthread( myThread2, 0, (void*)(array) ); hThread[3] = (HANDLE)_beginthread( myThread3, 0, (void*)(array) ); //waits for threads to finish before continuing WaitForMultipleObjects(4, hThread, TRUE, INFINITE); //closes handles I guess? CloseHandle( hThread[0] ); CloseHandle( hThread[1] ); CloseHandle( hThread[2] ); CloseHandle( hThread[3] ); //dump array into calculations //average array into fHolder //this could be split into threads as well for (int p = 0; p < size; p++) { counter += array[p] == 2 ? 
1 : -1; //cout << array[p] << endl; //cout << counter << endl; } //this fHolder calculation didn't work //fHolder = counter / size; //so I had to use this cout << counter << endl; fHolder = counter; fHolder = fHolder / size; myfile << fHolder << endl; } } void initialize() { //seed value needs to be supplied //rng1.seed(seed_val*1); rng1.seed((unsigned int)time(NULL)); rng2.seed(((unsigned int)time(NULL))*2); rng3.seed(((unsigned int)time(NULL))*3); rng4.seed(((unsigned int)time(NULL))*4); }; void myThread0 (void *param) { //EnterCriticalSection(&cs1); //aquire the critical section object int *i = (int *)param; for (int x = 0; x < 25000; x++) { //doesn't work, part of merssene twister //i[x] = next(); i[x] = two(rng1); //original srand //i[x] = rand() % 2 + 1; //doesn't work for some reason. //uint_dist2(rng); //i[x] = qrand() % 2 + 1; //cout << i[x] << endl; } //LeaveCriticalSection(&cs1); // release the critical section object } void myThread1 (void *param) { //EnterCriticalSection(&cs2); //aquire the critical section object int *i = (int *)param; for (int x = 25000; x < 50000; x++) { //param[x] = rand() % 2 + 1; i[x] = two(rng2); //i[x] = rand() % 2 + 1; //cout << i[x] << endl; } //LeaveCriticalSection(&cs2); // release the critical section object } void myThread2 (void *param) { //EnterCriticalSection(&cs3); //aquire the critical section object int *i = (int *)param; for (int x = 50000; x < 75000; x++) { i[x] = two(rng3); //i[x] = rand() % 2 + 1; //cout << i[x] << endl; } //LeaveCriticalSection(&cs3); // release the critical section object } void myThread3 (void *param) { //EnterCriticalSection(&cs4); //aquire the critical section object int *i = (int *)param; for (int x = 75000; x < 100000; x++) { i[x] = two(rng4); //i[x] = rand() % 2 + 1; //cout << i[x] << endl; } //LeaveCriticalSection(&cs4); // release the critical section object }
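
    On the "how do I pass more than one parameter" part, a hedged sketch (the function and variable names are inventions, and it swaps the raw array for a std::vector purely for convenience): boost::thread, which the project already links against, forwards extra arguments to the callable, so the results, the index range and a per-thread generator can all be passed without a parameter struct or _beginthreadex:

        #include <cstddef>
        #include <vector>
        #include <boost/bind.hpp>
        #include <boost/random.hpp>
        #include <boost/ref.hpp>
        #include <boost/thread.hpp>

        typedef boost::mt19937 MyRNG;

        // One worker fills results[begin, end) from its own generator.
        void fill_range(std::vector<int>& results, std::size_t begin, std::size_t end, MyRNG& rng)
        {
            boost::random::uniform_int_distribution<> two(1, 2);
            for (std::size_t i = begin; i < end; ++i)
                results[i] = two(rng);
        }

        void run_one_pass(std::vector<int>& results, MyRNG* rngs)   // rngs: the four seeded generators
        {
            const std::size_t quarter = results.size() / 4;
            boost::thread_group pool;
            for (int t = 0; t < 4; ++t)
                pool.create_thread(boost::bind(&fill_range,
                                               boost::ref(results),
                                               t * quarter, (t + 1) * quarter,
                                               boost::ref(rngs[t])));
            pool.join_all();   // replaces WaitForMultipleObjects + CloseHandle
        }

    If you do stay with _beginthreadex, the standard trick is the struct you mentioned: pack the array pointer, the start index and the generator pointer into one struct, pass its address as the void* argument, and cast it back inside the thread function.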

    Read the article

  • functional, bind1st and mem_fun

    - by Neil G
    Why won't this compile? #include <functional> #include <boost/function.hpp> class A { A() { typedef boost::function<void ()> FunctionCall; FunctionCall f = std::bind1st(std::mem_fun(&A::process), this); } void process() {} }; Errors: In file included from /opt/local/include/gcc44/c++/bits/stl_function.h:712, from /opt/local/include/gcc44/c++/functional:50, from a.cc:1: /opt/local/include/gcc44/c++/backward/binders.h: In instantiation of 'std::binder1st<std::mem_fun_t<void, A> >': a.cc:7: instantiated from here /opt/local/include/gcc44/c++/backward/binders.h:100: error: no type named 'second_argument_type' in 'class std::mem_fun_t<void, A>' /opt/local/include/gcc44/c++/backward/binders.h:103: error: no type named 'first_argument_type' in 'class std::mem_fun_t<void, A>' /opt/local/include/gcc44/c++/backward/binders.h:106: error: no type named 'first_argument_type' in 'class std::mem_fun_t<void, A>' /opt/local/include/gcc44/c++/backward/binders.h:111: error: no type named 'second_argument_type' in 'class std::mem_fun_t<void, A>' /opt/local/include/gcc44/c++/backward/binders.h:117: error: no type named 'second_argument_type' in 'class std::mem_fun_t<void, A>' /opt/local/include/gcc44/c++/backward/binders.h: In function 'std::binder1st<_Operation> std::bind1st(const _Operation&, const _Tp&) [with _Operation = std::mem_fun_t<void, A>, _Tp = A*]': a.cc:7: instantiated from here /opt/local/include/gcc44/c++/backward/binders.h:126: error: no type named 'first_argument_type' in 'class std::mem_fun_t<void, A>' In file included from /opt/local/include/boost/function/detail/maybe_include.hpp:13, from /opt/local/include/boost/function/detail/function_iterate.hpp:14, from /opt/local/include/boost/preprocessor/iteration/detail/iter/forward1.hpp:47, from /opt/local/include/boost/function.hpp:64, from a.cc:2: /opt/local/include/boost/function/function_template.hpp: In static member function 'static void boost::detail::function::void_function_obj_invoker0<FunctionObj, R>::invoke(boost::detail::function::function_buffer&) [with FunctionObj = std::binder1st<std::mem_fun_t<void, A> >, R = void]': /opt/local/include/boost/function/function_template.hpp:913: instantiated from 'void boost::function0<R>::assign_to(Functor) [with Functor = std::binder1st<std::mem_fun_t<void, A> >, R = void]' /opt/local/include/boost/function/function_template.hpp:722: instantiated from 'boost::function0<R>::function0(Functor, typename boost::enable_if_c<boost::type_traits::ice_not::value, int>::type) [with Functor = std::binder1st<std::mem_fun_t<void, A> >, R = void]' /opt/local/include/boost/function/function_template.hpp:1064: instantiated from 'boost::function<R()>::function(Functor, typename boost::enable_if_c<boost::type_traits::ice_not::value, int>::type) [with Functor = std::binder1st<std::mem_fun_t<void, A> >, R = void]' a.cc:7: instantiated from here /opt/local/include/boost/function/function_template.hpp:153: error: no match for call to '(std::binder1st<std::mem_fun_t<void, A> >) ()'

    Read the article

  • How can I boost the volume of my Video

    - by Sunny Shah.
    I have a video that I need to pass on to some of my friends but it has very low audio volume. How can I boost the audio volume in this video so that it has a similar level as my other videos? Is there a video converter that can boost the audio volume?

    Read the article

  • Anyone have an XSL to convert Boost.Test XML logs to a presentable format?

    - by Stuart Lange
    I have some C++ projects running through cruisecontrol.net. As a part of the build process, we compile and run Boost.Test unit test suites. I have these configured to dump XML log files. While the format is similar to JUnit/NUnit, it's not quite the same (and lacks some information), so cruisecontrol.net is unable to pick them up. I am wondering if anyone has created (or knows of) an existing XSL transform that will convert Boost.Test results to JUnit/NUnit format, or alternatively, directly to a presentable (html) format. Thanks!

    Read the article

  • How can I convert Perl regular expressions to boost regular expressions?

    - by YY
    I'm not familiar with Perl or Boost regular expressions, and I want to convert some Perl code to C++. I want to convert particular Perl regular expressions to C++ using the Boost.Regex library. Please help me understand what I must do. Here are some regexps that a word of a sentence may match:

        if ($word =~ /^[\.:\,()\'\`-]/) {
            # hack for punctuation
        }
        if ($word =~ /^[A-Z]/) {
            return;
        }
        if ($word =~ /[A-Za-z0-9]+\-[A-Za-z0-9]+/) {
            # all hyphenated words
            return;
        }
        if ($word =~ /.*[0-9].*/) {
            # all numbers
            return;
        }
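
    A hedged sketch of what the direct translation tends to look like with Boost.Regex (word is assumed to be a std::string; regex_search matches anywhere in the string, like an unanchored Perl =~):

        #include <string>
        #include <boost/regex.hpp>

        void classify(const std::string& word)
        {
            static const boost::regex punct("^[\\.:,()'`-]");                  // hack for punctuation
            static const boost::regex capital("^[A-Z]");
            static const boost::regex hyphenated("[A-Za-z0-9]+-[A-Za-z0-9]+"); // all hyphenated words
            static const boost::regex number(".*[0-9].*");                     // all numbers

            if (boost::regex_search(word, punct))      { /* punctuation hack */ }
            if (boost::regex_search(word, capital))    { return; }
            if (boost::regex_search(word, hyphenated)) { return; }
            if (boost::regex_search(word, number))     { return; }
        }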

    Read the article

  • C++ Boost ASIO: how to read/write with a timeout?

    - by Stéphane
    From reading other Stack Overflow entries and the boost::asio documentation, I've confirmed that there are no synchronous asio read/write calls that also provide an easy-to-use timeout as a parameter to the call. I'm in the middle of converting an old-school Linux socket app that uses select(2) calls with timeouts, and I need to do more or less the same. So what is the best way to do this in boost::asio? Looking at the asio documentation, there are many confusing examples of various things to do with timers, but I'm still quite confused. I'd love to see a simple-to-read example of this: read from a socket, but wait for a maximum of X seconds, after which the function either returns with nothing or returns with whatever it was able to read from the socket before the timeout expired.
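
    There isn't a one-liner in the library for this, but the usual shape (a sketch of the common "async underneath, blocking on the outside" idiom; error handling and partial-read policy are simplified, and it assumes this io_service isn't being run concurrently elsewhere) is to race an async read against a deadline_timer and cancel the loser:

        #include <cstddef>
        #include <boost/asio.hpp>

        namespace asio = boost::asio;

        // Returns bytes read, or 0 if the timeout won the race.
        std::size_t read_with_timeout(asio::io_service& io,
                                      asio::ip::tcp::socket& sock,
                                      asio::mutable_buffers_1 buffer,
                                      int seconds)
        {
            asio::deadline_timer timer(io, boost::posix_time::seconds(seconds));

            boost::system::error_code read_ec;
            std::size_t n = 0;

            // Whichever operation finishes first cancels the other.
            sock.async_read_some(buffer,
                [&](const boost::system::error_code& ec, std::size_t bytes)
                { read_ec = ec; n = bytes; timer.cancel(); });

            timer.async_wait(
                [&](const boost::system::error_code& ec)
                { if (!ec) sock.cancel(); });   // timer really expired -> abort the read

            io.reset();   // in case this io_service has been run before
            io.run();     // blocks until both handlers have run

            return read_ec ? 0 : n;
        }

        // Usage sketch:
        //   char data[512];
        //   std::size_t got = read_with_timeout(io, sock, asio::buffer(data), 5);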

    Read the article

  • How can I get cmake to find my boost installation

    - by BD at Rivenhill
    I have installed the most recent version of boost in /usr/local (with includes in /usr/local/boost and libraries in /usr/local/lib/boost) and I am now attempting to install Wt from source, but cmake (version 2.6) can't seem to find the boost installation. It tries to give helpful suggestions about setting BOOST_DIR and Boost_LIBRARYDIR, but I haven't been able to get it to work by tweaking these variables. The most recent error message that I get is that it can't find the libraries, but it seems to indicate that it is using "/usr/local/include" for the include path, which isn't correct (and I can't seem to fix it). Does anybody have a solution for this off the top of their head, or do I need to go mucking around inside cmake to figure it out?

    Read the article

  • Lucene boost: I need to make it work better

    - by zvikico
    I'm using Lucene to index components with names and types. Some components are more important and thus get a bigger boost. However, I cannot get my boost to work properly. I still get some components appearing later (getting a worse score), even though they have a higher boost. Note that the indexing is done on one field only, and I've set the boost on that field alone. I'm using Lucene in Java. I don't think it has anything to do with the field length: I've seen components with the same name (but a different type) get the wrong score.

    Read the article
