Search Results

Search found 2238 results on 90 pages for 'boost variant'.

Page 27/90

  • Preparing for the next C++ standard

    - by Neil Butterworth
    The spate of questions regarding BOOST_FOREACH prompts me to ask users of the Boost library what (if anything) they are doing to prepare their code for portability to the proposed new C++ standard (aka C++0x). For example, do you write code like this if you use shared_ptr: #ifdef CPPOX #include <memory> #else #include "boost/shared_ptr.hpp" #endif There is also the namespace issue - in the future, shared_ptr will be part of the std namespace - how do you deal with that? I'm interested in these questions because I've decided to bite the bullet and start learning Boost seriously, and I'd like to use best practices in my code. Not exactly a flood of answers - does this mean it's a non-issue? Anyway, thanks to those that replied; I'm accepting jalf's answer because I like being advised to do nothing!
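
    One way to keep such code portable, shown here as a minimal sketch assuming a project-defined HAVE_STD_SHARED_PTR switch rather than the question's CPPOX macro, is to funnel the choice through a single alias namespace so the rest of the code never spells boost:: or std:: directly:

```cpp
// Minimal sketch, not the poster's code: HAVE_STD_SHARED_PTR is an assumed
// project-level switch. Client code only ever names compat::shared_ptr.
#ifdef HAVE_STD_SHARED_PTR
  #include <memory>
  namespace compat { using std::shared_ptr; }
#else
  #include "boost/shared_ptr.hpp"
  namespace compat { using boost::shared_ptr; }
#endif

#include <iostream>

int main() {
    compat::shared_ptr<int> p(new int(42));
    std::cout << *p << '\n';
    return 0;
}
```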

    Read the article

  • Released object crashes app

    - by John Smith
    I am using Objective-C++ (+Boost) for iPhone development. I am in a rather tight loop and need to allocate and release a certain object. The code is something like this. for (int i=0;i<100;i++) { opt = [[FObj alloc] init]; //do stuff with opt [opt release]; } The FObj object is something like @interface FObj MyCPPObj * cppobj; @end In the implementation of FObj there is a dealloc method: -(void) dealloc { delete cppobj; //previously allocated with 'new' [super dealloc]; } I am afraid that if I don't release then the 'MyCPPObj's will just pile up. But releasing makes the app crash after the first loop. What am I doing wrong? Or perhaps should I make cppobj a boost::shared_ptr? (Do Boost shared pointers automatically release their objects when an Objective-C++ object is deleted?)
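
    A pure-C++ sketch of the ownership idea raised at the end (names borrowed from the question, everything else invented): a boost::shared_ptr member releases what it owns when the enclosing object is destroyed, so no manual delete is needed. Whether an Objective-C++ ivar's destructor actually runs on dealloc depends on compiler settings, which this sketch leaves aside.

```cpp
// Sketch only: shows shared_ptr member ownership in plain C++, not the
// Objective-C++ lifecycle from the question.
#include <boost/shared_ptr.hpp>
#include <iostream>

struct MyCPPObj {
    MyCPPObj()  { std::cout << "MyCPPObj created\n"; }
    ~MyCPPObj() { std::cout << "MyCPPObj destroyed\n"; }
};

struct FObjLike {
    boost::shared_ptr<MyCPPObj> cppobj;
    FObjLike() : cppobj(new MyCPPObj) {}
    // no destructor needed: cppobj releases MyCPPObj automatically
};

int main() {
    for (int i = 0; i < 3; ++i) {
        FObjLike f;  // one MyCPPObj created and destroyed per iteration
    }
    return 0;
}
```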

    Read the article

  • Get result type of function

    - by Robert
    I want to specialize a template function declared as: template<typename Type> Type read(std::istream& is); I then have a lot of static implementations static int read_integer(std::istream& is); and so on. Now I'd like to write a macro so that specialization of read is as simple as: SPECIALIZE_READ(read_integer) So I figured I'd go the boost::function_traits way and declare SPECIALIZE_READ as: #define SPECIALIZE_READ(read_function) \ template<> boost::function_traits<read_function>::result_type read(std::istream& is) { \ return read_function(is); \ } but the VC++ (2008) compiler complains with: 'boost::function_traits' : 'read_integer' is not a valid template type argument for parameter 'Function' Ideas?
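
    For context, boost::function_traits is parameterized on a function type rather than a function name, which is what the VC++ error is pointing at. A minimal sketch (not the poster's macro) with the full function type spelled out:

```cpp
// Minimal sketch: function_traits wants a *type* like int (std::istream&);
// passing the name read_integer (a value, not a type) is what gets rejected.
#include <boost/type_traits.hpp>
#include <boost/static_assert.hpp>
#include <istream>

static int read_integer(std::istream& is) { int v = 0; is >> v; return v; }

typedef boost::function_traits<int (std::istream&)> traits;

// result_type is deduced as int:
BOOST_STATIC_ASSERT((boost::is_same<traits::result_type, int>::value));

int main() {
    (void)&read_integer;  // keep the example self-contained and warning-free
    return 0;
}
```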

    Read the article

  • Learning low latency C++ and Java?

    - by user997112
    I'm currently in a role where I don't get to write any C++ or Java. However, the role is good because it provides me with exposure to the business side (I'm interested in finance). Eventually I would like to get into high frequency trading infrastructure. Therefore, outside of work hours I'd like to maximise the knowledge I can gain about high performance Java and C++. I already have the Java Performance Tuning book, which is OK but not impressive. Can people recommend any more latency blogs/books/websites for learning about making C++/C/Java or even Unix very fast? Or perhaps making the network parts of the OS (if re-writing Unix components) faster? EDIT: Or perhaps we could make this THE thread for advice on writing fast code.

    Read the article

  • Unix as opposed to Windows (Java and C++)

    - by user997112
    Firstly I should explain the background. I am interested in high frequency trading programming roles. After looking at many job specs it is very clear that there is a big demand for programmers who have programmed Java and C++ on Unix as opposed to Windows. My question is: what are the differences a high-frequency programmer would come across? It cannot be something in the languages themselves, because syntactically they do not differ across operating systems. Therefore I thought it must be something the programs have to interface with - OS resources, etc.? Could anyone please help me out, as I am trying to improve my C++/Java on Unix in order to aim for this type of career? P.S. I'm guessing part of this answer lies with the socket infrastructure on Unix?

    Read the article

  • Help with infrequent segmentation fault in accessing boost::unordered_multimap or struct

    - by Sarah
    I'm having trouble debugging a segmentation fault. I'd appreciate tips on how to go about narrowing in on the problem. The error appears when an iterator tries to access an element of a struct Infection, defined as: struct Infection { public: explicit Infection( double it, double rt ) : infT( it ), recT( rt ) {} double infT; // infection start time double recT; // scheduled recovery time }; These structs are kept in a special structure, InfectionMap: typedef boost::unordered_multimap< int, Infection > InfectionMap; Every member of class Host has an InfectionMap carriage. Recovery times and associated host identifiers are kept in a priority queue. When a scheduled recovery event arises in the simulation for a particular strain s in a particular host, the program searches through carriage of that host to find the Infection whose recT matches the recovery time (double recoverTime). (For reasons that aren't worth going into, it's not as expedient for me to use recT as the key to InfectionMap; the strain s is more useful, and coinfections with the same strain are possible.) assert( carriage.size() > 0 ); pair<InfectionMap::iterator,InfectionMap::iterator> ret = carriage.equal_range( s ); InfectionMap::iterator it; for ( it = ret.first; it != ret.second; it++ ) { if ( ((*it).second).recT == recoverTime ) { // produces seg fault carriage.erase( it ); } } I get a "Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address..." on the line specified above. The recoverTime is fine, and the assert(...) in the code is not tripped. As I said, this seg fault appears 'randomly' after thousands of successful recovery events. How would you go about figuring out what's going on? I'd love ideas about what could be wrong and how I can further investigate the problem. Update I added a new assert and a check just inside the for loop: assert( carriage.size() > 0 ); assert( carriage.count( s ) > 0 ); pair<InfectionMap::iterator,InfectionMap::iterator> ret = carriage.equal_range( s ); InfectionMap::iterator it; cout << "carriage.count(" << s << ")=" << carriage.count(s) << endl; for ( it = ret.first; it != ret.second; it++ ) { cout << "(*it).first=" << (*it).first << endl; // error here if ( ((*it).second).recT == recoverTime ) { carriage.erase( it ); } } The EXC_BAD_ACCESS error now appears at the (*it).first call, again after many thousands of successful recoveries. Can anyone give me tips on how to figure out how this problem arises? I'm trying to use gdb. Frame 0 from the backtrace reads "#0 0x0000000100001d50 in Host::recover (this=0x100530d80, s=0, recoverTime=635.91148029170529) at Host.cpp:317" I'm not sure what useful information I can extract here. Update 2 I added a break; after the carriage.erase(it). This works, but I have no idea why (e.g., why it would remove the seg fault at (*it).first).
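
    The usual suspect with this pattern is erasing through an iterator and then continuing to use it. A minimal sketch of the conventional fix, shown with std::unordered_multimap since the same idiom applies to boost::unordered_multimap (names reuse the question's, the data is invented):

```cpp
// Sketch only, not the poster's simulation: advance past the element before
// (or via) the erase, so the loop never touches an invalidated iterator.
#include <unordered_map>
#include <iostream>

struct Infection {
    Infection(double it, double rt) : infT(it), recT(rt) {}
    double infT;  // infection start time
    double recT;  // scheduled recovery time
};

typedef std::unordered_multimap<int, Infection> InfectionMap;

void erase_recovered(InfectionMap& carriage, int s, double recoverTime) {
    std::pair<InfectionMap::iterator, InfectionMap::iterator> ret =
        carriage.equal_range(s);
    for (InfectionMap::iterator it = ret.first; it != ret.second; ) {
        // Exact == on doubles is fragile; kept here to mirror the question.
        if (it->second.recT == recoverTime)
            it = carriage.erase(it);   // erase returns the next valid iterator
        else
            ++it;
    }
}

int main() {
    InfectionMap carriage;
    carriage.insert(std::make_pair(0, Infection(1.0, 635.9)));
    carriage.insert(std::make_pair(0, Infection(2.0, 700.0)));
    erase_recovered(carriage, 0, 635.9);
    std::cout << carriage.size() << '\n';  // 1
    return 0;
}
```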

    Read the article

  • Extracting pair member in lambda expressions and template typedef idiom

    - by Nazgob
    Hi, I have some complex types here so I decided to use a nifty trick to have typedefs on templated types. Then I have a class some_container that has a container as a member. The container is a vector of pairs, each composed of an element and a vector. I want to use the std::find_if algorithm with a lambda expression to find elements that have a certain value. To get the value I have to access first on the pair and then value on the element. Below my std::find_if there is a normal loop that does the trick. My lambda fails to compile. How do I access value inside the element which is inside the pair? I use g++ 4.4+ and VS 2010 and I want to stick to Boost lambda for now. #include <vector> #include <algorithm> #include <boost\lambda\lambda.hpp> #include <boost\lambda\bind.hpp> template<typename T> class element { public: T value; }; template<typename T> class element_vector_pair // idiom to have templated typedef { public: typedef std::pair<element<T>, std::vector<T> > type; }; template<typename T> class vector_containter // idiom to have templated typedef { public: typedef std::vector<typename element_vector_pair<T>::type > type; }; template<typename T> bool operator==(const typename element_vector_pair<T>::type & lhs, const typename element_vector_pair<T>::type & rhs) { return lhs.first.value == rhs.first.value; } template<typename T> class some_container { public: element<T> get_element(const T& value) const { std::find_if(container.begin(), container.end(), bind(&typename vector_containter<T>::type::value_type::first::value, boost::lambda::_1) == value); /*for(size_t i = 0; i < container.size(); ++i) { if(container.at(i).first.value == value) { return container.at(i); } }*/ return element<T>(); //whatever } protected: typename vector_containter<T>::type container; }; int main() { some_container<int> s; s.get_element(5); return 0; }

    Read the article

  • Dataflow Programming - Patterns and Frameworks

    - by Styrac
    I just came across the proposed Boost::Dataflow library. It seems like an interesting approach and I was wondering if there are other such alternative frameworks for C++, and if there are any related design patterns. I have not ruled out Boost::Dataflow, I am just looking into any available alternatives so I can understand the domain and my options better (or roll my own if necessary).

    Read the article

  • SCons: How to use the same builders for multiple variants (release/debug) of a program

    - by OK
    The SCons User Guide describes using Multiple Construction Environments to build multiple versions of a single program and gives the following example: opt = Environment(CCFLAGS = '-O2') dbg = Environment(CCFLAGS = '-g') o = opt.Object('foo-opt', 'foo.c') opt.Program(o) d = dbg.Object('foo-dbg', 'foo.c') dbg.Program(d) Instead of manually assigning different names to the objects compiled with different environments, VariantDir() / variant_dir sounds like a better solution... But if I place the Program() builder inside the SConscript: Import('env') env.Program('foo.c') How can I export different environments to the same SConscript file? opt = Environment(CCFLAGS = '-O2') dbg = Environment(CCFLAGS = '-g') SConscript('SConscript', 'opt', variant_dir='release') #'opt' --> 'env'??? SConscript('SConscript', 'dbg', variant_dir='debug') #'dbg' --> 'env'??? Unfortunately the discussion in the SCons Wiki does not bring more insight to this topic. Thanks for your input!

    Read the article

  • How does a portable Thread Specific Storage Mechanism's Naming Scheme Generate Thread Relative Uniqu

    - by Hassan Syed
    A portable thread-specific storage reference/identity mechanism, of which boost/thread/tss.hpp is an instance, needs a way to generate a unique key for itself. This key is unique in the scope of a thread, and is subsequently used to retrieve the object it references. This mechanism is used in code written in a thread-neutral manner. Since Boost is a portable example of this concept, how specifically does such a mechanism work?
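
    A minimal sketch of how such a mechanism is commonly built on POSIX (an illustration, not Boost's actual implementation): each storage object asks the OS for its own key via pthread_key_create, and that key is what makes lookups unique to that object within every thread.

```cpp
// Sketch only: per-instance TSS key, single-object flavour of what a
// thread_specific_ptr-like wrapper typically does under the hood.
#include <pthread.h>
#include <cstdio>

template <typename T>
class tss_ptr {
public:
    tss_ptr()  { pthread_key_create(&key_, &tss_ptr::cleanup); }
    ~tss_ptr() { pthread_key_delete(key_); }

    T* get() const   { return static_cast<T*>(pthread_getspecific(key_)); }
    void reset(T* p) { delete get(); pthread_setspecific(key_, p); }

private:
    static void cleanup(void* p) { delete static_cast<T*>(p); }
    pthread_key_t key_;   // the thread-relative unique key for this instance
};

int main() {
    tss_ptr<int> slot;
    slot.reset(new int(7));   // this thread's value for this slot
    std::printf("%d\n", *slot.get());
    return 0;
}
```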

    Read the article

  • Apple's new section 3.3.1

    - by ML
    With Apple making changes to section 3.3.1 of the iPhone dev agreement, can one still use libraries like Boost in their apps? I want to use Boost in my iPad app...

    Read the article

  • What is the cost in bytes for the overhead of a sql_variant column in SQL Server?

    - by Elan
    I have a table which contains many columns of float data type with 15-digit precision. Each column consumes 8 bytes of storage. Most of the time the data does not require this amount of precision and could be stored as a real data type. In many cases the value can be 0, in which case I could get away with storing a single byte. My goal here is to optimize storage space requirements, which is an issue I am facing working with the SQL Express 4GB database size limit. If byte, real and float data types are stored in a sql_variant column there is obviously some overhead involved in storing these values. What is the cost of this overhead? I would then need to evaluate whether I would actually end up with significant space savings (or not) by switching to sql_variant column data types. Thanks, Elan

    Read the article

  • python object to native c++ pointer

    - by Lodle
    I'm toying around with the idea of using Python as an embedded scripting language for a project I'm working on and have got most things working. However I can't seem to be able to convert a Python-extended object back into a native C++ pointer. So this is my class: class CGEGameModeBase { public: virtual void FunctionCall()=0; virtual const char* StringReturn()=0; }; class CGEPYGameMode : public CGEGameModeBase, public boost::python::wrapper<CGEPYGameMode> { public: virtual void FunctionCall() { if (override f = this->get_override("FunctionCall")) f(); } virtual const char* StringReturn() { if (override f = this->get_override("StringReturn")) return f(); return "FAILED TO CALL"; } }; Boost wrapping: BOOST_PYTHON_MODULE(GEGameMode) { class_<CGEGameModeBase, boost::noncopyable>("CGEGameModeBase", no_init); class_<CGEPYGameMode, bases<CGEGameModeBase> >("CGEPYGameMode", no_init) .def("FunctionCall", &CGEPYGameMode::FunctionCall) .def("StringReturn", &CGEPYGameMode::StringReturn); } and the python code: import GEGameMode def Ident(): return "Alpha" def NewGamePlay(): return "NewAlpha" def NewAlpha(): import GEGameMode import GEUtil class Alpha(GEGameMode.CGEPYGameMode): def __init__(self): print "Made new Alpha!" def FunctionCall(self): GEUtil.Msg("This is function test Alpha!") def StringReturn(self): return "This is return test Alpha!" return Alpha() Now I can call the first two functions fine by doing this: const char* ident = extract< const char* >( GetLocalDict()["Ident"]() ); const char* newgameplay = extract< const char* >( GetLocalDict()["NewGamePlay"]() ); printf("Loading Script: %s\n", ident); CGEPYGameMode* m_pGameMode = extract< CGEPYGameMode* >( GetLocalDict()[newgameplay]() ); However when I try to convert the Alpha class back to its base class (last line above) I get a Boost error: TypeError: No registered converter was able to extract a C++ pointer to type class CGEPYGameMode from this Python object of type Alpha I have done a lot of searching on the net but can't work out how to convert the Alpha object into its base class pointer. I could leave it as an object but would rather have it as a pointer so some non-Python-aware code can use it. Any ideas?

    Read the article

  • How to pass function reference into arguments

    - by Ockonal
    Hi, I'm using boost::function to make function references: typedef boost::function<void (SomeClass &handle)> Ref; someFunc(Ref &pointer) {/*...*/} void Foo(SomeClass &handle) {/*...*/} What is the best way to pass Foo into someFunc? I tried something like: someFunc(Ref(Foo));
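
    A minimal sketch (bodies invented, names from the question): a free function converts implicitly to boost::function, so it can be passed directly; taking the parameter by value or by const reference also avoids the separate problem of binding a non-const reference to the temporary Ref(Foo).

```cpp
// Sketch only: one plausible shape for someFunc, not the poster's code.
#include <boost/function.hpp>
#include <iostream>

struct SomeClass { int x; };

typedef boost::function<void (SomeClass&)> Ref;

void someFunc(const Ref& f) {   // const ref (or by value) accepts temporaries
    SomeClass s = { 41 };
    f(s);
    std::cout << s.x << '\n';
}

void Foo(SomeClass& handle) { ++handle.x; }

int main() {
    someFunc(Foo);        // implicit conversion to boost::function
    someFunc(Ref(Foo));   // or construct the wrapper explicitly
    return 0;
}
```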

    Read the article

  • Cast object as OleVariant in Delphi

    - by Alan Clark
    Is there a way to wrap and unwrap a TObject descendant in an OleVariant? I am trying to pass a TObject across automation objects. I know it's not a good idea but I don't have a good alternative. Something like this: function GetMyObjAsVariant; var MyObj: TMyObj; begin MyObj := TMyObj.Create; result := OleVariant(MyObj); end; Which would be used by a client as var MyObj: TMyObj; begin MyObj := GetMyObjAsVariant as TMyObj; end; This fails to compile, returning E2015 Operator not applicable to this operand type.

    Read the article

  • intrusive_ptr: Why isn't a common base class provided?

    - by Jon
    intrusive_ptr requires intrusive_ptr_add_ref and intrusive_ptr_release to be defined. Why isn't a base class provided which will do this? There is an example here: http://lists.boost.org/Archives/boost/2004/06/66957.php, but the poster says "I don't necessarily think this is a good idea". Why not?
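
    For reference, the kind of base class being discussed looks roughly like the sketch below (an illustration, not part of Boost at the time of the question; later Boost releases added boost::intrusive_ref_counter for this purpose):

```cpp
// Sketch only: a non-thread-safe reference-counted base. Deriving from it
// gives a type the two free functions intrusive_ptr looks up via ADL.
#include <boost/intrusive_ptr.hpp>

class ref_counted {
public:
    ref_counted() : refs_(0) {}
    virtual ~ref_counted() {}

    friend void intrusive_ptr_add_ref(ref_counted* p) { ++p->refs_; }
    friend void intrusive_ptr_release(ref_counted* p) {
        if (--p->refs_ == 0) delete p;
    }

private:
    long refs_;   // plain counter for brevity; real code may need atomics
};

class Widget : public ref_counted {};

int main() {
    boost::intrusive_ptr<Widget> w(new Widget);  // add_ref/release via the base
    return 0;
}
```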

    Read the article

  • As a programmer what single discovery has given you the greatest boost in productivity?

    - by ChrisInCambo
    This question has been inspired by my recent discovery/adoption of distributed version control. I started using it (Mercurial) just because I liked the idea of still being able to make commits at times when I couldn't connect to the central server. I never expected it would give me a large boost in general productivity, but a pleasant side effect I discovered was that making a new clone every time I started a new task and giving that clone a descriptive folder name is extremely effective at keeping me on task, resulting in a noticeable productivity increase. So as a programmer, what single discovery has given you the greatest boost in productivity? Extra respect for answers which involve tools or practices that aren't so obvious from the outside!

    Read the article

  • One nginx rule for lots of subdomains

    - by komase
    I have lots of subdomains on a server. Every subdomain has its own Drupal Boost rules, like the code below: server { server_name subdomain1.website.com; location / { root /var/www/html/subdomain/subdomain1.website.com; index index.php; set $boost ""; set $boost_query "_"; if ( $request_method = GET ) { set $boost G; } if ($http_cookie !~ "DRUPAL_UID") { set $boost "${boost}D"; } if ($query_string = "") { set $boost "${boost}Q"; } if ( -f $document_root/cache/normal/$host$request_uri$boost_query.html ) { set $boost "${boost}F"; } if ($boost = GDQF){ rewrite ^.*$ /cache/normal/$host/$request_uri$boost_query.html break; } if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?q=$1 last; break; } } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/html/subdomain/subdomain1.website.com$fastcgi_script_name; include fastcgi_params; } } I have been adding all subdomain rules manually from time to time, and the size of nginx.conf has become too big. So I need one set of nginx rules which does: subdomain1.website.com pointing to /var/www/html/subdomain/subdomain1.website.com subdomain2.website.com pointing to /var/www/html/subdomain/subdomain2.website.com subdomain3.website.com pointing to /var/www/html/subdomain/subdomain3.website.com ...and so on (so that I no longer need to add a rule for each new subdomain of website.com in the future).

    Read the article

  • Template with constant expression: error C2975 with VC++2008

    - by Arman
    Hello, I am trying to use elements of meta-programming, but hit the wall with the first trial. I would like to have a comparator structure which can be used as follows: intersect_by<ID>(L1.data, L2.data, "By ID: "); intersect_by<IDf>(L1.data, L2.data, "By IDf: "); Where: struct ID{};// Tag used for original IDs struct IDf{};// Tag used for the file position //following Boost.MultiIndex examples template<typename Tag,typename MultiIndexContainer> void intersect_by( const MultiIndexContainer& L1,const MultiIndexContainer& L2,std::string msg, Tag* =0 /* fixes a MSVC++ 6.0 bug with implicit template function parms */ ) { /* obtain a reference to the index tagged by Tag */ const typename boost::multi_index::index<MultiIndexContainer,Tag>::type& L1_ID_index= get<Tag>(L1); const typename boost::multi_index::index<MultiIndexContainer,Tag>::type& L2_ID_index= get<Tag>(L2); std::set_intersection( L1_ID_index.begin(), L1_ID_index.end(), L2_ID_index.begin(), L2_ID_index.end(), std::inserter(s, s.begin()), strComparator() // Here I get the C2975 error ); } template<int N> struct strComparator; template<> struct strComparator<0>{ bool operator () (const particleID& id1, const particleID& id2) const { return id1.ID < id2.ID; } }; template<> struct strComparator<1>{ bool operator () (const particleID& id1, const particleID& id2) const { return id1.IDf < id2.IDf; } }; What am I missing? Kind regards, Arman.
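
    As background, C2975 means a non-type template argument was not a compile-time constant expression. A minimal sketch (not the poster's code) of the distinction:

```cpp
// Sketch only: the selector N must be a constant expression wherever
// strComparator<N> is named; a runtime value cannot be used there.
#include <iostream>

template <int N>
struct strComparator;                       // primary template, specialized below

template <>
struct strComparator<0> {
    bool operator()(int a, int b) const { return a < b; }
};

template <>
struct strComparator<1> {
    bool operator()(int a, int b) const { return a > b; }
};

template <int N>
bool compare(int a, int b) {
    return strComparator<N>()(a, b);        // N is a constant expression here: OK
}

int main() {
    std::cout << compare<0>(1, 2) << ' ' << compare<1>(1, 2) << '\n';  // 1 0
    // int n = 0;
    // strComparator<n>()(1, 2);            // error C2975: n is not a constant
    return 0;
}
```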

    Read the article

  • C++ Serialization Clean XML Similar to XSTREAM

    - by disown
    I need to write a Linux C++ app which saves its settings in XML format (for easy hand editing) and also communicates with existing apps through XML messages over sockets and HTTP. The problem is that I haven't been able to find any intelligent libs to help me; I don't particularly feel like writing DOM or SAX code just to write and read some very simple messages. Boost Serialization was almost a match, but it adds a lot of boost-specific data to the XML it generates. This obviously doesn't work well for interchange formats. I'm wondering if it is possible to make Boost Serialization or some other C++ serialization library generate clean XML. I don't mind if there are some required extra attributes - like a version attribute, but I'd really like to be able to control their naming and also get rid of 'features' that I don't use - tracking_level and class_id for instance. Ideally I would just like to have something similar to XStream in Java. I am aware of the fact that C++ lacks introspection and that it is therefore necessary to do some manual coding - but it would be nice if there was a clean solution to just read and write simple XML without kludges! If this cannot be done I am also interested in tools where the XML schema is the canonical resource (contract first) - a good JAXB alternative for C++. So far I have only found commercial solutions like CodeSynthesis XSD. I would prefer open source solutions. I have tried gSoap - but it generates really ugly code and it is also SOAP-specific. In desperation I also started looking at alternative serialization formats for protobuffers. This exists - but only for Java! It really surprises me that protocol buffers seems to be a better supported data interchange format than XML. I'm going mad just finding libs for this app and I really need some new ideas. Anyone?
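
    For what it's worth, Boost.Serialization's XML output can be trimmed somewhat. A minimal sketch, with the caveat that these settings reduce but do not eliminate the library-specific metadata (the no_header flag and the two macros are existing Boost.Serialization features; the Settings type is invented):

```cpp
// Sketch only: suppress the archive header and drop per-type class_id,
// tracking and version attributes for one hand-edited settings struct.
#include <boost/archive/xml_oarchive.hpp>
#include <boost/serialization/nvp.hpp>
#include <boost/serialization/string.hpp>
#include <boost/serialization/level.hpp>
#include <boost/serialization/tracking.hpp>
#include <fstream>
#include <string>

struct Settings {
    int port;
    std::string host;

    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & BOOST_SERIALIZATION_NVP(port);
        ar & BOOST_SERIALIZATION_NVP(host);
    }
};

// No version/class metadata, no object tracking for this type.
BOOST_CLASS_IMPLEMENTATION(Settings, boost::serialization::object_serializable)
BOOST_CLASS_TRACKING(Settings, boost::serialization::track_never)

int main() {
    Settings s;
    s.port = 8080;
    s.host = "example.org";

    std::ofstream out("settings.xml");
    boost::archive::xml_oarchive ar(out, boost::archive::no_header);
    ar << BOOST_SERIALIZATION_NVP(s);
    return 0;
}
```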

    Read the article

  • std::locale breakage on MacOS 10.6 with LANG=en_US.UTF-8

    - by fixermark
    I have a C++ application that I am porting to Mac OS X (specifically, 10.6). The app makes heavy use of the C++ standard library and Boost. I recently observed some breakage in the app that I'm having difficulty understanding. Basically, the boost filesystem library throws a runtime exception when the program runs. With a bit of debugging and googling, I've reduced the offending call to the following minimal program: #include <locale> int main ( int argc, char *argv [] ) { std::locale::global(std::locale("")); return 0; } This program fails when I run it through g++ and execute the resulting program in an environment where LANG=en_US.UTF-8 is set (which on my computer is part of the default bash session when I create a new console window). Clearing the environment variable (setenv LANG=) allows the program to run without issues. But I'm surprised I'm seeing this breakage in the default configuration. My questions are: Is this expected behavior for this code on MacOS 10.6? What would a proper workaround be? I can't really re-write the function because the version of the Boost libraries we are using executes this statement internally as part of the filesystem library. For completeness, I should point out that the program from which this code was synthesized crashes when launched via the 'open' command (or from the Finder) but not when Xcode runs the program in Debug mode. Edit: The error given by the above code on 10.6.1 is: $ ./locale terminate called after throwing an instance of 'std::runtime_error' what(): locale::facet::_S_create_c_locale name not valid Abort trap
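
    A minimal sketch (not from the question) of a defensive fallback some code uses when the environment names a locale the C++ runtime does not recognize: try the user's locale and drop back to the classic "C" locale on failure.

```cpp
// Sketch only: wraps the exact call that throws in the question.
#include <locale>
#include <stdexcept>
#include <iostream>

int main() {
    try {
        std::locale::global(std::locale(""));   // may throw if LANG names an unknown locale
    } catch (const std::runtime_error& e) {
        std::cerr << "locale \"\" unavailable (" << e.what()
                  << "), falling back to \"C\"\n";
        std::locale::global(std::locale::classic());
    }
    return 0;
}
```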

    Read the article

  • How to get at contents of placeholder::_1

    - by sheepsimulator
    I currently have the following code: using boost::bind; typedef boost::signal<void(EventDataItem&)> EventDataItemSignal; class EventDataItem { ... EventDataItemSignal OnTrigger; ... } typedef std::list< shared_ptr<EventDataItem> > DataItemList; typedef std::list<boost::signals::connection> ConnectionList; class MyClass { void OnStart() { DataItemList dilItems; ConnectionList clConns; DataItemList::iterator iterDataItems; for(iterDataItems = dilItems.begin(); iterDataItems != dilItems.end(); iterDataItems++) { // Create Connections from Triggers clConns.push_back((*iterDataItems)->OnTrigger.connect( bind(&MyClass::OnEventTrigger, this))); } } void OnEventTrigger() { // ... Do stuff on Trigger... } } I'd like to change MyClass::OnStart to use std::transform to achieve the same thing: void MyClass::OnStart() { DataItemList dilItems; ConnectionList clConns; // Resize connection list to match number of data items clConns.resize(dilItems.size()); // Build connection list from Items // note: errors on the placeholder _1->OnTrigger std::transform(dilItems.begin(), dilItems.end(), clConns.begin(), bind(&EventDataItemSignal::connect, _1->OnTrigger, bind(&MyClass::Stuff, this))); } However, my hiccup is _1->OnTrigger. How can I reference OnTrigger from placeholder _1?

    Read the article

  • Why is creating a ring buffer shared by different processes so hard (in C++), what I am doing wrong?

    - by recipriversexclusion
    I am being especially dense about this but it seems I'm missing an important, basic point or something, since what I want to do should be common: I need to create a fixed-size ring buffer object from a manager process (Process M). This object has write() and read() methods to write/read from the buffer. The read/write methods will be called by independent processes (Process R and W). I have implemented the buffer, SharedBuffer<T&>; it allocates buffer slots in SHM using boost::interprocess and works perfectly within a single process. I have read the answers to this question and that one on SO, as well as asked my own, but I'm still in the dark about how to have different processes access methods from a common object. The Boost doc has an example of creating a vector in SHM, which is very similar to what I want, but I want to instantiate my own class. My current options are: Use placement new, as suggested by Charles B. to my question; however, he cautions that it's not a good idea to put non-POD objects in SHM. But my class needs the read/write methods, how can I handle those? Add an allocator to my class definition, e.g. have SharedBuffer<T&, Alloc> and proceed similarly to the vector example given in Boost. This sounds really complicated. Change SharedBuffer to a POD class, i.e. get rid of all the methods. But then how to synchronize reading and writing between processes? What am I missing? Fixed-length ring buffers are very common, so either this problem has a solution or else I'm doing something wrong.
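
    One common Boost.Interprocess pattern, sketched below with invented names rather than the poster's SharedBuffer: keep the object free of raw pointers and virtual functions, give it an interprocess_mutex for synchronization, and let each process construct or open it by name in the managed segment. Methods are fine, because each process calls them on its own mapping of the same bytes.

```cpp
// Sketch only: a fixed-capacity ring buffer placed in shared memory.
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <cstddef>
#include <iostream>

namespace bip = boost::interprocess;

struct RingBuffer {                      // no pointers, no virtuals
    enum { Capacity = 16 };
    bip::interprocess_mutex mutex;
    std::size_t head, tail, count;
    int slots[Capacity];

    RingBuffer() : head(0), tail(0), count(0) {}

    bool write(int v) {
        bip::scoped_lock<bip::interprocess_mutex> lock(mutex);
        if (count == Capacity) return false;
        slots[tail] = v; tail = (tail + 1) % Capacity; ++count;
        return true;
    }
    bool read(int& v) {
        bip::scoped_lock<bip::interprocess_mutex> lock(mutex);
        if (count == 0) return false;
        v = slots[head]; head = (head + 1) % Capacity; --count;
        return true;
    }
};

int main() {
    bip::managed_shared_memory seg(bip::open_or_create, "demo_ring", 65536);
    RingBuffer* rb = seg.find_or_construct<RingBuffer>("ring")();
    rb->write(42);
    int v = 0;
    if (rb->read(v)) std::cout << v << '\n';
    return 0;
}
```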

    Read the article
