Search Results

Search found 1933 results on 78 pages for 'boost tuples'.

Page 30/78 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • Good functions and techniques for dealing with Haskell tuples?

    - by toofarsideways
    I've been doing a lot of work with tuples and lists of tuples recently and I've been wondering if I'm being sensible. Things feel awkward and clunky, which for me signals that I'm doing something wrong. For example, I've written three convenience functions for getting the first, second and third value in a tuple of 3 values. Is there a better way I'm missing? Are there more general functions that allow you to compose and manipulate tuple data? Here are some things I am trying to do that feel like they should be generalisable. Extracting values: do I need to create a version of fst, snd, etc. for tuples of size two, three, four, five, and so on? E.g. fst3 (x,_,_) = x and fst4 (x,_,_,_) = x. Manipulating values: can you increment the last value in a list of pairs and then use that same function to increment the last value in a list of triples? Zipping and unzipping values: there is a zip and a zip3. Do I also need a zip4, or is there some way of creating a general zip function? Sorry if this seems subjective; I honestly don't know if this is even possible or if I'm wasting my time writing 3 extra functions every time I need a general solution. Thank you for any help you can give!

    Read the article

  • Speed boost to adjacency matrix

    - by samoz
    I currently have an algorithm that operates on an adjacency matrix of size n by m. In my algorithm, I need to zero out entire rows or columns at a time. My implementation is currently O(m) or O(n), depending on whether it's a column or a row. Is there any way to zero out a column or row in O(1) time?
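
    One technique that fits the question (a sketch only, not from the thread; all names are illustrative) is lazy clearing: instead of touching every cell, stamp the row or column as cleared and let reads compare timestamps, making the clear itself O(1) at the cost of one comparison per read:

      #include <cstddef>
      #include <vector>

      // Lazy-clearing matrix: zeroRow/zeroColumn are O(1); a cell reads as zero if
      // its row or column was cleared after the cell was last written.
      class LazyMatrix {
      public:
          LazyMatrix(std::size_t rows, std::size_t cols)
              : cols_(cols), cells_(rows * cols),
                rowClear_(rows, 0), colClear_(cols, 0), clock_(0) {}

          void set(std::size_t r, std::size_t c, int v) {
              Cell& cell = cells_[r * cols_ + c];
              cell.value = v;
              cell.stamp = ++clock_;
          }

          int get(std::size_t r, std::size_t c) const {
              const Cell& cell = cells_[r * cols_ + c];
              if (cell.stamp <= rowClear_[r] || cell.stamp <= colClear_[c]) return 0;
              return cell.value;
          }

          void zeroRow(std::size_t r)    { rowClear_[r] = ++clock_; }  // O(1)
          void zeroColumn(std::size_t c) { colClear_[c] = ++clock_; }  // O(1)

      private:
          struct Cell { int value; unsigned long stamp; Cell() : value(0), stamp(0) {} };
          std::size_t cols_;
          std::vector<Cell> cells_;
          std::vector<unsigned long> rowClear_, colClear_;
          unsigned long clock_;
      };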

    Read the article

  • Cython Speed Boost vs. Usability

    - by zubin71
    I just came across Cython while I was looking for ways to optimize Python code. I read various posts on Stack Overflow, the Python wiki, and read the article "General Rules for Optimization". Cython is something which grabs my interest the most; instead of writing C code yourself, you can choose to have other datatypes in your Python code itself. Here is a silly test I tried: #!/usr/bin/python # test.pyx def test(value): for i in xrange(value): i**2 if(i==1000000): print i test(10000001) $ time python test.pyx real 0m16.774s user 0m16.745s sys 0m0.024s $ time cython test.pyx real 0m0.513s user 0m0.196s sys 0m0.052s Now, honestly, I'm dumbfounded. The code which I have used here is pure Python code, and all I have changed is the interpreter. In this case, if Cython is this good, then why do people still use the traditional Python interpreter? Are there any reliability issues with Cython?

    Read the article

  • Boost Shared Pointer: Simultaneous Read Access Across Multiple Threads

    - by Nikhil
    I have a thread A which allocates memory and assigns it to a shared pointer. Then this thread spawns 3 other threads X, Y and Z and passes a copy of the shared pointer to each. When X, Y and Z go out of scope, the memory is freed. But is there a possibility that two threads, X and Y, go out of scope at the exact same point in time and there is a race condition on the reference count, so instead of being decremented by 2, it only gets decremented once? Then the reference count never drops to 0, so there is a memory leak. Note that X, Y and Z are only reading the memory, not writing to or resetting the shared pointer. To cut a long story short: can there be a race condition on the reference count, and can that lead to memory leaks?
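
    For reference, a minimal sketch of the pattern described (function and variable names are illustrative). The reference-count updates in boost::shared_ptr are atomic, so two copies being destroyed at the same instant still produce two decrements; only the pointed-to object would need extra synchronization, and here it is read-only:

      #include <boost/shared_ptr.hpp>
      #include <boost/thread.hpp>
      #include <vector>

      // Each worker gets its own copy of the shared_ptr; destroying a copy performs
      // one atomic decrement, so the count reliably reaches zero when the last copy dies.
      void worker(boost::shared_ptr<const std::vector<int> > data) {
          long sum = 0;
          for (std::size_t i = 0; i < data->size(); ++i) sum += (*data)[i];  // read-only
          (void)sum;
      }   // this thread's copy is released here

      int main() {
          boost::shared_ptr<const std::vector<int> > data(new std::vector<int>(1000, 42));

          boost::thread x(worker, data);   // boost::thread copies its arguments: +1 each
          boost::thread y(worker, data);
          boost::thread z(worker, data);

          x.join(); y.join(); z.join();
          return 0;                        // last copy released, memory freed exactly once
      }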

    Read the article

  • Splitting Code into Headers/Source files

    - by cam
    I took the following code from the examples page on Asio: class tcp_connection : public boost::enable_shared_from_this<tcp_connection> { public: typedef boost::shared_ptr<tcp_connection> pointer; static pointer create(boost::asio::io_service& io_service) { return pointer(new tcp_connection(io_service)); } tcp::socket& socket() { return socket_; } void start() { message_ = make_daytime_string(); boost::asio::async_write(socket_, boost::asio::buffer(message_), boost::bind(&tcp_connection::handle_write, shared_from_this(), boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } private: tcp_connection(boost::asio::io_service& io_service) : socket_(io_service) { } void handle_write(const boost::system::error_code& /*error*/, size_t /*bytes_transferred*/) { } tcp::socket socket_; std::string message_; }; I'm relatively new to C++ (from a C# background), and from what I understand, most people would split this into header and source files (declaration/implementation, respectively). Is there any reason I can't just leave it in the header file if I'm going to use it across many source files? If so, are there any tools that will automatically convert it to declaration/implementation for me? Can someone show me what this would look like split into header/source files, as an example (or just part of it, anyway)? I get confused around weird stuff like this: typedef boost::shared_ptr<tcp_connection> pointer; Do I include this in the header or the source? Same with tcp::socket& socket(). I've read many tutorials, but this has always been something that has confused me about C++.
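
    For illustration, one way the class above could be split (make_daytime_string is assumed to be declared elsewhere, as in the Asio example; the typedef and the trivial accessor can simply stay in the header):

      // tcp_connection.h
      #ifndef TCP_CONNECTION_H
      #define TCP_CONNECTION_H

      #include <string>
      #include <boost/asio.hpp>
      #include <boost/enable_shared_from_this.hpp>
      #include <boost/shared_ptr.hpp>

      using boost::asio::ip::tcp;

      class tcp_connection : public boost::enable_shared_from_this<tcp_connection> {
      public:
          typedef boost::shared_ptr<tcp_connection> pointer;   // stays in the header

          static pointer create(boost::asio::io_service& io_service);

          tcp::socket& socket() { return socket_; }   // trivial accessor: fine to keep inline
          void start();

      private:
          tcp_connection(boost::asio::io_service& io_service);
          void handle_write(const boost::system::error_code& error,
                            std::size_t bytes_transferred);

          tcp::socket socket_;
          std::string message_;
      };

      #endif

      // tcp_connection.cpp
      #include "tcp_connection.h"
      #include <boost/bind.hpp>

      std::string make_daytime_string();   // provided elsewhere in the Asio example

      tcp_connection::pointer tcp_connection::create(boost::asio::io_service& io_service) {
          return pointer(new tcp_connection(io_service));
      }

      tcp_connection::tcp_connection(boost::asio::io_service& io_service)
          : socket_(io_service) {}

      void tcp_connection::start() {
          message_ = make_daytime_string();
          boost::asio::async_write(socket_, boost::asio::buffer(message_),
              boost::bind(&tcp_connection::handle_write, shared_from_this(),
                          boost::asio::placeholders::error,
                          boost::asio::placeholders::bytes_transferred));
      }

      void tcp_connection::handle_write(const boost::system::error_code& /*error*/,
                                        std::size_t /*bytes_transferred*/) {}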

    Read the article

  • Unable to run OpenMPI across more than two machines

    - by rcollyer
    When attempting to run the first example in the boost::mpi tutorial, I was unable to run across more than two machines. Specifically, this seemed to run fine: mpirun -hostfile hostnames -np 4 boost1 with each hostname in hostnames as <node_name> slots=2 max_slots=2. But, when I increase the number of processes to 5, it just hangs. I have decreased the number of slots/max_slots to 1 with the same result when I exceed 2 machines. On the nodes, this shows up in the job list: <user> Ss orted --daemonize -mca ess env -mca orte_ess_jobid 388497408 \ -mca orte_ess_vpid 2 -mca orte_ess_num_procs 3 -hnp-uri \ 388497408.0;tcp://<node_ip>:48823 Additionally, when I kill it, I get this message: node2- daemon did not report back when launched node3- daemon did not report back when launched The cluster is set up with the mpi and boost libs accessible on an NFS mounted drive. Am I running into a deadlock with NFS? Or, is something else going on?

    Read the article

  • Cross platform unicode path handling

    - by Matt Joiner
    I'm using boost::filesystem for cross-platform path manipulation, but this breaks down when calls need to be made down into interfaces I don't control that won't accept UTF-8. For example when using the Windows API, I need to convert to UTF-16, and then call the wide-string version of whatever function I was about to call, and then convert any output back to UTF-8. While the wpath, and other w* forms of many of the boost::filesystem functions help keep sanity, are there any suggestions for how best to handle this conversion to wide-string forms where needed, while maintaining consistency in my own code?
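
    A small sketch of the boundary conversion (the helper name is an assumption; MultiByteToWideChar is the Win32 call typically used for the UTF-8 to UTF-16 hop, kept right at the API boundary so the rest of the program stays UTF-8):

      #ifdef _WIN32
      #include <windows.h>
      #include <string>
      #include <stdexcept>

      // Convert a UTF-8 string to UTF-16 just before calling a wide Win32 API.
      std::wstring utf8_to_wide(const std::string& utf8) {
          if (utf8.empty()) return std::wstring();
          int len = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS,
                                        utf8.data(), static_cast<int>(utf8.size()),
                                        NULL, 0);
          if (len == 0) throw std::runtime_error("invalid UTF-8");
          std::wstring wide(len, L'\0');
          MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS,
                              utf8.data(), static_cast<int>(utf8.size()),
                              &wide[0], len);
          return wide;
      }

      // Usage at the boundary, e.g.:
      //   CreateFileW(utf8_to_wide(path.string()).c_str(), ...);
      #endif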

    Read the article

  • Remove from a std::set<shared_ptr<T>> by T*

    - by Autopulated
    I have a set of shared pointers: std::set<boost::shared_ptr<T>> set; And a pointer: T* p; I would like to efficiently remove the element of set equal to p, but I can't do this with any of the members of set, or any of the standard algorithms, since T* is a completely different type to boost::shared_ptr<T>. A few approaches I can think of are: somehow constructing a new shared_ptr from the pointer that won't take ownership of the pointed to memory (ideal solution, but I can't see how to do this) wrapping / re-implementing shared_ptr so that I can do the above just doing my own binary search over the set Help!
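
    One workable sketch (illustrative only; the helper names are assumptions): scan for the element whose get() matches the raw pointer. A temporary shared_ptr with a no-op deleter looks tempting, but boost::shared_ptr's operator< is an unspecified, ownership-based ordering rather than a comparison of stored pointer values, so a freshly constructed non-owning shared_ptr is not guaranteed to be found by set::find/erase:

      #include <boost/shared_ptr.hpp>
      #include <algorithm>
      #include <set>

      // Predicate: does this shared_ptr store exactly the raw pointer p?
      template <class T>
      struct points_to {
          explicit points_to(T* p) : raw(p) {}
          bool operator()(const boost::shared_ptr<T>& sp) const { return sp.get() == raw; }
          T* raw;
      };

      // O(n) scan-and-erase by raw pointer; returns true if something was removed.
      template <class T>
      bool erase_by_raw_pointer(std::set<boost::shared_ptr<T> >& s, T* p) {
          typename std::set<boost::shared_ptr<T> >::iterator it =
              std::find_if(s.begin(), s.end(), points_to<T>(p));
          if (it == s.end()) return false;
          s.erase(it);
          return true;
      }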

    Read the article

  • C++: Best text accumulator

    - by MInner
    Text gets accumulated piecemeal before being sent to the client. Right now we use our own class that allocates memory for each piece as a char array. (Anyway, it works like char[][] + std::list<char*>.) Then we build the whole string, convert it into std::string, and then create a boost::asio::streambuf from it. That's quite slow, I assume. Correct me if I'm wrong. I know that in many cases the plain FILE type from stdio.h is used. How does it work? Does it allocate memory on every write? So, is it faster, and is there any way to read into a boost::asio::streambuf from a FILE?
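
    One thing worth knowing here (a small sketch, not a benchmark): boost::asio::streambuf is itself a std::streambuf, so pieces can be formatted straight into it through a std::ostream with no intermediate std::string or manual char-array management:

      #include <boost/asio.hpp>
      #include <ostream>

      int main() {
          boost::asio::streambuf buf;
          std::ostream os(&buf);

          // Accumulate text piecemeal directly into the streambuf.
          os << "HTTP/1.0 200 OK\r\n";
          os << "Content-Length: " << 5 << "\r\n\r\n";
          os << "hello";

          // buf now holds the accumulated bytes, ready for async_write(socket, buf).
          std::size_t total = buf.size();
          (void)total;
          return 0;
      }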

    Read the article

  • List of functions references

    - by Ockonal
    Hello, I'm using boost::function to make references to functions. Can I make a list of such references? For example: boost::function<bool (Entity &handle)> behaviorRef; And I need a list of such pointers, for example: std::vector<behaviorRef> listPointers; Of course this is wrong code, because behaviorRef isn't a type. So the question is: how can I store a list of pointers to functions?
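
    A minimal sketch of the usual fix (Entity and the function names here are stand-ins): behaviorRef in the snippet is a variable, so introduce a typedef for the boost::function signature and store that type in the container:

      #include <boost/function.hpp>
      #include <vector>

      struct Entity {};   // stand-in for the question's Entity type

      typedef boost::function<bool (Entity&)> BehaviorFn;   // the missing type

      bool alwaysTrue(Entity&) { return true; }

      int main() {
          std::vector<BehaviorFn> behaviors;
          behaviors.push_back(&alwaysTrue);   // free function; functors and binds work too

          Entity e;
          bool r = behaviors[0](e);
          (void)r;
          return 0;
      }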

    Read the article

  • LLVM Clang 5.0 explicit in copy-initialization error

    - by kevzettler
    I'm trying to compile an open source project on OSX that has only been tested on Linux. $: g++ -v Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1 Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn) Target: x86_64-apple-da I'm trying to compile with the following command line options: g++ -MMD -Wall -std=c++0x -stdlib=libc++ -Wno-sign-compare -Wno-unused-variable -ftemplate-depth=1024 -I /usr/local/Cellar/boost/1.55.0/include/boost/ -g -O3 -c level.cpp -o obj-opt/level.o I am seeing several errors that look like this: ./square.h:39:70: error: chosen constructor is explicit in copy-initialization int strength = 0, double flamability = 0, map<SquareType, int> constructions = {}, bool ticking = false); The project states that the following are requirements for the Linux setup. How can I confirm I'm meeting those requirements? gcc-4.8.2 git libboost 1.5+ with libboost-serialize libsfml-dev 2+ (Ubuntu ppa that contains libsfml 2: ) freeglut-dev libglew-dev
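
    A minimal illustration of that diagnostic, separate from the project's code (the type is made up): a default argument written as "= {}" is copy-list-initialization, which is ill-formed when the constructor it selects is declared explicit; spelling the constructor out is one workaround:

      struct Widget {
          explicit Widget(int n = 0) : n_(n) {}
          int n_;
      };

      // error: chosen constructor is explicit in copy-initialization
      // void takesWidget(Widget w = {});

      // Workaround: direct-initialization via an explicit constructor call.
      void takesWidget(Widget w = Widget()) { (void)w; }

      int main() { takesWidget(); return 0; }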

    Read the article

  • Loosely coupled implicit conversion

    - by ltjax
    Implicit conversion can be really useful when types are semantically equivalent. For example, imagine two libraries that implement a type identically, but in different namespaces. Or just a type that is mostly identical, except for some semantic sugar here and there. Now you cannot pass one type into a function (in one of those libraries) that was designed to use the other, unless that function is a template. If it's not, you have to somehow convert one type into the other. This should be trivial (or otherwise the types are not so identical after all!) but calling the conversion explicitly bloats your code with mostly meaningless function calls. While such conversion functions might actually copy some values around, they essentially do nothing from a high-level "programmer's" point of view. Implicit conversion constructors and operators could obviously help, but they introduce coupling, so that one of those types has to know about the other. Usually, at least when dealing with libraries, that is not the case, because the presence of one of those types makes the other one redundant. Also, you cannot always change libraries. Now I see two options for how to make implicit conversion work in user code: The first would be to provide a proxy type that implements conversion operators and conversion constructors (and assignments) for all the involved types, and always use that. The second requires a minimal change to the libraries, but allows great flexibility: add a conversion constructor for each involved type that can optionally be enabled externally. For example, for a type A add a constructor: template <class T> A( const T& src, typename boost::enable_if<conversion_enabled<T,A>>::type* ignore=0 ) { *this = convert(src); } and a template template <class X, class Y> struct conversion_enabled : public boost::mpl::false_ {}; that disables the implicit conversion by default. Then to enable conversion between two types, specialize the template: template <> struct conversion_enabled<OtherA, A> : public boost::mpl::true_ {}; and implement a convert function that can be found through ADL. I would personally prefer to use the second variant, unless there are strong arguments against it. Now to the actual question(s): What's the preferred way to associate types for implicit conversion? Are my suggestions good ideas? Are there any downsides to either approach? Is allowing conversions like that dangerous? Should library implementers in general supply the second method when it's likely that their type will be replicated in software they are most likely being used with (I'm thinking of 3D rendering middleware here, where most of those packages implement a 3D vector).
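
    A compilable sketch of the second approach, using two made-up vector types (all names are illustrative; the convert function lives in the source type's namespace so ADL can find it):

      #include <boost/mpl/bool.hpp>
      #include <boost/utility/enable_if.hpp>

      // Conversions are disabled unless this trait is specialized to true_.
      template <class X, class Y>
      struct conversion_enabled : boost::mpl::false_ {};

      namespace lib_b { struct Vec3 { float x, y, z; }; }

      namespace lib_a {
      struct Vec3 {
          float x, y, z;
          Vec3() : x(0), y(0), z(0) {}

          // Opt-in converting constructor, SFINAE'd out unless explicitly enabled.
          template <class T>
          Vec3(const T& src,
               typename boost::enable_if<conversion_enabled<T, Vec3> >::type* = 0) {
              *this = convert(src);   // convert() is found through ADL
          }
      };
      }

      namespace lib_b {
      // Lives in the source type's namespace so ADL can find it.
      inline lib_a::Vec3 convert(const Vec3& v) {
          lib_a::Vec3 r; r.x = v.x; r.y = v.y; r.z = v.z; return r;
      }
      }

      // Switch on lib_b::Vec3 -> lib_a::Vec3 conversion.
      template <>
      struct conversion_enabled<lib_b::Vec3, lib_a::Vec3> : boost::mpl::true_ {};

      void takes_a(const lib_a::Vec3&) {}

      int main() {
          lib_b::Vec3 b = {1.0f, 2.0f, 3.0f};
          takes_a(b);   // converts implicitly through the enabled constructor
          return 0;
      }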

    Read the article

  • How would I use for_each to delete every value in an STL map?

    - by stusmith
    Suppose I have a STL map where the values are pointers, and I want to delete them all. How would I represent the following code, but making use of std::for_each? I'm happy for solutions to use Boost. for( stdext::hash_map<int, Foo *>::iterator ir = myMap.begin(); ir != myMap.end(); ++ir ) { delete ir->second; // delete all the (Foo *) values. } (I've found Boost's checked_delete, but I'm not sure how to apply that to the pair<int, Foo *> that the iterator represents). (Also, for the purposes of this question, ignore the fact that storing raw pointers that need deleting in an STL container isn't very sensible).
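
    For illustration (using std::map here rather than stdext::hash_map, and a named functor rather than a nested boost::bind, which can also be made to work but is harder to read): apply boost::checked_delete to the second member of each pair:

      #include <boost/checked_delete.hpp>
      #include <algorithm>
      #include <map>

      struct Foo {};

      // Deletes the mapped pointer of each (key, pointer) pair.
      struct delete_second {
          template <class Pair>
          void operator()(const Pair& p) const { boost::checked_delete(p.second); }
      };

      int main() {
          std::map<int, Foo*> myMap;
          myMap[1] = new Foo;
          myMap[2] = new Foo;

          std::for_each(myMap.begin(), myMap.end(), delete_second());
          myMap.clear();   // the map still holds dangling pointers until cleared
          return 0;
      }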

    Read the article

  • Why does my mic boost automatically go to 100 on every boot?

    - by Ben
    When my computer turns on, it automatically sets the "mic boost" sound setting to 100. This causes a loud static sound in my speakers. I can manually go to alsamixer and turn the mic boost down manually, but I would prefer it if I didn't have to do this every time I turn the computer on. I've tried running sudo alsactl store after fixing the settings, and this does save them, but I have to run sudo alsactl restore to restore the settings. This means that I have to manually fix the sound every time I start the computer anyway, so it isn't really a fix. I tried putting sudo alsactl restore in my startup programs, but that didn't seem to fix anything. I'm running Ubuntu 12.04, but I started having this problem before upgrading from 11.10. I'm using a Sony Vaio laptop. I'm not really sure what made it start; it seemed like I just started having the problem randomly one day. Any help would be appreciated! Edit: here is the output from running amixer: http://paste.ubuntu.com/1060080/

    Read the article

  • Whats the best data-structure for storing 2-tuple (a, b) which support adding, deleting tuples and c

    - by bhups
    Hi. So here is my problem. I want to store 2-tuples (key, val) and want to perform the following operations:
      - keys are strings and values are integers
      - multiple keys can have the same value
      - adding new tuples
      - updating any key with a new value (any new or updated value is greater than the previous one, like timestamps)
      - fetching all the keys with values less than or greater than a given value
      - deleting tuples.
    A hash seems to be the obvious choice for updating a key's value, but then lookups via values will take longer (O(n)). The other option is a balanced binary search tree with key and value switched, so lookups via values will be fast (O(lg(n))) but updating a key will take O(n). So is there any data structure which can be used to address these issues? Thanks.
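
    One structure that covers all of the listed operations in O(log n) (a sketch only; the class and method names are made up): keep two ordered indexes in sync, a map from key to value and a multimap from value to keys, so value-range queries are also logarithmic plus output size:

      #include <map>
      #include <string>
      #include <utility>
      #include <vector>

      class TupleStore {
      public:
          // Insert a new tuple or move an existing key to a new (larger) value.
          void upsert(const std::string& key, int value) {
              std::map<std::string, int>::iterator it = byKey_.find(key);
              if (it != byKey_.end()) {
                  eraseFromValueIndex(key, it->second);
                  it->second = value;
              } else {
                  byKey_.insert(std::make_pair(key, value));
              }
              byValue_.insert(std::make_pair(value, key));
          }

          void erase(const std::string& key) {
              std::map<std::string, int>::iterator it = byKey_.find(key);
              if (it == byKey_.end()) return;
              eraseFromValueIndex(key, it->second);
              byKey_.erase(it);
          }

          // All keys whose value is strictly less than bound; O(log n + k).
          std::vector<std::string> keysWithValueLessThan(int bound) const {
              std::vector<std::string> out;
              std::multimap<int, std::string>::const_iterator end = byValue_.lower_bound(bound);
              for (std::multimap<int, std::string>::const_iterator it = byValue_.begin(); it != end; ++it)
                  out.push_back(it->second);
              return out;
          }

      private:
          // Remove exactly one (value, key) entry from the value index.
          void eraseFromValueIndex(const std::string& key, int value) {
              typedef std::multimap<int, std::string>::iterator Iter;
              std::pair<Iter, Iter> range = byValue_.equal_range(value);
              for (Iter it = range.first; it != range.second; ++it) {
                  if (it->second == key) { byValue_.erase(it); break; }
              }
          }

          std::map<std::string, int> byKey_;          // key -> value: O(log n) update
          std::multimap<int, std::string> byValue_;   // value -> keys: O(log n) range queries
      };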

    Read the article

  • Python: (sampling with replacement): efficient algorithm to extract the set of DISSIMILAR N-tuples from a set

    - by Homunculus Reticulli
    I have a set of items, from which I want to select DISSIMILAR tuples (more on the definition of dissimilar touples later). The set could contain potentially several thousand items, although typically, it would contain only a few hundreds. I am trying to write a generic algorithm that will allow me to select N items to form an N-tuple, from the original set. The new set of selected N-tuples should be DISSIMILAR. A N-tuple A is said to be DISSIMILAR to another N-tuple B if and only if: Every pair (2-tuple) that occurs in A DOES NOT appear in B Note: For this algorithm, A 2-tuple (pair) is considered SIMILAR/IDENTICAL if it contains the same elements, i.e. (x,y) is considered the same as (y,x). This is a (possible variation on the) classic Urn Problem. A trivial (pseudocode) implementation of this algorithm would be something along the lines of def fetch_unique_tuples(original_set, tuple_size): while True: # randomly select [tuple_size] items from the set to create first set # create a key or hash from the N elements and store in a set # store selected N-tuple in a container if end_condition_met: break I don't think this is the most efficient way of doing this - and though I am no algorithm theorist, I suspect that the time for this algorithm to run is NOT O(n) - in fact, its probably more likely to be O(n!). I am wondering if there is a more efficient way of implementing such an algo, and preferably, reducing the time to O(n). Actually, as Mark Byers pointed out there is a second variable m, which is the size of the number of elements being selected. This (i.e. m) will typically be between 2 and 5. Regarding examples, here would be a typical (albeit shortened) example: original_list = ['CAGG', 'CTTC', 'ACCT', 'TGCA', 'CCTG', 'CAAA', 'TGCC', 'ACTT', 'TAAT', 'CTTG', 'CGGC', 'GGCC', 'TCCT', 'ATCC', 'ACAG', 'TGAA', 'TTTG', 'ACAA', 'TGTC', 'TGGA', 'CTGC', 'GCTC', 'AGGA', 'TGCT', 'GCGC', 'GCGG', 'AAAG', 'GCTG', 'GCCG', 'ACCA', 'CTCC', 'CACG', 'CATA', 'GGGA', 'CGAG', 'CCCC', 'GGTG', 'AAGT', 'CCAC', 'AACA', 'AATA', 'CGAC', 'GGAA', 'TACC', 'AGTT', 'GTGG', 'CGCA', 'GGGG', 'GAGA', 'AGCC', 'ACCG', 'CCAT', 'AGAC', 'GGGT', 'CAGC', 'GATG', 'TTCG'] # Select 3-tuples from the original list should produce a list (or set) similar to: [('CAGG', 'CTTC', 'ACCT') ('CAGG', 'TGCA', 'CCTG') ('CAGG', 'CAAA', 'TGCC') ('CAGG', 'ACTT', 'ACCT') ('CAGG', 'CTTG', 'CGGC') .... ('CTTC', 'TGCA', 'CAAA') ] [[Edit]] Actually, in constructing the example output, I have realized that the earlier definition I gave for UNIQUENESS was incorrect. I have updated my definition and have introduced a new metric of DISSIMILARITY instead, as a result of this finding.

    Read the article

  • Qt, MSVC, and /Zc:wchar_t- == I want to blow up the world

    - by Noah Roberts
    So Qt is compiled with /Zc:wchar_t- on windows. What this means is that instead of wchar_t being a typedef for some internal type (__wchar_t I think) it becomes a typedef for unsigned short. The really cool thing about this is that the default for MSVC is the opposite, which of course means that the libraries you're using are likely compiled with wchar_t being a different type than Qt's wchar_t. This doesn't become an issue of course until you try to use something like std::wstring in your code; especially when one or more libraries have functions that accept it as parameters. What effectively happens is that your code happily compiles but then fails to link because it's looking for definitions using std::wstring<unsigned short...> but they only contain definitions expecting std::wstring<__wchar_t...> (or whatever). So I did some web searching and ran into this link: http://bugreports.qt.nokia.com/browse/QTBUG-6345 Based on the statement by Thiago Macieira, "Sorry, we will not support building Qt like this," I've been worried that fixing Qt to work like everything else might cause some problem and have been trying to avoid it. We recompiled all of our support libraries with the /Zc:wchar_t- flag and have been fairly content with that until a couple days ago when we started trying to port over (we're in the process of switching from Wx to Qt) some serialization code. Because of how win32 works, and because Wx just wraps win32, we've been using std::wstring to represent string data with the intent of making our product as i18n ready as possible. We did some testing and Wx did not work with multibyte characters when trying to print special stuff (even not so special stuff like the degree symbol was an issue). I'm not so sure that Qt has this problem since QString isn't just a wrapper to the underlying _TCHAR type but is a Unicode monster of some sort. At any rate, the serialization library in boost has compiled parts. We've attempted to recompile boost with /Zc:wchar_t- but so far our attempts to tell bjam to do this have gone unheeded. We're at an impasse. From where I'm sitting I have three options: Recompile Qt and hope it works with /Zc:wchar_t. There's some evidence around the web that others have done this but I have no way of predicting what will happen. All attempts to ask Qt people on forums and such have gone unanswered. Hell, even in that very bug report someone asks why and it just sat there for a year. Keep fighting with bjam until it listens. Right now I've got someone under me doing that and I have more experience fighting with things to get what I want but I do have to admit to getting rather tired of it. I'm also concerned that I'll KEEP running into this issue just because Qt wants to be a c**t. Stop using wchar_t for anything. Unfortunately my i18n experience is pretty much 0 but it seems to me that I just need to find the right to/from function in QString (it has a BUNCH) to encode the Unicode into 8-bytes and visa-versa. UTF8 functions look promising but I really want to be sure that no data will be lost if someone from Zimbabfuckegypt starts writing in their own language and the documentation in QString frightens me a little into thinking that could happen. Of course, I could always run into some library that insists I use wchar_t and then I'm back to 1 or 2 but I rather doubt that would happen. So, what's my question... Which of these options is my best bet? 
Is Qt going to eventually cause me to gouge out my own eyes because I decided to compile it with /Zc:wchar_t anyway? What's the magic incantation to get boost to build with /Zc:wchar_t- and will THAT cause permanent mental damage? Can I get away with just using the standard 8-bit (well, 'common' anyway) character classes and be i18n compliant/ready? How do other Qt developers deal with this mess?

    Read the article

  • Binary serialization/de-serialization in C++ and C#

    - by 6pack kid
    Hello. I am working on a distributed application which has two components. One is written in standard C++ (not managed C++) and the other one is written in C#. Both are communicating via a message bus. I have a situation in which I need to pass objects from C++ to C# application and for this I need to serialize those objects in C++ and de-serialize them in C# (something like marshaling/un-marshaling in .NET). I need to perform this serialization in binary and not in XML (due to performance reasons). I have used Boost.Serialization to do this when both ends were implemented in C++ but now that I have a .NET application on one end, Boost.Serialization is not a viable solution. I am looking for a solution that allows me to perform (de)serialization across C++ and .NET boundary i.e., cross platform binary serialization. I know I can implement the (de)serialization code in a C++ dll and use P/Invoke in the .NET application, but I want to keep that as a last resort. Also, I want to know if I use some standard like gzip, will that be efficient? Are there any other alternatives to gzip? What are the pros/cons of them? Thanks
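
    By way of illustration only (this is not the questioner's format, and the Order type is made up): one library-free option is to agree on an explicit wire layout, e.g. fixed-width little-endian integers and length-prefixed strings, which .NET's BinaryReader/BinaryWriter also speak natively:

      #include <stdint.h>
      #include <string>
      #include <vector>

      // Append a 32-bit unsigned integer in little-endian byte order.
      void put_u32(std::vector<uint8_t>& out, uint32_t v) {
          for (int i = 0; i < 4; ++i) out.push_back(static_cast<uint8_t>(v >> (8 * i)));
      }

      // Append a string as a 32-bit length prefix followed by raw bytes.
      void put_string(std::vector<uint8_t>& out, const std::string& s) {
          put_u32(out, static_cast<uint32_t>(s.size()));
          out.insert(out.end(), s.begin(), s.end());
      }

      struct Order {          // illustrative payload
          uint32_t id;
          std::string symbol;
      };

      std::vector<uint8_t> serialize(const Order& o) {
          std::vector<uint8_t> buf;
          put_u32(buf, o.id);
          put_string(buf, o.symbol);
          return buf;         // ready to hand to the message bus
      }

    Schema-driven formats such as Protocol Buffers or Thrift are the usual heavier-duty alternative when the message layout is expected to evolve over time.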

    Read the article

  • Looking to reimplement build toolchain from bash/grep/sed/awk/(auto)make/configure to something more

    - by wash
    I currently maintain a few boxes that house a loosely related cornucopia of coding projects, databases and repositories (ranging from a homebrew *nix distro to my class notes), maintained by myself and a few equally pasty-skinned nerdy friends (all of said cornucopia is stored in SVN). The vast majority of our code is in C/C++/assembly (a few utilities are in python/perl/php, we're not big java fans), compiled in gcc. Our build toolchain typically consists of a hodgepodge of make, bash, grep, sed and awk. Recent discovery of a Makefile nearly as long as the program it builds (as well as everyone's general anxiety with my cryptic sed and awking) has motivated me to seek a less painful build system. Currently, the strongest candidate I've come across is Boost Build/Bjam as a replacement for GNU make and python as a replacement for our build-related bash scripts. Are there any other C/C++/asm build systems out there worth looking into? I've browsed through a number of make alternatives, but I haven't found any that are developed by names I know aside from Boost's. (I should note that an ability to easily extract information from svn commandline tools such as svnversion is important, as well as enough flexibility to configure for builds of asm projects as easily as c/c++ projects)

    Read the article

  • C++ meta-splat function

    - by aaa
    Hello. Is there an existing function (in Boost MPL or Fusion) to splat a meta-vector into variadic template arguments? For example: splat<vector<T1, T2, ...>, function>::type same as function<T1, T2, ...> My search has not found one, and I do not want to reinvent one if it already exists. Edit: after some tinkering, it is apparently next to impossible to accomplish this in a general way, as it would require declaring the full template template parameter list for all possible cases. The only reasonable solution is to use a macro: #define splat(name, function) \ template<class T, ...> struct name; \ template<class T> \ struct name<T,typename boost::enable_if_c< \ result_of::size<T>::value == 1>::type> { \ typedef function< \ typename result_of::value_at_c<T,0>::type \ > type; \ }; Oh well. Thank you.
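
    For what it's worth, C++11 variadic templates (which post-date this question) make the non-macro version straightforward; note the sketch uses its own variadic list type rather than boost::mpl::vector, whose C++03 implementation pads unused slots with placeholder parameters:

      #include <type_traits>

      template <class Seq, template <class...> class F> struct splat;

      // Unpack the sequence's parameters into F.
      template <template <class...> class Seq, class... Ts, template <class...> class F>
      struct splat<Seq<Ts...>, F> {
          typedef F<Ts...> type;
      };

      template <class... Ts> struct list {};            // a variadic "meta-vector"
      template <class A, class B> struct pair_of {};    // some binary class template

      static_assert(std::is_same<splat<list<int, char>, pair_of>::type,
                                 pair_of<int, char> >::value,
                    "splat<list<int, char>, pair_of> is pair_of<int, char>");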

    Read the article

  • How to handle 'this' pointer in constructor?

    - by Kyle
    I have objects which create other child objects within their constructors, passing 'this' so the child can save a pointer back to its parent. I use boost::shared_ptr extensively in my programming as a safer alternative to std::auto_ptr or raw pointers. So the child would have code such as shared_ptr<Parent>, and boost provides the shared_from_this() method which the parent can give to the child. My problem is that shared_from_this() cannot be used in a constructor, which isn't really a crime because 'this' should not be used in a constructor anyways unless you know what you're doing and don't mind the limitations. Google's C++ Style Guide states that constructors should merely set member variables to their initial values. Any complex initialization should go in an explicit Init() method. This solves the 'this-in-constructor' problem as well as a few others as well. What bothers me is that people using your code now must remember to call Init() every time they construct one of your objects. The only way I can think of to enforce this is by having an assertion that Init() has already been called at the top of every member function, but this is tedious to write and cumbersome to execute. Are there any idioms out there that solve this problem at any step along the way?
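
    One idiom that enforces the second phase (a sketch; the class names are placeholders): make the constructor private and expose a static factory, so the only way to obtain an instance also runs Init(), and shared_from_this() is only ever called once a shared_ptr already owns the object:

      #include <boost/enable_shared_from_this.hpp>
      #include <boost/shared_ptr.hpp>

      class Parent : public boost::enable_shared_from_this<Parent> {
      public:
          typedef boost::shared_ptr<Parent> Ptr;

          // Callers cannot forget the second phase: create() is the only entry point.
          static Ptr create() {
              Ptr p(new Parent());
              p->init();            // safe: *p is already managed by a shared_ptr
              return p;
          }

      private:
          Parent() {}               // trivial: only sets members

          void init() {
              // spawn children here, handing them shared_from_this(), e.g.
              // child_ = Child::create(shared_from_this());
          }
      };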

    Read the article

  • A generic C++ library that provides QtConcurrent functionality?

    - by Lucas
    QtConcurrent is awesome. I'll let the Qt docs speak for themselves: QtConcurrent includes functional programming style APIs for parallel list processing, including a MapReduce and FilterReduce implementation for shared-memory (non-distributed) systems, and classes for managing asynchronous computations in GUI applications. For instance, you give QtConcurrent::map() an iterable sequence and a function that accepts items of the type stored in the sequence, and that function is applied to all the items in the collection. This is done in a multi-threaded manner, with a thread pool equal to the number of logical CPU's on the system. There are plenty of other function in QtConcurrent, like filter(), filteredReduced() etc. The standard CompSci map/reduce functions and the like. I'm totally in love with this, but I'm starting work on an OSS project that will not be using the Qt framework. It's a library, and I don't want to force others to depend on such a large framework like Qt. I'm trying to keep external dependencies to a minimum (it's the decent thing to do). I'm looking for a generic C++ framework that provides me with the same/similar high-level primitives that QtConcurrent does. AFAIK boost has nothing like this (I may be wrong though). boost::thread is very low-level compared to what I'm looking for. I know C# has something very similar with their Parallel Extensions so I know this isn't a Qt-only idea. What do you suggest I use?
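
    Not a library recommendation, but for scale: a bare-bones QtConcurrent::map-style helper can be sketched on top of boost::thread alone (all names are illustrative; random-access iterators assumed; no load balancing, futures, or exception handling):

      #include <boost/thread.hpp>
      #include <algorithm>
      #include <cstddef>
      #include <iterator>

      // One contiguous chunk of work: applies f to every element in [begin, end).
      template <class Iter, class Func>
      struct map_chunk {
          Iter begin, end; Func f;
          map_chunk(Iter b, Iter e, Func fn) : begin(b), end(e), f(fn) {}
          void operator()() { for (Iter it = begin; it != end; ++it) f(*it); }
      };

      // Runs f over [first, last) using one chunk per hardware thread.
      template <class Iter, class Func>
      void parallel_map(Iter first, Iter last, Func f) {
          unsigned workers = boost::thread::hardware_concurrency();
          if (workers == 0) workers = 2;
          const std::ptrdiff_t n = std::distance(first, last);
          const std::ptrdiff_t chunk = (n + workers - 1) / workers;

          boost::thread_group pool;
          for (unsigned w = 0; w < workers; ++w) {
              std::ptrdiff_t lo = std::min<std::ptrdiff_t>(w * chunk, n);
              std::ptrdiff_t hi = std::min<std::ptrdiff_t>(lo + chunk, n);
              if (lo == hi) break;
              pool.create_thread(map_chunk<Iter, Func>(first + lo, first + hi, f));
          }
          pool.join_all();
      }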

    Read the article

< Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >