Search Results

Search found 2443 results on 98 pages for 'boost uuid'.


  • Redirect C++ std::clog to syslog on Unix

    - by kriss
    I work on Unix on a C++ program that sends messages to syslog. The current code uses the syslog() call, which works like printf. I would now prefer to use a stream for that purpose instead, typically the built-in std::clog. But clog merely redirects output to stderr, not to syslog, and that is useless for me as I also use stderr and stdout for other purposes. I've seen in another answer that it's quite easy to redirect it to a file using rdbuf(), but I see no way to apply that method to calling syslog, as openlog does not return a file descriptor I could use to tie a stream to it. Is there another method to do that? (It looks pretty basic for Unix programming.) Edit: I'm looking for a solution that does not use an external library. What @Chris is proposing could be a good start but is still a bit vague to become the accepted answer. Edit: using Boost.IOStreams is OK, as my project already uses Boost anyway. Linking with an external library is possible but is also a concern, as it's GPL code. Dependencies are also a burden: they may conflict with other components, not be available on my Linux distribution, introduce third-party bugs, etc. If this is the only solution I may consider completely avoiding streams... (a pity).
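    One way to do this without any external library is a custom streambuf that buffers characters and hands each complete line to syslog(3). A minimal sketch -- the class name and the LOG_USER/LOG_INFO choices are just illustrative:

        #include <syslog.h>
        #include <iostream>
        #include <streambuf>
        #include <string>

        // Buffers characters written to the stream and forwards each
        // complete line (or explicit flush) to syslog(3).
        class SyslogBuf : public std::streambuf {
        public:
            explicit SyslogBuf(int priority) : priority_(priority) {}
        protected:
            virtual int_type overflow(int_type c) {
                if (c != traits_type::eof()) {
                    if (traits_type::to_char_type(c) == '\n')
                        flushLine();
                    else
                        line_ += traits_type::to_char_type(c);
                }
                return c;
            }
            virtual int sync() { flushLine(); return 0; }
        private:
            void flushLine() {
                if (!line_.empty()) {
                    syslog(priority_, "%s", line_.c_str());
                    line_.clear();
                }
            }
            int priority_;
            std::string line_;
        };

        int main() {
            openlog("myapp", LOG_PID, LOG_USER);
            SyslogBuf buf(LOG_INFO);
            std::streambuf* old = std::clog.rdbuf(&buf);  // redirect clog
            std::clog << "hello via syslog" << std::endl;
            std::clog.rdbuf(old);  // restore before buf goes out of scope
            closelog();
        }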


  • How to install PySide v0.3.1 on Mac OS X?

    - by ivo
    I'm trying to install PySide v0.3.1 on Mac OS X, for Qt development in Python. As prerequisites, I have installed CMake and the Qt SDK. I have gone through the documentation and come up with the following installation script:

        export PYSIDE_BASE_DIR="<my_dir>"
        export APIEXTRACTOR_DIR="$PYSIDE_BASE_DIR/apiextractor-0.5.1"
        export GENERATORRUNNER_DIR="$PYSIDE_BASE_DIR/generatorrunner-0.4.2"
        export SHIBOKEN_DIR="$PYSIDE_BASE_DIR/shiboken-0.3.1"
        export PYSIDE_DIR="$PYSIDE_BASE_DIR/pyside-qt4.6+0.3.1"
        export PYSIDE_TOOLS_DIR="$PYSIDE_BASE_DIR/pyside-tools-0.1.3"

        pushd .
        cd $APIEXTRACTOR_DIR
        cmake .
        cd $GENERATORRUNNER_DIR
        cmake -DApiExtractor_DIR=$APIEXTRACTOR_DIR .
        cd $SHIBOKEN_DIR
        cmake -DApiExtractor_DIR=$APIEXTRACTOR_DIR -DGeneratorRunner_DIR=$GENERATORRUNNER_DIR .
        cd $PYSIDE_DIR
        cmake -DShiboken_DIR=$SHIBOKEN_DIR/libshiboken -DGENERATOR=$GENERATORRUNNER_DIR .
        cd $PYSIDE_TOOLS_DIR
        cmake .
        popd

    Now, I don't know if this installation script is OK, but apparently everything works fine. Each component (apiextractor, generatorrunner, shiboken, pyside-qt and pyside-tools) gets compiled into its own directory. The problem is that I don't quite understand how PySide gets into the system's Python environment. In fact, when I start a Python shell, I cannot import PySide:

        >>> import PySide
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        ImportError: No module named PySide

    Note: I am aware of the "Installing PySide - OSX" question, but that question is not relevant anymore, because it is about a specific dependency on the Boost libraries, and with version 0.3.0 PySide moved from a Boost-based code base to a CPython one.


  • SFINAE failing with enum template parameter

    - by zeroes00
    Can someone explain the following behaviour? (I'm using Visual Studio 2010.)

    Header:

        #pragma once
        #include <boost\utility\enable_if.hpp>
        using boost::enable_if_c;

        enum WeekDay {MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY};

        template<WeekDay DAY>
        typename enable_if_c< DAY==SUNDAY, bool >::type goToWork() {return false;}

        template<WeekDay DAY>
        typename enable_if_c< DAY!=SUNDAY, bool >::type goToWork() {return true;}

    Source:

        bool b = goToWork<MONDAY>();

    Compiling this gives

        error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY!=6,bool>::type goToWork(void)'
        error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY==6,bool>::type goToWork(void)'

    But if I change the function template parameter from the enum type WeekDay to int, it compiles fine:

        template<int DAY>
        typename enable_if_c< DAY==SUNDAY, bool >::type goToWork() {return false;}

        template<int DAY>
        typename enable_if_c< DAY!=SUNDAY, bool >::type goToWork() {return true;}

    Also, normal function template specialization works fine, no surprises there:

        template<WeekDay DAY> bool goToWork() {return true;}
        template<> bool goToWork<SUNDAY>() {return false;}

    To make things even weirder, if I change the source file to use any other WeekDay than MONDAY or TUESDAY, i.e.

        bool b = goToWork<THURSDAY>();

    the error changes to this:

        error C2440: 'specialization' : cannot convert from 'int' to 'const WeekDay'
        Conversion to enumeration type requires an explicit cast (static_cast, C-style cast or function-style cast)
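    One workaround that keeps the WeekDay parameter is to hoist the comparison out of the function signature into a small trait, so the signature carries a dependent bool rather than an enum comparison. A sketch -- untested on VS2010 specifically, so treat the claim that it dodges this particular compiler quirk as an assumption:

        #include <boost/utility/enable_if.hpp>
        using boost::enable_if_c;

        enum WeekDay {MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY};

        // The enum comparison now happens inside a class template,
        // not in the function template's signature.
        template<WeekDay DAY>
        struct is_sunday { static const bool value = (DAY == SUNDAY); };

        template<WeekDay DAY>
        typename enable_if_c< is_sunday<DAY>::value, bool >::type
        goToWork() { return false; }

        template<WeekDay DAY>
        typename enable_if_c< !is_sunday<DAY>::value, bool >::type
        goToWork() { return true; }

        int main() {
            bool b = goToWork<MONDAY>();   // true
            bool c = goToWork<SUNDAY>();   // false
            (void)b; (void)c;
        }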


  • Need help in tuning a sql-query

    - by Viper
    Hello, I need some help tuning this SQL statement. The execution time is around 125 ms. During the runtime of my program this SQL (better: equally structured SQLs for different tables) will be called 300,000 times. The average row count in the tables lies around 10,000,000 rows, and new rows (updates/inserts) are added with a timestamp each day. The data interesting for this particular export program lies in the last 1-3 days; maybe this is helpful for choosing an index to create. The data I need is the currently valid row for a given id, plus the preceding data row to get the updates (if one exists). We use an Oracle 11g database and the .NET Framework 3.5.

    The SQL statement to tune:

        select ID_SOMETHING,     -- Number(12)
               ID_CONTRIBUTOR,   -- Char(4 Byte)
               DATE_VALID_FROM,  -- DATE
               DATE_VALID_TO     -- DATE
          from TBL_SOMETHING XID
         where ID_SOMETHING = :ID_INSTRUMENT
           and ID_CONTRIBUTOR = :ID_CONTRIBUTOR
           and DATE_VALID_FROM <= :EXPORT_DATE
           and DATE_VALID_TO >= :EXPORT_DATE
         order by DATE_VALID_FROM asc;

    I uploaded the current explain plan for this query. I'm not a database expert, so I don't know which index type would fit this requirement best; I have seen that there are many different possible index types which could be applied. Maybe Oracle optimizer hints are helpful, too. Does anyone have a good idea for tuning this SQL, or can point me in the right direction?


  • C# style Action<T>, Func<T,T>, etc in C++0x

    - by Austin Hyde
    C# has generic function types such as Action<T> or Func<T,U,V,...>. With the advent of C++0x and the ability to have template typedefs and variadic template parameters, it seems this should be possible. The obvious solution to me would be this:

        template <typename T>
        using Action<T> = void (*)(T);

    however, this does not accommodate functors or C++0x lambdas, and beyond that, it does not compile, with the error "expected unqualified-id before 'using'". My next attempt was to perhaps use boost::function:

        template <typename T>
        using Action<T> = boost::function<void (T)>;

    This doesn't compile either, for the same reason. My only other idea would be STL-style template arguments:

        template <typename T, typename Action>
        void foo(T value, Action f) {
            f(value);
        }

    But this doesn't provide a strongly typed solution, and is only relevant inside the templated function. Now, I will be the first to admit that I am not the C++ wiz I prefer to think I am, so it's very possible there is an obvious solution I'm not seeing. Is it possible to have C# style generic function types in C++?
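    For what it's worth, the alias-template syntax that was eventually standardized in C++11 does exactly this, with one catch: the alias name takes no <T> on the left-hand side, which is what the declarations above trip over. A sketch using std::function (boost::function works the same way):

        #include <functional>
        #include <iostream>

        template <typename T>
        using Action = std::function<void(T)>;       // C#-style Action<T>

        template <typename R, typename... Args>      // note: result type first,
        using Func = std::function<R(Args...)>;      // unlike C#'s Func

        void printInt(int x) { std::cout << x << "\n"; }

        int main() {
            Action<int> a = printInt;                // plain function
            Func<int, int, int> add =
                [](int x, int y) { return x + y; };  // lambda
            a(add(2, 3));                            // prints 5
        }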


  • Why does this object wonk out & get deleted?

    - by brainydexter
    Stepping through the debugger, the BBox object is okay at the entry of the function, but as soon as it enters the function, the vfptr member points to 0xccccc. I don't get it. What is causing this? Why is there a virtual table reference in there when the object is not derived from another class? (Though it resides in GameObject, from which my Player class inherits, and I retrieve the BBox from within the player. But why does the BBox have the reference? Shouldn't it be the player that is maintained in that reference?)

    Some code for reference. A. I retrieve the bounding box from the player. This returns a bounding box as expected. I then send its address to GetGridCells:

        const BoundingBox& l_Bbox = l_pPlayer->GetBoundingBox();
        boost::unordered_set<Cell*, CellPHash>& l_GridCells = GetGridCells(&l_Bbox);

    B. This is where a_pBoundingBox goes crazy and gets that garbage value:

        boost::unordered_set<Cell*, CellPHash> CollisionMgr::GetGridCells(const BoundingBox *a_pBoundingBox)
        {

    I think the following code is also pertinent, so I'm sticking this in here anyway:

        const BoundingBox& Player::GetBoundingBox(void)
        {
            return BoundingBox( &GetBoundingSphere() );
        }

        const BoundingSphere& Player::GetBoundingSphere(void)
        {
            BoundingSphere& l_BSphere = m_pGeomMesh->m_BoundingSphere;
            l_BSphere.m_Center = GetPosition();
            return l_BSphere;
        }

        // BoundingBox constructor
        BoundingBox(const BoundingSphere* a_pBoundingSphere);

    Can anyone please give me some idea as to why this is happening? Also, if you want me to post more code, please do let me know. Thanks!
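    The likely culprit is GetBoundingBox itself: it constructs a temporary BoundingBox and returns it as a const reference, so the object is already destroyed when the caller binds l_Bbox; the 0xCC pattern is MSVC's debug fill for dead stack memory. A self-contained sketch of the problem and the fix, using hypothetical simplified types:

        #include <iostream>

        struct BoundingSphere { int center; };

        struct BoundingBox {
            explicit BoundingBox(const BoundingSphere* s) : center(s->center) {}
            int center;
        };

        // BAD: returns a reference to a temporary; it dangles immediately.
        // (This mirrors Player::GetBoundingBox in the question.)
        const BoundingBox& makeBoxBad(const BoundingSphere& s) {
            return BoundingBox(&s);
        }

        // GOOD: return by value; always valid at the call site.
        BoundingBox makeBoxGood(const BoundingSphere& s) {
            return BoundingBox(&s);
        }

        int main() {
            BoundingSphere s = { 42 };
            BoundingBox box = makeBoxGood(s);
            std::cout << box.center << "\n";   // 42
        }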


  • How close can I get C# to the performance of C++ for small intensive tasks?

    - by SLC
    I was thinking about the speed difference of C++ to C# being mostly about C# compiling to byte-code that is taken in by the JIT compiler (is that correct?) and all the checks C# does. I notice that it is possible to turn a lot of these features off, both in the compile options and possibly through using the unsafe keyword, as unsafe code is not verifiable by the common language runtime. Therefore, if you were to write a simple console application in both languages that flipped an imaginary coin an infinite number of times and displayed the results to the screen every 10,000 or so iterations, how much speed difference would there be? I chose this because it's a very simple program. I'd like to test this, but I don't know C++ or have the tools to compile it. This is my C# version though:

        static void Main(string[] args)
        {
            unsafe
            {
                Random rnd = new Random();
                int heads = 0, tails = 0;
                while (true)
                {
                    if (rnd.NextDouble() > 0.5)
                        heads++;
                    else
                        tails++;
                    if ((heads + tails) % 1000000 == 0)
                        Console.WriteLine("Heads: {0} Tails: {1}", heads, tails);
                }
            }
        }

    Is the difference enough to warrant deliberately compiling sections of code "unsafe" or into DLLs that do not have some of the compile options like overflow checking enabled? Or does it go the other way, where it would be beneficial to compile sections in C++? I'm sure interop speed comes into play too, then. To avoid subjectivity, I reiterate the specific parts of this question as: Does C# get a performance boost from using unsafe code? Do compile options such as disabling overflow checking boost performance, and do they affect unsafe code? Would the program above be faster in C++ or negligibly different? Is it worth compiling long, intensive number-crunching tasks in a language such as C++, or using /unsafe for a bonus? Less subjectively, could I complete an intensive operation faster by doing this?
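    For anyone who wants to run the comparison but, like the asker, doesn't have a C++ version handy, here is a rough counterpart. Note that rand() is a different (and usually weaker) generator than .NET's Random, so timings are ballpark rather than apples-to-apples:

        #include <cstdio>
        #include <cstdlib>
        #include <ctime>

        int main() {
            std::srand(static_cast<unsigned>(std::time(0)));
            long long heads = 0, tails = 0;
            for (;;) {
                if (std::rand() > RAND_MAX / 2)   // imaginary coin flip
                    ++heads;
                else
                    ++tails;
                if ((heads + tails) % 1000000 == 0)
                    std::printf("Heads: %lld Tails: %lld\n", heads, tails);
            }
        }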


  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    Hi all, I am writing an application using Boost threads and using Boost barriers to synchronize the threads. I have two machines to test the application.

    Machine 1 is a Core 2 Duo (T8300) machine (Windows XP Professional, 4GB RAM), where I am getting the following performance figures:

        Number of threads: 1, TPS: 21
        Number of threads: 2, TPS: 35 (66% improvement)

    A further increase in the number of threads decreases the TPS, but that is understandable, as the machine has only two cores.

    Machine 2 is a two-socket quad-core (Xeon X5355) machine (Windows 2003 Server with 4GB RAM) and has 8 effective cores:

        Number of threads: 1, TPS: 21
        Number of threads: 2, TPS: 27 (28% improvement)
        Number of threads: 4, TPS: 25
        Number of threads: 8, TPS: 24

    As you can see, performance degrades after 2 threads (though it has 8 cores). If the program had some bottleneck, performance should have degraded for 2 threads as well. Any ideas? Explanations? Does the OS have some role in performance? It seems like the Core 2 Duo (2.4GHz) scales better than the Xeon X5355 (2.66GHz), though the Xeon has the better clock speed. Thank you. -Zoolii


  • Trouble with __VA_ARGS__

    - by Noah Roberts
    (Regarding the question "C++ preprocessor __VA_ARGS__ number of arguments":) The accepted answer there doesn't work for me. I've tried with MSVC++ 10 and g++ 3.4.5. I also crunched the example down into something smaller and started trying to get some information printed out to me in the error:

        template < typename T > struct print;

        #include <boost/mpl/vector_c.hpp>

        #define RSEQ_N 10,9,8,7,6,5,4,3,2,1,0
        #define ARG_N(_1,_2,_3,_4,_5,_6,_7,_8,_9,_10,N,...) N
        #define ARG_N_(...) ARG_N(__VA_ARGS__)
        #define XXX 5,RSEQ_N

        #include <iostream>

        int main()
        {
            print< boost::mpl::vector_c<int, ARG_N_( XXX ) > > g;  // ARG_N doesn't work either.
        }

    It appears to me that the argument for ARG_N ends up being 'XXX' instead of 5,RSEQ_N, much less 5,10,...,0. The error output of g++ more specifically says that only one argument is supplied. Having trouble believing that the answer would be proposed and then accepted when it totally fails to work, so what am I doing wrong? Why is XXX being interpreted as the argument and not being expanded? In my own messing around, everything works fine until I try to pass off __VA_ARGS__ to a macro containing some names followed by ..., like so:

        #define WTF(X,Y,...) X , Y , __VA_ARGS__
        #define WOT(...) WTF(__VA_ARGS__)

        WOT(52,2,5,2,2)

    I've tried both with and without () in the various macros that take no input.
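    On MSVC the classic symptom is exactly this: __VA_ARGS__ is handed to the next macro as a single blob instead of being re-scanned as separate arguments. The usual workaround is to force one extra round of expansion through a helper macro; a sketch (EXPAND is an invented helper name):

        // Force an extra macro-expansion pass so the expanded __VA_ARGS__
        // is re-split into individual arguments before WTF's parameter
        // list is matched.
        #define EXPAND(x) x
        #define WTF(X, Y, ...) X , Y , __VA_ARGS__
        #define WOT(...) EXPAND(WTF(__VA_ARGS__))

        WOT(52,2,5,2,2)   // expands to: 52 , 2 , 5, 2, 2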


  • Is a "factory" method the right pattern?

    - by jdt141
    Hey all. So I'm working to improve an existing implementation. I have a number of polymorphic classes that are all composed into a higher-level container class. The problem I'm dealing with at the moment is that the higher-level container class, well, sucks. It looks something like this, which I really don't have a problem with (as the polymorphic classes in the container should be public). My real issue is the constructor...

        /*
         * class1 and class2 derive from the same superclass.
         */
        class Container {
        public:
            boost::shared_ptr<ComposedClass1> class1;
            boost::shared_ptr<ComposedClass2> class2;
        private:
            ...
        };

        /*
         * Constructor - builds the objects that we need in this container.
         */
        Container::Container(some params)
        {
            class1.reset(new ComposedClass1(...));
            class2.reset(new ComposedClass2(...));
        }

    What I really need is to make this container class more reusable. By hard-coding the member objects and instantiating them, it basically isn't, and can only be used once. A factory is one way to build what I need (potentially by supplying a list of objects and their specific types to be created?). What other ways are there to get around this problem? It seems like someone should have solved it before... Thanks!
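    Before reaching for a full factory, it may be enough to invert the construction: let the caller (or a small factory function) build the parts and hand them to the container. A sketch with placeholder names mirroring the question:

        #include <boost/shared_ptr.hpp>

        class SuperClass { public: virtual ~SuperClass() {} };
        class ComposedClass1 : public SuperClass {};
        class ComposedClass2 : public SuperClass {};

        class Container {
        public:
            // Dependency injection: the container no longer decides how its
            // parts are built, so it can be reused with any configuration.
            Container(boost::shared_ptr<ComposedClass1> c1,
                      boost::shared_ptr<ComposedClass2> c2)
                : class1(c1), class2(c2) {}

            boost::shared_ptr<ComposedClass1> class1;
            boost::shared_ptr<ComposedClass2> class2;
        };

        int main() {
            boost::shared_ptr<ComposedClass1> c1(new ComposedClass1);
            boost::shared_ptr<ComposedClass2> c2(new ComposedClass2);
            Container c(c1, c2);   // a factory function could wrap these lines
        }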


  • How do tools like Hiphop for PHP deal with heterogenous arrays?

    - by Derek Thurn
    I think HipHop for PHP is an interesting tool. It essentially converts PHP code into C++ code. Cross-compiling in this manner seems like a great idea, but I have to wonder, how do they overcome the fundamental differences between the two type systems? One specific example of my general question is heterogeneous data structures. Statically typed languages don't tend to let you put arbitrary types into an array or other container, because they need to be able to figure out the types on the other end. If I have a PHP array like this:

        $mixedBag = array("cat", 42, 8.5, false);

    how can this be represented in C++ code? One option would be to use void pointers (or the superior version, boost::any), but then you need to cast when you take stuff back out of the array... and I'm not at all convinced that the type inferencer can always figure out what to cast to at the other end. A better option, perhaps, would be something more like a union (or boost::variant), but then you need to enumerate all possible types at compile time... maybe possible, but certainly messy, since arrays can contain arbitrarily complex entities. Does anyone know how HipHop and similar tools, which go from a dynamic typing discipline to a static one, handle these types of problems?
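    The enumerate-all-types problem is smaller than it first looks: a dynamic language has a closed set of primitive kinds (bool, int, float, string, array, object, null), so a single tagged value type covers everything. A boost::variant sketch of that representation -- HipHop's runtime uses its own hand-rolled variant-like Value class, as far as I can tell, so this is only an illustration of the idea:

        #include <boost/variant.hpp>
        #include <iostream>
        #include <string>
        #include <vector>

        // One closed set of alternatives is enough; nesting would be
        // handled by letting the array type hold Values itself (see
        // boost::make_recursive_variant for that).
        typedef boost::variant<bool, long, double, std::string> Value;

        int main() {
            std::vector<Value> mixedBag;
            mixedBag.push_back(std::string("cat"));
            mixedBag.push_back(42L);
            mixedBag.push_back(8.5);
            mixedBag.push_back(false);

            // variant streams whichever alternative is currently active
            for (std::size_t i = 0; i < mixedBag.size(); ++i)
                std::cout << mixedBag[i] << "\n";
        }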


  • Is NUnit's ExpectedExceptionAttribute the only way to test if something raises an exception?

    - by Dariusz Walczak
    Hello, I'm completely new to C# and NUnit. In Boost.Test there is a family of BOOST_*_THROW macros. In Python's test module there is the TestCase.assertRaises method. As far as I understand it, in C# with NUnit (2.4.8) the only way of writing an exception test is to use ExpectedExceptionAttribute. Why should I prefer ExpectedExceptionAttribute over, let's say, Boost.Test's approach? What reasoning can stand behind this design decision? Why is that better in the case of C# and NUnit? Finally, if I decide to use ExpectedExceptionAttribute, how can I do some additional tests after the exception was raised and caught? Let's say that I want to test the requirement saying that an object has to be valid after some setter raised System.IndexOutOfRangeException. How would you fix the following code to compile and work as expected?

        [Test]
        public void TestSetterException()
        {
            Sth.SomeClass obj = new SomeClass();
            // The following statement won't compile.
            Assert.Raises( "System.IndexOutOfRangeException",
                           obj.SetValueAt( -1, "foo" ) );
            Assert.IsTrue( obj.IsValid() );
        }

    Edit: Thanks for your answers. Today I've found an "It's the Tests" blog entry where all three methods described by you are mentioned (and one more minor variation). It's a shame that I couldn't find it before :-(.


  • Copying a foreign Subversion repository to keep under dependencies

    - by Jonathan Sternberg
    I want to keep dependencies for my project in our own repository, so that we have consistent libraries for the entire team to work with. For example, I want our project to use the Boost libraries. I've seen this done in the past by putting dependencies under a "vendor" or "dependencies" folder. But I still want to be able to update these dependencies: if a new feature appears in a library and we need it, I want to be able to just update that repository within our own repository, without having to recopy it and put it under version control again. I'd also like for us to have the ability to change dependencies if a small change is needed, without stopping us from ever updating the library. I want the ability to do something like 'svn cp', and then be able to 'svn merge' in the future. I just tried this with the Boost trunk, but I'm not able to get any history using 'svn log' on the copy I made. How do I do this? What is usually done for large projects with dependencies?


  • is back_insert_iterator<> safe to be passed by value?

    - by afriza
    I have code that looks something like this:

        struct Data {
            int value;
        };

        class A {
        public:
            typedef std::deque<boost::shared_ptr<Data> > TList;
            std::back_insert_iterator<TList> GetInserter() {
                return std::back_inserter(m_List);
            }
        private:
            TList m_List;
        };

        class AA {
            boost::scoped_ptr<A> m_a;
        public:
            AA() : m_a(new A()) {}
            std::back_insert_iterator<A::TList> GetDataInserter() {
                return m_a->GetInserter();
            }
        };

        class B {
            template<class OutIt>
            void CopyInterestingDataTo(OutIt outIt) {
                // loop and check conditions for interesting data:
                // for every `it` in a Container<Data*>,
                // create a copy and store it
                for( ... it = ..; .. ; ..)
                    if (...) {
                        *outIt = OutIt::container_type::value_type(new Data(**it));
                        outIt++; // dummy
                    }
            }

            void func() {
                AA aa;
                CopyInterestingDataTo(aa.GetInserter());
                // aa.m_a->m_List is empty!
            }
        };

    The problem is that A::m_List is always empty, even after CopyInterestingDataTo() is called. However, if I debug and step into CopyInterestingDataTo(), the iterator does store the supposedly inserted data!
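    To answer the title question in isolation: yes, a back_insert_iterator holds a pointer to its container, so copies of it all append to the same container, and passing it by value is fine. A minimal self-contained check (if this prints 3, pass-by-value is not the culprit, and the problem lies elsewhere in the elided code):

        #include <deque>
        #include <iterator>
        #include <iostream>

        // The iterator is taken by value, as in the question.
        template <class OutIt>
        void fillThree(OutIt out) {
            for (int i = 0; i < 3; ++i) {
                *out = i;
                ++out;
            }
        }

        int main() {
            std::deque<int> d;
            fillThree(std::back_inserter(d));
            std::cout << d.size() << "\n";   // prints 3
        }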


  • optimize 2D array in C++

    - by Hristo
    I'm dealing with a 2D array with the following characteristics:

        const int cols = 500;
        const int rows = 100;
        int arr[rows][cols];

    I access the array arr in the following manner to do some work:

        for(int k = 0; k < T; ++k) {            // for each trainee
            myscore[k] = 0;
            for(int i = 0; i < N; ++i) {        // for each sample
                for(int j = 0; j < E[i]; ++j) { // for each expert
                    myscore[k] += delta(i, anotherArray[k][i], arr[j][i]);
                }
            }
        }

    So I am worried about the array 'arr' and not the other one. I need to make this more cache-friendly and also boost the speed. I was thinking perhaps transposing the array, but I wasn't sure how to do that. My implementation turns out to only work for square matrices. How would I make it work for non-square matrices? Also, would mapping the 2D array into a 1D array boost the performance? If so, how would I do that? Finally, any other advice on how else I can optimize this... I've run out of ideas, but I know that arr[j][i] is the place where I need to make changes, because I'm accessing column by column instead of row by row, and that is not cache-friendly at all. Thanks, Hristo
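    For a non-square matrix the trick is an out-of-place transpose into a second array with the dimensions swapped (in-place transposition is only straightforward for square matrices). A sketch:

        const int cols = 500;
        const int rows = 100;

        int arr [rows][cols];
        int arrT[cols][rows];   // transposed copy: dimensions swapped

        // Build once before the scoring loops. Afterward the hot loop
        // reads arrT[i][j] instead of arr[j][i], so the fastest-moving
        // index j walks contiguous memory -- the cache-friendly order.
        void buildTranspose() {
            for (int r = 0; r < rows; ++r)
                for (int c = 0; c < cols; ++c)
                    arrT[c][r] = arr[r][c];
        }

    As for flattening to 1D: a fixed-size 2D array is already one contiguous block, so the equivalent 1D indexing (arrT1d[i * rows + j] with int arrT1d[cols * rows]) mostly buys clarity about the layout rather than extra speed; it matters more when the rows are separately heap-allocated.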


  • Setting the default stack size on Linux globally for the program

    - by wowus
    So I've noticed that the default stack size for threads on Linux is 8MB (if I'm wrong, PLEASE correct me), and, incidentally, 1MB on Windows. This is quite bad for my application, as on a 4-core processor that means 64MB of space is used JUST for threads! The worst part is, I'm never using more than 100kB of stack per thread (I abuse the heap a LOT ;)). My solution right now is to limit the stack size of threads. However, I have no idea how to do this portably. Just for context, I'm using Boost.Thread for my threading needs. I'm okay with a little bit of #ifdef hell, but I'd like to know how to do it easily first. Basically, I want something like this (where windows_* is linked on Windows builds, and posix_* is linked under Linux builds):

        // windows_stack_limiter.c
        int limit_stack_size()
        {
            // Windows impl.
            return 0;
        }

        // posix_stack_limiter.c
        int limit_stack_size()
        {
            // Linux impl.
            return 0;
        }

        // stack_limiter.cpp
        int limit_stack_size();
        static volatile int placeholder = limit_stack_size();

    How do I flesh out those functions? Or, alternatively, am I just doing this entirely wrong? Remember, I have no control over the actual thread creation (no new params to CreateThread on Windows), as I'm using Boost.Thread.
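    Two data points that may help frame this. First, the 8MB is reserved address space, not committed memory; pages are only committed as the stack actually grows, so the real cost is smaller than it looks. Second, where you do control creation, the POSIX knob is pthread_attr_setstacksize (newer Boost.Thread releases, around 1.50 onward if memory serves, expose the same thing as boost::thread::attributes::set_stack_size). A sketch of the raw POSIX version:

        #include <pthread.h>
        #include <cstdio>

        void* worker(void*) {
            std::puts("running on a small stack");
            return 0;
        }

        int main() {
            pthread_attr_t attr;
            pthread_attr_init(&attr);
            // Size in bytes; must be at least PTHREAD_STACK_MIN.
            pthread_attr_setstacksize(&attr, 256 * 1024);

            pthread_t tid;
            pthread_create(&tid, &attr, worker, 0);
            pthread_join(tid, 0);
            pthread_attr_destroy(&attr);
            return 0;
        }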


  • C++: parsing with a simple regular expression, or should I use sscanf?

    - by Helltone
    I need to parse a string like func1(arg1, arg2); func2(arg3, arg4);. It's not a very complex parsing problem, so I would prefer to avoid resorting to flex/bison or similar utilities. My first approach was to try to use POSIX C regcomp/regexec or Boost's implementation of C++ std::regex. I wrote the following regular expression, which does not work (I'll explain why further on):

        "^"
        "[ ;\t\n]*"
        "("                              // (1) identifier
            "[a-zA-Z_][a-zA-Z0-9_]*"
        ")"
        "[ \t\n]*"
        "("                              // (2) non-marking
            "\["
            "("                          // (3) non-marking
                "[ \t]*"
                "("                      // (4..n-1) argument
                    "[a-zA-Z0-9_]+"
                ")"
                "[ \t\n]*"
                ","
            ")*"
            "[ \t\n]*"
            "("                          // (n) last argument
                "[a-zA-Z0-9_]+"
            ")"
            "]"
        ")?"
        "[ \t\n]*"
        ";"

    Note that group 1 captures the identifier, and groups 4..n-1 are intended to capture arguments except the last, which is captured by group n. When I apply this regex to, say, func(arg1, arg2, arg3), the result I get is an array {func, arg2, arg3}. This is wrong because arg1 is not in it! The problem is that in the standard regex libraries, submatches only capture the last match. In other words, if you have for instance the regex "((a*|b*))*" applied to "babb", the results of the inner match will be bb, and all previous captures will have been forgotten. Another thing that annoys me here is that in case of error there is no way to know which character was not recognized, as these functions provide very little information about the state of the parser when the input is rejected. So I don't know if I'm missing something here... In this case should I use sscanf or something similar instead? Note that I prefer to use C/C++ standard libraries (and maybe Boost).
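    Given the repeated-capture limitation, one common pattern that stays within Boost is two passes: one regex that matches a whole call and captures the argument list as a single blob, then an iterator pass that pulls the arguments out one by one, so each argument is its own match rather than a repeated group. A sketch using Boost.Regex:

        #include <boost/regex.hpp>
        #include <iostream>
        #include <string>

        int main() {
            std::string input = "func1(arg1, arg2); func2(arg3, arg4);";

            // Pass 1: one match per call; group 1 = name, group 2 = raw args.
            boost::regex callRe("([A-Za-z_][A-Za-z0-9_]*)\\s*\\(([^)]*)\\)\\s*;");
            // Pass 2: one match per argument inside the raw blob.
            boost::regex argRe("[A-Za-z0-9_]+");

            boost::sregex_iterator end;   // default-constructed == "no more matches"
            for (boost::sregex_iterator call(input.begin(), input.end(), callRe);
                 call != end; ++call) {
                std::cout << "function: " << call->str(1) << "\n";
                std::string args = call->str(2);
                for (boost::sregex_iterator arg(args.begin(), args.end(), argRe);
                     arg != end; ++arg)
                    std::cout << "  arg: " << arg->str() << "\n";
            }
        }

    Anything the call regex fails to match is simply skipped, so for better error reporting you would track the end position of the previous match and complain about the gap in between.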


  • C++ operator lookup rules / Koenig lookup

    - by John Bartholomew
    While writing a test suite, I needed to provide an implementation of operator<<(std::ostream&, ...) for Boost unit test to use. This worked:

        namespace theseus { namespace core {
            std::ostream& operator<<(std::ostream& ss, const PixelRGB& p) {
                return (ss << "PixelRGB(" << (int)p.r << ","
                           << (int)p.g << "," << (int)p.b << ")");
            }
        }}

    This didn't:

        std::ostream& operator<<(std::ostream& ss, const theseus::core::PixelRGB& p) {
            return (ss << "PixelRGB(" << (int)p.r << ","
                       << (int)p.g << "," << (int)p.b << ")");
        }

    Apparently, the second wasn't included in the candidate matches when g++ tried to resolve the use of the operator. Why (what rule causes this)? The code calling operator<< is deep within the Boost unit test framework, but here's the test code:

        BOOST_AUTO_TEST_SUITE(core_image)

        BOOST_AUTO_TEST_CASE(test_output)
        {
            using namespace theseus::core;
            BOOST_TEST_MESSAGE(PixelRGB(5,5,5)); // only compiles with operator<< defined inside theseus::core
            std::cout << PixelRGB(5,5,5) << "\n"; // works with either definition
            BOOST_CHECK(true); // prevent no-assertion error
        }

        BOOST_AUTO_TEST_SUITE_END()

    For reference, I'm using g++ 4.4 (though for the moment I'm assuming this behaviour is standards-conformant).
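    The rule in play is argument-dependent lookup (ADL, a.k.a. Koenig lookup): for an unqualified call made inside another namespace (as Boost.Test's internals are), ordinary lookup does not see a global operator declared after the template's definition, but ADL always considers the namespaces of the argument types. So an operator<< placed in the type's own namespace is found from anywhere. A simplified illustration:

        #include <iostream>

        namespace widgets {
            struct Widget { int id; };

            // Lives in the same namespace as Widget, so ADL finds it
            // from any call site.
            std::ostream& operator<<(std::ostream& os, const Widget& w) {
                return os << "Widget(" << w.id << ")";
            }
        }

        namespace framework {
            template <typename T>
            void log(const T& value) {
                std::cout << value << "\n";  // unqualified call: ADL applies
            }
        }

        int main() {
            widgets::Widget w = { 7 };
            framework::log(w);   // prints Widget(7)
        }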


  • C++ iterators, default initialization and what to use as an uninitialized sentinel.

    - by Hassan Syed
    The context: I have a custom template container class put together from a map and a vector. The map resolves a string to an ordinal, and the vector resolves an ordinal (only an initial string-to-ordinal lookup is done; future references are to the vector) to the entry. The entries are modified intrusively to contain a bool "assigned" and an iterator_type, which is a const_iterator into the container class's map. My container class will use RCF's serialization code (which models boost::serialization) to serialize my container classes to nodes in a network. Serializing iterators is not possible, or a can of worms, and I can easily regenerate them once the vectors and maps are serialized on the remote site.

    The question: I need to default-initialize the iterator, and be able to test that it has not been assigned to (if it is assigned, it is valid; if not, it is invalid). Since map iterators are not invalidated by operations performed on the map (unless of course items are removed :D), am I to assume that map<x,y>::end() is a valid sentinel (regardless of the state of the map -- i.e., it could be empty) to initialize to? I will always have access to the parent map; I'm just unsure whether end() stays the same as the map contents change. I don't want to use another level of indirection (i.e., boost::optional) to achieve my goal; I'd rather forgo compiler checks for correct logic, but it would be nice if I didn't need to.

    Misc: This question exists, but most of its content seems to be nonsense. Assigning NULL to an iterator is invalid according to g++ and clang++. This is another similar question, but it focuses on the common use cases of iterators, which generally tend to be using the iterator to iterate; of course, in that use case the state of the container isn't meant to change while iteration is going on.
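    As a sanity check of the end()-as-sentinel idea: associative-container inserts are specified not to invalidate iterators, and in the usual node-based implementations end() refers to a fixed sentinel node inside the map object itself, so it stays comparable across inserts (and across erases of other elements). The one rule is to always compare against the same map's end(). A small sketch:

        #include <cassert>
        #include <map>
        #include <string>

        int main() {
            std::map<std::string, int> index;
            // "unassigned" sentinel: the parent map's end()
            std::map<std::string, int>::const_iterator slot = index.end();

            index.insert(std::make_pair(std::string("a"), 1));
            index.insert(std::make_pair(std::string("b"), 2));

            assert(slot == index.end());   // still the sentinel after inserts
            slot = index.find("a");        // now assigned
            assert(slot != index.end());
            return 0;
        }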


  • C++ Declarative Parsing Serialization

    - by Martin York
    Looking at Java and C#, they manage to do some wicked processing based on special language-based annotations (forgive me if that is the incorrect name). In C++ we have two problems with this: 1) there is no way to annotate a class with type information that is accessible at runtime, and 2) parsing the source to generate stuff is way too complex. But I was thinking that this could be done with some template meta-programming to achieve the same basic effect as annotations (still just thinking about it). Like char_traits, which are specialised for the different types, an xml_traits template could be used in a declarative way. This traits class could be used to define how a class is serialised/deserialised, by specializing the traits for the class you are trying to serialize. Example thoughts:

        template<typename T>
        struct XML_traits {
            typedef XML_Empty Children;
        };

        template<>
        struct XML_traits<Car> {
            typedef boost::mpl::vector<Body,Wheels,Engine> Children;
        };

        template<typename T>
        std::ostream& Serialize(T const&)
        {
            // my template foo is not that strong,
            // but something like this:
            boost::mpl::for_each<typename XML_Traits<T>::Children, Serialize>(data);
        }

        template<>
        std::ostream& Serialize<XML_Empty>(T const&)
        {
            /* Do Nothing */
        }

    My question is: has anybody seen any projects/documentation (not just XML) out there that use techniques like this (template meta-programming) to emulate the concept of annotations used in languages like Java and C#, which can then be used in code generation (to effectively automate the task by using a declarative style)? At this point in my research I am looking for more reading material and examples.
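    For what it's worth, the sketched idea does work once mpl::for_each is given a function object instead of a function template name. A rough proof of concept -- the types here are empty tags just to show the mechanism; a real version would carry data and tag names:

        #include <boost/mpl/vector.hpp>
        #include <boost/mpl/for_each.hpp>
        #include <iostream>
        #include <typeinfo>

        struct Body {}; struct Wheels {}; struct Engine {}; struct Car {};

        template <typename T>
        struct XML_traits {
            typedef boost::mpl::vector<> Children;   // leaf: no children
        };

        template <>
        struct XML_traits<Car> {
            typedef boost::mpl::vector<Body, Wheels, Engine> Children;
        };

        template <typename T> void Serialize(const T&);  // declared for the functor

        // mpl::for_each default-constructs each element type and calls this.
        struct SerializeChild {
            template <typename Child>
            void operator()(const Child& c) const { Serialize(c); }
        };

        template <typename T>
        void Serialize(const T&) {
            std::cout << "<node type=\"" << typeid(T).name() << "\">\n";
            boost::mpl::for_each<typename XML_traits<T>::Children>(SerializeChild());
            std::cout << "</node>\n";
        }

        int main() {
            Serialize(Car());   // recurses through Body, Wheels, Engine
        }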


  • C++ CRTP (template pattern) question

    - by aaa
    The following piece of code does not compile. The problem is that T::rank is inaccessible (I think) or uninitialized in the parent template. Can you tell me exactly what the problem is? Is passing rank explicitly the only way, or is there a way to query the tensor class directly? Thank you.

        #include <boost/utility/enable_if.hpp>

        template<class T,
                 // size_t N,
                 class enable = void>
        struct tensor_operator;

        // template<class T, size_t N>
        template<class T>
        struct tensor_operator<T, typename boost::enable_if_c< T::rank == 4 >::type > {
            tensor_operator(T &tensor) : tensor_(tensor) {}
            T& operator()(int i, int j, int k, int l) {
                return tensor_.layout.element_at(i, j, k, l);
            }
            T &tensor_;
        };

        template<size_t N, typename T = double>
        // struct tensor : tensor_operator<tensor<N,T>, N> {
        struct tensor : tensor_operator<tensor<N,T> > {
            static const size_t rank = N;
        };

    I know the workaround, but I am interested in the mechanics of template instantiation, for self-education.
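    The mechanics, in short: at the point where tensor's base-class list is processed, tensor<N,T> is still an incomplete type, so T::rank cannot be read, the enable_if_c condition cannot be evaluated, and the partial specialization is never viable. That is why the workaround passes the rank as its own template parameter, where it is available independently of the (still incomplete) class. A sketch of that workaround:

        #include <cstddef>

        template<class T, std::size_t N>
        struct tensor_operator;                 // primary template

        template<class T>
        struct tensor_operator<T, 4> {          // rank-4 interface
            explicit tensor_operator(T& tensor) : tensor_(tensor) {}
            // operator()(i, j, k, l) would forward to tensor_.layout here
            T& tensor_;
        };

        template<std::size_t N, typename T = double>
        struct tensor : tensor_operator<tensor<N, T>, N> {  // rank passed explicitly
            static const std::size_t rank = N;
            tensor() : tensor_operator<tensor<N, T>, N>(*this) {}
        };

        int main() {
            tensor<4> t;   // selects the rank-4 specialization
            (void)t;
        }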


  • Link compatibility between C++ and D

    - by Caspin
    D easily interfaces with C. D just as easily interfaces with C++, but (and it's a big but) the C++ needs to be extremely trivial. The code cannot use (sketch of a wrapper below):

        - namespaces
        - templates
        - multiple inheritance
        - a mix of virtual and non-virtual methods
        - more?

    I completely understand the inheritance restriction. The rest, however, feel like artificial limitations. Now, I don't want to be able to use std::vector<T> directly, but I would really like to be able to link with std::vector<int> as an externed template. The C++ interfacing page has this particularly depressing comment: "D templates have little in common with C++ templates, and it is very unlikely that any sort of reasonable method could be found to express C++ templates in a link-compatible way with D. This means that the C++ STL, and C++ Boost, likely will never be accessible from D." Admittedly I'll probably never need std::vector while coding in D, but I'd love to use Qt or Boost. So what's the deal? Why is it so hard to express non-trivial C++ classes in D? Would it not be worth it to add some special annotations or something to express at least namespaces?
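    Until the language-level story improves, the practical escape hatch is the C ABI both languages share: instantiate the template once on the C++ side and expose it through a flat extern "C" API that D declares in an extern (C) block. A sketch, with all names invented for illustration:

        // C++ side: one concrete instantiation of std::vector<int>,
        // wrapped behind a C-linkage API that D can link against.
        #include <cstddef>
        #include <vector>

        typedef std::vector<int> IntVec;

        extern "C" {
            IntVec* intvec_create()               { return new IntVec(); }
            void    intvec_destroy(IntVec* v)     { delete v; }
            void    intvec_push(IntVec* v, int x) { v->push_back(x); }
            int     intvec_at(const IntVec* v, std::size_t i) { return (*v)[i]; }
            std::size_t intvec_size(const IntVec* v) { return v->size(); }
        }

    The obvious cost is that every instantiation you care about needs its own set of wrappers, which is exactly the boilerplate the question wishes the toolchains could generate.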


  • Creating Binary Block from struct

    - by MOnsDaR
    I hope the title describes the problem; I'll change it if anyone has a better idea. I'm storing information in a struct like this:

        struct AnyStruct {
            AnyStruct() :
                testInt(20),
                testDouble(100.01),
                testBool1(true),
                testBool2(false),
                testBool3(true),
                testChar('x')
            {}

            int testInt;
            double testDouble;
            bool testBool1;
            bool testBool2;
            bool testBool3;
            char testChar;

            std::vector<char> getBinaryBlock() {
                // how to build that?
            }
        };

    The struct should be sent via network in a binary byte buffer with the following structure:

        Bit  0- 31: testInt
        Bit 32- 61: testDouble, most significant portion
        Bit 62- 93: testDouble, least significant portion
        Bit 94:     testBool1
        Bit 95:     testBool2
        Bit 96:     testBool3
        Bit 97-104: testChar

    According to this definition, the resulting std::vector should have a size of 13 bytes (char == byte). My question now is how I can form such a packet out of the different datatypes I've got. I've already read through a lot of pages and found datatypes like std::bitset or boost::dynamic_bitset, but neither seems to solve my problem. I think it is easy to see that the above code is just an example; the original standard is far more complex and contains more different datatypes. Solving the above example should solve my problems with the complex structures too, I think. One last point: the problem should be solved just by using standard, portable language features of C++ like STL or Boost (
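    One way that stays within the STL is a small bit-cursor writer: append each field most-significant-bit first into a growing byte buffer. A sketch under two stated assumptions -- the double is IEEE-754 and is shipped as its raw 64-bit pattern split into two halves, and the wire format is MSB-first regardless of host endianness. (The bit offsets in the question don't total an exact byte count, so fields here are simply packed back to back.)

        #include <stdint.h>
        #include <cstddef>
        #include <cstring>
        #include <vector>

        class BitWriter {
        public:
            BitWriter() : bitCount_(0) {}

            // Append the low `bits` bits of `value`, MSB first.
            void writeBits(uint64_t value, int bits) {
                for (int i = bits - 1; i >= 0; --i) {
                    if (bitCount_ % 8 == 0)
                        buf_.push_back(0);                 // start a new byte
                    char bit = static_cast<char>((value >> i) & 1);
                    buf_[bitCount_ / 8] |= bit << (7 - bitCount_ % 8);
                    ++bitCount_;
                }
            }

            std::vector<char> data() const { return buf_; }

        private:
            std::vector<char> buf_;
            std::size_t bitCount_;
        };

        std::vector<char> pack(int i, double d, bool b1, bool b2, bool b3, char c) {
            uint64_t dbits;
            std::memcpy(&dbits, &d, sizeof dbits);   // raw IEEE-754 pattern

            BitWriter w;
            w.writeBits(static_cast<uint32_t>(i), 32);
            w.writeBits(dbits >> 32, 32);            // most significant half
            w.writeBits(dbits & 0xffffffffu, 32);    // least significant half
            w.writeBits(b1 ? 1 : 0, 1);
            w.writeBits(b2 ? 1 : 0, 1);
            w.writeBits(b3 ? 1 : 0, 1);
            w.writeBits(static_cast<unsigned char>(c), 8);
            return w.data();                         // padded to whole bytes
        }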


  • SQLite DB open time is really long

    - by sxingfeng
    I am using SQLite in C++ on Windows, and I have a DB of about 60MB. When I open the SQLite DB, it takes about 13 seconds:

        sqlite3* mpDB;
        nRet = sqlite3_open16(szFile, &mpDB);

    If I close my application and reopen it, it takes less than 1 second. First, I thought it was because of the disk cache, so I preload the 60MB DB file before the SQLite open, reading the file using CFile. However, after preloading, the first open is still very slow.

        BOOL CQFilePro::PreLoad(const CString& strPath)
        {
            boost::shared_array<BYTE> temp =
                boost::shared_array<BYTE>(new BYTE[PRE_LOAD_BUFFER_LENGTH]);
            int nReadLength;
            try {
                CFile file;
                if (file.Open(strPath, CFile::modeRead) == FALSE) {
                    return FALSE;
                }
                do {
                    nReadLength = file.Read(temp.get(), PRE_LOAD_BUFFER_LENGTH);
                } while (nReadLength == PRE_LOAD_BUFFER_LENGTH);
                file.Close();
            } catch(...) {
            }
            return TRUE;
        }

    My question is: what is the difference between the first open and the second open? How can I accelerate the SQLite open process?

