Search Results

Search found 1728 results on 70 pages for 'boost bjam'.

Page 44/70 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • Meet Matthijs, Dutch Inside Sales Representative for Oracle Direct

    - by Maria Sandu
    Today we would like to share some information about the Dutch Core Technology team in Malaga. Matthijs is one of the team members who decided to relocate from the Netherlands to Malaga to join Oracle Direct two years ago.

    Matthijs: "For the past two years I have been working as an Oracle Direct Core Technology Inside Sales representative for Named Accounts in the Netherlands, based in Malaga, Spain. In my case, working for the Dutch OD Core Technology team means that I am responsible for the account management of larger companies in the Travel & Transportation and the Manufacturing, Retail & Distribution sectors. I work together with the Oracle Field Account Managers and our Field Sales Management in the Netherlands, where I am often the main point of contact for customers. This means that I deal with their requests, manage their various issues, and provide solutions and suggestions based on the Oracle Core Technology portfolio. I work on interesting projects with end customers, making financial proposals and building business cases. It is a very interesting sales environment, and over the last two years I have improved my skills substantially. This month I will finish my Inside Sales career in Malaga to move to a position within Field Sales in the Netherlands. Oracle Direct has proven to be a great stepping stone for my career.

    Boost your personal development

    One of the reasons for joining Oracle was to boost my personal and career development. You can choose from a variety of trainings all over Europe that enable you to reach both your personal and professional goals. Furthermore, you can decide your own career path and plan the steps necessary to achieve your goal. Many people aim to grow into Field Sales in their native countries, Business Development or Sales Management, but there are many possibilities once you decide to join Oracle. Overall, working at Oracle means working for an international company and one of the worldwide leaders in enterprise hardware and software. Here you get all the tools necessary to develop yourself personally and professionally. Another great advantage of working for Oracle Direct is working from our office in Malaga, southern Spain, where we have over 400 employees from many countries across EMEA. It is a truly international environment! Working and living in Spain gives you an excellent opportunity to learn Spanish and, of course, enjoy the Spanish lifestyle, cuisine, beaches and much, much more!"

    Interview day Utrecht

    If you are inspired by the story of Matthijs and would like to explore the opportunity to join the Technology Sales team for the Dutch market in Malaga, let us know! We will organise an interview day in the Oracle office in Utrecht on the 18th and 19th of September. We currently have multiple openings in the Core Technology team that focus on selling our database portfolio in the Dutch market. We are looking for native Dutch speakers with a Bachelor's degree and 2-5 years of sales experience (ideally in IT) who are willing to relocate to Malaga for at least 2 years! For more information please contact [email protected] or [email protected].

    Read the article

  • Creating a voxel world with 3D arrays using threads

    - by Sean M.
    I am making a voxel game (a bit like Minecraft) in C++(11), and I've come across an issue with creating a world efficiently. In my program, I have a World class, which holds a 3D array of Region class pointers. When I initialize the world, I give it a width, height, and depth so it knows how large a world to create. Each Region is split up into a 32x32x32 area of blocks, so as you may guess, it takes a while to initialize the world once the world gets to be above 8x4x8 Regions. In order to alleviate this issue, I thought that using threads to generate different levels of the world concurrently would make it go faster. Having not used threads much before this, and being still relatively new to C++, I'm not entirely sure how to go about implementing one thread per level (a level being an xz plane with a height of 1) when there is a variable number of levels. I tried this:

        for (int i = 0; i < height; i++) {
            std::thread th(std::bind(&World::load, this, width, height, depth));
            th.join();
        }

    Where load() just loads all Regions at height "height". But that executes the threads one at a time (which makes sense, looking back), and that of course takes as long as generating all Regions in one loop. I then tried:

        std::thread t1(std::bind(&World::load, this, w, h1, h2 - 1, d));
        std::thread t2(std::bind(&World::load, this, w, h2, h3 - 1, d));
        std::thread t3(std::bind(&World::load, this, w, h3, h4 - 1, d));
        std::thread t4(std::bind(&World::load, this, w, h4, h - 1, d));
        t1.join();
        t2.join();
        t3.join();
        t4.join();

    This works in that the world loads about 3-3.5 times faster, but it forces the height to be a multiple of 4, and it also gives the same exact VAO object to every single Region, which each need an individual VAO in order to render properly. The VAO of each Region is set in the constructor, so I'm assuming that somehow the VAO number is not thread-safe or something (again, unfamiliar with threads). So basically, my question is two-part: How do I implement a variable number of threads that all execute at the same time, and force the main thread to wait for them using join() without stopping the other threads? How do I make the VAO objects thread-safe, so when a bunch of Regions are being created at the same time across multiple threads, they don't all get the exact same VAO? Turns out it has to do with GL contexts not working across multiple threads. I moved the VAO/VBO creation back to the main thread. Fixed! Here is the code for block.h/.cpp, region.h/.cpp, and CVBObject.h/.cpp, which controls VBOs and VAOs, in case you need it. If you need to see anything else, just ask. EDIT: Also, I'd prefer not to have answers that are like "you should have used boost". I'm trying to do this without boost to get used to threads before moving on to other libraries.
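
    For the first part of the question, a minimal sketch (not from the original post) of the usual pattern: store the threads in a container, launch them all, then join them all, so the main thread only blocks once every worker is running. World::load is assumed here to take a single level index; adapt to the real signature.

        #include <thread>
        #include <vector>

        void World::loadAll(int width, int height, int depth) {
            std::vector<std::thread> workers;
            workers.reserve(height);
            for (int level = 0; level < height; ++level)
                // each thread loads one xz plane; nothing is joined yet
                workers.emplace_back(&World::load, this, width, level, depth);
            for (std::thread& t : workers)
                t.join(); // now wait for all of them together
        }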

    Read the article

  • OO Design - polymorphism - how to design for handling streams of different file types

    - by Kache4
    I've little experience with advanced OO practices, and I want to design this properly as an exercise. I'm thinking of implementing the following, and I'm asking if I'm going about this the right way. I have a class PImage that holds the raw data and some information I need for an image file. Its header is currently something like this:

        #include <boost/filesystem.hpp>
        #include <vector>

        namespace fs = boost::filesystem;

        class PImage {
        public:
            PImage(const fs::path& path, const unsigned char* buffer, int bufferLen);
            const std::vector<char> data() const { return data_; }
            const char* rawData() const { return &data_[0]; }
            /*** other assorted accessors ***/
        private:
            fs::path path_;
            int width_;
            int height_;
            int filesize_;
            std::vector<char> data_;
        };

    I want to fill width_ and height_ by looking through the file's header. The trivial/inelegant solution would be a lot of messy control flow that identifies the type of image file (.gif, .jpg, .png, etc.) and then parses the header accordingly. Instead of using std::vector<char> data_, I was thinking of having PImage use a class RawImageStream data_ that inherits from std::vector<char>. Each type of file I plan to support would then inherit from RawImageStream, e.g. RawGifStream, RawPngStream. Each RawXYZStream would encapsulate the respective header-parsing functions, and PImage would only have to do something like height_ = data_.getHeight();. Am I thinking this through correctly? How would I create the proper RawImageStream subclass for data_ in the PImage ctor? Is this where I could use an object factory? Anything I'm forgetting?
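
    For reference, one shape such a factory could take - a sketch, not from the original post; the function name and the subclass constructor signatures are assumptions, and C++11 unique_ptr is used for brevity:

        #include <cstring>
        #include <memory>
        #include <stdexcept>

        // Sketch: choose the RawImageStream subclass by sniffing the buffer's
        // magic bytes; PImage's ctor can then ask the stream for width/height.
        std::unique_ptr<RawImageStream> makeRawImageStream(const unsigned char* buffer,
                                                           int bufferLen)
        {
            if (bufferLen >= 8 && std::memcmp(buffer, "\x89PNG\r\n\x1a\n", 8) == 0)
                return std::unique_ptr<RawImageStream>(new RawPngStream(buffer, bufferLen));
            if (bufferLen >= 4 && std::memcmp(buffer, "GIF8", 4) == 0)
                return std::unique_ptr<RawImageStream>(new RawGifStream(buffer, bufferLen));
            throw std::runtime_error("unrecognized image format");
        }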

    Read the article

  • Lucene: Question of score calculation with PrefixQuery

    - by Keven
    Hi, I've run into a problem with score calculation for a PrefixQuery. To change the score of each document, I used setBoost on the document when adding it to the index. Then I create a PrefixQuery to search, but the results are not ordered according to the boost. It seems setBoost simply doesn't work for a PrefixQuery. Please check my code below:

        @Test
        public void testNormsDocBoost() throws Exception {
            Directory dir = new RAMDirectory();
            IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_CURRENT),
                    true, IndexWriter.MaxFieldLength.LIMITED);
            Document doc1 = new Document();
            Field f1 = new Field("contents", "common1", Field.Store.YES, Field.Index.ANALYZED);
            doc1.add(f1);
            doc1.setBoost(100);
            writer.addDocument(doc1);
            Document doc2 = new Document();
            Field f2 = new Field("contents", "common2", Field.Store.YES, Field.Index.ANALYZED);
            doc2.add(f2);
            doc2.setBoost(200);
            writer.addDocument(doc2);
            Document doc3 = new Document();
            Field f3 = new Field("contents", "common3", Field.Store.YES, Field.Index.ANALYZED);
            doc3.add(f3);
            doc3.setBoost(300);
            writer.addDocument(doc3);
            writer.close();

            IndexReader reader = IndexReader.open(dir);
            IndexSearcher searcher = new IndexSearcher(reader);
            TopDocs docs = searcher.search(new PrefixQuery(new Term("contents", "common")), 10);
            for (ScoreDoc doc : docs.scoreDocs) {
                System.out.println("docid : " + doc.doc + " score : " + doc.score + " "
                        + searcher.doc(doc.doc).get("contents"));
            }
        }

    The output is:

        docid : 0 score : 1.0 common1
        docid : 1 score : 1.0 common2
        docid : 2 score : 1.0 common3

    Read the article

  • Scalable / Parallel Large Graph Analysis Library?

    - by Joel Hoff
    I am looking for good recommendations for scalable and/or parallel large graph analysis libraries in various languages. The problems I am working on involve significant computational analysis of graphs/networks with 1-100 million nodes and 10 million to 1+ billion edges. The largest SMP computer I am using has 256 GB memory, but I also have access to an HPC cluster with 1000 cores, 2 TB aggregate memory, and MPI for communication. I am primarily looking for scalable, high-performance graph libraries that could be used in either single or multi-threaded scenarios, but parallel analysis libraries based on MPI or a similar protocol for communication and/or distributed memory are also of interest for high-end problems. Target programming languages include C++, C, Java, and Python. My research to date has come up with the following possible solutions for these languages:

    C++ -- The most viable solutions appear to be the Boost Graph Library and the Parallel Boost Graph Library. I have looked briefly at MTGL, but it is currently slanted more toward massively multithreaded hardware architectures like the Cray XMT.

    C -- igraph and SNAP (Small-world Network Analysis and Partitioning); the latter uses OpenMP for parallelism on SMP systems.

    Java -- I have found no parallel libraries here yet, but JGraphT and perhaps JUNG are leading contenders in the non-parallel space.

    Python -- igraph and NetworkX look like the most solid options, though neither is parallel. There used to be Python bindings for BGL, but these are now unsupported; the last release, in 2005, looks stale now.

    Other topics here on SO that I've looked at have discussed graph libraries in C++, Java, Python, and other languages. However, none of these topics focused significantly on scalability. Does anyone have recommendations they can offer based on experience with any of the above or other library packages when applied to large graph analysis problems? Performance, scalability, and code stability/maturity are my primary concerns. Most of the specialized algorithms will be developed by my team, with the exception of any graph-oriented parallel communication or distributed memory frameworks (where the graph state is distributed across a cluster).

    Read the article

  • Coupling between controller and view

    - by cheez
    The litmus test for me for a good MVC implementation is how easy it is to swap out the view. I've always done this really badly due to being lazy, but now I want to do it right. This is in C++, but it should apply equally to non-desktop applications, if I am to believe the hype. Here is one example: the application controller has to check some URL for existence in the background. It may connect to the "URL available" event (using Boost.Signals) as follows:

        BackgroundUrlCheckerThread(Controller& controller)
        {
            // ...
            signalUrlAvailable.connect(
                boost::bind(&Controller::urlAvailable, &controller, _1));
        }

    So what does Controller::urlAvailable look like? Here is one possibility:

        void Controller::urlAvailable(Url url)
        {
            if (!view->askUser("URL available, wanna download it?"))
                return;
            else
                // Download the url in a new thread, repeat
        }

    This, to me, seems like a gross coupling of the view and the controller. Such a coupling makes it impossible to implement the view when using the web (coroutines aside). Another possibility:

        void Controller::urlAvailable(Url url)
        {
            urlAvailableSignal(url); // Now, any view interested can do what it wants
        }

    I'm partial to the latter, but it appears that if I do this there will be:

    - 40 billion such signals
    - An application controller that can get huge for a non-trivial application
    - A very real possibility that a given view accidentally ignores some signals (APIs can inform you at link time, but signals/slots are run time)

    Thanks in advance.
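
    For the second approach, a minimal sketch (not from the original post; the Url stand-in and class names are illustrative, and Boost.Signals2 is used here) of a controller that only publishes events, with a view that subscribes to them:

        #include <boost/bind.hpp>
        #include <boost/signals2.hpp>

        struct Url {}; // stand-in for the question's Url type

        class Controller {
        public:
            boost::signals2::signal<void (Url)> urlAvailableSignal;
            void urlAvailable(Url url) { urlAvailableSignal(url); }
        };

        class ConsoleView {
        public:
            explicit ConsoleView(Controller& c)
                : connection_(c.urlAvailableSignal.connect(
                      boost::bind(&ConsoleView::onUrlAvailable, this, _1))) {}
        private:
            void onUrlAvailable(Url) { /* prompt the user, start the download */ }
            // disconnects automatically when the view dies
            boost::signals2::scoped_connection connection_;
        };

    The controller never names the view, so swapping in another view is just another connect() call.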

    Read the article

  • Manipulate score/rank on query results from NHibernate.Search

    - by Fernando Figueiredo
    I've been working with NHibernate, NHibernate.Search and Lucene.Net to improve the search engine used on the website I develop. Basically, I use it to search the contents of corporations' specification documents. This is not to be confused with Lucene's notion of documents: in my case, a specification document (which I'll hereafter call a "specdoc") can contain many pages, and the content of these pages is what actually gets indexed (thus, the pages themselves are the ones that fall into Lucene's concept of documents). So the pages belong to a specdoc, which in turn belongs to a corporation (so a corporation can have many specdocs).

    I'm using the NHibernate.Search "IndexEmbedded" and "ContainedIn" attributes to associate the pages with their specdoc and the specdocs with their corporations, so I can query for terms in specdoc pages and have Lucene/NH.Search return either the pages themselves, the specdocs, or the corporations that match the query on the pages. I can query this way and get ranked results, thus presenting results (that is, corporations, specdocs or pages) by relevance, which is great.

    But now I need something more. Specifically, in the case where I query terms and have NH.Search return the corporations that match, I need to manually/artificially tune the score of some of the results, because there are corporations that I want to show up at the top of the result set - think "sponsored results". I'm thinking of doing it in my application, maybe creating an entity/database table that contains an association to the corporation entity and a score boost value. But I don't know how to feed this to Lucene and have it boost the results accordingly at search time. Initially I thought about deriving a Similarity class to do this, but it doesn't look like Similarity can be used to modify result sets at search time. As per this page, it looks like what I need is to mess around with weight or scoring. But the docs are a little superficial in that there are no examples on how to implement custom scoring, let alone integrate it with NH.Search. So, does anyone know how to do this, or can anyone point me to some documentation or a working example of something similar? Thanks!

    Read the article

  • Redirect C++ std::clog to syslog on Unix

    - by kriss
    I work on Unix on a C++ program that sends messages to syslog. The current code uses the syslog system call, which works like printf. Now I would prefer to use a stream for that purpose instead, typically the built-in std::clog. But clog merely redirects output to stderr, not to syslog, and that is useless for me as I also use stderr and stdout for other purposes. I've seen in another answer that it's quite easy to redirect clog to a file using rdbuf(), but I see no way to apply that method to call syslog, as openlog does not return a file handle I could use to tie a stream to. Is there another method to do that? (It looks like pretty basic Unix programming.)

    Edit: I'm looking for a solution that does not use an external library. What @Chris is proposing could be a good start but is still a bit vague to become the accepted answer.

    Edit: Using Boost.IOStreams is OK, as my project already uses Boost anyway. Linking with an external library is possible but is also a concern, as it's GPL code. Dependencies are also a burden, as they may conflict with other components, not be available on my Linux distribution, introduce third-party bugs, etc. If that is the only solution I may consider completely avoiding streams... (a pity).
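
    Without an external library, the usual route is a custom streambuf whose overflow/sync forward completed lines to syslog(3). A minimal sketch (class name and the line-at-a-time buffering policy are illustrative):

        #include <iostream>
        #include <streambuf>
        #include <string>
        #include <syslog.h>

        // Sketch: a streambuf that forwards each completed line to syslog(3).
        class SyslogBuf : public std::streambuf {
        public:
            explicit SyslogBuf(int priority) : priority_(priority) {}
        protected:
            int_type overflow(int_type c) {
                if (c == traits_type::eof()) return c;
                if (c == '\n') flush();        // a full line: ship it
                else line_ += traits_type::to_char_type(c);
                return c;
            }
            int sync() { flush(); return 0; }
        private:
            void flush() {
                if (!line_.empty()) {
                    syslog(priority_, "%s", line_.c_str());
                    line_.clear();
                }
            }
            int priority_;
            std::string line_;
        };

        int main() {
            openlog("myprog", LOG_PID, LOG_USER);
            SyslogBuf buf(LOG_INFO);
            std::streambuf* old = std::clog.rdbuf(&buf); // swap in the syslog buffer
            std::clog << "now going to syslog" << std::endl;
            std::clog.rdbuf(old); // restore before buf goes out of scope
            closelog();
        }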

    Read the article

  • How to install PySide v0.3.1 on Mac OS X?

    - by ivo
    I'm trying to install PySide v0.3.1 on Mac OS X, for Qt development in Python. As a prerequisite, I have installed CMake and the Qt SDK. I have gone through the documentation and come up with the following installation script:

        export PYSIDE_BASE_DIR="<my_dir>"
        export APIEXTRACTOR_DIR="$PYSIDE_BASE_DIR/apiextractor-0.5.1"
        export GENERATORRUNNER_DIR="$PYSIDE_BASE_DIR/generatorrunner-0.4.2"
        export SHIBOKEN_DIR="$PYSIDE_BASE_DIR/shiboken-0.3.1"
        export PYSIDE_DIR="$PYSIDE_BASE_DIR/pyside-qt4.6+0.3.1"
        export PYSIDE_TOOLS_DIR="$PYSIDE_BASE_DIR/pyside-tools-0.1.3"

        pushd .
        cd $APIEXTRACTOR_DIR
        cmake .
        cd $GENERATORRUNNER_DIR
        cmake -DApiExtractor_DIR=$APIEXTRACTOR_DIR .
        cd $SHIBOKEN_DIR
        cmake -DApiExtractor_DIR=$APIEXTRACTOR_DIR -DGeneratorRunner_DIR=$GENERATORRUNNER_DIR .
        cd $PYSIDE_DIR
        cmake -DShiboken_DIR=$SHIBOKEN_DIR/libshiboken -DGENERATOR=$GENERATORRUNNER_DIR .
        cd $PYSIDE_TOOLS_DIR
        cmake .
        popd

    Now, I don't know if this installation script is OK, but apparently everything works fine. Each component (apiextractor, generatorrunner, shiboken, pyside-qt and pyside-tools) gets compiled into its own directory. The problem is that I don't quite understand how PySide gets into the system's Python environment. In fact, when I start a Python shell, I cannot import PySide:

        >>> import PySide
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        ImportError: No module named PySide

    Note: I am aware of the "Installing PySide - OSX" question, but that question is not relevant anymore, because it is about a specific dependency on the Boost libraries; with version 0.3.0 PySide moved from a Boost-based source code to a CPython one.

    Read the article

  • SFINAE failing with enum template parameter

    - by zeroes00
    Can someone explain the following behaviour? (I'm using Visual Studio 2010.)

    Header:

        #pragma once
        #include <boost/utility/enable_if.hpp>
        using boost::enable_if_c;

        enum WeekDay {MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY};

        template<WeekDay DAY>
        typename enable_if_c< DAY==SUNDAY, bool >::type goToWork() { return false; }

        template<WeekDay DAY>
        typename enable_if_c< DAY!=SUNDAY, bool >::type goToWork() { return true; }

    Source:

        bool b = goToWork<MONDAY>();

    Compiling this gives:

        error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY!=6,bool>::type goToWork(void)'
        error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY==6,bool>::type goToWork(void)'

    But if I change the function template parameter from the enum type WeekDay to int, it compiles fine:

        template<int DAY>
        typename enable_if_c< DAY==SUNDAY, bool >::type goToWork() { return false; }

        template<int DAY>
        typename enable_if_c< DAY!=SUNDAY, bool >::type goToWork() { return true; }

    Also, normal function template specialization works fine, no surprises there:

        template<WeekDay DAY> bool goToWork() { return true; }
        template<> bool goToWork<SUNDAY>() { return false; }

    To make things even weirder, if I change the source file to use any other WeekDay than MONDAY or TUESDAY, i.e.

        bool b = goToWork<THURSDAY>();

    the error changes to this:

        error C2440: 'specialization' : cannot convert from 'int' to 'const WeekDay'
        Conversion to enumeration type requires an explicit cast (static_cast, C-style cast or function-style cast)

    Read the article

  • Need help in tuning an SQL query

    - by Viper
    Hello, I need some help to speed up this SQL statement. The execution time is around 125 ms. During the runtime of my program this SQL (more precisely: equally structured SQLs for different tables) will be called 300,000 times. The average row count in the tables lies around 10,000,000 rows, and new rows (updates/inserts) are added with a timestamp each day. The data interesting for this particular export program lies in the last 1-3 days; maybe this is helpful for creating an index. The data I need is the current valid row for a given id and the preceding data row, to pick up the updates (if they exist). We use an Oracle 11g database and the .NET Framework 3.5.

    The SQL statement to tune:

        select ID_SOMETHING,     -- Number(12)
               ID_CONTRIBUTOR,   -- Char(4 Byte)
               DATE_VALID_FROM,  -- DATE
               DATE_VALID_TO     -- DATE
          from TBL_SOMETHING XID
         where ID_SOMETHING = :ID_INSTRUMENT
           and ID_CONTRIBUTOR = :ID_CONTRIBUTOR
           and DATE_VALID_FROM <= :EXPORT_DATE
           and DATE_VALID_TO >= :EXPORT_DATE
         order by DATE_VALID_FROM asc;

    Here I uploaded the current explain plan for this query. I'm not a database expert, so I don't know which index type would fit this requirement best. I have seen that there are many different possible index types which could be applied. Maybe Oracle optimizer hints are helpful, too. Does anyone have a good idea for tuning this SQL, or can point me in the right direction?

    Read the article

  • C# style Action<T>, Func<T,T>, etc in C++0x

    - by Austin Hyde
    C# has generic function types such as Action<T> or Func<T,U,V,...>. With the advent of C++0x and the ability to have template typedefs and variadic template parameters, it seems this should be possible. The obvious solution to me would be this:

        template <typename T>
        using Action<T> = void (*)(T);

    However, this does not accommodate functors or C++0x lambdas and, beyond that, does not compile, with the error "expected unqualified-id before 'using'". My next attempt was to perhaps use boost::function:

        template <typename T>
        using Action<T> = boost::function<void (T)>;

    This doesn't compile either, for the same reason. My only other idea would be STL-style template arguments:

        template <typename T, typename Action>
        void foo(T value, Action f) {
            f(value);
        }

    But this doesn't provide a strongly typed solution, and is only relevant inside the templated function. Now, I will be the first to admit that I am not the C++ wiz I prefer to think I am, so it's very possible there is an obvious solution I'm not seeing. Is it possible to have C# style generic function types in C++?
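
    For reference, a sketch of what does compile under the alias-template syntax as finalized in C++11: the alias name takes no template argument list of its own, and std::function covers functors and lambdas as well as plain function pointers (note the argument order of this Func differs from C#'s, which puts the result type last):

        #include <functional>

        template <typename T>
        using Action = std::function<void (T)>;      // "Action", not "Action<T>"

        template <typename R, typename... Args>
        using Func = std::function<R (Args...)>;

        int main() {
            Action<int> a = [](int x) { /* use x */ };  // a lambda binds fine
            a(42);
            Func<int, int, int> add = [](int x, int y) { return x + y; };
            return add(1, 2) == 3 ? 0 : 1;
        }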

    Read the article

  • Why does this object wonk out & get deleted?

    - by brainydexter
    Stepping through the debugger, the BBox object is okay at the entry of the function, but as soon as it enters the function, the vfptr member points to 0xccccc. I don't get it. What is causing this? Why is there a virtual table reference in there when the object is not derived from another class? (Though it resides in GameObject, from which my Player class inherits, and I retrieve the BBox from within Player. But why does the BBox have the reference? Shouldn't it be Player that is kept in that reference?) Some code for reference:

    A. I retrieve the bounding box from the player. This returns a bounding box as expected. I then send its address to GetGridCells.

        const BoundingBox& l_Bbox = l_pPlayer->GetBoundingBox();
        boost::unordered_set<Cell*, CellPHash> l_GridCells = GetGridCells(&l_Bbox);

    B. This is where a_pBoundingBox goes crazy and gets that garbage value.

        boost::unordered_set<Cell*, CellPHash> CollisionMgr::GetGridCells(const BoundingBox* a_pBoundingBox)

    I think the following code is also pertinent, so I'm sticking this in here anyway:

        const BoundingBox& Player::GetBoundingBox(void)
        {
            return BoundingBox( &GetBoundingSphere() );
        }

        const BoundingSphere& Player::GetBoundingSphere(void)
        {
            BoundingSphere& l_BSphere = m_pGeomMesh->m_BoundingSphere;
            l_BSphere.m_Center = GetPosition();
            return l_BSphere;
        }

        // BoundingBox constructor
        BoundingBox(const BoundingSphere* a_pBoundingSphere);

    Can anyone please give me some idea as to why this is happening? Also, if you want me to post more code, please do let me know. Thanks!
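
    One detail worth flagging in the quoted code (an observation, not from the original post): GetBoundingBox() constructs a temporary BoundingBox and returns it by reference, so the reference dangles the moment the call returns - and reads through it then show the debugger's uninitialized-stack fill pattern. A sketch of the by-value alternative:

        // Returning by value keeps the box alive in the caller; the
        // reference-returning version hands back a dangling reference.
        BoundingBox Player::GetBoundingBox(void)
        {
            return BoundingBox( &GetBoundingSphere() );
        }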

    Read the article

  • How close can I get C# to the performance of C++ for small intensive tasks?

    - by SLC
    I was thinking about the speed difference between C++ and C# being mostly about C# compiling to byte-code that is taken in by the JIT compiler (is that correct?) and all the checks C# does. I notice that it is possible to turn a lot of these features off, both in the compile options and possibly through using the unsafe keyword, as unsafe code is not verifiable by the common language runtime. Therefore, if you were to write a simple console application in both languages that flipped an imaginary coin an infinite number of times and displayed the results to the screen every 10,000 or so iterations, how much speed difference would there be? I chose this because it's a very simple program. I'd like to test this, but I don't know C++ or have the tools to compile it. This is my C# version though:

        static void Main(string[] args)
        {
            unsafe
            {
                Random rnd = new Random();
                int heads = 0, tails = 0;
                while (true)
                {
                    if (rnd.NextDouble() > 0.5)
                        heads++;
                    else
                        tails++;
                    if ((heads + tails) % 1000000 == 0)
                        Console.WriteLine("Heads: {0} Tails: {1}", heads, tails);
                }
            }
        }

    Is the difference enough to warrant deliberately compiling sections of code "unsafe" or into DLLs that do not have some of the compile options like overflow checking enabled? Or does it go the other way, where it would be beneficial to compile sections in C++? I'm sure interop speed comes into play too then. To avoid subjectivity, I reiterate the specific parts of this question as: Does C# get a performance boost from using unsafe code? Do compile options such as disabling overflow checking boost performance, and do they affect unsafe code? Would the program above be faster in C++ or negligibly different? Is it worth compiling long, intensive number-crunching tasks in a language such as C++, or using /unsafe for a bonus? Less subjectively, could I complete an intensive operation faster by doing this?
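
    For anyone who wants to run the comparison, a rough C++ counterpart of the loop above (a sketch; the RNG choice is an approximation of C#'s Random, not an exact equivalent):

        #include <cstdio>
        #include <random>

        int main() {
            std::mt19937 rnd(std::random_device{}());
            std::uniform_real_distribution<double> coin(0.0, 1.0);
            long long heads = 0, tails = 0;
            for (;;) {
                if (coin(rnd) > 0.5) ++heads; else ++tails;
                if ((heads + tails) % 1000000 == 0)
                    std::printf("Heads: %lld Tails: %lld\n", heads, tails);
            }
        }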

    Read the article

  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    Hi All, I am writing an application using Boost threads and Boost barriers to synchronize them. I have two machines to test the application.

    Machine 1 is a Core 2 Duo (T8300) machine (Windows XP Professional, 4 GB RAM), where I am getting the following performance figures:

        Number of threads: 1, TPS: 21
        Number of threads: 2, TPS: 35 (66% improvement)

    A further increase in the number of threads decreases the TPS, but that is understandable, as the machine has only two cores.

    Machine 2 is a dual quad-core (Xeon X5355) machine (Windows 2003 Server with 4 GB RAM) and has 8 effective cores:

        Number of threads: 1, TPS: 21
        Number of threads: 2, TPS: 27 (28% improvement)
        Number of threads: 4, TPS: 25
        Number of threads: 8, TPS: 24

    As you can see, performance degrades after 2 threads (though the machine has 8 cores). If the program had some bottleneck, then the 2-thread case should have degraded as well. Any ideas? Explanations? Does the OS play a role in the performance? It seems like the Core 2 Duo (2.4 GHz) scales better than the Xeon X5355 (2.66 GHz), even though the Xeon has the better clock speed. Thank you. -Zoolii
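
    For context, the kind of setup being described, as a minimal sketch (the worker logic and counts are placeholders, not the poster's code):

        #include <boost/bind.hpp>
        #include <boost/thread/barrier.hpp>
        #include <boost/thread/thread.hpp>

        void worker(boost::barrier& sync) {
            for (int iter = 0; iter < 1000; ++iter) {
                // ... one unit of work ...
                sync.wait(); // all threads rendezvous before the next iteration
            }
        }

        int main() {
            const int n = 4;
            boost::barrier sync(n);
            boost::thread_group pool;
            for (int i = 0; i < n; ++i)
                pool.create_thread(boost::bind(worker, boost::ref(sync)));
            pool.join_all();
        }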

    Read the article

  • Trouble with __VA_ARGS__

    - by Noah Roberts
    This is a follow-up to the question "C++ preprocessor __VA_ARGS__ number of arguments". The accepted answer there doesn't work for me. I've tried with MSVC++ 10 and g++ 3.4.5. I also crunched the example down into something smaller and started trying to get some information printed out to me in the error:

        template < typename T > struct print;

        #include <boost/mpl/vector_c.hpp>

        #define RSEQ_N 10,9,8,7,6,5,4,3,2,1,0
        #define ARG_N(_1,_2,_3,_4,_5,_6,_7,_8,_9,_10,N,...) N
        #define ARG_N_(...) ARG_N(__VA_ARGS__)
        #define XXX 5,RSEQ_N

        #include <iostream>

        int main()
        {
            print< boost::mpl::vector_c<int, ARG_N_( XXX ) > > g; // ARG_N doesn't work either
        }

    It appears to me that the argument for ARG_N ends up being 'XXX' instead of 5,RSEQ_N, much less 5,10,...,0. The error output of g++ more specifically says that only one argument is supplied. Having trouble believing that the answer would be proposed and then accepted when it totally fails to work, so what am I doing wrong? Why is XXX being interpreted as the argument and not being expanded? In my own messing around, everything works fine until I try to pass off __VA_ARGS__ to a macro containing some names followed by ..., like so:

        #define WTF(X,Y,...) X , Y , __VA_ARGS__
        #define WOT(...) WTF(__VA_ARGS__)

        WOT(52,2,5,2,2)

    I've tried both with and without () in the various macros that take no input.
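
    For what it's worth, the argument-counting pattern is commonly written with an extra expansion macro to work around MSVC, which tends to forward __VA_ARGS__ as a single token; a sketch of that workaround (macro names are illustrative):

        // Force one more rescan so the forwarded __VA_ARGS__ is re-split
        // into separate arguments before PP_ARG_N picks out N.
        #define EXPAND(x) x
        #define PP_ARG_N(_1,_2,_3,_4,_5,_6,_7,_8,_9,_10,N,...) N
        #define PP_NARG(...) EXPAND(PP_ARG_N(__VA_ARGS__, 10,9,8,7,6,5,4,3,2,1,0))

        // PP_NARG(a, b, c) -> 3 on conforming preprocessors and, thanks to
        // the EXPAND indirection, on MSVC as well.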

    Read the article

  • Is a "factory" method the right pattern?

    - by jdt141
    Hey all - so I'm working to improve an existing implementation. I have a number of polymorphic classes that are all composed into a higher-level container class. The problem I'm dealing with at the moment is that the higher-level container class, well, sucks. It looks something like this, which I really don't have a problem with (as the polymorphic classes in the container should be public). My real issue is the constructor...

        /*
         * class1 and class2 derive from the same superclass.
         */
        class Container {
        public:
            boost::shared_ptr<ComposedClass1> class1;
            boost::shared_ptr<ComposedClass2> class2;
        private:
            ...
        };

        /*
         * Constructor - builds the objects that we need in this container.
         */
        Container::Container(some params)
        {
            class1.reset(new ComposedClass1(...));
            class2.reset(new ComposedClass2(...));
        }

    What I really need is to make this container class more reusable. By hard-coding the member objects and instantiating them, it basically isn't, and can only be used once. A factory is one way to build what I need (potentially by supplying a list of objects and their specific types to be created?). Other ways to get around this problem? Seems like someone should have solved it before... Thanks!
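
    One common way out, as a sketch (not from the original post; the factory function name and elided parameters are illustrative): inject the composed objects through the constructor, and keep a small factory function for the default wiring.

        // Sketch: Container takes its parts instead of building them, so it
        // can be reused with any ComposedClass1/ComposedClass2 instances.
        Container::Container(boost::shared_ptr<ComposedClass1> c1,
                             boost::shared_ptr<ComposedClass2> c2)
            : class1(c1), class2(c2) {}

        // The old hard-coded wiring moves into one factory function.
        boost::shared_ptr<Container> makeDefaultContainer(/* some params */)
        {
            boost::shared_ptr<ComposedClass1> c1(new ComposedClass1(/* ... */));
            boost::shared_ptr<ComposedClass2> c2(new ComposedClass2(/* ... */));
            return boost::shared_ptr<Container>(new Container(c1, c2));
        }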

    Read the article

  • How do tools like HipHop for PHP deal with heterogeneous arrays?

    - by Derek Thurn
    I think HipHop for PHP is an interesting tool. It essentially converts PHP code into C++ code. Cross-compiling in this manner seems like a great idea, but I have to wonder, how do they overcome the fundamental differences between the two type systems? One specific example of my general question is heterogeneous data structures. Statically typed languages don't tend to let you put arbitrary types into an array or other container, because they need to be able to figure out the types on the other end. If I have a PHP array like this:

        $mixedBag = array("cat", 42, 8.5, false);

    How can this be represented in C++ code? One option would be to use void pointers (or the superior version, boost::any), but then you need to cast when you take stuff back out of the array... and I'm not at all convinced that the type inferencer can always figure out what to cast to at the other end. A better option, perhaps, would be something more like a union (or boost::variant), but then you need to enumerate all possible types at compile time... maybe possible, but certainly messy, since arrays can contain arbitrarily complex entities. Does anyone know how HipHop and similar tools, which go from a dynamic typing discipline to a static one, handle these types of problems?
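
    To make the union-style option concrete, a sketch with boost::variant (the set of bounded types here is illustrative; real PHP values would also need null, arrays, and objects, which is exactly where it gets messy):

        #include <boost/variant.hpp>
        #include <iostream>
        #include <string>
        #include <vector>

        // A closed set of alternatives stands in for "any PHP value".
        typedef boost::variant<bool, long, double, std::string> PhpValue;

        int main() {
            std::vector<PhpValue> mixedBag;
            mixedBag.push_back(std::string("cat"));
            mixedBag.push_back(42L);
            mixedBag.push_back(8.5);
            mixedBag.push_back(false);
            for (std::size_t i = 0; i < mixedBag.size(); ++i)
                std::cout << mixedBag[i] << '\n'; // variant streams its active member
            return 0;
        }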

    Read the article

  • Is NUnit's ExpectedExceptionAttribute the only way to test if something raises an exception?

    - by Dariusz Walczak
    Hello, I'm completely new to C# and NUnit. In Boost.Test there is a family of BOOST_*_THROW macros. In Python's test module there is the TestCase.assertRaises method. As far as I understand it, in C# with NUnit (2.4.8) the only way of writing an exception test is to use ExpectedExceptionAttribute. Why should I prefer ExpectedExceptionAttribute over - let's say - Boost.Test's approach? What reasoning can stand behind this design decision? Why is that better in the case of C# and NUnit? Finally, if I decide to use ExpectedExceptionAttribute, how can I do some additional tests after the exception was raised and caught? Let's say that I want to test the requirement saying that an object has to be valid after some setter raised System.IndexOutOfRangeException. How would you fix the following code to compile and work as expected?

        [Test]
        public void TestSetterException()
        {
            Sth.SomeClass obj = new SomeClass();
            // Following statement won't compile.
            Assert.Raises("System.IndexOutOfRangeException",
                          obj.SetValueAt(-1, "foo"));
            Assert.IsTrue(obj.IsValid());
        }

    Edit: Thanks for your answers. Today I found an "It's the Tests" blog entry where all three methods described by you are mentioned (and one more minor variation). It's a shame that I couldn't find it before :-(.

    Read the article

  • Copying a foreign Subversion repository to keep under dependencies

    - by Jonathan Sternberg
    I want to keep dependencies for my project in our own repository, so that we have consistent libraries for the entire team to work with. For example, I want our project to use the Boost libraries. I've seen this done in the past by putting dependencies under a "vendor" or "dependencies" folder. But I still want to be able to update these dependencies: if a new feature appears in a library and we need it, I want to just be able to update that library within our own repository. I don't want to have to recopy it and put it under version control again. I'd also like for us to have the ability to change dependencies if a small change is needed, without that stopping us from ever updating the library. I want the ability to do something like 'svn cp', and then be able to 'svn merge' in the future. I just tried this with the boost trunk, but I'm not able to get any history using 'svn log' on the copy I made. How do I do this? What is usually done for large projects with dependencies?

    Read the article

  • Optimize 2D array in C++

    - by Hristo
    I'm dealing with a 2D array with the following characteristics:

        const int cols = 500;
        const int rows = 100;
        int arr[rows][cols];

    I access the array arr in the following manner to do some work:

        for (int k = 0; k < T; ++k) {              // for each trainee
            myscore[k] = 0;
            for (int i = 0; i < N; ++i) {          // for each sample
                for (int j = 0; j < E[i]; ++j) {   // for each expert
                    myscore[k] += delta(i, anotherArray[k][i], arr[j][i]);
                }
            }
        }

    So I am worried about the array 'arr' and not the other one. I need to make this more cache-friendly and also boost the speed. I was thinking perhaps of transposing the array, but I wasn't sure how to do that; my implementation turns out to work only for square matrices. How would I make it work for non-square matrices? Also, would mapping the 2D array onto a 1D array boost the performance? If so, how would I do that? Finally, any other advice on how else I can optimize this... I've run out of ideas, but I know that arr[j][i] is the place where I need to make changes, because I'm accessing columns by columns instead of rows by rows, and that is not cache-friendly at all. Thanks, Hristo
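
    On the transpose and 1D-mapping questions, a sketch (illustrative, reusing the declarations above): keep a flattened cols x rows copy so the old arr[j][i] access walks memory sequentially; the explicit index arithmetic is what makes it work for any non-square shape.

        #include <vector>

        // arrT[i * rows + j] == arr[j][i] for any rows x cols shape, so a
        // loop over j at fixed i now reads consecutive ints.
        std::vector<int> transposeFlat(const int arr[rows][cols]) {
            std::vector<int> arrT(cols * rows);
            for (int j = 0; j < rows; ++j)
                for (int i = 0; i < cols; ++i)
                    arrT[i * rows + j] = arr[j][i];
            return arrT;
        }

        // The inner accumulation then becomes:
        //   myscore[k] += delta(i, anotherArray[k][i], arrT[i * rows + j]);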

    Read the article

  • Is back_insert_iterator<> safe to be passed by value?

    - by afriza
    I have code that looks something like this:

        struct Data {
            int value;
        };

        class A {
        public:
            typedef std::deque<boost::shared_ptr<Data> > TList;
            std::back_insert_iterator<TList> GetInserter() {
                return std::back_inserter(m_List);
            }
        private:
            TList m_List;
        };

        class AA {
            boost::scoped_ptr<A> m_a;
        public:
            AA() : m_a(new A()) {}
            std::back_insert_iterator<A::TList> GetDataInserter() {
                return m_a->GetInserter();
            }
        };

        class B {
            template<class OutIt>
            void CopyInterestingDataTo(OutIt outIt) {
                // loop and check conditions for interesting data
                // for every `it` in a Container<Data*>
                // create a copy and store it
                for (... it = ..; ..; ..) {
                    if (...) {
                        *outIt = typename OutIt::container_type::value_type(new Data(**it));
                        outIt++; // dummy
                    }
                }
            }

            void func() {
                AA aa;
                CopyInterestingDataTo(aa.GetDataInserter());
                // aa.m_a->m_List is empty!
            }
        };

    The problem is that A::m_List is always empty, even after CopyInterestingDataTo() is called. However, if I debug and step into CopyInterestingDataTo(), the iterator does store the supposedly inserted data!

    Read the article

  • Setting the default stack size on Linux globally for the program

    - by wowus
    So I've noticed that the default stack size for threads on Linux is 8 MB (if I'm wrong, PLEASE correct me), and, incidentally, 1 MB on Windows. This is quite bad for my application, as on a 4-core processor that means 64 MB of space is used JUST for threads! The worst part is, I'm never using more than 100 kB of stack per thread (I abuse the heap a LOT ;)). My solution right now is to limit the stack size of threads. However, I have no idea how to do this portably. Just for context, I'm using Boost.Thread for my threading needs. I'm okay with a little bit of #ifdef hell, but I'd like to know how to do it easily first. Basically, I want something like this (where windows_* is linked on Windows builds, and posix_* is linked under Linux builds):

        // windows_stack_limiter.c
        int limit_stack_size()
        {
            // Windows impl.
            return 0;
        }

        // posix_stack_limiter.c
        int limit_stack_size()
        {
            // Linux impl.
            return 0;
        }

        // stack_limiter.cpp
        int limit_stack_size();
        static volatile int placeholder = limit_stack_size();

    How do I flesh out those functions? Or, alternatively, am I just doing this entirely wrong? Remember, I have no control over the actual thread creation (no new params to CreateThread on Windows), as I'm using Boost.Thread.
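
    For the POSIX side, one possibility is a sketch like the following. It relies on the glibc behaviour that a new thread's default stack size is taken from the soft RLIMIT_STACK at thread-creation time, so it must run before any threads are spawned; verify this on your target before relying on it, and the 512 kB figure is just an example.

        // posix_stack_limiter.c (sketch)
        #include <sys/resource.h>

        int limit_stack_size(void)
        {
            struct rlimit rl;
            if (getrlimit(RLIMIT_STACK, &rl) != 0)
                return -1;
            rl.rlim_cur = 512 * 1024; /* default stack for threads created after this */
            return setrlimit(RLIMIT_STACK, &rl);
        }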

    Read the article

  • C++: parsing with a simple regular expression, or should I use sscanf?

    - by Helltone
    I need to parse a string like func1(arg1, arg2); func2(arg3, arg4);. It's not a very complex parsing problem, so I would prefer to avoid resorting to flex/bison or similar utilities. My first approach was to try to use the POSIX C regcomp/regexec functions or Boost's implementation of C++ std::regex. I wrote the following regular expression, which does not work (I'll explain why further on):

        "^"
        "[ ;\t\n]*"
        "("                      // (1) identifier
            "[a-zA-Z_][a-zA-Z0-9_]*"
        ")"
        "[ \t\n]*"
        "("                      // (2) non-marking
            "\["
            "("                  // (3) non-marking
                "[ \t]*"
                "("              // (4..n-1) argument
                    "[a-zA-Z0-9_]+"
                ")"
                "[ \t\n]*"
                ","
            ")*"
            "[ \t\n]*"
            "("                  // (n) last argument
                "[a-zA-Z0-9_]+"
            ")"
            "]"
        ")?"
        "[ \t\n]*"
        ";"

    Note that group 1 captures the identifier and groups 4..n-1 are intended to capture arguments, except the last, which is captured by group n. When I apply this regex to, say, func(arg1, arg2, arg3), the result I get is an array {func, arg2, arg3}. This is wrong, because arg1 is not in it! The problem is that in the standard regex libraries, submatches only capture the last match. In other words, if you have for instance the regex "((a*|b*))*" applied to "babb", the result of the inner match will be bb, and all previous captures will have been forgotten. Another thing that annoys me here is that in case of error there is no way to know which character was not recognized, as these functions provide very little information about the state of the parser when the input is rejected. So I don't know if I'm missing something here... In this case, should I use sscanf or something similar instead? Note that I prefer to use the C/C++ standard libraries (and maybe Boost).
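
    One way around the repeated-group limitation, as a sketch using Boost.Regex (the grammar is simplified to the func(args); example from the question): match one call at a time, then split that call's argument list, instead of capturing every argument in a single pattern.

        #include <boost/regex.hpp>
        #include <iostream>
        #include <string>

        int main() {
            std::string input = "func1(arg1, arg2); func2(arg3, arg4);";
            boost::regex call("([A-Za-z_]\\w*)\\s*\\(([^)]*)\\)\\s*;");
            boost::regex comma("\\s*,\\s*");
            for (boost::sregex_iterator m(input.begin(), input.end(), call), end;
                 m != end; ++m) {
                std::cout << "call: " << (*m)[1] << '\n';
                std::string args = (*m)[2];
                // -1 selects the text *between* matches, i.e. split on commas
                for (boost::sregex_token_iterator t(args.begin(), args.end(), comma, -1), tend;
                     t != tend; ++t)
                    std::cout << "  arg: " << *t << '\n';
            }
            return 0;
        }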

    Read the article

  • C++ operator lookup rules / Koenig lookup

    - by John Bartholomew
    While writing a test suite, I needed to provide an implementation of operator<<(std::ostream&... for Boost unit test to use. This worked:

        namespace theseus { namespace core {
            std::ostream& operator<<(std::ostream& ss, const PixelRGB& p) {
                return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")");
            }
        }}

    This didn't:

        std::ostream& operator<<(std::ostream& ss, const theseus::core::PixelRGB& p) {
            return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")");
        }

    Apparently, the second wasn't included in the candidate matches when g++ tried to resolve the use of the operator. Why (what rule causes this)? The code calling operator<< is deep within the Boost unit test framework, but here's the test code:

        BOOST_AUTO_TEST_SUITE(core_image)

        BOOST_AUTO_TEST_CASE(test_output) {
            using namespace theseus::core;
            BOOST_TEST_MESSAGE(PixelRGB(5,5,5)); // only compiles with operator<< defined inside theseus::core
            std::cout << PixelRGB(5,5,5) << "\n"; // works with either definition
            BOOST_CHECK(true); // prevent no-assertion error
        }

        BOOST_AUTO_TEST_SUITE_END()

    For reference, I'm using g++ 4.4 (though for the moment I'm assuming this behaviour is standards-conformant).
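
    The behaviour comes down to name lookup from a template: at the point of instantiation inside the Boost headers, an unqualified operator<< on a user type is found only through argument-dependent lookup, which searches the namespaces associated with the argument types (here theseus::core) but not the global namespace. A minimal sketch of the effect (names illustrative):

        #include <iostream>

        namespace ns {
            struct S {};
            // Found via ADL: ns is an associated namespace of S.
            std::ostream& operator<<(std::ostream& os, const S&) { return os << "S"; }
        }

        template <typename T>
        void log(const T& t) {
            std::cout << t; // dependent call, resolved at instantiation via ADL
        }

        int main() {
            log(ns::S()); // works; a global operator<< declared after log() would not be found
        }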

    Read the article
