Search Results

Search found 2238 results on 90 pages for 'boost variant'.

Page 81/90 | < Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid), they recommend only considering a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question). What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong):

    With page caching, the Rails process is hit initially and then generates a static HTML file that is served directly by the web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired, then Rails is hit again and the static file is regenerated with the updated content, ready for the next request.

    With an HTTP reverse proxy cache, the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh, then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale, then Rails serves the updated content to the proxy, which caches it and then serves it to the browser.

    If my understanding is correct, then doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine whether the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?

    Read the article

  • how to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that are different from their default-constructed value. So, sparse_vector would store the index-value pairs for all indices whose value is not T(). I am basing my implementation on existing sparse vectors in numeric libraries, though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and Eigen::SparseVector. Both store:

    size_t* indices_; // a dynamic array
    T* values_;       // a dynamic array
    int size_;
    int capacity_;

    Why don't they simply use

    vector<pair<size_t, T>> data_;

    My main question is: what are the pros and cons of both systems, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you, and simplifies the accompanying iterator classes; it also has one memory block instead of two, so it incurs half the reallocations and might have better locality of reference. The other solution might search more quickly, since the cache lines fill up with only index data during a search. There might also be some alignment advantages if T is an 8-byte type? It seems to me that the vector of pairs is the better solution, yet both containers chose the other solution. Why?
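    For reference, a minimal sketch of the vector-of-pairs layout under discussion (my illustration, not code from the question or from either library; no iterators, no sorted storage, linear search only):

    #include <cstddef>
    #include <utility>
    #include <vector>

    template <typename T>
    class sparse_vector_sketch {
        // one block of (index, value) pairs instead of two parallel arrays
        std::vector<std::pair<std::size_t, T> > data_;
    public:
        // store value at index (a full implementation would skip values equal to T())
        void set(std::size_t index, const T& value) {
            for (std::size_t i = 0; i < data_.size(); ++i)
                if (data_[i].first == index) { data_[i].second = value; return; }
            data_.push_back(std::make_pair(index, value));
        }

        // return the stored value, or a default-constructed T for absent indices
        T get(std::size_t index) const {
            for (std::size_t i = 0; i < data_.size(); ++i)
                if (data_[i].first == index) return data_[i].second;
            return T();
        }
    };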

    Read the article

  • Why do bind1st and bind2nd require constant function objects?

    - by rlbond
    So, I was writing a C++ program which would allow me to take control of the entire world. I was all done writing the final translation unit, but I got an error:

    error C3848: expression having type 'const `anonymous-namespace'::ElementAccumulator<T,BinaryFunction>'
        would lose some const-volatile qualifiers in order to call
        'void `anonymous-namespace'::ElementAccumulator<T,BinaryFunction>::operator ()(const point::Point &,const int &)'
        with [ T=SideCounter, BinaryFunction=std::plus<int> ]
    c:\program files (x86)\microsoft visual studio 9.0\vc\include\functional(324) :
        while compiling class template member function 'void std::binder2nd<_Fn2>::operator ()(point::Point &) const'
        with [ _Fn2=`anonymous-namespace'::ElementAccumulator<SideCounter,std::plus<int>> ]
    c:\users\****\documents\visual studio 2008\projects\TAKE_OVER_THE_WORLD\grid_divider.cpp(361) :
        see reference to class template instantiation 'std::binder2nd<_Fn2>' being compiled
        with [ _Fn2=`anonymous-namespace'::ElementAccumulator<SideCounter,std::plus<int>> ]

    I looked in the specification of binder2nd and there it was: it takes a const AdaptableBinaryFunction. So, not a big deal, I thought. I just used boost::bind instead, right? Wrong! Now my take-over-the-world program takes too long to compile (bind is used inside a template which is instantiated quite a lot)! At this rate, my nemesis is going to take over the world first! I can't let that happen - he uses Java! So can someone tell me why this design decision was made? It seems like an odd decision. I guess I'll have to make some of the elements of my class mutable for now...
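    To illustrate the constraint being asked about (my own minimal example, not code from the question): binder2nd stores the functor by value and invokes it from its own const operator(), so the wrapped operator() must itself be const-qualified, e.g.:

    #include <algorithm>
    #include <functional>
    #include <vector>

    struct AddValue : public std::binary_function<int, int, int> {
        // const call operator: usable through std::bind2nd
        int operator()(int x, int y) const { return x + y; }
    };

    int main() {
        std::vector<int> v(3, 1);
        // adds 10 to every element via the adaptor
        std::transform(v.begin(), v.end(), v.begin(), std::bind2nd(AddValue(), 10));
        return 0;
    }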

    Read the article

  • How to get encoding from MAPI message with PR_BODY_A tag (windows mobile)?

    - by SadSido
    Hi, everyone! I am developing a program that handles incoming e-mail and SMS through the Windows Mobile MAPI. The code basically looks like this:

    ulBodyProp = PR_BODY_A;
    hr = piMessage->OpenProperty(ulBodyProp, NULL, STGM_READ, 0, (LPUNKNOWN*)&piStream);
    if (hr == S_OK)
    {
        // ... get body size in bytes ...
        STATSTG statstg;
        piStream->Stat(&statstg, 0);
        ULONG cbBody = statstg.cbSize.LowPart;

        // ... allocate memory for the buffer ...
        BYTE* pszBodyInBytes = NULL;
        boost::scoped_array<BYTE> szBodyInBytesPtr(pszBodyInBytes = new BYTE[cbBody+2]);

        // ... read body into pszBodyInBytes ...
    }

    That works and I have a message body. The problem is that this body is multibyte-encoded and I need to return a Unicode string. I guess I have to use the ::MultiByteToWideChar() function, but how can I tell which codepage I should apply? Using CP_UTF8 is naive, because the body may simply not be in UTF-8. Using CP_ACP works, well, sometimes, but sometimes it does not. So, my question is: how can I retrieve the message codepage? Does MAPI provide any functions for it? Or is there a way to decode a multibyte string other than MultiByteToWideChar()? Thanks!
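    For context, a sketch of the usual two-call MultiByteToWideChar pattern once a codepage has been determined somehow (reusing the variables from the snippet above and assuming <string> is included); the codepage value is a placeholder of mine, since how to obtain the right one from MAPI is exactly what the question asks:

    // Sketch: convert the multibyte body to UTF-16 given a known codepage.
    UINT codepage = 1252;   // placeholder only; not provided by the question
    int cchWide = MultiByteToWideChar(codepage, 0,
                                      (LPCSTR)pszBodyInBytes, (int)cbBody,
                                      NULL, 0);                  // first call: query required length
    std::wstring body(cchWide, L'\0');
    MultiByteToWideChar(codepage, 0,
                        (LPCSTR)pszBodyInBytes, (int)cbBody,
                        &body[0], cchWide);                      // second call: do the conversion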

    Read the article

  • Mmap and structure

    - by blid..pl
    I'm working on some code involving communication between processes, using semaphores. I made a structure like this:

    typedef struct container {
        sem_t resource, mutex;
        int counter;
    } container;

    and use it this way (in the main app, and the same in the subordinate processes):

    container *memory;

    shm_unlink("MYSHM"); // just in case
    fd = shm_open("MYSHM", O_RDWR|O_CREAT|O_EXCL, 0);
    if (fd == -1)
    {
        printf("Error");
        exit(EXIT_FAILURE);
    }
    memory = mmap(NULL, sizeof(container), PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
    ftruncate(fd, sizeof(container));

    Everything is fine when I use one of the sem_ functions, but when I try to do something like

    memory->counter = 5;

    it doesn't work. Probably I got something wrong with the pointers, but I tried almost everything and nothing seems to work. Maybe there's a better way to share variables, structures etc. between processes? Unfortunately I'm not allowed to use Boost or something similar; the code is for educational purposes and I intend to keep it as simple as possible.
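    For comparison, a sketch of the conventional ordering of these calls (my illustration, not part of the question, reusing the container type and headers from the snippet above): the shared-memory object gets real permissions, is sized with ftruncate before it is mapped, and the result of mmap is checked:

    int fd = shm_open("MYSHM", O_RDWR | O_CREAT, 0600);    /* readable/writable by owner */
    if (fd == -1) { perror("shm_open"); exit(EXIT_FAILURE); }

    if (ftruncate(fd, sizeof(container)) == -1) {           /* size it before mapping */
        perror("ftruncate");
        exit(EXIT_FAILURE);
    }

    container *memory = (container *) mmap(NULL, sizeof(container),
                                           PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (memory == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); }

    memory->counter = 5;   /* the mapping is now backed by a properly sized object */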

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and CRC32-ing the outputs).

    When using the "-server" option with Sun JDK 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode. I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of some bit-manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (which feeds the stream to the test and excludes disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only there it's 15% instead of 12%). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation in the measured execution times (using System.nanoTime, btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine.

    Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?

    Read the article

  • So many technologies to choose from. Where does the beginner start?

    - by Sahat
    WPF
    Silverlight
    Windows Phone 7 w/ Silverlight
    iPhone OS w/ Objective-C
    Cocoa w/ Objective-C
    ASP.NET
    Android
    Facebook FBML
    HTML5

    I will be graduating with a B.S. in Computer Science soon and have to decide what I want to learn from this list. I believe it's better to focus on one thing, master it, and build up a portfolio to enhance my resume. A Bachelor's degree with no experience and no portfolio won't do me any good; it won't get me a job by itself. I need to have something that will greatly boost my resume. What would it be? iPhone development? ASP.NET web development? Facebook development? Or something else entirely that I haven't listed? I understand it's natural for Silverlight developers to say "Learn Silverlight" and iPhone developers to say "Learn the iPhone SDK and Objective-C", so please try to give a constructive, unbiased, objective opinion on which technology I should focus on. Please don't close the topic for "subjective/argumentative" reasons. I am just looking for some guidance.

    Read the article

  • Cross-Platform Camera API

    - by Karim
    Hi, I'm building a video transformation filter that has to transform video frames in real time. One of the key requirements of the filter is high performance, to minimize the number of dropped frames during the transform. Another requirement, of lower priority but also nice to have, is to make it cross-platform (both PCs and mobile devices). The application is built in C++. Now my question is: is there any API that is more portable and has similar or better performance characteristics than DirectShow? DirectShow's portability is limited to Windows-based devices (PCs and the Windows Mobile/CE platforms). Also, I've noticed that, for example, using HTC's custom camera API has far better performance than what DirectShow offers. If you want to check this, try to build a filter in DirectShow that multiplies each color by 2 and renders that in real time from the camera on the screen. Then do the same with HTC's API. There is almost a 4-5x performance boost with the vendor-specific API. So it'd be very nice if the library used the device-specific implementation of the driver, as performance is critical when doing these transforms on a mobile device (which is about ~500 MHz).

    Read the article

  • Maximum page fetch with maximum bandwidth

    - by Ehsan
    Hi, I want to create an application like a spider. I've implemented page fetching as in the following code, in a multi-threaded application, but there are two problems: 1) I want to use my maximum bandwidth to send/receive requests; how should I configure my request to do so (like Download Accelerator and similar applications), because I heard a normal application will use about 66% of the available bandwidth? 2) I don't know exactly what HttpWebRequest.KeepAlive does, but as its name implies I think I can create a connection to a website and, without closing the connection, send another request to that website using the existing connection. Does it boost performance, or am I wrong?

    public PageFetchResult Fetch()
    {
        PageFetchResult fetchResult = new PageFetchResult();
        try
        {
            HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(URLAddress);
            HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
            Uri requestedURI = new Uri(URLAddress);
            Uri responseURI = resp.ResponseUri;
            if (Uri.Equals(requestedURI, responseURI))
            {
                string resultHTML = "";
                byte[] reqHTML = ResponseAsBytes(resp);
                if (!string.IsNullOrEmpty(FetchingEncoding))
                    resultHTML = Encoding.GetEncoding(FetchingEncoding).GetString(reqHTML);
                else if (!string.IsNullOrEmpty(resp.CharacterSet))
                    resultHTML = Encoding.GetEncoding(resp.CharacterSet).GetString(reqHTML);
                req.Abort();
                resp.Close();
                fetchResult.IsOK = true;
                fetchResult.ResultHTML = resultHTML;
            }
            else
            {
                URLAddress = responseURI.AbsoluteUri;
                relayPageCount++;
                if (relayPageCount > 5)
                {
                    fetchResult.IsOK = false;
                    fetchResult.ErrorMessage = "Maximum page redirection occured.";
                    relayPageCount = 0;
                    return fetchResult;
                }
                req.Abort();
                resp.Close();
                return Fetch();
            }
        }
        catch (Exception ex)
        {
            fetchResult.IsOK = false;
            fetchResult.ErrorMessage = ex.Message;
        }
        return fetchResult;
    }

    Read the article

  • lambda traits inconsistency across C++0x compilers

    - by Sumant
    I observed some inconsistency between two compilers (g++ 4.5, VS2010 RC) in the way they match lambdas with partial specializations of class templates. I was trying to implement something like boost::function_types for lambdas, to extract type traits (check this for more details). In g++ 4.5, the type of the operator() of a lambda appears to be like that of a free-standing function (R (*)(...)), whereas in VS2010 RC, it appears to be like that of a member function (R (C::*)(...)). So the question is: are compiler writers free to interpret this any way they want? If not, which compiler is correct? See the details below.

    template <typename T>
    struct function_traits
        : function_traits<decltype(&T::operator())>
    {
        // This generic template is instantiated on both compilers, as expected.
    };

    template <typename R, typename C>
    struct function_traits<R (C::*)() const>
    {
        // inherits from this one on VS2010 RC
        typedef R result_type;
    };

    template <typename R>
    struct function_traits<R (*)()>
    {
        // inherits from this one on g++ 4.5
        typedef R result_type;
    };

    int main(void)
    {
        auto lambda = []{};
        function_traits<decltype(lambda)>::result_type *r; // void *
    }

    This program compiles on both g++ 4.5 and VS2010, but the function_traits that are instantiated are different, as noted in the code.

    Read the article

  • Pros and Cons of using SqlCommand Prepare in C#?

    - by MadBoy
    When I was reading books to learn C# (might have been some old Visual Studio 2005 books), I encountered the advice to always use SqlCommand.Prepare every time I execute a SQL call (whether it's a SELECT/UPDATE or INSERT on SQL Server 2005/2008) and pass parameters to it. But is that really so? Should it be done every time? Or just sometimes? Does it matter whether it's one parameter being passed or five or twenty? What boost should it give, if any? Would it be noticeable at all? (I've been using SqlCommand.Prepare here and skipped it there and never had any problems or noticeable differences.) For the sake of the question, this is my usual code, but this is more of a general question.

    public static decimal pobierzBenchmarkKolejny(string varPortfelID, DateTime data, decimal varBenchmarkPoprzedni, decimal varStopaOdniesienia)
    {
        const string preparedCommand = @"SELECT [dbo].[ufn_BenchmarkKolejny](@varPortfelID, @data, @varBenchmarkPoprzedni, @varStopaOdniesienia) AS 'Benchmark'";
        using (var varConnection = Locale.sqlConnectOneTime(Locale.sqlDataConnectionDetailsDZP))
        //if (varConnection != null)
        {
            using (var sqlQuery = new SqlCommand(preparedCommand, varConnection))
            {
                sqlQuery.Prepare();
                sqlQuery.Parameters.AddWithValue("@varPortfelID", varPortfelID);
                sqlQuery.Parameters.AddWithValue("@varStopaOdniesienia", varStopaOdniesienia);
                sqlQuery.Parameters.AddWithValue("@data", data);
                sqlQuery.Parameters.AddWithValue("@varBenchmarkPoprzedni", varBenchmarkPoprzedni);
                using (var sqlQueryResult = sqlQuery.ExecuteReader())
                    if (sqlQueryResult != null)
                    {
                        while (sqlQueryResult.Read())
                        {
                            //sqlQueryResult["Benchmark"];
                        }
                    }
            }
        }

    Read the article

  • template specialization of an auto_ptr<T>

    - by Chris Kaminski
    Maybe I'm overcomplicating things, but then again, I do sort of like clean interfaces. Let's say I want a specialization of auto_ptr for an fstream - I want a default fstream for the generic case, but allow a replacement pointer:

    template <>
    class auto_ptr<fstream>
    {
        static fstream myfStream;
        fstream* ptr;
    public:
        auto_ptr()
        {
            // set ptr to &myfStream
        }

        void reset(fstream* newPtr)
        {
            // free old ptr if not the static one.
            ptr = newPtr;
        }
    };

    Would you consider something different or more elegant? And how would you keep something like the above from propagating outside this particular compilation unit? [The actual template is a boost::scoped_ptr.]

    EDIT: It's a contrived example. Ignore the fstream - it's about providing a default instance of an object for an auto_ptr. I may not want to provide a specialized instance, but would like to keep the auto_ptr semantics for this static default object.

    class UserClass
    {
    public:
        auto_ptr<fstream> ptr;

        UserClass() { }
    };

    I may not provide a dynamic object at construction time - I still want it to have a meaningful default. Since I'm not looking at ownership-transfer semantics, it really shouldn't matter that my pointer class is pointing to a statically allocated object, no?

    Read the article

  • Parse string to create a list of elements

    - by Nick
    I have a string like this:

    "\r color=\"red\" name=\"Jon\" \t\n depth=\"8.26\"   "

    and I want to parse this string and create a std::list of objects of this class:

    class data
    {
        std::string name;
        std::string value;
    };

    where, for example:

    name = color
    value = red

    What is the fastest way? I can use Boost.

    EDIT: This is what I've tried:

    vector<string> tokens;
    split(tokens, str, is_any_of(" \t\f\v\n\r"));
    if (tokens.size() > 1)
    {
        list<data> attr;
        for_each(tokens.begin(), tokens.end(),
            [&attr](const string& token)
            {
                if (token.empty() || !contains(token, "="))
                    return;
                vector<string> tokens;
                split(tokens, token, is_any_of("="));
                erase_all(tokens[1], "\"");
                attr.push_back(data(tokens[0], tokens[1]));
            });
    }

    But it does not work if there are spaces inside the quotes, like color="red 1".
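    As an illustration of one alternative (my sketch, not code from the question), a regular expression that matches key="value" pairs keeps spaces inside the quotes intact. This assumes a C++11 <regex> implementation (Boost.Regex offers an equivalent interface) and uses a struct with public members and a constructor for brevity:

    #include <list>
    #include <regex>
    #include <string>

    struct data {
        std::string name;
        std::string value;
        data(const std::string& n, const std::string& v) : name(n), value(v) {}
    };

    std::list<data> parse(const std::string& input)
    {
        std::list<data> attrs;
        // key="value": the value may contain spaces because it is delimited by quotes
        std::regex pair_re("(\\w+)\\s*=\\s*\"([^\"]*)\"");
        for (std::sregex_iterator it(input.begin(), input.end(), pair_re), end; it != end; ++it)
            attrs.push_back(data((*it)[1].str(), (*it)[2].str()));
        return attrs;
    }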

    Read the article

  • SQL Server 2008 spatial index and CPU utilization with MapGuide Open Source 2.1

    - by Antonio de la Peña
    I have a SQL Server table with hundreds of thousands of geometry-type parcels. I have built indexes on them, trying different combinations of density and objects-per-cell settings. So far I'm settling for LOW, LOW, MEDIUM, MEDIUM and 16 objects per cell, and I made a stored procedure that sets the bounding box according to the extents of the entities in the table. There is an incredible performance boost, from queries taking almost minutes without an index to mere seconds with one, and it gets faster when the zoom is closer, since fewer objects are displayed. Yet the CPU utilization gets to 100% when querying for features, even when the queries themselves are fast. I'm worried this will not fly in a production environment. I am using MapGuide Open Source 2.1 for this project, but I am positive the CPU load is caused by SQL Server. I wonder if my indexes are set up properly. I haven't found any clear documentation on how to properly set them up; every article I've read basically says "it depends..." but nothing specific. Do you have any recommendations for me, including books and articles? Thank you.

    Read the article

  • Design: How to declare a specialized memory handler class

    - by Michael Dorgan
    On an embedded-type system, I have created a small object allocator that piggybacks on top of a standard memory allocation system. This allocator is a boost::simple_segregated_storage<> class, and it does exactly what I need: O(1) alloc/dealloc time on small objects at the cost of a touch of internal fragmentation. My question is how best to declare it. Right now, it's declared at static scope in our memory code module, which is probably fine, but it feels a bit exposed there and is also now linked to that module forever. Normally, I would declare it as a monostate or a singleton, but that uses the dynamic memory allocator (which is where this allocator lives). Furthermore, our dynamic memory allocator is initialized and used before static object initialization occurs on our system (as, again, the memory manager is pretty much the most fundamental component of an engine). To get around this catch-22, I added an extra "does the small object allocator exist yet" check. That check now must be run on every small object allocation. In the scheme of things, this is nearly negligible, but it still bothers me. So the question is: is there a better way to declare this portion of the memory manager that helps decouple it from the memory module and perhaps avoids the cost of that extra isInitialized() if statement? If this method uses dynamic memory, please explain how to get around the lack of initialization of the small object portion of the manager.
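    One idiom that often comes up for this situation is construct-on-first-use with placement new into static storage (a sketch of mine under assumed names, not the poster's design): the allocator never touches the dynamic heap, does not depend on static-initialization order across translation units, and the hand-written per-allocation existence check goes away (the compiler's own first-use guard for the local static remains):

    #include <cstddef>  // std::size_t
    #include <new>      // placement new

    // Placeholder standing in for the wrapper around boost::simple_segregated_storage<>.
    struct SmallObjectAllocator {
        void* allocate(std::size_t size);   // defined elsewhere in a real build
        void  deallocate(void* p);
    };

    SmallObjectAllocator& smallAllocator()
    {
        // Static storage, constructed in place on first call; a real version must
        // also guarantee suitable alignment (union trick or a compiler attribute)
        // and consider the thread safety of that first call.
        static unsigned char storage[sizeof(SmallObjectAllocator)];
        static SmallObjectAllocator* instance = new (storage) SmallObjectAllocator;
        return *instance;
    }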

    Read the article

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually?

    Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update: here are the table definitions for the example I used in my question:

    create table Order
    (
        Id int identity(1, 1) primary key,
        ProductName ntext null
    )

    create table Customer
    (
        Id int identity(1, 1) primary key,
        OrderId int null references Order (Id)
    )

    Read the article

  • C++: Efficiently adding integers to strings

    - by Shinka
    I know how to add integers to strings, but I'm not sure I'm doing it in an efficient manner. I have a class where I often have to return a string plus an integer (a different integer each time). In Java I would do something like:

    public class MyClass
    {
        final static String S = "MYSTRING";
        private int id = 0;

        public String getString()
        {
            return S + (id++);
        }
    }

    But in C++ I have to do:

    class MyClass
    {
    private:
        std::string S; // For some reason I can't do const std::string S = "MYSTRING";
        int id;

    public:
        MyClass()
        {
            S = "MYSTRING";
            id = 0;
        }

        std::string getString()
        {
            std::ostringstream oss;
            oss << S << id++;
            return oss.str();
        }
    };

    An additional constraint: I don't want to (in fact, I can't) use Boost or any other libraries; I'll have to work with the standard library. So the thing is: the code works, but in C++ I have to create a bunch of ostringstream objects, so it seems inefficient. To be fair, perhaps Java does the same and I just don't notice it; I say it's inefficient mostly because I know very little about strings. Is there a more efficient way to do this?
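    One stream-free alternative (a sketch of mine using only the standard library, not an answer taken from this page) formats the integer into a small stack buffer and appends it, so no stream object is constructed per call:

    #include <cstdio>
    #include <string>

    class MyClass
    {
        std::string S;
        int id;
    public:
        MyClass() : S("MYSTRING"), id(0) {}

        std::string getString()
        {
            char buf[16];                       // plenty for a 32-bit int
            std::sprintf(buf, "%d", id++);      // or snprintf(buf, sizeof buf, ...) where available
            return S + buf;
        }
    };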

    Read the article

  • Intel MKL memory management and exceptions

    - by Andrew
    Hello everyone, I am trying out Intel MKL and it appears that they have their own memory management (C-style). They suggest using their MKL_malloc/MKL_free pairs for vectors and matrices, and I do not know what a good way to handle this is. One of the reasons for that is that memory alignment is recommended to be at least 16 bytes, and with these routines it is specified explicitly. I used to rely on auto_ptr and boost::smart_ptr a lot to forget about memory clean-ups. How can I write an exception-safe program with MKL memory management, or should I just use regular auto_ptrs and not bother? Thanks in advance.

    EDIT: http://software.intel.com/sites/products/documentation/hpc/mkl/win/index.htm - this link may explain why I brought up the question.

    UPDATE: I used an idea from the answer below for an allocator. This is what I have now:

    template <typename T, size_t TALIGN = 16, size_t TBLOCK = 4>
    class aligned_allocator : public std::allocator<T>
    {
    public:
        // pull the dependent typedefs out of the base so they are visible here
        typedef typename std::allocator<T>::pointer   pointer;
        typedef typename std::allocator<T>::size_type size_type;

        pointer allocate(size_type n, const void *hint)
        {
            pointer p = NULL;
            size_t count = sizeof(T) * n;
            size_t count_left = count % TBLOCK;
            if (count_left != 0)
                count += TBLOCK - count_left;
            if (!hint)
                p = reinterpret_cast<pointer>(MKL_malloc(count, TALIGN));
            else
                p = reinterpret_cast<pointer>(MKL_realloc((void*)hint, count, TALIGN));
            return p;
        }

        void deallocate(pointer p, size_type n)
        {
            MKL_free(p);
        }
    };

    If anybody has any suggestions, feel free to make it better.

    Read the article

  • What -W values in gcc correspond to which actual warnings?

    - by SebastianK
    Preamble: I know disabling warnings is not a good idea. Anyway, I have a technical question about this. Using GCC 3.3.6, I get the following warning:

    choosing ... over ... because conversion sequence for the argument is better.

    Now, I want to disable this warning as described in the gcc warning options, by providing an argument like

    -Wno-theNameOfTheWarning

    But I don't know the name of the warning. How can I find out the name of the option that disables this warning? I am not able to fix the warning, because it occurs in a header of an external library that cannot be changed. It is in Boost.Serialization (the rx(s, count) call):

    template<class Archive, class Container, class InputFunction, class R>
    inline void load_collection(Archive & ar, Container &s)
    {
        s.clear();
        // retrieve number of elements
        collection_size_type count;
        unsigned int item_version;
        ar >> BOOST_SERIALIZATION_NVP(count);
        if(3 < ar.get_library_version())
            ar >> BOOST_SERIALIZATION_NVP(item_version);
        else
            item_version = 0;
        R rx;
        rx(s, count);
        std::size_t c = count;
        InputFunction ifunc;
        while(c-- > 0){
            ifunc(ar, s, item_version);
        }
    }

    I have already tried #pragma GCC system_header, but this had no effect. Using -isystem instead of -I also does not work. The general question remains: I know the text of the warning message, but I do not know how it correlates to the gcc warning options.

    Read the article

  • GNU C++ how to check when -std=c++0x is in effect?

    - by TerryP
    My system compiler (gcc42) works fine with the TR1 features that I want, but when trying to support newer compiler versions other than the system's, accessing the TR1 headers hits an #error demanding the -std=c++0x option, because of how the compiler interfaces with the library, or some hubbub like that:

    /usr/local/lib/gcc45/include/c++/bits/c++0x_warning.h:31:2: error: #error This file
    requires compiler and library support for the upcoming ISO C++ standard, C++0x. This
    support is currently experimental, and must be enabled with the -std=c++0x or
    -std=gnu++0x compiler options.

    Having to supply an extra switch to support GCC 4.4 and 4.5 under this system (FreeBSD) is no problem, but obviously it changes the picture! Using my system compiler (g++ 4.2, default dialect):

    #include <tr1/foo>
    using std::tr1::foo;

    Using newer (4.5) versions of the compiler with -std=c++0x:

    #include <foo>
    using std::foo;

    Is there any way, using the preprocessor, that I can tell whether g++ is running with C++0x features enabled? Something like this is what I'm looking for:

    #ifdef __CXX0X_MODE__
    #endif

    but I have not found anything in the manual or off the web. At this rate, I'm starting to think that life would just be easier using Boost as a dependency, and not worrying about a new language standard arriving before TR4... hehe.
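    As far as I recall (worth verifying against the exact GCC versions involved), GCC 4.3-4.6 define the macro __GXX_EXPERIMENTAL_CXX0X__ when -std=c++0x or -std=gnu++0x is in effect, and later compilers set __cplusplus to 201103L or higher, so a dispatch along these lines is possible; the shared_ptr example is my own choice, not from the question:

    // Sketch: pick the C++0x or TR1 flavour of a component depending on the dialect.
    #if defined(__GXX_EXPERIMENTAL_CXX0X__) || __cplusplus >= 201103L
        #include <memory>
        using std::shared_ptr;
    #else
        #include <tr1/memory>
        using std::tr1::shared_ptr;
    #endif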

    Read the article

  • How do you use stl's functions like for_each?

    - by thomas-gies
    I started using STL containers because they came in very handy when I needed the functionality of a list, set and map and had nothing else available in my programming environment. I did not care much about the ideas behind them. The STL documentation was only interesting up to the point where it came to functions, etc. Then I skipped reading and just used the containers. But yesterday, still being relaxed from my holidays, I just gave it a try and wanted to go a bit more the STL way. So I used the transform function (can I have a little bit of applause for me, thank you). From an academic point of view it really looked interesting, and it worked. But the thing that bothers me is that if you intensify the use of those functions, you need loads of helper classes for mostly everything you want to do in your code. The whole logic of the program is sliced into tiny pieces. This slicing is not the result of good coding habits; it's just a technical need, something that probably makes my life harder, not easier. And I learned the hard way that you should always choose the simplest approach that solves the problem at hand. And I can't see what, for example, the for_each function is doing for me that justifies the use of a helper class over several simple lines of code that sit inside a normal loop, so that everybody can see what is going on.

    I would like to know what you think about my concerns. Did you see it like I do when you started working this way, and did you change your mind when you got used to it? Are there benefits that I overlooked? Or do you just ignore this stuff, as I did (and will probably go on doing)?

    Thanks.

    PS: I know that there is a real for_each loop in Boost. But I ignore it here since it is just a convenient replacement for my usual loops with iterators, I guess.
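    For illustration (my example, not the poster's code), here is the trade-off being described: a named functor passed to std::for_each versus the equivalent hand-written loop:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // The kind of helper class the question complains about: a bit of state plus a call operator.
    struct PrintWithPrefix {
        const char* prefix;
        explicit PrintWithPrefix(const char* p) : prefix(p) {}
        void operator()(int value) const { std::cout << prefix << value << '\n'; }
    };

    int main() {
        std::vector<int> v;
        v.push_back(1); v.push_back(2); v.push_back(3);

        // Algorithm + functor: the loop body lives in a class defined elsewhere.
        std::for_each(v.begin(), v.end(), PrintWithPrefix("item "));

        // Equivalent explicit loop: everything is visible in place.
        for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
            std::cout << "item " << *it << '\n';

        return 0;
    }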

    Read the article

  • Putting all methods in class definition

    - by Amnon
    When I use the pimpl idiom, is it a good idea to put all the method definitions inside the class definition? For example:

    // in A.h
    class A
    {
        class impl;
        boost::scoped_ptr<impl> pimpl;
    public:
        A();
        int foo();
    };

    // in A.cpp
    class A::impl
    {
        // method defined in class
        int foo() { return 42; }

        // as opposed to only declaring the method, and defining it elsewhere:
        float bar();
    };

    A::A() : pimpl(new impl) { }

    int A::foo() { return pimpl->foo(); }

    As far as I know, the only problems with putting a method definition inside a class definition are that (1) the implementation is visible in files that include the class definition, and (2) the compiler may make the method inline. These are not problems in this case, since the class is defined in a private file, and inlining has no effect since the methods are called in only one place. The advantage of putting the definition inside the class is that you don't have to repeat the method signature. So, is this OK? Are there any other issues to be aware of?

    Read the article

  • C++ MACRO that will execute a block of code and a certain command after that block.

    - by Poni
    void main()
    {
        int xyz = 123; // original value

        { // code block starts
            xyz++;
            if (xyz < 1000)
                xyz = 1;
        } // code block ends

        int original_value = xyz; // should be 123
    }

    void main()
    {
        int xyz = 123; // original value

        MACRO_NAME(xyz = 123) // the macro takes the code that should be executed at the end of the block
        { // code block starts
            xyz++;
            if (xyz < 1000)
                xyz = 1;
        } // code block ends   << how to make the macro execute the "xyz = 123" statement here?

        int original_value = xyz; // should be 123
    }

    Only the first main() works. I think the comments explain the issue. It doesn't need to be a macro, but to me it just sounds like a classic "macro needed" case. By the way, there's the BOOST_FOREACH macro/library, and I think it does the exact same thing I'm trying to achieve, but it's too complex for me to find the essence of what I need. From its introductory manual page, an example:

    #include <string>
    #include <iostream>
    #include <boost/foreach.hpp>

    int main()
    {
        std::string hello( "Hello, world!" );

        BOOST_FOREACH( char ch, hello )
        {
            std::cout << ch;
        }

        return 0;
    }
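    For what it's worth, one common way to run a trailing statement after a user-supplied block is a for-based macro; this is a sketch of mine (the macro name is made up), roughly the trick that BOOST_FOREACH and similar macros build on, and the trailing statement does not fire if the block exits via break, return or an exception:

    // Runs the attached block once, then executes `post` when the block finishes.
    #define WITH_CLEANUP(post) \
        for (int once_ = 0; once_ == 0; (once_ = 1), (post))

    int main()
    {
        int xyz = 123;

        WITH_CLEANUP(xyz = 123)
        {
            xyz++;
            if (xyz < 1000)
                xyz = 1;
        }

        int original_value = xyz; // 123 again: the macro restored it after the block
        return original_value == 123 ? 0 : 1;
    }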

    Read the article

  • Rewriting jQuery to plain old javascript - are the performance gains worth it?

    - by Swader
    Since jQuery is an incredibly easy and banal library, I've developed a rather complex project fairly quickly with it. The entire interface is jQuery-based, and memory is cleaned regularly to maintain optimum performance. Everything works very well in Firefox, and exceptionally so in Chrome (other browsers are of no concern for me, as this is not a commercial or publicly available product). What I'm wondering now is: since pure plain old JavaScript is really not a complicated language to master, would it be performance-enhancing to rewrite the whole thing in plain old JS, and if so, how much of a boost would you expect to get from it? If the answers prove positive enough, I'll go ahead and do it, run a benchmark and report back with the precise findings. Cheers

    Edit: Thanks guys, valuable insight. The purpose was not to "re-invent the wheel"; it was just for experience and personal improvement. Just because something exists doesn't mean you shouldn't explore it in greater detail, know how it works or try to recreate it. This is the same reason I seldom use frameworks: I would much rather use my own code, iron it out and gain massive experience doing it, than start off by using someone else's code, regardless of how ironed out it is. Anyway, won't be doing it, thanks for saving me the effort :)

    Read the article

  • Compiler turning a string& into a basic_string<>&

    - by Shtong
    Hello, I'm coming back to C++ after long years spent on other technologies and I'm stuck on some weird behavior when calling methods that take std::string parameters. An example call:

    LocalNodeConfiguration *LocalNodeConfiguration::ReadFromFile(std::string & path)
    {
        // ...
        throw configuration_file_error(string("Configuration file empty"), path);
        // ...
    }

    When I compile, I get this (I cropped file names for readability):

    /usr/bin/g++ -g -I/home/shtong/Dev/OmegaNoc/build -I/usr/share/include/boost-1.41.0 -o CMakeFiles/OmegaNocInternals.dir/configuration/localNodeConfiguration.cxx.o -c /home/shtong/Dev/OmegaNoc/source/configuration/localNodeConfiguration.cxx

    .../localNodeConfiguration.cxx: In static member function ‘static OmegaNoc::LocalNodeConfiguration* OmegaNoc::LocalNodeConfiguration::ReadFromFile(std::string&)’:
    .../localNodeConfiguration.cxx:72: error: no matching function for call to ‘OmegaNoc::configuration_file_error::configuration_file_error(std::string, std::basic_string<char, std::char_traits<char>, std::allocator<char> >&)’
    .../configurationManager.hxx:25: note: candidates are: OmegaNoc::configuration_file_error::configuration_file_error(std::string&, std::string&)
    .../configurationManager.hxx:22: note:                 OmegaNoc::configuration_file_error::configuration_file_error(const OmegaNoc::configuration_file_error&)

    So as I understand it, the compiler considers that my path parameter has turned into a basic_string at some point, and thus doesn't find the constructor overload I want to use. But I don't really get why this transformation happened. Some searching on the net suggested I use g++, but I was already using it. So any other advice would be appreciated :) Thanks
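    One plausible reading of that candidate list (my own sketch and assumption, not a confirmed diagnosis): the reported constructor takes non-const std::string& parameters, and a temporary such as string("Configuration file empty") cannot bind to a non-const reference, which by itself makes the overload non-viable:

    #include <string>

    struct configuration_file_error {
        // signature as reported in the "candidates are" note above
        configuration_file_error(std::string& what, std::string& path) {}
        // a const-reference version would also accept temporaries:
        // configuration_file_error(const std::string& what, const std::string& path);
    };

    void demo(std::string& path) {
        std::string msg("Configuration file empty");
        configuration_file_error ok(msg, path);                          // fine: lvalues bind to string&
        // configuration_file_error bad(std::string("temporary"), path); // error: rvalue cannot bind to string&
    }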

    Read the article
