Search Results

Search found 1726 results on 70 pages for 'boost signals2'.

Page 62/70 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • C++ smart pointers: sharing pointers vs. sharing data

    - by Eli Bendersky
    In this insightful article, one of the Qt programmers tries to explain the different kinds of smart pointers Qt implements. In the beginning, he makes a distinction between sharing data and sharing the pointers themselves: First, let’s get one thing straight: there’s a difference between sharing pointers and sharing data. When you share pointers, the value of the pointer and its lifetime is protected by the smart pointer class. In other words, the pointer is the invariant. However, the object that the pointer is pointing to is completely outside its control. We don’t know if the object is copiable or not, if it’s assignable or not. Now, sharing of data involves the smart pointer class knowing something about the data being shared. In fact, the whole point is that the data is being shared and we don’t care how. The fact that pointers are being used to share the data is irrelevant at this point. For example, you don’t really care how Qt tool classes are implicitly shared, do you? What matters to you is that they are shared (thus reducing memory consumption) and that they work as if they weren’t. Frankly, I just don't understand this explanation. There was a clarification plea in the article comments, but I didn't find the author's explanation sufficient. If you do understand this, please explain. What is this distinction, and how do other shared pointer classes (e.g. from Boost or the new C++ standard) fit into this taxonomy? Thanks in advance.
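
    For what it's worth, a rough sketch of how I read the distinction (the CowString class below is a hypothetical stand-in, not one of Qt's actual classes): std::shared_ptr shares the pointer itself, so both handles see each other's mutations, while a copy-on-write wrapper shares the data behind value semantics and detaches on write.

        #include <cassert>
        #include <memory>
        #include <string>

        // Sharing the pointer: both handles refer to the same object.  The smart
        // pointer guards the pointer's lifetime but knows nothing about the data.
        void share_pointer() {
            std::shared_ptr<std::string> a = std::make_shared<std::string>("hello");
            std::shared_ptr<std::string> b = a;   // same underlying string
            *b += " world";
            assert(*a == "hello world");          // a sees b's change
        }

        // Sharing the data: a hypothetical copy-on-write wrapper with value
        // semantics.  That a pointer is used internally is an implementation detail.
        class CowString {
        public:
            explicit CowString(const std::string &s) : data_(std::make_shared<std::string>(s)) {}
            const std::string &get() const { return *data_; }
            void append(const std::string &s) {
                if (data_.use_count() > 1)                        // detach before writing
                    data_ = std::make_shared<std::string>(*data_);
                *data_ += s;
            }
        private:
            std::shared_ptr<std::string> data_;
        };

        void share_data() {
            CowString a("hello");
            CowString b = a;          // cheap copy; data shared internally
            b.append(" world");       // b detaches; a keeps its own value
            assert(a.get() == "hello" && b.get() == "hello world");
        }

        int main() { share_pointer(); share_data(); }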

    Read the article

  • Lucene - querying with long strings

    - by Mikos
    I have an index, with a field "Affiliation", some example values are: "Stanford University School of Medicine, Palo Alto, CA USA", "Institute of Neurobiology, School of Medicine, Stanford University, Palo Alto, CA", "School of Medicine, Harvard University, Boston MA", "Brigham & Women's, Harvard University School of Medicine, Boston, MA", "Harvard University, Cambridge MA" and so on... (the bottom line being that the affiliations are written in multiple ways with no apparent consistency). When I query the index on the affiliation field using, say, "School of Medicine, Stanford University, Palo Alto, CA" (with QueryParser) to find all Stanford-related documents, I get a lot of false positives, presumably because of the presence of "School of Medicine" etc. (Note: I cannot use a phrase query because of the variability in the way the affiliation is constructed.) I have tried the following: Use a SpanNearQuery by splitting the search phrase on whitespace (here I get no results!). Tried boosting (using ^) by splitting on the comma and boosting the last parts such as "Palo Alto CA" with a much higher boost than the initial phrases; here I still get lots of false positives. Any suggestions on how to approach this? If SpanNearQuery is the way to go, any ideas on why I get 0 results?

    Read the article

  • C++ design question on traversing binary trees

    - by user231536
    I have a binary tree T which I would like to copy to another tree. Suppose I have a visit method that gets evaluated at every node: struct visit { virtual void operator() (node* n)=0; }; and I have a visitor algorithm void visitor(node* t, visit& v) { //do a preorder traversal using stack or recursion if (!t) return; v(t); visitor(t->left, v); visitor(t->right, v); } I have 2 questions: I settled on using the functor-based approach because I see that the Boost graph library does this (vertex visitors). Also I tend to repeat the same code to traverse the tree and do different things at each node. Is this a good design to get rid of duplicated code? What other alternative designs are there? How do I use this to create a new binary tree from an existing one? I can keep a stack on the visit functor if I want, but it gets tied to the algorithm in visitor. How would I incorporate postorder traversals here? Another functor class?
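
    A hedged sketch of one way I'd approach questions 2 and 3 (the node layout below is assumed, not taken from the question): give the traversal both a pre-order and a post-order hook, and write the copy as its own recursion, since a per-node functor has no natural way to hand back the freshly built subtree.

        #include <cstddef>

        struct node {                 // assumed layout
            int value;
            node *left;
            node *right;
        };

        struct visit {
            virtual void operator()(node *n) = 0;
            virtual ~visit() {}
        };

        struct noop_visit : visit {
            void operator()(node *) {}
        };

        // One traversal, two hooks: pre runs before the children, post after them.
        void traverse(node *t, visit &pre, visit &post) {
            if (!t) return;
            pre(t);
            traverse(t->left, pre, post);
            traverse(t->right, pre, post);
            post(t);
        }

        // Copying reads more naturally as its own recursion, because each call
        // must return the new subtree so the parent can link it in.
        node *clone(const node *t) {
            if (!t) return NULL;
            node *c = new node;
            c->value = t->value;
            c->left  = clone(t->left);
            c->right = clone(t->right);
            return c;
        }

        int main() {
            node leaf = { 2, NULL, NULL };
            node root = { 1, &leaf, NULL };
            node *copy = clone(&root);
            noop_visit pre, post;
            traverse(copy, pre, post);    // visits 1, then 2
            delete copy->left;
            delete copy;
        }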

    Read the article

  • Will fixed-point arithmetic be worth my trouble?

    - by Thomas
    I'm working on a fluid dynamics Navier-Stokes solver that should run in real time. Hence, performance is important. Right now, I'm looking at a number of tight loops that each account for a significant fraction of the execution time: there is no single bottleneck. Most of these loops do some floating-point arithmetic, but there's a lot of branching in between. The floating-point operations are mostly limited to additions, subtractions, multiplications, divisions and comparisons. All this is done using 32-bit floats. My target platform is x86 with at least SSE1 instructions. (I've verified in the assembler output that the compiler indeed generates SSE instructions.) Most of the floating-point values that I'm working with have a reasonably small upper bound, and precision for near-zero values isn't very important. So the thought occurred to me: maybe switching to fixed-point arithmetic could speed things up? I know the only way to be really sure is to measure it, but that might take days, so I'd like to know the odds of success beforehand. Fixed-point was all the rage back in the days of Doom, but I'm not sure where it stands anno 2010. Considering how much silicon is nowadays pumped into floating-point performance, is there a chance that fixed-point arithmetic will still give me a significant speed boost? Does anyone have any real-world experience that may apply to my situation?
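
    For a feel of what the integer path looks like, here is a minimal Q16.16 fixed-point sketch covering the operations mentioned (add, subtract, multiply, divide, compare); whether it actually beats SSE floats on this workload is something only a measurement can settle.

        #include <cassert>
        #include <cstdint>

        // Q16.16 fixed point: value = raw / 65536.  Addition, subtraction and
        // comparison are plain integer ops; multiply and divide need a 64-bit
        // intermediate to keep the scale correct.
        struct fix16 {
            int32_t raw;
        };

        inline fix16 from_float(float f)    { return fix16{ (int32_t)(f * 65536.0f) }; }
        inline float to_float(fix16 a)      { return a.raw / 65536.0f; }
        inline fix16 add(fix16 a, fix16 b)  { return fix16{ a.raw + b.raw }; }
        inline fix16 sub(fix16 a, fix16 b)  { return fix16{ a.raw - b.raw }; }
        inline fix16 mul(fix16 a, fix16 b)  { return fix16{ (int32_t)(((int64_t)a.raw * b.raw) >> 16) }; }
        inline fix16 div(fix16 a, fix16 b)  { return fix16{ (int32_t)(((int64_t)a.raw << 16) / b.raw) }; }
        inline bool  less(fix16 a, fix16 b) { return a.raw < b.raw; }

        int main() {
            fix16 x = from_float(1.5f), y = from_float(2.0f);
            assert(to_float(mul(x, y)) == 3.0f);
            assert(less(x, y));
            assert(to_float(div(y, x)) > 1.33f && to_float(div(y, x)) < 1.34f);
        }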

    Read the article

  • C++ thread safety - exchange data between worker and controller

    - by peterchen
    I still feel a bit unsafe about the topic and hope you folks can help me - For passing data (configuration or results) between a worker thread polling something and a controlling thread interested in the most recent data, I've ended up using more or less the following pattern repeatedly: Mutex m; tData * stage; // temporary, accessed concurrently // send data, gives up ownership, receives old stage if any tData * Send(tData * newData) { ScopedLock lock(m); swap(newData, stage); return newData; } // receiving thread fetches latest data here tData * Fetch(tData * prev) { ScopedLock lock(m); if (stage != 0) { // ... release prev prev = stage; stage = 0; } return prev; // now current } Note: This is not supposed to be a full producer-consumer queue; only the most recent data is relevant. Also, I've glossed over resource management somewhat here. When necessary I'm using two such stages: one to send config changes to the worker, and one for sending back results. Now, my questions assuming that ScopedLock implements a full memory barrier: do stage and/or workerData need to be volatile? is volatile necessary for tData members? can I use smart pointers instead of the raw pointers - say boost::shared_ptr? Anything else that can go wrong? I am basically trying to avoid "volatile infection" spreading into tData, and minimize lock contention (a lock free implementation seems possible, too). However, I'm not sure if this is the easiest solution. ScopedLock acts as a full memory barrier. Since all this is more or less platform dependent, let's say Visual C++ x86 or x64, though differences/notes for other platforms are welcome, too. (a preliminary "thanks but" for recommending libraries such as Intel TBB - I am trying to understand the platform issues here)
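
    On question 3, a rough sketch of the same staging pattern written with std::shared_ptr (C++11 names; std::lock_guard stands in for ScopedLock here): the old stage's lifetime simply rides along with the returned shared_ptr, so the manual release step disappears.

        #include <memory>
        #include <mutex>
        #include <utility>

        struct tData { /* configuration or results */ };

        class Stage {
        public:
            // Worker publishes the latest data and gets back whatever was still
            // sitting in the stage (possibly empty), to reuse or drop.
            std::shared_ptr<tData> Send(std::shared_ptr<tData> newData) {
                std::lock_guard<std::mutex> lock(m_);
                std::swap(newData, stage_);
                return newData;                  // previous stage, if any
            }

            // Controller fetches the most recent data; keeps prev if nothing new.
            std::shared_ptr<tData> Fetch(std::shared_ptr<tData> prev) {
                std::lock_guard<std::mutex> lock(m_);
                if (stage_) {
                    prev = std::move(stage_);    // old prev is released automatically
                }
                return prev;
            }

        private:
            std::mutex m_;
            std::shared_ptr<tData> stage_;       // most recent, not yet fetched
        };

    As far as I understand it, with the lock supplying the memory barrier, volatile buys nothing for the staged pointer or for tData's members.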

    Read the article

  • Build System with Recursive Dependency Aggregation

    - by radman
    Hi, I recently began setting up my own library and projects using a cross-platform build system (generates make files, Visual Studio solutions/projects etc. on demand) and I have run into a problem that has likely been solved already. The issue that I have run into is this: when an application has a dependency that also has dependencies, then the application being linked must link the dependency and also all of its sub-dependencies. This proceeds in a recursive fashion (for argument's sake let's assume that we are dealing exclusively with static libraries):

        TopLevelApp.exe
            dependency_A
                dependency_A-1
                dependency_A-2
            dependency_B
                dependency_B-1
                dependency_B-2

    So in this example TopLevelApp will need to link dependency_A, dependency_A-1, dependency_A-2 etc. and the same for B. I think the responsibility of remembering all of these manually in the target application is pretty suboptimal. There is also the issue of ensuring the same version of the dependency is used across all targets (assuming that some targets depend on the same things, e.g. boost). Now linking all of the libraries is required and there is no way of getting around it. What I am looking for is a build system that manages this for you, so all you have to do is specify that you depend on something and the appropriate dependencies of that library will be pulled in automatically. The build system I have been looking at is premake (premake4), which doesn't handle this (as far as I can determine). Does anyone know of a build system that does handle this? And if there isn't one, why not?

    Read the article

  • AJAX delayed loading of a UserControl in ASP.NET

    - by user196202
    This is about the AJAX delayed loading of UserControls (or any controls) described in the post at Encosia.com: http://encosia.com/2008/02/05/boost-aspnet-performance-with-deferred-content-loading/ I tried to implement it, but I noticed that it only works for simple controls or UserControls that contain simple ASP.NET controls (or HTML tags). When an advanced dynamic AJAX control is involved (like AjaxControlToolkit or Telerik controls) that has JavaScript inside it, this method of injecting the HTML into the .InnerHtml property of a div tag (for example) is NOT WORKING, and I read that the browser needs to load the script at page load and after that it won't interpret scripts injected via .InnerHtml. So I have attached an example of the delayed-load project (from encosia.com by Dave Ward) with my modification (look at DefaultPopup.aspx, beforePopup.aspx and AfterPopup.aspx), in which I modified the RssReader to show a ListView with popup items (implemented via the ACT HoverMenuExtender). Loaded the regular way, the popup items are shown correctly, but with the delayed load - which is done by creating a virtual page to render the HTML and injecting it into the .InnerHtml property - this isn't working. So my questions are: is there a way to do delayed loading for controls which include scripts, like ACT, Telerik and others? And for the AJAX templates - if I need to inject an advanced control into the page - how do I do it with your approach? Thanks very much. (I can't attach files here, so please ask me by mail ([email protected]) and I'll send them.) Zahi Kramer

    Read the article

  • Ways std::stringstream can set fail/bad bit?

    - by Evan Teran
    A common piece of code I use for simple string splitting looks like this: inline std::vector<std::string> split(const std::string &s, char delim) { std::vector<std::string> elems; std::stringstream ss(s); std::string item; while(std::getline(ss, item, delim)) { elems.push_back(item); } return elems; } Someone mentioned that this will silently "swallow" errors occurring in std::getline. And of course I agree that's the case. But it occurred to me: what could possibly go wrong here in practice that I would need to worry about? Basically it all boils down to this: inline std::vector<std::string> split(const std::string &s, char delim) { std::vector<std::string> elems; std::stringstream ss(s); std::string item; while(std::getline(ss, item, delim)) { elems.push_back(item); } if(ss.fail()) { // *** How did we get here!? *** } return elems; } A stringstream is backed by a string, so we don't have to worry about any of the issues associated with reading from a file. There is no type conversion going on here since getline simply reads until it sees the delimiter or EOF. So we can't get any of the errors that something like boost::lexical_cast has to worry about. I simply can't think of something besides failing to allocate enough memory that could go wrong, but that'll just throw a std::bad_alloc well before the std::getline even takes place. What am I missing?
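
    One subtlety worth knowing, shown in the sketch below: the final getline call that ends the loop sets failbit along with eofbit, so ss.fail() is true even after a perfectly normal run; a genuine stream-level problem, as opposed to plain end-of-input, would show up in ss.bad() instead.

        #include <iostream>
        #include <sstream>
        #include <string>

        int main() {
            std::stringstream ss("a,b,c");
            std::string item;
            while (std::getline(ss, item, ',')) { /* consume */ }

            // After the loop the stream has hit end of input: the last getline
            // call extracted nothing, so it set failbit as well as eofbit.
            std::cout << std::boolalpha
                      << "eof:  " << ss.eof()  << '\n'   // true
                      << "fail: " << ss.fail() << '\n'   // true  - but only because of EOF
                      << "bad:  " << ss.bad()  << '\n';  // false - no stream-level error
        }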

    Read the article

  • Maximum page fetch with maximum bandwidth

    - by Ehsan
    Hi, I want to create a spider-like application. I've implemented page fetching as in the following code in a multi-threaded application, but there are two problems. 1) I want to use my maximum bandwidth to send/receive requests. How should I configure my requests to do so (like Download Accelerator and similar applications)? I heard that a normal application will use only 66% of the available bandwidth. 2) I don't know exactly what HttpWebRequest.KeepAlive does, but as its name implies I think I can create a connection to a website and, without closing the connection, send another request to that website using the existing connection. Does it boost performance, or am I wrong? public PageFetchResult Fetch() { PageFetchResult fetchResult = new PageFetchResult(); try { HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(URLAddress); HttpWebResponse resp = (HttpWebResponse)req.GetResponse(); Uri requestedURI = new Uri(URLAddress); Uri responseURI = resp.ResponseUri; string resultHTML = ""; byte[] reqHTML = ResponseAsBytes(resp); if (!string.IsNullOrEmpty(FetchingEncoding)) resultHTML = Encoding.GetEncoding(FetchingEncoding).GetString(reqHTML); else if (!string.IsNullOrEmpty(resp.CharacterSet)) resultHTML = Encoding.GetEncoding(resp.CharacterSet).GetString(reqHTML); req.Abort(); resp.Close(); fetchResult.IsOK = true; fetchResult.ResultHTML = resultHTML; } catch (Exception ex) { fetchResult.IsOK = false; fetchResult.ErrorMessage = ex.Message; } return fetchResult; }

    Read the article

  • Is there annotation in JavaScript? If not, how to switch between debug/production modes in a declarative way

    - by Michael Mao
    Hi all: This is but a curious question. I cannot find any useful links from Google, so it might be better to ask the gurus here. The point is: is there a way to make an "annotation" in JavaScript source code so that all code snippets for testing purposes can be 'filtered out' when the project is deployed from the test environment into the real environment? I know that in Java, C# or some other languages, you can place an annotation just above the function name, such as: // it is good to remove the annoying warning messages @SuppressWarnings("unchecked") public class Tester extends TestingPackage { ... } Basically I've got a lot of testing code that prints out something into the Firebug console. I don't want to manually comment it out, because the person who is going to maintain the code might not be aware of all the testing functions, so he/she might just miss one function and the whole thing can be brought to its knees. One other thing: we might use a minimizer to "shrink" the source code into "human-unreadable" code and boost performance (just like jQuery.min), so picking the testing sections out of the mess won't be possible for plain human eyes in the future. Any suggestion is much appreciated.

    Read the article

  • Passenger error message I can't figure out

    - by Sam Kong
    Hi, I am testing Rails 3 on DreamHost which just installed Rails 3. I created a simple controller and it failed. Browser shows 500 error (Internal Server Error) and the log shows the following message. Could not find i18n-0.5.0 in any of the sources Try running `bundle install`. *** Exception EOFError in spawn manager (Unexpected end-of-file detected.) (process 17951): from /dh/passenger/lib/phusion_passenger/utils.rb:306:in `unmarshal_and_raise_errors' from /dh/passenger/lib/phusion_passenger/rack/application_spawner.rb:71:in `spawn_application' from /dh/passenger/lib/phusion_passenger/rack/application_spawner.rb:41:in `spawn_application' from /dh/passenger/lib/phusion_passenger/spawn_manager.rb:159:in `spawn_application' from /dh/passenger/lib/phusion_passenger/spawn_manager.rb:287:in `handle_spawn_application' from /dh/passenger/lib/phusion_passenger/abstract_server.rb:352:in `__send__' from /dh/passenger/lib/phusion_passenger/abstract_server.rb:352:in `main_loop' from /dh/passenger/lib/phusion_passenger/abstract_server.rb:196:in `start_synchronously' from /dh/passenger/bin/passenger-spawn-server:61 [ pid=13245 file=ext/apache2/Hooks.cpp:727 time=2010-12-24 12:13:38.287 ]: Unexpected error in mod_passenger: Cannot spawn application '/home/cp_rails3/sites/rails3.codepremise.com': The spawn server has exited unexpectedly. Backtrace: in 'virtual boost::shared_ptr<Passenger::Application::Session> Passenger::ApplicationPoolServer::Client::get(const Passenger::PoolOptions&)' (ApplicationPoolServer.h:471) in 'int Hooks::handleRequest(request_rec*)' (Hooks.cpp:523) It runs fine in console (app.get "url") and also ok with "rails server". What's wrong? Thanks. Sam

    Read the article

  • Why Does This Maintainability Index Increase?

    - by Timothy
    I would be appreciative if someone could explain to me the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within using ( )? Sample 1 (MI score of 71) public static String Sha1(String plainText) { using (SHA1Managed sha1 = new SHA1Managed()) { Byte[] text = Encoding.Unicode.GetBytes(plainText); Byte[] hashBytes = sha1.ComputeHash(text); return Convert.ToBase64String(hashBytes); } } Sample 2 (MI score of 73) public static String Sha1(String plainText) { Byte[] text, hashBytes; using (SHA1Managed sha1 = new SHA1Managed()) { text = Encoding.Unicode.GetBytes(plainText); hashBytes = sha1.ComputeHash(text); } return Convert.ToBase64String(hashBytes); } I understand metrics are meaningless outside of a broader context and understanding, and programmers should exercise discretion. While I could boost the score up to 76 with return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))), I shouldn't. I would clearly be just playing with numbers and it isn't truly any more readable or maintainable at that point. I am curious though as to what the logic might be behind the increase in this case. It's obviously not line-count.

    Read the article

  • What is the general feeling about reflection extensions in std::type_info?

    - by Evan Teran
    I've noticed that reflection is one feature that developers from other languages find very lacking in c++. For certain applications I can really see why! It is so much easier to write things like an IDE's auto-complete if you had reflection. And certainly serialization APIs would be a world easier if we had it. On the other side, one of the main tenets of c++ is don't pay for what you don't use. Which makes complete sense. That's something I love about c++. But it occurred to me there could be a compromise. Why don't compilers add extensions to the std::type_info structure? There would be no runtime overhead. The binary could end up being larger, but this could be a simple compiler switch to enable/disable and to be honest, if you are really concerned about the space savings, you'll likely disable exceptions and RTTI anyway. Some people cite issues with templates, but the compiler happily generates std::type_info structures for template types already. I can imagine a g++ switch like -fenable-typeinfo-reflection which could become very popular (and mainstream libs like boost/Qt/etc could easily have a check to generate code which uses it if there, in which case the end user would benefit with no more cost than flipping a switch). I don't find this unreasonable since large portable libraries like this already depend on compiler extensions. So why isn't this more common? I imagine that I'm missing something, what are the technical issues with this?
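
    For contrast, a small sketch of what std::type_info already exposes today: an identity and an implementation-defined name, but nothing a serializer or an auto-complete engine could walk, which is precisely the gap being discussed.

        #include <iostream>
        #include <typeinfo>

        struct Point { int x; int y; };

        template <typename T>
        void describe(const T &value) {
            const std::type_info &ti = typeid(value);
            // name() is implementation-defined (often mangled with g++), and that
            // is essentially all the standard exposes - no field list, no method
            // list, nothing a serializer or auto-complete engine could traverse.
            std::cout << "type: " << ti.name() << ", size: " << sizeof(T) << " bytes\n";
        }

        int main() {
            Point p = {1, 2};
            describe(p);
            describe(42);
        }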

    Read the article

  • Find all cycles in graph, redux

    - by Shadow
    Hi, I know there are already quite a few answers to this question. However, I found that none of them really get to the point. Some argue that a cycle is (almost) the same as a strongly connected component (see http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402), so one could use algorithms designed for that goal. Some argue that finding a cycle can be done via DFS and checking for back edges (see the Boost graph documentation on file dependencies). I would now like some suggestions on whether all cycles in a graph can be detected via DFS and checking for back edges. My opinion is that it indeed could work that way, as DFS-VISIT (see the pseudocode of DFS) freshly enters each node that was not yet visited. In that sense, each vertex exhibits a potential start of a cycle. Additionally, as DFS visits each edge once, each edge leading to the starting point of a cycle is also covered. Thus, by using DFS and back-edge checking it should indeed be possible to detect all cycles in a graph. Note that, if cycles with different numbers of participant nodes exist (e.g. triangles, rectangles etc.), additional work has to be done to discriminate the actual "shape" of each cycle.
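
    A sketch of the back-edge check itself (adjacency-list representation assumed): an edge into a grey vertex during DFS proves a cycle and recovers the one running through the current DFS path, though enumerating every distinct cycle takes more work (e.g. Johnson's algorithm), since several cycles can share a single back edge.

        #include <cstddef>
        #include <iostream>
        #include <vector>

        enum Color { WHITE, GREY, BLACK };

        // Returns true if a back edge (and hence a cycle) is found; prints the
        // cycle recovered from the current DFS path.
        bool dfs(int u, const std::vector<std::vector<int> > &adj,
                 std::vector<Color> &color, std::vector<int> &path) {
            color[u] = GREY;
            path.push_back(u);
            for (std::size_t i = 0; i < adj[u].size(); ++i) {
                int v = adj[u][i];
                if (color[v] == GREY) {                        // back edge u -> v
                    std::size_t start = 0;
                    while (path[start] != v) ++start;
                    std::cout << "cycle:";
                    for (std::size_t k = start; k < path.size(); ++k)
                        std::cout << ' ' << path[k];
                    std::cout << '\n';
                    return true;
                }
                if (color[v] == WHITE && dfs(v, adj, color, path))
                    return true;
            }
            path.pop_back();
            color[u] = BLACK;
            return false;
        }

        int main() {
            // 0 -> 1 -> 2 -> 0, plus a tail 2 -> 3
            std::vector<std::vector<int> > adj(4);
            adj[0].push_back(1);
            adj[1].push_back(2);
            adj[2].push_back(0);
            adj[2].push_back(3);
            std::vector<Color> color(adj.size(), WHITE);
            std::vector<int> path;
            bool found = false;
            for (std::size_t s = 0; s < adj.size() && !found; ++s)
                if (color[s] == WHITE)
                    found = dfs((int)s, adj, color, path);
            std::cout << (found ? "graph contains a cycle\n" : "graph is acyclic\n");
        }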

    Read the article

  • SQL with Regular Expressions vs Indexes with Logical Merging Functions

    - by geeko
    Hello lads, I am trying to develop a complex textual search engine. I have thousands of textual pages from many books. I need to search for pages that contain specified complex logical criteria. These criteria can contain virtually any combination of the following: A: Full words. B: Word roots (similar to stems; i.e. all words with certain key letters). C: Word templates (in some languages, key letters are filled into certain templates to form various parts of speech such as adjectives and past/present verbs...). D: Logical connectives: AND/OR/XOR/NOT/IF/IFF and parentheses to state priorities. Now, would it be faster to have the pages' full text in the database (not indexed) and search through them all using SQL and regular expressions? Or would it be better to construct indexes of word/root/template-page-location tuples, so we can boost searching for individual words/roots/templates? However, it gets tricky as we introduce logical connectives into our query. I thought of doing the following steps in such cases: 1: Separately search for each individual word/root/template in the specified query. 2: On a priority basis, merge two result lists (from step 1) at a time, depending on the logical connective. For example, if we are searching for "he AND (is OR was)": 1: We search for "he", "is" and "was" separately and get a result list for each word. 2: Merge the result lists of "is" and "was" using the merging function OR-MERGE. 3: Merge the result of the OR-MERGE with the list for "he" using the merging function AND-MERGE. The result of step 3 is then returned as the result of the specified query. What do you think, gurus? Which is faster? Any better ideas? Thank you all in advance.
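
    A small sketch of the index-side plan (assuming each word/root/template maps to a sorted, de-duplicated list of page ids): with sorted posting lists, AND-MERGE is a set intersection and OR-MERGE a set union, each linear in the sizes of its inputs.

        #include <algorithm>
        #include <iostream>
        #include <iterator>
        #include <vector>

        typedef std::vector<int> Postings;   // sorted, de-duplicated page ids

        Postings and_merge(const Postings &a, const Postings &b) {
            Postings out;
            std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                                  std::back_inserter(out));
            return out;
        }

        Postings or_merge(const Postings &a, const Postings &b) {
            Postings out;
            std::set_union(a.begin(), a.end(), b.begin(), b.end(),
                           std::back_inserter(out));
            return out;
        }

        int main() {
            // Query: he AND (is OR was)
            Postings he = {1, 2, 5, 9}, is = {2, 3, 9}, was = {4, 5};
            Postings result = and_merge(he, or_merge(is, was));
            for (int page : result) std::cout << page << ' ';   // prints: 2 5 9
            std::cout << '\n';
        }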

    Read the article

  • Cross-platform and language (de)serialization

    - by fwgx
    I'm looking for a way to serialize a bunch of C++ structs in the most convenient way so that the serialization is portable across C++ and Java (at a minimum) and across 32-bit/64-bit, big/little-endian platforms. The structures to be serialized just contain data, i.e. they're pure data objects with no state or behavior. The idea being that we serialize the structs into an octet blob that we can store in a database "generically" and be read out later on. Thus avoiding changing the database whenever a struct changes and also avoiding assigning each data member to a field - i.e. we only want one table to hold everything "generically" as a binary blob. This should make less work for developers and require fewer changes when structures change. I've looked at Boost.Serialization but don't think there's a way to enable compatibility with Java. And likewise for inheriting Serializable in Java. If there is a way to do it by starting with an IDL file that would be best, as we already have IDL files that describe the structures. Cheers in advance!
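
    A rough sketch of the byte-level idea, hand-rolled rather than Boost.Serialization: write every integer with a fixed width and a fixed byte order and length-prefix the strings, so a Java DataInputStream-style reader can reconstruct the same sequence regardless of the platform that produced it. The Person struct is just an illustrative stand-in.

        #include <cstdint>
        #include <string>
        #include <vector>

        // Append a 32-bit value in big-endian order, independent of host endianness.
        void put_u32(std::vector<uint8_t> &out, uint32_t v) {
            out.push_back((uint8_t)(v >> 24));
            out.push_back((uint8_t)(v >> 16));
            out.push_back((uint8_t)(v >> 8));
            out.push_back((uint8_t)(v));
        }

        // Length-prefixed UTF-8 string.
        void put_string(std::vector<uint8_t> &out, const std::string &s) {
            put_u32(out, (uint32_t)s.size());
            out.insert(out.end(), s.begin(), s.end());
        }

        // Example data object: serialize field by field, in a documented order,
        // so the Java side can read back the same sequence.
        struct Person {
            uint32_t id;
            std::string name;
        };

        std::vector<uint8_t> serialize(const Person &p) {
            std::vector<uint8_t> blob;
            put_u32(blob, 1);            // format version, for forward compatibility
            put_u32(blob, p.id);
            put_string(blob, p.name);
            return blob;
        }

        int main() {
            Person p = { 42, "Ada" };
            std::vector<uint8_t> blob = serialize(p);
            return (int)blob.size();     // 4 + 4 + 4 + 3 = 15 bytes
        }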

    Read the article

  • Thread-safe lazy construction of a singleton in C++

    - by pauldoo
    Is there a way to implement a singleton object in C++ that is: Lazily constructed in a thread-safe manner (two threads might simultaneously be the first user of the singleton - it should still only be constructed once). Doesn't rely on static variables being constructed beforehand (so the singleton object is itself safe to use during the construction of static variables). (I don't know my C++ well enough, but is it the case that integral and constant static variables are initialized before any code is executed (i.e., even before static constructors are executed - their values may already be "initialized" in the program image)? If so - perhaps this can be exploited to implement a singleton mutex - which can in turn be used to guard the creation of the real singleton.) Excellent, it seems that I have a couple of good answers now (shame I can't mark 2 or 3 as being the answer). There appear to be two broad solutions: Use static initialisation (as opposed to dynamic initialisation) of a POD static variable, and implement my own mutex with that using the built-in atomic instructions. This was the type of solution I was hinting at in my question, and I believe I knew it already. Use some other library function like pthread_once or boost::call_once. These I certainly didn't know about - and am very grateful for the answers posted.
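
    For the second family of solutions, a sketch using std::call_once from C++11 (pthread_once and boost::call_once have the same shape); std::once_flag has a constexpr constructor, so it is constant-initialized and safe to touch while other statics are still being constructed.

        #include <mutex>

        class Singleton {
        public:
            static Singleton &instance() {
                // call_once guarantees the body runs exactly once even if several
                // threads arrive here simultaneously; later callers just wait.
                std::call_once(flag_, [] { instance_ = new Singleton(); });
                return *instance_;
            }

        private:
            Singleton() {}
            Singleton(const Singleton &);            // non-copyable
            Singleton &operator=(const Singleton &);

            static std::once_flag flag_;             // constant-initialized, no ordering issues
            static Singleton *instance_;
        };

        std::once_flag Singleton::flag_;
        Singleton *Singleton::instance_ = 0;

    Worth noting: since C++11 a plain function-local static also gets thread-safe initialization from the compiler, which is often the simpler route.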

    Read the article

  • segfault during __cxa_allocate_exception in SWIG wrapped library

    - by lefticus
    While developing a SWIG wrapped C++ library for Ruby, we came across an unexplained crash during exception handling inside the C++ code. I'm not sure of the specific circumstances to recreate the issue, but it happened first during a call to std::uncaught_exception, then after a some code changes, moved to __cxa_allocate_exception during exception construction. Neither GDB nor valgrind provided any insight into the cause of the crash. I've found several references to similar problems, including: http://wiki.fifengine.de/Segfault_in_cxa_allocate_exception http://forums.fifengine.de/index.php?topic=30.0 http://code.google.com/p/osgswig/issues/detail?id=17 https://bugs.launchpad.net/ubuntu/+source/libavg/+bug/241808 The overriding theme seems to be a combination of circumstances: A C application is linked to more than one C++ library More than one version of libstdc++ was used during compilation Generally the second version of C++ used comes from a binary-only implementation of libGL The problem does not occur when linking your library with a C++ application, only with a C application The "solution" is to explicitly link your library with libstdc++ and possibly also with libGL, forcing the order of linking. After trying many combinations with my code, the only solution that I found that works is the LD_PRELOAD="libGL.so libstdc++.so.6" ruby scriptname option. That is, none of the compile-time linking solutions made any difference. My understanding of the issue is that the C++ runtime is not being properly initialized. By forcing the order of linking you bootstrap the initialization process and it works. The problem occurs only with C applications calling C++ libraries because the C application is not itself linking to libstdc++ and is not initializing the C++ runtime. Because using SWIG (or boost::python) is a common way of calling a C++ library from a C application, that is why SWIG often comes up when researching the problem. Is anyone out there able to give more insight into this problem? Is there an actual solution or do only workarounds exist? Thanks.

    Read the article

  • Lucene document Boosting

    - by athreyar
    Hello, I am having a problem with Lucene boosting. I am trying to boost a particular document that matches the specified (firstname) field. I have posted the relevant part of the code:

        private static Document createDoc(String lucDescription, String primaryk, String specialString) {
            Document doc = new Document();
            doc.add(new Field("lucDescription", lucDescription, Field.Store.NO, Field.Index.TOKENIZED));
            doc.add(new Field("primarykey", primaryk, Field.Store.YES, Field.Index.NO));
            doc.add(new Field("specialDescription", specialString, Field.Store.NO, Field.Index.UN_TOKENIZED));
            doc.setBoost((float)(0.00001));
            if (specialString.equals("chris"))
                doc.setBoost((float)(100000.1));
            return doc;
        }

    Why is this not working?

        public static String dbSearch(String searchString) {
            List pkList = new ArrayList();
            String conCat = "(";
            try {
                String querystr = searchString;
                Query query = new QueryParser("lucDescription", new StandardAnalyzer()).parse(querystr);
                IndexSearcher searchIndex = new IndexSearcher("/home/athreya/docsIndexFile"); // Index of the User table-- /home/araghu/aditya/indexFile.
                Hits hits = searchIndex.search(query);
                System.out.println("Found " + hits.length() + " hits.");
                for(int iterator=0;iterator

    Thank you in advance, Athreya

    Read the article

  • Is there an STL- and UTF-8-friendly C++ wrapper for ICU, or another powerful Unicode library?

    - by artyom
    Hello, I need a good Unicode library for C++. I need transformations done in a Unicode-aware way. For example: sort all strings in a case-insensitive way and get their first characters for an index. Convert various Unicode strings to upper and to lower case. Split text at reasonable positions -- word boundaries that would work for Chinese and Japanese as well. Format numbers and dates in a locale-sensitive way (should be thread-safe). Transparent support for UTF-8 (as the primary internal representation). As far as I know the best library is ICU. However, I can't find normal developer-friendly API documentation with examples. Also, as far as I can see, it is not too friendly to modern C++ design, working with the STL and so on. I would like something like this:

        std::string msg;
        unistring umsg.from_utf8(msg);
        unistring::word_iterator wi;
        for(wi=umsg.words().begin(),n=0;wi!=umsg.words().wi_end(),n<10;++wi,++n)
            ;
        msg=umsg.substr(umsg.words().begin(),wi).to_utf8();
        cout<<_("First 10 words are ")<<msg;

    Does anybody know of a good STL-friendly ICU wrapper released under an open source license, preferably permissive like MIT or Boost, though other LGPLv2-compatible licenses are OK as well? Is there another high-quality library similar to ICU? Platform: UNIX/POSIX; Windows support is not required. Thanks, Artyom. Edit: Unfortunately I wasn't logged in, so I can't mark the answer as accepted... I have attached the answer myself.
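
    For what it's worth, a rough sketch of what the raw ICU C++ calls look like for two of the bullet points (UTF-8 round-trip, case mapping, word iteration). The calls reflect my reading of the ICU headers and should be double-checked against the ICU version at hand.

        #include <unicode/brkiter.h>
        #include <unicode/locid.h>
        #include <unicode/unistr.h>
        #include <iostream>
        #include <string>

        int main() {
            std::string utf8 = "Hello wide world";

            // UTF-8 in, case mapping, UTF-8 out.
            icu::UnicodeString u = icu::UnicodeString::fromUTF8(utf8);
            icu::UnicodeString upper(u);
            upper.toUpper(icu::Locale::getUS());
            std::string out;
            upper.toUTF8String(out);
            std::cout << out << '\n';

            // Locale-aware word iteration (also works for scripts without spaces).
            UErrorCode status = U_ZERO_ERROR;
            icu::BreakIterator *bi =
                icu::BreakIterator::createWordInstance(icu::Locale::getUS(), status);
            bi->setText(u);
            for (int32_t start = bi->first(), end = bi->next();
                 end != icu::BreakIterator::DONE;
                 start = end, end = bi->next()) {
                icu::UnicodeString word;
                u.extractBetween(start, end, word);   // segments include whitespace runs
                std::string w;
                word.toUTF8String(w);
                std::cout << '[' << w << "] ";
            }
            std::cout << '\n';
            delete bi;
        }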

    Read the article

  • C++ Returning Multiple Items

    - by Travis Parks
    I am designing a class in C++ that extracts URLs from an HTML page. I am using Boost's Regex library to do the heavy lifting for me. I started designing a class and realized that I didn't want to tie down how the URLs are stored. One option would be to accept a std::vector<Url> by reference and just call push_back on it. I'd like to avoid forcing consumers of my class to use std::vector. So, I created a member template that took a destination iterator. It looks like this: template <typename TForwardIterator, typename TOutputIterator> TOutputIterator UrlExtractor::get_urls( TForwardIterator begin, TForwardIterator end, TOutputIterator dest); I feel like I am overcomplicating things. I like to write fairly generic code in C++, and I struggle to lock down my interfaces. But then I get into these predicaments where I am trying to templatize everything. At this point, someone reading the code doesn't realize that TForwardIterator is iterating over a std::string. In my particular situation, I am wondering if being this generic is a good thing. At what point do you start making code more explicit? Is there a standard approach to getting values out of a function generically?
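
    A toy sketch of how the output-iterator version gets consumed (get_urls below is a simplified stand-in for UrlExtractor::get_urls that filters whole tokens rather than parsing HTML): the same call can fill a vector, deduplicate into a set, or stream straight to output, which is the payoff of not fixing the container.

        #include <iostream>
        #include <iterator>
        #include <set>
        #include <string>
        #include <vector>

        typedef std::string Url;   // stand-in for the poster's Url type

        // Toy stand-in: copies tokens starting with "http" from [begin, end) to dest.
        template <typename TForwardIterator, typename TOutputIterator>
        TOutputIterator get_urls(TForwardIterator begin, TForwardIterator end,
                                 TOutputIterator dest) {
            while (begin != end) {
                if (begin->compare(0, 4, "http") == 0) { *dest = *begin; ++dest; }
                ++begin;
            }
            return dest;
        }

        int main() {
            std::vector<std::string> tokens = {"<a", "http://a.example", "b", "http://b.example"};

            std::vector<Url> vec;                       // into a vector
            get_urls(tokens.begin(), tokens.end(), std::back_inserter(vec));

            std::set<Url> uniq;                         // into a set, deduplicated
            get_urls(tokens.begin(), tokens.end(), std::inserter(uniq, uniq.end()));

            // straight to a stream, no container at all
            get_urls(tokens.begin(), tokens.end(),
                     std::ostream_iterator<Url>(std::cout, "\n"));
        }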

    Read the article

  • non-copyable objects and value initialization: g++ vs msvc

    - by R Samuel Klatchko
    I'm seeing some different behavior between g++ and msvc around value initializing non-copyable objects. Consider a class that is non-copyable: class noncopyable_base { public: noncopyable_base() {} private: noncopyable_base(const noncopyable_base &); noncopyable_base &operator=(const noncopyable_base &); }; class noncopyable : private noncopyable_base { public: noncopyable() : x_(0) {} noncopyable(int x) : x_(x) {} private: int x_; }; and a template that uses value initialization so that the value will get a known value even when the type is POD: template <class T> void doit() { T t = T(); ... } and trying to use those together: doit<noncopyable>(); This works fine on msvc as of VC++ 9.0 but fails on every version of g++ I tested this with (including version 4.5.0) because the copy constructor is private. Two questions: Which behavior is standards compliant? Any suggestion of how to work around this in gcc (and to be clear, changing that to T t; is not an acceptable solution as this breaks POD types). P.S. I see the same problem with boost::noncopyable.
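
    For question 2, one workaround I've seen (essentially what boost::value_initialized does, sketched here from memory): wrap T in a holder whose constructor value-initializes the member, so PODs still come out zeroed and no copy constructor is ever required.

        // Holder that value-initializes its member: for PODs t_ is zeroed,
        // for classes the default constructor runs - and no copy is needed.
        template <class T>
        class value_init_holder {
        public:
            value_init_holder() : t_() {}
            T &get() { return t_; }
            const T &get() const { return t_; }
        private:
            T t_;
        };

        template <class T>
        void doit() {
            value_init_holder<T> holder;   // works for noncopyable types and for PODs
            T &t = holder.get();
            (void)t;                       // ... use t as before ...
        }

        struct pod { int x; };

        int main() {
            doit<int>();        // value-initialized to 0
            doit<pod>();        // member zeroed
            // doit<noncopyable>(); would also compile: no copy constructor involved
        }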

    Read the article

  • Truncate C++ string fields generated by ostringstream and iomanip::setw

    - by Ian Durkan
    In C++ I need string representations of integers with leading zeroes, where the representation has 8 digits and no more than 8 digits, truncating digits on the right side if necessary. I thought I could do this using just ostringstream and iomanip.setw(), like this: int num_1 = 3000; ostringstream out_target; out_target << setw(8) << setfill('0') << num_1; cout << "field: " << out_target.str() << " vs input: " << num_1 << endl; The output here is: field: 00003000 vs input: 3000 Very nice! However if I try a bigger number, setw lets the output grow beyond 8 characters: int num_2 = 2000000000; ostringstream out_target; out_target << setw(8) << setfill('0') << num_2; cout << "field: " << out_target.str() << " vs input: " << num_2 << endl; out_target.str(""); output: field: 2000000000 vs input: 2000000000 The desired output is "20000000". There's nothing stopping me from using a second operation to take only the first 8 characters, but is field truncation truly missing from iomanip? Would the Boost formatting do what I need in one step?
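
    If a second step is acceptable, a minimal sketch of the plain-library route: format with setw/setfill as before, then keep at most the first 8 characters; as far as I know, iostreams itself has no truncating field width (setw only pads, it never cuts).

        #include <iomanip>
        #include <iostream>
        #include <sstream>
        #include <string>

        // Zero-pad to 8 digits, then truncate to the first 8 characters.
        std::string fixed8(int num) {
            std::ostringstream out;
            out << std::setw(8) << std::setfill('0') << num;
            std::string s = out.str();
            return s.substr(0, 8);     // no-op when the field already fits
        }

        int main() {
            std::cout << fixed8(3000)       << '\n';   // 00003000
            std::cout << fixed8(2000000000) << '\n';   // 20000000
        }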

    Read the article

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11 which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid etc.), they recommend only considering using a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc. but that's not relevant to this question). What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong): With page caching the Rails process is hit initially and then generates a static HTML file that is served directly by the Web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired then Rails is hit again and the static file is regenerated with the updated content ready for the next request With an HTTP reverse proxy cache the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale then Rails serves the updated content to the proxy which caches it and then serves it to the browser If my understanding is correct, then doesn't page caching result in less hits to the Rails process? There isn't all that back and forth to determine if the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?

    Read the article
