Search Results

Search found 2238 results on 90 pages for 'boost variant'.

  • What's the best language to use for a simple Windows 7 dynamic GUI desktop app? [closed]

    - by Gregor Samsa
    [Note: I hope I am not breaking etiquette, but I originally posted a variant of this question here, and am re-asking because I am now making this solely a question about programming.] I would like to write a program of the following simple form: The user can produce X number of resizable frames (analogous to HTML frames). Each frame serves as a simple text editor, which you can type into, and the whole configuration, including resized windows and text, can be saved. The user should be able to alternately "freeze" and present the information, and "unfreeze" and edit the frames. Thus it will be a cross between a calendar and a text editor. I don't particularly care whether it is a web application or not. What languages are ideal for such a setup? I know some C, Python and HTML, and am willing to learn others if need be. It seems to me this should be a relatively easy program to make, but I need a little direction before starting. EDIT: My OS is Windows 7.

  • What are the pros and cons of a non-fixed-interval update loop?

    - by akonsu
    I am studying various approaches to implementing a game loop and I have found this article. In the article the author implements a loop which, if the processing falls behind in time, skips frame renderings and just updates the game in a loop (the last variant, called "Constant Game Speed independent of Variable FPS"; sketched below). I do not understand why it is acceptable to call update_game() in a loop without making sure the update function is called at a particular interval. I do not see any value in doing this. I would think that in my game I want to be sure the game is updated periodically with a known period. So maybe it is worthwhile to have two threads: one would call update periodically, and the other would redraw the game, also periodically? Would this be a good and practical approach? Of course I would need to synchronise the threads.
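
    For reference, a minimal sketch of that last loop variant, with hypothetical stand-ins for the game's own hooks (update_game, display_game and game_is_running are placeholders, and the tick rate is arbitrary). The key property is that update_game always advances the world by exactly one fixed tick, so skipping renders under load changes smoothness, not game speed:

        #include <chrono>
        #include <cstdio>

        // Hypothetical stubs standing in for the real game's hooks.
        static int ticks = 0;
        void update_game()     { ++ticks; }                         // advance world one fixed tick
        void display_game()    { std::printf("tick %d\n", ticks); }
        bool game_is_running() { return ticks < 200; }

        int main() {
            typedef std::chrono::steady_clock game_clock;
            const std::chrono::milliseconds TICK(20);   // 50 updates per second
            const int MAX_FRAMESKIP = 5;                // bound on catch-up updates

            game_clock::time_point next_tick = game_clock::now();
            while (game_is_running()) {
                int loops = 0;
                // Behind schedule? Run extra updates (skipping renders)
                // until caught up or the frameskip cap is hit.
                while (game_clock::now() > next_tick && loops < MAX_FRAMESKIP) {
                    update_game();
                    next_tick += TICK;
                    ++loops;
                }
                display_game();   // drawn as often as the machine allows
            }
        }

    The single-threaded form already yields a fixed average update rate; the two-thread design suggested above can work too, but update and render would then need to lock or double-buffer the shared game state.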

  • Use Adwaita Dark for Libre Office and possibly other apps too?

    - by Relik
    I would like to know how to force some apps to use the Adwaita dark theme. I noticed that, by default, the movie player and photo viewer apps in GNOME Shell use Adwaita Dark, while most other apps use the light variant. I would really like to set LibreOffice to use the dark theme instead of the light one. How would I go about doing this? I found the site below, but the instructions seem a bit outdated and do not work for GTK3. http://urukrama.wordpress.com/2008/07/13/setting-a-custom-gtk-theme-for-specific-applications/

  • Font displays differently in Firefox vs. Chrome

    - by Goro
    It seems that my menu bar is displayed with a different font stretch in Firefox than it is in Chrome. See the following: Here is the CSS applied to this element:

        font-variant: small-caps;
        font-size: 13px;
        letter-spacing: 0px;
        font-family: Arial;
        font-stretch: normal;
        text-decoration: none;

    As far as I can tell, everything regarding that font is exactly the same, yet they still display differently (see pic). Any ideas? Thanks,

  • File system layout for multiple build targets

    - by Yttrill
    I am seeking ideas for how to build and install software parameterized in several ways, including target OS, target platform CPU details, debugging variant, etc. Some parts of the install are shared, such as documentation and many platform-independent files; others are not, such as 64- and 32-bit libraries when these are separated rather than combined in a multi-arch library. On big networked platforms one often has multiple computers sharing some large server space, so there is actually cause to have even Windows and Unix binaries on the same disk. My product has already fixed an install philosophy of $INSTALL_ROOT/genericname/version/ so that multiple versions can coexist. The question is: how to manage the layout of all the other stuff? One possibility is sketched below.
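
    Purely as an illustration (every name below is invented), one layout that fits the fixed $INSTALL_ROOT/genericname/version/ philosophy is to hang one shared subtree plus one subtree per (OS, CPU, variant) triple off the version directory:

        $INSTALL_ROOT/myproduct/1.4.2/
            share/                       # docs and platform-independent files
            linux-x86_64-release/lib/
            linux-x86_64-debug/lib/
            linux-x86-release/lib/
            windows-x86_64-release/bin/

    Each build target then owns exactly one directory, and Windows and Unix binaries can coexist on the same networked disk without colliding.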

  • Assembly as a First Programming Language?

    - by Anto
    How good an idea do you think it would be to teach people Assembly (some variant) as a first programming language? It would take a lot more effort than learning, for instance, Java or Python, but one would have a good understanding of the machine more or less from "programming day one" (compared to many higher-level languages, at least). What do you think? Is it a realistic idea, at least for those who are ready to make the extra effort? Advantages and disadvantages? Note: I'm no teacher, just curious.

  • Bluetooth broadcasting in real time - is it possible?

    - by user69961
    Is it possible to broadcast data via Bluetooth to one or more connected devices? I mean that each phone would be master and slave at the same time, and each phone would broadcast data that should be received by all the other phones. Or is the only possibility to use a "client-server"-like topology, where one phone acts as a server, listens to all clients, and then sends data from each client to the rest of the clients in the network? Which variant would be more efficient? If broadcasting is possible, then the same implementation can be used for all devices, and if one device dies, communication among the rest of the network can continue. It would also be enough to send one message per device - not a message from each device to the server and then back to all devices. Am I right?

  • Logout or shutdown shows terminal

    - by N.N.
    When I log out or shut down, the terminal (the one I reach with Ctrl+Alt+F7) is sometimes shown before the login screen appears or before the computer powers off. Is there a way to stop this behavior? More explicitly: if I log out, the terminal is sometimes shown for a second before the login screen appears. Also, sometimes when I shut down (from within Unity or GNOME) the terminal is shown, sometimes for the whole shutdown process or just for a second or two. I've had this problem throughout 10.04, 10.10 and 11.04, and I've always used the standard Ubuntu variant. I've also noticed this happening on a fresh install of Natty on a friend's netbook, so it's not local to my computers.

  • Mouse stopped responding after installing Ubuntu Server

    - by Danijar
    There were two operating systems on my machine: Windows 7 and Ubuntu Desktop 12.04, both 64-bit versions. Everything worked well: first I installed Windows, then the Desktop version of Ubuntu; the latter's GRUB overwrote the Windows boot loader and, of course, added Windows as a boot option. Yesterday I wanted to play with Ubuntu Server 12.04 and installed it as a third operating system. During the installation, the Server's GRUB seems to have overwritten the one installed earlier (the background color became black; it had been purple). First, the newly installed GRUB generates too many boot options for Ubuntu Desktop, two for each kernel version (normal and recovery modes). OK, that's bearable, but worst of all, the mouse on this machine has completely stopped responding, regardless of which operating system I choose to load. I hope somebody can help me. Thanks in advance!

  • Creating a Lazy Sequence of Directory Descendants in C#

    My dear friend Craig Andera posted an implementation of a function that descends into a directory in a "lazy" manner, i.e. you get the first descendant back right away and not only after all descendants have been calculated. His implementation was in Clojure, a Lisp variant that runs on the Java VM:

        (import [java.io File])
        (defn dir-descendants [dir]
          (let [children (.listFiles (File. dir))]
            (lazy-cat
              (map (memfn getPath) (filter (memfn isFile) children))...
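
    For comparison, Boost.Filesystem offers the same kind of laziness to C++ out of the box: recursive_directory_iterator visits one entry per increment, so the first descendant is available immediately rather than after a full traversal. A minimal sketch (the starting path is arbitrary):

        #include <boost/filesystem.hpp>
        #include <iostream>

        namespace fs = boost::filesystem;

        int main() {
            // Each ++ descends one step; nothing is computed up front.
            fs::recursive_directory_iterator it("."), end;
            for (; it != end; ++it) {
                if (fs::is_regular_file(it->status()))
                    std::cout << it->path().string() << '\n';
            }
        }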

  • Does Facebook store multiple password hashes for each user?

    - by loxxy
    I noticed that Facebook allows multiple variants of my own password:
        - My password as it is.
        - My password with the first letter capitalized.
        - My password with all letters capitalized.
    It is commonly known that passwords are stored as hashes. So my question is: does Facebook store multiple hashes for each user? The hash of each variant should be completely different... or am I missing something here? And there may be more accepted combinations besides the ones I observed. This is obviously done to provide a better user experience, and they probably have statistics showing that people repeat these mistakes. But I could not help but wonder: is it worth so many extra lookups (in their database) just to accommodate a mistyped password? On top of this, they warn about caps lock (even though they don't seem to care).
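
    As for the storage question: a site can keep a single hash and, at login time, hash a handful of variants of what was typed, so no extra lookups or stored hashes are required. A toy sketch of that idea (std::hash stands in for a real, slow password hash such as bcrypt; the variant list mirrors the ones observed above):

        #include <cctype>
        #include <functional>
        #include <iostream>
        #include <string>

        // Toy stand-in for a real password hash.
        static std::size_t toy_hash(const std::string& s) {
            return std::hash<std::string>()(s);
        }

        static char flip(char c) {
            unsigned char u = static_cast<unsigned char>(c);
            return std::isupper(u) ? static_cast<char>(std::tolower(u))
                                   : static_cast<char>(std::toupper(u));
        }

        // Accept the password as typed, with its first letter flipped
        // (mobile auto-capitalization), and fully case-toggled (caps lock).
        bool check_password(const std::string& typed, std::size_t stored_hash) {
            std::string first = typed;
            if (!first.empty()) first[0] = flip(first[0]);
            std::string toggled = typed;
            for (std::string::size_type i = 0; i < toggled.size(); ++i)
                toggled[i] = flip(toggled[i]);
            return toy_hash(typed)   == stored_hash
                || toy_hash(first)   == stored_hash
                || toy_hash(toggled) == stored_hash;
        }

        int main() {
            std::size_t stored = toy_hash("secret");
            std::cout << check_password("SECRET", stored) << '\n'; // 1: caps-lock variant
        }

    So the cost is two or three extra hash computations per login attempt against the one stored hash, not extra database lookups.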

  • "Adding Esperanto circumflexes (supersigno)" option has no effect

    - by Gardel
    I'm used to using the keyboard layout option to type the accented characters in Esperanto. This option is in System Settings > Keyboard Layout Options > "Adding Esperanto circumflexes (supersigno)" > "To the corresponding key in a QWERTY keyboard". It worked great, but since I upgraded to Ubuntu 12.04 this option has no effect. I'm using Ubuntu in French with the keyboard layout French (variant) and GNOME Shell. I tested on another computer with Unity and it has the same issue. Is this a known issue? I did not find anything about it…

  • OO design - polymorphism - how to design for handling streams of different file types

    - by Kache4
    I've little experience with advanced OO practices, and I want to design this properly as an exercise. I'm thinking of implementing the following, and I'm asking if I'm going about this the right way. I have a class PImage that holds the raw data and some information I need for an image file. Its header is currently something like this:

        #include <boost/filesystem.hpp>
        #include <vector>

        namespace fs = boost::filesystem;
        using std::vector; // needed for the unqualified vector below

        class PImage {
        public:
            PImage(const fs::path& path, const unsigned char* buffer, int bufferLen);

            const vector<char> data() const { return data_; }
            const char* rawData() const { return &data_[0]; }
            /*** other assorted accessors ***/

        private:
            fs::path path_;
            int width_;
            int height_;
            int filesize_;
            vector<char> data_;
        };

    I want to fill width_ and height_ by looking through the file's header. The trivial/inelegant solution would be a lot of messy control flow that identifies the type of image file (.gif, .jpg, .png, etc.) and then parses the header accordingly. Instead of using vector<char> data_, I was thinking of having PImage use a class, RawImageStream data_, that inherits from vector<char>. Each type of file I plan to support would then inherit from RawImageStream, e.g. RawGifStream, RawPngStream. Each RawXYZStream would encapsulate the respective header-parsing functions, and PImage would only have to do something like height_ = data_.getHeight();. Am I thinking this through correctly? How would I create the proper RawImageStream subclass for data_ in the PImage ctor? Is this where I could use an object factory? Anything I'm forgetting?
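
    On the factory question: yes, this is a textbook spot for one. Sniff the magic bytes in the buffer and hand back the matching subclass, so PImage never needs per-format control flow. A minimal sketch along the lines proposed above (the stubbed parsers are placeholders, and std::auto_ptr is used only because this predates C++11's unique_ptr):

        #include <memory>
        #include <stdexcept>
        #include <vector>

        // Stubs for the hierarchy proposed in the question.
        struct RawImageStream : std::vector<char> {
            RawImageStream(const unsigned char* b, int n)
                : std::vector<char>(b, b + n) {}
            virtual ~RawImageStream() {}
            virtual int getWidth() const = 0;   // each subclass parses its own header
            virtual int getHeight() const = 0;
        };
        struct RawPngStream : RawImageStream {
            RawPngStream(const unsigned char* b, int n) : RawImageStream(b, n) {}
            int getWidth() const  { return 0; } // placeholder: read the IHDR chunk
            int getHeight() const { return 0; }
        };
        struct RawGifStream : RawImageStream {
            RawGifStream(const unsigned char* b, int n) : RawImageStream(b, n) {}
            int getWidth() const  { return 0; } // placeholder: read the logical screen descriptor
            int getHeight() const { return 0; }
        };

        // The factory: dispatch on the file's magic bytes.
        std::auto_ptr<RawImageStream> makeRawImageStream(const unsigned char* b, int n) {
            if (n >= 4 && b[0] == 0x89 && b[1] == 'P' && b[2] == 'N' && b[3] == 'G')
                return std::auto_ptr<RawImageStream>(new RawPngStream(b, n));
            if (n >= 3 && b[0] == 'G' && b[1] == 'I' && b[2] == 'F')
                return std::auto_ptr<RawImageStream>(new RawGifStream(b, n));
            throw std::runtime_error("unrecognized image format");
        }

    One caveat the plan glosses over: deriving from std::vector is usually discouraged; holding the vector as a member of RawImageStream avoids inheriting a base with no virtual destructor.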

  • Lucene: Question of score calculation with PrefixQuery

    - by Keven
    Hi, I've met a problem with the score calculation for a PrefixQuery. To change the score of each document, when adding documents to the index I used setBoost to change the boost of each document. Then I created a PrefixQuery to search, but the results are not ordered according to the boost. It seems setBoost simply doesn't work for a PrefixQuery. Please check my code below:

        @Test
        public void testNormsDocBoost() throws Exception {
            Directory dir = new RAMDirectory();
            IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_CURRENT),
                    true, IndexWriter.MaxFieldLength.LIMITED);

            Document doc1 = new Document();
            Field f1 = new Field("contents", "common1", Field.Store.YES, Field.Index.ANALYZED);
            doc1.add(f1);
            doc1.setBoost(100);
            writer.addDocument(doc1);

            Document doc2 = new Document();
            Field f2 = new Field("contents", "common2", Field.Store.YES, Field.Index.ANALYZED);
            doc2.add(f2);
            doc2.setBoost(200);
            writer.addDocument(doc2);

            Document doc3 = new Document();
            Field f3 = new Field("contents", "common3", Field.Store.YES, Field.Index.ANALYZED);
            doc3.add(f3);
            doc3.setBoost(300);
            writer.addDocument(doc3);

            writer.close();

            IndexReader reader = IndexReader.open(dir);
            IndexSearcher searcher = new IndexSearcher(reader);
            TopDocs docs = searcher.search(new PrefixQuery(new Term("contents", "common")), 10);
            for (ScoreDoc doc : docs.scoreDocs) {
                System.out.println("docid : " + doc.doc + " score : " + doc.score + " "
                        + searcher.doc(doc.doc).get("contents"));
            }
        }

    The output is:

        docid : 0 score : 1.0 common1
        docid : 1 score : 1.0 common2
        docid : 2 score : 1.0 common3

  • Scalable / Parallel Large Graph Analysis Library?

    - by Joel Hoff
    I am looking for good recommendations for scalable and/or parallel large graph analysis libraries in various languages. The problems I am working on involve significant computational analysis of graphs/networks with 1-100 million nodes and 10 million to 1+ billion edges. The largest SMP computer I am using has 256 GB memory, but I also have access to an HPC cluster with 1000 cores, 2 TB aggregate memory, and MPI for communication. I am primarily looking for scalable, high-performance graph libraries that could be used in either single or multi-threaded scenarios, but parallel analysis libraries based on MPI or a similar protocol for communication and/or distributed memory are also of interest for high-end problems. Target programming languages include C++, C, Java, and Python. My research to date has come up with the following possible solutions for these languages:
        - C++ - The most viable solutions appear to be the Boost Graph Library and the Parallel Boost Graph Library. I have looked briefly at MTGL, but it is currently slanted more toward massively multithreaded hardware architectures like the Cray XMT.
        - C - igraph and SNAP (Small-world Network Analysis and Partitioning); the latter uses OpenMP for parallelism on SMP systems.
        - Java - I have found no parallel libraries here yet, but JGraphT and perhaps JUNG are leading contenders in the non-parallel space.
        - Python - igraph and NetworkX look like the most solid options, though neither is parallel. There used to be Python bindings for the BGL, but these are now unsupported; the last release, in 2005, looks stale now.
    Other topics here on SO that I've looked at have discussed graph libraries in C++, Java, Python, and other languages. However, none of those topics focused significantly on scalability. Does anyone have recommendations based on experience with any of the above or other library packages when applied to large graph analysis problems? Performance, scalability, and code stability/maturity are my primary concerns. Most of the specialized algorithms will be developed by my team, with the exception of any graph-oriented parallel communication or distributed-memory frameworks (where the graph state is distributed across a cluster).

  • Coupling between controller and view

    - by cheez
    The litmus test for me for a good MVC implementation is how easy it is to swap out the view. I've always done this really badly due to being lazy, but now I want to do it right. This is in C++, but it should apply equally to non-desktop applications, if I am to believe the hype. Here is one example: the application controller has to check some URL for existence in the background. It may connect to the "URL available" event (using Boost.Signals) as follows:

        BackgroundUrlCheckerThread(Controller & controller)
        {
            // ...
            signalUrlAvailable.connect(
                boost::bind(&Controller::urlAvailable, &controller, _1));
        }

    So what does Controller::urlAvailable look like? Here is one possibility:

        void Controller::urlAvailable(Url url)
        {
            if(!view->askUser("URL available, wanna download it?"))
                return;
            else
                ; // Download the url in a new thread, repeat
        }

    This, to me, seems like a gross coupling of the view and the controller. Such a coupling makes it impossible to implement the view when using the web (coroutines aside). Another possibility:

        void Controller::urlAvailable(Url url)
        {
            urlAvailableSignal(url); // Now, any view interested can do what it wants
        }

    I'm partial to the latter, but it appears that if I do this there will be:
        - 40 billion such signals
        - An application controller that can get huge for a non-trivial application
        - A very real possibility that a given view accidentally ignores some signals (APIs can inform you at link time, but signals/slots are checked at run time)
    Thanks in advance.

  • Manipulate score/rank on query results from NHibernate.Search

    - by Fernando Figueiredo
    I've been working with NHibernate, NHibernate.Search and Lucene.Net to improve the search engine used on the website I develop. Basically, I use it to search the contents of corporations' specification documents. These are not to be confused with Lucene's notion of documents: in my case, a specification document (which I'll hereafter call a "specdoc") can contain many pages, and the contents of those pages are what actually gets indexed (thus, the pages themselves are what falls under Lucene's concept of documents). So the pages belong to a specdoc, which in turn belongs to a corporation (so a corporation can have many specdocs). I'm using the NHibernate.Search "IndexEmbedded" and "ContainedIn" attributes to associate the pages with their specdoc and the specdocs with their corporations, so I can query for terms in specdoc pages and have Lucene/NH.Search return either the pages themselves, the specdocs, or the corporations that match the query on the pages. I can query this way and get ranked results, thus presenting results (that is, corporations, specdocs or pages) by relevance, which is great. But now I need something more. Specifically, in the case where I query terms and have NH.Search return the corporations that match, I need to manually/artificially tune the score of some of the results, because there are corporations that I want to show up at the top of the result set - think "sponsored results". I'm thinking of doing it in my application, maybe creating an entity/database table that contains an association to the corporation entity and a score-boost value. But I don't know how to feed this to Lucene and have it boost the results accordingly at search time. Initially I thought about deriving a Similarity class to do this, but it doesn't look like Similarity can be used to modify result sets at search time. As per this page, it looks like what I need is to mess around with weight or scoring. But the docs are a little superficial, in that there are no examples of how to implement custom scoring, let alone integrate it with NH.Search. So, does anyone know how to do this, or can point me to some documentation or a working example of something similar? Thanks!

  • Redirect C++ std::clog to syslog on Unix

    - by kriss
    I work on Unix on a C++ program that sends messages to syslog. The current code uses the syslog system call, which works like printf. Now I would prefer to use a stream for that purpose instead, typically the built-in std::clog. But clog merely redirects output to stderr, not to syslog, and that is useless for me as I also use stderr and stdout for other purposes. I've seen in another answer that it's quite easy to redirect a stream to a file using rdbuf(), but I see no way to apply that method to syslog, as openlog does not return a file handle I could tie a stream to. Is there another method to do that? (It looks like pretty basic Unix programming.) Edit: I'm looking for a solution that does not use an external library. What @Chris is proposing could be a good start, but is still a bit vague to become the accepted answer. Edit: using Boost.IOStreams is OK, as my project already uses Boost anyway. Linking with an external library is possible, but is also a concern as it's GPL code. Dependencies are also a burden, as they may conflict with other components, not be available on my Linux distribution, introduce third-party bugs, etc. If that is the only solution, I may consider avoiding streams completely... (a pity).
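
    A minimal sketch that needs no external library, assuming POSIX syslog(3): derive a streambuf that buffers characters and ships each completed line to syslog, then swap it into std::clog with rdbuf():

        #include <iostream>
        #include <streambuf>
        #include <string>
        #include <syslog.h>

        class SyslogStreambuf : public std::streambuf {
        public:
            explicit SyslogStreambuf(int priority) : priority_(priority) {}
            ~SyslogStreambuf() { send(); }
        protected:
            virtual int overflow(int c) {
                if (c == '\n') send();                        // one syslog entry per line
                else if (c != EOF) buffer_ += static_cast<char>(c);
                return c;
            }
            virtual int sync() { send(); return 0; }
        private:
            void send() {
                if (!buffer_.empty()) {
                    syslog(priority_, "%s", buffer_.c_str());
                    buffer_.clear();
                }
            }
            int priority_;
            std::string buffer_;
        };

        int main() {
            openlog("myapp", LOG_PID, LOG_USER);
            SyslogStreambuf buf(LOG_INFO);
            std::streambuf* old = std::clog.rdbuf(&buf);  // redirect clog to syslog
            std::clog << "hello via syslog" << std::endl;
            std::clog.rdbuf(old);                         // restore before buf is destroyed
            closelog();
        }

    Boost.IOStreams could shrink the sink to a few lines if that dependency is acceptable, but the above stays within the standard library plus syslog.h.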

  • How to install PySide v0.3.1 on Mac OS X?

    - by ivo
    I'm trying to install PySide v0.3.1 on Mac OS X, for Qt development in Python. As a prerequisite, I have installed CMake and the Qt SDK. I have gone through the documentation and come up with the following installation script:

        export PYSIDE_BASE_DIR="<my_dir>"
        export APIEXTRACTOR_DIR="$PYSIDE_BASE_DIR/apiextractor-0.5.1"
        export GENERATORRUNNER_DIR="$PYSIDE_BASE_DIR/generatorrunner-0.4.2"
        export SHIBOKEN_DIR="$PYSIDE_BASE_DIR/shiboken-0.3.1"
        export PYSIDE_DIR="$PYSIDE_BASE_DIR/pyside-qt4.6+0.3.1"
        export PYSIDE_TOOLS_DIR="$PYSIDE_BASE_DIR/pyside-tools-0.1.3"

        pushd .
        cd $APIEXTRACTOR_DIR
        cmake .
        cd $GENERATORRUNNER_DIR
        cmake -DApiExtractor_DIR=$APIEXTRACTOR_DIR .
        cd $SHIBOKEN_DIR
        cmake -DApiExtractor_DIR=$APIEXTRACTOR_DIR -DGeneratorRunner_DIR=$GENERATORRUNNER_DIR .
        cd $PYSIDE_DIR
        cmake -DShiboken_DIR=$SHIBOKEN_DIR/libshiboken -DGENERATOR=$GENERATORRUNNER_DIR .
        cd $PYSIDE_TOOLS_DIR
        cmake .
        popd

    Now, I don't know if this installation script is OK, but apparently everything works fine. Each component (apiextractor, generatorrunner, shiboken, pyside-qt and pyside-tools) gets compiled into its own directory. The problem is that I don't quite understand how PySide gets into the system's Python environment. In fact, when I start a Python shell, I cannot import PySide:

        >>> import PySide
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        ImportError: No module named PySide

    Note: I am aware of the "Installing PySide - OSX" question, but that question is not relevant anymore, because it is about a specific dependency on the Boost libraries, and with version 0.3.0 PySide moved from a Boost-based source code to a CPython one.

  • C# style Action<T>, Func<T,T>, etc in C++0x

    - by Austin Hyde
    C# has generic function types such as Action<T> or Func<T,U,V,...>. With the advent of C++0x and the ability to have template typedefs and variadic template parameters, it seems this should be possible. The obvious solution to me would be this:

        template <typename T>
        using Action<T> = void (*)(T);

    however, this does not accommodate functors or C++0x lambdas and, beyond that, does not even compile, failing with the error "expected unqualified-id before 'using'". My next attempt was to perhaps use boost::function:

        template <typename T>
        using Action<T> = boost::function<void (T)>;

    This doesn't compile either, for the same reason. My only other idea would be STL-style template arguments:

        template <typename T, typename Action>
        void foo(T value, Action f) {
            f(value);
        }

    But this doesn't provide a strongly typed solution, and is only relevant inside the templated function. Now, I will be the first to admit that I am not the C++ wiz I prefer to think I am, so it's very possible there is an obvious solution I'm not seeing. Is it possible to have C#-style generic function types in C++?
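
    For what it's worth, the attempted syntax is very close: an alias template omits the <T> after the new name, and std::function (or boost::function) covers plain functions, functors, and lambdas alike. A sketch, assuming a compiler with C++11 alias templates (GCC 4.7+/Clang 3.0+; compilers of the question's era, VS2010 included, reject it):

        #include <functional>
        #include <iostream>

        template <typename T>
        using Action = std::function<void(T)>;     // note: "Action", not "Action<T>"

        template <typename T1, typename T2, typename R>
        using Func = std::function<R(T1, T2)>;     // C#-style: result type last

        int main() {
            Action<int> print = [](int x) { std::cout << x << '\n'; };
            Func<int, int, int> add = [](int a, int b) { return a + b; };
            print(add(2, 3)); // prints 5
        }

    The same shape works for a raw function-pointer target (template <typename T> using ActionPtr = void (*)(T);), but std::function is the closer match to C#'s delegate types.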

  • SFINAE failing with enum template parameter

    - by zeroes00
    Can someone explain the following behaviour? (I'm using Visual Studio 2010.) Header:

        #pragma once
        #include <boost/utility/enable_if.hpp>
        using boost::enable_if_c;

        enum WeekDay {MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY};

        template<WeekDay DAY>
        typename enable_if_c< DAY==SUNDAY, bool >::type goToWork() { return false; }

        template<WeekDay DAY>
        typename enable_if_c< DAY!=SUNDAY, bool >::type goToWork() { return true; }

    Source:

        bool b = goToWork<MONDAY>();

    Compiling this gives

        error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY!=6,bool>::type goToWork(void)'
        error C2770: invalid explicit template argument(s) for 'enable_if_c<DAY==6,bool>::type goToWork(void)'

    But if I change the function template parameter from the enum type WeekDay to int, it compiles fine:

        template<int DAY>
        typename enable_if_c< DAY==SUNDAY, bool >::type goToWork() { return false; }

        template<int DAY>
        typename enable_if_c< DAY!=SUNDAY, bool >::type goToWork() { return true; }

    Also, normal function template specialization works fine, no surprises there:

        template<WeekDay DAY> bool goToWork() { return true; }
        template<> bool goToWork<SUNDAY>() { return false; }

    To make things even weirder, if I change the source file to use any other WeekDay than MONDAY or TUESDAY, i.e.

        bool b = goToWork<THURSDAY>();

    the error changes to this:

        error C2440: 'specialization' : cannot convert from 'int' to 'const WeekDay'
        Conversion to enumeration type requires an explicit cast (static_cast, C-style cast or function-style cast)
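
    Building only on the observation above that the int version compiles, one workaround is to keep the enum-facing template and route the SFINAE through an int parameter internally (a sketch against the same Boost utility):

        #include <boost/utility/enable_if.hpp>
        #include <iostream>
        using boost::enable_if_c;

        enum WeekDay {MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY};

        // int-parameter overloads, which the compiler resolves without complaint.
        template<int DAY>
        typename enable_if_c< DAY==SUNDAY, bool >::type goToWorkImpl() { return false; }

        template<int DAY>
        typename enable_if_c< DAY!=SUNDAY, bool >::type goToWorkImpl() { return true; }

        // The public face keeps the WeekDay interface and just forwards.
        template<WeekDay DAY>
        bool goToWork() { return goToWorkImpl<static_cast<int>(DAY)>(); }

        int main() {
            std::cout << goToWork<MONDAY>() << ' ' << goToWork<SUNDAY>() << '\n'; // 1 0
        }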

  • Need help tuning an SQL query

    - by Viper
    Hello, I need some help speeding up this SQL statement. The execution time is around 125 ms. During the runtime of my program this SQL (more precisely: similarly structured SQLs for different tables) will be called 300,000 times. The average row count in the tables is around 10,000,000 rows, and new rows (updates/inserts) are added with a timestamp each day. The data of interest for this particular export program lies in the last 1-3 days; maybe that is helpful for creating an index. The data I need is the currently valid row for a given id and the preceding row, to get the updates (if one exists). We use an Oracle 11g database and the .NET Framework 3.5. The SQL statement to tune:

        select ID_SOMETHING,     -- Number(12)
               ID_CONTRIBUTOR,   -- Char(4 Byte)
               DATE_VALID_FROM,  -- DATE
               DATE_VALID_TO     -- DATE
          from TBL_SOMETHING XID
         where ID_SOMETHING = :ID_INSTRUMENT
           and ID_CONTRIBUTOR = :ID_CONTRIBUTOR
           and DATE_VALID_FROM <= :EXPORT_DATE
           and DATE_VALID_TO >= :EXPORT_DATE
         order by DATE_VALID_FROM asc;

    Here I uploaded the current explain plan for this query. I'm not a database expert, so I don't know which index type would fit this requirement best. I have seen that there are many different possible index types which could be applied. Maybe Oracle optimizer hints are helpful, too. Does anyone have a good idea for tuning this SQL, or can point me in the right direction?

  • Why does this object wonk out & get deleted?

    - by brainydexter
    Stepping through the debugger, the BBox object is okay at the entry of the function, but as soon as it enters the function, the vfptr member points to 0xcccccccc. I don't get it. What is causing this? Why is there a virtual table reference in there when the object is not derived from another class? (Though it resides in GameObject, from which my Player class inherits, and I retrieve the BBox from within Player. But why does the BBox have the reference? Shouldn't it be Player that is maintained in that reference?) Some code for reference:

    A. I retrieve the bounding box from Player. This returns a bounding box as expected. I then send its address to GetGridCells:

        const BoundingBox& l_Bbox = l_pPlayer->GetBoundingBox();
        boost::unordered_set<Cell*, CellPHash>& l_GridCells = GetGridCells(&l_Bbox);

    B. This is where a_pBoundingBox goes crazy and gets that garbage value:

        boost::unordered_set<Cell*, CellPHash> CollisionMgr::GetGridCells(const BoundingBox* a_pBoundingBox)
        {

    I think the following code is also pertinent, so I'm sticking it in here anyway:

        const BoundingBox& Player::GetBoundingBox(void)
        {
            return BoundingBox( &GetBoundingSphere() );
        }

        const BoundingSphere& Player::GetBoundingSphere(void)
        {
            BoundingSphere& l_BSphere = m_pGeomMesh->m_BoundingSphere;
            l_BSphere.m_Center = GetPosition();
            return l_BSphere;
        }

        // BoundingBox constructor
        BoundingBox(const BoundingSphere* a_pBoundingSphere);

    Can anyone please give me some idea as to why this is happening? Also, if you want me to post more code, please do let me know. Thanks!
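
    For what it's worth, 0xcccccccc is MSVC's debug-build fill pattern for uninitialized stack memory, and one classic way to end up reading it is exactly what Player::GetBoundingBox above does: it binds the returned reference to a BoundingBox temporary that dies at the end of the return statement. A stand-alone sketch of the bug and the by-value fix (toy types, not the question's real ones):

        struct BoundingSphere {};

        struct BoundingBox {
            explicit BoundingBox(const BoundingSphere*) {}
            virtual ~BoundingBox() {}   // any virtual member gives the object a vfptr
        };

        // Broken: the temporary dies when the function returns, so the caller
        // reads dead stack memory (0xcccccccc in an MSVC debug build).
        const BoundingBox& brokenGet(const BoundingSphere& s) {
            return BoundingBox(&s);
        }

        // Fixed: returning by value copies the object out to the caller.
        BoundingBox fixedGet(const BoundingSphere& s) {
            return BoundingBox(&s);
        }

    The vfptr shows up because BoundingBox itself declares or inherits a virtual function somewhere; it has nothing to do with Player.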

  • How close can I get C# to the performance of C++ for small intensive tasks?

    - by SLC
    I was thinking about how the speed difference between C++ and C# is mostly about C# compiling to byte-code that is taken in by the JIT compiler (is that correct?) and all the checks C# does. I notice that it is possible to turn a lot of these off, both in the compile options and possibly through using the unsafe keyword, as unsafe code is not verifiable by the common language runtime. Therefore, if you were to write a simple console application in both languages that flipped an imaginary coin an infinite number of times and displayed the results to the screen every 10,000 or so iterations, how much speed difference would there be? I chose this because it's a very simple program. I'd like to test this, but I don't know C++ or have the tools to compile it. This is my C# version, though:

        static void Main(string[] args)
        {
            unsafe
            {
                Random rnd = new Random();
                int heads = 0, tails = 0;
                while (true)
                {
                    if (rnd.NextDouble() > 0.5)
                        heads++;
                    else
                        tails++;

                    if ((heads + tails) % 1000000 == 0)
                        Console.WriteLine("Heads: {0} Tails: {1}", heads, tails);
                }
            }
        }

    Is the difference enough to warrant deliberately compiling sections of code "unsafe", or into DLLs that do not have some of the compile options, like overflow checking, enabled? Or does it go the other way, where it would be beneficial to compile sections in C++? I'm sure interop speed comes into play too, then. To avoid subjectivity, I reiterate the specific parts of this question as:
        - Does C# get a performance boost from using unsafe code?
        - Do compile options such as disabling overflow checking boost performance, and do they affect unsafe code?
        - Would the program above be faster in C++, or negligibly different?
        - Is it worth compiling long, intensive number-crunching tasks in a language such as C++, or using /unsafe, for a bonus?
        - Less subjectively, could I complete an intensive operation faster by doing this?

  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    Hi All, I am writing an application using Boost threads, with Boost barriers to synchronize them. I have two machines on which to test the application. Machine 1 is a Core 2 Duo (T8300) machine (Windows XP Professional, 4 GB RAM), where I am getting the following performance figures:

        Number of threads: 1, TPS: 21
        Number of threads: 2, TPS: 35 (66% improvement)

    A further increase in the number of threads decreases the TPS, but that is understandable as the machine has only two cores. Machine 2 is a dual quad-core (Xeon X5355) machine (Windows 2003 Server with 4 GB RAM) and has 8 effective cores:

        Number of threads: 1, TPS: 21
        Number of threads: 2, TPS: 27 (28% improvement)
        Number of threads: 4, TPS: 25
        Number of threads: 8, TPS: 24

    As you can see, performance degrades after 2 threads (though the machine has 8 cores). If the program had some bottleneck, then it should have degraded at 2 threads as well. Any ideas? Explanations? Does the OS play some role in performance? It seems like the Core 2 Duo (2.4 GHz) scales better than the Xeon X5355 (2.66 GHz), even though the latter has the higher clock speed. Thank you. -Zoolii
