Search Results

Search found 4105 results on 165 pages for 'intel itanium'.

Page 102/165

  • C precision of double: compiler dependent?

    - by yCalleecharan
    Hi, on my 32-bit machine (with an Intel T7700 Core 2 Duo), I have 15 precision digits for both the double and long double types in C. I compared the parameters LDBL_DIG for long double and DBL_DIG for double, and they are both 15. I got these answers using MSVC 2008. I was wondering whether these results are compiler dependent or whether they just depend on my processor. Thanks a lot...
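
    A quick way to check what a given compiler reports is to print the values directly; a minimal sketch in standard C++, with no compiler-specific assumptions:

        #include <cfloat>
        #include <iostream>
        #include <limits>

        int main() {
            // Decimal digits guaranteed to survive a decimal -> binary -> decimal round trip.
            std::cout << "DBL_DIG  = " << DBL_DIG  << "\n"
                      << "LDBL_DIG = " << LDBL_DIG << "\n";

            // Same information via <limits>, plus the mantissa width, which shows whether
            // long double is a real 80-bit extended type (64-bit mantissa) or just another
            // name for double (53-bit mantissa).
            std::cout << "double:      digits10 = " << std::numeric_limits<double>::digits10
                      << ", mantissa bits = " << std::numeric_limits<double>::digits << "\n"
                      << "long double: digits10 = " << std::numeric_limits<long double>::digits10
                      << ", mantissa bits = " << std::numeric_limits<long double>::digits << "\n";
        }

    The result is compiler dependent rather than processor dependent: MSVC maps long double onto the same 64-bit format as double (both report 15), while g++ on the same x86 hardware uses the 80-bit extended format for long double and reports 18.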

    Read the article

  • Is there any boost-independent version of boost/tr1 shared_ptr

    - by Artyom
    I'm looking for an independent implementation of boost/tr1 shared_ptr, weak_ptr and enable_shared_from_this. I need: a Boost-independent, very small implementation of these features; support for only modern compilers like GCC 4.x, MSVC 2008 and Intel, not things like MSVC 6 or GCC 3.3; and licensing under a non-copyleft, LGPL-compatible license like Boost/MIT/3-clause BSD, so I can include it in my library. Note: it is quite hard to extract shared_ptr from Boost; at the least, BCP gives about 324 files...
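
    One avenue worth checking first (a sketch, not a ruling on the licensing question): the compilers listed all ship a TR1 implementation, so std::tr1::shared_ptr, weak_ptr and enable_shared_from_this may already be available with no Boost dependency at all. Assuming GCC 4.x or MSVC 2008 SP1:

        #include <iostream>
        #if defined(_MSC_VER)
        #  include <memory>        // MSVC 2008 SP1 exposes TR1 through the standard headers
        #else
        #  include <tr1/memory>    // GCC 4.x / libstdc++
        #endif

        namespace tr1 = std::tr1;

        struct Node : tr1::enable_shared_from_this<Node> {
            tr1::shared_ptr<Node> self() { return shared_from_this(); }
        };

        int main() {
            tr1::shared_ptr<Node> a(new Node);
            tr1::weak_ptr<Node> w = a;                // non-owning observer
            tr1::shared_ptr<Node> b = a->self();      // shares ownership with a
            std::cout << "use_count = " << a.use_count() << "\n";                  // 2
            a.reset();
            std::cout << "expired   = " << std::boolalpha << w.expired() << "\n";  // false: b still owns
        }

    This does not cover every corner of Boost's smart-pointer library, but for the three classes listed it avoids pulling in any Boost headers.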

    Read the article

  • CUDA SDK compilation error

    - by ZeroDivide
    I am in the process of setting up a CUDA workstation. Platform specs: Intel Core 2 Duo, Nvidia GTX 280, Fedora 10, GCC version 4.3.2. I have installed the developer driver, toolkit, and the SDK. When I try to compile the SDK example code I get the following errors: make[1]: *** [obj/i386/release/cutil.cpp.o] Error 1 make: *** [lib/libcutil.so] Error 2 I think this means that I am missing a library file but I'm not sure.

    Read the article

  • Advanced search page for wordpress

    - by Mighty Jack
    I have a WordPress site that's related to the laptops niche, so I have multiple categories like Display (10 inch, 13 inch, 15 inch...), Processor (AMD, Intel), HDD (120 GB, 320 GB, 500 GB...), etc. The usual search is not good. I want to create an advanced search page where the user can select from options (drop-downs/checkboxes) in these different categories and the corresponding search results are displayed. Any directions about this (plugins/themes/hacks) would be a great help.

    Read the article

  • "STI" in protected mode causes the CPU to restart

    - by user299668
    Intel x86 platform. My program starts running at the 2 MB absolute address in protected mode and everything seems OK, but when I enable interrupts with "sti", the CPU restarts. Why? Is there any necessary initialization before enabling interrupts? I have set up the IDT pointer, but it seems not to work.

    Read the article

  • Numerical precision of double type in Visual C++ 2008 Express debugger

    - by damik
    I'm using Visual C++ 2008 Express Edition, and when I debug the code double x = 0.2; I see 0.20000000000000001 in the debugging tooltip for x. But typedef numeric_limits< double > double_limit; int a = double_limit::digits10; gives me a = 15. Why are the results in the debugger longer than that? What is this strange precision based on? My CPU is an Intel Core 2 Duo T7100.
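
    The two numbers answer different questions: digits10 (15) is how many decimal digits a double can absorb and reproduce faithfully, while 17 significant digits are needed to print a double so that it reads back as exactly the same bits, which is what the debugger is doing. A small sketch of the difference:

        #include <iomanip>
        #include <iostream>
        #include <limits>

        int main() {
            double x = 0.2;   // 0.2 has no exact binary representation

            // 15 digits: what the type guarantees to preserve from decimal input
            std::cout << std::setprecision(std::numeric_limits<double>::digits10)
                      << x << "\n";                          // prints 0.2

            // 17 digits: enough to expose the exact stored value, like the tooltip
            std::cout << std::setprecision(17) << x << "\n"; // prints 0.20000000000000001
        }

    (C++11 later named the 17 as numeric_limits<double>::max_digits10; VC++ 2008 predates it, which is why the debugger's precision looks unrelated to digits10.)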

    Read the article

  • Setting Java 1.6 as the default on Mac OS X 10.5.8

    - by Eyvind
    How can I set Java 1.6 to be the default for my MacBook Pro Intel Core 2 Duo with OS X 10.5.8? I have installed the latest software update, and dragged the Java SE 6 64-bit choice to the top in the "Java Preferences" application (and even rebooted), but still, on the command line, java -version responds with: java version "1.5.0_24" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_24-b02-357-9M3165) Java HotSpot(TM) Client VM (build 1.5.0_24-149, mixed mode, sharing) Any ideas?

    Read the article

  • How do I get rid of LD_LIBRARY_PATH at run-time?

    - by Kjir
    I am building a C++ application that uses Intel's IPP library. This library is installed by default in /opt and requires you to set LD_LIBRARY_PATH both for compiling and for running your software (if you choose shared-library linking, which I did). I already modified my configure.ac/Makefile.am so that I do not need to set that variable when compiling, but I still can't find the shared library at run-time; how do I do that? I'm compiling with the -Wl, -R/path/to/lib flag using g++.

    Read the article

  • Is there any way to know if your supposedly fully dedicated server is really a virtually resource-sha

    - by siran
    Hi, sometimes I feel my server is not responding as smoothly as I would expect (I have an Intel(R) Xeon(TM) 2.80 GHz quad-core CPU), given that, for example, the 'top' command reports a low load (< 0.5) and the CPUs are almost completely idle... I may also have internet connectivity issues, so I don't really know if it's me or if it's the server itself. Is there any kind of benchmarking script (or something analogous) I could run to see the actual performance of the server?
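
    No standard script is named in the question, but a crude sanity check is easy to improvise: time a fixed, deterministic chunk of CPU work and compare the number against a machine you trust. A minimal, single-threaded sketch (the workload and its size are arbitrary choices):

        #include <chrono>
        #include <cstdio>

        int main() {
            using clock = std::chrono::steady_clock;

            const int N = 500000;          // count primes below N by trial division
            const clock::time_point start = clock::now();

            int primes = 0;
            for (int n = 2; n < N; ++n) {
                bool prime = true;
                for (int d = 2; d * d <= n; ++d) {
                    if (n % d == 0) { prime = false; break; }
                }
                if (prime) ++primes;
            }

            const long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                                     clock::now() - start).count();
            std::printf("%d primes below %d in %lld ms\n", primes, N, ms);
        }

    If the same binary is consistently much slower on the server than on comparable hardware while 'top' still reports idle CPUs, that is at least a hint that the host is oversubscribed rather than that the network is at fault.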

    Read the article

  • How to make the process quicker?

    - by dewacorp.alliances
    We built a system that handles billing analysis: it takes a raw bill from the vendor and processes it into a "common" table, using SQL Server 2005 Integration Services. At the moment the batch has 300,000 rows and it takes about 6 minutes to process. The current spec is: 2 Intel Xeon E5345 CPUs @ 2.33 GHz, 4 GB RAM. Is there any way I can speed up this process?

    Read the article

  • When to call glEnable(GL_FRAMEBUFFER_SRGB)?

    - by Steven Lu
    I have a rendering system where I draw to an FBO with a multisampled renderbuffer, then blit it to another FBO with a texture in order to resolve the samples, and read off that texture to perform post-processing shading while drawing to the backbuffer (FBO index 0). Now I'd like to get some correct sRGB output... The problem is that the behavior of the program is rather inconsistent between OS X and Windows, and it also changes depending on the machine: on Windows with the Intel HD 3000 it will not apply the sRGB nonlinearity, but on my other machine with an Nvidia GTX 670 it does. On the Intel HD 3000 in OS X it also applies it. So this probably means that I'm not setting my GL_FRAMEBUFFER_SRGB enable state at the right points in the program. However, I can't seem to find any tutorials that actually tell me when I ought to enable it; they only ever mention that it's dead easy and comes at no performance cost. I am currently not loading in any textures, so I haven't had a need to deal with linearizing their colors yet.

    To force the program to not simply spit back out the linear color values, what I have tried is simply commenting out my glDisable(GL_FRAMEBUFFER_SRGB) line, which effectively means this setting is enabled for the entire pipeline (and I actually redundantly force it back on every frame). I don't know if this is correct or not. It certainly does apply a nonlinearization to the colors, but I can't tell if it is getting applied twice (which would be bad). It could apply the gamma as I render to my first FBO, or it could do it when I blit the first FBO to the second FBO. Why not?

    I've gone so far as to take screenshots of my final frame and compare raw pixel color values to the colors I set them to in the program: I set the input color to RGB(1,2,3) and the output is RGB(13,22,28). That seems like quite a lot of color compression at the low end and leads me to question whether the gamma is getting applied multiple times. I have just now gone through the sRGB equation and I can verify that the conversion seems to be applied only once, as linear 1/255, 2/255, and 3/255 do indeed map to sRGB 13/255, 22/255, and 28/255 using the equation 1.055*C^(1/2.4) - 0.055. Given that the expansion is so large for these low color values, it really should be obvious if the sRGB color transform were getting applied more than once.

    So, I still haven't determined what the right thing to do is. Does glEnable(GL_FRAMEBUFFER_SRGB) only apply to the final framebuffer values, in which case I can just set this during my GL init routine and forget about it hereafter?
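
    For reference, the single-application check described above is easy to reproduce numerically; a minimal sketch of the sRGB encode step (the piecewise-linear segment below 0.0031308 is included for completeness):

        #include <cmath>
        #include <cstdio>

        // sRGB encode: linear [0,1] -> nonlinear [0,1]
        static double linear_to_srgb(double c) {
            return (c <= 0.0031308) ? 12.92 * c
                                    : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
        }

        int main() {
            // Linear byte values 1, 2, 3, encoded once.
            for (int i = 1; i <= 3; ++i) {
                double linear = i / 255.0;
                double srgb   = linear_to_srgb(linear);
                std::printf("linear %d/255 -> sRGB %.0f/255\n", i, srgb * 255.0);
            }
        }

    The printed values land on 13, 22, and 28 for inputs 1, 2, and 3, matching the screenshot, so the numbers in the question are consistent with the transform being applied exactly once.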

    Read the article

  • WPF: Improving Performance for Running on Older PCs

    - by Phil Sandler
    So, I'm building a WPF app, and I did a test deployment today and found that it performed pretty poorly. I was surprised, as we are really not doing much in the way of visual effects or animations. I deployed on two machines: the fastest and the slowest that will need to run the application (the slowest PC has an Intel Celeron 1.80 GHz with 2 GB RAM). The application ran pretty well on the faster machine but was choppy on the slower machine. And when I say "choppy", I mean the cursor jumped even just passing it over any open window of the app that had focus.

    I opened the Task Manager Performance window and could see that the CPU usage jumped whenever the app had focus and the cursor was moving over it. If I gave focus to another application (e.g. Excel), the CPU usage went back down after a second. This happened on both machines, but the choppiness was only noticeable on the slower machine. I had very limited time to tinker on the deployment machines, so I didn't do a lot of detailed testing. The app runs fine on my development machine, but I also see the CPU spiking up to 10% there, just running the cursor over the window.

    I downloaded the WPF performance tool from MS and have been tinkering with it (on my dev machine). The docs say this about the "Frame Rate" metric in the Perforator tool: "For applications without animation, this value should be near 0." The app is not doing any heavy animation, but the frame rate stays near 50 when the cursor is over any window. The screens I tested on have column headers in a grid that "highlight" and buttons that change color and appearance when scrolled over, but even moving the mouse over blank areas of the windows causes the same frame rate and CPU usage, so it doesn't seem to be related to these minor animations. (Also, I am unable to figure out how to get anything but the two default tools, Perforator and Visual Profiler, installed into the WPF performance tool. That is probably a separate question.) I also have Redgate's profiling tool, but I'm not sure if that can shed any light on rendering performance.

    So, I realize this is not an easy thing to troubleshoot without specifics or sample code (which I can't post). My questions are: What are some general things to look for (or avoid) in the code to improve performance? What steps can I take using the WPF performance tool to narrow down the problem? Is the PC spec listed above (Intel Celeron 1.80 GHz with 2 GB RAM) too slow to be running even vanilla WPF applications?

    Read the article

  • How do I preserve installed applications when migrating Ubuntu to another platform?

    - by michaeljoseph
    I'm looking at maybe moving from an older AMD64 machine to a new Intel dual-core which is 32-bit. Installation isn't a problem, but can I transfer all the installed apps? I haven't been able to find anything so far on Google except where the migration is to a similar platform and filesystem. I won't change the filesystem, but the platform will be different. Is there something along the lines of the "World" file in Gentoo?

    Read the article

  • What is the point of padding?

    - by ktm5124
    In particular, I'm reading into the Mach-O binary file format for 32-bit Intel on OS X. After the FAT header there is a whole bunch of padding before the offset of the first archive. What is the point of all this padding? To be more specific, there is upwards of 4000 bytes of padding between the FAT header and the first archive (in particular, the mach_header). Why include all these extra bytes?! Is OS X fond of adding 4 KB to all their universal binaries?
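
    The reason for the padding is recorded in the headers themselves: each fat_arch entry carries an align field (a power of two, typically 2^12 = 4096), and each architecture slice is pushed out to the next such boundary so it can be mapped straight from the file. A small sketch that dumps these fields; the struct layouts mirror what <mach-o/fat.h> documents but are declared locally here, and all on-disk fields are big-endian:

        #include <cstdint>
        #include <cstdio>

        struct fat_header { uint32_t magic, nfat_arch; };
        struct fat_arch   { uint32_t cputype, cpusubtype, offset, size, align; };

        static uint32_t be32(uint32_t v) {   // big-endian -> host (assumes a little-endian host)
            return (v >> 24) | ((v >> 8) & 0xff00u) | ((v << 8) & 0xff0000u) | (v << 24);
        }

        int main(int argc, char** argv) {
            if (argc < 2) { std::fprintf(stderr, "usage: %s <universal-binary>\n", argv[0]); return 1; }
            std::FILE* f = std::fopen(argv[1], "rb");
            if (!f) { std::perror("fopen"); return 1; }

            fat_header h;
            if (std::fread(&h, sizeof h, 1, f) != 1 || be32(h.magic) != 0xcafebabeu) {
                std::fprintf(stderr, "not a fat binary\n");
                return 1;
            }

            // The gap between this header block and the first slice's offset is the padding
            // needed to reach that slice's 2^align boundary.
            for (uint32_t i = 0, n = be32(h.nfat_arch); i < n; ++i) {
                fat_arch a;
                if (std::fread(&a, sizeof a, 1, f) != 1) break;
                std::printf("slice %u: cputype=%u offset=%u size=%u align=2^%u\n",
                            i, be32(a.cputype), be32(a.offset), be32(a.size), be32(a.align));
            }
            std::fclose(f);
        }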

    Read the article

  • How to develop an iPhone application

    - by Ankur
    I want to develop an ebook application for iPhone. I'm new to iPhone development, so I don't have much idea about how to proceed. What I know is that I need the following: the iPhone SDK, an Intel Mac running Mac OS X, and Xcode (?). Please indicate whether that is correct and how I can proceed to build an ebook application. Thanks.

    Read the article

  • How to tell what optimizations bjam is using to build boost

    - by Steve
    I'm building the Boost libraries with bjam for both the Intel compiler and VS2008, and I can't tell what optimizations are being passed to the compiler by bjam. For one of the compilers, gcc, I can see some optimizations in one of the bjam files, but I can't find the optimization flags for the compilers I care about. So, my questions are: does anyone know where the default optimization flags are located? If they're declared within bjam, does anyone know how I can override them?

    Read the article

  • C++ performance, for versus while

    - by aaa
    Hello. In general (or from your experience), is there a difference in performance between for and while loops? What if they are doubly/triply nested? Is vectorization (SSE) affected by the loop variant in g++ or the Intel compiler? Thank you
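
    As a concrete point of comparison, the two functions below express the same iteration with the two keywords; with optimization enabled, g++ and icc are expected to lower them to the same control flow, which is easy to confirm by inspecting the assembly (for example with g++ -O2 -S):

        #include <cstddef>

        // Identical iteration written two ways; an optimizing compiler normally emits the
        // same loop body for both, so vectorization decisions depend on the loop's data
        // dependencies and compiler flags, not on which keyword was used.
        double sum_for(const double* a, std::size_t n) {
            double s = 0.0;
            for (std::size_t i = 0; i < n; ++i)
                s += a[i];
            return s;
        }

        double sum_while(const double* a, std::size_t n) {
            double s = 0.0;
            std::size_t i = 0;
            while (i < n) {
                s += a[i];
                ++i;
            }
            return s;
        }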

    Read the article

  • Embedded CouchDB

    - by Chang
    CouchDB is great, and I like its replication functionality, but it's a bit large and slow when used in a desktop application. As I tested on an Intel Core Duo CPU: 12 seconds to load 10,000 docs; 10 seconds to insert 10,000 docs, but it needs 20 seconds to update the view, so the total is 30 seconds. Is there any NoSQL implementation which has the same replication functionality but is very small and quite fast (about 1 second to load 10,000 docs)? Thanks

    Read the article

  • A problem with my Windows boot menu

    - by user210332
    Hi, I once set a supervisor password on my Windows boot screen, but now I have forgotten that password. I am unable to access the boot menu since it asks for the password, and all menu options are disabled. Is it possible to remove that password, and can I get the boot menu's default settings back? Processor: Intel Pentium dual-core (2); OS: XP. Thanks in advance.

    Read the article

  • C99 variable length automatic array performance

    - by aaa
    Is there significant CPU/memory overhead associated with using automatic arrays (int function(int N) { double array[N]; ... }) with g++/Intel on a 64-bit x86 Linux platform, compared to allocating the array beforehand (assuming the function is called multiple times), compared to using new, or compared to using malloc? The range of N may be from 1 KB to 16 KB roughly; stack overrun is not a problem.
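
    A rough way to put numbers on it is to time the three strategies around identical work. A minimal sketch under stated assumptions: the buffer size and iteration count are arbitrary, the runtime-sized automatic array relies on the GCC/ICC VLA extension when compiled as C++, and __attribute__((noinline)) (gcc/icc-specific) keeps the compiler from seeing through the helper and eliding the buffers:

        #include <chrono>
        #include <cstdio>
        #include <cstdlib>

        // noinline so the compiler cannot see through the call and remove the buffers.
        __attribute__((noinline)) static double touch(double* a, int n) {
            double s = 0.0;
            for (int i = 0; i < n; ++i) { a[i] = i; s += a[i]; }
            return s;
        }

        static long long ms_since(std::chrono::steady_clock::time_point t0) {
            return std::chrono::duration_cast<std::chrono::milliseconds>(
                       std::chrono::steady_clock::now() - t0).count();
        }

        int main(int argc, char** argv) {
            const int n = (argc > 1) ? std::atoi(argv[1]) : 2048;   // 2048 doubles = 16 KB
            const int iters = 200000;
            double sink = 0.0;

            auto t = std::chrono::steady_clock::now();
            for (int i = 0; i < iters; ++i) {
                double a[n];                              // automatic array (VLA extension in C++)
                sink += touch(a, n);
            }
            std::printf("vla:    %lld ms\n", ms_since(t));

            t = std::chrono::steady_clock::now();
            for (int i = 0; i < iters; ++i) {
                double* a = new double[n];
                sink += touch(a, n);
                delete[] a;
            }
            std::printf("new[]:  %lld ms\n", ms_since(t));

            t = std::chrono::steady_clock::now();
            for (int i = 0; i < iters; ++i) {
                double* a = static_cast<double*>(std::malloc(n * sizeof(double)));
                sink += touch(a, n);
                std::free(a);
            }
            std::printf("malloc: %lld ms\n", ms_since(t));

            std::printf("(checksum %.0f)\n", sink);       // keep the work observable
        }

    The usual outcome is that the automatic array is cheapest, since it is only a stack-pointer adjustment per call, while new and malloc pay for the heap on every iteration; hoisting the allocation out of the loop removes that difference.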

    Read the article

  • inspect C++ template instantiation

    - by aaa
    Hello. Is there some utility which would allow me to inspect template instantiation? My compiler is g++ or Intel. Specific points I would like: step-by-step instantiation; an instantiation backtrace (I can hack this by crashing the compiler; is there a better method?); inspection of template parameters. Thanks
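
    On the "crash the compiler" trick for seeing template parameters: a slightly tidier variant is to reference a class template that is declared but never defined, so the compiler stops with a diagnostic that spells out the parameter types and the instantiation backtrace, without an actual crash. A minimal sketch (the names TypeDisplay and f are made up for illustration; the program is intended not to compile):

        #include <vector>

        // Declared but never defined: instantiating it is an error, and the error
        // message prints T along with the chain of instantiations that led there.
        template <class T> struct TypeDisplay;

        template <class T>
        void f(const T& value) {
            TypeDisplay<T> show_me;   // remove this line once the type has been read off
            (void)value;
        }

        int main() {
            std::vector<int> v(3, 42);
            f(v.begin());             // the diagnostic reveals the iterator's real type
        }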

    Read the article
