Search Results

Search found 4065 results on 163 pages for 'intel mkl'.


  • How to make the process quicker?

    - by dewacorp.alliances
    We've built a system that handles billing analysis: it takes a raw bill from a vendor and processes it into a "common" table, using SQL Server 2005 Integration Services. At the moment the batch has 300,000 rows and takes about 6 minutes to process. The current spec is: 2 Intel Xeon E5345 CPUs @ 2.33 GHz, 4 GB RAM. Is there any way I can speed up this process?

    Read the article

  • When to call glEnable(GL_FRAMEBUFFER_SRGB)?

    - by Steven Lu
    I have a rendering system where I draw to an FBO with a multisampled renderbuffer, then blit it to a second FBO with a texture attachment in order to resolve the samples, so that I can read from the texture to do post-processing shading while drawing to the backbuffer (FBO index 0). Now I'd like to get correct sRGB output. The problem is that the program behaves inconsistently between OS X and Windows, and it also changes depending on the machine: on Windows with an Intel HD 3000 it does not apply the sRGB nonlinearity, but on my other machine with an Nvidia GTX 670 it does. On the Intel HD 3000 under OS X it also applies it. So this probably means I'm not setting the GL_FRAMEBUFFER_SRGB enable state at the right points in the program. However, I can't find any tutorials that actually say when to enable it; they only ever mention that it's dead easy and comes at no performance cost. I'm not loading any textures yet, so I haven't had to deal with linearizing their colors.

    To force the program not to simply spit the linear color values back out, I tried commenting out my glDisable(GL_FRAMEBUFFER_SRGB) line, which effectively leaves the setting enabled for the entire pipeline (and I redundantly force it back on every frame). I don't know whether this is correct. It certainly applies a nonlinearity to the colors, but I can't tell whether it is applied twice (which would be bad): it could apply the gamma as I render to the first FBO, and again when I blit the first FBO to the second. Why not?

    I've gone so far as to take screenshots of my final frame and compare raw pixel values to the colors I set in the program: I set the input color to RGB(1,2,3) and the output is RGB(13,22,28). That seems like a lot of expansion at the low end, which led me to question whether the gamma was being applied multiple times. But working through the sRGB equation, I can verify that the conversion seems to be applied only once: linear 1/255, 2/255, and 3/255 do indeed map to sRGB 13/255, 22/255, and 28/255 using the equation 1.055*C^(1/2.4)-0.055. Given how large the expansion is for these low color values, it really should be obvious if the sRGB transform were applied more than once.

    So I still haven't determined the right thing to do: does glEnable(GL_FRAMEBUFFER_SRGB) only apply to the final framebuffer values, in which case I can just set it during my GL init routine and forget about it hereafter?
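
    For concreteness, here's that check as a minimal standalone sketch (my own test file, not from any GL code), using the standard sRGB encode with its linear segment below 0.0031308:

        // srgb_check.cpp -- verify that linear 1, 2, 3 (out of 255) encode to sRGB 13, 22, 28.
        #include <cmath>
        #include <cstdio>

        // Standard sRGB encode: linear segment for small values, power curve above.
        static double srgb_encode(double c) {
            return (c <= 0.0031308) ? 12.92 * c
                                    : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
        }

        int main() {
            for (int v : {1, 2, 3}) {
                double lin = v / 255.0;
                std::printf("linear %d/255 -> sRGB %.0f/255\n",
                            v, std::round(srgb_encode(lin) * 255.0));
            }
        }

    This prints 13, 22, and 28, consistent with a single application of the transform; applying it twice to 1/255 would give roughly 63/255 instead of 13/255.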

    Read the article

  • WPF: Improving Performance for Running on Older PCs

    - by Phil Sandler
    So, I'm building a WPF app and did a test deployment today, and found that it performed pretty poorly. I was surprised, as we are really not doing much in the way of visual effects or animations. I deployed on two machines: the fastest and the slowest that will need to run the application (the slowest PC has an Intel Celeron 1.80 GHz with 2 GB RAM). The application ran pretty well on the faster machine but was choppy on the slower one. And when I say "choppy", I mean the cursor jumped even just passing over any open window of the app that had focus.

    I opened the Task Manager performance window and could see that CPU usage jumped whenever the app had focus and the cursor was moving over it. If I gave focus to another application (e.g., Excel), the CPU usage went back down after a second. This happened on both machines, but the choppiness was only noticeable on the slower one. I had very limited time to tinker on the deployment machines, so I didn't do a lot of detailed testing. The app runs fine on my development machine, but even there I see the CPU spiking up to 10% just from running the cursor over the window.

    I downloaded the WPF performance tool from Microsoft and have been tinkering with it (on my dev machine). The docs say this about the "Frame Rate" metric in the Perforator tool: "For applications without animation, this value should be near 0." The app is not doing any heavy animation, but the frame rate stays near 50 when the cursor is over any window. The screens I tested have column headers in a grid that highlight, and buttons that change color and appearance, when moused over. But even moving the mouse over blank areas of the windows causes the same frame rate and CPU usage, so it doesn't seem to be related to these minor animations. (Also, I am unable to figure out how to get anything but the two default tools, Perforator and Visual Profiler, installed into the WPF performance tool; that is probably a separate question.) I also have Redgate's profiling tool, but I'm not sure whether it can shed any light on rendering performance.

    So, I realize this is not an easy thing to troubleshoot without specifics or sample code (which I can't post). My questions are: What are some general things to look for (or avoid) in the code to improve performance? What steps can I take with the WPF performance tool to narrow down the problem? Is the PC spec listed above (Intel Celeron 1.80 GHz with 2 GB RAM) too slow to run even vanilla WPF applications?

    Read the article

  • What is the point of padding?

    - by ktm5124
    In particular, I'm reading into the Mach-O binary file format for 32-bit Intel on OS X. After the FAT header there is a whole bunch of padding before the offset of the first archive. What is the point of all this padding? To be more specific, there are upwards of 4000 bytes of padding between the FAT header and the first archive (in particular, its mach_header). Why include all these extra bytes?! Is OS X fond of adding 4 KB to all its universal binaries?
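
    For what it's worth, here's the minimal dump I've been using to poke at the header (a sketch; the struct layouts mirror fat_header and fat_arch from <mach-o/fat.h>, and all fat fields are stored big-endian):

        // fatdump.cpp -- list the architecture slices of a fat (universal) Mach-O file.
        #include <cstdint>
        #include <cstdio>

        static uint32_t be32(uint32_t v) {   // byte-swap (assumes a little-endian host such as x86)
            return (v >> 24) | ((v >> 8) & 0xff00u) | ((v & 0xff00u) << 8) | (v << 24);
        }

        struct FatHeader { uint32_t magic, nfat_arch; };                          // cf. struct fat_header
        struct FatArch   { uint32_t cputype, cpusubtype, offset, size, align; };  // cf. struct fat_arch

        int main(int argc, char** argv) {
            if (argc != 2) { std::fprintf(stderr, "usage: %s <fat-binary>\n", argv[0]); return 1; }
            FILE* f = std::fopen(argv[1], "rb");
            if (!f) { std::perror("fopen"); return 1; }

            FatHeader h;
            if (std::fread(&h, sizeof h, 1, f) != 1 || be32(h.magic) != 0xcafebabe) {
                std::fprintf(stderr, "not a fat binary\n");
                return 1;
            }
            for (uint32_t i = 0; i < be32(h.nfat_arch); ++i) {
                FatArch a;
                if (std::fread(&a, sizeof a, 1, f) != 1) return 1;
                // Each slice's offset is a multiple of 2^align.
                std::printf("slice %u: cputype=%u offset=%u size=%u align=2^%u\n",
                            i, be32(a.cputype), be32(a.offset), be32(a.size), be32(a.align));
            }
            std::fclose(f);
        }

    On the binaries I've inspected, align comes out as 2^12 and the first slice's offset as 4096, which would account for roughly 4000 bytes of padding after the header and arch table.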

    Read the article

  • How do I preserve installed applications when migrating Ubuntu to another platform?

    - by michaeljoseph
    I'm looking at possibly moving from an older AMD64 machine to a new Intel dual-core, which is 32-bit. Installation isn't a problem, but can I transfer all the installed apps? I haven't been able to find anything so far on Google except cases where the migration is to a similar platform and filesystem. I won't change the filesystem, but the platform will be different. Is there something along the lines of the "world" file in Gentoo?

    Read the article

  • How to develop an iPhone application

    - by Ankur
    I want to develop an ebook application for the iPhone. I'm new to iPhone development, so I don't have much idea of how to proceed. What I know is that I need the following: the iPhone SDK, an Intel Mac running Mac OS X, and Xcode (?). Please indicate whether that is correct and how I can proceed to build an ebook application. Thanks.

    Read the article

  • How to tell what optimizations bjam is using to build boost

    - by Steve
    I'm building the Boost libraries with bjam for both the Intel compiler and VS2008, and I can't tell what optimizations are being passed to the compiler by bjam. For one of the compilers, gcc, I can see some optimization flags in one of the bjam files, but I can't find the optimization flags for the compilers I care about. So, my questions are: Does anyone know where the default optimization flags are located? And if they're declared within bjam, does anyone know how I can override them?

    Read the article

  • C++ performance, for versus while

    - by aaa
    Hello. In general (or from your experience), is there a difference in performance between for and while loops? What if they are doubly/triply nested? Is vectorization (SSE) affected by the loop variant in g++ or the Intel compilers? Thank you.
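
    For concreteness, the two forms below are what I mean; a for loop is just a while loop with the init, test, and increment gathered into the header, so the optimizer receives the same input either way (a minimal sketch; compare the assembly from g++ -O2 -S for both):

        // loops.cpp -- a for loop and its while equivalent.
        double sum_for(const double* a, int n) {
            double s = 0.0;
            for (int i = 0; i < n; ++i)
                s += a[i];
            return s;
        }

        double sum_while(const double* a, int n) {
            double s = 0.0;
            int i = 0;
            while (i < n) {   // same init, test, and increment, just spelled out
                s += a[i];
                ++i;
            }
            return s;
        }

    Whether SSE vectorization kicks in depends on the loop body and its data dependencies, not on which keyword introduced the loop.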

    Read the article

  • Embedded CouchDB

    - by Chang
    CouchDB is great, and I like its replication functionality, but it's a bit too large and slow when used in a desktop application. As I tested on an Intel Core Duo CPU: 12 seconds to load 10,000 docs, and 10 seconds to insert 10,000 docs but another 20 seconds to update the view, so 30 seconds in total. Is there any NoSQL implementation with the same replication functionality but a very small footprint and good speed (around 1 second to load 10,000 docs)? Thanks.

    Read the article

  • A problem with my Windows boot menu

    - by user210332
    Hi. I once set a supervisor password on my Windows boot screen, but now I've forgotten it, and I'm unable to access the boot menu since it asks for the password; all menu options are disabled. Is it possible to remove that password and get the boot menu's default settings back? Processor: Intel Pentium dual core (2). OS: XP. Thanks in advance.

    Read the article

  • C99 variable length automatic array performance

    - by aaa
    Is there significant CPU/memory overhead associated with using automatic (variable-length) arrays with g++ or the Intel compiler on a 64-bit x86 Linux platform?

        int function(int N) {
            double array[N];
            /* ... use array ... */
            return 0;
        }

    Specifically: the overhead compared to allocating the array beforehand (assuming the function is called multiple times), the overhead compared to using new, and the overhead compared to using malloc. The range of N may be roughly 1 KB to 16 KB; stack overrun is not a problem.
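
    A rough harness for measuring this (a sketch, not a rigorous benchmark; file and function names are my own, and note that VLAs are C99, which g++ accepts in C++ only as an extension):

        // vla_bench.cpp -- crude timing of VLA vs. malloc vs. new for a 16 KB array.
        #include <chrono>
        #include <cstdio>
        #include <cstdlib>

        volatile double sink;   // keeps the work from being optimized away

        static void use(double* a, int n) {
            for (int i = 0; i < n; ++i) a[i] = i;
            sink = a[n - 1];
        }

        template <class F>
        static void time_it(const char* name, F body) {
            auto t0 = std::chrono::steady_clock::now();
            for (int r = 0; r < 100000; ++r) body();
            auto t1 = std::chrono::steady_clock::now();
            long long us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
            std::printf("%-8s %8lld us\n", name, us);
        }

        int main() {
            int n = 2048;   // 2048 doubles = 16 KB; non-const so the array below is a true VLA
            time_it("vla",    [&] { double a[n]; use(a, n); });
            time_it("malloc", [&] { double* a = (double*)std::malloc(n * sizeof(double)); use(a, n); std::free(a); });
            time_it("new",    [&] { double* a = new double[n]; use(a, n); delete[] a; });
        }

    The expectation is that the VLA costs a stack-pointer adjustment per call, while malloc and new pay for heap bookkeeping; measuring beats guessing, though, since allocator caching can narrow the gap.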

    Read the article

  • inspect C++ template instantiation

    - by aaa
    Hello. Is there some utility which would allow me to inspect template instantiation? My compilers are g++ and Intel. Specific points I would like: step-by-step instantiation; an instantiation backtrace (I can hack this by crashing the compiler; is there a better method?); inspection of template parameters. Thanks.
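
    The compiler-crashing hack looks like this (a minimal sketch): referencing an intentionally incomplete type inside the template forces an error at instantiation time, and g++ then prints the whole chain of instantiations with the deduced parameters:

        // trace.cpp -- deliberately ill-formed, to dump the instantiation backtrace.
        template <class T>
        struct Probe;                      // intentionally left incomplete

        template <class T>
        void inner(T x) {
            Probe<T> p;                    // error here prints the instantiation stack
            (void)x; (void)p;
        }

        template <class T>
        void outer(T x) { inner(x); }

        int main() { outer(3.14); }

    g++ reports something like "In instantiation of 'void inner(T) [with T = double]': required from 'void outer(T) [with T = double]'", which gives both the backtrace and the template parameters at each level.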

    Read the article

  • WPF on an iX104 Tablet PC - efficient enough?

    - by Casimodo72
    Hi, I'm considering using WPF for an application running on an iX104 (unfortunately I don't have one of those yet). Do you think this combination would be efficient enough? Or is WPF just too slow for such a machine, so it would be better to stick with WinForms? Relevant specs of the iX104: Intel Pentium U2500 Core Duo, 2x 1.2 GHz; 1 GB RAM (optional 2 GB); 10.4" TFT, XGA 1024x768. Regards, Kasimier

    Read the article

  • How to build and run core ICS apps like Settings and Camera on Windows

    - by user1495186
    I have downloaded the ICS 4.0.3 source code and want to modify the native Settings app. What I have to do is: 1) add custom modifications to Settings; 2) recompile the native Settings app with the added modifications; 3) build the source code; 4) generate a customized build that works on all Android devices. How can I achieve this? Every suggestion is appreciated. Thanks in advance. FYI: I'm using Win7, 4 GB RAM, an Intel i5 processor, and have Cygwin and git installed.

    Read the article

  • Is there a place where I can see the popular opinion of developers?

    - by User1
    I really want to know popular opinion on controversial programming topics. Questions like: Why do some people prefer vi over emacs (or vice versa)? Is Java faster than C++? Is Intel faster than AMD? It seems SO discourages such conversations because of potential flamewars. So where do people go to discuss such matters? I'm especially interested in venues where people can "up-vote" good comments and good questions.

    Read the article

  • Determining the word width in C

    - by das_weezul
    Hi! I'm learning C right now, so I'm fiddling about with pointers. Is there a way to determine the word width of the CPU in C? I'm writing a small program which prints its own stack (because I'm curious how it is structured), so that information would come in handy. Right now I'm using an int pointer, as an int is 4 bytes wide and I'm using a 32-bit Intel Atom CPU. Thanks in advance, C gurus ;o)
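
    One common approximation is to take the size of a data pointer as the word width (a sketch; it assumes pointer size equals the machine word size, which holds on ordinary 32-bit and 64-bit Intel platforms but is not guaranteed by the standard):

        // width.cpp -- infer the native word width from the size of a pointer.
        #include <climits>   // CHAR_BIT
        #include <cstdio>

        int main() {
            std::printf("sizeof(void*) = %zu bytes -> %zu-bit word\n",
                        sizeof(void*), sizeof(void*) * CHAR_BIT);
            std::printf("sizeof(int)   = %zu bytes\n", sizeof(int));
            std::printf("sizeof(long)  = %zu bytes\n", sizeof(long));
        }

    On a 32-bit platform this prints 4 bytes (32 bits) for the pointer, matching the 4-byte int assumption above.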

    Read the article

  • Python multithreading: how is it using multiple cores?

    - by Sabirul Mostofa
    I am running a multithreaded application (Python 2.7.3) on an Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93 GHz. I thought it would use only one core, but using the "top" command I see that the Python processes are constantly changing core number. Enabling "SHOW THREADS" in top shows different threads working on different cores. Can anyone please explain this? It is bothering me, as I know from theory that Python multithreading is executed on a single core.

    Read the article

  • What version of Visual Studio is this Python compiled with?

    - by leon
    I am trying to find out the version of Visual Studio that was used to compile the Python on my computer. It says: Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on win32. What I do not understand is the "MSC v.1500". Does it mean it was compiled with 2005? I cannot find this information on python.org either. Any help is appreciated!
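
    For reference, the "MSC v.1500" in the banner is the compiler's _MSC_VER value, which a small program can print directly (a minimal sketch; known values include 1310 = Visual C++ 2003, 1400 = Visual C++ 2005, 1500 = Visual C++ 2008):

        // msc_ver.cpp -- print the Visual C++ compiler version; build with cl.exe.
        #include <cstdio>

        int main() {
        #ifdef _MSC_VER
            std::printf("_MSC_VER = %d\n", _MSC_VER);   // 1500 here matches "MSC v.1500"
        #else
            std::printf("not compiled with MSVC\n");
        #endif
        }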

    Read the article

  • What binary architectures should be cross-compiled when building Mac OS X packages?

    - by Alex Leach
    Currently, Apple's native binaries and libraries are distributed as fat files, with support for both the i386 and x86_64 architectures. The SDK (Xcode 4.4 with command line tools) no longer supports cross-compiling PowerPC binaries, so I think they can safely be ignored, but there don't seem to be any specific guidelines or recommendations about which Intel architectures to support. So, when compiling code for distribution on OS X, do people still cross-compile for the i386 architecture, or are x86_64 binaries the only architecture worth bothering with nowadays?

    Read the article
