Search Results

Search found 5572 results on 223 pages for 'cpu'.


  • 32 bit dll importing in 64 bit .Net application

    - by scatterbraiin
    Hello, I'm having a problem that I've been trying to solve since yesterday with no luck. I have a 32-bit Delphi DLL which I want to import into a .NET Windows application. This application has to be built in Any CPU mode, so of course a BadImageFormatException is thrown, because an x86 DLL can't be loaded into an x64 process. I googled around and found a suggested solution saying I have to write a wrapper, but it wasn't clear to me. Can anybody tell me how to solve this problem? Is there any way I can import a 32-bit Delphi DLL into a program built in Any CPU or x64 mode (or maybe another solution)?
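    The "wrapper" advice usually means going out of process: a 64-bit process can never load a 32-bit DLL in-process, but it can talk to a separate 32-bit process that loads it (over standard I/O, named pipes, .NET Remoting, or an out-of-process COM server). Below is a minimal sketch of the standard-output variant; MyDelphi.dll, its Calculate export, and Wrapper.exe are hypothetical placeholders, not actual names from the question.

        // --- Wrapper.exe (separate project, Platform Target = x86 so it CAN load the 32-bit DLL) ---
        // "MyDelphi.dll" and its "Calculate" export are hypothetical placeholders.
        using System;
        using System.Runtime.InteropServices;

        class Wrapper
        {
            [DllImport("MyDelphi.dll", CallingConvention = CallingConvention.StdCall)]
            static extern int Calculate(int input);

            static void Main(string[] args)
            {
                int input = int.Parse(args[0]);
                // Write the DLL's result to stdout so the 64-bit parent process can read it.
                Console.WriteLine(Calculate(input));
            }
        }

        // --- In the Any CPU application: run the x86 wrapper out of process and read its output ---
        using System.Diagnostics;

        class DelphiProxy
        {
            public static string Calculate(string input)
            {
                var psi = new ProcessStartInfo("Wrapper.exe", input)
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };
                using (var proc = Process.Start(psi))
                {
                    string result = proc.StandardOutput.ReadToEnd();
                    proc.WaitForExit();
                    return result;
                }
            }
        }

    Starting a process per call is only acceptable for infrequent calls; for chatty interfaces the same split is normally kept alive as a long-running x86 host process with a pipe or WCF channel in between.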

    Read the article

  • Performance of DrawingVisual vs Canvas.OnRender for lots of constantly changing shapes

    - by romkyns
    I'm working on a game-like app which has up to a thousand shapes (ellipses and lines) that constantly change at 60fps. Having read an excellent article on rendering many moving shapes, I implemented this using a custom Canvas descendant that overrides OnRender to do the drawing via a DrawingContext. The performance is quite reasonable, although the CPU usage stays high. However, the article suggests that the most efficient approach for constantly moving shapes is to use lots of DrawingVisual instances instead of OnRender. Unfortunately though it doesn't explain why that should be faster for this scenario. Changing the implementation in this way is not a small effort, so I'd like to understand the reasons and whether they are applicable to me before deciding to make the switch. Why could the DrawingVisual approach result in lower CPU usage than the OnRender approach in this scenario?
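    For reference, the DrawingVisual pattern the article recommends generally looks like the sketch below: a host element registers lightweight DrawingVisual children once, and each frame only the visuals that actually moved have their DrawingContext re-opened, instead of every shape being re-issued in a single OnRender pass. This is a minimal sketch, not a drop-in port of the code described above.

        // Minimal sketch of hosting DrawingVisuals instead of overriding OnRender.
        using System.Windows;
        using System.Windows.Media;

        public class VisualHost : FrameworkElement
        {
            private readonly VisualCollection _children;

            public VisualHost()
            {
                _children = new VisualCollection(this);
            }

            public DrawingVisual AddEllipse(Point center, double radius, Brush fill)
            {
                var visual = new DrawingVisual();
                Redraw(visual, center, radius, fill);
                _children.Add(visual);
                return visual;
            }

            // Each frame, only the visuals that moved need to be redrawn.
            public void Redraw(DrawingVisual visual, Point center, double radius, Brush fill)
            {
                using (DrawingContext dc = visual.RenderOpen())
                {
                    dc.DrawEllipse(fill, null, center, radius, radius);
                }
            }

            // Plumbing WPF needs in order to find the visual children.
            protected override int VisualChildrenCount
            {
                get { return _children.Count; }
            }

            protected override Visual GetVisualChild(int index)
            {
                return _children[index];
            }
        }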

    Read the article

  • How to set WCF threads to schedule differently

    - by Gilad
    Hi, I'm running a Windows service that has two main objectives: executing/handling exposed web methods, and running inner processes that consume a lot of CPU. The problem is that when I execute many inner processes (as tasks) that are queued to the thread pool or task pool, the execution of the web methods takes much longer, since WCF also queues its work to the same thread pool. This happens even when setting the inner processes' task priority to lowest and the web methods' thread priority to highest. I hoped that Framework 4.0 would improve this, and it has, but it still takes quite a lot of time for the system to handle the WCF queued tasks when the CPU is busy with the inner tasks. Is it possible to change the thread pool that WCF uses to a different one? Is it possible to manually change the task queue (global task queue, local task queue)? Is it possible to manually handle two task queues that behave differently? Any help on the subject would be appreciated. Gilad.
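    One approach (a sketch only, assuming the inner jobs can be wrapped as Action delegates) is to keep the CPU-bound work off the shared ThreadPool entirely, since that is the pool WCF dispatches requests on, and run it instead on a small set of dedicated, lower-priority threads. On .NET 4.0 the same idea can also be expressed as a custom TaskScheduler with a capped concurrency level.

        // Sketch: run CPU-heavy "inner" jobs on dedicated background threads instead of
        // the shared ThreadPool that WCF uses for dispatching requests.
        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        class InnerWorkRunner
        {
            private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();

            public InnerWorkRunner(int workerCount)
            {
                for (int i = 0; i < workerCount; i++)
                {
                    var worker = new Thread(Consume)
                    {
                        IsBackground = true,
                        Priority = ThreadPriority.BelowNormal  // leave headroom for WCF requests
                    };
                    worker.Start();
                }
            }

            public void Enqueue(Action work)
            {
                _queue.Add(work);
            }

            private void Consume()
            {
                foreach (Action work in _queue.GetConsumingEnumerable())
                {
                    work();
                }
            }
        }

    Creating it as, say, new InnerWorkRunner(Environment.ProcessorCount - 1) caps the CPU-bound concurrency below the core count, so the ThreadPool threads WCF needs are far less likely to be starved.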

    Read the article

  • How to maximize http.sys file upload performance

    - by anelson
    I'm building a tool that transfers very large streaming data sets (possibly on the order of terabytes in a single stream; routinely in the tens of gigabytes) from one server to another. The client portion of the tool will read blocks from the source disk and send them over the network. The server side will read these blocks off the network and write them to a file on the server disk. Right now I'm trying to decide which transport to use. The options are raw TCP and HTTP. I really, REALLY want to be able to use HTTP. The HttpListener (or WCF if I want to go that route) makes it easy to plug in to the HTTP Server API (http.sys), and I can get things like authentication and SSL for free. The problem right now is performance. I wrote a simple test harness that sends 128K blocks of NULL bytes using the BeginWrite/EndWrite async I/O idiom, with async BeginRead/EndRead on the server side. I've modified this test harness so I can do this with either HTTP PUT operations via HttpWebRequest/HttpListener, or plain old socket writes using TcpClient/TcpListener. To rule out issues with network cards or network pathways, both the client and server are on one machine and communicate over localhost. On my 12-core Windows 2008 R2 test server, the TCP version of this test harness can push bytes at 450MB/s, with minimal CPU usage. On the same box, the HTTP version of the test harness runs between 130MB/s and 200MB/s depending upon how I tweak it. In both cases CPU usage is low, and the vast majority of what CPU usage there is is kernel time, so I'm pretty sure my usage of C# and the .NET runtime is not the bottleneck. The box has two 6-core Xeon X5650 processors, 24GB of single-ranked DDR3 RAM, and is used exclusively by me for my own performance testing. I already know about HTTP client tweaks like ServicePointManager.MaxServicePointIdleTime, ServicePointManager.DefaultConnectionLimit, ServicePointManager.Expect100Continue, and HttpWebRequest.AllowWriteStreamBuffering. Does anyone have any ideas for how I can get HTTP.sys performance beyond 200MB/s? Has anyone seen it perform this well on any environment?
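    For reference, the client-side knobs listed above are typically applied along these lines (a sketch only; the numeric values are illustrative rather than tuned for this workload):

        // Sketch of applying the HttpWebRequest / ServicePointManager tweaks mentioned above.
        using System.Net;

        class UploadClientSetup
        {
            static HttpWebRequest CreateUploadRequest(string url)
            {
                ServicePointManager.DefaultConnectionLimit = 16;      // illustrative value
                ServicePointManager.MaxServicePointIdleTime = 10000;  // milliseconds, illustrative
                ServicePointManager.Expect100Continue = false;        // skip the 100-Continue round trip

                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "PUT";
                request.AllowWriteStreamBuffering = false;  // stream blocks instead of buffering the whole body
                request.SendChunked = true;                 // needed when the content length is not known up front
                return request;
            }
        }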

    Read the article

  • In a multithreaded app, would a multi-core or multiprocessor arrangement be better?

    - by Michael
    I've read a lot on this topic already, both here (e.g., stackoverflow.com/questions/1713554/threads-processes-vs-multithreading-multi-core-multiprocessor-how-they-are or http://stackoverflow.com/questions/680684/multi-cpu-multi-core-and-hyper-thread) and elsewhere (e.g., ixbtlabs.com/articles2/cpu/rmmt-l2-cache.html or software.intel.com/en-us/articles/multi-core-introduction/), but I'm still not sure about a couple of things that seem very straightforward. So I thought I'd just ask. (1) Is a multi-core processor in which each core has a dedicated cache effectively the same as a multiprocessor system (balanced of course for processor speed, cache size, and so on)? (2) Let's say I have some images to analyze (i.e., computer vision), and I have these images loaded into RAM. My app spawns a thread for each image that needs to be analyzed. Will this app on a shared-cache multi-core processor run slower than on a dedicated-cache multi-core processor, and would the latter run at the same speed as on an equivalent single-core multiprocessor machine? Thank you for the help!

    Read the article

  • Learning the lower levels of computing

    - by Ben
    I am a software developer with four years' experience in .NET development. I always like to keep up to date with the latest technologies (.NET related, normally) being released and love learning them. I didn't, however, go to university, and learnt all I know through helpful colleagues, .NET courses, the internet and good old books. I feel that I am a good developer, but without having learnt the lower levels of a computer as you would in the first year of a computer-related university course, I get lost when talking to people about a lot of the more technical, lower-level computing. Is there a book (or books) that anyone could recommend that would cover the lower levels of what is going on when I click "Run" in Visual Studio? I feel out of my depth when my boss says to me "That's running in the CPU cache" or "you're limited by disk reads there", and would like to feel more confident when talking about how the hardware talks to each other (CPU to RAM etc). Apologies if that's a vague question or has been asked before (I did check and couldn't find anything on here that answers my question).

    Read the article

  • How to utilize my computation resources.

    - by carter-boater
    Hi all, I wrote a program to solve a complicated problem. It is just a C# console application and doesn't call Console.Write until the computation part is finished, so output won't affect the performance. The program looks like this:

        static void Main(string[] args)
        {
            Thread WorkerThread = new Thread(new ThreadStart(Run), StackSize);
            WorkerThread.Priority = ThreadPriority.Highest;
            WorkerThread.Start();
            Console.WriteLine("Worker thread is running...");
            WorkerThread.Join();
        }

    Now it takes 3 minutes to run, and when I open my Task Manager I see it only uses 12% of the CPU time. I actually have an Intel i7 CPU with 6 GB of triple-channel DDR3 memory. I am wondering how I can improve the utilization of my hardware. Thanks a lot
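    One thing worth noting: a single worker thread can only keep one logical core busy, and on a 4-core/8-thread i7 that shows up as roughly 100% / 8, about 12% total CPU in Task Manager, which matches the figure above. If the problem can be partitioned, a sketch of fanning the work out over all logical cores looks like this (Solve is a hypothetical stand-in for the real computation):

        // Sketch: split the computation across one thread per logical core.
        using System;
        using System.Threading;

        class Program
        {
            static void Main()
            {
                int workers = Environment.ProcessorCount;
                var threads = new Thread[workers];
                for (int i = 0; i < workers; i++)
                {
                    int part = i;  // capture the loop variable for the closure
                    threads[i] = new Thread(() => Solve(part, workers)) { Priority = ThreadPriority.Highest };
                    threads[i].Start();
                }
                foreach (Thread t in threads)
                {
                    t.Join();
                }
                Console.WriteLine("All worker threads finished.");
            }

            // Hypothetical stand-in: each thread works on its own slice of the problem.
            static void Solve(int part, int totalParts)
            {
                // ... the CPU-bound computation for slice 'part' of 'totalParts' goes here ...
            }
        }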

    Read the article

  • What is the bit size of long on 64-bit Windows?

    - by acidzombie24
    Not too long ago someone told me that long is not 64 bits on 64-bit machines and that I should always use int. This did not make sense to me. I've seen docs (such as the one on Apple's official site) say that long is indeed 64 bits when compiling for a 64-bit CPU. I looked up what it is on Windows and found: "Windows: long and int remain 32-bit in length, and special new data types are defined for 64-bit integers." (from http://www.intel.com/cd/ids/developer/asmo-na/eng/197664.htm?page=2). What should I use? Should I define something like uw, sw ((un)signed width) as a long if not on Windows, and otherwise do a check on the target CPU's bit size?

    Read the article

  • How to ignore/prevent javadoc folder from validation during Eclipse Build?

    - by h2g2java
    In my WAR there is a huge javadoc folder. There is no point in validating it, since the javadocs are produced by the Sun (Oracle) javadoc utility. I have forgotten how I did it the last time; I need to tell the Eclipse build not to validate that particular folder. The reasons why I need this: 1. The HTML produced by the Sun javadoc generation utility does not meet the requirements that Eclipse enforces. There is a bug report in Eclipse, but Eclipse responds that the Sun javadoc generator's non-compliance is not their fault and that they intend to stick to strict compliance, which results in lots of HTML errors listed in the Problems tab. 2. The javadoc folder is a remote link, and high activity on that link uses up my CPU resources; because it is a link to a remote location, that high CPU activity is sustained for a long time until it finishes scanning the whole 35MB of javadocs. Thanks, I need help.

    Read the article

  • Identify cause of hundreds of AJP threads in Tomcat

    - by Rich
    We have two Tomcat 6.0.20 servers fronted by Apache, with communication between the two using AJP. Tomcat in turn consumes web services on a JBoss cluster. This morning, one of the Tomcat machines was using 100% of CPU on 6 of the 8 cores on our machine. We took a heap dump using JConsole, and then tried to connect JVisualVM to get a profile to see what was taking all the CPU, but this caused Tomcat to crash. At least we had the heap dump! I have loaded the heap dump into Eclipse MAT, where I have found that we have 565 instances of java.lang.Thread. Some of these, obviously, are entirely legitimate, but the vast majority are named "ajp-6009-XXX" where XXX is a number. I know my way around Eclipse MAT pretty well, but haven't been able to find an explanation for it. If anyone has some pointers as to why Tomcat may be doing this, or some hints on finding out why using Eclipse MAT, that'd be appreciated!

    Read the article

  • OpenMP + SSE gives no speedup

    - by Sayan Ghosh
    Hi, my professor found this interesting experiment of 3D linearly separable kernel convolution using SSE and OpenMP, and gave me the task of benchmarking its statistics on our system. The author claims a crazy 18-fold speedup over the serial approach! It might not always be that much, but we were expecting at least a 2-4 times speedup running this on a dual-core Intel. http://software.intel.com/en-us/articles/16bit-3d-convolution-sse4openmp-implementation-on-penryn-cpu/#comment-41994 Alas, we found exactly no speedup. The serial code always performs better, with or without OpenMP. I am using Linux, and observed a certain trend: when no other processes are running on the system, after a while the loadavg starts increasing and the %CPU utilization falls. Another probable false positive which I ran into accidentally: I started the program, then immediately paused it. Then I ran it in the background with bg, and saw a speedup of more than 2. This happens all the time! Any advice would be great. Thanks, Sayan

    Read the article

  • What's a reliable and practical way to protect software with a user license?

    - by Frank
    I know software companies use licenses to protect their software, but I also know there are keygen programs to bypass them. I'm a Java developer; if I put my program online for sale, what's a reliable and practical way to protect it? How about something like this, would it work?
    1. I use ProGuard to protect the source code.
    2. Sign the executable Jar file.
    3. Since my Java program only needs to work on PCs [I need to use JDIC in it], I wrap the final executable Jar into an .exe file, which makes it harder to decompile.
    4. When a user first downloads and runs my app, it checks for a Pass file on his PC.
    5. If the Pass file doesn't exist, the app runs in demo mode and exits in 5 minutes.
    6. When the demo exits, a panel opens with a "Buy Now" button. This demo mode repeats forever unless step 7 happens.
    7. If the user clicks the "Buy Now" button, he fills out a detailed form [name, phone, email ...] and presses a "Verify Info" button to save the form to a Pass file, leaving the license Key # field empty in this newly generated Pass file.
    8. Pressing the "Verify Info" button takes him to an HTML form pre-filled with his info to verify what he is buying; also hidden in the form's input field is a license Key number. He can now press a "Pay Now" button to go to Paypal to finish the process.
    9. The hidden license Key # will be passed to Paypal as product Id info and emailed to me.
    10. After I get the payment and the Paypal email, I'll add the license Key # to a valid license Key list and put it on my site, at a url only I know. The list is updated hourly.
    11. A few hours later, when the user runs the app again, it can find the Pass file on his PC, but the license Key # value is empty, so it goes to the valid-list url to see if its license Key # is on the list; if so, it writes the license Key # into the Pass file, and the next time it starts it will find the valid license Key # and start in purchased mode without exiting in 5 minutes.
    12. If it can't find its license Key # on the list from my url, it runs in demo mode.
    13. In order to prevent a user from copying and using another paid user's valid Pass file, the license Key # is unique to each PC [I'm trying to find out how], so a valid Pass file only works on one PC. Only after a user has paid will Paypal email me the valid license Key # with his payment.
    14. The Id checking goes like this: use the CPU ID, "CPU_01-02-ABC" for example, encrypt it to the result ID "XeR5TY67rgf", and compare it to the list on my url; if "XeR5TY67rgf" is not on my valid user list, run in demo mode. If it exists, write "XeR5TY67rgf" into the Pass file license field.
    In order to get a unique license Key, can I use his PC's CPU Id? Or something unique and useful [relatively less likely to change]. If so, let's say this CPU ID is "CPU_01-02-ABC"; I can encrypt it to something like "XeR5TY67rgf" and pass it to Paypal as the product Id in the hidden HTML form field, then get it from Paypal's email notification and add it to the valid license Key # list at the url. So even if a hacker knows it uses the CPU Id, he can't write it into the Pass file field, because only encrypted Ids are valid Ids, and only my program knows how to generate the encrypted Ids. And even if another hacker knows the encrypted Id is hidden in the HTML form input field, as long as it's not on my url list, it's still invalid. Can anyone find any flaw in the above system? Is it practical? And most importantly, how do I get hold of this unique ID that can represent a user's PC? Frank

    Read the article

  • Referencing a x86 assembly in a 64bit one

    - by Jörg Battermann
    In one project we have a dependency on a legacy system and its .NET assembly, which is delivered as x86 only; they do not provide an 'Any CPU' or 64-bit build. The project itself juggles lots of data, and we hit the limitations that come with forcing x86 on the whole project because of that one assembly (if we used 64-bit/Any CPU, it would raise a BadImageFormatException as soon as that x86 assembly is loaded on a 64-bit machine). So, is there a workaround to use an x86 assembly from a 64-bit .NET host application? That tiny dependency on one x86 assembly keeps 99% of the project heavily limited.

    Read the article

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer.

    Why Start Now? The .NET 4.0 Task Parallel Library appears to be well designed and some relatively simple tests demonstrate that it works well on today's multi-core CPUs. I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell Poweredge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM. Competitive advantage: some of our customers can never get enough performance, and there is no doubt that we can build a faster product using TPL today. It sounds fun: yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.

    Why Wait? Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores which share a single level 3 cache today, and most likely a 6-core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously, the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA? Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it?

    Limitations: our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.
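    For a sense of scale, the call-site change TPL asks for is small; a sketch using the .NET 4.0 APIs (ProcessRow is a hypothetical stand-in for real work):

        // Sketch of .NET 4.0 TPL usage; ProcessRow stands in for actual CPU-bound work.
        using System.Threading.Tasks;

        class TplSketch
        {
            static void ProcessAll(int rowCount)
            {
                // Data parallelism: the runtime partitions the range across the available cores.
                Parallel.For(0, rowCount, row =>
                {
                    ProcessRow(row);
                });

                // Task form, useful when the units of work are irregular.
                Task first = Task.Factory.StartNew(() => ProcessRow(0));
                Task second = Task.Factory.StartNew(() => ProcessRow(1));
                Task.WaitAll(first, second);
            }

            static void ProcessRow(int row)
            {
                // ... CPU-bound work for one row ...
            }
        }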

    Read the article

  • Why is EventMachine defer slower than Ruby Thread?

    - by allenwei
    I have two scripts which use Mechanize to fetch the Google index page. I assumed EventMachine would be faster than Ruby threads, but it isn't. The EventMachine code costs "0.24s user 0.08s system 2% cpu 12.682 total", while the Ruby thread code costs "0.22s user 0.08s system 5% cpu 5.167 total". Am I using EventMachine in the wrong way? Can anyone explain it to me? Thanks!
    1. EventMachine:

        require 'rubygems'
        require 'mechanize'
        require 'eventmachine'

        trap("INT") { EM.stop }

        EM.run do
          num = 0
          operation = proc {
            agent = Mechanize.new
            sleep 1
            agent.get("http://google.com").body.to_s.size
          }
          callback = proc { |result|
            sleep 1
            puts result
            num += 1
            EM.stop if num == 9
          }
          10.times do
            EventMachine.defer operation, callback
          end
        end

    2. Ruby threads:

        require 'rubygems'
        require 'mechanize'

        threads = []
        10.times do
          threads << Thread.new do
            agent = Mechanize.new
            sleep 1
            puts agent.get("http://google.com").body.to_s.size
            sleep 1
          end
        end
        threads.each do |aThread|
          aThread.join
        end

    Read the article

  • SQL 2008 Encryption Scan

    - by Mike K.
    We recently upgraded a database server from SQL 2005 to SQL 2008 64-bit. CPU utilization now often runs at 100% on all four processors (this never happened on the SQL 2005 server). When I run sp_lock I see a number of processes waiting on a resource called [ENCRYPTION_SCAN]. I am not using any SQL 2008 encryption features. Does anyone know why I would have tasks waiting on this resource? It appears that whenever I have four processes waiting on this resource, CPU hits 100% on all four processors.

    Read the article

  • Computationally intensive scala process using actors hangs uncooperatively

    - by Chick Markley
    I have a computationally intensive Scala application that hangs. By hangs I mean it is sitting in the process stack using 1% CPU but does not respond to kill -QUIT, nor can it be attached via jdb attach. It runs 2-12 hours at 800-900% CPU before it gets stuck. The application is using ~10 scala.actors. Until now I have had great success with kill -QUIT, but I am a bit stumped as to how to proceed. The actors write a fair amount to stdout using println, which is redirected to a text file, but that has not been helpful so far diagnostically. I am just hoping there is some obvious technique when kill -QUIT fails that I am ignorant of, or just confirmation that having multiple actors println asynchronously is a really bad idea (though I've been doing it for a long time, only recently with these results). Details: Scala 2.8.1 & 2.8.0, Mac OS X 10.6.5, Java version "1.6.0_22". Thanks

    Read the article

  • How many layers are between my program and the hardware?

    - by sub
    I somehow have the feeling that modern systems, including runtime libraries, this exception handler and that built-in debugger, build up more and more layers between my (C++) programs and the CPU/rest of the hardware. I'm thinking of something like this:
    - 1 + 2
    - OS top layer
    - Runtime library/helper/error handler
    - a hell of a lot of DLL modules
    - OS kernel layer
    - "Do you really want to run 1 + 2?" Windows popup (don't take this seriously)
    - OS kernel layer
    - Hardware abstraction
    - Hardware
    - Go through at least 100 miles of circuits
    - Eventually arrive at the CPU
    - ADD 1, 2
    - Go all the way back to my program
    Nearly all the technical things are simply wrong and in some random order, but you get my point, right? How much longer/shorter is this chain when I run a C++ program that calculates 1 + 2 at runtime on Windows? How about when I do this in an interpreter (Python|Ruby|PHP)? Is this chain really as dramatic in reality? Does Windows really try "not to stand in the way"? E.g.: a direct connection, my binary < hardware?

    Read the article

  • Are all the system's floating point operations the same?

    - by Jj
    We're making this web app in PHP, and when working on the reports we have Excel files to compare our results against, to make sure our code is doing the right operations. Now we're running into some differences due to floating point arithmetic. We're doing the same divisions and multiplications and getting slightly different numbers, which add up to a noticeable difference. My question is whether Excel delegates its floating point arithmetic to the CPU and whether PHP also relies on the CPU for its operations, or does each application implement its own set of math algorithms?
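    Both Excel and PHP are generally understood to use the CPU's IEEE 754 double arithmetic, but evaluation order, intermediate precision, and display rounding can still differ between them. The tiny C# sketch below (offered only as an illustration of order sensitivity, not of either product's internals) shows how regrouping the same operations changes the result:

        // IEEE 754 doubles are not associative: regrouping the same additions can
        // change the last bits of the result, which then compound over many operations.
        using System;

        class FloatOrderDemo
        {
            static void Main()
            {
                double a = (0.1 + 0.2) + 0.3;
                double b = 0.1 + (0.2 + 0.3);
                Console.WriteLine(a == b);           // False: the two sums differ in the last bit
                Console.WriteLine(a.ToString("R"));  // round-trip format exposes the extra digits
                Console.WriteLine(b.ToString("R"));
            }
        }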

    Read the article

  • Improving the efficiency of Kinect for Windows DTWGestureRecognition Application

    - by Ray
    Currently I am using the DTWGestureRecognition open source tool for Kinect SDK v1.5. I have recorded a few gestures and use them to navigate through Windows 7. I also have implemented voice control for simple things such as opening PowerPoint, Chrome, etc. My main issue is that the application uses quite a bit of my CPU power which causes it to become slow. During gestures and voice commands, the CPU usage sometimes spikes to 80-90%, which causes the application to be unresponsive for a few seconds. I am running it on a 64 bit Windows 7 machine with an i5 processor and 8 GB of RAM. I was wondering if anyone with any experience using this tool or Kinect in general has made it more efficient and less performance hogging. Right now I removed sections which display the RGB video and the Depth video but even doing that did not make a big impact. Any help is appreciated, thanks!

    Read the article

  • Continuously checking database from a Windows service

    - by JonF
    I am making a Windows service which needs to continuously check for database entries that can be added at any time, telling it to execute some code. It looks to see if an entry's status is set to pending and its execute-time value is earlier than the current time. Is the only way to do this to just run SELECT statements over and over? It might need to execute the code every minute, which means I need to run the SELECT statement every minute looking for entries in the database. I'm trying to avoid unnecessary CPU time because I'm probably going to end up paying for CPU cycles at the hosting provider.
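    If polling is the route taken, a common shape for it is a timer inside the service running a cheap, indexed query. In the sketch below the table and column names (dbo.Jobs, Status, ExecuteTime) and the 60-second interval are hypothetical and only illustrative.

        // Sketch: poll for pending, due entries once a minute from a Windows service.
        using System;
        using System.Data.SqlClient;
        using System.Threading;

        class JobPoller
        {
            private Timer _timer;
            private readonly string _connectionString;

            public JobPoller(string connectionString)
            {
                _connectionString = connectionString;
            }

            public void Start()
            {
                // Fire immediately, then every 60 seconds.
                _timer = new Timer(Poll, null, TimeSpan.Zero, TimeSpan.FromSeconds(60));
            }

            private void Poll(object state)
            {
                using (var conn = new SqlConnection(_connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT Id FROM dbo.Jobs WHERE Status = 'Pending' AND ExecuteTime <= GETUTCDATE()", conn))
                {
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            ExecuteJob(reader.GetInt32(0));
                        }
                    }
                }
            }

            private void ExecuteJob(int jobId)
            {
                // ... run the work associated with this entry ...
            }
        }

    Query-notification approaches such as SqlDependency can avoid polling altogether, though whether they are usable depends on what the hosting provider allows.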

    Read the article

  • ffmpeg libavcodec.so missing while compiling with Cygwin

    - by nick
    I am building FFmpeg for Android by following this tutorial. Now I have an android folder inside the ffmpeg-2.0.1 folder, but there is no libavcodec-55.so file; instead I have lib/libavcodec.a. How can I get the libavcodec.so file? Here is build_android.sh:

        #!/bin/bash
        NDK=$HOME/Desktop/adt/android-ndk-r9
        SYSROOT=$NDK/platforms/android-9/arch-arm/
        TOOLCHAIN=$NDK/toolchains/arm-linux-androideabi-4.8/prebuilt/windows-x86_64
        function build_one
        {
          ./configure \
            --prefix=$PREFIX \
            --enable-shared \
            --disable-static \
            --disable-doc \
            --disable-ffmpeg \
            --disable-ffplay \
            --disable-ffprobe \
            --disable-ffserver \
            --disable-avdevice \
            --disable-doc \
            --disable-symver \
            --cross-prefix=$TOOLCHAIN/bin/arm-linux-androideabi- \
            --target-os=linux \
            --arch=arm \
            --enable-cross-compile \
            --sysroot=$SYSROOT \
            --extra-cflags="-Os -fpic $ADDI_CFLAGS" \
            --extra-ldflags="$ADDI_LDFLAGS" \
            $ADDITIONAL_CONFIGURE_FLAG
          make clean
          make
          make install
        }
        CPU=arm
        PREFIX=$(pwd)/android/$CPU
        ADDI_CFLAGS="-marm"
        build_one

    Read the article
