Search Results

Search found 5628 results on 226 pages for 'cpu hogging'.

Page 206/226 | < Previous Page | 202 203 204 205 206 207 208 209 210 211 212 213  | Next Page >

  • Peculiar JRE behaviour running RMI server under load, should I worry?

    - by darri
    I've been developing a minimalistic Java rich-client CRUD application framework for the past few years, mostly as a hobby but also actively using it to write applications for my current employer. The framework provides database access to clients either via a local JDBC-based connection or a lightweight RMI server. Last night I started a load-testing application which ran 100 headless clients, bombarding the server with requests, each client waiting only 1-2 seconds between running simple use cases, consisting of selecting records along with associated detail records from a simple e-store database (Chinook). This morning when I looked at the telemetry results from the server profiling session I noticed something which to me seemed strange (and made me keep the setup running for the remainder of the day), and I don't really know what conclusions to draw from it. Here are the results (telemetry screenshots, one each for memory, GC activity, threads, and CPU load). Interesting, right? So the question is: is this normal or erratic? Is this simply the JRE (1.6.0_03 on Windows XP) doing its thing (perhaps related to the JRE configuration), or is my framework design somehow causing this? Running the server against MySQL as opposed to an embedded H2 database does not affect the pattern. I am leaving out the details of my server design, but I'll be happy to elaborate if this behaviour is deemed erratic.

    Read the article

  • C# properties: How are they instantiated?

    - by Pedery
    Hi! This might be a pretty straightforward question, but I'm trying to understand some of the internal workings of the compilation. Very simply put, imagine an arbitrary object being instantiated. This object is then allocated on the heap. The object has a property of type PointF (which is a value type), with a get and a set method. Imagine the get and the set method containing a few calculations for doing their work. How, where (stack/heap) and when is this code instantiated? This is the background for the question: I'm writing get and set methods for an object, and these methods need to be accessed very frequently. The get and set code is itself rather massive, so I feared that in a worst-case scenario the methods would be instantiated, with all their internal code, as an object or a value type on every access of the property. On the other hand, the code is probably instantiated once when the main object is created, and the CPU is simply told to jmp to the start of the property code. Anyway, this is what I want to have clarified.
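
    For reference, a minimal sketch of what such a property looks like (class and member names invented for illustration). The compiler turns the get/set bodies into a pair of methods (get_Location/set_Location) that exist once per type, not per instance; each object on the heap stores only its fields, so heavy accessor code costs nothing per instance:

        using System.Drawing;

        public class Sprite
        {
            private PointF location;   // per-instance data: just the struct's bytes, inline in the object

            public PointF Location     // compiled once into get_Location()/set_Location() methods
            {
                get { return location; }   // however large this body is, it is shared by all instances
                set { location = value; }
            }
        }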

    Read the article

  • How do I slow down a flash game?

    - by dsaccount1
    Basically the objective is to click on certain targets, which upon doing so destroys the target and garners you points. I've written a macro to help me, up until the point where it's impossible to even see the target as more than a mere flicker (maybe even less than that; I can't see it with my eyes). But it must be possible, because I believe others have done it. (Maybe on slower computers?) Anyway, the question is: how would it be possible to slow down the flash game? I've thought of a couple of ways that could work, but I'm not sure how to implement them: 1. Slow down the CPU speed? (Something like that? How?) 2. As the game progresses, the time the targets appear and stay up is reduced; maybe there's a variable controlling all of this, so is it possible to find the address of that variable and modify it, freeze it or something? Any ideas, suggestions and especially advice would really be appreciated, thanks!

    Read the article

  • C Population Count of unsigned 64-bit integer with a maximum value of 15

    - by BitTwiddler1011
    I use a population count (Hamming weight) function intensively in a Windows C application and have to optimize it as much as possible in order to boost performance. More than half the cases where I use the function, I only need to know the value up to a maximum of 15. The software will run on a wide range of processors, both old and new. I already make use of the POPCNT instruction when Intel's SSE4.2 or AMD's SSE4a is present, but would like to optimize the software implementation (used as a fallback if no SSE4 is present) as much as possible. Currently I have the following software implementation of the function:

        inline int population_count64(unsigned __int64 w)
        {
            w -= (w >> 1) & 0x5555555555555555ULL;
            w = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL);
            w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;
            return int((w * 0x0101010101010101ULL) >> 56);
        }

    So to summarize: (1) I would like to know if it is possible to optimize this for the case when I only want to know the value up to a maximum of 15. (2) Is there a faster software implementation (for both Intel and AMD CPUs) than the function above?
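
    One direction for case (1), not from the original post: since only values up to 15 matter, a byte-at-a-time lookup table can stop early once the count exceeds 15. A sketch, assuming a one-time table initialization is acceptable:

        /* Sketch: 256-byte lookup table, processed a byte at a time with an
           early exit once the running count exceeds 15. */
        static unsigned char popcount8[256];

        void init_popcount8(void)
        {
            int i;
            for (i = 1; i < 256; i++)
                popcount8[i] = (unsigned char)((i & 1) + popcount8[i / 2]);
        }

        int population_count64_max15(unsigned __int64 w)
        {
            int count = 0;
            while (w) {
                count += popcount8[w & 0xFF];
                if (count > 15)
                    return 16;   /* caller only distinguishes 0..15 */
                w >>= 8;
            }
            return count;
        }

    Whether this beats the SWAR version depends on cache behaviour and typical bit density, so it is worth benchmarking on the actual target CPUs rather than assuming.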

    Read the article

  • Regex to GENERATE thumbnails!?!?! (but that's crazy!)

    - by CryptoMonkey
    Hello everyone! So here is my situation, and the solution that I've come up with to solve the problem. I have created an application that includes TinyMCE to allow users to create HTML content for publishing. The user can include images in their markup, and drag/resize those images, which affects the final width/height attributes in the IMG tag. This is all great: the users can include images and resize/relocate them to their desired appearance. But one big problem is that I am now sending a (possibly) much larger image to the client, only to have the browser resize it down to the requested width/height. All that bandwidth and lost load time... So my solution is to pre-process my users' markup content, scanning all of the IMG tags and parsing out the height/width/src attributes, then set each img's src to a phpThumb request with the parsed height/width passed in the thumbnail's URL. This will create my reduced-size image (optimising bandwidth at the expense of CPU and caching). What do you think about this solution? I've seen other posts where people were using mod_rewrite to do something similar, but I want to transform the content as the page is served rather than intercept the image requests as they're being received. Any thoughts about this design? I need some help with the fine details, as my regex skills need some work, but I'm very short on time and promise to pay my technical knowledge debt soon. To make the regexes easier, I can be sure of some things: only img tags that need this processing will have existing width="..." and height="..." attributes (with the double quotes and lower-cased text, but I suppose matching case-insensitively would be better in case TinyMCE changes). So: a regex to match only the necessary img tags, and maybe another three regexes to extract the src, the width, and the height? Thanks everyone.
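
    A sketch of that pass, structured exactly as described (one regex for the tag, one small regex per attribute). The phpThumb URL format shown (src/w/h parameters) and the path are assumptions to check against your phpThumb install:

        <?php
        function rewrite_thumbnails($html)
        {
            return preg_replace_callback('/<img\b[^>]*>/i', function ($m) {
                $tag = $m[0];
                // One small regex per attribute, order-independent.
                if (preg_match('/\bsrc="([^"]+)"/i',  $tag, $src) &&
                    preg_match('/\bwidth="(\d+)"/i',  $tag, $w) &&
                    preg_match('/\bheight="(\d+)"/i', $tag, $h)) {
                    $thumb = '/phpThumb/phpThumb.php?src=' . urlencode($src[1])
                           . '&w=' . $w[1] . '&h=' . $h[1];
                    $tag = str_replace('src="' . $src[1] . '"',
                                       'src="' . $thumb . '"', $tag);
                }
                return $tag;
            }, $html);
        }

    Note this deliberately skips img tags missing any of the three attributes, which matches the guarantee above that only tags needing processing carry explicit width/height.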

    Read the article

  • Any workarounds for non-static member array initialization?

    - by TomiJ
    In C++, it's not possible to initialize array members in the initialization list, thus member objects should have default constructors and they should be properly initialized in the constructor. Is there any (reasonable) workaround for this, apart from not using arrays? [Anything that can be initialized using only the initialization list is, in our application, far preferable to using the constructor, as that data can be allocated and initialized by the compiler and linker, and every CPU clock cycle counts, even before main. However, it is not always possible to have a default constructor for every class, and besides, reinitializing the data again in the constructor rather defeats the purpose anyway.] E.g. I'd like to have something like this (but this one doesn't work):

        class OtherClass {
        private:
            int data;
        public:
            OtherClass(int i) : data(i) {} // No default constructor!
        };

        class Foo {
        private:
            OtherClass inst[3]; // Array size fixed and known ahead of time.
        public:
            Foo(...) : inst[0](0), inst[1](1), inst[2](2) {}
        };

    The only workaround I'm aware of is the non-array one:

        class Foo {
        private:
            OtherClass inst0;
            OtherClass inst1;
            OtherClass inst2;
            OtherClass *inst[3];
        public:
            Foo(...) : inst0(0), inst1(1), inst2(2) {
                inst[0] = &inst0; inst[1] = &inst1; inst[2] = &inst2;
            }
        };

    Edit: It should be stressed that OtherClass has no default constructor, and that it is very desirable to have the linker be able to allocate any memory needed (one or more static instances of Foo will be created); using the heap is essentially verboten. I've updated the examples above to highlight the first point.
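
    For what it's worth, C++11 (not yet mainstream when this was asked) added exactly this: a member array can be list-initialized in the constructor's initialization list, constructing each element in place via OtherClass(int). A sketch:

        class Foo {
        private:
            OtherClass inst[3];
        public:
            // C++11 list-initialization of a member array; each element is
            // initialized from an int through OtherClass(int) - no default
            // constructor required.
            Foo() : inst{0, 1, 2} {}
        };

    For static instances, the same brace syntax at namespace scope lets the elements be constructed without any heap use, in line with the constraints above.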

    Read the article

  • WINDOWS: Your computer hangs. You can open the Run dialog (Windows+R), but performance is so dead that Task Manager is unusable

    - by John Sullivan
    The question is: what options are available to try to recover from total system instability, before pulling the plug, when all we can do is run programs or batch files on the PATH from the Run dialog (Windows+R), and performance is so dead that taskmgr / procexp / other programs with visual GUIs are not usable? I am not a Windows expert, but ideally someone out there has written a program that does more or less stuff like this: immediately set its priority to extremely high (or perhaps I can set that from the Run prompt), then evaluate performance bottlenecks. E.g. is CPU at 100%? If so, identify the offending program(s) or problems. Attempt and log fixes, then provide crude feedback asking the user if performance has stabilized enough to abort; wait a few seconds, if no feedback continue, etc. Eventually try to do any "system cleanup" if the program decides it cannot recover, and perhaps finally provide a series of beeps to the user, or what have you, to say "OK, I give up, time to pull the plug". Ideally create a log, when able. These kinds of horrible hangs are a situation where surely trying something, anything, is better than nothing -- as long as that something is intelligent -- when the alternative is ripping out the power cord. Again, I am not a Windows expert, so perhaps there is a much more elegant hands-on approach I am not aware of.

    Read the article

  • How to optimize this simple function which translates input bits into words?

    - by psihodelia
    I have written a function which reads an input buffer of bytes and produces an output buffer of words, where every word can be either 0x0081 for each ON bit of the input buffer or 0x007F for each OFF bit. The length of the input buffer is given. Both arrays have enough space allocated. I also have about 2 Kbytes of free RAM which I can use for lookup tables or so. Now, I found that this function is the bottleneck in a real-time application. It will be called very frequently. Can you please suggest a way to optimize this function? One possibility I see could be to use only one buffer and do in-place substitution.

        void inline BitsToWords(int8 *pc_BufIn, int16 *pw_BufOut, int32 BufInLen)
        {
            int32 i, j, z = 0;
            for (i = 0; i < BufInLen; i++) {
                for (j = 0; j < 8; j++, z++) {
                    pw_BufOut[z] = (((pc_BufIn[i] >> (7 - j)) & 0x01) == 1 ? 0x0081 : 0x007f);
                }
            }
        }

    Please do not offer any compiler-specific or CPU/hardware-specific optimization, because it is a multi-platform project.
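
    One portable direction (a sketch, not from the post, reusing the int8/int16/int32 typedefs): precompute the four output words for each 4-bit nibble. The table costs 16 x 4 x 2 = 128 bytes, well inside the 2 Kbyte budget, and replaces eight shift-mask-branch steps per byte with two table copies:

        #include <string.h> /* memcpy */

        static int16 nibble_words[16][4];

        void InitNibbleTable(void)
        {
            int n, b;
            for (n = 0; n < 16; n++)
                for (b = 0; b < 4; b++)
                    nibble_words[n][b] = ((n >> (3 - b)) & 1) ? 0x0081 : 0x007f;
        }

        void inline BitsToWords(int8 *pc_BufIn, int16 *pw_BufOut, int32 BufInLen)
        {
            int32 i;
            for (i = 0; i < BufInLen; i++) {
                unsigned char c = (unsigned char)pc_BufIn[i];
                memcpy(&pw_BufOut[i * 8],     nibble_words[c >> 4],   4 * sizeof(int16));
                memcpy(&pw_BufOut[i * 8 + 4], nibble_words[c & 0x0F], 4 * sizeof(int16));
            }
        }

    A full 256-entry byte table would halve the lookups but needs 256 x 8 x 2 = 4 Kbytes, which blows the stated RAM budget.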

    Read the article

  • SQL Server 2005 FREETEXT() Performance Issue

    - by Zenon
    I have a query with about 6-7 joined tables and a FREETEXT() predicate on 6 columns of the base table in the WHERE clause. This query worked fine (in under 2 seconds) for the last year and remained practically unchanged (I tried old versions and the problem persists). Then today, all of a sudden, the same query takes around 1-1.5 minutes. After checking the execution plan in SQL Server 2005, rebuilding the FULLTEXT index of that table, reorganising the FULLTEXT index, creating the index from scratch, restarting the SQL Server service, and restarting the whole server, I don't know what else to try. I temporarily switched the query to use LIKE instead until I figure this out (which takes about 6 seconds now). When I look at the query in the query performance analyser and compare the FREETEXT query with the LIKE query, the former has 350 times as many reads (4,921,261 vs. 13,943) and 20 times the CPU usage (38,937 vs. 1,938). So it really is the FREETEXT predicate that causes it to be so slow. Has anyone got any ideas on what the reason might be? Or further tests I could do?
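
    For reference, the two predicate shapes being compared look roughly like this (table and column names invented; the FREETEXT form assumes a full-text index covering the listed columns):

        DECLARE @terms nvarchar(100);
        SET @terms = N'search words';

        -- Full-text form (the slow one in this case):
        SELECT b.Id FROM dbo.BaseTable AS b
        WHERE FREETEXT((b.Col1, b.Col2, b.Col3), @terms);

        -- LIKE fallback (currently ~6 seconds):
        SELECT b.Id FROM dbo.BaseTable AS b
        WHERE b.Col1 LIKE '%' + @terms + '%'
           OR b.Col2 LIKE '%' + @terms + '%'
           OR b.Col3 LIKE '%' + @terms + '%';

    Comparing the actual execution plans of these two shapes is where the 350x read difference should show up.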

    Read the article

  • Image resizing efficiency in C# and .NET 3.5

    - by Matthew Nichols
    I have written a web service to resize user-uploaded images, and all works correctly from a functional point of view, but it causes CPU usage to spike every time it is used. It is running on Windows Server 2008 64-bit. I have tried compiling to 32 and 64 bit and get about the same results. The heart of the service is this function:

        private Image CreateReducedImage(Image imgOrig, Size NewSize)
        {
            var newBM = new Bitmap(NewSize.Width, NewSize.Height);
            using (var newGraphics = Graphics.FromImage(newBM))
            {
                newGraphics.CompositingQuality = CompositingQuality.HighSpeed;
                newGraphics.SmoothingMode = SmoothingMode.HighSpeed;
                newGraphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
                newGraphics.DrawImage(imgOrig, new Rectangle(0, 0, NewSize.Width, NewSize.Height));
            }
            return newBM;
        }

    I put a profiler on the service and it seemed to indicate that the vast majority of the time is spent in the GDI+ library itself and there is not much to be gained in my code. Questions: Am I doing something glaringly inefficient in my code here? It seems to conform to the examples I have seen. Are there gains to be had in using libraries other than GDI+? The benchmarks I have seen seem to indicate that GDI+ compares well to other libraries, but I didn't find enough of them to be confident. Are there gains to be had by using "unsafe code" blocks? Please let me know if I have not included enough of the code... I am happy to put up as much as requested but don't want to be obnoxious in the post.
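
    One knob worth measuring (a suggestion to profile, not a confirmed fix): HighQualityBicubic is the most expensive GDI+ resampler, and for downscaling, HighQualityBilinear often looks close enough at noticeably lower CPU. Specifying the pixel format up front may also avoid a format conversion. A sketch:

        // Sketch: same routine with a cheaper resampler and an explicit
        // pixel format; worth profiling against the original.
        private Image CreateReducedImageFast(Image imgOrig, Size newSize)
        {
            var newBM = new Bitmap(newSize.Width, newSize.Height,
                                   PixelFormat.Format32bppPArgb);
            using (var g = Graphics.FromImage(newBM))
            {
                g.CompositingQuality = CompositingQuality.HighSpeed;
                g.SmoothingMode = SmoothingMode.HighSpeed;
                g.InterpolationMode = InterpolationMode.HighQualityBilinear;
                g.DrawImage(imgOrig, new Rectangle(0, 0, newSize.Width, newSize.Height));
            }
            return newBM;
        }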

    Read the article

  • Optimize Duplicate Detection

    - by Dave Jarvis
    Background: this is an optimization problem. Oracle Forms XML files have elements such as:

        <Trigger TriggerName="name" TriggerText="SELECT * FROM DUAL" ... />

    where the TriggerText is arbitrary SQL code. Each SQL statement has been extracted into uniquely named files such as:

        sql/module=DIAL_ACCESS+trigger=KEY-LISTVAL+filename=d_access.fmb.sql
        sql/module=REP_PAT_SEEN+trigger=KEY-LISTVAL+filename=rep_pat_seen.fmb.sql

    I wrote a script to generate a list of exact duplicates using a brute-force approach. Problem: there are 37,497 files to compare against each other, and it takes 8 minutes to compare one file against all the others. Logically, if A = B and A = C, then there is no need to check if B = C. So the problem is: how do you eliminate the redundant comparisons? The script will complete in approximately 208 days. The comparison script is as follows:

        #!/bin/bash
        echo Loading directory ...
        for i in $(find sql/ -type f -name \*.sql); do
            echo Comparing $i ...
            for j in $(find sql/ -type f -name \*.sql); do
                if [ "$i" = "$j" ]; then continue; fi
                # Case insensitive compare, ignore spaces
                diff -iEbwBaq $i $j > /dev/null
                # 0 = no difference (i.e., duplicate code)
                if [ $? = 0 ]; then
                    echo $i :: $j >> clones.txt
                fi
            done
        done

    Question: how would you optimize the script so that checking for cloned code is a few orders of magnitude faster? System constraints: a quad-core CPU with an SSD; trying to avoid using cloud services if possible. The system is a Windows-based machine with Cygwin installed -- algorithms or solutions in other languages are welcome. Thank you!
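
    One order-of-magnitude answer (a sketch, not from the post): normalize each file once to match what the diff flags ignore (case, whitespace, blank lines), hash the result, and group identical hashes. That is one pass over 37,497 files plus a sort, instead of roughly 700 million pairwise diffs:

        #!/bin/bash
        # Canonicalize each file once, hash it, then group equal hashes.
        find sql/ -type f -name '*.sql' | while read f; do
            hash=$(tr 'A-Z' 'a-z' < "$f" | tr -d '[:space:]' | md5sum | cut -d' ' -f1)
            echo "$hash $f"
        done | sort > hashes.txt

        # Hashes that occur more than once mark clone groups.
        cut -d' ' -f1 hashes.txt | uniq -d | while read h; do
            grep "^$h " hashes.txt >> clones.txt
        done

    Transitivity comes for free: files A, B and C with the same hash land in the same group with no pairwise checks, and hash collisions between genuinely different files are unlikely enough that a final diff within each group can confirm them cheaply.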

    Read the article

  • Setting up a NAS with Citrix XenServer

    - by JasonBrown
    Just a quick query for anyone who's worked with XenServer. I want to set up a NAS at home, but with virtualization (I've looked into VMware Server and KVM; I quite like KVM!), and I was told about XenServer 5.5. I have commodity hardware (ASUS board, dual-core 2.66 GHz CPU with 8 GB RAM), and I need to set up a fileserver to house about 2-3 TB worth of data (big chunky video - not porn!). I need to run Linux (preferably CentOS) but also run Windows virtualised for testing. I was thinking of going the XenServer route; however, I want to be able to offer a VM direct access to the 2-3 TB of HDDs (5 HDD drives) so it can do its thing (maybe using FreeNAS). Would this be possible with XenServer? Or will I have to do more work - and another box - to offer this? My goals are to use FreeNAS (ZFS!) for the fileserver, CentOS for SVN and the other bits we need (LAMP stack), and Windows for our win32 testing, all on one box. I see all these iSCSI target bits and get scared.

    Read the article

  • PHP shell_exec() - Run directly, or perform a cron (bash/php) and include MySQL layer?

    - by Jimbo
    Sorry if the title is vague - I wasn't quite sure how to word it! What I'm doing: I'm running a Linux command to capture its output in a variable, parsing the data, and outputting it as an array. Array values are displayed on a page using PHP, and this PHP page output is requested via AJAX every 10 seconds, so in effect the data is retrieved and displayed/updated every 10 seconds. There could be as many as 10,000 characters being parsed on every request, although it is usually much lower. Alternative idea: I want to know if there is a better* alternative method of retrieving this data every 10 seconds, as multiple users (<10) will be having this command executed automatically for them. A cron job running on the server could execute either bash or PHP (which is faster?) to grab the data and store it in a MySQL database. Then any AJAX calls to the PHP output would return values from the MySQL database rather than making a direct call to execute server code every 10 seconds. Why? I know there are security concerns with running execs directly from PHP, and (I hope this isn't micro-optimisation) I'm worried about CPU usage on the server. The server is running a Sempron processor. Yes, they do still exist. Having this only execute when the user is on the page (idea #1) means that the server isn't running code that doesn't need to be run. However, is this slow and insecure? Just in case the type of Linux command may be of assistance in determining its efficiency:

        shell_exec("transmission-remote $host:$port --auth $username:$password -l");

    I'm hoping that there are differences in efficiency and level of security between the two methods I have outlined above, and that this isn't just micro-micro-optimisation. If there are alternative methods that are better*, I'd love to learn about these! :)
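
    Whichever way it goes, shell-escaping the interpolated credentials is worth doing. A sketch of the hardened call plus the store-in-MySQL variant (table name, columns, and credentials invented; note plain cron only has one-minute granularity, so a 10-second refresh needs a looping script or several staggered entries):

        <?php
        // Hardened exec: user-supplied parts are shell-escaped.
        $output = shell_exec(sprintf(
            'transmission-remote %s --auth %s -l',
            escapeshellarg($host . ':' . $port),
            escapeshellarg($username . ':' . $password)
        ));

        // Store for the AJAX endpoint to read (hypothetical schema).
        $pdo = new PDO('mysql:host=localhost;dbname=status', 'dbuser', 'dbpass');
        $stmt = $pdo->prepare(
            'REPLACE INTO transmission_status (user_id, raw_output, fetched_at)
             VALUES (?, ?, NOW())');
        $stmt->execute(array($userId, $output));

    The AJAX-called page then does a single SELECT, so the per-request cost becomes one indexed read instead of one process spawn per user every 10 seconds.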

    Read the article

  • Setting pixel values in Nvidia NPP ImageCPU objects?

    - by solvingPuzzles
    In the Nvidia Performance Primitives (NPP) image-processing examples in the CUDA SDK distribution, images are typically stored on the CPU as ImageCPU objects, and images are stored on the GPU as ImageNPP objects. boxFilterNPP.cpp is an example from the CUDA SDK that uses these ImageCPU and ImageNPP objects. When using a filter (convolution) function like nppiFilter, it makes sense to define the filter kernel as an ImageCPU object. However, I see no clear way of setting the values of an ImageCPU object:

        npp::ImageCPU_32f_C1 hostKernel(3, 3); // allocate space for 3x3 convolution kernel
        // want to set hostKernel to [-1 0 1; -1 0 1; -1 0 1]
        hostKernel[0][0] = -1;    // this doesn't compile
        hostKernel(0, 0) = -1;    // this doesn't compile
        hostKernel.at(0, 0) = -1; // this doesn't compile

    How can I manually put values into an ImageCPU object? Notes: I didn't actually use nppiFilter in the code snippet; I'm just mentioning nppiFilter as a motivating example for writing values into an ImageCPU object. The boxFilterNPP.cpp example doesn't involve writing directly to an ImageCPU object, because nppiFilterBox is a special case of nppiFilter with a built-in box (mean) smoothing filter (effectively [1 1 1; 1 1 1; 1 1 1], normalized).
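
    A sketch of one way in, based on the accessors the SDK's C++ helper headers (ImagesCPU.h) expose in the versions I've seen - a raw pointer via data() and a row stride in bytes via pitch(). Verify both against your header before relying on this:

        // Assumed API: data() -> Npp32f* for a _32f_C1 image, pitch() -> bytes per row.
        npp::ImageCPU_32f_C1 hostKernel(3, 3);

        const Npp32f kernelValues[3][3] = { { -1.0f, 0.0f, 1.0f },
                                            { -1.0f, 0.0f, 1.0f },
                                            { -1.0f, 0.0f, 1.0f } };

        for (int row = 0; row < 3; ++row)
        {
            // Rows may be padded, so step by pitch() bytes rather than by width.
            Npp32f *pRow = reinterpret_cast<Npp32f *>(
                reinterpret_cast<Npp8u *>(hostKernel.data()) + row * hostKernel.pitch());
            for (int col = 0; col < 3; ++col)
                pRow[col] = kernelValues[row][col];
        }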

    Read the article

  • In synchronous query calls, one query causes the other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it. I was involved in optimization of an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there are some value mappings, performed using SELECT statements on another table in the same database. For a specific execution it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls and almost 20% on SELECT calls (the rest was distributed across other parts). After some trials I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else) - each SELECT took longer this time! Everything was running synchronously; there were no async calls, and there was only one single thread of execution. The SELECT and INSERT queries are very simple and don't have anything special, and they are on different tables, but in the same DB. I tested with the DB both on the application machine and on a remote network machine. I can't think of any explanation for this, as the profiler (an application profiler, not SQL Profiler) reported the change in the method call times: by removing the INSERT statements, the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (It can't be cache / query-optimization effects, because the queries ran synchronously in a single thread, which is far from affecting the cache this much.) I should note that the speed bottleneck was in SQL Server, which was using most of the CPU time.

    Read the article

  • Python threading question (Working with a method that blocks forever)

    - by Nix
    I am trying to wrap a thread around some receiving logic in Python. Basically we have an app that has a thread in the background polling for messages. The problem I ran into is that the piece that actually pulls the messages waits forever for a message, making it impossible to terminate. I ended up wrapping the pull in another thread, but I wanted to make sure there wasn't a better way to do it. Original code:

        class Manager:
            def __init__(self):
                receiver = MessageReceiver()
                receiver.start()
                # do other stuff...

        class MessageReceiver(Thread):
            receiver = Receiver()

            def __init__(self):
                Thread.__init__(self)

            def run(self):
                # stopped is a flag that I use to stop the thread...
                while not stopped:  # can never stop, because the pull below blocks
                    message = self.receiver.pull()
                    print "Message" + message

    What I refactored to:

        class Manager:
            def __init__(self):
                receiver = MessageReceiver()
                receiver.start()

        class MessageReceiver(Thread):
            receiver = Receiver()

            def __init__(self):
                Thread.__init__(self)

            def run(self):
                pull_thread = PullThread(self.receiver)
                pull_thread.start()
                # stopped is a flag that I use to stop the thread...
                while not stopped and pull_thread.last_message is None:
                    pass
                message = pull_thread.last_message
                print "Message" + message

        class PullThread(Thread):
            last_message = None

            def __init__(self, receiver):
                Thread.__init__(self, target=self.get_message, args=(receiver,))

            def get_message(self, receiver):
                self.last_message = receiver.pull()
                return self.last_message

    I know the obvious locking issues exist, but is this the appropriate way to control a receive thread that waits forever for a message? One thing I did notice is that this thing eats 100% CPU while waiting for a message. If you need to see the stopping logic, please let me know and I will post it.
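
    A common pattern for this (a sketch; Receiver is the poster's class, everything else invented): make the pulling thread a daemon that feeds a Queue, and have the consumer poll the queue with a timeout so it can check its stop flag without spinning:

        import Queue      # 'queue' in Python 3
        import threading

        class MessageReceiver(threading.Thread):
            def __init__(self, receiver):
                threading.Thread.__init__(self)
                self.daemon = True       # won't keep the process alive at exit
                self.receiver = receiver
                self.queue = Queue.Queue()

            def run(self):
                while True:
                    # blocks here, but as a daemon it can't prevent shutdown
                    self.queue.put(self.receiver.pull())

            def get_message(self, timeout=1.0):
                try:
                    # blocks at most `timeout` seconds: no busy-wait, ~0% CPU
                    return self.queue.get(timeout=timeout)
                except Queue.Empty:
                    return None

    The consumer loop then becomes "while not stopped: msg = rx.get_message()", checking the stop flag once per timeout interval instead of millions of times per second.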

    Read the article

  • My HTML5 web app crashes and I have no clue how to debug

    - by Shouvik
    Hi all, I have written a word game using the HTML5 canvas tag and a little bit of audio. I developed the application in the Chrome web browser on a Linux system. Recently, during the testing phase, it was tried on Safari 5.0.3 on a Mac and the web page froze - not just the canvas element, but every interactive element on the page. I have at times experienced this problem in Google Chrome while developing, but since the console did not throw any error before it happened, I did not give it much credence. Now, as per requirements, I am supposed to support both Chrome and Safari, but this dismal performance on Safari has left me shocked, and I cannot see what error could be thrown that would lead to such a situation. Worse yet, the CPU usage on using this application peaks at 70-80 percent on my two-year-old MacBook running Ubuntu... I can only pity the person who uses a Mac to run this app, which undoubtedly is a heavier OS. Could someone help me out with a place I can start to find out what exactly is causing this issue? I have run profiles on this web app in Google Chrome's console and noticed that the heap snapshot value increases steadily while playing the game, specifically the (root) value, which jumps up by 900 counts. Any help would be very appreciated! Thanks. EDIT: I don't know if this helps, but I have noticed that even on refreshing the page after the app becomes unresponsive, the page reloads and I am still not able to interact with the page elements, although the tab scroll bar continues to work and I can see my application window completely. So to summarize: the tab stops accepting any sort of user interaction inside the page. EDIT 2: Nope, it still doesn't work... The app crashes on double-click on the canvas element. The console is not throwing any errors either! =/ I have noticed this problem is isolated to Safari!

    Read the article

  • Strategy for animating a lot of LEDs - threads? UIView animations? NSOperation? (iPhone)

    - by RickiG
    Hi. I have to build some different views containing 72 LED lights. I built an LED class so I can loop through the LEDs and set them to different colors (Green, Red, Orange, Blue, None, etc.). Each LED then loads the appropriate .png. This works fine; I loop over the LEDs and set them:

        for (LED *l in self.ledArray) {
            [l display:Green];
        }

    Now I know that at some point they will need to not just turn on/off or change color, but will have to turn on with a small delay, like an equalizer. I have 5-10 views containing the 72 LEDs, and I would like to achieve the above with the minimum amount of memory/CPU strain. If these were actual LEDs and a microcontroller, I would use sleep(100) or similar in the loop, but I would really like to avoid stuff like that for obvious reasons. I was thinking that doing performSelector:withObject:afterDelay: would really be consuming, UIView animations changing the alpha would be a lot of lifting for a small feature, and so would NSOperation. Is there an efficient and clever way to go about this? Thanks for any inspiration given :)
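
    For what it's worth, a sketch of the run-loop approach (the lightLED: method name is invented; the Green constant is from the LED class above). Deferred messages keep the CPU idle between updates, with no extra threads:

        // Stagger the updates via the run loop instead of sleeping.
        NSTimeInterval delay = 0.0;
        for (LED *l in self.ledArray) {
            [self performSelector:@selector(lightLED:)
                       withObject:l
                       afterDelay:delay];
            delay += 0.1; // 100 ms stagger, like sleep(100) on a microcontroller
        }

        // Elsewhere in the same class:
        - (void)lightLED:(LED *)led {
            [led display:Green];
        }

    performSelector:withObject:afterDelay: schedules one message per LED on the current run loop, so nothing blocks and nothing spins while waiting.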

    Read the article

  • VB.NET Program Locks Up with Internet Explorer Opened

    - by aaronsj
    I'm using Visual Studio 2008 and developing a VB.NET application. I'm having strange lockup problems with my program, but only when Internet Explorer 8 is open. When I cover my form with another window and then uncover it, I find that it has locked up. My program has no references to IE, and the only thing it even has to do with IE is using Process.Start with a web address. My program works fine and exactly as it should, but only when IE is not open. Does anyone know why a program would lock up only while IE is running? Edit: I've done some digging and I've found the offending thread in my program. I don't know what starts this thread or what it does, but when I kill it, my program no longer freezes. The thread is one of the CreateApplicationContext threads. Here are the last few items in its stack trace:

        6  ntkrnlpa.exe+0x897bc
        7  ntdll.dll!KiFastSystemCallRet
        8  mscorwrks.dll!LogHelp_TerminateOnAssert+0x61
        9  mscorwrks.dll!DllUnregisterServerInternal+0x10523
        10 mscorwrks.dll!DllUnregisterServerInternal+0x10542
        11 mscorwrks.dll!StrongNameErrorInfo+0x34387
        12 mscorwrks.dll!StrongNameErrorInfo+0x34815
        13 mscorwrks.dll!CreateApplicationContext+0xbc35
        14 KERNEL32.dll!GetModuleHandleA+0xdf

    Process Explorer says my program is using no CPU and not throwing any exceptions while it is hung.

    Read the article

  • Slowing process creation under Java?

    - by oconnor0
    I have a single large-heap JVM [1] (up to 240 GB, though in the 20-40 GB range for most of this phase of execution) running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, and we then load the data created by those executables back into the JVM. Each executable produces about half a megabyte of data (on disk), which is of course larger once read back in after the process finishes. Our first implementation was to have each executable handle only a single object. This involved spawning twice as many processes as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily at 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually slows: while starting at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds. Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds. Basically: why is it taking so long to create these processes as execution continues?

    [1] Oracle JDK 1.6.0_22
    [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP
    [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.

    Read the article

  • QTime or QTimer wait for timeout

    - by user968243
    I'm writing a Qt application, and I have a four-hour loop (in a separate thread). In that four-hour loop, I have to:

    1. do stuff with the serial port;
    2. wait a fixed period of time;
    3. do some more stuff with the serial port;
    4. wait an arbitrary amount of time; when 500 ms have passed, do more stuff;
    5. go to 1 and repeat for four hours.

    Currently, my method of doing this is really bad and crashes some computers. I have a whole bunch of code, but the following snippet basically shows the problem: the CPU goes to ~100% and eventually can crash the computer.

        void CRelayduinoUserControl::wait(int timeMS)
        {
            int curTime = loopTimer->elapsed();
            while (loopTimer->elapsed() < curTime + timeMS);
        }

    I need to somehow wait for a particular amount of time to pass before continuing on with the program. Is there some function which will just wait for an arbitrary period of time while keeping all the timers going?
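
    A sketch of a spin-free replacement: if the worker thread doesn't need to process events while waiting, QThread::msleep does it (note msleep is a protected static in Qt 4, so it must be called from within a QThread subclass); if timers and signals must keep firing, a local event loop woken by a single-shot timer works:

        #include <QEventLoop>
        #include <QTimer>

        void CRelayduinoUserControl::wait(int timeMS)
        {
            QEventLoop loop;
            QTimer::singleShot(timeMS, &loop, SLOT(quit()));
            loop.exec(); // blocks without spinning; other timers keep running
        }

    The event-loop variant keeps the thread responsive during the wait, which fits the "keeping all the timers going" requirement above.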

    Read the article

  • Huge burst of memory in C# service, what could be the cause?

    - by Daniel
    I'm working on a C# service application, and I have this problem where, out of nowhere and for no obvious reason, the memory for the process will climb from 150 MB to almost 2 GB in about 5 seconds and then drop back to 150 MB. Nothing in our system should be using anywhere near that amount of memory, so it's probably a bug somewhere. It might be a tight while-true loop somewhere, but the CPU usage at the time was very low, so I thought I'd look for other ideas. Now, the weirder thing is that when I compile the service for 64-bit, the same massive burst occurs, except that it exceeded 10 GB of RAM (paging most of it) and caused lots of problems for the computer and everything running on it. After a while it shuts down, but it looks like Windows is still willing to give it more memory. Would you have any ideas, or tools that I can use, to find this? Yes, it has lots of logging, but nothing in the logs stands out as to why this is happening. I can run the service in console-app mode, so my next test was going to be running it in the Visual Studio debugger and seeing if I can find anything. It only happens occasionally, but usually about 10-20 minutes after startup. In 32-bit mode it cleans up and continues on like normal; in 64-bit mode it crashes after a while and uses stupid amounts of memory. But I'm really stumped as to why this is happening!

    Read the article

  • Can a PC with a 1.2 GHz dual core run VS 2013? [on hold]

    - by moo2
    Edit: This question is about a tool used for programming, Microsoft Visual Studio 2013. I already read the minimum requirements for VS 2013 before I posted, or I wouldn't have asked the question, and I searched the web for an answer for a few hours before posting. Why should I go out and spend $400 on a laptop if I can spend less than $100 and use it to learn how to program? In the context of the question, it is clear I was not asking for advice on what laptop to buy, and it was also clear I was not asking for someone to walk me through the system requirements of VS 2013. This is a vast community, and I was wondering if someone in it has experience with my question - in other words, has anyone ever tried anything like it? If I had an old computer sitting around, I would have tested it myself. At the moment all I have is a friend's Chromebook that I'm borrowing. End edit. I know the requirements for Microsoft Visual Studio 2013 state a 1.6 GHz processor. I'm looking at getting a laptop for school, and I found one that is old but cheap and would work for college. The Dell D430 I'm looking at has a 1.2 GHz Core 2 Duo CPU, 80 GB HD, 2 GB RAM, and Windows 7 Home Premium. It's refurbished. I can get it for less than most phones cost, and from a Microsoft Authorized Refurbisher. I know it's not a Skyrim rig, but it's lightweight and can handle being a college laptop and doing word processing. Would Visual Studio 2013 run at all? If it is slower, I'm not concerned. I just want to know if it would work, compile my assignments, and run them. I'd be using this laptop for doing assignments and learning programming languages, not for coding the next social-media sensation.

    Read the article

  • [PHP] Does unsetting array values while iterating save on memory?

    - by saturn_rising
    Hello fellow code warriors, this is a simple programming question, coming from my lack of knowledge of how PHP handles array copying and unsetting during a foreach loop. It's like this: I have an array that comes to me from an outside source, formatted in a way I want to change. A simple example would be:

        $myData = array('Key1' => array('value1', 'value2'));

    But what I want would be something like:

        $myData = array(0 => array('MyKey' => array('Key1' => array('value1', 'value2'))));

    So I take the first $myData and format it like the second $myData. I'm totally fine with my formatting algorithm. My question lies in finding a way to conserve memory, since these arrays might get a little unwieldy. So, during my foreach loop I copy the current array value(s) into the new format, then I unset the value I'm working with from the original array. E.g.:

        $formattedData = array();
        foreach ($myData as $key => $val) {
            // do some formatting here, copy to $reformattedVal
            $formattedData[] = $reformattedVal;
            unset($myData[$key]);
        }

    Is the call to unset() a good idea here? I.e., does it conserve memory, since I have copied the data and no longer need the original value? Or does PHP automatically garbage-collect the data, since I don't reference it in any subsequent code? The code runs fine, and so far my datasets have been too small to test for performance differences. I just don't know if I'm setting myself up for some weird bugs or CPU hits later on. Thanks for any insights. -sR
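
    One way to answer this empirically (a sketch; exact numbers vary by PHP version, and PHP's copy-on-write refcounting means unset() only releases data once nothing else references it):

        <?php
        // Compare peak memory with and without the unset() in the loop.
        $myData = array();
        for ($i = 0; $i < 100000; $i++) {
            $myData[$i] = str_repeat('x', 100);
        }

        $formattedData = array();
        foreach ($myData as $key => $val) {
            $formattedData[] = array('MyKey' => $val); // stand-in for the real formatting
            unset($myData[$key]);                      // comment out to compare
        }

        echo memory_get_peak_usage(true), "\n";

    Since $val still shares its string with $myData[$key] until one of them is written to, the unset() mainly trims the source array's bookkeeping as you go, rather than duplicating and then freeing whole values.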

    Read the article

  • Power cycles on/off 3 times before booting properly from cold start, no other issues (New System)

    - by James
    Relevant specs: Sapphire 5850, Core i7 920, Seasonic X750 power supply, ECS X58B-A2 mobo. From a cold boot, meaning all power totally disconnected at the wall, the system will power on for less than a second and then power off completely. After two seconds of being powered off this repeats, and on the third "attempt" the computer boots. To be very specific, here is what happens:

    1. The power is turned on at the wall and on the PSU; the orange stdby LED on the mobo is illuminated, but the system is 'off'.
    2. I hit the power button on the case or on the mobo itself.
    3. I hear the relay (?) in the PSU closing.
    4. The case light comes on and the mobo power light comes on. The fans start rotating.
    5. Immediately after this I hear some relay click - the power lights extinguish, the fans stop, the stdby light remains on.
    6. Less than 2 seconds pass and the cycle repeats without any intervention from me.
    7. On the third attempt it boots normally and the machine runs perfectly.

    If I do a soft reboot or a full shutdown, the computer starts normally the next time. It's only if I pull the power cord or flick the switch off on the PSU that I get the cycling again - basically any time the stdby light on the mobo goes out. I have removed the graphics card and I get the same problem. I have removed the PSU, hotwired it to the ON position, and verified voltages on all lines; the relay does not cycle when I do this. If I connect only the 24-pin ATX connector to the mobo and not the 8-pin ATX12V / CPU connector, then I will not get the cycling - the fans run, the power light stays on, but obviously the system can't boot. Disconnecting all fans has no effect on the problem. My feeling is that it's something to do with the motherboard, like a capacitor that's taking a long time to charge because it's leaking, or something along those lines. But I can't imagine what could be 'wrong' with it that would only manifest itself under these very specific circumstances. Any ideas? Thanks.

    Read the article
