Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

  • How to know why an animation stutters?

    - by Patrick Klug
    I have a few fairly simple animations (moving text around, moving ellipses, etc.), and when running in full screen (1920x1080 minus the task bar) the WPF Performance Suite reports a good frame rate of around 50 FPS throughout the animation. Dirty Rect Addition is somewhere around 300 rect/s, the SW frames are between 0 and 4, the HW frames are between 3 and 5, and video memory usage is around 80 MB. The problem is that the animation stutters every half second or so.

    My machine is a new Dell XPS 15 laptop with a GeForce GT 435 with 2 GB of memory, and the drivers are up to date. (The same behavior occurs on my netbook in full screen as well, so I don't think it is hardware related.) If I make the window smaller, the stutter goes away. The stutter occurs with the simplest of animations - even with just a couple of elements - but adding more elements certainly makes it more noticeable. How can I find out what causes this stutter? Come to think of it, I have not actually seen any WPF animation that runs smoothly in full screen. Is this even possible?

    Read the article

  • Parsing multiple files at a time in Perl

    - by sfactor
    I have a large data set (around 90 GB) to work with. There are data files (tab delimited) for each hour of each day, and I need to perform operations on the entire data set - for example, getting the share of OSes, which are given in one of the columns. I tried merging all the files into one huge file and performing a simple count operation, but it was simply too huge for the server memory. So I guess I need to perform the operation one file at a time and then add up the results at the end. I am new to Perl and am especially naive about the performance issues. How do I do such operations in a case like this? As an example, two columns of the file are:

        ID  OS
        1   Windows
        2   Linux
        3   Windows
        4   Windows

    Let's do something simple: counting the share of the OSes in the data set. Each .txt file has millions of these lines, and there are many such files. What would be the most efficient way to operate on all of the files?
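
    For illustration only, here is a minimal sketch of the per-file streaming approach described above, written in Python rather than Perl; the directory layout, *.txt glob, header row and column positions are assumptions based on the example rows:

        from collections import Counter
        from pathlib import Path

        def count_os_share(data_dir):
            # Stream each tab-delimited file line by line and aggregate counts,
            # so only one line is ever held in memory at a time.
            totals = Counter()
            for path in sorted(Path(data_dir).glob("*.txt")):
                with open(path) as f:
                    for line in f:
                        fields = line.rstrip("\n").split("\t")
                        if len(fields) >= 2 and fields[0] != "ID":  # skip a header row if present
                            totals[fields[1]] += 1
            grand_total = sum(totals.values())
            if not grand_total:
                return {}
            return {os_name: count / grand_total for os_name, count in totals.items()}

    The same pattern maps directly onto Perl: open each file, tally into a hash keyed by the OS column, and print the hash once all files have been processed.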

    Read the article

  • Why is Enumerable.Range faster than a direct yield loop?

    - by Morgan Cheng
    The code below checks the performance of three different ways of computing the same result.

        public static void Main(string[] args)
        {
            // for loop
            {
                Stopwatch sw = Stopwatch.StartNew();
                int accumulator = 0;
                for (int i = 1; i <= 100000000; ++i)
                {
                    accumulator += i;
                }
                sw.Stop();
                Console.WriteLine("time = {0}; result = {1}", sw.ElapsedMilliseconds, accumulator);
            }

            // Enumerable.Range
            {
                Stopwatch sw = Stopwatch.StartNew();
                var ret = Enumerable.Range(1, 100000000).Aggregate(0, (accumulator, n) => accumulator + n);
                sw.Stop();
                Console.WriteLine("time = {0}; result = {1}", sw.ElapsedMilliseconds, ret);
            }

            // self-made IEnumerable<int>
            {
                Stopwatch sw = Stopwatch.StartNew();
                var ret = GetIntRange(1, 100000000).Aggregate(0, (accumulator, n) => accumulator + n);
                sw.Stop();
                Console.WriteLine("time = {0}; result = {1}", sw.ElapsedMilliseconds, ret);
            }
        }

        private static IEnumerable<int> GetIntRange(int start, int count)
        {
            int end = start + count;
            for (int i = start; i < end; ++i)
            {
                yield return i;
            }
        }

    The result is like this:

        time = 306;  result = 987459712
        time = 1301; result = 987459712
        time = 2860; result = 987459712

    It is not surprising that the for loop is faster than the other two solutions, because Enumerable.Aggregate involves more method invocations. However, it really surprises me that Enumerable.Range is faster than the self-made IEnumerable. I thought Enumerable.Range would have more overhead than the simple GetIntRange method. What is the possible reason for this?

    Read the article

  • Vector [] vs copying

    - by sak
    What is faster and/or generally better?

        vector<myType> myVec;
        int i;
        myType current;

        for( i = 0; i < 1000000; i++ ) {
            current = myVec[ i ];
            doSomethingWith( current );
            doAlotMoreWith( current );
            messAroundWith( current );
            checkSomeValuesOf( current );
        }

    or

        vector<myType> myVec;
        int i;

        for( i = 0; i < 1000000; i++ ) {
            doSomethingWith( myVec[ i ] );
            doAlotMoreWith( myVec[ i ] );
            messAroundWith( myVec[ i ] );
            checkSomeValuesOf( myVec[ i ] );
        }

    I'm currently using the first solution. There are really millions of calls per second, and every unnecessary copy or comparison hurts performance.

    Read the article

  • jQuery callback functions failing to finish execution

    - by calumbrodie
    I'm testing a jQuery app I've written and have come across some unexpected behaviour.

        $('button.clickme').live('click', function(){
            //do x (takes 2 seconds)
            //do y (takes 4 seconds)
            //do z (takes 0.5 seconds)
        });

    The event can be triggered by a number of buttons. What I'm finding is that when I click each button slowly (allowing 10 seconds between clicks), my callback function executes correctly (actions x, y and z complete). However, if I rapidly click buttons on my page, it appears that the function sometimes only completes up to step x or y before terminating.

    My question: is it the case that if this function is fired by clicking a second DOM element while the first callback is still executing, jQuery will terminate the first callback and start over again? Do I have to define my callback function explicitly outside the event handler and then pass it in?

        function doStuff() {
            //do x
            //do y
            //do z
        }

        $('button.clickme').live('click', doStuff);

    If this is the case, can someone explain why this is happening, or point me to some advice on best practice regarding closures etc.? I'd like to know the best way to write jQuery to improve performance. Thanks

    Read the article

  • JavaScript - Efficiently find all elements containing one of a large set of strings

    - by noah
    I have a set of strings and I need to find all of the occurrences in an HTML document. Where the string occurs is important, because I need to handle each case differently:

    1. The string is all or part of an attribute. E.g., for the string foo: <input value="foo"> - add class ATTR to the element.
    2. The string is the full text of an element. E.g., <button>foo</button> - add class TEXT to the element.
    3. The string is inline in the text of an element. E.g., <p>I love foo</p> - wrap the text in a span tag with class TEXT.

    Also, I need to match the longest string first. E.g., if I have foo and foobar, then <p>I love foobar</p> should become <p>I love <span class="TEXT">foobar</span></p>, not <p>I love <span class="TEXT">foo</span>bar</p>.

    The inline text is easy enough: sort the strings descending by length, then find and replace each in document.body.innerHTML with <span class="TEXT">$1</span>, although I'm not sure that is the most efficient way to go. For the attributes, I can do something like this:

        sortedStrings.each(function(it) {
            document.body.innerHTML.replace(
                new RegExp('(\S+?)="[^"]*'+escapeRegExChars(it)+'[^"]*"','g'),
                function(s, attr) {
                    $('['+attr+'*='+it+']').addClass('ATTR');
                });
        });

    Again, that seems inefficient. Lastly, for the full-text elements, a depth-first search of the document that compares the innerHTML to each string would work, but for a large number of strings it seems very inefficient. Any answer that offers performance improvements gets an upvote :)
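
    As a side note, the longest-match-first requirement can be handled by sorting the terms by length before building a single alternation pattern. A minimal, language-neutral sketch in Python (not the asker's JavaScript), using hypothetical search terms:

        import re

        # Longer strings must come first in the alternation so "foobar" wins over "foo".
        terms = ["foo", "foobar"]  # hypothetical search terms
        pattern = re.compile("|".join(re.escape(t) for t in sorted(terms, key=len, reverse=True)))

        print(pattern.sub(lambda m: '<span class="TEXT">' + m.group(0) + '</span>', "I love foobar"))
        # prints: I love <span class="TEXT">foobar</span>

    One combined pattern also means the document text is scanned once rather than once per term.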

    Read the article

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008, and I'm getting the same results both in my dev environment (Win7 x64 + SQL x64 Developer) and the production environment (Server 2008 x64 + SQL Std x64). The symptom is that initial data loading screams along at between 50K and 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.).

    The problem has the following qualities and resiliences:

    - The problem persists no matter what the target table is.
    - RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
    - Perfmon shows buffers are not spooling, disk response times are normal, and disk availability is high.
    - CPU usage is low (hovers around 25%, shared between sqlserver.exe and DtsDebugHost.exe).
    - Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 KB/s).
    - OLE DB destination and SQL Server Destination both exhibit this problem.

    To sum it up, I expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers / suggestions.

    Read the article

  • App is fast on 3GS but slow on 3G

    - by Anthony Chan
    Hi all, I'm new to coding and have just finished writing an app and tested it on both a 3G and a 3GS. On the 3GS it worked as well as on the simulator. However, when I tried to run it on the 3G, the app became extremely slow. I'm not sure what the reason is, and I hope someone could shed some light on this.

    Generally, my app has a couple of view controller classes, with one of them being the title page, one being the main page, one being settings, etc. I use a dissolve transition from the title page to the main page, but even this simple transition is not smooth on a 3G! Other parts of the app involve zooming into some images by scaling them up, switching images by push or dissolve upon receiving touch events, saving photos to the photo library, and storing and retrieving some photos in a folder and some data in a SQLite database - each of these is also not smooth. Compared with graphics-heavy or maths-heavy apps, I think mine is pretty simple. I have no clue why the app behaves so slowly that it is barely usable on a 3G. Any help or direction would be much appreciated. Thanks for helping out.

    Read the article

  • glDrawArrays() slow on iPad?

    - by Nick
    Hey guys, I was wondering how to speed up my iPad application using OpenGL ES 2.0. At the moment we have every drawable object draw itself with a call to glDrawArrays(). Blend mode is on - we really need it. Without disabling blend mode, how would we improve performance for this app? For instance, if we draw just one texture across the whole screen, the app only gets 15 FPS, which is really slow, I think. Are we doing something terribly wrong? Our drawing code (for each drawable) is as follows:

        - (void) draw {
            GLuint textureAvailable = 0;
            if(texture != nil){
                textureAvailable = 1;
            }

            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, texture.name);

            glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices);
            glEnableVertexAttribArray(ATTRIB_VERTEX);
            glVertexAttribPointer(ATTRIB_COLOR, 4, GL_FLOAT, 1, 0, colorsWithMultipliedAlpha);
            glEnableVertexAttribArray(ATTRIB_COLOR);
            glVertexAttribPointer(ATTRIB_TEXTUREMAP, 2, GL_FLOAT, 1, 0, textureMapping);
            glEnableVertexAttribArray(ATTRIB_TEXTUREMAP);

            // Note that we are NOT using position.z here because that is only used to determine drawing order
            int *jnUniforms = JNOpenGLConstants::getInstance().uniforms;
            glUniform4f(jnUniforms[UNIFORM_TRANSLATE], position.x, position.y, 0.0, 0.0);
            glUniform4f(jnUniforms[UNIFORM_SCALE], scale.x, scale.y, 1.0, 1.0);
            glUniform1f(jnUniforms[UNIFORM_ROTATION], rotation);
            glUniform1i(jnUniforms[UNIFORM_TEXTURE_SAMPLE], 0);
            glUniform2f(jnUniforms[UNIFORM_TEXTURE_REPEAT], textureRepeat.x, textureRepeat.y);
            glUniform1i(jnUniforms[UNIFORM_TEXTURE_AVAILABLE], textureAvailable);

            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }

    Read the article

  • I read that for RESTful websites it is not good to use $_SESSION. Why is it not good? How then do I...

    - by keisimone
    I read that it is not good to use $_SESSION: http://www.recessframework.org/page/towards-restful-php-5-basic-tips

    I am creating a WEBSITE, not a web service, in PHP, and I am trying to make it more RESTful - at least in spirit. Right now I am rewriting all the actions to use form POSTs with a hidden field called _method, which is "delete" for delete actions and "put" for update actions. However, I am not sure why it is recommended NOT to use $_SESSION. I would like to know why, and what I can do to improve.

    To allow easy authorization checking, what I did was: after logging in the user, the username is stored in $_SESSION. Every time the user navigates to a page, the page checks whether the username is stored inside $_SESSION and then, based on that, retrieves all the info (including privileges) from the database, and evaluates the authorization to access the page based on the info retrieved.

    Is the way I am implementing this bad? Not RESTful? How do I improve performance and security? Thank you.

    Read the article

  • Optimize a view's drawing code

    - by xon1c
    Hi, in a simple drawing application I have a model which has an NSMutableArray, curvedPaths, holding all the lines the user has drawn. A line itself is also an NSMutableArray, containing the point objects. As I draw curved NSBezierPaths, my point array has the following structure: linePoint, controlPoint, controlPoint, linePoint, controlPoint, controlPoint, etc. I thought having one array holding all the points plus control points would be more efficient than dealing with 2 or 3 different arrays.

    Obviously my view draws the paths it gets from the model, which leads to the actual question: is there a way to optimize the following code (inside the view's drawRect method) in terms of speed?

        int lineCount = [[model curvedPaths] count];

        // Go through paths
        for (int i = 0; i < lineCount; i++) {
            // Get the color
            NSColor *theColor = [model getColorOfPath:[[model curvedPaths] objectAtIndex:i]];

            // Get the points
            NSArray *thePoints = [model getPointsOfPath:[[model curvedPaths] objectAtIndex:i]];

            // Create a new path for performance reasons
            NSBezierPath *path = [[NSBezierPath alloc] init];

            // Set the color
            [theColor set];

            // Move to first point without drawing
            [path moveToPoint:[[thePoints objectAtIndex:0] myNSPoint]];

            int pointCount = [thePoints count] - 3;

            // Go through points
            for (int j = 0; j < pointCount; j += 3) {
                [path curveToPoint:[[thePoints objectAtIndex:j+3] myNSPoint]
                     controlPoint1:[[thePoints objectAtIndex:j+1] myNSPoint]
                     controlPoint2:[[thePoints objectAtIndex:j+2] myNSPoint]];
            }

            // Draw the path
            [path stroke];

            // Bye stuff
            [path release];
            [theColor release];
        }

    Thanks, xonic

    Read the article

  • Strange behavior with large Object Types

    - by Peter Lang
    I noticed that calling a method on an Oracle object type takes longer when the instance gets bigger. The code below just adds rows to a collection stored in the object type and calls the empty dummy procedure in the loop. Calls take longer when more rows are in the collection. When I just remove the call to dummy, performance is much better, even though the collection still contains the same number of records:

        Calling dummy    Not calling dummy
                   11                    0
                   81                    0
                  158                    0

    Code to reproduce:

        Create Type t_tab Is Table Of VARCHAR2(10000);

        Create Type test_type As Object(
           tab t_tab,
           Member Procedure dummy
        );

        Create Type Body test_type As
           Member Procedure dummy As
           Begin
              Null; --# Do nothing
           End dummy;
        End;

        Declare
           v_test_type test_type := New test_type( New t_tab() );

           Procedure run_test As
              start_time NUMBER := dbms_utility.get_time;
           Begin
              For i In 1 .. 200 Loop
                 v_test_Type.tab.Extend;
                 v_test_Type.tab(v_test_Type.tab.Last) := Lpad(' ', 10000);
                 v_test_Type.dummy(); --# Removed this line in second test
              End Loop;
              dbms_output.put_line( dbms_utility.get_time - start_time );
           End run_test;
        Begin
           run_test;
           run_test;
           run_test;
        End;

    I tried with both 10g and 11g. Can anyone explain/reproduce this behavior?

    Read the article

  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data - in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited!

    So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins, broken down by URL; then, for anything more than 1 day old, consolidate into 10-minute bins; then, at 1 week, hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames.

    Rather than coding all this myself and using a relational database, I was wondering if there were tools available which would facilitate both the management of the data and the reporting. I had a look at Mondrian, however I can't see from the documentation I've looked at whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems to be rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?
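
    For what it's worth, the consolidation step itself is straightforward to prototype. A minimal sketch in Python, assuming a hypothetical record shape of (epoch_seconds, url, value); the same aggregation can be re-run over its own output with a larger bin size to produce the 10-minute and hourly rollups:

        from collections import defaultdict

        def consolidate(records, bin_seconds=60):
            # records: iterable of (epoch_seconds, url, value) tuples.
            # Keep count and sum per (bin, url) so coarser bins can be rebuilt later.
            bins = defaultdict(lambda: [0, 0.0])
            for ts, url, value in records:
                bin_start = (ts // bin_seconds) * bin_seconds
                bucket = bins[(bin_start, url)]
                bucket[0] += 1
                bucket[1] += value
            return {key: (count, total / count) for key, (count, total) in bins.items()}

        sample = [(960, "/home", 120.0), (1000, "/home", 80.0), (1100, "/about", 200.0)]
        print(consolidate(sample))  # {(960, '/home'): (2, 100.0), (1080, '/about'): (1, 200.0)}

    Storing count and sum (rather than only the average) is what makes the later, coarser consolidations exact.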

    Read the article

  • List of Big-O for PHP functions?

    - by Kendall Hopkins
    After using PHP for a while now, I've noticed that not all PHP built-in functions are as fast as expected. Consider the two possible implementations below of a function that finds whether a number is prime using a cached array of primes.

        // very slow for large $prime_array
        $prime_array = array( 2, 3, 5, 7, 11, 13, .... 104729, ... );
        $result_array = array();
        foreach( $array_of_number as $number ) {
            $result_array[$number] = in_array( $number, $large_prime_array );
        }

        // still decent performance for large $prime_array
        $prime_array = array( 2 => NULL, 3 => NULL, 5 => NULL, 7 => NULL,
                              11 => NULL, 13 => NULL, .... 104729 => NULL, ... );
        foreach( $array_of_number as $number ) {
            $result_array[$number] = array_key_exists( $number, $large_prime_array );
        }

    This is because in_array is implemented with a linear search, O(n), which slows down linearly as $prime_array grows, whereas array_key_exists is implemented with a hash lookup, O(1), which does not slow down unless the hash table gets extremely populated (in which case it's only O(log n)).

    So far I've had to discover the big-Os via trial and error, and occasionally by looking at the source code. Now for the question... Is there a list of the theoretical (or practical) big-O times for all* the PHP built-in functions? (*or at least the interesting ones)

    For example, I find it very hard to predict the big O of the functions listed below, because the possible implementations depend on unknown core data structures of PHP: array_merge, array_merge_recursive, array_reverse, array_intersect, array_combine, str_replace (with array inputs), etc.
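
    The same linear-scan versus hash-lookup gap is easy to measure in any language. A quick illustrative sketch in Python (standing in for the PHP functions above; the list of odd numbers is just a stand-in for the asker's prime array):

        import timeit

        primes = list(range(3, 2_000_000, 2))  # stand-in for a large cached array
        prime_set = set(primes)
        needle = primes[-1]                    # worst case for the linear scan

        print(timeit.timeit(lambda: needle in primes, number=100))     # list: O(n), like in_array()
        print(timeit.timeit(lambda: needle in prime_set, number=100))  # set: O(1), like array_key_exists()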

    Read the article

  • Fast (de)serialization on iPhone

    - by Jacob Kuypers
    I'm developing a game/engine for iPhone OS. It's the first time I'm using Objective-C. I made my own binary format for geometry data, and for textures I'm focusing on PVRTC. That should be the optimal approach as far as speed and space are concerned. I really want to keep loading time to a minimum and - if possible - be able to save very fast as well. So now I'm trying to make my "Entity" stuff persistent without sacrificing performance.

    First I wanted to use NSKeyedArchiver. From what I've heard, it's not very fast. Also, what I want to serialize is mostly structs made of floats, with some ints and strings, so there isn't really a need for all that "object graph" overhead. NSArchiver would have been more appropriate, but they removed it from the iPhone for some reason. So now I'm thinking about making my own serialization scheme again.

    Am I wrong in thinking that NSKeyedArchiver is slow (I've only read that, I haven't tested it myself)? If so, what's the best way to encode/decode structs (with no pointers, mostly floats) without sacrificing speed?
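
    To make the "no object graph, just a fixed layout" idea concrete, here is a minimal sketch using Python's struct module rather than Objective-C; the 12-floats-plus-2-ints layout is a made-up example, not the asker's actual entity format:

        import struct

        ENTITY_FORMAT = "<12f2i"  # 12 little-endian floats followed by 2 ints (hypothetical layout)

        def pack_entity(floats, ints):
            # Fixed, known layout: no keys, no type tags, just raw values.
            return struct.pack(ENTITY_FORMAT, *floats, *ints)

        def unpack_entity(buf):
            values = struct.unpack(ENTITY_FORMAT, buf)
            return list(values[:12]), list(values[12:])

        blob = pack_entity([0.0] * 12, [1, 2])
        print(len(blob))  # 12*4 + 2*4 = 56 bytes

    In C or Objective-C the rough equivalent is fwrite/fread of a packed struct (or a memcpy into an NSData buffer), which avoids the per-key encoding work of a keyed archive.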

    Read the article

  • Simple question about the lunarlander example.

    - by Smills
    I am basing my game on the lunarlander example. This is the run loop I am using (very similar to what is used in lunarlander). I am seeing considerable performance issues associated with my drawing, even if I draw almost nothing. I noticed the method below. Why is the canvas being created and set to null each cycle?

        @Override
        public void run() {
            while (mRun) {
                Canvas c = null;
                try {
                    c = mSurfaceHolder.lockCanvas(); // null
                    synchronized (mSurfaceHolder) {
                        updatePhysics();
                        doDraw(c);
                    }
                } finally {
                    // do this in a finally so that if an exception is thrown
                    // during the above, we don't leave the Surface in an
                    // inconsistent state
                    if (c != null) {
                        mSurfaceHolder.unlockCanvasAndPost(c);
                    }
                }
            }
        }

    Most of the time when I have read about canvases, it is more along the lines of:

        mField = new Bitmap(...dimensions...);
        Canvas c = new Canvas(mField);

    My question is: why is Google's example done that way (null canvas), what are the benefits of this, and is there a faster way to do it?

    Read the article

  • Cost of logic in a query

    - by FrustratedWithFormsDesigner
    I have a query that looks something like this:

        select xmlelement("rootNode",
            (case when XH.ID is not null
                then xmlelement("xhID", XH.ID)
                else xmlelement("xhID", xmlattributes('true' AS "xsi:nil"), XH.ID)
            end),
            (case when XH.SER_NUM is not null
                then xmlelement("serialNumber", XH.SER_NUM)
                else xmlelement("serialNumber", xmlattributes('true' AS "xsi:nil"), XH.SER_NUM)
            end),
            /* repeat this pattern for many more columns from the same table... */
        FROM XH
        WHERE XH.ID = 'SOMETHINGOROTHER'

    It's ugly and I don't like it, and it is also the slowest-executing query (there are others of similar form, but much smaller, and they aren't causing any major problems - yet). Maintenance is relatively easy, as this is mostly a generated query, but my concern now is performance. I am wondering how much of an overhead there is for all of these case expressions.

    To see if there was any difference, I wrote another version of this query as:

        select xmlelement("rootNode",
            xmlforest(XH.ID, XH.SER_NUM, ...

    (I know that this query does not produce exactly the same thing; my plan was to move the logic to PL/SQL or XSL.) I tried to get execution plans for both versions, but they are the same. I'm guessing that the logic does not get factored into the execution plan. My gut tells me the second version should execute faster, but I'd like some way to prove that (other than writing a PL/SQL test function with timing statements before and after the query and running that code over and over again to get a test sample). Is it possible to get a good idea of how much the case-when will cost? Also, I could write the case-when using the decode function instead. Would that perform better than case statements?

    Read the article

  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros vs. inline functions. I found the following discussion: Pros and Cons of Different macro function / inline methods in C ...but it didn't answer my primary burning question. Namely, what is the overhead in C of using a macro function (with variables, possibly other function calls) vs. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. The code snippet I'm modularizing is:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm = AttractiveTerm * AttractiveTerm;
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    My reason for turning it into an inline function/macro is so I can drop it into a C file and then conditionally compile other similar, but slightly different, functions/macros, e.g.:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm = pow(SigmaSquared/RadialDistanceSquared,9);
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    (note the difference in the second line...)

    This function is a central one in my code and gets called thousands of times per step in my program, and my program performs millions of steps. Thus I want to have the LEAST overhead possible, hence why I'm spending time worrying about the overhead of inlining vs. transforming the code into a macro. Based on the prior discussion I already realize the other pros/cons of macros (type independence and the resulting errors)... but what I want to know most, and don't currently know, is the PERFORMANCE. I know some of you C veterans will have some great insight for me!!

    Read the article

  • Is it possible to return a list of numbers from a Sybase function?

    - by ps_rs4
    I'm trying to overcome a very serious performance issue in which Sybase refuses to use the primary key index on a large table because one of the required fields is specified indirectly through another table - or, in other words; SELECT ... FROM BIGTABLE WHERE KFIELD = 123 runs in ms but SELECT ... FROM BIGTABLE, LTLTBL WHERE KFIELD = LTLTBL.LOOKUP AND LTLTBL.UNIQUEID = 'STRINGREPOF123' takes 30 - 40 seconds. I've managed to work around this first problem by using a function that basically lets me do this; SELECT ... FROM BIGTABLE WHERE KFIELD = MYFUNC('STRINGREPOF123') which also runs in ms. The problem, however, is that this approach only works when there is a single value returned by MYFUNCT but I have some cases where it may return 2 or 3 values. I know that the SQL SELECT ... FROM BIGTABLE WHERE KFIELD IN (123,456,789) also returns in millis so I'd like to have a function that returns a list of possible values rather than just a single one - is this possible? Sadly the application is running on Sybase ASA 9. Yes I know it is old and is scheduled to be refreshed but there's nothing I can do about it now so I need logic that will work with this version of the DB. Thanks in advance for any assistance on this matter.

    Read the article

  • Implementing a Tagging System with PHP and MySQL - Caching Help

    - by Hamid Sarfraz
    With reference to this post: http://stackoverflow.com/questions/2122546/how-to-implement-tag-counting

    I have implemented the suggested 3-table tagging system completely. To count the number of articles per tag, I am using another column named tagArticleCount in the tag definition table (the other columns are tagId, tagText, tagUrl, tagArticleCount).

    If I implement realtime editing of this table - so that whenever a user adds another tag to an article or deletes an existing tag, the tag definition table is updated to adjust the counter of the added/removed tag - this will cost an extra query each time any modification is made (at the same time, the related link entry for the tag and article is deleted from the tag link table). An alternative is not to allow any realtime editing of the counter and instead use cron jobs to update the counter of each tag after a specified time period.

    Here comes the problem that I want to discuss. This can be seen as caching the article count in the database. Can you please help me find a way to present the articles in a list when a tag is explored and the article counter for that tag is not up to date? For example:

    1. The counter shows 50 articles, but there are in fact 55 entries in the tag link table (which links tags and articles).
    2. The counter shows 50 articles, but there are in fact 45 entries in the tag link table.

    How should these 2 scenarios be handled? I am going to use APC to keep a cache of these counters - please consider that too in your solution. Also, please discuss the performance of realtime versus cron-driven counter updates.
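
    As a point of reference, the cron-style refresh boils down to one aggregate update over the link table. A minimal sketch, using SQLite via Python (rather than MySQL/PHP) and hypothetical table names tag and tag_article_link modelled on the question's 3-table setup:

        import sqlite3

        def refresh_tag_counters(db_path):
            # Recompute every tag's cached article count from the link table in one statement.
            conn = sqlite3.connect(db_path)
            with conn:
                conn.execute("""
                    UPDATE tag
                    SET tagArticleCount = (
                        SELECT COUNT(*)
                        FROM tag_article_link
                        WHERE tag_article_link.tagId = tag.tagId
                    )
                """)
            conn.close()

    The realtime alternative is simply an UPDATE that increments or decrements tagArticleCount by one, fired alongside the insert or delete on the link table.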

    Read the article

  • std::ifstream buffer caching

    - by ledokol
    Hello everybody, in my application I'm trying to merge sorted files (keeping them sorted, of course), so I have to iterate through each element in both files to write the minimal one to the third file. This works pretty slowly on big files, and as far as I can see there is no other choice (the iteration has to be done), so I'm trying to optimize the file loading. I can use some amount of RAM for buffering. I mean, instead of reading 4 bytes from both files every time, I could read something like 100 MB once and work with that buffer after that, until there is no element left in the buffer, then refill it. But I guess ifstream is already doing that - will it give me more performance, and is there any reason to do it myself? If fstream does buffer, maybe I can change the size of that buffer?

    Added: my current code looks like this (pseudocode):

        // this is done in a loop
        int i1 = input1.read_integer();
        int i2 = input2.read_integer();
        if (!input1.eof() && !input2.eof()) {
            if (i1 < i2) {
                output.write(i1);
                input2.seek_back(sizeof(int));
            } else {
                input1.seek_back(sizeof(int));
                output.write(i2);
            }
        } else {
            if (input1.eof())
                output.write(i2);
            else if (input2.eof())
                output.write(i1);
        }

    What I don't like here:

    - seek_back - I have to seek back to the previous position because there is no way to peek 4 bytes
    - too much reading from the file
    - if one of the streams is at EOF, it still continues to check that stream instead of putting the contents of the other stream directly into the output (this is not a big issue, though, because the chunk sizes are almost always equal)

    Can you suggest an improvement for that? Thanks.
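
    For comparison, the buffered, peek-based merge the question is reaching for can be expressed very compactly; here is a minimal sketch in Python (not C++), assuming line-oriented text files of integers rather than the 4-byte binary records in the question, and hypothetical file paths:

        import heapq

        def merge_sorted_files(path_a, path_b, path_out):
            # heapq.merge keeps one pending value per input stream, so nothing
            # ever has to be "seeked back"; the file objects do their own buffering.
            with open(path_a) as a, open(path_b) as b, open(path_out, "w") as out:
                ints_a = (int(line) for line in a)
                ints_b = (int(line) for line in b)
                for value in heapq.merge(ints_a, ints_b):
                    out.write("%d\n" % value)

    The same shape works in C++: hold one pending value per stream, write the smaller, and refill only the stream that was consumed; std::ifstream's streambuf already does block-sized reads, and its buffer can be enlarged with rdbuf()->pubsetbuf() before the file is opened.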

    Read the article

  • iPhone: Which are the most useful techniques for faster Bluetooth?

    - by Mike Howard
    Hi. I'm adding peer-to-peer Bluetooth using GameKit to an iPhone shoot-em-up, so speed is vital. I'm sending about 40 messages a second each way, most of them with the faster GKSendDataUnreliable, all serialized with NSCoding. In testing between a 3G and a 3GS, this is slowing the 3G down a lot more than I'd like. I'm wondering where I should concentrate my efforts to speed it up.

    - How much slower is GKSendDataReliable? For the few packets that have to get there, would it be faster to send with GKSendDataUnreliable and have the peer send an acknowledgement, so I can resend if I don't get the ack within, say, 100 ms?
    - How much faster would it be to create the NSData instance using a regular C array rather than archiving with the NSCoding protocol? Is this serialization process (for about a dozen floats) just as slow as you'd expect from object creation/deallocation overhead, or is something particularly slow happening?
    - I heard that (for example) sending four separate sets of data is much, much slower than sending one piece of data four times the size. Would I make a significant saving by sending, in a single packet, separate pieces of data that wouldn't always go together, whenever they happen to occur at the same time?
    - Are there any other Bluetooth performance secrets I've missed?

    Thanks for your help.

    Read the article

  • Should try...catch go inside or outside a loop?

    - by mmyers
    I have a loop that looks something like this:

        for (int i = 0; i < max; i++) {
            String myString = ...;
            float myNum = Float.parseFloat(myString);
            myFloats[i] = myNum;
        }

    This is the main content of a method whose sole purpose is to return the array of floats. I want this method to return null if there is an error, so I put the loop inside a try...catch block, like this:

        try {
            for (int i = 0; i < max; i++) {
                String myString = ...;
                float myNum = Float.parseFloat(myString);
                myFloats[i] = myNum;
            }
        } catch (NumberFormatException ex) {
            return null;
        }

    But then I also thought of putting the try...catch block inside the loop, like this:

        for (int i = 0; i < max; i++) {
            String myString = ...;
            try {
                float myNum = Float.parseFloat(myString);
            } catch (NumberFormatException ex) {
                return null;
            }
            myFloats[i] = myNum;
        }

    So my question is: is there any reason, performance or otherwise, to prefer one over the other?

    EDIT: The consensus seems to be that it is cleaner to put the loop inside the try/catch, possibly inside its own method. However, there is still debate on which is faster. Can someone test this and come back with a unified answer? (EDIT: did it myself, but voted up Jeffrey's and Ray's answers)

    Read the article

  • Is it possible to load an entire SQL Server CE database into RAM?

    - by DanM
    I'm using LinqToSql to query a small SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table via a foreign key, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete.

    I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of the appeal of LinqToSql.

    So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance?

    Note: my database in its initial state is about 250 KB and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Read the article
