Search Results

Search found 20904 results on 837 pages for 'disk performance'.


  • Periodic GPU performance problem

    - by Peter Lillevold
    Hi folks! I have a WinForms application that uses XNA to animate 3D models in a control. The app has been doing just fine for months, but recently I've started to experience periodic pauses in the animation. Setting out to investigate what is going on, I have established these facts: (1) it currently happens on my machine only; (2) removing everything from my render loop does not improve the problem. In (2) I didn't actually remove everything: I limited my loop to setting the viewport on my GraphicsDevice and then calling GraphicsDevice.Present. Trying to dig further, I fired up PIX to capture some statistics. Screenshots of two PIX runs can be viewed here (Run6) and here (Run14). Run6 uses my original render loop and Run14 uses the bare-bones Present loop. PIX tells me that the GPU is periodically doing something, and I assume this is what causes the pauses. What could be the cause? Or how do I go about finding out what the GPU is actually doing? Note: I'm using XNA 3.1 on a Windows 7 x64 dual-core machine with 8GB RAM. Note 2: I also posted this question on the XNA Creators forums here.

    Read the article

  • Performance: float to int cast and clipping result to range

    - by durandai
    I'm doing some audio processing with floats. The result needs to be converted back to PCM samples, and I noticed that the cast from float to int is surprisingly expensive. What's even more frustrating is that I need to clip the result to the range of a short (-32768 to 32767). While I would instinctively assume that this could be ensured by simply casting float to short, this fails miserably in Java, since at the bytecode level it results in F2I followed by I2S. So instead of a simple:

        int sample = (short) floatVal;

    I needed to resort to this ugly sequence:

        int sample = (int) floatVal;
        if (sample > 32767) {
            sample = 32767;
        } else if (sample < -32768) {
            sample = -32768;
        }

    Is there a faster way to do this? (About 6% of the total runtime seems to be spent on casting. While 6% may not seem like much at first glance, it is astounding when I consider that the processing part involves a good chunk of matrix multiplications and an IDCT.)
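
    A commonly suggested variant (a minimal sketch, not from the original post; whether it actually beats the if/else chain depends on the JIT and CPU, so benchmark before adopting it) is to clamp in float space first and cast once, since Math.min/Math.max often compile down to branch-free min/max instructions:

        public class ClampSketch {
            // Clamp to the short range before the single float-to-int cast.
            static int clampToShortRange(float floatVal) {
                return (int) Math.max(-32768f, Math.min(32767f, floatVal));
            }

            public static void main(String[] args) {
                System.out.println(clampToShortRange(40000.5f)); // 32767
                System.out.println(clampToShortRange(-50000f));  // -32768
                System.out.println(clampToShortRange(123.9f));   // 123
            }
        }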

    Read the article

  • JDBC batch insert performance

    - by wo_shi_ni_ba_ba
    I need to insert a couple hundred million records into a MySQL DB. I'm batch inserting them 1 million at a time. Please see my code below. It seems to be slow. Is there any way to optimize it?

        try {
            // Disable auto-commit
            connection.setAutoCommit(false);
            // Create a prepared statement
            String sql = "INSERT INTO mytable (xxx) VALUES (?)";
            PreparedStatement pstmt = connection.prepareStatement(sql);
            Object[] vals = set.toArray();
            for (int i = 0; i < vals.length; i++) {
                pstmt.setObject(1, vals[i]);
                pstmt.addBatch();
            }
            pstmt.executeBatch();
            connection.commit();
        } catch (SQLException e) {
            e.printStackTrace();
        }
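
    One optimization worth trying (a sketch under assumptions, not from the original post: it assumes MySQL Connector/J, and the URL, credentials, table, and values are placeholders) is to enable rewriteBatchedStatements, which lets the driver collapse a batch into multi-row INSERT statements, and to flush the batch periodically so it doesn't grow unbounded:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class BulkInsertSketch {
            public static void main(String[] args) throws SQLException {
                // rewriteBatchedStatements=true lets Connector/J collapse the
                // batch into multi-row INSERTs; URL/credentials are placeholders.
                String url = "jdbc:mysql://localhost/mydb?rewriteBatchedStatements=true";
                try (Connection conn = DriverManager.getConnection(url, "user", "pass")) {
                    conn.setAutoCommit(false);
                    // "mytable (xxx)" is the placeholder table/column from the question.
                    String sql = "INSERT INTO mytable (xxx) VALUES (?)";
                    try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
                        for (int i = 0; i < 1_000_000; i++) {
                            pstmt.setInt(1, i); // stand-in for the real value
                            pstmt.addBatch();
                            if ((i + 1) % 10_000 == 0) {
                                pstmt.executeBatch(); // flush periodically to cap memory
                            }
                        }
                        pstmt.executeBatch(); // flush the remainder
                    }
                    conn.commit();
                }
            }
        }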

    Read the article

  • Indexing/Performance strategies for vast amounts of the same value

    - by DrColossos
    Base information: this is in the context of the indexing process of OpenStreetMap data. To simplify the question: the core information is divided into 3 main types with the values "W", "R", and "N" (VARCHAR(1)). The table has somewhere around ~75M rows, and the rows with "W" make up ~42M of them. Existing indexes are not relevant to this question. Now the question itself: the indexing of the data is done via a procedure. Inside this procedure there are some loops that do the following:

        [...] SELECT * FROM table WHERE the_key = "W"; [...]

    The results get looped over again, and the above query is itself inside a loop. This takes a lot of time and slows down the process massively. An index on the_key is obviously useless, since all the values that the index might use are the same ("W"). The script itself runs at a speed that is OK; only the SELECTing takes very long. Do I:

    - need to create a "special" kind of index that takes this into account and makes the SELECT quicker? If so, which one?
    - need to tune some of the server parameters (they are already tuned, and the results they deliver seem good; if needed, I can post them)?
    - have to live with the speed and simply get more hardware to gain more power (Tim Taylor grunt grunt)?

    Any alternatives to the above points (except rewriting it or not using it)?

    Read the article

  • MySQL Inconsistent Performance

    - by Jon Hatfield
    Hi, I'm running a MySQL query that joins various tables of 500,000+ rows. Sometimes it takes a second, other times around 15 seconds! This is on my local machine. I have experienced similarly varied times before on other intensive queries. Does anyone know why this is? Thanks

    Read the article

  • C99 variable length automatic array performance

    - by aaa
    Is there significant CPU/memory overhead associated with using automatic variable-length arrays with g++/Intel on a 64-bit x86 Linux platform?

        int function(int N) {
            double array[N];
            ...
        }

    - overhead compared to allocating the array beforehand (assuming the function is called multiple times)?
    - overhead compared to using new?
    - overhead compared to using malloc?

    N may range from roughly 1 KB to 16 KB; stack overrun is not a problem.

    Read the article

  • PropertyUtils performance

    - by mR_fr0g
    I have a problem where I need to walk through an object graph and pick out a particular property value. My original solution caches a linked list of the property names that need to be applied in order to get from point A to point B in the object graph. I then use Apache Commons PropertyUtils to iterate through the linked list, calling getProperty(Object bean, String name) until I have reached point B. My question is how this will perform compared to, say, caching the Method objects for each step. What is PropertyUtils doing under the bonnet? Is it doing a lot of reflection / heavy lifting?
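
    For comparison, a minimal sketch of the two approaches (an illustration, not from the original post; it assumes commons-beanutils is on the classpath, and Bean is a made-up class). Broadly, PropertyUtils resolves the getter by name via bean introspection and then invokes it reflectively, so caching the resolved Method yourself skips the per-call name lookup:

        import java.lang.reflect.Method;
        import org.apache.commons.beanutils.PropertyUtils;

        public class PropertyWalkSketch {
            public static class Bean {
                public String getName() { return "hello"; }
            }

            public static void main(String[] args) throws Exception {
                Bean bean = new Bean();

                // Approach 1: name-based lookup (and reflective call) on every step.
                Object v1 = PropertyUtils.getProperty(bean, "name");

                // Approach 2: resolve once, cache the Method, invoke directly.
                Method getter = Bean.class.getMethod("getName");
                Object v2 = getter.invoke(bean);

                System.out.println(v1 + " / " + v2);
            }
        }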

    Read the article

  • Objective-C iPhone performance issue

    - by Asad Khan
    OK guys, I am developing an iPhone app. I have a Model class which follows the singleton design pattern. It has an NSArray which is initialized with around 1000 NSStrings in the init method. Now I need to use this data in some view controller, so I import Model.h, create an array of NSString objects in the view controller, and set the data to it. But now the problem is that I have 2000 NSStrings allocated, which I believe is not a good thing on the iPhone due to memory considerations. Releasing the model object won't help, because I've overridden release to do nothing, as the pattern dictates, and I cannot change the design now because a lot of code works on the assumption of the model being a singleton. And in the future the initial NSStrings may grow to 2000 or even more, and then I'll have 4000 NSStrings allocated at one time. I am a little confused about how to go about this. Any suggestions?

    Read the article

  • Strange performance behaviour

    - by plastilino
    I'm puzzled by this. On my machine:

        Direct calculation: 375 ms
        Method calculation: 3594 ms, about TEN times SLOWER

    If I place the method calculation BEFORE the direct calculation, both times are SIMILAR. Would you check it on your machine?

        class Test {
            static long COUNT = 50000 * 10000;
            private static long BEFORE;

            /*-------- METHOD ---------*/
            public static final double hypotenuse(double a, double b) {
                return Math.sqrt(a * a + b * b);
            }

            /*-------- TIMER ---------*/
            public static void getTime(String text) {
                if (BEFORE == 0) {
                    BEFORE = System.currentTimeMillis();
                    return;
                }
                long now = System.currentTimeMillis();
                long elapsed = (now - BEFORE);
                BEFORE = System.currentTimeMillis();
                if (text.equals("")) {
                    return;
                }
                String message = "\r\n" + text + "\r\n" + "Elapsed time: " + elapsed + " ms";
                System.out.println(message);
            }

            public static void main(String[] args) {
                double a = 0.2223221101;
                double b = 122333.167;
                getTime("");

                /*-------- DIRECT CALCULATION ---------*/
                for (int i = 1; i < COUNT; i++) {
                    Math.sqrt(a * a + b * b);
                }
                getTime("Direct: ");

                /*-------- METHOD ---------*/
                for (int k = 1; k < COUNT; k++) {
                    hypotenuse(a, b);
                }
                getTime("Method: ");
            }
        }
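
    Order sensitivity like this usually points at the benchmark harness rather than the method: the JIT compiles code as it warms up, and loops whose results are discarded can be partly optimized away. A minimal sketch of a fairer comparison (an illustration only; a harness such as JMH is the robust way to do this) warms the code path before timing and consumes the results:

        class WarmedTest {
            static final long COUNT = 500_000_000L;

            static double hypotenuse(double a, double b) {
                return Math.sqrt(a * a + b * b);
            }

            // Accumulate into a sink so the JIT cannot discard the loop body.
            static double run(long n, double a, double b) {
                double sink = 0;
                for (long i = 0; i < n; i++) {
                    sink += hypotenuse(a, b);
                }
                return sink;
            }

            public static void main(String[] args) {
                double a = 0.2223221101, b = 122333.167;
                run(10_000_000L, a, b); // warm-up pass; timing discarded
                long t0 = System.nanoTime();
                double sink = run(COUNT, a, b);
                long ms = (System.nanoTime() - t0) / 1_000_000;
                System.out.println("Elapsed: " + ms + " ms (sink=" + sink + ")");
            }
        }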

    Read the article

  • C++ Performance/memory optimization guidelines

    - by ML
    Hi all, does anyone have a resource for C++ memory optimization guidelines? Best practices, tuning, etc.? As an example:

        class xxx {
        public:
            xxx();
            virtual ~xxx();
        protected:
        private:
        };

    Would there be ANY benefit, for the compiler or for memory allocation, in getting rid of protected and private, since there are no members in those sections of this class?

    Read the article

  • HTML 5 performance on Firefox?

    - by asksuperuser
    I tried this sample here: http://9elements.com/io/projects/html5/canvas/ After a few minutes, Firefox slows down so much that I can't even open a popup menu. When I close the tab, Firefox goes back to normal again. So is HTML 5 really a good choice right now?

    Read the article

  • C++ application as a service with high performance

    - by sand
    I need to provide a C++ application as a service. The client of the service and the service itself can be on the same machine or distributed across different machines, depending on the load. The application takes a ~2KB string as input and returns a string of almost the same size after some processing. Turnaround time for the client should be really quick. What is the best mechanism to implement this?

    Read the article

  • SharePoint web performance optimization

    - by hertzel
    We are running over SSL on the following server topology:

    - 1 ISA (SSL termination/cache/proxy + AD authentication)
    - 1 SharePoint
    - 1 IBM DB2 database as the enterprise/corporate DB
    - 1 MS SQL Server as the local DB

    We have recently optimized caching, compression, minification, and other ASP.NET best practices such as viewstate and cookie sizes, minimizing round trips, parallel connections/domain sharding, and a lot more. Still, we are not convinced that we are in an optimized position, as the network resources, i.e. bandwidth and especially latency, are out of our control! The client/browser-to-SharePoint traffic is trans-Atlantic (Asia, USA, Europe). As I understand it, the only ways to improve the network (latency) are:

    - TCP/SSL optimization (hardware/software?)
    - CDNs (cloud or our own?)

    Your opinions and insights would be much appreciated. Best regards, Hertzel

    Read the article

  • C# Performance on Errors

    - by pm_2
    It would appear that catching an error is slower than performing a check prior to the error (for example, a TryParse). The related questions that prompted this observation are here and here. Can anyone tell me why this is so: why is it more costly to catch an error than to perform one or many checks of the data to prevent the error?
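
    The question is about C#, but the trade-off is easy to see in a Java analog (a sketch, not from the original post): a thrown exception has to capture a stack trace and unwind the stack, which dwarfs the cost of a cheap up-front test.

        public class CheckVsCatch {
            // Catch-based: pays the full exception cost on every bad input.
            static int parseOrZeroCatch(String s) {
                try {
                    return Integer.parseInt(s);
                } catch (NumberFormatException e) {
                    return 0;
                }
            }

            // Check-based: a cheap scan rejects bad input without ever throwing.
            // (Kept simple: non-negative decimal input, at most 9 digits.)
            static int parseOrZeroCheck(String s) {
                if (s == null || s.isEmpty() || s.length() > 9) return 0;
                for (int i = 0; i < s.length(); i++) {
                    if (!Character.isDigit(s.charAt(i))) return 0;
                }
                return Integer.parseInt(s);
            }

            public static void main(String[] args) {
                System.out.println(parseOrZeroCatch("oops"));  // 0, via exception
                System.out.println(parseOrZeroCheck("12345")); // 12345, no exception
            }
        }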

    Read the article

  • Postgres min function performance

    - by wutzebaer
    Hi, I need the lowest value of runnerId. This query:

        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794';

    takes 80 ms (1968 result rows). This one:

        SELECT min("runnerId") FROM betlog WHERE "marketId" = '107416794';

    takes 1600 ms. Is there a faster way to find the minimum, or should I calculate the min in my Java program?

        Result  (cost=100.88..100.89 rows=1 width=0)
          InitPlan 1 (returns $0)
            ->  Limit  (cost=0.00..100.88 rows=1 width=9)
                  ->  Index Scan using runneridindex on betlog  (cost=0.00..410066.33 rows=4065 width=9)
                        Index Cond: ("runnerId" IS NOT NULL)
                        Filter: ("marketId" = 107416794::bigint)

        CREATE INDEX marketidindex
          ON betlog
          USING btree ("marketId" COLLATE pg_catalog."default");

    Another idea:

        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId" LIMIT 1;

    takes >1600 ms, while

        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId";

    takes ~100 ms. How can a LIMIT slow the query down?

    Read the article

  • Objective-C vs JavaScript loop performance

    - by micadelli
    I have a PhoneGap mobile application in which I need to generate an array of match combinations. On the JavaScript side, the code hung pretty soon once the array the combinations are generated from got a bit bigger. So I thought I'd make a plugin to generate the combinations, passing the array of JavaScript objects to the native side and looping over it there. To my surprise, the following code executes in 150 ms in JavaScript, whereas on the native side (Objective-C) it takes ~1000 ms. Does anyone know any tips for speeding up those execution times? When the number of players exceeds 10, i.e. the length of the array of teams equals 252, it really gets slow. The execution times mentioned above are for 10 players / 252 teams. Here's the JavaScript code:

        for (i = 0; i < GAME.teams.length; i += 1) {
            for (j = i + 1; j < GAME.teams.length; j += 1) {
                t1 = GAME.teams[i];
                t2 = GAME.teams[j];
                if ((t1.mask & t2.mask) === 0) {
                    GAME.matches.push({ Team1: t1, Team2: t2 });
                }
            }
        }

    ... and here's the native code:

        NSArray *teams = [[NSArray alloc] initWithArray:[options objectForKey:@"teams"]];
        NSMutableArray *t = [[NSMutableArray alloc] init];
        int mask_t1;
        int mask_t2;
        for (NSInteger i = 0; i < [teams count]; i++) {
            for (NSInteger j = i + 1; j < [teams count]; j++) {
                mask_t1 = [[[teams objectAtIndex:i] objectForKey:@"mask"] intValue];
                mask_t2 = [[[teams objectAtIndex:j] objectForKey:@"mask"] intValue];
                if ((mask_t1 & mask_t2) == 0) {
                    [t insertObject:[teams objectAtIndex:i] atIndex:0];
                    [t insertObject:[teams objectAtIndex:j] atIndex:1];
                    /*
                    NSArray *newCombination = [[NSArray alloc] initWithObjects:
                        [teams objectAtIndex:i], [teams objectAtIndex:j], nil];
                    */
                    [combinations addObject:t];
                }
            }
        }

    ... the array in question (GAME.teams) looks like this:

        {
            count = 2;
            full = 1;
            list = (
                { index = 0; mask = 1; name = A; score = 0; },
                { index = 1; mask = 2; name = B; score = 0; }
            );
            mask = 3;
            name = A;
        },
        {
            count = 2;
            full = 1;
            list = (
                { index = 0; mask = 1; name = A; score = 0; },
                { index = 2; mask = 4; name = C; score = 0; }
            );
            mask = 5;
            name = A;
        },

    Read the article

  • Python performance profiling (file close)

    - by user1853986
    First of all, thanks for your attention. My question is how to reduce the execution time of my code. Here is the relevant code; the function below is called in a loop from main:

        def call_prism(prism_input_file, random_length):
            prism_output_file = "path.txt"
            cmd = "prism %s -simpath %d %s" % (prism_input_file, random_length, prism_output_file)
            p = os.popen(cmd)
            p.close()
            return prism_output_file

        def main(prism_input_file, number_of_strings):
            ...
            for n in range(number_of_strings):
                prism_output_file = call_prism(prism_input_file, z[n])
            ...
            return

    I used statistics from the "profile statistics browser" when I profiled my code. The "file close" system call took the most time (14.546 seconds). The call_prism routine is called 10 times, but number_of_strings is usually in the thousands, so my program takes a lot of time to complete. Let me know if you need more information. By the way, I tried with subprocess too. Thanks.

    Read the article

  • How can I improve the performance of this algorithm

    - by Justin
        // Checks whether the array contains two elements whose sum is s.
        // Input: a list of numbers and an integer s
        // Output: return true if the answer is yes, else return false
        public static boolean calvalue(int[] numbers, int s) {
            for (int i = 0; i < numbers.length; i++) {
                for (int j = i + 1; j < numbers.length; j++) {
                    if (numbers[i] < s) {
                        if (numbers[i] + numbers[j] == s) {
                            return true;
                        }
                    }
                }
            }
            return false;
        }
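
    One standard improvement (a sketch, not from the original post) replaces the O(n^2) nested loops with a single pass over a HashSet: for each element x, check whether its complement s - x has already been seen. Note that the numbers[i] < s guard in the original is not a safe pruning if negative values are possible; the set-based version needs no such guard.

        import java.util.HashSet;
        import java.util.Set;

        public class TwoSumSketch {
            // O(n) on average: one pass, one hash lookup per element.
            public static boolean calvalue(int[] numbers, int s) {
                Set<Integer> seen = new HashSet<>();
                for (int x : numbers) {
                    if (seen.contains(s - x)) {
                        return true;
                    }
                    seen.add(x);
                }
                return false;
            }

            public static void main(String[] args) {
                System.out.println(calvalue(new int[]{3, 9, 5, 14}, 14)); // true (9 + 5)
                System.out.println(calvalue(new int[]{1, 2, 4}, 100));   // false
            }
        }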

    Read the article

  • PHP array performance

    - by dfo
    Hi, this is my first question on Stack Overflow, please bear with me. I'm testing an algorithm for 2D bin packing, and I've chosen PHP to mock it up, as it's my bread-and-butter language nowadays. As you can see at http://themworks.com/pack_v0.2/oopack.php?ol=1 it works pretty well, but you need to wait around 10-20 seconds for 100 rectangles to pack. For some hard-to-handle sets it would hit PHP's 30 s runtime limit. I did some profiling, and it shows that most of the time my script goes through different parts of a small 2D array of 0s and 1s. It either checks whether a certain cell equals 0/1 or sets it to 0/1. It can do such operations millions of times, and each takes a few microseconds. I guess I could use an array of booleans in a statically typed language and things would be faster. Or even make an array of 1-bit values. I'm thinking of converting the whole thing to some compiled language. Is PHP just not good for this? If I do need to convert it to, let's say, C++, how good are the automatic converters? My script is just a lot of for loops with basic array and object manipulations. Thank you!

    Edit: This function gets called more than any other. It reads a few properties of a very simple object and goes through a very small part of a smallish array to check if there's any element not equal to 0.

        function fits($bin, $file, $x, $y) {
            $flag = true;
            $xw = $x + $file->get_width();
            $yh = $y + $file->get_height();
            for ($i = $x; $i < $xw; $i++) {
                for ($j = $y; $j < $yh; $j++) {
                    if ($bin[$i][$j] !== 0) {
                        $flag = false;
                        break;
                    }
                }
                if (!$flag) break;
            }
            return $flag;
        }

    Read the article
