Search Results

Search found 13151 results on 527 pages for 'performance counters'.


  • How do I make a "simple" throughput J2EE filter?

    - by Tommy
    I'm looking to create a filter that can give me two things: number of requests per minute, and average response time per minute. I already have the individual readings; I'm just not sure how to add them up. My filter captures every request and records the time each request takes: public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) { long start = System.currentTimeMillis(); chain.doFilter(request, response); long stop = System.currentTimeMillis(); String time = Util.getTimeDifferenceInSec(start, stop); } This information will be used to create some pretty Google Chart charts. I don't want to store the data in any database, just a way to get the current numbers out when requested. As this is a high-volume application, low overhead is essential. I'm assuming my application server doesn't provide this information.
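
    One low-overhead way to aggregate those readings (a minimal sketch only; the ThroughputStats class, its field names, and the once-a-minute reset are illustrative assumptions, not part of the question) is to keep running totals in atomic counters that are read and reset once per minute:

    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical aggregator; names are illustrative, not from the original filter.
    public class ThroughputStats {
        private final AtomicLong requestCount = new AtomicLong();
        private final AtomicLong totalMillis = new AtomicLong();

        // Called from doFilter() after each request completes.
        public void record(long elapsedMillis) {
            requestCount.incrementAndGet();
            totalMillis.addAndGet(elapsedMillis);
        }

        // Called once per minute (e.g. by a scheduled task) to read and reset the window.
        public String snapshotAndReset() {
            long count = requestCount.getAndSet(0);
            long millis = totalMillis.getAndSet(0);
            double avgMillis = (count == 0) ? 0.0 : (double) millis / count;
            return count + " requests/min, average " + avgMillis + " ms";
        }
    }

    Incrementing two AtomicLong counters per request adds negligible overhead, and nothing needs to be persisted.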

    Read the article

  • Java JRE vs GCJ

    - by Martijn Courteaux
    Hi, I have these results from a speed test I wrote in Java: Java real 0m20.626s user 0m20.257s sys 0m0.244s GCJ real 3m10.567s user 3m5.168s sys 0m0.676s So, what is the point of GCJ then? With these results I'm sure I'm not going to compile it with GCJ! I tested this on Linux; are the results on Windows maybe better than that? This was the code from the application: public static void main(String[] args) { String str = ""; System.out.println("Start!!!"); for (long i = 0; i < 5000000L; i++) { Math.sqrt((double) i); Math.pow((double) i, 2.56); long j = i * 745L; String string = new String(String.valueOf(i)); string = string.concat(" kaka pipi"); // "Kaka pipi" is a kind of childish call in Dutch. string = new String(string.toUpperCase()); if (i % 300 == 0) { str = ""; } else { str += Long.toHexString(i); } } System.out.println("Stop!!!"); }

    Read the article

  • Database/NoSQL - Lowest latency way to retrieve the following data...

    - by Nickb
    I have a real estate application and a "house" contains the following information: house: - house_id - address - city - state - zip - price - sqft - bedrooms - bathrooms - geo_latitude - geo_longitude I need to perform an EXTREMELY fast (low latency) retrieval of all homes within a geo-coordinate box. Something like the SQL below (if I were to use a database): SELECT * FROM houses WHERE latitude BETWEEN xxx AND yyy AND longitude BETWEEN www AND zzz Question: What would be the quickest way for me to store this information so that I can perform the fastest retrieval of data based on latitude & longitude (e.g. database, NoSQL, memcache, etc.)?
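
    One common in-memory approach (a sketch only, in Java for illustration; the House class, the GeoGrid name, and the 0.01-degree cell size are assumptions, not something from the question) is to bucket records into a coarse latitude/longitude grid held in a hash map, so a bounding-box query only scans the cells that overlap the box:

    import java.util.*;

    // Hypothetical in-memory grid index; House and the cell size are illustrative.
    class House {
        final long id;
        final double latitude, longitude;
        House(long id, double latitude, double longitude) {
            this.id = id; this.latitude = latitude; this.longitude = longitude;
        }
    }

    class GeoGrid {
        private static final double CELL = 0.01;   // grid cell size in degrees
        private final Map<Long, List<House>> cells = new HashMap<>();

        // Pack the two cell coordinates into a single map key.
        private static long key(double lat, double lon) {
            return (((long) Math.floor(lat / CELL)) << 32) ^ (((long) Math.floor(lon / CELL)) & 0xffffffffL);
        }

        void add(House h) {
            cells.computeIfAbsent(key(h.latitude, h.longitude), k -> new ArrayList<>()).add(h);
        }

        // All houses inside [minLat, maxLat] x [minLon, maxLon]: scan only overlapping cells.
        List<House> query(double minLat, double maxLat, double minLon, double maxLon) {
            List<House> out = new ArrayList<>();
            for (long r = (long) Math.floor(minLat / CELL); r <= (long) Math.floor(maxLat / CELL); r++) {
                for (long c = (long) Math.floor(minLon / CELL); c <= (long) Math.floor(maxLon / CELL); c++) {
                    for (House h : cells.getOrDefault((r << 32) ^ (c & 0xffffffffL), Collections.emptyList())) {
                        if (h.latitude >= minLat && h.latitude <= maxLat
                                && h.longitude >= minLon && h.longitude <= maxLon) {
                            out.add(h);
                        }
                    }
                }
            }
            return out;
        }
    }

    With coarse cells each query touches only a handful of small lists, and everything stays in memory, so there is no database round trip at all.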

    Read the article

  • High CPU - What to do.

    - by Udi Kantzuker
    I have a high CPU problem with MySQL; using "top" (Linux) shows CPU peaks of 90%. I was trying to find the source of the problem, so I turned on the general log and the slow query log, but the slow query log did not find anything. The DB contains a few small tables and one large table with almost 100k rows; the database engine is MyISAM. A strange thing I have noticed is that on the large table, SELECT and INSERT are very fast but UPDATE takes 0.2 - 0.5 secs. I already used OPTIMIZE and REPAIR with no improvement. The table is being updated frequently; could this be the source of the high CPU%? What can I do to improve this?

    Read the article

  • Help me choose between XML and SQLite on Android

    - by Ngetha
    I have an Android app that periodically, say once a week, downloads content from a server in XML. The content is used by the app; different Activities use different parts of the content. My question is a design one: should I save the data in SQLite or just keep it as an XML file, and which one would be faster to read? The app can only use one content piece at a time, which means subsequent XML content downloads replace the old one.

    Read the article

  • Read large amount of data from file in Java

    - by Crozin
    Hello, I've got a text file that contains 1 000 002 numbers in the following format: 123 456 1 2 3 4 5 6 .... 999999 100000 Now I need to read that data and assign it to int variables (the very first two numbers) and all the rest (1 000 000 numbers) to an int[] array. It's not a hard task, but it's horribly slow. My first attempt was java.util.Scanner: Scanner stdin = new Scanner(new File("./path")); int n = stdin.nextInt(); int t = stdin.nextInt(); int array[] = new int[n]; for (int i = 0; i < n; i++) { array[i] = stdin.nextInt(); } It works as expected but it takes about 7500 ms to execute. I need to fetch that data in at most several hundred milliseconds. Then I tried java.io.BufferedReader: using BufferedReader.readLine() and String.split() I got the same results in about 1700 ms, but that's still too slow. How can I read that amount of data in less than 1 second? The final result should be equal to: int n = 123; int t = 456; int array[] = { 1, 2, 3, 4, ..., 999999, 100000 };
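
    For comparison, a common way to get well under a second on input like this (a sketch only; it assumes the file contains nothing but non-negative decimal integers separated by whitespace) is to read raw bytes through a buffered stream and parse the digits by hand instead of going through Scanner or String.split():

    import java.io.*;

    public class FastRead {
        public static void main(String[] args) throws IOException {
            InputStream in = new BufferedInputStream(new FileInputStream("./path"), 1 << 16);
            int n = nextInt(in);
            int t = nextInt(in);   // the second header value from the question
            int[] array = new int[n];
            for (int i = 0; i < n; i++) {
                array[i] = nextInt(in);
            }
            in.close();
        }

        // Skips whitespace, then accumulates decimal digits; assumes non-negative values.
        private static int nextInt(InputStream in) throws IOException {
            int c;
            do { c = in.read(); } while (c == ' ' || c == '\n' || c == '\r' || c == '\t');
            int value = 0;
            while (c >= '0' && c <= '9') {
                value = value * 10 + (c - '0');
                c = in.read();
            }
            return value;
        }
    }

    Most of the Scanner cost is regex-based tokenizing; a byte-level parser like this usually brings a million integers down to a small fraction of a second, though the exact figure depends on the disk and the JVM.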

    Read the article

  • resort on a std::vector vs std::insert

    - by Abruzzo Forte e Gentile
    I have a sorted std::vector of relatively small size (from 5 to 20 elements). I used std::vector since the data is contiguous, so I get speed from the cache. At a certain point I need to remove an element from this vector. Now I have a doubt: which of the 2 options below is the fastest way to remove this value? Setting that element to 0 and calling sort to reorder: this has sorting complexity, but the elements stay on the same cache lines. Calling erase, which will copy (or memcpy, who knows?) all elements after it back by 1 place (I need to investigate what erase does behind the scenes). Do you know which one is faster? I think the same reasoning could apply to inserting a new element without hitting the max capacity of the vector. Regards, AFG

    Read the article

  • Why is my regex so much slower compiled than interpreted?

    - by miket2e
    I have a large and complex C# regex that runs OK when interpreted, but is a bit slow. I'm trying to speed this up by setting RegexOptions.Compiled, and this seems to take about 30 seconds the first time and run instantly after that. I'm trying to negate this by compiling the regex to an assembly first, so my app can be as fast as possible. My problem is when the compiling delay takes place: Regex myComplexRegex = new Regex(regexText, RegexOptions.Compiled); MatchCollection matches = myComplexRegex.Matches(searchText); foreach (Match match in matches) // <--- when the one-time long delay kicks in { } This makes compiling to an assembly basically useless, as I still get the delay on the first foreach call. What I want is for all the compiling delay to happen in advance when I compile to the assembly, not when the user runs the app. Where am I going wrong? (The code I'm using to compile to an assembly is similar to http://www.dijksterhuis.org/regular-expressions-advanced/, if that's relevant.)

    Read the article

  • Fast serialization/deserialization of structs

    - by user256890
    I have a huge amount of geographic data represented in a simple object structure consisting only of structs. All of my fields are of value type. public struct Child { readonly float X; readonly float Y; readonly int myField; } public struct Parent { readonly int id; readonly int field1; readonly int field2; readonly Child[] children; } The data is chunked up nicely into small portions of Parent[]s. Each array contains a few thousand Parent instances. I have way too much data to keep it all in memory, so I need to swap these chunks to disk and back. (One file would be approx. 200-300 KB.) What would be the most efficient way of serializing/deserializing the Parent[] to a byte[] for dumping to disk and reading back? Concerning speed, I am particularly interested in fast deserialization; write speed is not that critical. Would a simple BinarySerializer be good enough? Or should I hack around with StructLayout (see the accepted answer)? I am not sure if that would work with the array field Parent.children. UPDATE: Response to comments - Yes, the objects are immutable (code updated) and indeed the children field is not a value type. 300 KB doesn't sound like much, but I have zillions of files like that, so speed does matter.

    Read the article

  • Converting keys of an array/object-tree to lowercase

    - by tstenner
    I'm currently optimizing a PHP application and found one function being called around 10-20k times, so I thought I'd start optimizing there. function keysToLower($obj) { if(!is_object($obj) && !is_array($obj)) return $obj; foreach($obj as $key=>$element) { $element=keysToLower($element); if(is_object($obj)) { $obj->{strtolower($key)}=$element; if(!ctype_lower($key)) unset($obj->{$key}); } else if(is_array($obj) && ctype_upper($key)) { $obj[strtolower($key)]=$element; unset($obj[$key]); } } return $obj; } Most of the time is spent in recursive calls (which are quite slow in PHP), but I don't see any way to convert it to a loop. What would you do?
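
    For what it's worth, the general technique of replacing the recursion with an explicit work stack looks roughly like this (a sketch in Java rather than PHP, purely to illustrate the idea; it assumes the tree is nested Map<String, Object> instances, which is not how the original PHP data is shaped):

    import java.util.*;

    public class KeysToLower {
        // Lowercases every key in a nested map tree iteratively, with no recursive calls.
        static void keysToLowerIterative(Map<String, Object> root) {
            Deque<Map<String, Object>> stack = new ArrayDeque<>();
            stack.push(root);
            while (!stack.isEmpty()) {
                Map<String, Object> node = stack.pop();
                // Copy the key set so the map can be modified while we walk it.
                for (String key : new ArrayList<>(node.keySet())) {
                    Object value = node.remove(key);
                    node.put(key.toLowerCase(), value);
                    if (value instanceof Map) {
                        @SuppressWarnings("unchecked")
                        Map<String, Object> child = (Map<String, Object>) value;
                        stack.push(child);   // children are queued instead of recursed into
                    }
                }
            }
        }
    }

    The same shape works in PHP with an array used as a stack and array_pop(), which avoids the per-call overhead the profiler is pointing at.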

    Read the article

  • Caching result of SELECT statement for reuse in multiple queries

    - by Andrew
    I have a reasonably complex query to extract the Id field of the results I am interested in based on parameters entered by the user. After extracting the relevant Ids I am using the resulting set of Ids several times, in separate queries, to extract the actual output record sets I want (by joining to other tables, using aggregate functions, etc). I would like to avoid running the initial query separately for every set of results I want to return. I imagine my situation is a common pattern so I am interested in what the best approach is. The database is in MS SQL Server and I am using .NET 3.5.

    Read the article

  • C: can using a lot of structs make a program slow?

    - by nunos
    I am coding a breakout clone. I had one version in which structures were only one level deep. This version runs at 70 fps. For more clarity in the code I decided it should have more abstractions and created more structs. Most of the time I now have structures two to three levels deep. This version runs at 30 fps. Since there are some other differences besides the structures, I ask you: can using a lot of structs in C slow down the code significantly? Thanks.

    Read the article

  • Optimizing Code

    - by Claudiu
    You are given a heap of code in your favorite language which combines to form a rather complicated application. It runs rather slowly, and your boss has asked you to optimize it. What are the steps you follow to most efficiently optimize the code? What strategies have you found to be unsuccessful when optimizing code? Re-writes: At what point do you decide to stop optimizing and say "This is as fast as it'll get without a complete re-write." In what cases would you advocate a simple complete re-write anyway? How would you go about designing it?

    Read the article

  • C++ Program performs better when piped

    - by ET1 Nerd
    I haven't done any programming in a decade. I wanted to get back into it, so I made this little pointless program as practice. The easiest way to describe what it does is with output of my --help codeblock: ./prng_bench --help ./prng_bench: usage: ./prng_bench $N $B [$T] This program will generate an N digit base(B) random number until all N digits are the same. Once a repeating N digit base(B) number is found, the following statistics are displayed: -Decimal value of all N digits. -Time & number of tries taken to randomly find. Optionally, this process is repeated T times. When running multiple repititions, averages for all N digit base(B) numbers are displayed at the end, as well as total time and total tries. My "problem" is that when the problem is "easy", say a 3 digit base 10 number, and I have it do a large number of passes the "total time" is less when piped to grep. ie: command ; command |grep took : ./prng_bench 3 10 999999 ; ./prng_bench 3 10 999999|grep took .... Pass# 999999: All 3 base(10) digits = 3 base(10). Time: 0.00005 secs. Tries: 23 It took 191.86701 secs & 99947208 tries to find 999999 repeating 3 digit base(10) numbers. An average of 0.00019 secs & 99 tries was needed to find each one. It took 159.32355 secs & 99947208 tries to find 999999 repeating 3 digit base(10) numbers. If I run the same command many times w/o grep time is always VERY close. I'm using srand(1234) for now, to test. The code between my calls to clock_gettime() for start and stop do not involve any stream manipulation, which would obviously affect time. I realize this is an exercise in futility, but I'd like to know why it behaves this way. Below is heart of the program. Here's a link to the full source on DB if anybody wants to compile and test. https://www.dropbox.com/s/6olqnnjf3unkm2m/prng_bench.cpp clock_gettime() requires -lrt. for (int pass_num=1; pass_num<=passes; pass_num++) { //Executes $passes # of times. clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &temp_time); //get time start_time = timetodouble(temp_time); //convert time to double, store as start_time for(i=1, tries=0; i!=0; tries++) { //loops until 'comparison for' fully completes. counts reps as 'tries'. <------------ for (i=0; i<Ndigits; i++) //Move forward through array. | results[i]=(rand()%base); //assign random num of base to element (digit). | /*for (i=0; i<Ndigits; i++) //---Debug Lines--------------- | std::cout<<" "<<results[i]; //---a LOT of output.---------- | std::cout << "\n"; //---Comment/decoment to disable/enable.*/ // | for (i=Ndigits-1; i>0 && results[i]==results[0]; i--); //Move through array, != element breaks & i!=0, new digits drawn. -| } //If all are equal i will be 0, nested for condition satisfied. -| clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &temp_time); //get time draw_time = (timetodouble(temp_time) - start_time); //convert time to dbl, subtract start_time, set draw_time to diff. total_time += draw_time; //add time for this pass to total. total_tries += tries; //add tries for this pass to total. /*Formated output for each pass: Pass# ---: All -- base(--) digits = -- base(10) Time: ----.---- secs. Tries: ----- (LINE) */ std::cout<<"Pass# "<<std::setw(width_pass)<<pass_num<<": All "<<Ndigits<<" base("<<base<<") digits = " <<std::setw(width_base)<<results[0]<<" base(10). Time: "<<std::setw(width_time)<<draw_time <<" secs. Tries: "<<tries<<"\n"; } if(passes==1) return 0; //No need for totals and averages of 1 pass. /* It took ----.---- secs & ------ tries to find --- repeating -- digit base(--) numbers. 
(LINE) An average of ---.---- secs & ---- tries was needed to find each one. (LINE)(LINE) */ std::cout<<"It took "<<total_time<<" secs & "<<total_tries<<" tries to find " <<passes<<" repeating "<<Ndigits<<" digit base("<<base<<") numbers.\n" <<"An average of "<<total_time/passes<<" secs & "<<total_tries/passes <<" tries was needed to find each one. \n\n"; return 0;

    Read the article

  • why use the "!!!"?

    - by lazyanno
    Take the following code: var a = {}; if(!!!a[tabType]){ a[tabType] = []; a[tabType].push([self,boxObj]); }else{ a[tabType].push([self,boxObj]); } I think !!!a[tabType] equals !a[tabType], so why use "!!!" and not "!"? Thank you!

    Read the article

  • What is the most efficient method to find x contiguous values of y in an array?

    - by Alec
    Running my app through callgrind revealed that this line dwarfed everything else by a factor of about 10,000. I'm probably going to redesign around it, but it got me wondering: is there a better way to do it? Here's what I'm doing at the moment: int i = 1; while ( ( (*(buffer++) == 0xffffffff && ++i) || (i = 1) ) && i < desiredLength + 1 && buffer < bufferEnd ); It's looking for the offset of the first chunk of desiredLength 0xffffffff values in a 32-bit unsigned int array. It's significantly faster than any implementation I could come up with involving an inner loop. But it's still too damn slow.
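
    For reference, the intent of that condition-packed loop is just a run-length scan. A plain version of the same search (written in Java only as an illustration of the logic; in Java's signed int[] the value 0xffffffff compares equal to -1, and the class and method names are made up) looks like this:

    public class RunScan {
        // Returns the start index of the first run of desiredLength consecutive
        // 0xffffffff values in buffer, or -1 if no such run exists.
        static int findRun(int[] buffer, int desiredLength) {
            int run = 0;
            for (int i = 0; i < buffer.length; i++) {
                run = (buffer[i] == 0xffffffff) ? run + 1 : 0;
                if (run == desiredLength) {
                    return i - desiredLength + 1;
                }
            }
            return -1;
        }
    }

    Written this way the logic is explicit; whether it actually beats the packed one-liner is something only a profiler can settle.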

    Read the article

  • Ant JUnit tests are running much slower via Ant than via the IDE - what to look at?

    - by Alex B
    I am running my JUnit tests via Ant and they are running substantially slower than via the IDE. My Ant call is: <junit fork="yes" forkmode="once" printsummary="off"> <classpath refid="test.classpath"/> <formatter type="brief" usefile="false"/> <batchtest todir="${test.results.dir}/xml"> <formatter type="xml"/> <fileset dir="src" includes="**/*Test.java" /> </batchtest> </junit> The same test that runs near instantaneously in my IDE (0.067s) takes 4.632s when run through Ant. In the past, I've been able to speed up test problems like this by using the junit fork parameter, but this doesn't seem to be helping in this case. What properties or parameters can I look at to speed up these tests? More info: I am using the reported time from the IDE vs. the time that the junit task outputs; this is not the sum total time reported at the end of the Ant run. So, bizarrely, this problem has resolved itself. What could have caused this problem? The system runs on a local disk, so that is not the problem.

    Read the article

  • What does CPU Time consist of? [closed]

    - by Sid
    What does CPU time exactly consist of? For instance, is the time taken to access a page from the RAM (at which point, the CPU is most likely idling) part of the CPU time? I'm not talking about fetching the page from the disk here, just fetching it from the RAM. Thanks

    Read the article

  • Slow loading of UITableView. How do I find out why?

    - by mamcx
    I have a UITableView that shows a long list of data. It uses sections and follows the suggestion of http://stackoverflow.com/questions/695814/how-solve-slow-scrolling-in-uitableview . The flow is: load a main UITableView and push a second one by selecting a row from there. However, with 3000 items it takes 11 seconds to show. My first suspect was loading the records from sqlite (I preload the first 200), so I cut it to only 50. However, whether I preload only 1 or 500, the time is the same. The view is made in IB and everything is opaque. I've run out of ideas on how to detect the problem. I ran the Instruments tool but don't know what to look for. Also, when the user selects a cell in the previous UITableView, no visual feedback is shown (i.e. the cell does not turn selected) for a while, so he thinks he did not select it and tries several times. This is related to the same problem. What to do? NOTE: The problem is only on the actual device: iPod Touch 2nd generation. Using fmdb as the sqlite API. Doing the caching in viewDidLoad. Using NSDictionary for the caching. Using an NSAutoreleasePool for the caching part. Only caching the row ID and at most 4 fields necessary to show the cell data. UIView made with Interface Builder, SDK 2.2.1. Instruments says I use 2.5 MB on the device.

    Read the article

  • Project Euler #119 Make Faster

    - by gangqinlaohu
    Trying to solve Project Euler problem 119: The number 512 is interesting because it is equal to the sum of its digits raised to some power: 5 + 1 + 2 = 8, and 8^3 = 512. Another example of a number with this property is 614656 = 28^4. We shall define a_n to be the nth term of this sequence and insist that a number must contain at least two digits to have a sum. You are given that a_2 = 512 and a_10 = 614656. Find a_30. Question: Is there a more efficient way to find the answer than just checking every number until a_30 is found? My code: int currentNum = 0; long value = 0; for (long a = 11; currentNum != 30; a++){ //maybe a++ is inefficient int test = Util.sumDigits(a); if (isPower(a, test)) { currentNum++; value = a; System.out.println(value + ":" + currentNum); } } System.out.println(value); isPower checks if a is a power of test. Util.sumDigits: public static int sumDigits(long n){ int sum = 0; String s = "" + n; while (!s.equals("")){ sum += Integer.parseInt("" + s.charAt(0)); s = s.substring(1); } return sum; } The program has been running for about 30 minutes (might be overflow on the long). Output (so far): 81:1 512:2 2401:3 4913:4 5832:5 17576:6 19683:7 234256:8 390625:9 614656:10 1679616:11 17210368:12 34012224:13 52521875:14 60466176:15 205962976:16 612220032:17
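
    One way to avoid scanning every integer (a sketch of the usual search-space inversion, not the poster's code; the bounds on the digit sum and exponent are assumptions chosen to comfortably cover a_30) is to generate candidates directly: for each possible digit sum s and exponent e, compute s^e and keep it when its digit sum really is s.

    import java.math.BigInteger;
    import java.util.*;

    public class Euler119 {
        public static void main(String[] args) {
            List<BigInteger> hits = new ArrayList<>();
            for (int s = 2; s <= 200; s++) {                // candidate digit sums
                BigInteger power = BigInteger.valueOf(s);
                for (int e = 2; e <= 50; e++) {             // candidate exponents
                    power = power.multiply(BigInteger.valueOf(s));   // power = s^e
                    if (power.compareTo(BigInteger.TEN) >= 0 && digitSum(power) == s) {
                        hits.add(power);                    // at least two digits, sum matches
                    }
                }
            }
            Collections.sort(hits);
            System.out.println(hits.get(29));               // a_30
        }

        private static int digitSum(BigInteger n) {
            int sum = 0;
            for (char c : n.toString().toCharArray()) {
                sum += c - '0';
            }
            return sum;
        }
    }

    The loops evaluate only about ten thousand powers in total, so this finishes almost instantly compared with scanning hundreds of millions of integers one at a time.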

    Read the article

  • How to measure how long a function runs?

    - by rhose87
    I want to see how long a function runs, so I added a timer object to my form and came up with this code: private int counter = 0; //inside button click I have: timer = new Timer(); timer.Tick += new EventHandler(timer_Tick); timer.Start(); Result result = new Result(); result = new GeneticAlgorithms().TabuSearch(parametersTabu, functia); timer.Stop(); and: private void timer_Tick(object sender, EventArgs e) { counter++; btnTabuSearch.Text = counter.ToString(); } But this is not counting anything. Any ideas?

    Read the article

  • How to speed up drawing of scaled image? Audio playback chokes during window resize.

    - by Paperflyer
    I am writing an audio player for OSX. One view is a custom view that displays a waveform. The waveform is stored as an instance variable of type NSImage with an NSBitmapImageRep. The view also displays a progress indicator (a thick red line). Therefore, it is updated/redrawn every 30 milliseconds. Since it takes a rather long time to recalculate the image, I do that in a background thread after every window resize and update the displayed image once the new image is ready. In the meantime, the original image is scaled to fit the view like this: // The drawing rectangle is slightly smaller than the view, defined by // the two margins. NSRect drawingRect; drawingRect.origin = NSMakePoint(sideEdgeMarginWidth, topEdgeMarginHeight); drawingRect.size = NSMakeSize([self bounds].size.width-2*sideEdgeMarginWidth, [self bounds].size.height-2*topEdgeMarginHeight); [waveform drawInRect:drawingRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1]; The view makes up the biggest part of the window. During live resize, audio starts choking. Selecting the "big" graphics card on my MacBook Pro makes it less bad, but not by much. CPU utilization is somewhere around 20-40% during live resizes. Instruments suggests that rescaling/redrawing of the image is the problem. Once I stop resizing the window, CPU utilization goes down and audio stops glitching. I already tried to disable image interpolation to speed up the drawing like this: [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone]; That helps, but audio still chokes during live resizes. Do you have any idea how to improve this? The main thing is to prevent the audio from choking.

    Read the article

  • C# Dictionary Loop Enhancement

    - by Toto
    Hi, I have a dictionary with around 1 million items. I am constantly looping through the dictionary: public void DoAllJobs() { foreach (KeyValuePair<uint, BusinessObject> p in _dictionnary) { if(p.Value.MustDoJob) p.Value.DoJob(); } } The execution is a bit long, around 600 ms, and I would like to decrease it. Here are the constraints: 1. MustDoJob values mostly stay the same between two calls to DoAllJobs(). 2. 60-70% of the MustDoJob values == false. 3. From time to time MustDoJob changes for 200 000 pairs. 4. Some p.Value.DoJob() cannot be computed at the same time (COM object call). Here, I do not need the key part of the _dictionnary object, but I really do need it somewhere else. I wanted to do the following: Parallelize, but I am not sure it is going to be effective due to 4. Sort the dictionary, given 1. and 2. (and stop when I find the first MustDoJob == false), but I am wondering what 3. would result in. I did not implement any of the previous ideas since it could be a lot of work and I would like to investigate other options first. So... any ideas?

    Read the article

  • Best memory settings for eclipse 4.2 (STS 3.1) on Windows 7 64 bit?

    - by jorrebor
    I apologize in advance if this question is indeed too subjective, as SO warns me. My workstation has 8 GB of RAM and runs Windows 7 64-bit. I use the Spring Tool Suite (3.1), but as soon as I start to open and modify the Spring config (.xml) files, STS becomes incredibly slow. I already tried switching off "build automatically" and increasing the memory settings, but no luck. How should I change my .ini? This is what I have set now: -vm C:/Program Files/Java/jdk1.7.0_07/bin/javaw.exe -startup plugins/org.eclipse.equinox.launcher_1.3.0.v20120522-1813.jar --launcher.library plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.200.v20120522-1813 -product org.springsource.sts.ide --launcher.defaultAction openFile --launcher.XXMaxPermSize 4096M -vmargs -Dosgi.requiredJavaVersion=1.5 -Xms512m -Xmx2048m -XX:MaxPermSize=512m My colleague, running the same project in IntelliJ, has no problems. Thank you!

    Read the article
