Search Results

Search found 31421 results on 1257 pages for 'software performance'.


  • Quickest way to compare a bunch of arrays or lists of values.

    - by zapping
    Can you please let me know the quickest and most efficient way to compare a large set of values? There is a list of parent codes (strings), and each code has a series of child values (strings). The child lists have to be compared with each other to find duplicates and count how many times they repeat:

        code1(code1_value1, code1_value2, code3_value3, ..., code1_valueN);
        code2(code2_value1, code1_value2, code2_value3, ..., code2_valueN);
        code3(code2_value1, code3_value2, code3_value3, ..., code3_valueN);
        ...
        codeN(codeN_value1, codeN_value2, codeN_value3, ..., codeN_valueN);

    The lists are huge: say 100 parent codes, each with about 250 values. There will not be duplicates within a single code list. I am doing this in Java, and the solution I could figure out is: store the values of the first code list as codeMap.put(codeValue, duplicateCount), with the count initialized to 0, then compare the rest of the values against this map; if a value is already in the map, increment its count, otherwise add it. The downfall of this is getting the duplicates back out: another iteration needs to be performed over a very large map. An alternative is to maintain a second hashmap for the duplicates, duplicateCodeMap.put(codeValue, duplicateCount), and change the initial hashmap to codeMap.put(codeValue, codeValue). Speed is the requirement. I hope one of you can help me with it.
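
    A minimal Java sketch (with illustrative data and names, not from the question) of the single-pass variant described above: because Map.merge returns the updated count, duplicates can be collected during the same pass, so no second iteration over the map is needed:

        import java.util.*;

        public class DuplicateCounter {
            public static void main(String[] args) {
                // Illustrative stand-in for the ~100 lists of ~250 values each.
                List<List<String>> codeLists = List.of(
                        List.of("a", "b", "c"),
                        List.of("b", "c", "d"),
                        List.of("c", "e", "f"));

                Map<String, Integer> counts = new HashMap<>();
                Set<String> duplicates = new HashSet<>();
                for (List<String> values : codeLists) {
                    for (String value : values) {
                        // merge returns the count after incrementing; anything
                        // above 1 has already been seen in an earlier list.
                        if (counts.merge(value, 1, Integer::sum) > 1) {
                            duplicates.add(value);
                        }
                    }
                }
                for (String d : duplicates) {
                    System.out.println(d + " appears " + counts.get(d) + " times");
                }
            }
        }

    This stays O(total number of values) with a single HashMap, which is hard to beat for 100 x 250 strings.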

    Read the article

  • Best method to select an object from another unknown jQuery object

    - by Yosi
    Let's say I have a jQuery object/collection stored in a variable named obj, which should contain a DOM element with the id target. I don't know in advance whether target will be a child in obj, i.e.:

        obj = $('<div id="parent"><div id="target"></div></div>');

    or whether obj equals target, i.e.:

        obj = $('<div id="target"></div>');

    or whether target is a top-level element inside obj, i.e.:

        obj = $('<div id="target"/><span id="other"/>');

    I need a way to select target from obj, but I don't know in advance when to use .find and when to use .filter. What would be the fastest and/or most concise method of extracting target from obj? What I've come up with is:

        var $target = obj.find("#target").add(obj.filter("#target"));

    UPDATE: I'm adding solutions to a jsPerf test page to see which one is the best. Currently my solution is still the fastest. Here is the link; please run the tests so that we'll have more data: http://jsperf.com/jquery-selecting-objects

    Read the article

  • Should I aim for fewer HTTP requests or more cacheable CSS files?

    - by Jonathan Hanson
    We're being told that fewer HTTP requests per page load is a Good Thing. The extreme form of that for CSS would be a single, unique CSS file per page, with any shared site-wide styles duplicated in each file. But there's a trade-off there: if you have separate shared global CSS files, they can be cached once when the front page is loaded and then re-used on multiple pages, reducing the necessary size of the page-specific CSS files. So which is better in real-world practice: smaller, cacheable CSS files spread across multiple discrete requests, or fewer HTTP requests for fewer-but-larger CSS files?

    Read the article

  • JMeter HTTP Cache Manager: unable to cache everything that the browser caches

    - by chinmay brahma
    I used the HTTP Cache Manager to cache the files that are cached by the browser, and for some pages I am successful: the number of files cached in JMeter equals the number of files cached by the browser. But in some cases I found that JMeter caches fewer files than the browser does: with JMeter only 5 files were cached, while in a real browser 12 files were cached. Thanks in advance.

    Read the article

  • C#/WPF FileSystemWatcher on every extension on every path

    - by BlueMan
    I need a FileSystemWatcher that can observe specific paths and specific extensions. But there could be dozens, hundreds or maybe thousands of paths (hope not :P), and the same goes for extensions; the paths and extensions are added by the user. Creating hundreds of FileSystemWatchers is not a good idea, is it? So how should it be done? Is it possible to watch every device (HDDs, SD cards, pendrives, etc.), and would that be efficient? I don't think so... every change to a Windows log file or every file touched by an antivirus scan could really slow my program down with such a system-wide watcher :(
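
    For what it's worth, a sketch of the one-watcher-for-many-paths idea as it appears in Java's NIO API, where any number of directories are registered with a single WatchService and extensions are filtered as events arrive (the C# question is analogous; the paths and extensions below are placeholders, and WatchService does not recurse into subdirectories):

        import java.nio.file.*;
        import java.util.List;
        import java.util.Set;

        public class MultiPathWatcher {
            public static void main(String[] args) throws Exception {
                List<Path> dirs = List.of(Path.of("C:/logs"), Path.of("C:/data"));
                Set<String> extensions = Set.of(".txt", ".csv");

                // One multiplexed service instead of one watcher per path.
                WatchService watcher = FileSystems.getDefault().newWatchService();
                for (Path dir : dirs) {
                    dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
                            StandardWatchEventKinds.ENTRY_MODIFY);
                }

                while (true) {
                    WatchKey key = watcher.take(); // blocks until something changes
                    for (WatchEvent<?> event : key.pollEvents()) {
                        String name = event.context().toString();
                        // Filter by extension here rather than one watcher per type.
                        if (extensions.stream().anyMatch(name::endsWith)) {
                            System.out.println(event.kind() + ": " + name);
                        }
                    }
                    key.reset(); // required, or the key stops delivering events
                }
            }
        }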

    Read the article

  • Why does Go compile quickly?

    - by Evan Kroske
    I've Googled and poked around the Go website, but I can't seem to find an explanation for Go's extraordinary build times. Are they products of the language features (or lack thereof), a highly optimized compiler, or something else? I'm not trying to promote Go; I'm just curious.

    Read the article

  • SQL query: how to translate IN() into a JOIN?

    - by tangens
    I have a lot of SQL queries like this:

        SELECT o.Id, o.attrib1, o.attrib2
        FROM table1 o
        WHERE o.Id IN (SELECT DISTINCT Id
                       FROM table1, table2, table3
                       WHERE ...)

    These queries have to run on different database engines (MySQL, Oracle, DB2, MS SQL, Hypersonic), so I can only use common SQL syntax. I have read that MySQL does not optimize the IN statement and that it is really slow, so I want to turn it into a JOIN. I tried:

        SELECT o.Id, o.attrib1, o.attrib2
        FROM table1 o, table2, table3
        WHERE ...

    But this does not take the DISTINCT keyword into account. Question: how do I get rid of the duplicate rows using the JOIN approach?

    Read the article

  • How to optimize an Oracle query that uses TO_CHAR on a date in the WHERE clause

    - by panorama12
    I have a table that contains about 49,403,459 records. I want to query the table over a date range, say 04/10/2010 to 04/10/2010. However, the dates are stored in the table as timestamps in the format 10-APR-10 10.15.06.000000 AM. As a result, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE TO_CHAR(create_date, 'MM/DD/YYYY') >= '04/10/2010'
          AND TO_CHAR(create_date, 'MM/DD/YYYY') <= '04/10/2010'

    I get 529 rows, but in 255.59 seconds! I guess that is because TO_CHAR is applied to EACH record. However, when I do:

        SELECT bunch, of, stuff, create_date
        FROM myTable
        WHERE create_date >= TO_DATE('04/10/2010', 'MM/DD/YYYY')
          AND create_date <= TO_DATE('04/10/2010', 'MM/DD/YYYY')

    I get 0 results, in 0.14 seconds. How can I make this query fast and still get the valid (529) results? At this point I cannot change the indexes; right now I think the index is created on the create_date column.

    Read the article

  • How to profile object creation in Java?

    - by gooli
    The system I work with creates a whole lot of objects and garbage-collects them all the time, which results in a very steeply jagged graph of heap consumption. I would like to know which objects are being generated so I can tune the code, but I can't figure out a way to dump the heap at the moment garbage collection starts. When I tried to initiate dumpHeap via JConsole manually at random times, I always got results from after the GC had finished its run and didn't get any useful data. Any notes on how to track down excessive temporary-object creation are welcome.
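
    One option, assuming a HotSpot JVM (the com.sun.management API sketched below is HotSpot-specific): trigger the dump programmatically at a moment you choose instead of clicking in JConsole, and pass live = false so the dump still contains unreachable, not-yet-collected temporaries:

        import java.lang.management.ManagementFactory;
        import com.sun.management.HotSpotDiagnosticMXBean;

        public class HeapDumper {
            // Writes an .hprof file that VisualVM, MAT, etc. can open.
            // Note: dumpHeap fails if the target file already exists.
            public static void dump(String path, boolean liveOnly) throws Exception {
                HotSpotDiagnosticMXBean bean = ManagementFactory.getPlatformMXBean(
                        HotSpotDiagnosticMXBean.class);
                // liveOnly = false keeps objects that are already garbage but not
                // yet collected, i.e. exactly the temporary churn in question.
                bean.dumpHeap(path, liveOnly);
            }

            public static void main(String[] args) throws Exception {
                dump("churn.hprof", false);
            }
        }

    Calling dump from inside the system at a suspect point, rather than from JConsole at a random time, makes it much more likely the dump catches the temporaries before a collection runs.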

    Read the article

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand limits to parallelization on a 48-core system (4x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

        // Compile with: gcc scaling.c -std=c99 -fopenmp -O3
        #include <stdio.h>
        #include <stdint.h>

        int main() {
            const uint64_t umin = 1;
            const uint64_t umax = 10000000000LL;
            double sum = 0.;
            #pragma omp parallel for reduction(+:sum)
            for (uint64_t u = umin; u < umax; u++)
                sum += 1. / u / u;
            printf("%e\n", sum);
        }

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9 s for the code to run with 48 threads, 3.1 s with 36 threads, 3.7 s with 24 threads, 4.9 s with 12 threads, and 57 s with 1 thread. Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that's making the difference between a 19-20x speedup and the ideal 48x speedup. To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9 s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?

    Read the article

  • XSLT 1.0: restrict entries in a nodeset

    - by Mike
    Hi, being relatively new to XSLT I have what I hope is a simple question. I have some flat XML files, which can be pretty big (e.g. 7 MB), that I need to make 'more hierarchical'. For example, the flat XML might look like this:

        <D0011>
        ....
        ....

    and it should end up looking like this:

        <D0011>
        ....
        ....

    I have a working XSLT for this. It essentially gets a node-set of all the b elements and then uses the following-sibling axis to get a node-set of the nodes following the current b node (i.e. following-sibling::*[position() = $nodePos]). Recursion is then used to add the siblings into the result tree until another b element is found (I have parameterised it, of course, to make it more generic). I also have a solution that just takes the position of the next b node in the XML and selects the intervening nodes one after the other (using recursion) via a *[position() = $nodePos] selection. The problem is that the time to execute the transformation increases unacceptably with the size of the XML file. Looking into it with XML Spy, it seems that it is the following-sibling axis and the position() test that take the time in the two respective methods. What I really need is a way of restricting the number of nodes in the above selections, so that fewer comparisons are performed: every time the position is tested, every node in the node-set is tested to see whether its position is the right one. Is there a way to do that? Any other suggestions? Thanks, Mike

    Read the article

  • How to clear APC cache entries?

    - by lo_fye
    I need to clear all APC cache entries when I deploy a new version of the site. APC.php has a button for clearing all opcode caches, but I don't see buttons for clearing all User Entries, or all System Entries, or all Per-Directory Entries. Is it possible to clear all cache entries via the command-line, or some other way?

    Read the article

  • Time complexity O() of isPalindrome()

    - by Aran
    I have this method, isPalindrome(), and I am trying to find its time complexity, and also to rewrite the code more efficiently:

        boolean isPalindrome(String s) {
            boolean bP = true;
            for (int i = 0; i < s.length(); i++) {
                if (s.charAt(i) != s.charAt(s.length() - i - 1)) {
                    bP = false;
                }
            }
            return bP;
        }

    Now I know this code checks each of the string's characters against the character mirrored at the other end, and leaves bP unchanged when they match. And I think the operations are s.length(), s.charAt(i) and s.charAt(s.length() - i - 1), making the time complexity O(N + 3), I think? Is that correct? If not, what is it, and how is that figured out? Also, to make this more efficient, would it be good to store the characters in temporary strings?
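
    For comparison, a sketch of a tighter version: still O(n) in the worst case, but it walks only to the middle and returns at the first mismatch instead of always scanning the whole string:

        boolean isPalindrome(String s) {
            // Compare characters from both ends toward the middle:
            // at most n/2 comparisons, with an early exit on mismatch.
            for (int i = 0, j = s.length() - 1; i < j; i++, j--) {
                if (s.charAt(i) != s.charAt(j)) {
                    return false;
                }
            }
            return true;
        }

    No temporary strings are needed; charAt is O(1), so copying characters out would only add work.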

    Read the article

  • Decent profiler for Windows?

    - by olliej
    Does Windows have any decent sampling (i.e. non-instrumenting) profilers available? Preferably something akin to Shark on Mac OS, although I am willing to accept that I am going to have to pay for such a profiler on Windows. I've tried the profiler in VS Team Suite and was not overly impressed, and I was wondering if there were any other good ones. [Edit: Erk, I forgot to say this is for C/C++, rather than .NET; sorry for any confusion]

    Read the article

  • Definition of Connect, Processing, Waiting in apache bench.

    - by rpatel
    When I run apache bench I get results like:

        Command: abs.exe -v 3 -n 10 -c 1 https://mysite

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      203  213    8.1    219    219
        Processing:    78  177   88.1    172    359
        Waiting:       78  169   84.6    156    344
        Total:        281  389   86.7    391    563

    I can't seem to find the definitions of Connect, Processing and Waiting. What do those numbers mean?

    Read the article

  • List of divisors of an integer n (Haskell)

    - by Code-Guru
    I currently have the following function to get the divisors of an integer:

        -- All divisors of a number
        divisors :: Integer -> [Integer]
        divisors 1 = [1]
        divisors n = firstHalf ++ secondHalf
          where
            firstHalf    = filter (divides n) (candidates n)
            secondHalf   = filter (\d -> n `div` d /= d) (map (n `div`) (reverse firstHalf))
            candidates n = takeWhile (\d -> d * d <= n) [1..n]

    I ended up adding the filter to secondHalf because a divisor was repeated when n is the square of a prime number. This seems like a very inefficient way to solve the problem, so I have two questions: how do I measure whether this really is a bottleneck in my algorithm? And if it is, how do I go about finding a better way to avoid repetitions when n is the square of a prime?

    Read the article

  • Why the difference in speed?

    - by AngryHacker
    Consider this code:

        Function Foo(ds As OtherDLL.BaseObj)
            Dim lngRowIndex As Long
            Dim lngColIndex As Long
            For lngRowIndex = 1 To UBound(ds.Data, 2)
                For lngColIndex = 1 To ds.Columns.Count
                    Debug.Print ds.Data(lngRowIndex, lngColIndex)
                Next
            Next
        End Function

    OK, a little context. The parameter ds is of type OtherDLL.BaseObj, which is defined in a referenced ActiveX DLL. ds.Data is a two-dimensional Variant array (one dimension carries the data, the other carries the column index), and ds.Columns is a Collection of the columns in ds.Data. Assuming there are at least 400 rows of data and 25 columns, this code takes about 15 seconds to run on my machine. Kind of unbelievable. However, if I copy the Variant array to a local variable first:

        Function Foo(ds As OtherDLL.BaseObj)
            Dim lngRowIndex As Long
            Dim lngColIndex As Long
            Dim v As Variant
            v = ds.Data
            For lngRowIndex = 1 To UBound(v, 2)
                For lngColIndex = 1 To ds.Columns.Count
                    Debug.Print v(lngRowIndex, lngColIndex)
                Next
            Next
        End Function

    the entire thing processes in barely any noticeable time (basically close to 0). Why?

    Read the article

  • How to find the worst performing queries in MS SQL Server 2008?

    - by Thomas Bratt
    How do I find the worst-performing queries in MS SQL Server 2008? I found the following example but it does not seem to work:

        SELECT TOP 5 obj.name, max_logical_reads, max_elapsed_time
        FROM sys.dm_exec_query_stats a
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) hnd
        INNER JOIN sys.sysobjects obj ON hnd.objectid = obj.id
        ORDER BY max_logical_reads DESC

    Taken from: http://www.sqlservercurry.com/2010/03/top-5-costly-stored-procedures-in-sql.html

    Read the article

  • Creating a C++ client app for an abstract Windows server: how to manage TCP transfer speed to the server?

    - by Kabumbus
    We have a server with a known address, port and IP, and we are developing that server, so we can implement on it whatever we need to help. What are the standard/best practices for managing data-transfer speed between a C++ Windows client app and the (C++) server? My main point is how to find out how much data can be uploaded/downloaded from/to a client over his slow network to my relatively super-fast server (I need it to set the bit rate of his live audio/video stream). My attempt at explaining point 3: we do not care how fast our server is; it is always faster than needed. We care about the client trying to stream his media out to our server. He streams live video data, encoded via ffmpeg, to our server, but he has, say, ADSL with 500 kb/s of outgoing traffic. He also runs ICQ or whatever, so he actually has less than 500 kb/s available, and he wants to stream live video! So we need to set up our ffmpeg to encode the video with respect to the bit rate the user can actually provide. We develop both the server side and the client side, and we need a way of finding out how much the user can upload per second at any given moment (the value can change dynamically over time).
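
    One rough way to estimate what a client can currently upload, sketched below in Java for brevity (the asker's stack is C++; the host, port and probe size are placeholders, not from the question): time how long a fixed-size probe takes to push to the server, and repeat the probe periodically since the available rate drifts. A more accurate variant waits for the server to acknowledge receipt, because local socket buffers absorb the first writes.

        import java.io.OutputStream;
        import java.net.Socket;

        public class UplinkProbe {
            // Estimates usable upstream bandwidth by timing a fixed-size upload.
            public static double kbPerSec(String host, int port, int probeBytes)
                    throws Exception {
                byte[] chunk = new byte[8192];
                try (Socket socket = new Socket(host, port);
                     OutputStream out = socket.getOutputStream()) {
                    long start = System.nanoTime();
                    for (int sent = 0; sent < probeBytes; sent += chunk.length) {
                        out.write(chunk);
                    }
                    out.flush();
                    double seconds = (System.nanoTime() - start) / 1e9;
                    return (probeBytes / 1024.0) / seconds;
                }
            }

            public static void main(String[] args) throws Exception {
                // ~512 KB probe against a placeholder endpoint.
                double rate = kbPerSec("stream.example.com", 9000, 512 * 1024);
                System.out.printf("~%.0f KB/s upstream%n", rate);
            }
        }

    The measured rate can then be fed back into the ffmpeg encoder settings, e.g. lowering the target bit rate when the probe comes back slow.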

    Read the article

  • How to detect whether an EventWaitHandle is waiting?

    - by AngryHacker
    I have a fairly heavily multi-threaded WinForms app that employs EventWaitHandle in a number of places to synchronize access. So I have code similar to this:

        List<int> _revTypes;
        EventWaitHandle _ewh = new EventWaitHandle(false, EventResetMode.ManualReset);

        void StartBackgroundTask()
        {
            _ewh.Reset();
            Thread t = new Thread(new ThreadStart(LoadStuff));
            t.Start();
        }

        void LoadStuff()
        {
            _revTypes = WebServiceCall.GetRevTypes();
            // ...bunch of other calls fetching data from all over the place
            // using the same pattern
            _ewh.Set();
        }

        List<int> RevTypes
        {
            get
            {
                _ewh.WaitOne();
                return _revTypes;
            }
        }

    Then I just call RevTypes somewhere from the UI and it returns the data once LoadStuff has finished executing. All this works perfectly correctly. However, RevTypes is just one property; there are actually several dozen of these, and one or several of them are holding the UI back from loading quickly. Short of placing benchmark code in each property, is there a way to see which property is holding the UI up? Is there a way to see whether the EventWaitHandle is forced to actually wait?

    Read the article

  • Is Java serialization a tool to shrink the memory footprint?

    - by Pentius
    Hey folks, does serialization in Java always shrink the memory needed to hold an object structure, or is it likely that serialization has higher costs? In other words: is serialization a tool for shrinking the memory footprint of object structures in Java? Edit: I'm totally aware of what serialization was intended for, but thanks anyway :-) You know, tools can be misused. My question is whether it is a good tool for decreasing memory usage. So what reasons can you imagine why memory usage would increase or decrease? What will happen in most cases?
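
    A minimal sketch for getting one half of that comparison, namely the size of the serialized form (measuring the live object graph's heap footprint needs a profiler or java.lang.instrument; the class below is purely illustrative):

        import java.io.ByteArrayOutputStream;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;
        import java.util.ArrayList;

        public class SerializedSizeProbe {
            // Serializes an object graph to memory and reports the byte count.
            static int serializedSize(Serializable obj) throws Exception {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                    out.writeObject(obj);
                }
                return bytes.size();
            }

            public static void main(String[] args) throws Exception {
                ArrayList<Integer> numbers = new ArrayList<>();
                for (int i = 0; i < 10_000; i++) {
                    numbers.add(i);
                }
                // The stream carries class metadata and per-object records, so the
                // result is not automatically smaller than the in-heap structure.
                System.out.println(serializedSize(numbers) + " bytes serialized");
            }
        }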

    Read the article

  • Scalable way of doing self join with many to many table

    - by johnathan
    I have a table structure like the following:

        user
            id
            name

        profile_stat
            id
            name

        profile_stat_value
            id
            name

        user_profile
            user_id
            profile_stat_id
            profile_stat_value_id

    My question is: how do I evaluate a query where I want to find all users with profile_stat_id and profile_stat_value_id for many stats? I've tried doing an inner self-join, but that quickly gets crazy when searching for many stats. I've also tried doing a count on the actual user_profile table, and that's much better, but still slow. Is there some magic I'm missing? I have about 10 million rows in the user_profile table and want the query to take no longer than a few seconds. Is that possible?

    Read the article
