Search Results

Search found 1091 results on 44 pages for 'efficiency'.

Page 14/44

  • Log4J - Speed of resolving class/method/line references

    - by Jeach
    Does log4J still gather the class, method and line numbers by generating exceptions and inspecting the stack trace? Or has Java been optimized since Sun included its own logging framework? If not, why have no optimizations been made since? What are the main challenges in obtaining class, method and line numbers quickly and efficiently? Although I hate annotations and try to avoid them, could log4J not make use of them, such as:

        @log4j-class MyClass
        @log4j-method currentMethodOne

    At least this would avoid some companies' bad habit of repeatedly writing/copying the method name as the first part of their logging message (which is seriously annoying). Thanks, Jeach!
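    For reference, a minimal sketch of the usual JVM technique (which, as far as I know, is what log4j 1.x does internally): create a Throwable, walk its stack trace, and take the first frame past the logger class. The class below is hypothetical.

        // Hedged sketch: recover the caller's class/method/line from a stack trace.
        public final class CallerLocator {
            public static StackTraceElement locate(String loggerClassName) {
                StackTraceElement[] frames = new Throwable().getStackTrace();
                boolean seenLogger = false;
                for (StackTraceElement frame : frames) {
                    if (frame.getClassName().equals(loggerClassName)) {
                        seenLogger = true;       // still inside the logging framework
                    } else if (seenLogger) {
                        return frame;            // first frame past the logger is the caller
                    }
                }
                return null;                     // not found (e.g. stripped stack traces)
            }
        }

    The expensive part is materializing the whole stack trace on every log call, which is why frameworks only do this when the layout actually asks for location information.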

    Read the article

  • Efficiently store last X items in an MySQL Database

    - by Saif Bechan
    I want to store the last 3 items in a MySQL database in an efficient way, so that when the 4th item is stored the first is deleted. The way I do this now is to first run a query to get the existing items, then check what I should do, then insert/delete. There has to be a better way to do this. Any suggestions?
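    One hedged sketch, assuming an auto-increment id column on a hypothetical items table: insert first, then prune everything but the newest three in a single DELETE. The derived-table wrapper works around MySQL's restriction on targeting the same table you are deleting from.

        -- Hedged sketch: keep only the newest 3 rows after each insert.
        INSERT INTO items (payload) VALUES ('new item');

        DELETE FROM items
         WHERE id NOT IN (
             SELECT id FROM (
                 SELECT id FROM items ORDER BY id DESC LIMIT 3
             ) AS newest
         );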

    Read the article

  • How to make this JavaScript much faster?

    - by Ralph
    Still trying to answer this question, and I think I finally found a solution, but it runs too slow.

        var $div = $('<div>')
            .css({
                'border': '1px solid red',
                'position': 'absolute',
                'z-index': '65535'
            })
            .appendTo('body');

        $('body *').live('mousemove', function(e) {
            var topElement = null;
            $('body *').each(function() {
                if (this == $div[0]) return true;
                var $elem = $(this);
                var pos = $elem.offset();
                var width = $elem.width();
                var height = $elem.height();
                if (e.pageX > pos.left && e.pageY > pos.top &&
                    e.pageX < (pos.left + width) && e.pageY < (pos.top + height)) {
                    var zIndex = document.defaultView.getComputedStyle(this, null).getPropertyValue('z-index');
                    if (zIndex == 'auto') zIndex = $elem.parents().length;
                    if (topElement == null || zIndex > topElement.zIndex) {
                        topElement = { 'node': $elem, 'zIndex': zIndex };
                    }
                }
            });
            if (topElement != null) {
                var $elem = topElement.node;
                $div.offset($elem.offset()).width($elem.width()).height($elem.height());
            }
        });

    It basically loops through all the elements on the page and finds the top-most element beneath the cursor. Is there maybe some way I could use a quad-tree or something and segment the page so the loop runs faster?
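    A hedged alternative sketch: rather than scanning every element, let the browser do the hit test with document.elementFromPoint (note it takes viewport coordinates, not page coordinates), hiding the overlay first so it does not intercept its own test.

        $(document).mousemove(function(e) {
            $div.hide();                                   // keep the overlay out of its own hit test
            var el = document.elementFromPoint(e.clientX, e.clientY);
            $div.show();
            if (el && el !== document.body && el !== document.documentElement) {
                var $el = $(el);
                $div.offset($el.offset()).width($el.width()).height($el.height());
            }
        });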

    Read the article

  • C++ - Efficient way to iterate over the contents of a vector?

    - by Francisco P.
    Hello, everyone! I am implementing a text-based version of Scrabble for a college project. I have a vector containing around 400K strings (my dictionary), and, at some point in every turn, I'm going to have to check if any word in the dictionary can be formed with the pieces in the player's hand. My only solution to this is iterating through the strings, one by one, and using a sub-routine I have that checks whether the string in question can be formed from the player's pieces. I'll implement a quick-fail check on whether the player has any vowels, but it'll still be woefully inefficient. Any suggestions? Thanks for your time!
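    A hedged sketch of one common approach: precompute a 26-entry letter-count signature for every dictionary word once at load time, so each turn costs a 26-integer comparison per word instead of a character-by-character rescan (names below are illustrative).

        // Hedged sketch: a word is formable iff its letter counts fit in the hand's.
        #include <array>
        #include <string>

        using Counts = std::array<int, 26>;

        Counts countLetters(const std::string& s) {
            Counts c{};                        // zero-initialized
            for (char ch : s) ++c[ch - 'a'];   // assumes lowercase a-z input
            return c;
        }

        bool formable(const Counts& word, const Counts& hand) {
            for (int i = 0; i < 26; ++i)
                if (word[i] > hand[i]) return false;
            return true;
        }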

    Read the article

  • Calculating all distances between one point and a group of points efficiently in R

    - by dbarbosa
    Hi, First of all, I am new to R (I started yesterday). I have two groups of points, data and centers, the first one of size n and the second of size K (for instance, n = 3823 and K = 10), and for each i in the first set, I need to find the j in the second with the minimum distance. My idea is simple: for each i, let dist[j] be the distance between i and j; I only need to use which.min(dist) to find what I am looking for. Each point is an array of 64 doubles, so

        > dim(data)
        [1] 3823   64
        > dim(centers)
        [1]   10   64

    I have tried

        for (i in 1:n) {
            for (j in 1:K) {
                d[j] <- sqrt(sum((centers[j,] - data[i,])^2))
            }
            S[i] <- which.min(d)
        }

    which is extremely slow (with n = 200, it takes more than 40s!!). The fastest solution that I wrote is

        distance <- function(point, group) {
            return(dist(t(array(c(point, t(group)),
                                dim = c(ncol(group), 1 + nrow(group)))))[1:nrow(group)])
        }

        for (i in 1:n) {
            d <- distance(data[i,], centers)
            which.min(d)
        }

    Even if it does a lot of computation that I don't use (because dist(m) computes the distance between all rows of m), it is much faster than the other one (can anyone explain why?), but it is not fast enough for what I need, because it will not be used only once. And also, the distance code is very ugly. I tried to replace it with

        distance <- function(point, group) {
            return(dist(rbind(point, group))[1:nrow(group)])
        }

    but this seems to be twice as slow. I also tried to use dist for each pair, but that is also slower. I don't know what to do now. It seems like I am doing something very wrong. Any idea on how to do this more efficiently?

    PS: I need this to implement k-means by hand (and I need to do it, it is part of an assignment). I believe I will only need Euclidean distance, but I am not yet sure, so I would prefer some code where the distance computation can be replaced easily. stats::kmeans does all the computation in less than one second.
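    A hedged, fully vectorized sketch using the identity |a - b|^2 = |a|^2 + |b|^2 - 2*a.b, so all n x K squared distances come out of a single matrix product (the function name is illustrative):

        nearest_center <- function(data, centers) {
            dd <- rowSums(data^2)                                  # |a|^2, length n
            cc <- rowSums(centers^2)                               # |b|^2, length K
            d2 <- outer(dd, cc, "+") - 2 * (data %*% t(centers))   # n x K squared distances
            max.col(-d2)                                           # argmin per row
        }
        S <- nearest_center(data, centers)

    The explicit double loop is slow because every iteration is interpreted; the dist-based versions win by pushing work into compiled code, and this one goes further by doing one BLAS matrix multiply. Swapping in another metric means replacing only the d2 line.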

    Read the article

  • C++ struct sorting

    - by Betamoo
    I have a vector of a custom struct that needs to be sorted on different criteria each time. Implementing operator< allows only one criterion, but I want to be able to specify the sorting criterion each time I call the C++ standard sort. How can I do that? Please note it should be efficient in running time. Thanks
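    A hedged sketch: std::sort accepts any callable as a third argument, so each call can bring its own comparator (the struct and fields below are made up for illustration). Comparators are readily inlined, so there is normally no runtime penalty versus operator<.

        #include <algorithm>
        #include <vector>

        struct Item { int weight; double price; };   // hypothetical struct

        void sortBothWays(std::vector<Item>& v) {
            // Sort by weight, ascending.
            std::sort(v.begin(), v.end(),
                      [](const Item& a, const Item& b) { return a.weight < b.weight; });
            // Later: same vector, now by price, descending.
            std::sort(v.begin(), v.end(),
                      [](const Item& a, const Item& b) { return a.price > b.price; });
        }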

    Read the article

  • How to pass a file (read from Java) most effectively to a native method?

    - by soc
    Hi, I have approx. 30000 files (1 MB each) which I want to pass to a native method, which requires just a byte array and its size as arguments. I looked through some examples and benchmarks (like http://nadeausoftware.com/articles/2008/02/java_tip_how_read_files_quickly) but all of them do some other fancy things. Basically I don't care about the contents of the file; I don't want to access anything in the file or the byte array, or do anything else with it. I just want to get a file into a native method which accepts a byte array as fast as possible. At the moment I'm using RandomAccessFile, but that's horribly slow (10 MB/s). Is there anything like

        byte[] readTheWholeFile(File file) { ... }

    which I could put into

        native void fancyCMethod(readTheWholeFile(myFile), myFile.length())

    What would you suggest?
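    A hedged sketch of the straightforward bulk read (plain java.io, to match the surrounding code): allocate one buffer of the file's length and fill it with a single readFully call. The wrapper class name is hypothetical.

        import java.io.DataInputStream;
        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;

        final class FileSlurper {
            static byte[] readTheWholeFile(File file) throws IOException {
                byte[] buf = new byte[(int) file.length()];
                DataInputStream in = new DataInputStream(new FileInputStream(file));
                try {
                    in.readFully(buf);   // loops internally until buf is full
                } finally {
                    in.close();
                }
                return buf;
            }
        }

    If RandomAccessFile is slow, the usual culprit is many small read calls rather than the class itself; one bulk read per file should approach disk bandwidth.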

    Read the article

  • Fastest method in merging of the two: dicts vs lists

    - by tipu
    I'm doing some indexing, and memory is sufficient but CPU isn't. So I have one huge dictionary and then a smaller dictionary I'm merging into the bigger one:

        big_dict = {"the" : {"1" : 1, "2" : 1, "3" : 1, "4" : 1, "5" : 1}}
        smaller_dict = {"the" : {"6" : 1, "7" : 1}}
        # after merging
        resulting_dict = {"the" : {"1" : 1, "2" : 1, "3" : 1, "4" : 1, "5" : 1, "6" : 1, "7" : 1}}

    My question is, for the values in both dicts, should I use a dict (as displayed above) or a list (as displayed below) when my priority is to spend as much memory as it takes to get the most out of my CPU? For clarification, using a list would look like:

        big_dict = {"the" : [1, 2, 3, 4, 5]}
        smaller_dict = {"the" : [6, 7]}
        # after merging
        resulting_dict = {"the" : [1, 2, 3, 4, 5, 6, 7]}

    Side note: the reason I'm using a dict nested in a dict rather than a set nested in a dict is that JSON won't let me json.dumps a set, because a set isn't key/value pairs; it's (as far as the JSON library is concerned) {"a", "series", "of", "keys"}. Also, after choosing between a dict and a list, how would I go about implementing the most CPU-efficient method of merging them? I appreciate the help.
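    A hedged sketch of the dict-of-dicts merge: dict.update does the per-key union in one C-level call, which is typically the cheapest way to collapse duplicates.

        # Hedged sketch: merge smaller_dict into big_dict in place.
        for term, postings in smaller_dict.items():
            existing = big_dict.get(term)
            if existing is None:
                big_dict[term] = postings
            else:
                existing.update(postings)   # duplicate keys collapse automatically

    With lists you would have to concatenate and then de-duplicate (or scan for membership), which costs more CPU; the dict already acts as its own duplicate filter.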

    Read the article

  • C++ - How to efficiently find out if any string in a vector can be assembled from a set of letters

    - by Francisco P.
    Hello, everyone! I am implementing a text-based version of Scrabble for a college project. I have a vector containing around 400K strings (my dictionary), and, at some point in every turn, I'm going to have to check if there's still a word in the dictionary which can be formed with the pieces in the player's hand. I'm checking if the player has any move left... If not, it's game over for the player in question... My only solution to this is iterating through the strings, one by one, and using a sub-routine I have that checks whether the string in question can be formed from the player's pieces. I'll implement a quick-fail check on whether the player has any vowels, but it'll still be woefully inefficient. Any suggestions? Thanks for your time!

    Read the article

  • What's the most efficient way to combine two List(Of String)?

    - by Jason Towne
    Let's say I've got:

        Dim los1 As New List(Of String)
        los1.Add("Some value")

        Dim los2 As New List(Of String)
        los2.Add("More values")

    What would be the most efficient way to combine the two into a single List(Of String)? Edit: While I'm loving the solutions everyone has provided so far, I probably should also mention I'm stuck using the .NET 2.0 framework. Any other suggestions?
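    A hedged sketch that stays within .NET 2.0: List(Of T).AddRange copies a whole list in one call, and pre-sizing the destination avoids reallocations.

        ' Hedged sketch: combine los1 and los2 (both calls exist since .NET 2.0).
        Dim combined As New List(Of String)(los1.Count + los2.Count)
        combined.AddRange(los1)
        combined.AddRange(los2)

        ' Or, if mutating los1 is acceptable, append in place:
        los1.AddRange(los2)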

    Read the article

  • How do I most efficiently check the unique elements in a list?

    - by alex
    Let's say I have a list:

        li = [{'q': 'apple', 'code': '2B'},
              {'q': 'orange', 'code': '2A'},
              {'q': 'plum', 'code': '2A'}]

    What is the most efficient way to return the count of unique "codes" in this list? In this case the count is 2, because the only distinct codes are 2B and 2A. I could put everything in a list and compare, but is that really efficient?
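    A hedged sketch: pour the codes into a set, which de-duplicates in a single pass, and take its length.

        # Hedged sketch: count distinct 'code' values in one pass.
        unique_codes = len(set(d['code'] for d in li))   # -> 2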

    Read the article

  • How to efficiently get highest & lowest values from a List<double?>, and then modify them?

    - by DaveDev
    I have to get the sum of a list of doubles. If the sum is > 100, I have to decrement the highest number until the sum is exactly 100. If the sum is < 100, I have to increment the lowest number until the sum is exactly 100. I can do this by looping through the list, assigning the values to placeholder variables and testing which is higher or lower, but I'm wondering if any gurus out there could suggest a super cool and efficient way to do this? The code below basically outlines what I'm trying to achieve:

        var splitValues = new List<double?>();
        splitValues.Add(Math.Round(assetSplit.EquityTypeSplit() ?? 0));
        splitValues.Add(Math.Round(assetSplit.PropertyTypeSplit() ?? 0));
        splitValues.Add(Math.Round(assetSplit.FixedInterestTypeSplit() ?? 0));
        splitValues.Add(Math.Round(assetSplit.CashTypeSplit() ?? 0));

        var listSum = splitValues.Sum(split => split.Value);
        if (listSum != 100)
        {
            if (listSum > 100)
            {
                // how to get the highest value and decrement by 1 until listSum == 100,
                // then reassign back into the splitValues list?
                var highest = // ??
            }
            else
            {
                // how to get the lowest value that is > 0, and increment by 1 until listSum == 100,
                // then reassign back into the splitValues list?
                var lowest = // ??
            }
        }

    Update: the list has to remain in the same order as the items were added.
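    A hedged sketch, continuing with the question's own variables: find the index of the extreme entry on each pass and nudge it by 1. Modifying elements in place preserves the list order.

        // Hedged sketch: converge listSum to 100 without reordering the list.
        while (listSum > 100)
        {
            int hi = 0;                                  // index of current highest value
            for (int i = 1; i < splitValues.Count; i++)
                if (splitValues[i] > splitValues[hi]) hi = i;
            splitValues[hi] -= 1;
            listSum -= 1;
        }
        while (listSum < 100)
        {
            int lo = -1;                                 // index of lowest value above zero
            for (int i = 0; i < splitValues.Count; i++)
                if (splitValues[i] > 0 && (lo == -1 || splitValues[i] < splitValues[lo])) lo = i;
            if (lo == -1) break;                         // nothing positive to increment
            splitValues[lo] += 1;
            listSum += 1;
        }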

    Read the article

  • Do more specific css rules load better?

    - by bobobobo
    You can do this:

        .info { padding: 5px; }

    Or, if you know it will be a div, you can do this:

        div.info { padding: 5px; }

    So, when there's a nested list, you can do this:

        div.info ul.navbar li.navitem a.sitelink { color: #f00; }

    Or you can do this:

        a.sitelink { color: #f00; }

    Readability aside, which is better for the browser to parse/run?

    Read the article

  • Which SQL query is more efficient: select count(*) or select ... where key>value?

    - by davka
    I need to periodically update a local cache with new additions to some DB table. The table rows contain an auto-increment sequential number (SN) field. The cache keeps this number too, so basically I just need to fetch all rows with SN larger than the highest I already have:

        SELECT * FROM table WHERE SN > <max_cached_SN>

    However, the majority of the attempts will bring no data (I just need to make sure that I have an absolutely up-to-date local copy). So I wonder if this will be more efficient:

        count = SELECT count(*) FROM table;
        if (count > <cache_size>)
            // fetch new rows as above

    I suppose that selecting by an indexed numeric field is quite efficient, so I wonder whether using count has any benefit. On the other hand, this test/update will be done quite frequently and by many clients, so there is a motivation to optimize it.
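    A hedged sketch of a third option: with an index on SN, probing for a single newer row is a cheap existence test, whereas COUNT(*) may have to scan far more (and can miss changes if rows are ever deleted).

        -- Hedged sketch: any row returned means "there is something to fetch".
        -- (LIMIT spelling varies by database, e.g. FETCH FIRST 1 ROW ONLY.)
        SELECT 1 FROM table WHERE SN > <max_cached_SN> LIMIT 1;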

    Read the article

  • Is it faster to count down than it is to count up?

    - by Bob
    Our computer science teacher once said that for some reason it is more efficient to count down than to count up. For example, if you need to use a FOR loop and the loop index is not used somewhere (like printing a line of N * to the screen), I mean that code like this:

        for (i = N; i > 0; i--)
            putchar('*');

    is better than:

        for (i = 0; i < N; i++)
            putchar('*');

    Is it really true? And if so, does anyone know why?
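    For what it's worth, a hedged illustration of the usual argument: a comparison against zero can reuse the flags the decrement already set, while counting up needs an explicit compare against N each pass. On modern compilers and CPUs the difference is almost always optimized away or negligible.

        #include <stdio.h>

        int main(void) {
            int i, N = 10;

            /* Countdown: on many CPUs the decrement sets the zero flag,
               so the branch needs no separate compare instruction. */
            for (i = N; i != 0; i--)
                putchar('*');
            putchar('\n');

            /* Count up: each iteration typically needs an explicit
               compare of i against N before the branch. */
            for (i = 0; i < N; i++)
                putchar('*');
            putchar('\n');

            return 0;
        }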

    Read the article

  • Which of these queries is more efficient?

    - by Brian
    Which of these queries is more efficient?

        select 1 as newAndClosed
        from sysibm.sysdummy1
        where exists ( select 1 from items where new = 1 )
          and not exists ( select 1 from status where open = 1 )

        select 1 as newAndClosed
        from items
        where new = 1
          and not exists ( select 1 from status where open = 1 )

    Read the article

  • MySQL: Return grouped fields where the group is not empty, efficiently

    - by Ryan Badour
    In one statement I'm trying to group rows of one table by joining to another table. I want to only get grouped rows whose group is not empty. E.g. Items and Categories:

        SELECT Category.id
        FROM Item, Category
        WHERE Category.id = Item.categoryId
        GROUP BY Category.id
        HAVING COUNT(Item.id) > 0

    The above query gives me the results that I want, but it is slow, since it has to count all the rows grouped by Category.id. What's a more efficient way? I was trying to do a GROUP BY with LIMIT to only retrieve one row per group, but my attempts failed horribly. Any idea how I can do this? Thanks
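    A hedged sketch: with an inner join, every category that survives the join already has at least one item, so the HAVING count is redundant and DISTINCT alone suffices; an EXISTS subquery is another common spelling that can stop at the first match.

        -- Hedged sketch: categories having at least one item, no counting needed.
        SELECT DISTINCT Category.id
        FROM Category
        JOIN Item ON Item.categoryId = Category.id;

        -- Equivalent with EXISTS:
        SELECT Category.id
        FROM Category
        WHERE EXISTS (SELECT 1 FROM Item WHERE Item.categoryId = Category.id);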

    Read the article

  • Does the use of debuggers have an effect on the efficiency of programmers? [closed]

    - by alain.janinm
    Possible Duplicate: Are debugging skills important to become a good programmer?

    I'm a young Java developer and I make systematic use of the NetBeans debugger. In fact, I often develop my applications by stepping through them in the debugger, in order to see immediately whether my code works. I feel I spend a lot of time programming this way, because the debugger increases execution time and I often wait for my app to jump from one breakpoint to another (so much so that I've had the time to ask this question). I never learned to use a debugger at school, but at work I was told immediately to use this functionality. I started teaching myself to use it two years ago, and I've never been given any key tips about it. I'd like to know if there are some rules to follow in order to use the debugger efficiently. I'm also wondering whether using the debugger is actually good practice, or whether it is a waste of time and I have to stop this bad habit now.

    Read the article

  • How can I separate the user interface from the business logic while still maintaining efficiency?

    - by Uri
    Let's say that I want to show a form that represents 10 different objects in a combobox. For example, I want the user to pick one hamburger from 10 different ones that contain tomatoes. Since I want to separate UI and logic, I'd have to pass the form a string representation of the hamburgers in order to display them in the combobox; otherwise, the UI would have to dig into the objects' fields. Then the user would pick a hamburger from the combobox and submit it back to the controller. Now the controller would have to find that hamburger again based on the string representation used by the form (maybe an ID?). Isn't that incredibly inefficient? You already had the objects you wanted to pick from. If you submitted the whole objects to the form, and it then returned a specific object, you wouldn't have to find it again, since the form already returned a reference to that object. Moreover, if I'm wrong and you actually should send the whole objects to the form, how can I isolate UI from logic?
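    A hedged sketch of the usual compromise (all names hypothetical): the view gets only (id, label) pairs, the controller keeps the real objects in a map, and resolving the user's pick is a constant-time lookup rather than a search, so the "re-finding" cost is negligible.

        import java.util.LinkedHashMap;
        import java.util.Map;

        class Hamburger {
            final int id;
            final String name;
            Hamburger(int id, String name) { this.id = id; this.name = name; }
        }

        class HamburgerController {
            private final Map<Integer, Hamburger> byId = new LinkedHashMap<>();

            void offer(Hamburger h) { byId.put(h.id, h); }

            // The view displays name per id; the submitted form carries only the id.
            Hamburger resolvePick(int pickedId) { return byId.get(pickedId); }
        }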

    Read the article

  • haproxy: Is there a way to group acls for greater efficiency?

    - by user41356
    I have some logic in a frontend that routes to different backends based on both the host and the URL. Logically it looks like this:

        if hdr(host) ends with 'a.domain.com':
            if url starts with '/dir1/':
                use backend domain.com/dir1/
            elif url starts with '/dir2/':
                use backend domain.com/dir2/
            # ... else-if ladder repeats on different dirs
        elif hdr(host) ends with 'b.domain.com':
            # another else-if ladder exactly the same as above
            # ...
        # ... else-if ladder repeats like this on different domains

    Is there a way to group acls to avoid having to repeatedly check the domain acl? Obviously there needs to be a use backend statement for each possibility, but I don't want to have to check the domain over and over, because it's very inefficient. In other words, I want to avoid this:

        use backend domain.com/url1/ if acl-domain.com and acl-url1
        use backend domain.com/url2/ if acl-domain.com and acl-url2
        use backend domain.com/url3/ if acl-domain.com and acl-url3
        # tons more possibilities below

    because it has to keep checking acl-domain.com. This is particularly an issue because I have specific rules for subdomains such as a.domain.com and b.domain.com, but I want to fall back on the most common case of *.domain.com. That means every single rule that uses a specific subdomain must be checked prior to *.domain.com, which makes it even more inefficient for the common case.

    Read the article
