Search Results

Search found 3706 results on 149 pages for 'nano optimization'.

  • Fastest way to compare Objects of type DateTime

    - by radbyx
    I made this. Is this the fastest way to find the latest DateTime in my collection of DateTimes? I'm wondering if there is a method for what I'm doing inside the foreach, but even if there is, I can't see how it can be faster than what I already have.

        List<StateLog> stateLogs = db.StateLog.Where(p => p.ProductID == product.ProductID).ToList();
        DateTime lastTimeStamp = DateTime.MinValue;
        foreach (var stateLog in stateLogs)
        {
            int result = DateTime.Compare(lastTimeStamp, stateLog.TimeStamp);
            if (result < 0)
                lastTimeStamp = stateLog.TimeStamp; // set because this timestamp is later
        }
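    For what it's worth, a hedged sketch of the same scan expressed with LINQ's Max (equivalent O(n) work, arguably clearer), plus a variant that asks the database for the maximum so the rows never have to be materialised at all. Both assume the types from the question and a using System.Linq directive.

        // Same linear scan as the foreach, via Enumerable.Max
        // (Max throws on an empty sequence, hence the guard):
        DateTime lastTimeStamp = stateLogs.Count > 0
            ? stateLogs.Max(s => s.TimeStamp)
            : DateTime.MinValue;

        // Or push the aggregation into the query and skip ToList() entirely,
        // assuming the provider can translate Max server-side:
        DateTime latest = db.StateLog
            .Where(p => p.ProductID == product.ProductID)
            .Max(p => p.TimeStamp);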

  • Optimizing near-duplicate value search

    - by GApple
    I'm trying to find near-duplicate values in a set of fields in order to allow an administrator to clean them up. There are two criteria that I am matching on:

    1. One string is wholly contained within the other, and is at least 1/4 of its length.
    2. The strings have an edit distance less than 5% of the total length of the two strings.

    The pseudo-PHP code:

        foreach ($values as $value) {
            foreach ($values as $match) {
                if (
                    (
                        $value['length'] < $match['length']
                        && $value['length'] * 4 > $match['length']
                        && stripos($match['value'], $value['value']) !== false
                    ) || (
                        $match['length'] < $value['length']
                        && $match['length'] * 4 > $value['length']
                        && stripos($value['value'], $match['value']) !== false
                    ) || (
                        abs($value['length'] - $match['length']) * 20 < ($value['length'] + $match['length'])
                        && 0 < ($match['changes'] = levenshtein($value['value'], $match['value']))
                        && $match['changes'] * 20 <= ($value['length'] + $match['length'])
                    )
                ) {
                    $matches[] = &$match;
                }
            }
        }

    I've tried to reduce calls to the comparatively expensive stripos and levenshtein functions where possible, which has reduced the execution time quite a bit. However, as an O(n^2) operation this just doesn't scale to the larger sets of values, and it seems that a significant amount of the processing time is spent simply iterating through the arrays. Some properties of a few sets of values being operated on:

        Total   | Strings      | # of matches per string         |
        Strings | With Matches | Average | Median | Max | Time (s)
        --------+--------------+---------+--------+-----+---------
        844     | 413          | 1.8     | 1      | 58  | 140
        593     | 156          | 1.2     | 1      | 5   | 62
        272     | 168          | 3.2     | 2      | 26  | 10
        157     | 47           | 1.5     | 1      | 4   | 3.2
        106     | 48           | 1.8     | 1      | 8   | 1.3
        62      | 47           | 2.9     | 2      | 16  | 0.4

    Are there any other things I can do to reduce the time to check criteria, and more importantly, are there any ways for me to reduce the number of criteria checks required (for example, by pre-processing the input values), since there is such low selectivity?
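    One pre-processing idea, sketched here in C# with the pair test injected as a hypothetical predicate (the idea is language-neutral): sort the strings by length once, then scan each string only against the window of longer candidates that can still satisfy either criterion. Containment requires the longer string to be under 4x the shorter one's length, and the edit-distance bound is stricter still, so the inner loop can stop early; each pair is also visited once instead of twice.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static List<(string A, string B)> FindNearDuplicates(
            IEnumerable<string> values,
            Func<string, string, bool> isNearDuplicate) // the two checks above
        {
            var sorted = values.OrderBy(v => v.Length).ToList();
            var matches = new List<(string, string)>();
            for (int i = 0; i < sorted.Count; i++)
            {
                for (int j = i + 1; j < sorted.Count; j++)
                {
                    // sorted[j] is never shorter than sorted[i]; once it is
                    // 4x as long, no later candidate can match either criterion.
                    if (sorted[j].Length >= sorted[i].Length * 4) break;
                    if (isNearDuplicate(sorted[i], sorted[j]))
                        matches.Add((sorted[i], sorted[j]));
                }
            }
            return matches;
        }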

  • Optimizing PHP code (trying to determine min/max/between case)

    - by Swizzh
    I know this bit of code does not conform very well to best coding practices, and I was looking to improve it. Any ideas?

        if ($query['date_min'] != _get_date_today()) $mode_min = true;
        if ($query['date_max'] != _get_date_today()) $mode_max = true;

        if ($mode_max && $mode_min) $mode = "between";
        elseif ($mode_max && !$mode_min) $mode = "max";
        elseif (!$mode_max && $mode_min) $mode = "min";
        else return;

        if ($mode == "min" || $mode == "between") {
            $command_min = "A";
        }
        if ($mode == "max" || $mode == "between") {
            $command_max = "B";
        }

        if ($mode == "between") {
            $command = $command_min . " AND " . $command_max;
        } else {
            if ($mode == "min") $command = $command_min;
            if ($mode == "max") $command = $command_max;
        }

        echo $command;

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "insert ... on duplicate key update" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

        CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
        DECLARE
            member_content_type_id INTEGER;
        BEGIN
            member_content_type_id := (SELECT id FROM django_content_type
                                       WHERE app_label = 'web' AND model = 'member');

            DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;

            INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                            object_id_int, title, description, content,
                                            url, meta_encoded)
            SELECT 'default', member_content_type_id, web_member.id, web_member.id,
                   web_member.name, '',
                   web_user.email || ' ' || web_member.normalized_name || ' ' || web_country.name,
                   '', '{}'
            FROM web_member
            INNER JOIN web_user ON (web_member.user_id = web_user.id)
            INNER JOIN web_country ON (web_member.country_id = web_country.id)
            WHERE web_user.is_active = TRUE;
        END;
        $$ LANGUAGE plpgsql;

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) is a unique pair in watson_searchentry, but at the moment the index is not present (there is no use for it). This script should be run at most once a day, for full rebuilds of the search index.

  • Need help optimizing this Django aggregate query

    - by Chris Lawlor
    I have the following model, which represents a plugin that can be downloaded from my site:

        class Plugin(models.Model):
            name = models.CharField(max_length=50)
            # more fields

    To track downloads, I have:

        class Download(models.Model):
            plugin = models.ForeignKey(Plugin)
            timestamp = models.DateTimeField(auto_now=True)

    So to build a view showing plugins sorted by downloads, I have the following query:

        # pbd is plugins by download - commented here to prevent scrolling
        pbd = Plugin.objects.annotate(dl_total=Count('download')).order_by('-dl_total')

    Which works, but is very slow. With only 1,000 plugins, the avg. response is 3.6 - 3.9 seconds (devserver with local PostgreSQL db), whereas a similar view with a much simpler query (sorting by plugin release date) takes 160 ms or so. I'm looking for suggestions on how to optimize this query. I'd really prefer that the query return Plugin objects (as opposed to using values), since I'm sharing the same template with the other views (plugins by rating, plugins by release date, etc.), so the template expects Plugin objects - plus I'm not sure how I would get things like the absolute_url without a reference to the plugin object. Or is my whole approach doomed to failure? Is there a better way to track downloads? I ultimately want to provide users some nice download statistics for the plugins they've uploaded - like downloads per day/week/month. Will I have to calculate and cache downloads at some point? EDIT: In my test dataset, there are somewhere between 10-20 Download instances per Plugin - in production I expect this number would be much higher for many of the plugins.

  • Optimizing GDI+ drawing?

    - by user146780
    I'm using C++ and GDI+. I'm going to be making a vector drawing application and want to use GDI+ for the drawing. I've created a simple test to get familiar with it:

        case WM_PAINT:
            GetCursorPos(&mouse);
            GetClientRect(hWnd, &rct);
            hdc = BeginPaint(hWnd, &ps);
            MemDC = CreateCompatibleDC(hdc);
            bmp = CreateCompatibleBitmap(hdc, 600, 600);
            SelectObject(MemDC, bmp);
            g = new Graphics(MemDC);

            for (int i = 0; i < 1; ++i)
            {
                SolidBrush sb(Color(255, 255, 255));
                g->FillRectangle(&sb, rct.top, rct.left, rct.right, rct.bottom);
            }

            for (int i = 0; i < 250; ++i)
            {
                pts[0].X = 0;
                pts[0].Y = 0;
                pts[1].X = 10 + mouse.x * i;
                pts[1].Y = 0 + mouse.y * i;
                pts[2].X = 10 * i + mouse.x;
                pts[2].Y = 10 + mouse.y * i;
                pts[3].X = 0 + mouse.x;
                pts[3].Y = (rand() % 600) + mouse.y;

                Point p1, p2;
                p1.X = 0;
                p1.Y = 0;
                p2.X = 300;
                p2.Y = 300;

                g->FillPolygon(&b, pts, 4); // b: a brush declared elsewhere
            }

            BitBlt(hdc, 0, 0, 900, 900, MemDC, 0, 0, SRCCOPY);
            EndPaint(hWnd, &ps);
            DeleteObject(bmp);
            g->ReleaseHDC(MemDC);
            DeleteDC(MemDC);
            delete g;
            break;

    I'm wondering if I'm doing it right, or if I have areas that are killing the CPU, because right now it takes about a second to render this and I want it to be able to redraw itself very quickly. Thanks. In a real situation, would it be better just to figure out the portion of the screen to redraw and only redraw the elements within the bounds of it?

  • Read large amount of data from file in Java

    - by Crozin
    Hello, I've got a text file that contains 1 000 002 numbers in the following format:

        123 456
        1 2 3 4 5 6 .... 999999 100000

    Now I need to read that data and allocate it to int variables (the very first two numbers) and all the rest (1 000 000 numbers) to an int[] array. It's not a hard task, but it's horribly slow. My first attempt was java.util.Scanner:

        Scanner stdin = new Scanner(new File("./path"));
        int n = stdin.nextInt();
        int t = stdin.nextInt();
        int[] array = new int[n];
        for (int i = 0; i < n; i++) {
            array[i] = stdin.nextInt();
        }

    It works as expected, but it takes about 7500 ms to execute, and I need to fetch that data in at most several hundred milliseconds. Then I tried java.io.BufferedReader: using BufferedReader.readLine() and String.split() I got the same results in about 1700 ms, but that's still too slow. How can I read that amount of data in less than 1 second? The final result should be equal to:

        int n = 123;
        int t = 456;
        int[] array = { 1, 2, 3, 4, ..., 999999, 100000 };

  • Results from two queries at once in sqlite?

    - by SF.
    I'm currently trying to optimize the sluggish process of retrieving a page of log entries from the SQLite database. I noticed that I almost always retrieve the next entries along with a count of available entries:

        SELECT time, level, type, text FROM Logs
        WHERE level IN (%s)
        ORDER BY time DESC, id DESC
        LIMIT LOG_REQ_LINES OFFSET %d * LOG_REQ_LINES;

    together with the total count of records that can match the current query (for a "page n of m" display):

        SELECT count(*) FROM Logs WHERE level IN (%s);

    I wonder if I could concatenate the two queries and run them both in one sqlite3_exec() by simply concatenating the query strings. How should my callback function look then? Can I distinguish between the different types of data by argc? What other optimizations would you suggest?

  • super-space-optimized code

    - by Will
    There are key self-contained algorithms - particularly cryptography-related ones such as AES, RSA, SHA1, etc. - which you can find many implementations of for free on the internet. Some are written to be nice, portable, clean C. Some are written to be fast - often with macros and explicit unrolling. As far as I can tell, none are trying to be especially super-small - so I'm resigned to writing my own: specifically, AES128 decryption and SHA1 for ARM THUMB2. What patterns and tricks can I use to do so? Are there compilers/tools that can roll up code?

  • Optimize jQuery code

    - by Dannemannen
    Greetings, I just built some stuff with jQuery. Everything works perfectly(!), but I would like it to be as optimized as possible. What small changes can I make to my code?

        $(document).ready(function() {
            // hide the indicator, we use it later
            $(".indicator").hide();

            // start the animation of the progress bar
            $(".fill").animate({ width: "50px" }, 4000, function() {
                $(".indicator").effect("pulsate", { times: 999 }, 2000);
            });

            // notify-me ajax function
            $(".btn-submit").click(function() {
                // get the email and put it in a new variable
                var email = $("input#mail").val();
                var dataString = 'mail=' + email;

                $.ajax({
                    type: "POST",
                    url: "/mail.php",
                    data: dataString,
                    dataType: "json",
                    success: function(msg) {
                        // JSON returned, let's do some magic
                        if (msg.status == "ok") {
                            $("#response-box").fadeIn("slow").delay(2000).fadeOut("slow");
                            $("#fade").fadeIn("slow").delay(2000).fadeOut("slow");
                            $("#response-box .inner").html("<h1>Thank you.</h1>We'll keep in touch!");
                            $("#mail").val("e.g. [email protected]");
                        } else {
                            $("#response-box").fadeIn("slow").delay(2000).fadeOut("slow");
                            $("#fade").fadeIn("slow").delay(2000).fadeOut("slow");
                            $("#response-box .inner").html("<h1>Oops.</h1>Please try again!");
                        }
                    }
                });

                // make sure the form doesn't post
                return false;
            });
        });

  • Speeding up Math calculations in Java

    - by Simon
    I have a neural network written in Java which uses a sigmoid transfer function defined as follows:

        private static double sigmoid(double x) {
            return 1 / (1 + Math.exp(-x));
        }

    This is called many times during training and computation using the network. Is there any way of speeding this up? It's not that it's slow, it's just that it is used a lot, so a small optimisation here would be a big overall gain.
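    One common trick, sketched below in C# purely for illustration (the idea carries over to Java unchanged, since Math.exp behaves the same): precompute the sigmoid into a lookup table over the range where it differs meaningfully from 0 and 1, and clamp outside it. This trades a little accuracy for speed, which may or may not be acceptable for a given network.

        using System;

        // Sketch: tabulated sigmoid over [-8, 8] with nearest-bucket lookup.
        // Outside that range the true sigmoid is already within ~3e-4 of its
        // limits, so clamping to 0 or 1 costs very little accuracy.
        static class FastSigmoid
        {
            const double Range = 8.0;
            const int Steps = 4096;
            static readonly double[] Table = BuildTable();

            static double[] BuildTable()
            {
                var t = new double[Steps + 1];
                for (int i = 0; i <= Steps; i++)
                {
                    double x = -Range + 2 * Range * i / Steps;
                    t[i] = 1.0 / (1.0 + Math.Exp(-x));
                }
                return t;
            }

            public static double Value(double x)
            {
                if (x <= -Range) return 0.0;
                if (x >= Range) return 1.0;
                int i = (int)((x + Range) * (Steps / (2 * Range)));
                return Table[i]; // interpolate between buckets for more accuracy
            }
        }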

  • MySql product\tag query optimisation - please help!

    - by Nige
    Hi there, I have an SQL query I am struggling to optimise. It is used to pull back products for a shopping cart. The products each have tags attached via a many-to-many product_tag table, and I also pull back a store name from a separate store table. I'm using GROUP_CONCAT to get a list of tags for the display (this is why I have the strange group-by/order-by clauses at the bottom), and I need to order by dateadded, showing the latest scheduled product first. Here is the query:

        SELECT products.*, stores.name,
               GROUP_CONCAT(tags.taglabel ORDER BY tags.id ASC SEPARATOR " ") taglist
        FROM (products)
        JOIN product_tag ON products.id = product_tag.productid
        JOIN tags ON tags.id = product_tag.tagid
        JOIN stores ON products.cid = stores.siteid
        WHERE dateadded < '2010-05-28 07:55:41'
        GROUP BY products.id ASC
        ORDER BY products.dateadded DESC
        LIMIT 2

    Unfortunately, even with a small set of data (3 tags and about 12 products) the query is taking 00.0034 seconds to run. Eventually I want to have about 2000 products and 50 tags in this system (I'm guessing this will be very slooooow). Here is the EXPLAIN:

        id | select_type | table       | type   | possible_keys   | key     | key_len | ref                            | rows | Extra
        1  | SIMPLE      | tags        | ALL    | PRIMARY         | NULL    | NULL    | NULL                           | 4    | Using temporary; Using filesort
        1  | SIMPLE      | product_tag | ref    | tagid,productid | tagid   | 4       | cs_final.tags.id               | 2    |
        1  | SIMPLE      | products    | eq_ref | PRIMARY,cid     | PRIMARY | 4       | cs_final.product_tag.productid | 1    | Using where
        1  | SIMPLE      | stores      | ALL    | siteid          | NULL    | NULL    | NULL                           | 7    | Using where; Using join buffer

    Can anyone help?

  • How to optimize my game calendar in C#?

    - by MartyIX
    Hi, I've implemented a simple calendar (message system) for my game, which consists of:

    1) A list of events:

        List<Event> calendar;

    2) The Event class:

        public class Event
        {
            /// <summary>When to process the event</summary>
            public Int64 when;

            /// <summary>Which object should process the event</summary>
            public GameObject who;

            /// <summary>Type of event</summary>
            public EventType what;

            public int posX;
            public int posY;
            public int EventID;
        }

    3) Adding events:

        calendar.Add(new Event(...));

    The problem with this code is that, even though the number of messages per second is not excessive, it still allocates new memory that the GC will eventually need to collect. The garbage collection may lead to a slight lag in my game, and therefore I'd like to optimize my code. My considerations:

    - Change the Event class into a structure - but the structure is not entirely small, and it takes some time to copy it wherever I need it.
    - Reuse Event objects somehow (keep a queue of used events, and take from this queue when a new event is needed); a sketch of this idea follows below.

    Does anybody have another idea of how to solve the problem? Thanks for suggestions!
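    The reuse idea is essentially an object pool. A minimal sketch (hypothetical names; it assumes Event gets a parameterless constructor and that callers re-initialise the fields after acquiring an instance):

        using System.Collections.Generic;

        // Recycles Event instances so steady-state operation allocates nothing.
        public class EventPool
        {
            private readonly Stack<Event> free = new Stack<Event>();

            public Event Acquire()
            {
                // reuse a spent event if one is available, otherwise allocate
                return free.Count > 0 ? free.Pop() : new Event();
            }

            public void Release(Event e)
            {
                free.Push(e); // hand the processed event back for reuse
            }
        }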

  • Find all A^x in a given range

    - by Austin Henley
    I need to find all monomials of the form A^X that, when evaluated, fall within a range from m to n. It is safe to say that the base A is greater than 1, the power X is greater than 2, and only integers need to be used. For example, in the range 50 to 100, the solutions would be:

        2^6
        3^4
        4^3

    My first attempt to solve this was to brute force all combinations of A and X that make "sense". However, this becomes too slow when used for very large numbers in a big range, since these solutions are used as part of much more intensive processing. Here is the code:

        def monoSearch(min, max):
            base = 2
            power = 3
            while 1:
                while base**power < max:
                    if base**power > min:
                        print "Found " + repr(base) + "^" + repr(power) + " = " + repr(base**power)
                    power = power + 1
                base = base + 1
                power = 3
                if base**power > max:
                    break

    I could remove one base**power by saving the value in a temporary variable, but I don't think that would have a drastic effect. I also wondered if using logarithms would be better, or if there is a closed-form expression for this. I am open to any optimizations or alternatives for finding the solutions.
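    A hedged sketch of one alternative, in C# (the original is Python, but the approach is language-neutral): iterate bases only while a^3 <= n, and grow each base's power by a single multiplication per step instead of recomputing base**power, stopping the inner loop as soon as the next multiply would pass n.

        using System;

        static void MonoSearch(long m, long n)
        {
            // Bases beyond the cube root of n cannot produce a^x <= n with x > 2.
            for (long a = 2; a * a * a <= n; a++)
            {
                long v = a * a * a; // start at the smallest allowed power, x = 3
                for (int x = 3; ; x++)
                {
                    if (v >= m)
                        Console.WriteLine($"{a}^{x} = {v}");
                    if (v > n / a) break; // next multiplication would exceed n
                    v *= a;
                }
            }
        }

    For the question's example range (m = 50, n = 100) this prints 2^6 = 64, 3^4 = 81, and 4^3 = 64.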

  • Why does the order of the loops affect performance when iterating over a 2D array? [closed]

    - by Mark
    Possible Duplicate: Which of these two for loops is more efficient in terms of time and cache performance

    Below are two programs that are almost identical, except that I switched the i and j variables around. They both run in different amounts of time. Could someone explain why this happens?

    Version 1

        #include <stdio.h>
        #include <stdlib.h>

        int main()
        {
            int i, j;
            static int x[4000][4000];
            for (i = 0; i < 4000; i++) {
                for (j = 0; j < 4000; j++) {
                    x[j][i] = i + j;
                }
            }
        }

    Version 2

        #include <stdio.h>
        #include <stdlib.h>

        int main()
        {
            int i, j;
            static int x[4000][4000];
            for (j = 0; j < 4000; j++) {
                for (i = 0; i < 4000; i++) {
                    x[j][i] = i + j;
                }
            }
        }

  • Optimising ruby regexp -- lots of match groups

    - by Farcaller
    I'm working on a Ruby-based lexer. To improve performance, I joined all the tokens' regexps into one big regexp with named match groups. The resulting regexp looks like:

        /\A(?<__anonymous_-1038694222803470993>(?-mix:\n+))|\A(?<__anonymous_-1394418499721420065>(?-mix:\/\/[\A\n]*))|\A(?<__anonymous_3077187815313752157>(?-mix:include\s+"[\A"]+"))|\A(?<LET>(?-mix:let\s))|\A(?<IN>(?-mix:in\s))|\A(?<CLASS>(?-mix:class\s))|\A(?<DEF>(?-mix:def\s))|\A(?<DEFM>(?-mix:defm\s))|\A(?<MULTICLASS>(?-mix:multiclass\s))|\A(?<FUNCNAME>(?-mix:![a-zA-Z_][a-zA-Z0-9_]*))|\A(?<ID>(?-mix:[a-zA-Z_][a-zA-Z0-9_]*))|\A(?<STRING>(?-mix:"[\A"]*"))|\A(?<NUMBER>(?-mix:[0-9]+))/

    I'm matching it against my string, producing a MatchData where exactly one token is parsed:

        bigregex =~ "\n ... garbage"
        puts $~.inspect

    Which outputs:

        #<MatchData "\n" __anonymous_-1038694222803470993:"\n" __anonymous_-1394418499721420065:nil __anonymous_3077187815313752157:nil LET:nil IN:nil CLASS:nil DEF:nil DEFM:nil MULTICLASS:nil FUNCNAME:nil ID:nil STRING:nil NUMBER:nil>

    So the regex actually matched the "\n" part. Now I need to figure out the match group where it belongs (it's clearly visible from the #inspect output that it's __anonymous_-1038694222803470993, but I need to get it programmatically). I could not find any option other than iterating over #names:

        m.names.each do |n|
          if m[n]
            type = n.to_sym
            resolved_type = (n.start_with?('__anonymous_') ? nil : type)
            val = m[n]
            break
          end
        end

    which verifies that the match group did have a match. The problem here is that it's slow (I spend about 10% of the time in the loop, and another 8% grabbing @input[@pos..-1] to make sure that \A matches the start of the string as expected - I do not discard input, just shift @pos within it). You can check the full code at the GH repo. Any ideas on how to make it at least a bit faster? Is there any option to figure out the "successful" match group more easily?

  • PHP shell_exec() - Run directly, or perform a cron (bash/php) and include MySQL layer?

    - by Jimbo
    Sorry if the title is vague - I wasn't quite sure how to word it!

    What I'm Doing

    I'm running a Linux command to output data into a variable, parsing the data, and outputting it as an array. Array values will be displayed on a page using PHP, and this PHP page's output is requested via AJAX every 10 seconds, so, in effect, the data is retrieved and displayed/updated every 10 seconds. There could be as many as 10,000 characters being parsed on every request, although this is usually much lower.

    Alternative Idea

    I want to know if there is a better* alternative method of retrieving this data every 10 seconds, as multiple users (<10) will be having this command executed automatically for them. A cronjob running on the server could execute either bash or PHP (which is faster?) to grab the data and store it in a MySQL database. Then, any AJAX calls to the PHP output would return values from the MySQL database rather than making a direct call to execute server code every 10 seconds.

    Why?

    I know there are security concerns with running execs directly from PHP, and (I hope this isn't micro-optimisation) I'm worried about CPU usage on the server. The server is running a Sempron processor. Yes, they do still exist. Having this only execute when the user is on the page (idea #1) means that the server isn't running code that doesn't need to be run. However, is this slow and insecure? Just in case the type of Linux command may be of assistance in determining its efficiency:

        shell_exec("transmission-remote $host:$port --auth $username:$password -l");

    I'm hoping that there are differences in efficiency and level of security between the two methods I have outlined above, and that this isn't just micro-micro-optimisation. If there are alternative methods that are better*, I'd love to learn about them! :)

  • Graph search problem with route restrictions

    - by Darcara
    I want to calculate the most profitable route, and I think this is a type of travelling salesman problem. I have a set of nodes that I can visit, a function to calculate the cost of travelling between nodes, and points for reaching the nodes. The goal is to reach a fixed, known score while minimizing the cost. The costs and rewards are not fixed and depend on the nodes visited before. The starting node is fixed. There are some restrictions on how nodes can be visited. Some simplified examples include:

    - Node B can only be visited after A.
    - After node C has been visited, D or E can be visited. Visiting at least one is required; visiting both is permissible.
    - Z can only be visited after at least 5 other nodes have been visited.
    - Once 50 nodes have been visited, the nodes A-M will no longer reward points.
    - Certain nodes can (and probably must) be visited multiple times.

    Currently I can think of only two ways to solve this:

    a) Genetic algorithms, with the fitness function calculating the cost/benefit of the generated route.
    b) A Dijkstra search through the graph, since the starting node is fixed - although the large number of nodes will probably make that infeasible memory-wise.

    Are there any other ways to determine the best route through the graph? It doesn't need to be perfect; an approximated path is perfectly fine, as long as its error is acceptable. Would TSP solvers be an option here?

  • JavaScript Optimisation

    - by Jayie
    I am using JavaScript to work out all the combinations of badminton doubles matches from a given list of players. Each player teams up with everyone else. E.g., for the players a, b, c and d, the combinations are:

        a & b v c & d
        a & c v b & d
        a & d v b & c

    I am using the code below, which I wrote to do the job, but it's a little inefficient. It loops through the PLAYERS array 4 times, finding every single combination (including impossible ones). It then sorts the game into alphabetical order and stores it in the GAMES array if it doesn't already exist. I can then use the first half of the GAMES array to list all game combinations. The trouble is that if I have any more than 8 players it runs really slowly, because the combination growth is exponential. Does anyone know a better way or algorithm I could use? The more I think about it, the more my brain hurts!

        var PLAYERS = ["a", "b", "c", "d", "e", "f", "g"];
        var GAMES = [];
        var p1, p2, p3, p4, i1, i2, i3, i4, entry, found, i;
        var pos = 0;
        var TEAM1 = [];
        var TEAM2 = [];

        // loop through players 4 times to get all combinations
        for (i1 = 0; i1 < PLAYERS.length; i1++) {
            p1 = PLAYERS[i1];
            for (i2 = 0; i2 < PLAYERS.length; i2++) {
                p2 = PLAYERS[i2];
                for (i3 = 0; i3 < PLAYERS.length; i3++) {
                    p3 = PLAYERS[i3];
                    for (i4 = 0; i4 < PLAYERS.length; i4++) {
                        p4 = PLAYERS[i4];
                        if ((p1 != p2 && p1 != p3 && p1 != p4) &&
                            (p2 != p1 && p2 != p3 && p2 != p4) &&
                            (p3 != p1 && p3 != p2 && p3 != p4) &&
                            (p4 != p1 && p4 != p2 && p4 != p3)) {
                            // sort teams into alphabetical order
                            // (so we can compare them easily later)
                            TEAM1[0] = p1; TEAM1[1] = p2;
                            TEAM2[0] = p3; TEAM2[1] = p4;
                            TEAM1.sort();
                            TEAM2.sort();
                            // work out the game and search the array
                            // to see if it already exists
                            entry = TEAM1[0] + " & " + TEAM1[1] + " v " + TEAM2[0] + " & " + TEAM2[1];
                            found = false;
                            for (i = 0; i < GAMES.length; i++) {
                                if (entry == GAMES[i]) found = true;
                            }
                            // if the game is unique then store it
                            if (!found) {
                                GAMES[pos] = entry;
                                document.write((pos + 1) + ": " + GAMES[pos] + "<br>");
                                pos++;
                            }
                        }
                    }
                }
            }
        }

    Thanks in advance. Jason.
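    For comparison, a hedged sketch in C# (the structure translates directly to JavaScript): enumerate each 4-player subset exactly once with ordered indices, then emit the three distinct pairings of that subset. This produces C(n,4) * 3 games with no duplicate checking at all, so there is no n^4 scan and no repeated lookups in GAMES.

        using System.Collections.Generic;

        static List<string> Games(string[] players)
        {
            var games = new List<string>();
            int n = players.Length;
            for (int i = 0; i < n; i++)
                for (int j = i + 1; j < n; j++)
                    for (int k = j + 1; k < n; k++)
                        for (int l = k + 1; l < n; l++)
                        {
                            string a = players[i], b = players[j];
                            string c = players[k], d = players[l];
                            // the three ways to split {a, b, c, d} into two pairs
                            games.Add(a + " & " + b + " v " + c + " & " + d);
                            games.Add(a + " & " + c + " v " + b + " & " + d);
                            games.Add(a + " & " + d + " v " + b + " & " + c);
                        }
            return games;
        }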

  • How to properly cast a global memory array using the uint4 vector in CUDA to increase memory throughput?

    - by charis
    There are generally two techniques for increasing the memory throughput of global memory in a CUDA kernel: coalescing memory accesses, and accessing words of at least 4 bytes. With the first technique, accesses to the same memory segment by threads of the same half-warp are coalesced into fewer transactions, while by accessing words of at least 4 bytes, this memory segment is effectively increased from 32 bytes to 128. To access 16-byte instead of 1-byte words when unsigned chars are stored in global memory, the uint4 vector is commonly used, by casting the memory array to uint4:

        uint4 *text4 = (uint4 *) d_text;
        var = text4[i];

    In order to extract the 16 chars from var, I am currently using bitwise operations. For example:

        s_array[j * 16 + 0] = var.x & 0x000000FF;
        s_array[j * 16 + 1] = (var.x >> 8) & 0x000000FF;
        s_array[j * 16 + 2] = (var.x >> 16) & 0x000000FF;
        s_array[j * 16 + 3] = (var.x >> 24) & 0x000000FF;

    My question is: is it possible to recast var (or, for that matter, *text4) to unsigned char in order to avoid the additional overhead of the bitwise operations?

  • Does a c/c++ compiler optimize constant divisions by power-of-two value into shifts?

    - by porgarmingduod
    Question says it all. Does anyone know if the following...

        size_t div(size_t value) {
            const size_t x = 64;
            return value / x;
        }

    ...is optimized into this?

        size_t div(size_t value) {
            return value >> 6;
        }

    Do compilers do this? (My interest lies in GCC.) Are there situations where it does and others where it doesn't? I would really like to know, because every time I write a division that could be optimized like this, I spend some mental energy wondering whether a precious fraction of a second is wasted doing a division where a shift would suffice.

  • C#, Using Custom Generic Collection faster with objects than List

    - by Kaminari
    Hello, for now I'm using List<T> to iterate through an object collection and find a matching element. The problem is that the object has only 2 significant values, Name and Link (strings), but it has some other values which I don't want to compare. I'm thinking about using something like HashSet (which is exactly what I'm searching for - fast) from .NET 3.5, but the target framework has to be 2.0. There is something called Power Collections here: http://powercollections.codeplex.com/ But maybe there is another way? If not, can you suggest a suitable custom collection?
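    A hedged, .NET 2.0-compatible sketch (the Item type is a hypothetical stand-in for the poster's object): a Dictionary keyed on the two significant fields gives the same O(1) hashed lookup a HashSet would, without any extra library.

        using System.Collections.Generic;

        // Build the index once; "\n" separates the fields so keys stay unambiguous.
        Dictionary<string, Item> index = new Dictionary<string, Item>();
        foreach (Item item in items)
            index[item.Name + "\n" + item.Link] = item;

        // Constant-time lookup instead of scanning the List:
        Item match;
        if (index.TryGetValue(name + "\n" + link, out match))
        {
            // found
        }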
