Search Results

Search found 4136 results on 166 pages for 'micro optimization'.

Page 73/166

  • What is the absolute fastest way to implement a concurrent queue with ONLY one consumer and one producer?

    - by JohnPristine
    java.util.concurrent.ConcurrentLinkedQueue comes to mind, but is it really optimal for this two-thread scenario? I am looking for the minimum latency possible on both sides (producer and consumer). If the queue is empty you can immediately return null, and if the queue is full you can immediately discard the entry you are offering. Does ConcurrentLinkedQueue use super fast and light locks (AtomicBoolean)? Has anyone benchmarked ConcurrentLinkedQueue, or knows about the ultimate fastest way of doing this? Additional details: I imagine the queue should be a fair one, meaning the producer should not make the consumer wait any longer than it needs to (by front-running it), and vice versa.
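
    Not an answer from the thread, just a hedged sketch: ConcurrentLinkedQueue is lock-free (CAS-based) and unbounded, so "discard when full" doesn't apply to it directly. A bounded java.util.concurrent.ArrayBlockingQueue gives exactly the non-blocking semantics described above; Tick here is a hypothetical payload type standing in for the real messages:

        import java.util.concurrent.ArrayBlockingQueue;

        class SpscSketch {
            // Hypothetical payload standing in for the real messages.
            record Tick(long value) {}

            static final ArrayBlockingQueue<Tick> QUEUE = new ArrayBlockingQueue<>(1024);

            // Producer side: offer() returns false immediately when full,
            // so the entry is simply discarded.
            static void produce(Tick tick) {
                boolean accepted = QUEUE.offer(tick);
            }

            // Consumer side: poll() returns null immediately when empty.
            static Tick consume() {
                return QUEUE.poll();
            }
        }

    Whether this beats ConcurrentLinkedQueue on latency is exactly the kind of thing that needs benchmarking on the target hardware.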

    Read the article

  • Need help optimizing this Django aggregate query

    - by Chris Lawlor
    I have the following model:

        class Plugin(models.Model):
            name = models.CharField(max_length=50)
            # more fields

    which represents a plugin that can be downloaded from my site. To track downloads, I have:

        class Download(models.Model):
            plugin = models.ForeignKey(Plugin)
            timestamp = models.DateTimeField(auto_now=True)

    So to build a view showing plugins sorted by downloads, I have the following query:

        # pbd is plugins by download
        pbd = Plugin.objects.annotate(dl_total=Count('download')).order_by('-dl_total')

    Which works, but is very slow. With only 1,000 plugins, the average response is 3.6-3.9 seconds (devserver with local PostgreSQL db), where a similar view with a much simpler query (sorting by plugin release date) takes 160 ms or so. I'm looking for suggestions on how to optimize this query. I'd really prefer that the query return Plugin objects (as opposed to using values), since I'm sharing the same template with the other views (plugins by rating, plugins by release date, etc.), so the template expects Plugin objects - plus I'm not sure how I would get things like the absolute_url without a reference to the plugin object. Or is my whole approach doomed to failure? Is there a better way to track downloads? I ultimately want to provide users some nice download statistics for the plugins they've uploaded - like downloads per day/week/month. Will I have to calculate and cache downloads at some point? EDIT: In my test dataset there are somewhere between 10 and 20 Download instances per Plugin - in production I expect this number would be much higher for many of the plugins.

    Read the article

  • Merging and splitting overlapping rectangles to produce non-overlapping ones

    - by uj
    I am looking for an algorithm as follows: given a set of possibly overlapping rectangles (all of which are "not rotated", so each can be uniformly represented as a (left, top, right, bottom) tuple, etc...), it returns a minimal set of (non-rotated) non-overlapping rectangles that occupy the same area. It seems simple enough at first glance, but proves to be tricky (at least to do efficiently). Are there some known methods for this / ideas / pointers? Methods that produce not necessarily minimal but heuristically small sets are interesting as well, as are methods that produce any valid output set at all.
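
    One common building block (a sketch, not a minimal-set algorithm): subtracting one rectangle from another yields at most four disjoint pieces, so folding each input rectangle into a running disjoint set produces a valid, heuristically small output. Sketched here in Java with a hypothetical Rect type:

        import java.util.ArrayList;
        import java.util.List;

        // Axis-aligned rectangle; right/bottom treated as exclusive.
        record Rect(int left, int top, int right, int bottom) {
            boolean overlaps(Rect o) {
                return left < o.right() && o.left() < right
                        && top < o.bottom() && o.top() < bottom;
            }
        }

        class RectSubtract {
            // Returns the parts of a not covered by b: up to 4 disjoint rectangles.
            static List<Rect> subtract(Rect a, Rect b) {
                if (!a.overlaps(b)) return List.of(a);
                List<Rect> out = new ArrayList<>();
                if (b.top() > a.top())       // strip above b
                    out.add(new Rect(a.left(), a.top(), a.right(), b.top()));
                if (b.bottom() < a.bottom()) // strip below b
                    out.add(new Rect(a.left(), b.bottom(), a.right(), a.bottom()));
                int top = Math.max(a.top(), b.top());
                int bottom = Math.min(a.bottom(), b.bottom());
                if (b.left() > a.left())     // strip left of b
                    out.add(new Rect(a.left(), top, b.left(), bottom));
                if (b.right() < a.right())   // strip right of b
                    out.add(new Rect(b.right(), top, a.right(), bottom));
                return out;
            }
        }

    This gives a correct non-overlapping cover but not a minimal one; minimality is where sweep-line and merging approaches come in.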

    Read the article

  • Efficiently draw a grid in Windows Forms

    - by Joel
    I'm writing an implementation of Conway's Game of Life in C#. This is the code I'm using to draw the grid; it's in my panel_Paint event, and g is the graphics context:

        for (int y = 0; y < numOfCells * cellSize; y += cellSize)
        {
            for (int x = 0; x < numOfCells * cellSize; x += cellSize)
            {
                g.DrawLine(p, x, 0, x, y + numOfCells * cellSize);
                g.DrawLine(p, 0, x, y + numOfCells * cellSize, x);
            }
        }

    When I run my program, it is unresponsive until it finishes drawing the grid, which takes a few seconds at numOfCells = 100 and cellSize = 10. Removing all the multiplication makes it faster, but not by very much. Is there a better/more efficient way to draw my grid? Thanks
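
    The usual fix is algorithmic rather than API-specific: an N x N grid only needs N+1 horizontal and N+1 vertical full-length lines, not one line call per cell. A hedged sketch of that loop in Java AWT (the same shape applies to WinForms' Graphics.DrawLine):

        import java.awt.Graphics;

        class GridSketch {
            // Draws the grid with 2 * (numOfCells + 1) line calls
            // instead of O(numOfCells^2).
            static void drawGrid(Graphics g, int numOfCells, int cellSize) {
                int extent = numOfCells * cellSize;
                for (int i = 0; i <= numOfCells; i++) {
                    int pos = i * cellSize;
                    g.drawLine(pos, 0, pos, extent); // vertical line
                    g.drawLine(0, pos, extent, pos); // horizontal line
                }
            }
        }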

    Read the article

  • MySQL Datefields: duplicate or calculate?

    - by Konerak
    We are using a table with a structure imposed upon us more than 10 years ago. We are allowed to add columns, but urged not to change existing columns. Certain columns are meant to represent dates, but are stored in different formats, amongst others:

        CHAR(6): YYMMDD
        CHAR(6): DDMMYY
        CHAR(8): YYYYMMDD
        CHAR(8): DDMMYYYY
        DATE
        DATETIME

    Since we now would like to do some more complex queries using advanced date functions, my manager proposed to duplicate those problem columns into a proper FORMATTED_OLDCOLUMNNAME column using a DATE or DATETIME format. Is this the way to go? Couldn't we just use the STR_TO_DATE function each time we accessed the columns? To avoid every query having to copy-paste the function, I could still work with a view or a stored procedure, but duplicating data to avoid recalculation sounds wrong. Solutions I see (I guess I prefer 2.2.1):

        1. Physically duplicate columns
           1.1 In the same table
               1.1.1 Added by each script that does a modification (INSERT/UPDATE/REPLACE/...)
               1.1.2 Maintained by a trigger on each modification
           1.2 In a separate table
               1.2.1 Added by each script that does a modification (INSERT/UPDATE/REPLACE/...)
               1.2.2 Maintained by a trigger on each modification
        2. On-demand transformation
           2.1 Each query has to perform the transformation
               2.1.1 Using copy-paste in the source code
               2.1.2 Using a library
               2.1.3 Using a STORED PROCEDURE
           2.2 A view performs the transformation
               2.2.1 A separate table replacing the entire table
               2.2.2 A separate table just adding the date fields for the primary keys

    Am I right to say it's better to recalculate than to store? And would a view be a good solution?

    Read the article

  • Does the compiler optimize the function parameters passed by value?

    - by Naveen
    Let's say I have a function where the parameter is passed by value instead of by const reference. Further, let's assume that only the value is used inside the function, i.e. the function doesn't try to modify it. In that case, will the compiler be able to figure out that it can pass the value by const reference (for performance reasons) and generate the code accordingly? Is there any compiler which does that?

    Read the article

  • How to get REALLY fast python over a simple loop

    - by totallymike
    I'm working on a spoj problem, INTEST. The goal is to specify the number of test cases (n) and a divisor (k), then feed your program n numbers. The program will accept each number on a newline of stdin and, after receiving the nth number, will tell you how many were divisible by k. The only challenge in this problem is getting your code to be FAST, because k can be anything up to 10^7 and the test cases can be as high as 10^9. I'm trying to write it in Python and having trouble speeding it up. Any ideas?

        import sys

        first_in = raw_input()
        thing = first_in.split()
        n = int(thing[0])
        k = int(thing[1])
        total = 0

        for line in sys.stdin:
            t = int(line)
            if t % k == 0:
                total += 1

        print total

    Read the article

  • Get count matches in query on large table very slow

    - by Roy Roes
    I have a MySQL table "items" with 2 integer fields: seid and tiid. The table has about 35,000,000 records, so it's very large.

        seid  tiid
        ----  ----
           1     1
           2     2
           2     3
           2     4
           3     4
           4     1
           4     2

    The table has a primary key on both fields, an index on seid and an index on tiid. Someone types in 1 or more tiid values, and now I would like to get the seid with the most matches. For example, when someone types 1,2,3, I would like to get seid 2 and 4 as the result: they both have 2 matches on the tiid values. My query so far:

        SELECT COUNT(*) as c, seid FROM items
        WHERE tiid IN (1,2,3)
        GROUP BY seid
        HAVING c = (SELECT COUNT(*) as c FROM items
                    WHERE tiid IN (1,2,3)
                    GROUP BY seid
                    ORDER BY c DESC LIMIT 1)

    But this query is extremely slow because of the large table. Does anyone know how to construct a better query for this purpose?

    Read the article

  • Optimizing PHP code (trying to determine min/max/between case)

    - by Swizzh
    I know this code-bit does not conform very much to best coding practices, and I was looking to improve it. Any ideas?

        if ($query['date_min'] != _get_date_today()) $mode_min = true;
        if ($query['date_max'] != _get_date_today()) $mode_max = true;

        if ($mode_max && $mode_min) $mode = "between";
        elseif ($mode_max && !$mode_min) $mode = "max";
        elseif (!$mode_max && $mode_min) $mode = "min";
        else return;

        if ($mode == "min" || $mode == "between") {
            $command_min = "A";
        }
        if ($mode == "max" || $mode == "between") {
            $command_max = "B";
        }

        if ($mode == "between") {
            $command = $command_min . " AND " . $command_max;
        } else {
            if ($mode == "min") $command = $command_min;
            if ($mode == "max") $command = $command_max;
        }

        echo $command;

    Read the article

  • Performance considerations of a large hard-coded array in the .cs file

    - by terence
    I'm writing some code where performance is important. In one part of it, I have to compare a large set of pre-computed data against dynamic values. Currently, I'm storing that pre-computed data in a giant array in the .cs file:

        Data[] data = { /* my data set */ };

    The data set is about 90 KB, or roughly 13k elements. I was wondering if there's any downside to doing this, as opposed to loading it in from an external file? I'm not entirely sure how C# works internally, so I just wanted to be aware of any performance issues I might encounter with this method.

    Read the article

  • Optimizing JS Array Search

    - by The.Anti.9
    I am working on a browser-based media player which is written almost entirely in HTML5 and JavaScript. The backend is written in PHP, but it has one function, which is to fill the playlist on the initial load; the rest is all JS. There is a search bar that refines the playlist. I want it to refine as the person is typing, like most media players do. The only problem with this is that it is very slow and laggy, as there are about 1000 songs in the whole program and there are likely to be more as time goes on. The original playlist load is an ajax call to a PHP page that returns the results as JSON. Each item has 4 attributes: artist, album, file, url. I then loop through each object and add it to an array called playlist. At the end of the looping, a copy of playlist is created, backup. This is so that I can refine the playlist variable when people refine their search, but still repopulate it from backup without making another server request. The method refine() is called when the user types a key into the search box. It flushes playlist and searches through each property (not including url) of each object in the backup array for a match in the string. If there is a match in any of the properties, it appends the information to a table that displays the playlist, and adds the object to playlist for access by the actual player. Code for the refine() method:

        function refine() {
            $('#loadinggif').show();
            $('#library').html("<table id='libtable'><tr><th>Artist</th><th>Album</th><th>File</th><th>&nbsp;</th></tr></table>");
            playlist = [];
            for (var j = 0; j < backup.length; j++) {
                var sfile = new String(backup[j].file);
                var salbum = new String(backup[j].album);
                var sartist = new String(backup[j].artist);
                if (sfile.toLowerCase().search($('#search').val().toLowerCase()) !== -1 ||
                    salbum.toLowerCase().search($('#search').val().toLowerCase()) !== -1 ||
                    sartist.toLowerCase().search($('#search').val().toLowerCase()) !== -1) {
                    playlist.push(backup[j]);
                    var num = playlist.length - 1;
                    $("<tr></tr>").html("<td>" + num + "</td><td>" + sartist + "</td><td>" + salbum +
                        "</td><td>" + sfile + "</td><td><a href='#' onclick='setplay(" + num + ");'>Play</a></td>").appendTo('#libtable');
                }
            }
            $('#loadinggif').hide();
        }

    As I said before, for the first couple of letters typed, this is very slow and laggy. I am looking for ways to refine this to make it much faster and smoother.

    Read the article

  • Read large amount of data from file in Java

    - by Crozin
    Hello. I've got a text file that contains 1 000 002 numbers in the following format:

        123 456
        1 2 3 4 5 6 ... 999999 100000

    Now I need to read that data and allocate it to int variables (the very first two numbers) and all the rest (1 000 000 numbers) to an int[] array. It's not a hard task, but it's horribly slow. My first attempt was java.util.Scanner:

        Scanner stdin = new Scanner(new File("./path"));
        int n = stdin.nextInt();
        int t = stdin.nextInt();
        int[] array = new int[n];
        for (int i = 0; i < n; i++) {
            array[i] = stdin.nextInt();
        }

    It works as expected, but it takes about 7500 ms to execute, and I need to fetch that data in at most several hundred milliseconds. Then I tried java.io.BufferedReader: using BufferedReader.readLine() and String.split() I got the same results in about 1700 ms, but it's still too slow. How can I read that amount of data in less than 1 second? The final result should be equal to:

        int n = 123;
        int t = 456;
        int[] array = { 1, 2, 3, 4, ..., 999999, 100000 };
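
    For what it's worth, a common way to get well under a second on input like this is to avoid Scanner and per-line String allocation entirely, e.g. with java.io.StreamTokenizer. A sketch, assuming whitespace-separated non-negative integers as shown above:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.io.StreamTokenizer;

        class FastRead {
            public static void main(String[] args) throws IOException {
                StreamTokenizer in = new StreamTokenizer(
                        new BufferedReader(new FileReader("./path"), 1 << 16));
                in.nextToken(); int n = (int) in.nval;
                in.nextToken(); int t = (int) in.nval;
                int[] array = new int[n];
                for (int i = 0; i < n; i++) {
                    in.nextToken();           // parses the next number into in.nval
                    array[i] = (int) in.nval;
                }
            }
        }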

    Read the article

  • Fastest way to compare Objects of type DateTime

    - by radbyx
    I made this. Is this the fastest way to find the latest DateTime in my collection of DateTimes? I'm wondering if there is a method for what I'm doing inside the foreach, but even if there is, I can't see how it could be faster than what I already have.

        List<StateLog> stateLogs = db.StateLog.Where(p => p.ProductID == product.ProductID).ToList();
        DateTime lastTimeStamp = DateTime.MinValue;
        foreach (var stateLog in stateLogs)
        {
            int result = DateTime.Compare(lastTimeStamp, stateLog.TimeStamp);
            if (result < 0)
                lastTimeStamp = stateLog.TimeStamp; // set because this timestamp is later
        }

    Read the article

  • Preventing objects from being linked if they are not needed?

    - by Massif
    I have an ARM project that I'm building with make. I'm creating the list of object files to link based on the names of all of the .c and .cpp files in my source directory. However, I would like to exclude objects from being linked if they are never used. Will the linker exclude these objects from the .elf file automatically even if I include them in the list of objects to link? If not, is there a way to generate a list of only the objects that need to be linked?

    Read the article

  • Overhead of serving pages - JSPs vs. PHP vs. ASPXs vs. C

    - by John Shedletsky
    I am interested in writing my own internet ad server. I want to serve billions of impressions with as little hardware as possible. Which server-side technologies are best suited for this task? I am asking about the relative overhead of serving my ad pages as either pages rendered by PHP, or Java, or .NET, or coding HTTP responses directly in C and writing some multi-socket IO monster to serve requests (I assume this one wins, but if my assumption is wrong, that would actually be most interesting). Obviously all the most effective optimizations are done at the algorithm level, but I figure there have got to be some speed differences at the end of the day that make one method of serving ads better than another. How much overhead does something like Apache or IIS introduce? There's got to be a ton of extra junk in there I don't need. At some point I guess this is more a question of which platform/language combo is best suited - please excuse the awkwardly posed question; hopefully you understand what I am trying to get at.

    Read the article

  • Java - Optimize finding a string in a list

    - by Mark
    I have an ArrayList of objects where each object contains a string 'word' and a date. I need to check whether the date has passed for a list of 500 words. The ArrayList could contain up to a million words and dates. The dates I store as integers, so the problem I have is attempting to find the word I am looking for in the ArrayList. Is there a way to make this faster? In Python I have a dict, and mWords['foo'] is a simple lookup without looping through the whole 1 million items in the mWords array. Is there something like this in Java?

        for (int i = 0; i < mWords.size(); i++) {
            if (word.equals(mWords.get(i).word)) { // equals(), not ==, for string comparison
                mLastFindIndex = i;
                return mWords.get(i);
            }
        }
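
    The Java counterpart of a Python dict here is java.util.HashMap, which replaces the linear scan with an O(1) average lookup. A minimal sketch, assuming a hypothetical Entry type for the word/date pairs:

        import java.util.HashMap;
        import java.util.Map;

        class WordIndex {
            // Hypothetical stand-in for the objects held in mWords.
            record Entry(String word, int date) {}

            private final Map<String, Entry> byWord = new HashMap<>();

            void add(Entry e) {
                byWord.put(e.word(), e);
            }

            Entry find(String word) {
                return byWord.get(word); // null if the word is absent
            }
        }

    Building the map costs one pass over the million items; each of the 500 lookups is then a single hash probe instead of a scan.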

    Read the article

  • Filtering with joined tables

    - by viraptor
    I'm trying to get some query performance improved, but the generated query does not look the way I expect it to. The results are retrieved using:

        query = (session.query(SomeModel)
                 .options(joinedload_all('foo.bar'))
                 .options(joinedload_all('foo.baz'))
                 .options(joinedload('quux.other')))

    What I want to do is filter on the table joined via 'foo', but this way doesn't work:

        query = query.filter(FooModel.address == '1.2.3.4')

    It results in a clause like this attached to the query:

        WHERE foos.address = '1.2.3.4'

    which doesn't do the filtering in a proper way, since the generated joins attach the tables foos_1 and foos_2. If I try that query manually but change the filtering clause to:

        WHERE foos_1.address = '1.2.3.4' AND foos_2.address = '1.2.3.4'

    it works fine. The question is of course - how can I achieve this with sqlalchemy itself?

    Read the article

  • Optimizing near-duplicate value search

    - by GApple
    I'm trying to find near-duplicate values in a set of fields in order to allow an administrator to clean them up. There are two criteria that I am matching on:

        1. One string is wholly contained within the other, and is at least 1/4 of its length
        2. The strings have an edit distance less than 5% of the total length of the two strings

    The pseudo-PHP code:

        foreach ($values as $value) {
            foreach ($values as $match) {
                if (
                    (
                        $value['length'] < $match['length']
                        && $value['length'] * 4 > $match['length']
                        && stripos($match['value'], $value['value']) !== false
                    ) || (
                        $match['length'] < $value['length']
                        && $match['length'] * 4 > $value['length']
                        && stripos($value['value'], $match['value']) !== false
                    ) || (
                        abs($value['length'] - $match['length']) * 20 < ($value['length'] + $match['length'])
                        && 0 < ($match['changes'] = levenshtein($value['value'], $match['value']))
                        && $match['changes'] * 20 <= ($value['length'] + $match['length'])
                    )
                ) {
                    $matches[] = &$match;
                }
            }
        }

    I've tried to reduce calls to the comparatively expensive stripos and levenshtein functions where possible, which has reduced the execution time quite a bit. However, as an O(n^2) operation this just doesn't scale to the larger sets of values, and it seems that a significant amount of the processing time is spent simply iterating through the arrays. Some properties of a few sets of values being operated on:

        Total   | Strings      | # of matches per string |          |
        Strings | With Matches | Average | Median | Max  | Time (s) |
        --------+--------------+---------+--------+------+----------+
        844     | 413          | 1.8     | 1      | 58   | 140      |
        593     | 156          | 1.2     | 1      | 5    | 62       |
        272     | 168          | 3.2     | 2      | 26   | 10       |
        157     | 47           | 1.5     | 1      | 4    | 3.2      |
        106     | 48           | 1.8     | 1      | 8    | 1.3      |
        62      | 47           | 2.9     | 2      | 16   | 0.4      |

    Are there any other things I can do to reduce the time to check criteria, and more importantly, are there any ways for me to reduce the number of criteria checks required (for example, by pre-processing the input values), since there is such low selectivity?
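
    One pre-processing angle (a sketch under the stated criteria, not a drop-in replacement): both tests bound how different the lengths can be - containment requires shorter * 4 > longer, and the edit-distance test requires the lengths to be within roughly 10% of each other - so sorting by length and only generating pairs inside that window prunes most of the n^2 comparisons before stripos/levenshtein ever run. Sketched here in Java:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        class NearDupPrefilter {
            // Emits only the index pairs (into the sorted list) that could
            // possibly satisfy either criterion, i.e. longer < 4 * shorter.
            static List<int[]> candidatePairs(List<String> values) {
                List<String> sorted = new ArrayList<>(values);
                sorted.sort(Comparator.comparingInt(String::length));
                List<int[]> pairs = new ArrayList<>();
                for (int i = 0; i < sorted.size(); i++) {
                    int len = sorted.get(i).length();
                    for (int j = i + 1; j < sorted.size()
                            && sorted.get(j).length() < 4 * len; j++) {
                        pairs.add(new int[] { i, j });
                    }
                }
                return pairs;
            }
        }

    The expensive string comparisons then run only on the surviving pairs.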

    Read the article

  • Optimizing Code

    - by Claudiu
    You are given a heap of code in your favorite language, which combines to form a rather complicated application. It runs rather slowly, and your boss has asked you to optimize it. What are the steps you follow to most efficiently optimize the code? What strategies have you found to be unsuccessful when optimizing code?

    Re-writes: at what point do you decide to stop optimizing and say "this is as fast as it'll get without a complete re-write"? In what cases would you advocate a simple complete re-write anyway? How would you go about designing it?

    Read the article

  • Optimize SELECT DISTINCT CONCAT query in MySQL

    - by L. Cosio
    Hello! I'm running this query:

        SELECT DISTINCT CONCAT(ALFA_CLAVE, FECHA_NACI) FROM listado
        GROUP BY ALFA_CLAVE
        HAVING COUNT(CONCAT(ALFA_CLAVE, FECHA_NACI)) > 1

    Is there any way to optimize it? Queries are taking 2-3 hours on a table with 850,000 rows. Would adding an index on ALFA_CLAVE and FECHA_NACI work? Thanks in advance

    Read the article

  • Does a c/c++ compiler optimize constant divisions by power-of-two value into shifts?

    - by porgarmingduod
    The question says it all. Does anyone know if the following...

        size_t div(size_t value) {
            const size_t x = 64;
            return value / x;
        }

    ...is optimized into?

        size_t div(size_t value) {
            return value >> 6;
        }

    Do compilers do this? (My interest lies in GCC.) Are there situations where it does and others where it doesn't? I would really like to know, because every time I write a division that could be optimized like this, I spend some mental energy wondering whether a precious fraction of a second is wasted doing a division where a shift would suffice.
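
    One subtlety worth keeping in mind (a fact about two's-complement arithmetic, illustrated here in Java, where the operators behave the same way): for unsigned types like size_t the shift is exact, but for signed values division truncates toward zero while an arithmetic shift rounds toward negative infinity, so a compiler must emit a small fix-up rather than a bare shift:

        class DivShift {
            public static void main(String[] args) {
                System.out.println(-65 / 64);  // -1: division truncates toward zero
                System.out.println(-65 >> 6);  // -2: arithmetic shift rounds toward -infinity
                System.out.println(65 / 64);   //  1
                System.out.println(65 >>> 6);  //  1: logical shift, exact for non-negative values
            }
        }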

    Read the article

  • Graph search problem with route restrictions

    - by Darcara
    I want to calculate the most profitable route, and I think this is a type of traveling salesman problem. I have a set of nodes that I can visit, and a function to calculate the cost of traveling between nodes and the points for reaching the nodes. The goal is to reach a fixed, known score while minimizing the cost. These costs and rewards are not fixed and depend on the nodes visited before. The starting node is fixed. There are some restrictions on how nodes can be visited. Some simplified examples include:

        - Node B can only be visited after A
        - After node C has been visited, D or E can be visited. Visiting at least one is required, visiting both is permissible.
        - Z can only be visited after at least 5 other nodes have been visited
        - Once 50 nodes have been visited, the nodes A-M will no longer reward points
        - Certain nodes can (and probably must) be visited multiple times

    Currently I can think of only two ways to solve this: (a) genetic algorithms, with the fitness function calculating the cost/benefit of the generated route, or (b) Dijkstra search through the graph (since the starting node is fixed), although the large number of nodes will probably make that infeasible memory-wise; a sketch of why the state space explodes follows below. Are there any other ways to determine the best route through the graph? It doesn't need to be perfect; an approximated path is perfectly fine, as long as its error is acceptable. Would TSP solvers be an option here?
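
    To make the memory concern concrete: because costs and rewards depend on the nodes visited before, a plain Dijkstra node label is not enough - the search state must include the visited set, giving on the order of n * 2^n states. A hedged Java sketch (visit-once simplification only; cost(), done() and allowed() are hypothetical hooks for the rules above):

        import java.util.Comparator;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.PriorityQueue;

        class StateDijkstra {
            interface Model {
                int n();                                // node count; only feasible for n <= ~20
                long cost(int from, int to, long mask); // path-dependent travel cost
                boolean done(int node, long mask);      // fixed target score reached?
                boolean allowed(int to, long mask);     // encodes the route restrictions
            }

            static long search(Model m, int start) {
                Map<Long, Long> best = new HashMap<>();  // (mask, node) -> best cost so far
                PriorityQueue<long[]> pq =               // entries: {cost, node, mask}
                        new PriorityQueue<>(Comparator.comparingLong((long[] s) -> s[0]));
                pq.add(new long[] { 0, start, 1L << start });
                while (!pq.isEmpty()) {
                    long[] cur = pq.poll();
                    long cost = cur[0], mask = cur[2];
                    int node = (int) cur[1];
                    if (m.done(node, mask)) return cost;
                    long key = (mask << 5) | node;       // pack state; node fits in 5 bits
                    Long seen = best.get(key);
                    if (seen != null && seen <= cost) continue; // already expanded cheaper
                    best.put(key, cost);
                    for (int next = 0; next < m.n(); next++) {
                        if (next != node && m.allowed(next, mask)) {
                            pq.add(new long[] { cost + m.cost(node, next, mask),
                                                next, mask | (1L << next) });
                        }
                    }
                }
                return -1; // target score unreachable
            }
        }

    The 2^n factor is exactly why approximate methods (genetic algorithms, local search) become attractive as the node count grows.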

    Read the article

  • Write file need to optimised for heavy traffic part 2

    - by Clayton Leung
    For anyone interested to see where I come from, you can refer to part 1, but it is not necessary: write file need to optimised for heavy traffic. Below is a snippet of code I have written to capture some financial tick data from the broker API. The code will run without error. I need to optimize the code, because in peak hours the zf_TickEvent method will be called more than 10000 times a second. I use a MemoryStream to hold the data until it reaches a certain size, then I output it into a text file. The broker API is only single threaded.

        void zf_TickEvent(object sender, ZenFire.TickEventArgs e)
        {
            outputString = string.Format("{0},{1},{2},{3},{4}\r\n",
                e.TimeStamp.ToString(timeFmt),
                e.Product.ToString(),
                Enum.GetName(typeof(ZenFire.TickType), e.Type),
                e.Price,
                e.Volume);
            fillBuffer(outputString);
        }

        public class memoryStreamClass
        {
            public static MemoryStream ms = new MemoryStream();
        }

        void fillBuffer(string outputString)
        {
            byte[] outputByte = Encoding.ASCII.GetBytes(outputString);
            memoryStreamClass.ms.Write(outputByte, 0, outputByte.Length);
            if (memoryStreamClass.ms.Length > 8192)
            {
                emptyBuffer(memoryStreamClass.ms);
                memoryStreamClass.ms.SetLength(0);
                memoryStreamClass.ms.Position = 0;
            }
        }

        void emptyBuffer(MemoryStream ms)
        {
            FileStream outStream = new FileStream("c:\\test.txt", FileMode.Append);
            ms.WriteTo(outStream);
            outStream.Flush();
            outStream.Close();
        }

    Questions: Any suggestion to make this even faster? I will try to vary the buffer length, but in terms of code structure, is this (almost) the fastest? When the MemoryStream is filled up and I am emptying it to the file, what would happen to the new data coming in? Do I need to implement a second buffer to hold that data while I am emptying my first buffer? Or is C# smart enough to figure it out? Thanks for any advice.

    Read the article
