Search Results

Search found 4136 results on 166 pages for 'micro optimization'.

  • Searching with Linq

    - by Phil
    I have a collection of objects, each with an int Frame property. Given an int, I want to find the object in the collection that has the closest Frame. Here is what I'm doing so far:

        public static void Search(int frameNumber)
        {
            var differences = (from rec in _records
                               select new { FrameDiff = Math.Abs(rec.Frame - frameNumber), Record = rec })
                              .OrderBy(x => x.FrameDiff);
            var closestRecord = differences.FirstOrDefault().Record;
            //continue work...
        }

    This is great and everything, except there are 200,000 items in my collection and I call this method very frequently. Is there a relatively easy, more efficient way to do this?
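
    A common fix for this pattern, sketched here in Python only because the idea is language-agnostic: sort the collection by Frame once, then answer each query with a binary search over the sorted keys instead of sorting 200,000 differences per call. The names (make_index, closest) are illustrative, and the sketch assumes a non-empty collection that does not change between calls (otherwise the index must be rebuilt).

        import bisect

        def make_index(records):
            # one-time O(n log n) cost: keep the records ordered by Frame
            ordered = sorted(records, key=lambda r: r.Frame)
            frames = [r.Frame for r in ordered]
            return frames, ordered

        def closest(frames, ordered, frame_number):
            # O(log n) per query: only the two neighbours of the insertion
            # point can be the closest record
            i = bisect.bisect_left(frames, frame_number)
            lo, hi = max(i - 1, 0), min(i, len(ordered) - 1)
            return min((ordered[lo], ordered[hi]),
                       key=lambda r: abs(r.Frame - frame_number))

    The same approach carries over to LINQ: sort once into an array keyed by Frame and use a binary search (e.g. Array.BinarySearch or an equivalent) per lookup.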

  • JRuby RSpec to be run in parallel

    - by Priyank
    Hi. Is there something like Spork for JRuby? We want to parallelize our specs so they run faster, and to pre-load the classes while running the rake task; however, we have not managed to do so. Since our project is of considerable size, the specs take about 15 minutes to complete, which poses a serious challenge to a quick turnaround. Any ideas are more than welcome. Cheers

  • Does the .NET CLR Really Optimize for the Current Processor

    - by dewald
    When I read about the performance of JIT-compiled languages like C# or Java, authors usually say that they could, in theory, outperform many natively compiled applications. The reasoning is that native applications are usually compiled only for a processor family (like x86), so the compiler cannot make certain optimizations, since they may not truly be optimizations on all processors in that family. The CLR, on the other hand, can make processor-specific optimizations during the JIT process. Does anyone know whether Microsoft's (or Mono's) CLR actually performs processor-specific optimizations during the JIT process? If so, what kind of optimizations?

  • Memory efficient int-int dict in Python

    - by Bolo
    Hi, I need a memory-efficient int-int dict in Python that would support the following operations in O(log n) time:

        d[k] = v  # replace if present
        v = d[k]  # None or a negative number if not present

    I need to hold ~250M pairs, so it really has to be tight. Do you happen to know a suitable implementation (Python 2.7)?

    EDIT Removed impossible requirement and other nonsense. Thanks, Craig and Kylotan!

    To rephrase. Here's a trivial int-int dictionary with 1M pairs:

        >>> import random, sys
        >>> from guppy import hpy
        >>> h = hpy()
        >>> h.setrelheap()
        >>> d = {}
        >>> for _ in xrange(1000000):
        ...     d[random.randint(0, sys.maxint)] = random.randint(0, sys.maxint)
        ...
        >>> h.heap()
        Partition of a set of 1999530 objects. Total size = 49161112 bytes.
         Index    Count   %      Size   % Cumulative   % Kind (class / dict of class)
             0        1   0  25165960  51   25165960  51 dict (no owner)
             1  1999521 100  23994252  49   49160212 100 int

    On average, a pair of integers uses 49 bytes.

    Here's an array of 2M integers:

        >>> import array, random, sys
        >>> from guppy import hpy
        >>> h = hpy()
        >>> h.setrelheap()
        >>> a = array.array('i')
        >>> for _ in xrange(2000000):
        ...     a.append(random.randint(0, sys.maxint))
        ...
        >>> h.heap()
        Partition of a set of 14 objects. Total size = 8001108 bytes.
         Index Count  %     Size   % Cumulative   % Kind (class / dict of class)
             0     1  7  8000028 100    8000028 100 array.array

    On average, a pair of integers uses 8 bytes.

    I accept that 8 bytes/pair in a dictionary is rather hard to achieve in general. Rephrased question: is there a memory-efficient implementation of an int-int dictionary that uses considerably less than 49 bytes/pair?
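
    One way to get close to array-like storage with O(log n) lookups, sketched below as a minimal Python 2.7-compatible class: two parallel array.array('l') columns kept sorted by key, with bisect for the search. Each pair then costs roughly two machine words. The trade-off (not stated in the question, so worth flagging) is that inserting a brand-new key is O(n) because of the element shift, so this fits best when the data is mostly built once and then queried.

        import array
        from bisect import bisect_left

        class IntIntDict(object):
            """Sorted parallel arrays: compact storage, O(log n) reads."""

            def __init__(self):
                self._keys = array.array('l')
                self._values = array.array('l')

            def __setitem__(self, key, value):
                i = bisect_left(self._keys, key)
                if i < len(self._keys) and self._keys[i] == key:
                    self._values[i] = value        # replace if present
                else:
                    self._keys.insert(i, key)      # O(n) shift for a new key
                    self._values.insert(i, value)

            def __getitem__(self, key):
                i = bisect_left(self._keys, key)
                if i < len(self._keys) and self._keys[i] == key:
                    return self._values[i]
                return -1                          # "a negative number if not present"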

  • Good Starting Points for Optimizing Database Calls in Ruby on Rails?

    - by viatropos
    I have a menu in Rails which grabs a nested tree of Post models, each of which has a Slug model associated via a polymorphic association (using the friendly_id gem for slugs and awesome_nested_set for the tree). The database output in development looks like this (here's the full gist):

        SQL (0.4ms)       SELECT COUNT(*) AS count_id FROM "posts" WHERE ("posts".parent_id = 39)
        CACHE (0.0ms)     SELECT "posts".* FROM "posts" WHERE ("posts"."id" = 13) LIMIT 1
        CACHE (0.0ms)     SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 13 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        Slug Load (0.4ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 40 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        SQL (0.3ms)       SELECT COUNT(*) AS count_id FROM "posts" WHERE ("posts".parent_id = 40)
        CACHE (0.0ms)     SELECT "posts".* FROM "posts" WHERE ("posts"."id" = 13) LIMIT 1
        CACHE (0.0ms)     SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 13 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        Slug Load (0.4ms) SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 41 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        ...
        Rendered shared/_menu.html.haml (907.6ms)

    What are some quick things I should always do to optimize this from the start (easy things)? Some things I'm thinking of now: can Rails 3 eager load the whole Post tree plus the associated Slugs in one DB call? Can I do that easily with named scopes or custom SQL? What is best practice in this situation? I'm not really thinking about memcached here, as that can be applied to much more than just this.

  • Algorithm for optimally choosing actions to perform a task

    - by Jules
    There are two data types: tasks and actions. An action costs a certain time to complete and has a set of tasks it depends on. A task has a set of candidate actions, and our job is to choose one of them. So:

        class Task {
            Set<Action> choices;
        }

        class Action {
            float time;
            Set<Task> dependencies;
        }

    For example, the primary task could be "Get a house". The possible actions for this task: "Buy a house" or "Build a house". The action "Build a house" costs 10 hours and has the dependencies "Get bricks" and "Get cement", etcetera. The total time is the sum of the times of all the actions we have to perform. We want to choose actions such that the total time is minimal.
    Note that the dependencies can be diamond shaped. For example, "Get bricks" could require "Get a car" (to transport the bricks) and "Get cement" would also require a car. Even if you do both "Get bricks" and "Get cement", you only have to count the time it takes to get a car once.
    Note also that the dependencies can be circular. For example "Money" - "Job" - "Car" - "Money". This is no problem for us: we simply select all of "Money", "Job" and "Car", and the total time is simply the sum of the times of these 3 things.
    Mathematical description: let actions be the set of chosen actions.

        valid(task) = ∃ action ∈ task.choices. (action ∈ actions ∧ ∀ task ∈ action.dependencies. valid(task))
        time = sum { action.time | action ∈ actions }

        minimize time subject to valid(primaryTask)
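
    As a starting point, here is a sketch in Python (purely illustrative, not a full solver) of the evaluation half of a brute-force approach: given one chosen action per task, it collects the closure of actions that actually have to be performed, counting shared (diamond) dependencies once and terminating on cycles, and sums their times. A search procedure would then minimize this value over the possible choice assignments. The name choice_of is hypothetical, and the sketch assumes it covers every task reachable through the chosen actions.

        def total_time(primary_task, choice_of):
            """choice_of maps each task to the action chosen for it.

            Returns the total time of the distinct actions needed to satisfy
            primary_task. Shared and circular dependencies are handled by the
            'performed' set: an action is only counted the first time it is seen.
            """
            performed = set()

            def require(task):
                action = choice_of[task]
                if action in performed:      # already satisfied (diamond or cycle)
                    return
                performed.add(action)
                for dep in action.dependencies:
                    require(dep)

            require(primary_task)
            return sum(action.time for action in performed)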

  • mysql subselect alternative

    - by Arnold
    Hi, let's say I am analyzing how high school sports records affect school attendance. So I have a table in which each row corresponds to a high school basketball game. Each game has an away team id and a home team id (FKs to another "team" table), a home score, an away score, and a date. I am writing a query that matches attendance with this season's basketball games. My sample output will be (#_students_missed_class, day_of_game, home_team, away_team, home_team_wins_this_season, away_team_wins_this_season).
    I now want to add how each team did the previous season to my analysis. Well, I have their previous season stored in the game table, so I should be able to accomplish that with a subselect. So in my main select statement I add the subselect:

        SELECT COUNT(*)
        FROM game_table
        WHERE game_table.date BETWEEN 'start of previous season' AND 'end of previous season'
          AND ((game_table.home_team = team_table.id AND game_table.home_score > game_table.away_score)
            OR (game_table.away_team = team_table.id AND game_table.away_score > game_table.home_score))

    In this case team_table.id refers to the id of the home_team, so I now have all their wins calculated from the previous year. This method of calculation is neither time nor resource efficient: the EXPLAIN output shows that I have ALL in the Type field, I am not using a Key, and the query times out. I'm not sure how I can accomplish a more efficient query with a subselect. It seems preposterously inefficient to have to write 4 of these queries (for home wins, home losses, away wins, away losses). I am sure this could be more lucid. I'll absolutely add color tomorrow if anyone has questions.

  • How to optimise MySQL query containing a subquery?

    - by aidan
    I have two tables, House and Person. For any row in House, there can be 0, 1 or many corresponding rows in Person. But, of those people, a maximum of one will have a status of "ACTIVE"; the others will all have a status of "CANCELLED". E.g.

        SELECT * FROM House LEFT JOIN Person ON House.ID = Person.HouseID

        House.ID | Person.ID | Person.Status
               1 |         1 | CANCELLED
               1 |         2 | CANCELLED
               1 |         3 | ACTIVE
               2 |         1 | ACTIVE
               3 |      NULL | NULL
               4 |         4 | CANCELLED

    I want to filter out the cancelled rows, and get something like this:

        House.ID | Person.ID | Person.Status
               1 |         3 | ACTIVE
               2 |         1 | ACTIVE
               3 |      NULL | NULL
               4 |      NULL | NULL

    I've achieved this with the following sub-select:

        SELECT * FROM House
        LEFT JOIN (
            SELECT * FROM Person WHERE Person.Status != "CANCELLED"
        ) Person ON House.ID = Person.HouseID

    ...which works, but breaks all the indexes. Is there a better solution that doesn't? I'm using MySQL and all relevant columns are indexed. EXPLAIN lists nothing in possible_keys. Thanks.

  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application which will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below:

        class A
        {
            int a;
            int b;
            int* v;
        };

    where:
    The memory for v will be allocated in the constructor.
    v will be allocated once when an object of type A is created and will never need to be resized.
    The size of v will vary across all instances of A.
    The application will potentially have a huge number of such objects and mostly needs to stream a large number of these objects through the CPU, but only needs to perform very simple computations on the member variables.
    Could having v dynamically allocated mean that an instance of A and its member v are not located together in memory? What tools and techniques can be used to test whether this fragmentation is a performance bottleneck? If such fragmentation is a performance issue, are there any techniques that could allow A and v to be allocated in a contiguous region of memory? Or are there any techniques to aid memory access, such as a pre-fetching scheme? For example, get an object of type A and operate on the other member variables whilst pre-fetching v. If the size of v, or an acceptable maximum size, could be known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance? The target platforms are standard desktop machines with x86/AMD64 processors, Windows or Linux OSes, and compiled using either GCC or MSVC compilers.

  • Fastest way to put contents of Set<String> to a single String with words separated by a whitespace?

    - by Lars Andren
    I have a few Set<String>s and want to transform each of these into a single String where each element of the original Set is separated by a whitespace " ". A naive first approach is doing it like this:

        Set<String> set_1;
        Set<String> set_2;

        StringBuilder builder = new StringBuilder();
        for (String str : set_1) {
            builder.append(str).append(" ");
        }
        this.string_1 = builder.toString();

        builder = new StringBuilder();
        for (String str : set_2) {
            builder.append(str).append(" ");
        }
        this.string_2 = builder.toString();

    Can anyone think of a faster, prettier or more efficient way to do this?
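
    For what it's worth, the underlying operation is just a delimiter join; shown here in Python only to keep the sketch short, and most languages or utility libraries expose something equivalent. A join makes a single pass and avoids both the trailing separator and the per-set StringBuilder bookkeeping.

        words = {"alpha", "beta", "gamma"}   # stands in for one of the Set<String>s
        joined = " ".join(words)             # single pass, no trailing space
        print(joined)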

  • cached schwartzian transform

    - by davidk01
    I'm going through "Intermediate Perl" and it's pretty cool. I just finished the section on "The Schwartzian Transform", and after it sank in I started to wonder why the transform doesn't use a cache. In lists that have several repeated values, the transform recomputes the value for each one, so I thought: why not use a hash to cache results? Here's some code:

        # a place to keep our results
        my %cache;

        # the transformation we are interested in
        sub foo {
            # expensive operations
        }

        # some data
        my @unsorted_list = ....;

        # sorting with the help of the cache
        my @sorted_list = sort {
            ($cache{$a} or $cache{$a} = &foo($a))
                <=>
            ($cache{$b} or $cache{$b} = &foo($b))
        } @unsorted_list;

    Am I missing something? Why isn't the cached version of the Schwartzian transform listed in books, and in general just better circulated? On first glance I think the cached version should be more efficient.
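
    The same idea, sketched in Python for comparison rather than the book's Perl: sorted(key=...) is the built-in decorate-sort-undecorate (it calls the key function once per element), and wrapping the expensive transform in a small dict-based memoizer gives the cached variant being described, so duplicate values are computed only once. The body of foo here is a made-up stand-in for an expensive computation.

        cache = {}

        def foo(x):
            # stand-in for an expensive transformation
            return sum(i * i for i in range(x % 1000))

        def cached_foo(x):
            # compute each distinct input once, then reuse the result
            if x not in cache:
                cache[x] = foo(x)
            return cache[x]

        unsorted_list = [5, 3, 5, 9, 3, 5]   # repeated values benefit from the cache
        sorted_list = sorted(unsorted_list, key=cached_foo)
        print(sorted_list)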

  • Can my loop be optimized any more? (C++)

    - by Sagekilla
    Below is one of my inner loops that's run several thousand times, with input sizes of 20 - 1000 or more. Is there anything I can do to help squeeze any more performance out of this? I'm not looking to move this code to something like using tree codes (Barnes-Hut), but towards optimizing the actual calculations happening inside, since the same calculations occur in the Barnes-Hut algorithm. Any help is appreciated!

        typedef double real;

        struct Particle
        {
            Vector pos, vel, acc, jerk;
            Vector oldPos, oldVel, oldAcc, oldJerk;
            real mass;
        };

        class Vector
        {
        private:
            real vec[3];

        public:
            // Operators defined here
        };

        real Gravity::interact(Particle *p, size_t numParticles)
        {
            PROFILE_FUNC();
            real tau_q = 1e300;

            for (size_t i = 0; i < numParticles; i++)
            {
                p[i].jerk = 0;
                p[i].acc = 0;
            }

            for (size_t i = 0; i < numParticles; i++)
            {
                for (size_t j = i+1; j < numParticles; j++)
                {
                    Vector r = p[j].pos - p[i].pos;
                    Vector v = p[j].vel - p[i].vel;
                    real r2 = lengthsq(r);
                    real v2 = lengthsq(v);

                    // Calculate inverse of |r|^3
                    real r3i = Constants::G * pow(r2, -1.5);

                    // da = r / |r|^3
                    // dj = (v / |r|^3 - 3 * (r . v) * r / |r|^5
                    Vector da = r * r3i;
                    Vector dj = (v - r * (3 * dot(r, v) / r2)) * r3i;

                    // Calculate new acceleration and jerk
                    p[i].acc += da * p[j].mass;
                    p[i].jerk += dj * p[j].mass;
                    p[j].acc -= da * p[i].mass;
                    p[j].jerk -= dj * p[i].mass;

                    // Collision estimation
                    // Metric 1) tau = |r|^2 / |a(j) - a(i)|
                    // Metric 2) tau = |r|^4 / |v|^4
                    real mij = p[i].mass + p[j].mass;
                    real tau_est_q1 = r2 / (lengthsq(da) * mij * mij);
                    real tau_est_q2 = (r2*r2) / (v2*v2);

                    if (tau_est_q1 < tau_q)
                        tau_q = tau_est_q1;
                    if (tau_est_q2 < tau_q)
                        tau_q = tau_est_q2;
                }
            }

            return sqrt(sqrt(tau_q));
        }

  • Question about Cost in Oracle Explain Plan

    - by Will
    When Oracle is estimating the 'Cost' for certain queries, does it actually look at the amount of data (rows) in a table? For example: if I'm doing a full table scan of employees for name='Bob', does it estimate the cost by counting the number of existing rows, or is it always a set cost?

  • Combine query results from one table with the defaults from another

    - by pulegium
    This is a dumbed-down version of the real table data, so it may look a bit silly.

        Table 1 (users):
            id             INT
            username       TEXT
            favourite_food TEXT
            food_pref_id   INT

        Table 2 (food_preferences):
            id        INT
            food_type TEXT

    The logic is as follows: let's say I have this in my food preference table:

        1, 'VEGETARIAN'

    and this in the users table:

        1, 'John', NULL, 1
        2, 'Pete', 'Curry', 1

    In which case John defaults to being a vegetarian, but Pete should show up as a person who enjoys curry. Question: is there any way to combine the query into one select statement, so that it would get the default from the preferences table if the favourite_food column is NULL? I can obviously do this in application logic, but it would be nice just to offload this to SQL, if possible. DB is SQLite3...
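
    Since the database is SQLite3, one way to express the fall-back entirely in SQL is COALESCE over a LEFT JOIN; here is a small sketch using Python's built-in sqlite3 module, with the table and column names taken from the question and the rest purely illustrative.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE food_preferences (id INT, food_type TEXT);
            CREATE TABLE users (id INT, username TEXT, favourite_food TEXT, food_pref_id INT);
            INSERT INTO food_preferences VALUES (1, 'VEGETARIAN');
            INSERT INTO users VALUES (1, 'John', NULL, 1);
            INSERT INTO users VALUES (2, 'Pete', 'Curry', 1);
        """)

        # COALESCE returns its first non-NULL argument, so the personal
        # favourite wins and the preference row only supplies the default.
        rows = conn.execute("""
            SELECT u.username, COALESCE(u.favourite_food, p.food_type) AS food
            FROM users u
            LEFT JOIN food_preferences p ON p.id = u.food_pref_id
        """).fetchall()

        print(rows)   # expected: [('John', 'VEGETARIAN'), ('Pete', 'Curry')]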

  • Design for a Debate club assignment application

    - by Amir Rachum
    Hi all,
    For my university's debate club, I was asked to create an application to assign debate sessions, and I'm having some difficulty coming up with a good design for it. I will do it in Java. Here's what's needed.
    What you need to know about BP debates: there are four teams of 2 debaters each and a judge. The four teams are assigned specific positions: gov1, gov2, op1, op2. There is no significance to the order within a team.
    The goal of the application is to take as input the debaters who are present (for example, if there are 20 people, we will hold 2 debates) and assign them to teams and roles with regard to the history of each debater, so that:
    Each debater debates with (is on the same team as) as many different people as possible.
    Each debater debates in the different positions roughly uniformly.
    The debate is fair - debaters have different levels of experience and this should be as even as possible, i.e. there shouldn't be a team of two very experienced debaters and a team of junior debaters.
    There should be an option for the user to restrict the assignment in various ways, such as:
    Specifying that two people should debate together, in a specific position or not.
    Specifying that a single debater should be in a specific position, regardless of the partner.
    etc...
    If anyone can give me some pointers for a design for this application, I'll be so thankful! Also, I've never implemented a GUI before, so I'd appreciate some pointers on that as well, but it's not the major issue right now.

  • ASP.NET 4.5 Bundling in Debug Mode - Stale Resources

    - by RPM1984
    Is there any way we can make the ASP.NET 4.5 bundling functionality generate GUIDs as part of the querystring when running in debug mode (e.g. bundling turned OFF)? The problem is that when developing locally, the script/CSS files are rendered like this:

        <script type="text/javascript" src="/Content/Scripts/myscript.js" />

    So if I change that file, I need to do a hard refresh (sometimes a few times) to get the file picked up by the browser - annoying. Is there any way we can make it render out like this:

        <script type="text/javascript" src="/Content/Scripts/myscript.js?v=x" />

    Where x is a GUID (e.g. always unique). Ideas? I'm on ASP.NET MVC 4.

  • When to use certain optimizations such as -fwhole-program and -fprofile-generate with several shared libraries

    - by James
    Probably a simple answer; I get quite confused with the language used in the GCC documentation for some of these flags! Anyway, I have three libraries and a programme which uses all three. I compile each of my libraries separately with individual (potentially different) sets of warning flags. However, I compile all three libraries with the same set of optimisation flags. I then compile my main programme, linking in these three libraries, with its own set of warning flags and the same optimisation flags used during the libraries' compilation.
    1) Do I have to compile the libraries with optimisation flags present, or can I just use these flags when compiling the final programme and linking to the libraries? If the latter, will it then optimise all or just some (presumably that which is called) of the code in these libraries?
    2) I would like to use -fwhole-program -flto -fuse-linker-plugin and the linker plugin gold. At which stage do I compile with these on... just the final compilation, or do these flags need to be present during the compilation of the libraries?
    3) Pretty much the same as 2), however with -fprofile-generate, -fprofile-arcs and -fprofile-use. I understand one first runs a programme with generate, and then with use. However, do I have to compile each of the libraries with generate/use etc. or just the final programme? And if it is just the last programme, when I then compile with -fprofile-use will it also optimise the libraries' functionality?
    Many thanks, James

  • Optimizing Python code with many attribute and dictionary lookups

    - by gotgenes
    I have written a program in Python which spends a large amount of time looking up attributes of objects and values from dictionary keys. I would like to know if there's any way I can optimize these lookup times, potentially with a C extension, to reduce the time of execution, or if I need to simply re-implement the program in a compiled language.
    The program implements some algorithms using a graph. It runs prohibitively slowly on our data sets, so I profiled the code with cProfile using a reduced data set that could actually complete. The vast majority of the time is being burned in one function, and specifically in two statements, generator expressions, within the function.
    The generator expression at line 202 is

        neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors
                if neighbor in selected_nodes)

    and the generator expression at line 204 is

        neighbor_z_scores = (interaction_graph.node[neighbor]['weight']
                for neighbor in neighbors_in_selected_nodes)

    The source code for this function is provided below for context. selected_nodes is a set of nodes in the interaction_graph, which is a NetworkX Graph instance. node_neighbors is an iterator from Graph.neighbors_iter(). Graph itself uses dictionaries for storing nodes and edges. Its Graph.node attribute is a dictionary which stores nodes and their attributes (e.g., 'weight') in dictionaries belonging to each node.
    Each of these lookups should be amortized constant time (i.e., O(1)); however, I am still paying a large penalty for the lookups. Is there some way I can speed up these lookups (e.g., by writing parts of this as a C extension), or do I need to move the program to a compiled language?
    Below is the full source code for the function that provides the context; the vast majority of execution time is spent within this function.

        def calculate_node_z_prime(node, interaction_graph, selected_nodes):
            """Calculates a z'-score for a given node.

            The z'-score is based on the z-scores (weights) of the neighbors
            of the given node, and proportional to the z-score (weight) of the
            given node. Specifically, we find the maximum z-score of all
            neighbors of the given node that are also members of the given set
            of selected nodes, multiply this z-score by the z-score of the
            given node, and return this value as the z'-score for the given
            node.

            If the given node has no neighbors in the interaction graph, the
            z'-score is defined as zero.

            Returns the z'-score as zero or a positive floating point value.

            :Parameters:
            - `node`: the node for which to compute the z-prime score
            - `interaction_graph`: graph containing the gene-gene or gene
              product-gene product interactions
            - `selected_nodes`: a `set` of nodes fitting some criterion of
              interest (e.g., annotated with a term of interest)

            """
            node_neighbors = interaction_graph.neighbors_iter(node)
            neighbors_in_selected_nodes = (neighbor for neighbor in
                    node_neighbors if neighbor in selected_nodes)
            neighbor_z_scores = (interaction_graph.node[neighbor]['weight']
                    for neighbor in neighbors_in_selected_nodes)
            try:
                max_z_score = max(neighbor_z_scores)
            # max() throws a ValueError if its argument has no elements; in
            # this case, we need to set the max_z_score to zero
            except ValueError, e:
                # Check to make certain max() raised this error
                if 'max()' in e.args[0]:
                    max_z_score = 0
                else:
                    raise e
            z_prime = interaction_graph.node[node]['weight'] * max_z_score
            return z_prime

    Here are the top couple of calls according to cProfile, sorted by time.

        ncalls     tottime  percall  cumtime  percall  filename:lineno(function)
        156067701  352.313    0.000  642.072    0.000  bpln_contextual.py:204(<genexpr>)
        156067701  289.759    0.000  289.759    0.000  bpln_contextual.py:202(<genexpr>)
         13963893  174.047    0.000  816.119    0.000  {max}
         13963885   69.804    0.000  936.754    0.000  bpln_contextual.py:171(calculate_node_z_prime)
          7116883   61.982    0.000   61.982    0.000  {method 'update' of 'set' objects}
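
    Not the author's code, but a sketch of the two standard CPython tricks for this kind of hot spot: hoist repeated attribute chains (interaction_graph.node, selected_nodes) into local variables before the loop, and replace the two generator expressions plus max() with a single explicit loop so there is no per-neighbor generator overhead. It assumes the same NetworkX-style graph API used above; whether it helps enough would have to be confirmed by re-profiling.

        def calculate_node_z_prime_local(node, interaction_graph, selected_nodes):
            # Bind the lookups to local names once; inside the loop each
            # neighbor then costs a single dict index instead of an attribute
            # chain plus a dict index.
            node_attrs = interaction_graph.node      # node -> attribute dict
            selected = selected_nodes

            max_z_score = None
            for neighbor in interaction_graph.neighbors_iter(node):
                if neighbor in selected:
                    weight = node_attrs[neighbor]['weight']
                    if max_z_score is None or weight > max_z_score:
                        max_z_score = weight

            if max_z_score is None:                  # no selected neighbors
                max_z_score = 0
            return node_attrs[node]['weight'] * max_z_score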

  • How efficient is PHP's substr?

    - by zildjohn01
    I'm writing a parser in PHP which must be able to handle large in-memory strings, so this is a somewhat important issue (i.e., please don't flame me about premature optimization). How does the substr function work? Does it make a second copy of the string data in memory, or does it reference the original? Should I worry about calling, for example, $str = substr($str, 1); in a loop?

  • How to recognize what indexes are not used?

    - by tomaszs
    I have a table in MySQL with 7 indexes, most of them on more than one column. I think that's too many indexes. Is there any way to get statistics on which indexes are used most by the thousands of queries hitting this database, and which are less worthwhile, so I know which index to consider removing first?

  • What's the best way to measure and track performance over various calls at runtime?

    - by bitcruncher
    Hello. I'm trying to optimize the performance of my code, but I'm not familiar with Xcode's debuggers, or debuggers in general. Is it possible to track the execution time and frequency of calls made at runtime? Imagine a chain of events with some recursive calls over a fraction of a second. What's the best way to track where the CPU spends most of its time? Many thanks. Edit: Maybe this is better asked as: how do I use the Xcode debug tools to do a stack trace?

  • Javascript + Firebug : "cannot access optimized closure" What does it mean?

    - by interstar
    I just got the following error in a piece of JavaScript (in Firefox 3.5, with Firebug running):

        cannot access optimized closure

    I know, superficially, what caused the error: I had a line options.length() instead of options.length. Fixing this bug made the message go away. But I'm curious: what does this mean? What is an optimized closure? Is optimizing a closure something that the JavaScript interpreter does automatically? What does it do?
