Search Results

Search found 27144 results on 1086 pages for 'tail call optimization'.

  • [Python] Tips for making fraction calculator code more optimized (faster and using less memory)

    - by Logic Named Joe
    Hello everyone! Basically, what I need the program to do is act as a simple fraction calculator (addition, subtraction, multiplication and division) for a single line of input, for example: input: 1/7 + 3/5, output: 26/35. My initial code:

        import sys

        def euclid(numA, numB):
            while numB != 0:
                numRem = numA % numB
                numA = numB
                numB = numRem
            return numA

        for wejscie in sys.stdin:
            wyjscie = wejscie.split(' ')
            a, b = [int(x) for x in wyjscie[0].split("/")]
            c, d = [int(x) for x in wyjscie[2].split("/")]
            if wyjscie[1] == '+':
                licz = a * d + b * c
                mian = b * d
                nwd = euclid(licz, mian)
                konA = licz / nwd
                konB = mian / nwd
                wynik = str(konA) + '/' + str(konB)
                print(wynik)
            elif wyjscie[1] == '-':
                licz = a * d - b * c
                mian = b * d
                nwd = euclid(licz, mian)
                konA = licz / nwd
                konB = mian / nwd
                wynik = str(konA) + '/' + str(konB)
                print(wynik)
            elif wyjscie[1] == '*':
                licz = a * c
                mian = b * d
                nwd = euclid(licz, mian)
                konA = licz / nwd
                konB = mian / nwd
                wynik = str(konA) + '/' + str(konB)
                print(wynik)
            else:
                licz = a * d
                mian = b * c
                nwd = euclid(licz, mian)
                konA = licz / nwd
                konB = mian / nwd
                wynik = str(konA) + '/' + str(konB)
                print(wynik)

    Which I reduced to:

        import sys

        def euclid(numA, numB):
            while numB != 0:
                numRem = numA % numB
                numA = numB
                numB = numRem
            return numA

        for wejscie in sys.stdin:
            wyjscie = wejscie.split(' ')
            a, b = [int(x) for x in wyjscie[0].split("/")]
            c, d = [int(x) for x in wyjscie[2].split("/")]
            if wyjscie[1] == '+':
                print("/".join([str((a * d + b * c) / euclid(a * d + b * c, b * d)),
                                str((b * d) / euclid(a * d + b * c, b * d))]))
            elif wyjscie[1] == '-':
                print("/".join([str((a * d - b * c) / euclid(a * d - b * c, b * d)),
                                str((b * d) / euclid(a * d - b * c, b * d))]))
            elif wyjscie[1] == '*':
                print("/".join([str((a * c) / euclid(a * c, b * d)),
                                str((b * d) / euclid(a * c, b * d))]))
            else:
                print("/".join([str((a * d) / euclid(a * d, b * c)),
                                str((b * c) / euclid(a * d, b * c))]))

    Any advice on how to improve this further is welcome. Edit: one more thing that I forgot to mention - the code cannot make use of any libraries apart from sys.
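
    One direction worth considering (a hedged sketch under the stated sys-only constraint, not the original code): each branch differs only in how the numerator and denominator are formed, so a small operator table lets the gcd be computed exactly once per line and the printing code be shared. The lambda table and variable names here are illustrative, not from the original.

        import sys

        def euclid(a, b):
            while b:
                a, b = b, a % b
            return a

        # operator -> (numerator, denominator) for the fraction a/b <op> c/d
        OPS = {
            '+': lambda a, b, c, d: (a * d + b * c, b * d),
            '-': lambda a, b, c, d: (a * d - b * c, b * d),
            '*': lambda a, b, c, d: (a * c, b * d),
            '/': lambda a, b, c, d: (a * d, b * c),
        }

        for line in sys.stdin:
            left, op, right = line.split()
            a, b = map(int, left.split('/'))
            c, d = map(int, right.split('/'))
            num, den = OPS[op](a, b, c, d)
            nwd = euclid(num, den)          # gcd computed once, not four times
            print(str(num // nwd) + '/' + str(den // nwd))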

    Read the article

  • How to speed-up a simple method (preferably without changing interfaces or data structures)?

    - by baol
    I have some data structures:

    - all_unordered_m is a big vector containing all the strings I need (all different)
    - ordered_m is a small vector containing the indexes of a subset of the strings (all different) in the former vector
    - position_m maps the indexes of objects from the first vector to their position in the second one

    The string_after(index, reverse) method returns the string referenced by ordered_m after all_unordered_m[index]. ordered_m is considered circular, and is explored in natural or reverse order depending on the second parameter. The code is something like the following:

        struct ordered_subset {
            // [...]
            std::vector<std::string>& all_unordered_m;           // size = n >> 1
            std::vector<size_t> ordered_m;                       // size << n
            std::tr1::unordered_map<size_t, size_t> position_m;

            const std::string& string_after(size_t index, bool reverse) const
            {
                size_t pos = position_m.find(index)->second;
                if (reverse)
                    pos = (pos == 0 ? ordered_m.size() - 1 : pos - 1);
                else
                    pos = (pos == ordered_m.size() - 1 ? 0 : pos + 1);
                return all_unordered_m[ordered_m[pos]];
            }
        };

    Given that:

    - I do need all of the data structures for other purposes;
    - I cannot change them, because I need to access the strings by their id in all_unordered_m, by their index inside the various ordered_m, and I need to know the position of a string (identified by its position in the first vector) inside the ordered_m vector;
    - I cannot change the string_after interface without changing most of the program.

    How can I speed up the string_after method, which is called billions of times and is eating up about 10% of the execution time?
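
    One possible line of attack (a hedged sketch, transliterated to Python for brevity; the names mirror the question but the helper is hypothetical): since the indexes are dense small integers, both the hash lookup and the wrap-around branch can be paid once, up front, by precomputing flat next/previous arrays so that each call becomes two plain array indexings.

        # Precompute, for every string index, the index of its circular
        # neighbour in ordered_m; string_after then does no hashing and
        # no branching on wrap-around.
        def build_neighbors(ordered_m, n):
            nxt = [0] * n
            prv = [0] * n
            k = len(ordered_m)
            for pos, idx in enumerate(ordered_m):
                nxt[idx] = ordered_m[(pos + 1) % k]
                prv[idx] = ordered_m[(pos - 1) % k]
            return nxt, prv

        all_unordered_m = ["foo", "bar", "baz", "qux"]
        ordered_m = [2, 0, 3]                      # circular order: baz, foo, qux
        nxt, prv = build_neighbors(ordered_m, len(all_unordered_m))

        def string_after(index, reverse):
            return all_unordered_m[(prv if reverse else nxt)[index]]

        print(string_after(2, False))              # "foo", the string after "baz"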

    Read the article

  • C++ Reserve Memory Space

    - by uray
    Is there any way to reserve memory space to be used later by the default Windows Memory Manager, so that my application won't run out of memory as long as my program doesn't use more space than I reserved at its start?

    Read the article

  • How can I write faster JavaScript?

    - by a paid nerd
    I'm writing an HTML5 canvas visualization. According to the Chrome Developer Tools profiler, 90% of the work is being done in (program), which I assume is the V8 interpreter at work calling functions and switching contexts and whatnot. Other than logic optimizations (e.g., only redrawing parts of the visualization that have changed), what can I do to optimize the CPU usage of my JavaScript? I'm willing to sacrifice some amount of readability and extensibility for performance. Is there a big list I'm missing because my Google skills suck? I have some ideas but I'm not sure if they're worth it:

    - Limit function calls
    - When possible, use arrays instead of objects and properties
    - Use variables for math operation results as much as possible
    - Cache common math operations such as Math.PI / 180
    - Use sin and cos approximation functions instead of Math.sin() and Math.cos()
    - Reuse objects when passing around data instead of creating new ones
    - Replace Math.abs() with ~~
    - Study jsperf.com until my eyes bleed
    - Use a preprocessor on my JavaScript to do some of the above operations

    Read the article

  • Optimizing C++ Tree Generation

    - by cam
    Hi, I'm generating a Tic-Tac-Toe game tree (it takes 9 seconds after the first move), and I'm told it should take only a few milliseconds. So I'm trying to optimize it. I ran it through CodeAnalyst and these are the top 5 calls being made (I used bitsets to represent the Tic-Tac-Toe board):

        std::_Iterator_base::_Orphan_me
        std::bitset<9>::test
        std::_Iterator_base::_Adopt
        std::bitset<9>::reference::operator bool
        std::_Iterator_base::~_Iterator_base

        void BuildTreeToDepth(Node& nNode, const int& nextPlayer, int depth)
        {
            if (depth > 0) {
                // Calculate gameboard states
                int evalBoard = nNode.m_board.CalculateBoardState();
                bool isFinished = nNode.m_board.isFinished();
                if (isFinished || (nNode.m_board.isWinner() > 0)) {
                    nNode.m_winCount = evalBoard;
                } else {
                    Ticboard tBoard = nNode.m_board;
                    do {
                        int validMove = tBoard.FirstValidMove();
                        if (validMove != -1) {
                            Node f;
                            Ticboard tempBoard = nNode.m_board;
                            tempBoard.Move(validMove, nextPlayer);
                            tBoard.Move(validMove, nextPlayer);
                            f.m_board = tempBoard;
                            f.m_winCount = 0;
                            f.m_Move = validMove;
                            int currPlay = (nextPlayer == 1 ? 2 : 1);
                            BuildTreeToDepth(f, currPlay, depth - 1);
                            nNode.m_winCount += f.m_board.CalculateBoardState();
                            nNode.m_branches.push_back(f);
                        } else {
                            break;
                        }
                    } while (true);
                }
            }
        }

    Where should I be looking to optimize it? How should I optimize these 5 calls (I don't recognize them)?

    Read the article

  • Help on MySQL table indexing when GROUP BY is used in a query

    - by Silver Light
    Thank you for your attention. There are two INNODB tables:

    Table authors: id INT, nickname VARCHAR(50), status ENUM('active', 'blocked'), about TEXT
    Table books: author_id INT, title VARCHAR(150)

    I'm running a query against these tables to get each author and a count of the books he has:

        SELECT a.*, COUNT(b.id) AS book_count
        FROM authors AS a, books AS b
        WHERE a.status != 'blocked'
          AND b.author_id = a.id
        GROUP BY a.id
        ORDER BY a.nickname

    This query is very slow (it takes about 6 seconds to execute). I have an index on books.author_id and it works perfectly, but I do not know how to create an index on the authors table so that this query could use it. Here is how the current EXPLAIN looks:

        id  select_type  table  type  possible_keys               key            key_len  ref   rows  Extra
        1   SIMPLE       a      ALL   PRIMARY,id_status_nickname  NULL           NULL     NULL  3305  Using where; Using temporary; Using filesort
        1   SIMPLE       b      ref   key_author_id               key_author_id  5        a.id  2     Using where; Using index

    I've looked at the MySQL manual on optimizing queries with GROUP BY, but could not figure out how to apply it to my query. I'll appreciate any help and hints on this - what must the index structure be, so that MySQL can use it?

    Read the article

  • Optimize SQL query (Facebook-like application)

    - by fabriciols
    My application is similar to Facebook, and I'm trying to optimize the query that gets a user's records. The records I want are those that have the user as src or dst. The src is in usermuralentry directly; the dst list is in usermuralentry_user. So an entry can have one src and many dst. I have these tables:

        mysql> desc usermuralentry;
        +-------------+------------+------+-----+---------+----------------+
        | Field       | Type       | Null | Key | Default | Extra          |
        +-------------+------------+------+-----+---------+----------------+
        | id          | int(11)    | NO   | PRI | NULL    | auto_increment |
        | user_src_id | int(11)    | NO   | MUL | NULL    |                |
        | private     | tinyint(1) | NO   |     | NULL    |                |
        | content     | longtext   | NO   |     | NULL    |                |
        | date        | datetime   | NO   |     | NULL    |                |
        | last_update | datetime   | NO   |     | NULL    |                |
        +-------------+------------+------+-----+---------+----------------+
        10 rows in set (0.10 sec)

        mysql> desc usermuralentry_user;
        +-------------------+---------+------+-----+---------+----------------+
        | Field             | Type    | Null | Key | Default | Extra          |
        +-------------------+---------+------+-----+---------+----------------+
        | id                | int(11) | NO   | PRI | NULL    | auto_increment |
        | usermuralentry_id | int(11) | NO   | MUL | NULL    |                |
        | userinfo_id       | int(11) | NO   | MUL | NULL    |                |
        +-------------------+---------+------+-----+---------+----------------+
        3 rows in set (0.00 sec)

    And the following query to retrieve information for two users:

        mysql> explain SELECT * FROM usermuralentry AS a, usermuralentry_user AS b
               WHERE a.user_src_id IN (1, 2)
                  OR (a.id = b.usermuralentry_id AND b.userinfo_id IN (1, 2));

        id  select_type  table  type  possible_keys                                                                key   key_len  ref   rows     Extra
        1   SIMPLE       b      ALL   usermuralentry_id,usermuralentry_user_bcd7114e,usermuralentry_user_6b192ca7  NULL  NULL     NULL  147188
        1   SIMPLE       a      ALL   PRIMARY                                                                      NULL  NULL     NULL  1371289  Range checked for each record (index map: 0x1)

    But it is taking A LOT of time... Some tips to optimize it? Could the table schema be better for my application?

    Read the article

  • Iterative Reduction to Null Matrix

    - by user1459032
    Here's the problem: I'm given a matrix like

    Input:
    1 1 1
    1 1 1
    1 1 1

    At each step, I need to find a "second" matrix of 1's and 0's with no two 1's on the same row or column. Then, I'll subtract the second matrix from the original matrix. I will repeat the process until I get a matrix with all 0's. Furthermore, I need to take the least possible number of steps. I need to print all the "second" matrices in O(n) time. In the above example I can get to the null matrix in 3 steps by subtracting these three matrices in order:

    Expected output:
    1 0 0
    0 1 0
    0 0 1

    0 0 1
    1 0 0
    0 1 0

    0 1 0
    0 0 1
    1 0 0

    I have coded an attempt, in which I am finding the first maximum value and creating the second matrices based on the index of that value. But for the above input I am getting 4 output matrices, which is wrong:

    My output:
    1 0 0
    0 1 0
    0 0 1

    0 1 0
    1 0 0
    0 0 0

    0 0 1
    0 0 0
    1 0 0

    0 0 0
    0 0 1
    0 1 0

    My solution works for most of the test cases but fails for the one given above. Can someone give me some pointers on how to proceed, or find an algorithm that guarantees optimality? Test case that works:

    Input:
    0 2 1
    0 0 0
    3 0 0

    Output:
    0 1 0
    0 0 0
    1 0 0

    0 1 0
    0 0 0
    1 0 0

    0 0 1
    0 0 0
    1 0 0
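
    A possibly useful fact for the optimality question (offered as a pointer, not a full solution): viewing the matrix as a bipartite multigraph - rows on one side, columns on the other, with A[i][j] parallel edges - each "second" matrix is a matching, so the minimum number of steps is the bipartite edge-chromatic number, which by König's edge-colouring theorem equals the maximum row or column sum. A tiny sketch of that lower bound (which is also achievable):

        # Minimum number of subtraction steps = largest row or column sum
        # (Koenig's edge-colouring theorem for bipartite multigraphs).
        def min_steps(matrix):
            row_sums = [sum(row) for row in matrix]
            col_sums = [sum(col) for col in zip(*matrix)]
            return max(row_sums + col_sums)

        print(min_steps([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))  # 3, so 4 matrices is one too many
        print(min_steps([[0, 2, 1], [0, 0, 0], [3, 0, 0]]))  # 3, matching the working test case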

    Read the article

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk, because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "insert ... on duplicate key update" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

        CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
        DECLARE
            member_content_type_id INTEGER;
        BEGIN
            member_content_type_id := (SELECT id FROM django_content_type
                                       WHERE app_label = 'web' AND model = 'member');

            DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;

            INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                            object_id_int, title, description, content,
                                            url, meta_encoded)
            SELECT 'default', member_content_type_id, web_member.id, web_member.id,
                   web_member.name, '',
                   web_user.email || ' ' || web_member.normalized_name || ' ' || web_country.name,
                   '', '{}'
            FROM web_member
            INNER JOIN web_user ON (web_member.user_id = web_user.id)
            INNER JOIN web_country ON (web_member.country_id = web_country.id)
            WHERE web_user.is_active = TRUE;
        END;
        $$ LANGUAGE plpgsql;

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) in watson_searchentry is a unique pair in the table, but at the moment the index is not present (there is no use for it). This script should be run at most once a day, for full rebuilds of the search index.

    Read the article

  • How to speed up code?

    - by kaushik
    I want to speed up my code. I have searched the internet and heard that Psyco is a very good tool for improving speed, but I could not find a site to download it from. I have not installed any additional libraries or modules in my Python to date. Can any Psyco user tell me where to download Psyco, and what its installation and usage procedures are? I use Windows Vista and Python 2.6 - does it work with these?
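
    For reference, the canonical Psyco usage is a two-liner at the top of the entry script (a sketch assuming a 32-bit CPython build, which Psyco requires; the project is hosted at psyco.sourceforge.net):

        # Psyco specializes and JIT-compiles functions at run time; note that it
        # never supported 64-bit interpreters, so the Python build must be 32-bit.
        import psyco
        psyco.full()   # optimize every function defined or imported from here on

        # ... the rest of the program runs unchanged ...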

    Read the article

  • Optimizing PHP code (trying to determine min/max/between case)

    - by Swizzh
    I know this code-bit does not conform very well to best coding practices, and I was looking to improve it. Any ideas?

        if ($query['date_min'] != _get_date_today()) $mode_min = true;
        if ($query['date_max'] != _get_date_today()) $mode_max = true;

        if ($mode_max && $mode_min) $mode = "between";
        elseif ($mode_max && !$mode_min) $mode = "max";
        elseif (!$mode_max && $mode_min) $mode = "min";
        else return;

        if ($mode == "min" || $mode == "between") {
            $command_min = "A";
        }
        if ($mode == "max" || $mode == "between") {
            $command_max = "B";
        }
        if ($mode == "between") {
            $command = $command_min . " AND " . $command_max;
        } else {
            if ($mode == "min") $command = $command_min;
            if ($mode == "max") $command = $command_max;
        }

        echo $command;
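
    One way to see the simplification (a hedged, language-agnostic sketch in Python rather than PHP; the function name, parameters and dates are made up for illustration): the two comparisons already determine the result, so the intermediate "mode" string can be dropped and the AND-joining handles the min, max and between cases uniformly.

        # Build the command directly from the two comparisons; an empty list
        # means neither bound differs from today (the original's early return).
        def build_command(date_min, date_max, today, cmd_min="A", cmd_max="B"):
            parts = []
            if date_min != today:
                parts.append(cmd_min)
            if date_max != today:
                parts.append(cmd_max)
            return " AND ".join(parts) or None

        print(build_command("2010-01-01", "2010-02-01", "2010-03-04"))  # A AND B
        print(build_command("2010-01-01", "2010-03-04", "2010-03-04"))  # A
        print(build_command("2010-03-04", "2010-03-04", "2010-03-04"))  # None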

    Read the article

  • Efficient SQL to count an occurrence in the latest X rows

    - by pulegium
    For example I have:

        create table a (i int);

    Assume there are 10k rows. I want to count the 0's in the last 20 rows. Something like:

        select count(*) from (select i from a limit 20) where i = 0;

    Is it possible to make this more efficient? Like a single SQL statement or something? PS. The DB is SQLite3, if that matters at all...
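
    A runnable sketch of one single-statement approach (hedged: it assumes "last" means highest rowid, since a bare LIMIT without ORDER BY has no defined order; the sample data is made up):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE a (i INT)")
        con.executemany("INSERT INTO a (i) VALUES (?)", [(n % 3,) for n in range(10000)])

        # Take the 20 newest rows by rowid, then count the zeros among them;
        # ORDER BY rowid DESC LIMIT 20 can walk the rowid index backwards.
        (count,) = con.execute(
            "SELECT COUNT(*) FROM (SELECT i FROM a ORDER BY rowid DESC LIMIT 20) WHERE i = 0"
        ).fetchone()
        print(count)  # 7 for this sample data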

    Read the article

  • Is there any faster way to parse than by walking each byte?

    - by uray
    Is there any faster way to parse a text than by walking each byte of the text? I wonder if there is any special CPU (x86/x64) instruction for string operations, used by the string library, that could somehow be exploited to optimize the parsing routine. For example, an instruction like finding a token in a string that could be run by hardware instead of looping over each byte until a token is found.
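
    The effect is easy to observe even from a high-level language (a hedged Python sketch; the data and delimiter are made up): the built-in search routine, backed by an optimized memchr-style implementation, beats an explicit per-byte loop by a wide margin on the same input.

        import timeit

        data = b"x" * 1_000_000 + b";"          # worst case: token at the very end
        TOKEN = ord(";")

        def byte_loop():
            for i, byte in enumerate(data):     # walk each byte by hand
                if byte == TOKEN:
                    return i

        def builtin_find():
            return data.find(b";")              # optimized library search

        assert byte_loop() == builtin_find()
        print("loop: %.3fs" % timeit.timeit(byte_loop, number=10))
        print("find: %.3fs" % timeit.timeit(builtin_find, number=10))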

    Read the article

  • What is the absolute fastest way to implement a concurrent queue with ONLY one consumer and one producer?

    - by JohnPristine
    java.util.concurrent.ConcurrentLinkedQueue comes to mind, but is it really optimal for this two-thread scenario? I am looking for the minimum latency possible on both sides (producer and consumer). If the queue is empty you can immediately return null, AND if the queue is full you can immediately discard the entry you are offering. Does ConcurrentLinkedQueue use super fast and light locks (AtomicBoolean)? Has anyone benchmarked ConcurrentLinkedQueue, or knows about the ultimate fastest way of doing this? Additional details: I imagine the queue should be a fair one, meaning the producer should not make the consumer wait any longer than it needs to (by front-running it), and vice-versa.
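
    The structure usually benchmarked against linked queues for exactly one producer and one consumer is a Lamport-style single-producer/single-consumer ring buffer: with only one writer per index variable, no locks are needed at all (in Java the head/tail counters would be volatile or padded AtomicLongs, as in Disruptor-style designs). A hedged Python sketch of the idea, with the empty/full behaviour the question asks for:

        class SpscRing:
            """Single-producer/single-consumer ring buffer (illustrative sketch)."""

            def __init__(self, capacity):
                self.buf = [None] * capacity
                self.head = 0   # advanced only by the consumer
                self.tail = 0   # advanced only by the producer

            def offer(self, item):              # producer side
                nxt = (self.tail + 1) % len(self.buf)
                if nxt == self.head:            # full: discard immediately
                    return False
                self.buf[self.tail] = item
                self.tail = nxt
                return True

            def poll(self):                     # consumer side
                if self.head == self.tail:      # empty: return None immediately
                    return None
                item = self.buf[self.head]
                self.head = (self.head + 1) % len(self.buf)
                return item

        q = SpscRing(4)
        q.offer("a"); q.offer("b")
        print(q.poll(), q.poll(), q.poll())     # a b None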

    Read the article

  • super-space-optimized code

    - by Will
    There are key self-contained algorithms - particularly cryptography-related ones such as AES, RSA and SHA1 - of which you can find many implementations for free on the internet. Some are written to be nice, portable, clean C. Some are written to be fast - often with macros and explicit unrolling. As far as I can tell, none are trying to be especially super-small - so I'm resigned to writing my own: specifically AES128 decryption and SHA1 for ARM THUMB2. What patterns and tricks can I use to do so? Are there compilers/tools that can roll up code?

    Read the article

  • Optimising ruby regexp -- lots of match groups

    - by Farcaller
    I'm working on a Ruby-based lexer. To improve performance, I joined all the tokens' regexps into one big regexp with named match groups. The resulting regexp looks like:

        /\A(?<__anonymous_-1038694222803470993>(?-mix:\n+))|\A(?<__anonymous_-1394418499721420065>(?-mix:\/\/[\A\n]*))|\A(?<__anonymous_3077187815313752157>(?-mix:include\s+"[\A"]+"))|\A(?<LET>(?-mix:let\s))|\A(?<IN>(?-mix:in\s))|\A(?<CLASS>(?-mix:class\s))|\A(?<DEF>(?-mix:def\s))|\A(?<DEFM>(?-mix:defm\s))|\A(?<MULTICLASS>(?-mix:multiclass\s))|\A(?<FUNCNAME>(?-mix:![a-zA-Z_][a-zA-Z0-9_]*))|\A(?<ID>(?-mix:[a-zA-Z_][a-zA-Z0-9_]*))|\A(?<STRING>(?-mix:"[\A"]*"))|\A(?<NUMBER>(?-mix:[0-9]+))/

    I'm matching it against my string, producing a MatchData where exactly one token is parsed:

        bigregex =~ "\n ... garbage"
        puts $~.inspect

    Which outputs:

        #<MatchData "\n" __anonymous_-1038694222803470993:"\n" __anonymous_-1394418499721420065:nil __anonymous_3077187815313752157:nil LET:nil IN:nil CLASS:nil DEF:nil DEFM:nil MULTICLASS:nil FUNCNAME:nil ID:nil STRING:nil NUMBER:nil>

    So the regex actually matched the "\n" part. Now I need to figure out the match group where it belongs (it's clearly visible from the #inspect output that it's __anonymous_-1038694222803470993, but I need to get it programmatically). I could not find any option other than iterating over #names:

        m.names.each do |n|
          if m[n]
            type = n.to_sym
            resolved_type = (n.start_with?('__anonymous_') ? nil : type)
            val = m[n]
            break
          end
        end

    which verifies that the match group did have a match. The problem here is that it's slow (I spend about 10% of the time in the loop; also 8% grabbing @input[@pos..-1] to make sure that \A works as expected to match the start of the string - I do not discard input, just shift @pos in it). You can check the full code at the GH repo. Any ideas on how to make it at least a bit faster? Is there any option to figure out the "successful" match group more easily?
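
    For comparison, Python's re module exposes exactly this piece of information on its match object (a hedged sketch of the same combined-regex technique, with a made-up three-token grammar): m.lastgroup names the alternative that won, so no iteration over the group names is needed.

        import re

        # One big alternation of named groups, like the Ruby version; the
        # winning alternative's name comes back directly via m.lastgroup.
        TOKENS = re.compile(r"""
              (?P<NEWLINE>\n+)
            | (?P<NUMBER>[0-9]+)
            | (?P<ID>[a-zA-Z_][a-zA-Z0-9_]*)
        """, re.VERBOSE)

        m = TOKENS.match("\nlet x = 3")
        print(m.lastgroup, repr(m.group()))   # NEWLINE '\n'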

    Read the article

  • Preventing objects from being linked if they are not needed?

    - by Massif
    I have an ARM project that I'm building with make. I'm creating the list of object files to link based on the names of all of the .c and .cpp files in my source directory. However, I would like to exclude objects from being linked if they are never used. Will the linker exclude these objects from the .elf file automatically even if I include them in the list of objects to link? If not, is there a way to generate a list of only the objects that need to be linked?

    Read the article

  • Who owes whom money optimisation problem

    - by Francis
    Say you have n people who each owe each other money. In general it should be possible to reduce the number of transactions that need to take place. i.e. if X owes Y £4 and Y owes X £8, then Y only needs to pay X £4 (1 transaction instead of 2). This becomes harder when X owes Y, but Y owes Z, who owes X as well. I can see that you can easily calculate one particular cycle. It helps me to think of it as a fully connected graph, with the nodes holding the amounts each person owes. The problem seems to be NP-complete, but what kind of optimisation algorithm could I use, nevertheless, to reduce the total number of transactions? It doesn't have to be that efficient, as n is quite small for me.
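
    A common practical approach (a hedged sketch, not a minimum-transaction guarantee - hitting the true minimum is where the NP-hardness lives): net out each person's balance first, then repeatedly match the largest remaining debtor with the largest remaining creditor. This settles everyone in at most n-1 transactions.

        def settle(debts):
            """debts: list of (debtor, creditor, amount); returns payments to make."""
            balance = {}
            for frm, to, amt in debts:
                balance[frm] = balance.get(frm, 0) - amt
                balance[to] = balance.get(to, 0) + amt
            creditors = sorted((v, k) for k, v in balance.items() if v > 0)
            debtors = sorted((-v, k) for k, v in balance.items() if v < 0)
            payments = []
            while creditors and debtors:
                cv, c = creditors.pop()         # largest creditor
                dv, d = debtors.pop()           # largest debtor
                paid = min(cv, dv)
                payments.append((d, c, paid))
                if cv > paid:
                    creditors.append((cv - paid, c)); creditors.sort()
                if dv > paid:
                    debtors.append((dv - paid, d)); debtors.sort()
            return payments

        # X owes Y 4 and Y owes X 8 collapses to a single payment: Y pays X 4.
        print(settle([("X", "Y", 4), ("Y", "X", 8)]))   # [('Y', 'X', 4)]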

    Read the article

  • Optimize jQuery code

    - by Dannemannen
    Greetings! I just built some stuff with jQuery. Everything works perfectly(!), but I would like it to be as optimized as possible. What small changes can I make to my code?

        $(document).ready(function() {
            // hide the indicator, we use it later
            $(".indicator").hide();

            // start the animation of the progressbar
            $(".fill").animate({ width: "50px" }, 4000, function() {
                $(".indicator").effect("pulsate", { times: 999 }, 2000);
            });

            // notify-me ajax function
            $(".btn-submit").click(function() {
                // get the email and put it in a new variable
                var email = $("input#mail").val();
                var dataString = 'mail=' + email;
                $.ajax({
                    type: "POST",
                    url: "/mail.php",
                    data: dataString,
                    dataType: "json",
                    success: function(msg) {
                        // JSON return, lets do some magic
                        if (msg.status == "ok") {
                            $("#response-box").fadeIn("slow").delay(2000).fadeOut("slow");
                            $("#fade").fadeIn("slow").delay(2000).fadeOut("slow");
                            $("#response-box .inner").html("<h1>Thank you.</h1>We'll keep in touch!");
                            $("#mail").val("e.g. [email protected]");
                        } else {
                            $("#response-box").fadeIn("slow").delay(2000).fadeOut("slow");
                            $("#fade").fadeIn("slow").delay(2000).fadeOut("slow");
                            $("#response-box .inner").html("<h1>Oops.</h1>Please try again!");
                        }
                    }
                });
                // make sure the form doesn't post
                return false;
            });
        });

    Read the article

  • Why does the order of the loops affect performance when iterating over a 2D array? [closed]

    - by Mark
    Possible Duplicate: Which of these two for loops is more efficient in terms of time and cache performance

    Below are two programs that are almost identical, except that I switched the i and j variables around. They both run in different amounts of time. Could someone explain why this happens?

    Version 1:

        #include <stdio.h>
        #include <stdlib.h>

        main ()
        {
            int i, j;
            static int x[4000][4000];
            for (i = 0; i < 4000; i++) {
                for (j = 0; j < 4000; j++) {
                    x[j][i] = i + j;
                }
            }
        }

    Version 2:

        #include <stdio.h>
        #include <stdlib.h>

        main ()
        {
            int i, j;
            static int x[4000][4000];
            for (j = 0; j < 4000; j++) {
                for (i = 0; i < 4000; i++) {
                    x[j][i] = i + j;
                }
            }
        }
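
    The same access-pattern effect can be reproduced from Python (a hedged sketch; the gap is much smaller than in C because interpreter overhead dominates, but the direction is the same): C stores x row-major, so Version 2's inner loop walks memory sequentially while Version 1 jumps between rows on every write.

        import time

        n = 2000
        x = [[0] * n for _ in range(n)]

        t = time.perf_counter()
        for i in range(n):          # like Version 1: writes jump between rows
            for j in range(n):
                x[j][i] = i + j
        print("column-order %.2fs" % (time.perf_counter() - t))

        t = time.perf_counter()
        for j in range(n):          # like Version 2: writes walk one row at a time
            for i in range(n):
                x[j][i] = i + j
        print("row-order    %.2fs" % (time.perf_counter() - t))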

    Read the article
