Search Results

Search found 9992 results on 400 pages for 'space efficiency'.

  • Efficient list compacting

    - by Patrik
    Suppose you have a list of unsigned ints, and some of the elements are equal to 0; you want to push the zeros to the back. Currently I use this code (list is a pointer to an array of unsigned ints of size n):

        for (i = 0; i < n; ++i) {
            if (list[i]) continue;                      /* already non-zero, keep going */
            int j;
            for (j = i + 1; j < n && !list[j]; ++j);    /* first non-zero after the hole */
            int z;
            for (z = j + 1; z < n && list[z]; ++z);     /* end of that non-zero run */
            if (j == n) break;                          /* nothing left to move */
            memmove(&(list[i]), &(list[j]), sizeof(unsigned int) * (z - j));
            int s = z - j + i;
            for (j = s; j < z; ++j) list[j] = 0;        /* clear the vacated slots */
            i = s - 1;
        }

    Can you think of a more efficient way to perform this task? The snippet is purely theoretical; in the production code, each element of list is a 64-byte struct.

    EDIT: I'll post my solution. Many thanks to Jonathan Leffler.

        void RemoveDeadParticles(int *list, int *n)
        {
            int i, j = *n - 1;
            /* skip the dead (zero) entries at the tail */
            for (; j >= 0 && list[j] == 0; --j);
            for (i = 0; i < j; ++i) {
                if (list[i]) continue;
                /* fill the hole with the last live entry, then retire that slot */
                memcpy(&(list[i]), &(list[j]), sizeof(int));
                list[j] = 0;
                for (; j >= 0 && list[j] == 0; --j);
                if (i == j) break;
            }
            *n = i + 1;
        }
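
    For reference, the same swap-from-the-end idea reads naturally in a higher-level sketch. This is Python purely for brevity (the question is about C); like the solution above, it does not preserve the order of the surviving elements:

        def remove_dead_particles(items):
            """Fill each zero slot with a live entry taken from the end; return the new length."""
            j = len(items) - 1
            while j >= 0 and items[j] == 0:        # skip dead entries at the tail
                j -= 1
            i = 0
            while i < j:
                if items[i] == 0:
                    items[i] = items[j]            # fill the hole with the last live entry
                    items[j] = 0
                    while j >= 0 and items[j] == 0:
                        j -= 1
                i += 1
            return j + 1

        data = [5, 0, 0, 7, 0, 3]
        n = remove_dead_particles(data)
        print(data[:n])    # [5, 3, 7] -- every non-zero entry survives, order is not preserved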

  • Ruby: Why is Array.sort slow for large objects?

    - by David Waller
    A colleague needed to sort an array of ActiveRecord objects in a Rails app. He tried the obvious Array.sort! but it seemed surprisingly slow, taking 32s for an array of 3700 objects. So just in case it was these big fat objects slowing things down, he reimplemented the sort by sorting an array of small objects, then reordering the original array of ActiveRecord objects to match - as shown in the code below. Tada! The sort now takes 700ms. That really surprised me. Does Ruby's sort method end up copying objects about the place rather than just references? He's using Ruby 1.8.6/7.

        def self.sort_events(events)
          event_sorters = Array.new(events.length) {|i| EventSorter.new(i, events[i])}
          event_sorters.sort!
          event_sorters.collect {|es| events[es.index]}
        end

        private

        # Class used by sort_events
        class EventSorter
          attr_reader :sqn
          attr_reader :time
          attr_reader :index

          def initialize(index, event)
            @index = index
            @sqn = event.sqn
            @time = event.time
          end

          def <=>(b)
            @time != b.time ? @time <=> b.time : @sqn <=> b.sqn
          end
        end
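
    The trick being described is the classic decorate-sort-undecorate pattern: build small, cheap sort keys once, sort those, then reorder the originals. A minimal sketch of the same idea in Python (Event, sqn and time are stand-ins for the ActiveRecord objects above):

        from collections import namedtuple

        Event = namedtuple('Event', 'sqn time payload')    # stand-in for the ActiveRecord model
        events = [Event(2, 5, 'b'), Event(1, 5, 'a'), Event(3, 1, 'c')]

        # Decorate with lightweight (time, sqn) keys, sort the keys, then undecorate.
        order = sorted(range(len(events)), key=lambda i: (events[i].time, events[i].sqn))
        events_sorted = [events[i] for i in order]
        print(events_sorted)    # the time=1 event first, then the two time=5 events ordered by sqn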

  • Most efficient approach for multilingual PHP website

    - by alexteg
    I am working on a large multilingual website and I am considering different approaches for making it multilingual. The possible alternatives I can think of are:

      - the gettext functions, with generation of .po files
      - one MySQL table with the translations and a unique string ID for each text
      - PHP files with arrays containing the different translations, with unique string IDs

    As far as I have understood, the gettext functions should be the most efficient, but my requirement is that it should be possible to change a text string in the original reference language (English) without the other translations of that string automatically reverting back to English just because a couple of words changed. Is this possible with gettext? What is the least resource-demanding solution? Is using the gettext functions or PHP files with arrays more or less equally resource demanding? Any other suggestions for more efficient solutions?

  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on Google App Engine returns content items (just text) and comments on them. It works like this (pseudo-ish code):

        query: get keys of latest content         # query to datastore
        for each item in content:
            if item_dict in memcache:
                use item_dict
            else:
                build_item_dict(item)             # by fetching from datastore
                store item_dict in memcache
        send all item_dicts to template

    Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the webpage. My problem is that if the memcache entries have expired, then for each item I want to display I have to (1) look the item up in memcache, (2) fetch the item from the datastore since it isn't cached, and (3) store the item in memcache. These calls add up quickly. I don't set an expiry time for the memcache entries, so this really only happens once in the morning, but the webpage then takes long enough to load (~1 sec) that the browser reports it as not existing. Normally my pages take about 50 ms to load. This approach works decently for frequent visits, but it has its flaws as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance.
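
    One way to cut down the per-item round trips on a cold cache is to batch the lookups and writes. A minimal sketch, assuming the Python App Engine SDK (memcache.get_multi and memcache.set_multi are its batch calls); build_item_dict here is a placeholder standing in for the datastore fetch from the pseudo-code above:

        from google.appengine.api import memcache

        def build_item_dict(key):
            return {'key': key}    # placeholder for the real datastore fetch

        def get_item_dicts(keys):
            cache_keys = ['item_dict:%s' % k for k in keys]
            cached = memcache.get_multi(cache_keys)        # one round trip for all the hits
            item_dicts, to_store = [], {}
            for key, cache_key in zip(keys, cache_keys):
                item_dict = cached.get(cache_key)
                if item_dict is None:
                    item_dict = build_item_dict(key)       # datastore fetch for the misses only
                    to_store[cache_key] = item_dict
                item_dicts.append(item_dict)
            if to_store:
                memcache.set_multi(to_store)               # write all the misses back in one call
            return item_dicts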

  • Color space - RGB and YCbCr question

    - by HardCoder1986
    Hello! I am now trying to understand how JPEG encoding works, and everything seems fine except the color transformation part. Before attempting the DCT, the JPEG algorithm transforms the image into the YCbCr color space. To me this essentially means that we take a chunk of the color information (compared to the initial RGB image) and discard it while applying the RGB -> YCbCr transformation. So our encoding steps look roughly like RGB -> YCbCr -> DCT -> Huffman, and decoding means inverting this process. My question is: why does the image (for example, created and exported to JPEG) remain the same in terms of color, even though we have to make the inverse YCbCr -> RGB transform? Where does the discarded part of the color information come from, or how is it handled?
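
    A small numeric illustration may help (a sketch using the JFIF-style BT.601 equations, not the code path of any particular encoder): the RGB -> YCbCr transform itself is reversible apart from rounding, so the decoder can recover the original colors from Y, Cb and Cr alone. The information that is actually thrown away comes later, from chroma subsampling and quantization of the DCT coefficients, and that loss is simply never reconstructed.

        # Round-trip check: RGB -> YCbCr -> RGB comes back within floating-point rounding error.
        def rgb_to_ycbcr(r, g, b):
            y  =       0.299    * r + 0.587    * g + 0.114    * b
            cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
            cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
            return y, cb, cr

        def ycbcr_to_rgb(y, cb, cr):
            r = y                         + 1.402    * (cr - 128)
            g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
            b = y + 1.772    * (cb - 128)
            return r, g, b

        print(ycbcr_to_rgb(*rgb_to_ycbcr(200, 30, 90)))    # ~ (200.0, 30.0, 90.0)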

  • The best way to do :not in jQuery?

    - by Smickie
    Hi, I have a menu in jQuery: when you click on a link it opens up, but I want it so that when you click somewhere else - anywhere that is not the menu - it becomes hidden. At the moment I'm binding a click event to $(':not(#the_menu)'), but this seems like I'm binding a click event to the entire page minus the menu. Is there a more efficient way of doing something like this?

  • Tab versus space indentation in C#

    - by Lars Fastrup
    I sometimes find myself discussing this issue with other C# developers, especially if we use different styles. I can see the advantage of tab indentation: it allows different developers to browse the code with their favorite indent size. Nonetheless, I long ago went for two-space indentation in my C# code and have stuck with it ever since, mainly because I often disliked the way statements spanning multiple lines are sometimes messed up when viewing code from other developers using another tab size. Recently a developer at one of my clients approached me and asked why I did not use tabs, because he preferred to view code with an indentation size of 4. So my question is: which style do you prefer, and why?

  • Efficient algorithm for Next button on a MySQL result set

    - by David Grayson
    I have a website that lets people view rows in a table (each row is a picture). There are more than 100,000 rows. You can view different subsets of the rows, and you can view them with different sort orders. While you are viewing one of the rows, you can click the "Next" or "Previous" buttons to go to the next/previous row in the list. How would you implement the "Next" and "Previous" features of the website? More specifically, if you have an arbitrary query that returns a list of up to 100,000+ rows, and you know some information about the current row someone is viewing, how do you determine the NEXT row efficiently? Here is the pseudo-code of the solution I came up with when the website was young; it worked well when there were only 1000 rows, but now that there are 100,000 rows I think it is eating up too much memory.

        int nextRowId(string query, int currentRowId)
        {
            array allRowIds = mysql_query(query);                       // Takes up a lot of memory!
            int currentIndex = (index of currentRowId in allRowIds);    // Takes time!
            return allRowIds[currentIndex + 1];
        }

    While you are thinking about this problem, remember that the website can store more information about the current row than just its ID (for example, the position of the current row in the result set), and this information can be used as a hint to help determine the ID of the next row. Edit: Sorry for not mentioning this earlier, but this isn't just a static website: rows can often be added to the list, and rows can be re-ordered in the list. (Much more rarely, rows can be removed from the list.) I think that I should worry about that kind of thing, but maybe you can convince me otherwise.
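
    The usual alternative to materializing every id is keyset (a.k.a. seek) pagination: ask the database for the single row that sorts immediately after the current one. A sketch in Python with a DB-API cursor; pictures and taken_at are placeholder names, and the (sort value, id) pair is exactly the kind of per-row hint mentioned above:

        # Keeping an index on (taken_at, id) helps here, and the tuple comparison keeps the
        # ordering unambiguous even when taken_at values repeat.
        NEXT_ROW_SQL = """
            SELECT id
            FROM pictures
            WHERE (taken_at, id) > (%s, %s)    -- strictly after the row being viewed
            ORDER BY taken_at, id
            LIMIT 1
        """

        def next_row_id(cursor, current_taken_at, current_id):
            cursor.execute(NEXT_ROW_SQL, (current_taken_at, current_id))
            row = cursor.fetchone()
            return row[0] if row else None     # None means this is already the last row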

  • Problems crossing the boundary between protected greasemonkey execution space and the unsafeWindow l

    - by Chilly
    Hi guys. Here is my problem: I've registered some callbacks into a Yahoo event-driven webpage (betfair.com market views) and am trapping the betsPlaced events with a handler. So far so simple. The next stage is to get the event back into Greasemonkey land, and while I know that from Greasemonkey space you can call unsafeWindow.stuff, there is no reverse operation (by design). So if I want to send the contents of the event over, say, a cometd queue, my carefully set up jQuery, Greasemonkey, YUI2, Betfair environment fails by telling me that unsafeWindow processes can't call GM_ajax stuff. This is obviously safe and sane, but it basically stops me doing what I want to do. Has anyone tried doing this (ignore the cometd stuff, just general ajax calls) and succeeded? I've had a look at pages like this: http://wiki.greasespot.net/0.7.20080121.0%2B_compatibility but it doesn't appear to work for all the calls.

  • weird space in IE - Any suggestions?

    - by Guru
    The table below is the only element inside the body tag. It displays fine in Firefox 3, as I expect it to, but it does not look good in IE7: there is a weird space just between the nested table and the row above. Can you please suggest some way to remove that weird space? Thanks.

        <table>
          <tr>
            <td colspan="14">
              <div>
                <table id="value_table" width="100%" border="0" cellspacing="0" cellpadding="2" style="border-collapse: collapse; display: block">
                  <tr>
                    <td height="20" align="center" valign="Middle" class="Header">
                      <div align="left"><b>Search Relationships</b></div>
                    </td>
                  </tr>
                  <tr>
                    <td>This is working</td>
                  </tr>
                  <tr valign="top">
                    <td>second row</td>
                    <td nowrap="nowrap" class="GrayRow" valign="top" border="1" height="40" align="center" style="border: none">
                      just above the table
                      <table border="1" cellpadding="0" cellspacing="0" align="left">
                        <tr valign="top">
                          <td>new row inside table</td>
                          <td class="GrayRow" nowrap="nowrap">
                            <b>Select:</b>
                            <select id="j_id19:browseType" name="j_id19:browseType" size="1" class="TextBlackNormal" onchange="showDynamicBox(this);">
                              <option value="NAME">User Name</option>
                              <option value="ID">User Id</option>
                              <option value="IBD/Office/IP">IBD/Office /IP</option>
                              <option value="APA#">APA#</option>
                            </select>
                          </td>
                          <td>
                            <div id="dynamicBox1" style="display: block"><input id="j_id19:j_id23" name="j_id19:j_id23" type="text" value="" size="32" class="TextBlackNormal" /></div>
                          </td>
                          <td>
                            <div id="dynamicBox2" style="display: none"><input id="j_id19:j_id25" name="j_id19:j_id25" type="text" value="" size="32" class="TextBlackNormal" /></div>
                          </td>
                          <td>
                            <div id="dynamicBox3" style="display: none">
                              IBD <input id="j_id19:ibdval1" name="j_id19:ibdval1" type="text" value="" maxlength="3" size="3" onkeyup="goToNextFocus(this);" class="TextBlackNormal" />
                              OFF <input id="j_id19:ibdval2" name="j_id19:ibdval2" type="text" value="" maxlength="3" size="3" onkeyup="goToNextFocus(this);" class="TextBlackNormal" />
                              IP <input id="j_id19:ibdval3" name="j_id19:ibdval3" type="text" value="" maxlength="3" size="3" onkeyup="goToNextFocus(this);" class="TextBlackNormal" />
                            </div>
                          </td>
                          <td>
                            <div id="dynamicBox4" style="display: none">
                              Average Price Account#
                              <input id="j_id19:apaval1" name="j_id19:apaval1" type="text" value="" maxlength="3" size="3" onkeyup="goToNextFocus(this);" class="TextBlackNormal" />
                              <input id="j_id19:apaval2" name="j_id19:apaval2" type="text" value="" maxlength="3" size="3" onkeyup="goToNextFocus(this);" class="TextBlackNormal" />
                              <input id="j_id19:apaval3" name="j_id19:apaval3" type="text" value="" maxlength="3" size="3" onkeyup="goToNextFocus(this);" class="TextBlackNormal" />
                            </div>
                          </td>
                          <td class="GrayRow" nowrap="nowrap">
                            <div id="msg_multiple_inputs" style="display:none">
                              <font color="#990000" size="1">Enter multiple separated by commas</font>
                            </div>
                          </td>
                          <td><input id="j_id19:display" name="j_id19:display" type="submit" value="Display" class="TextBlackNormal" /></td>
                        </tr>
                      </table>
                    </td>
                  </tr>
                </table>
              </div>
            </td>
          </tr>
        </table>

  • C++: Efficiently adding integers to strings

    - by Shinka
    I know how to add integers to strings, but I'm not sure I'm doing it in an efficient manner. I have a class where I often have to return a string plus an integer (a different integer each time). In Java I would do something like:

        public class MyClass {
            final static String S = "MYSTRING";
            private int id = 0;

            public String getString() {
                return S + (id++);
            }
        }

    But in C++ I have to do:

        class MyClass {
        private:
            std::string S; // For some reason I can't do const std::string S = "MYSTRING";
            int id;

        public:
            MyClass() {
                S = "MYSTRING";
                id = 0;
            }

            std::string getString() {
                std::ostringstream oss;
                oss << S << id++;
                return oss.str();
            }
        };

    An additional constraint: I don't want to (in fact, I can't) use Boost or any other libraries; I'll have to work with the standard library. So the thing is: the code works, but in C++ I have to create a bunch of ostringstream objects, so it seems inefficient. To be fair, perhaps Java does the same and I just don't notice it; I say it's inefficient mostly because I know very little about strings. Is there a more efficient way to do this?

  • [MySQL] Efficiently store last X records per item

    - by Saif Bechan
    I want to store the last X records in a MySQL database in an efficient way, so that when the 4th record is stored the 1st is deleted. The way I do this now is to first run a query to get the items, then check what I should do, and then insert/delete. There has to be a better way to do this. Any suggestions?

    Edit: I think I should add that the records stored do not have a unique number. They have a mixed pair, for example article_id and user_id. Then I want to make a table with the last X items for user_x. Just selecting the articles from the table grouped by user and sorted by time is not an option for me: the table where I would do the sort and group has millions of records and gets hit a lot for no reason. So making a table in between with the last X records is much more efficient.

    PS. I am not using this for articles and users.

  • how to synchronize database table and directory with php

    - by twmulloy
    Hello, I have a directory with files and a database table that should list the same files. I would like to be able to synchronize the database table with the directory. What would be the most efficient way to do this, or would I realistically only be able to do it in a brute-force manner? Here's my approach:

      1. Retrieve all of the files in the directory as an array.
      2. Retrieve all of the filenames in the database table as an array.
      3. Loop through the file values in the directory array and use in_array() on the database table array to verify the filename is in that array; if not, build an array of the missing filenames, then run a DB query to add each missing file row to the database table.
      4. Loop through the filenames in the database table array and use in_array() on the directory array; anything not found in the directory array will be deleted from the table.

    Is there a better way to go about this, or something better for this in PHP than in_array()?
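
    The heart of steps 1-4 is two set differences, which most languages give you directly (PHP has array_diff() for this). A sketch of the shape of it in Python, with the database read stubbed out as a plain list:

        import os

        def plan_sync(directory, filenames_in_db):
            # filenames_in_db would come from a SELECT on the table.
            on_disk = set(os.listdir(directory))
            in_db = set(filenames_in_db)
            missing_from_db = on_disk - in_db    # rows to INSERT
            missing_on_disk = in_db - on_disk    # rows to DELETE
            return missing_from_db, missing_on_disk

        to_insert, to_delete = plan_sync(".", ["a.txt", "b.txt"])
        print(to_insert, to_delete)

    Hashed set membership also avoids the repeated linear scans that calling in_array() inside a loop implies.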

  • Most efficient way for a lookup/search in a huge list (python)

    - by user229269
    Hey guys, I just parsed a big file and created a list containing 42,000 strings/words. I want to query against this list to check if a given word/string belongs to it. So my question is: what is the most efficient way to do such a lookup? A first approach is to sort the list with list.sort() and then just use

        if word in list: print 'word'

    which is really trivial, and I am sure there is a better way to do it. My goal is to apply a fast lookup that finds whether a given string is in this list or not. If you have any ideas of another data structure, they are welcome. Yet I want to avoid, for now, more sophisticated data structures like tries etc. I am interested in hearing ideas (or tricks) about fast lookups, or any other Python library methods that might do the search faster than the simple 'in'. Thanks in advance!
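
    Sorting only pays off if you follow it with a binary search (the bisect module); for plain membership tests, a set (or frozenset) is the usual answer, since hashing makes the lookup O(1) on average instead of a linear scan. A minimal sketch:

        words = ["apple", "banana", "cherry"]    # stand-in for the 42,000 parsed words
        word_set = set(words)                    # build once, reuse for every lookup

        def contains(word):
            return word in word_set              # hash lookup instead of scanning the list

        print(contains("banana"))    # True
        print(contains("durian"))    # False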

  • String Parsing in C#

    - by Betamoo
    What is the most efficient way to parse a C# string of the form

        "(params (abc 1.3)(sdc 2.0)....)"

    into a struct of the form

        struct Params { double abc, sdc, ....; }

    Thanks.

    EDIT: The structure always has the same parameters (number and names), but the order is not guaranteed.
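
    Since each inner group is just a name/number pair in parentheses, one regular expression can pull all the pairs out in a single pass. A sketch in Python for brevity (the same pattern works with .NET's Regex.Matches; the field names are simply the ones from the example):

        import re

        s = "(params (abc 1.3)(sdc 2.0))"
        # "(params" is skipped automatically because it is not followed by a number.
        pairs = dict((name, float(value))
                     for name, value in re.findall(r"\((\w+)\s+([-+]?\d*\.?\d+)\)", s))
        print(pairs)    # {'abc': 1.3, 'sdc': 2.0}

    With a fixed set of field names, the resulting dictionary can then be copied into the struct's members.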

  • Is this postgres function cost efficient or still have to clean

    - by kiranking
    There are two tables in a Postgres DB: english_all and english_glob. The first table contains words like international, confidential, booting, cooler, etc. I have written a function to get the words from english_all and then, for each word, loop over its prefixes to get the list of words that are not yet inserted in the english_glob table. The word list looks like: I, In, Int, Inte, Inter, ..., b, bo, boo, boot, ..., c, co, coo, cool, etc. For some reason a zwnj (zero-width non-joiner) is added during insertion into the english_all table, but in the function I am removing that character with regexp_replace. The Postgres function for_loop_test takes two parameters, min and max, and based on them I select words from the english_all table. The function code looks like this:

        DECLARE
            inMinLength ALIAS FOR $1;
            inMaxLength ALIAS FOR $2;
            mviews RECORD;
            outenglishListRow english_word_list;  -- custom data type (eng_id, english_text)
        BEGIN
            FOR mviews IN
                SELECT id, english_all_text
                FROM english_all
                WHERE wlength BETWEEN inMinLength AND inMaxLength
                ORDER BY english_all_text
                LIMIT 30
            LOOP
                FOR i IN 1..char_length(regexp_replace(mviews.english_all_text, '(?)$', '')) LOOP
                    FOR outenglishListRow IN
                        SELECT DISTINCT ON (regexp_replace((substring(mviews.english_all_text from 1 for i)), '(?)$', ''))
                               mviews.id,
                               regexp_replace((substring(mviews.english_all_text from 1 for i)), '(?)$', '')
                        WHERE regexp_replace((substring(mviews.english_all_text from 1 for i)), '(?)$', '')
                              NOT IN (SELECT english_glob.english_text
                                      FROM english_glob
                                      WHERE i = english_glob.wlength)
                        ORDER BY regexp_replace((substring(mviews.english_all_text from 1 for i)), '(?)$', '')
                    LOOP
                        RETURN NEXT outenglishListRow;
                    END LOOP;
                END LOOP;
            END LOOP;
        END;

    Once I get the word list I will insert it into the other table, english_glob. My question is: is there anything I can add to or remove from the function to make it more efficient?

    Edit: Let's assume the english_all table has words like footer, settle, question, overflow, database, kingdom. If inMinLength = 5 and inMaxLength = 7, then in the outer loop footer, settle, kingdom will be selected. For those 3 words the inner two loops will produce words like f, fo, foo, foot, foote, footer, s, se, set, sett, settl, ... etc. In the final step, the words that were shown in bold (the proper words) are entered into english_glob with another parameter, 1, to denote a proper word, stored in another field of the english_glob table. The remaining words are stored with the parameter 0, because on the next call words which are already saved in the database should not be fetched again.

    Edit 2: This is the complete code:

        CREATE TABLE english_all (
            id serial NOT NULL,
            english_all_text text NOT NULL,
            wlength integer NOT NULL,
            CONSTRAINT english_all PRIMARY KEY (id),
            CONSTRAINT english_all_kan_text_uq_id UNIQUE (english_all_text)
        );

        CREATE TABLE english_glob (
            id serial NOT NULL,
            english_text text NOT NULL,
            is_prop integer default 1,
            CONSTRAINT english_glob PRIMARY KEY (id),
            CONSTRAINT english_glob_kan_text_uq_id UNIQUE (english_text)
        );

        insert into english_all(english_text) values ('ant'),('forget'),('forgive');

    On a function call with parameters 3 and 6, the following rows should be fetched: a, an, ant, f, fo, for, forg, forge, forget. Next is the insert into the other table based on the rows above:

        insert into english_glob(english_text, is_prop) values
            ('a',1),('an',1),('ant',1),
            ('f',0),('fo',0),('for',1),
            ('forg',0),('forge',1),('forget',1);

    On the next function call, with parameters 3 and 7, the following rows should be fetched (because f, fo, for, forg are all already entered in the english_glob table): forgi, forgiv, forgive.

  • MySQL More Tables or More Rows

    - by Pez Cuckow
    I am working on a game that I am going to open to the public to put on their websites. The game stores lots of information (about 300 rows) per website and spends a lot of time updating values within this MySQL database. Is it better (faster/more efficient) to add a new table for every website, or to just have thousands of rows in one table and add a column "website_id" or similar?

  • Writing shorter code/algorithms, is more efficient (performance)?

    - by Carlos
    After coming across the code golf trivia around the site, it is obvious people try to find ways to write code and algorithms as short as they possibly can in terms of characters, lines and total size, even if that means writing something like:

        n=input()
        while n>1:n=(n/2,n*3+1)[n%2];print n

    So as a beginner I start to wonder whether size actually matters :D. It is obviously a very subjective question, highly dependent on the actual code being used, but what is the rule of thumb in the real world? In the case that size won't matter, how come we don't focus more on performance rather than size?

  • Random List of millions of elements in Python Efficiently

    - by eWizardII
    Hello, I have read this answer as potentially the best way to randomize a list of strings in Python. I'm just wondering then if that's the most efficient way to do it, because I have a list of about 30 million elements via the following code:

        import json
        from sets import Set
        from random import shuffle

        a = []
        for i in range(0,193):
            json_data = open("C:/Twitter/user/user_" + str(i) + ".json")
            data = json.load(json_data)
            for j in range(0,len(data)):
                a.append(data[j]['su'])

        new = list(Set(a))
        print "Cleaned length is: " + str(len(new))

        ## Take Cleaned List and Randomize it for Analysis
        shuffle(new)

    If there is a more efficient way to do it, I'd greatly appreciate any advice on how to do it. Thanks,
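
    For comparison, a trimmed sketch of the same pipeline (same hypothetical file layout as above): collecting into a set as you go removes the duplicates without building the intermediate list, and random.shuffle is already an O(n) Fisher-Yates shuffle, so there is little to gain beyond that for a full in-place randomization.

        import json
        from random import shuffle

        unique = set()
        for i in range(193):
            with open("C:/Twitter/user/user_" + str(i) + ".json") as f:
                for record in json.load(f):
                    unique.add(record['su'])

        cleaned = list(unique)
        shuffle(cleaned)                  # Fisher-Yates, O(n)
        print "Cleaned length is: " + str(len(cleaned))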

  • When to use Vanilla Javascript vs. jQuery?

    - by jondavidjohn
    I have noticed, while monitoring and attempting to answer common jQuery questions, that there are certain practices using plain JavaScript instead of jQuery that actually enable you to write less and do ... well, the same amount, and may also yield performance benefits.

    A specific example: $(this) vs. this. Inside a click event, referencing the clicked object's id:

        jQuery:      $(this).attr("id");
        JavaScript:  this.id;

    Are there any other common practices like this, where certain JavaScript operations could be accomplished more easily without bringing jQuery into the mix? Or is this a rare case (of a jQuery "shortcut" actually requiring more code)?

    EDIT: While I appreciate the answers regarding jQuery vs. plain JavaScript performance, I am actually looking for much more quantitative answers: instances where, while using jQuery, one would actually be better off (readability/compactness) using plain JavaScript instead of $(). In addition to the example I gave in my original question.

  • Most Efficient way to set Register to 1 or (-1)

    - by Bob
    I am taking an assembly course now, and the guy who checks our home assignments is a very pedantic old-school optimization freak. For example, he deducts 10% if he sees:

        mov ax, 0

    instead of:

        xor ax, ax

    even if it's only used once. I am not a complete beginner in assembly programming, but I'm not an optimization expert, so I need your help with something (might be a very stupid question but I'll ask anyway): if I need to set a register value to 1 or (-1), is it better to use:

        mov ax, 1

    or do something like:

        xor ax, ax
        inc ax

    I really need a good grade, so I'm trying to get it as optimized as possible. (I need to optimize both time and code size.)

  • Remove space in marquee in html

    - by Suman.hassan95
    I have created a marquee of images for my webpage, but how can the space between the last and the first image be removed to get a continuous effect? I am giving the code I used below:

        <marquee style="overflow:" behavior="scroll" direction="left" OnMouseOver="this.stop()" OnMouseOut="this.start()">
          <img src="images/Bluelounge.gif" width="300" height="200" alt="lon">
          <img src="images/Southleather.gif" width="300" height="200" alt="south">
          <img src="images/Dell-monitor.gif" width="300" height="200" alt="monitor">
          <img src="images/Spphire.gif" width="300" height="200" alt="card">
        </marquee>

  • Combining two-part SQL query into one query

    - by user332523
    Hello, I have a SQL query that I'm currently solving by doing two queries. I am wondering if there is a way to do it in a single query that makes it more efficient. Consider two tables, Transactions and Transaction_Entries, defined below:

        Transactions
          - id
          - reference_number (varchar)

        Transaction_Entries
          - id
          - account_id
          - transaction_id (references Transactions table)

    Notes: there are multiple transaction entries per transaction, and some transactions are related and will have the same reference_number string. To get all transaction entries for account X, I would do:

        SELECT E.*, T.reference_number
        FROM Transaction_Entries E
        JOIN Transactions T ON (E.transaction_id = T.id)
        WHERE E.account_id = X

    The next part is the hard part. I want to find all related transactions, regardless of the account id. First I make a list of all the unique reference numbers I found in the previous result set. Then for each one, I can query all the transactions that have that reference number. Assume that I hold all the rows from the previous query in PreviousResultSet:

        UniqueReferenceNumbers = GetUniqueReferenceNumbers(PreviousResultSet)   // in Java
        foreach R in UniqueReferenceNumbers                                     // in Java
            SELECT * FROM Transaction_Entries
            WHERE transaction_id IN (SELECT id FROM Transactions WHERE reference_number = R)

    Any suggestions how I can put this into a single efficient query?
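
    A sketch of one way to fold the two steps into a single query: join Transactions back onto itself through reference_number, so related transactions are picked up in the same pass. The example below uses an in-memory SQLite database purely as a stand-in for MySQL (the data is made up; use %s placeholders with a MySQL driver):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE Transactions (id INTEGER PRIMARY KEY, reference_number TEXT);
            CREATE TABLE Transaction_Entries (
                id INTEGER PRIMARY KEY, account_id INTEGER,
                transaction_id INTEGER REFERENCES Transactions(id));
            INSERT INTO Transactions VALUES (1, 'REF-A'), (2, 'REF-A'), (3, 'REF-B');
            INSERT INTO Transaction_Entries VALUES (10, 7, 1), (11, 8, 2), (12, 9, 3);
        """)

        query = """
            SELECT DISTINCT E2.*, T2.reference_number
            FROM Transaction_Entries E
            JOIN Transactions T          ON T.id = E.transaction_id
            JOIN Transactions T2         ON T2.reference_number = T.reference_number
            JOIN Transaction_Entries E2  ON E2.transaction_id = T2.id
            WHERE E.account_id = ?
        """
        for row in conn.execute(query, (7,)):
            print(row)    # entries 10 and 11 both appear, because 'REF-A' ties transactions 1 and 2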

  • Is converting this ArrayList to a Generic List efficient?

    - by Greg
    The code I'm writing receives an ArrayList from unmanaged code, and this ArrayList will always contain one or more objects of type Grid_Heading_Blk. I've considered changing this ArrayList to a generic List, but I'm unsure if the conversion operation will be so expensive as to nullify the benefits of working with the generic list. Currently, I'm just running a foreach (Grid_Heading_Blk in myArrayList) operation to work with the ArrayList contents after passing the ArrayList to the class that will use it. Should I convert the ArrayList to a generic typed list? And if so, what is the most efficient way of doing so?

  • more efficient way to pickle a string

    - by gatoatigrado
    The pickle module seems to use string escape characters when pickling; this becomes inefficient, e.g. on numpy arrays. Consider the following:

        import numpy
        import cPickle

        z = numpy.zeros(1000, numpy.uint8)
        len(z.dumps())                    # 1133 characters
        len(cPickle.dumps(z.dumps()))     # 4249 characters

    z.dumps() reveals something like "\x00\x00" (actual zeros in the string), but pickle seems to be using the string's repr() function, yielding "'\x00\x00'" (the zeros being ASCII zeros). That is, ("0" in z.dumps()) == False while ("0" in cPickle.dumps(z.dumps())) == True.
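
    A minimal sketch of the workaround (Python 2, to match the cPickle usage above): asking pickle for a binary protocol stores the raw bytes instead of an escaped text repr, so the result stays close to the 1000-byte payload, and pickling the array object directly also skips the extra dumps() layer.

        import numpy
        import cPickle

        z = numpy.zeros(1000, numpy.uint8)
        print len(cPickle.dumps(z.dumps()))       # ~4249: protocol 0 escapes the byte string
        print len(cPickle.dumps(z.dumps(), 2))    # binary framing, no escaping
        print len(cPickle.dumps(z, 2))            # also compact, no intermediate string needed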
