Search Results

Search found 19308 results on 773 pages for 'network efficiency'.

Page 89 of 773

  • How do I make the following interaction with MySQL more efficient?

    - by Travis
    I've got an array that contains combinations of unique MySQL IDs. For example:

        [ [1,10,11], [2,10], [3,10,12], [3,12,13,20], [4,12] ]

    In total there are a couple hundred different combinations of IDs. Some of these combinations are "valid" and some are not. For example, [1,10,11] may be a valid combination, whereas [3,10,12] may be invalid. Combinations are valid or invalid depending on how the data is arranged in the database. Currently I am using a SELECT statement to determine whether or not a specific combination of IDs is valid. It looks something like this:

        SELECT id1 FROM table
        WHERE id2 IN ($combination)
        GROUP BY id1
        HAVING COUNT(DISTINCT id2) = $number

    ...where $combination is one possible combination of IDs (e.g. 1,10,11) and $number is the number of IDs in that combination (in this case, 3). An invalid combination will return 0 rows; a valid combination will return 1 or more rows. However, solving the entire set of possible combinations means looping over a couple hundred SELECT statements, which I would rather not be doing. I am wondering: are there any tricks for making this more efficient? Is it possible to submit the entire dataset to MySQL in one go, and have MySQL iterate through it? Any suggestions would be much appreciated. Thanks in advance!
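
    One hedged sketch of a way to avoid the per-combination loop: load every combination into a temporary table once, then validate them all with a single grouped join. The table and column names (table, id1, id2) come from the question; the combo table layout is an assumption.

        -- one row per (combination, member id2) pair
        CREATE TEMPORARY TABLE combo (combo_id INT NOT NULL, id2 INT NOT NULL);
        INSERT INTO combo (combo_id, id2) VALUES
            (1,1),(1,10),(1,11),
            (2,2),(2,10),
            (3,3),(3,10),(3,12);   -- ...and so on, one block per combination

        -- a combination is valid if at least one id1 covers every id2 in it
        SELECT c.combo_id, t.id1
        FROM combo c
        JOIN `table` t ON t.id2 = c.id2
        GROUP BY c.combo_id, t.id1
        HAVING COUNT(DISTINCT t.id2) =
               (SELECT COUNT(*) FROM combo c2 WHERE c2.combo_id = c.combo_id);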

    Read the article

  • What container type provides better (average) performance than std::map?

    - by Truncheon
    In the following example a std::map structure is filled with 26 values from A - Z (for key) and 0 - 26 for value. The time taken (on my system) to lookup the last entry (10000000 times) is roughly 250 ms for the vector, and 125 ms for the map. (I compiled using release mode, with the O3 option turned on for g++ 4.4.) But if for some odd reason I wanted better performance than the std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance critical aspects of C++ programming.

    UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps map) to store the variable names created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names.

        #include <ctime>
        #include <map>
        #include <vector>
        #include <iostream>

        struct mystruct
        {
            char key;
            int value;
            mystruct(char k = 0, int v = 0) : key(k), value(v) { }
        };

        int find(const std::vector<mystruct>& ref, char key)
        {
            for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i)
                if (i->key == key) return i->value;
            return -1;
        }

        int main()
        {
            std::map<char, int> mymap;
            std::vector<mystruct> myvec;

            for (int i = 'a'; i < 'a' + 26; ++i)
            {
                mymap[i] = i - 'a';
                myvec.push_back(mystruct(i, i - 'a'));
            }

            int pre = clock();
            for (int i = 0; i < 10000000; ++i)
                find(myvec, 'z');
            std::cout << "linear scan: milli " << clock() - pre << "\n";

            pre = clock();
            for (int i = 0; i < 10000000; ++i)
                mymap['z'];
            std::cout << "map scan: milli " << clock() - pre << "\n";

            return 0;
        }
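
    A hedged sketch of what is usually reached for when keys are arbitrary names: a hash table keyed by the variable name, giving average O(1) lookup instead of the map's O(log n). std::unordered_map needs C++11 (older g++ ships it as std::tr1::unordered_map); the names below are purely illustrative.

        #include <string>
        #include <unordered_map>   // std::tr1::unordered_map on pre-C++11 compilers
        #include <iostream>

        int main()
        {
            std::unordered_map<std::string, int> vars;   // average O(1) lookup by name
            vars["player_health"] = 100;
            vars["score"] = 0;

            std::unordered_map<std::string, int>::const_iterator it = vars.find("score");
            if (it != vars.end())
                std::cout << it->second << "\n";
            return 0;
        }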

    Read the article

  • Database design and foreign keys: Where should they be added in related tables?

    - by Carvell Fenton
    I have a relatively simple subset of tables in my database for tracking something called sessions. These are academic sessions (think offerings of a particular program). The tables representing a session's information are:

        sessions
        session_terms
        session_subjects
        session_mark_item_info
        session_marks

    All of these tables have their own primary keys, and they form a tree: sessions have terms, terms have subjects, subjects have mark items, etc. So each one would have at least its "parent's" foreign key. My question is: design-wise, is it a good idea to include the session's primary key in the other tables as a foreign key, to easily select related session items, or is that too much redundancy? If I include the session foreign key (or all parent foreign keys from tables up the hierarchy) in all the tables, I can easily select all the marks for a session with something like:

        SELECT mark FROM session_marks WHERE sessionID=...

    If I don't, then I would have to combine selects with something like WHERE something IN (SELECT... Which approach is "more correct" or efficient? Thanks in advance!

    Read the article

  • Searching with Linq

    - by Phil
    I have a collection of objects, each with an int Frame property. Given an int, I want to find the object in the collection that has the closest Frame. Here is what I'm doing so far:

        public static void Search(int frameNumber)
        {
            var differences = (from rec in _records
                               select new { FrameDiff = Math.Abs(rec.Frame - frameNumber), Record = rec })
                              .OrderBy(x => x.FrameDiff);
            var closestRecord = differences.FirstOrDefault().Record;
            //continue work...
        }

    This is great and everything, except there are 200,000 items in my collection and I call this method very frequently. Is there a relatively easy, more efficient way to do this?
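
    A hedged sketch of one alternative, assuming the collection is non-empty and changes far less often than it is searched: sort the records by Frame once, then binary-search each lookup instead of building and ordering 200,000 differences per call. Record and _records echo the question; everything else is illustrative.

        using System.Collections.Generic;
        using System.Linq;

        public class Record { public int Frame; }   // stand-in for the real record type

        public static class FrameIndex
        {
            private static List<Record> _sorted;

            // Call once (and again whenever the collection changes).
            public static void Build(IEnumerable<Record> records)
            {
                _sorted = records.OrderBy(r => r.Frame).ToList();
            }

            // Binary search for the first Frame >= frameNumber, then compare with its predecessor.
            public static Record FindClosest(int frameNumber)
            {
                int lo = 0, hi = _sorted.Count - 1;
                while (lo < hi)
                {
                    int mid = (lo + hi) / 2;
                    if (_sorted[mid].Frame < frameNumber) lo = mid + 1; else hi = mid;
                }
                if (lo > 0 && frameNumber - _sorted[lo - 1].Frame <= _sorted[lo].Frame - frameNumber)
                    return _sorted[lo - 1];   // predecessor is at least as close
                return _sorted[lo];
            }
        }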

    Read the article

  • MySQLi Prepared Statement Fetch Row

    - by pws5068
    I'm looking for an efficient way to select a specific row with a PHP prepared statement.

        $iDB = new mysqliDB();
        $stmt = $iDB->prepare($sql);
        $stmt->bind_param('s', $searhStr);
        $stmt->execute();
        $stmt->store_result();
        $stmt->bind_result($objID, $typeID, $year, $updated);
        // Now jump to row $rowNumber
        echo("Obj ID = {$objID}");

    Any suggestions? Thanks!
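
    One possible answer, sketched under the assumption that the whole result set is buffered with store_result(): mysqli_stmt::data_seek() can jump straight to the wanted row before a single fetch(). (If only that row is ever needed, adding LIMIT $rowNumber, 1 to the SQL avoids transferring the rest at all.)

        $stmt->execute();
        $stmt->store_result();                        // buffer the full result set client-side
        $stmt->bind_result($objID, $typeID, $year, $updated);

        $stmt->data_seek($rowNumber);                 // zero-based index of the wanted row
        $stmt->fetch();                               // fills the bound variables for that row

        echo("Obj ID = {$objID}");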

    Read the article

  • PHP Serialize Function - Adding serialized data to MySQL and then fetching and displaying it

    - by Abhilash Shukla
    I want to know whether the PHP serialize function is 100% secure, and also, if we store serialized data in a database and want to do something with it after fetching it, whether that is a good approach. For example: I have a website with different user privileges, and I want to store the permission settings for a particular privilege in my database (this data I want to store through PHP's serialize function). When a user logs in, I want to fetch this data and set the privileges for the customer. I am able to do this; what I want to know is whether it is the best way, or whether something more efficient can be done. Also, I was going through the PHP manual and found this code. Can anybody explain a bit what's happening in it, especially why base64_encode is used?

        <?php
        mySerialize( $obj ) {
            return base64_encode(gzcompress(serialize($obj)));
        }

        myUnserialize( $txt ) {
            return unserialize(gzuncompress(base64_decode($txt)));
        }
        ?>

    Also, if somebody can provide their own code showing how to do this in the most efficient manner, that would help. Thanks.
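
    A hedged reading of that manual snippet (which, as pasted, is missing the function keyword; a working version is below): gzcompress() shrinks the serialized string but produces binary output, and base64_encode() turns that binary data into plain ASCII so it can be stored in an ordinary TEXT column and pass through transports that are not binary-safe.

        <?php
        // Working version of the snippet from the question.
        function mySerialize($obj) {
            return base64_encode(gzcompress(serialize($obj)));       // serialize -> compress -> ASCII-safe
        }

        function myUnserialize($txt) {
            return unserialize(gzuncompress(base64_decode($txt)));   // reverse the three steps
        }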

    Read the article

  • What is a faster way of merging the values of this Python structure into a single dictionary?

    - by jcoon
    I've refactored how the merged dictionary (all_classes) below is created, but I'm wondering if it can be more efficient. I have a dictionary of dictionaries, like this:

        groups_and_classes = {'group_1': {'class_A': [1, 2, 3],
                                          'class_B': [1, 3, 5, 7],
                                          'class_c': [1, 2],
                                          # ...many more items like this
                                         },
                              'group_2': {'class_A': [11, 12, 13],
                                          'class_C': [5, 6, 7, 8, 9]
                                         },
                              # ...and many more items like this
                             }

    A function creates a new object from groups_and_classes like this (the function to create this is called often):

        all_classes = {'class_A': [1, 2, 3, 11, 12, 13],
                       'class_B': [1, 3, 5, 7, 9],
                       'class_C': [1, 2, 5, 6, 7, 8, 9]
                      }

    Right now, there is a loop that does this:

        all_classes = {}
        for group in groups_and_classes.values():
            for c, vals in group.iteritems():
                for v in vals:
                    if all_classes.has_key(c):
                        if v not in all_classes[c]:
                            all_classes[c].append(v)
                    else:
                        all_classes[c] = [v]

    So far, I changed the code to use a set instead of a list since the order of the list doesn't matter and the values need to be unique:

        all_classes = {}
        for group in groups_and_classes.values():
            for c, vals in group.iteritems():
                try:
                    all_classes[c].update(set(vals))
                except KeyError:
                    all_classes[c] = set(vals)

    This is a little nicer, and I didn't have to convert the sets to lists because of how all_classes is used in the code. Question: Is there a more efficient way of creating all_classes (aside from building it at the same time groups_and_classes is built, and changing everywhere this function is called)?
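
    A hedged sketch of a slightly tighter version of the same merge (Python 2, matching the question): dict.setdefault removes the try/except, and set.update accepts any iterable, so the set(vals) conversion is unnecessary too.

        all_classes = {}
        for group in groups_and_classes.values():
            for c, vals in group.iteritems():
                all_classes.setdefault(c, set()).update(vals)   # update() takes the list directly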

    Read the article

  • Technique to remove common words (and their plural versions) from a string

    - by Jake M
    I am attempting to find tags (keywords) for a recipe by parsing a long string of text. The text contains the recipe ingredients, directions and a short blurb. What do you think would be the most efficient way to remove common words from the tag list? By common words, I mean words like: 'the', 'at', 'there', 'their' etc. I have 2 methodologies I can use. Which do you think is more efficient in terms of speed, and do you know of a more efficient way I could do this?

    Methodology 1:

        - Determine the number of times each word occurs (using the library Collections)
        - Have a list of common words and remove all 'Common Words' from the Collection object by attempting to delete that key from the Collection object if it exists.
        - Therefore the speed will be determined by the length of the variable delims

        from collections import Counter

        delims = ['there', 'there\'s', 'theres', 'they', 'they\'re']
        # the above will end up being a really long list!
        word_freq = Counter(recipe_str.lower().split())
        for delim in set(delims):
            del word_freq[delim]
        return word_freq.most_common()

    Methodology 2:

        - For common words that can be plural, look at each word in the recipe string and check if it partially contains the non-plural version of a common word. E.g. for the string "There's a test", check each word to see if it contains "there" and delete it if it does.

        delims = ['this', 'at', 'them']        # words that can't be plural
        partial_delims = ['there', 'they']     # words that could occur in many forms
        word_freq = Counter(recipe_str.lower().split())
        for delim in set(delims):
            del word_freq[delim]
        # really slow
        for delim in set(partial_delims):
            for word in word_freq:
                if word.find(delim) != -1:
                    del word_freq[delim]
        return word_freq.most_common()
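
    A hedged sketch of a third option, combining a stop-word set with a very crude plural-folding pass so that, say, "carrots" and "carrot" collapse onto one tag. The word list and the trailing-s rule here are illustrative only, not a proper stemmer.

        from collections import Counter

        STOP_WORDS = set(['the', 'at', 'there', "there's", 'theres', 'their',
                          'they', "they're", 'them', 'this', 'a', 'an'])

        def tag_candidates(recipe_str):
            counts = Counter()
            for word in recipe_str.lower().split():
                word = word.strip(".,;:!?()\"'")
                if word.endswith("'s"):
                    word = word[:-2]
                elif word.endswith('s') and len(word) > 3:
                    word = word[:-1]              # crude plural folding
                if word and word not in STOP_WORDS:
                    counts[word] += 1             # set membership check is O(1) per word
            return counts.most_common()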

    Read the article

  • Best way to simplify this code and make it more efficient

    - by Derek
    My question is: is there a way to make this code more efficient or write it in a simpler way? It's JavaScript, by the way.

        switch (tempvar1) {
            case 1:
                currentSlide = 'slide1';
                showaslide('ppslide1');
                break;
            case 2:
                currentSlide = 'slide2';
                showaslide('ppslide2');
                break;
            case 3:
                currentSlide = 'slide3';
                showaslide('ppslide3');
                break;
            case 4:
                currentSlide = 'slide4';
                showaslide('ppslide4');
                break;
            case 5:
                currentSlide = 'slide5';
                showaslide('ppslide5');
                break;
            case 6:
                currentSlide = 'slide6';
                showaslide('ppslide6');
                break;
            // 20 total cases
        }
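
    A sketch of the usual simplification, assuming tempvar1 is always in the 1-20 range and the names really do follow the slideN / ppslideN pattern: build the strings from the number instead of enumerating every case.

        currentSlide = 'slide' + tempvar1;
        showaslide('pp' + currentSlide);   // same as 'ppslide' + tempvar1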

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships:

        has many Contributions
        has many Payments

    In my result set, I need the following aggregate values:

        Number of unique contributors (DonorID on the Contribution table)
        Total contributed (SUM of Amount on the Contribution table)
        Total paid (SUM of PaymentAmount on the Payment table)

    Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions in the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options.

    Using subqueries:

        SELECT Project.ID AS PROJECT_ID,
               (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
               (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount,
               (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;

        CREATE TEMPORARY TABLE Project_Temp (
            project_id INT NOT NULL,
            total_payments INT,
            total_donors INT,
            total_received INT,
            PRIMARY KEY(project_id)
        ) ENGINE=MEMORY;

        INSERT INTO Project_Temp (project_id, total_payments)
        SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
        FROM `Project`
        LEFT JOIN `Payment` ON ProjectID = `Project`.ID
        GROUP BY 1;

        INSERT INTO Project_Temp (project_id, total_donors, total_received)
        SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
        FROM `Project`
        LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
        GROUP BY 1
        ON DUPLICATE KEY UPDATE total_donors = VALUES(total_donors),
                                total_received = VALUES(total_received);

        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7 - 0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
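
    A hedged sketch of a third option: aggregate each child table in its own derived table and join the one-row-per-project results back to Project. Keeping the two aggregations separate avoids the row multiplication a single three-way join with GROUP BY would cause, and the output is still one sortable, filterable result set. Table and column names are taken from the question.

        SELECT p.ID,
               COALESCE(pay.TotalPaidBack, 0)    AS TotalPaidBack,
               COALESCE(con.ContributorCount, 0) AS ContributorCount,
               COALESCE(con.TotalReceived, 0)    AS TotalReceived
        FROM Project p
        LEFT JOIN (SELECT ProjectID, SUM(PaymentAmount) AS TotalPaidBack
                   FROM Payment GROUP BY ProjectID) pay ON pay.ProjectID = p.ID
        LEFT JOIN (SELECT RecipientID,
                          COUNT(DISTINCT DonorID) AS ContributorCount,
                          SUM(Amount) AS TotalReceived
                   FROM Contribution GROUP BY RecipientID) con ON con.RecipientID = p.ID;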

    Read the article

  • Minify an Entire Directory While Keeping Element/Style/Script Relationships?

    - by Jonathan Sampson
    Do any solutions currently exist that can minify an entire project directory? More importantly, do any solutions exist that can shorten classnames and ids, and keep them consistent throughout all documents? Something that can turn this:

        Index.html
        ---
        <div class="fooBar">
            <!-- Code -->
        </div>

        Styles.css
        ---
        .fooBar {
            // Comments and Messages
            background-color:#000000;
        }

        Index.js
        ---
        $(".fooBar").click(function(){
            /* More Comments */
            alert("fooBar");
        });

    Into this:

        Index.html
        ---
        <div class="a"></div>

        Styles.css
        ---
        .a{background-color:#000;}

        Index.js
        ---
        $(".a").click(function(){alert("fooBar");});

    Read the article

  • How Can I Improve / Speed Up This Frequent Function in C?

    - by Peter Lee
    Hi folks, How can I improve / speed up this frequent function?

        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        // Assumptions: x, y, z, a, b and c are all array of 10.
        // Requirement: return the value of ret;
        // get all elements of array c
        float fnFrequentFunction(const float* x, const float* y, const float* z,
                                 const float *a, const float *b, float *c, int M)
        {
            register float tmp;
            register float sum;
            register float ret = 0;
            int i;

            for (i = 0; i < M; i++) // M == 1, 2, 4, or 8
            {
                sum = 0;
                tmp = x[0] - y[0]; sum += tmp * tmp * z[0];
                tmp = x[1] - y[1]; sum += tmp * tmp * z[1];
                tmp = x[2] - y[2]; sum += tmp * tmp * z[2];
                tmp = x[3] - y[3]; sum += tmp * tmp * z[3];
                tmp = x[4] - y[4]; sum += tmp * tmp * z[4];
                tmp = x[5] - y[5]; sum += tmp * tmp * z[5];
                tmp = x[6] - y[6]; sum += tmp * tmp * z[6];
                tmp = x[7] - y[7]; sum += tmp * tmp * z[7];
                tmp = x[8] - y[8]; sum += tmp * tmp * z[8];
                tmp = x[9] - y[9]; sum += tmp * tmp * z[9];
                ret += (c[i] = log(a[i] * b[i]) + sum);
            }
            return ret;
        }

        int main()
        {
            float x[10] = {0.001251f, 0.563585f, 0.193304f, 0.808741f, 0.585009f,
                           0.479873f, 0.350291f, 0.895962f, 0.622840f, 0.746605f};
            float y[10] = {0.864406f, 0.709006f, 0.091433f, 0.995727f, 0.227180f,
                           0.902585f, 0.659047f, 0.865627f, 0.846767f, 0.514359f};
            float z[10] = {0.866817f, 0.581347f, 0.175542f, 0.620197f, 0.781823f,
                           0.778588f, 0.938688f, 0.721610f, 0.940214f, 0.811353f};
            float a[10] = {0.870205f, 0.733879f, 0.711386f, 0.588244f, 0.484176f,
                           0.852962f, 0.168126f, 0.684286f, 0.072573f, 0.632160f};
            float b[10] = {0.871487f, 0.998108f, 0.798608f, 0.134831f, 0.576281f,
                           0.410779f, 0.402936f, 0.522935f, 0.623218f, 0.193030f};
            float c[8];
            int i;
            int n = 10000000;
            long start;

            // Speed test here:
            start = clock();
            while (--n)
                fnFrequentFunction(x, y, z, a, b, c, 8);
            printf("Time used: %ld\n", clock() - start);

            printf("fnFrequentFunction == %f\n", fnFrequentFunction(x, y, z, a, b, c, 8));
            for (i = 0; i < 8; ++i)
                printf(" c[%d] == %f\n", i, c[i]);
            printf("\n");

            return 0;
        }

    Any suggestions are welcome :-)
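
    One hedged observation, sketched below: sum does not depend on the loop variable i, so the ten difference terms can be computed once before the loop, leaving the logarithm as the only per-iteration work (logf() keeps the arithmetic in single precision). Same interface and array-size assumptions as the question; not benchmarked here.

        /* Sketch only: hoist the loop-invariant distance term out of the M-loop. */
        float fnFrequentFunction2(const float *x, const float *y, const float *z,
                                  const float *a, const float *b, float *c, int M)
        {
            float sum = 0.0f, ret = 0.0f;
            int i;

            for (i = 0; i < 10; i++) {          /* distance term is the same for every i below */
                float tmp = x[i] - y[i];
                sum += tmp * tmp * z[i];
            }
            for (i = 0; i < M; i++) {
                c[i] = logf(a[i] * b[i]) + sum; /* only the log depends on i */
                ret += c[i];
            }
            return ret;
        }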

    Read the article

  • MySQL - One large query vs Ajax individual queries

    - by Mark
    Hi guys, I guess no one will have a definitive answer to this, but considered predictions would be appreciated. I am in the process of developing a MySQL database for a web application, and my question is: is it more efficient to make a single query that returns a single row using AJAX, or to request 100 - 700 rows when the user will likely only ever use the results of two or three? Really, I am asking what is heavier for the server: 2-3 requests with one result each, or 1 request with 100 - 700 results? Thanks, Mark

    Read the article

  • Reading large excel file with PHP

    - by Itamar Bar-Lev
    I'm trying to read a 17MB Excel file (2003) with PHPExcel 1.7.3c, but it crashes while loading the file, after exceeding the 120-second limit I have. Is there another library that can do it more efficiently? I have no need for styling; I only need it to support UTF-8. Thanks for your help
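
    Before switching libraries, one commonly suggested tweak is sketched below: PHPExcel's readers can be told to ignore styling entirely, which cuts both memory use and load time for data-only imports. Method names are as found in PHPExcel 1.7.x and should be checked against the installed version; the file name is illustrative.

        $reader = PHPExcel_IOFactory::createReader('Excel5');   // 'Excel5' is the 2003 .xls format
        $reader->setReadDataOnly(true);                          // skip styles, keep cell values only
        $workbook = $reader->load('large-file.xls');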

    Read the article

  • Why is Perl commonly used for CGI scripts?

    - by Michael Vasquez
    I plan to add a better search feature to my site, so I thought that I would write it in C and use CGI as a means to access it. But it seems that Perl is the most popular language when it comes to CGI-based stuff. Why is that? Wouldn't it be faster programmed in C or machine code? What advantages, if any, are there to writing it in a scripting language? Thanks.

    Read the article

  • Perl - CodeGolf - Nested loops & SQL inserts

    - by CheeseConQueso
    I had to make a really small and simple script that would fill a table with string values according to these criteria:

        2 characters long
        1st character is always numeric (0-9)
        2nd character is (0-9) but also includes "X"
        Values need to be inserted into a table on a database

    The program would execute:

        insert into table (code) values ('01');
        insert into table (code) values ('02');
        insert into table (code) values ('03');
        insert into table (code) values ('04');
        insert into table (code) values ('05');
        insert into table (code) values ('06');
        insert into table (code) values ('07');
        insert into table (code) values ('08');
        insert into table (code) values ('09');
        insert into table (code) values ('0X');

    And so on, until the total 110 values were inserted. My code (just to accomplish it, not to minimize and make efficient) was:

        use strict;
        use DBI;

        my ($db1, $sql, $sth, %dbattr);
        %dbattr = (ChopBlanks => 1, RaiseError => 0);
        $db1 = DBI->connect('DBI:mysql:', '', '', \%dbattr);

        my @code;
        for (0..9)
        {
            $code[0] = $_;
            for (0..9)
            {
                $code[1] = $_;
                insert(@code);
            }
            insert($code[0], "X");
        }

        sub insert
        {
            my $skip = 0;
            foreach (@_)
            {
                if ($skip == 0)
                {
                    $sql = "insert into table (code) values ('".$_[0].$_[1]."');";
                    $sth = $db1->prepare($sql);
                    $sth->execute();
                    $skip++;
                }
                else
                {
                    $skip--;
                }
            }
        }

        exit;

    I'm just interested to see a really succinct & precise version of this logic.
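
    One compact rewrite, sketched against the same DBI handle and the literal table name used in the question: two nested maps generate all 110 codes, and a single prepared statement with a placeholder performs the inserts.

        my $sth = $db1->prepare('insert into table (code) values (?)');
        $sth->execute($_) for map { my $d = $_; map { "$d$_" } 0 .. 9, 'X' } 0 .. 9;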

    Read the article

  • Simultaneous connections with PHP and SOAP?

    - by Dov
    I'm new to using SOAP and understand only the utmost basics of it. I create a client resource/connection, I then run some queries in a loop, and I'm done. The issue I am having is that when I increase the iterations of the loop, i.e. from 100 to 1000, it seems to run out of memory and throws an internal server error. How could I run either a) multiple simultaneous connections, or b) create a connection, run 100 iterations, close the connection, create a connection, etc.? Option "a)" looks to be the better one, but I have no clue how to get it up and running whilst keeping memory (I assume opening and closing connections) at a minimum. Thanks in advance!

    index.php

        <?php
        // set loops to 0
        $loops = 0;

        // connection credentials and settings
        $location = 'https://theconsole.com/';
        $wsdl = $location.'?wsdl';
        $username = 'user';
        $password = 'pass';

        // include the console and client classes
        include "class_console.php";
        include "class_client.php";

        // create a client resource / connection
        $client = new Client($location, $wsdl, $username, $password);

        while ($loops <= 100) {
            $dostuff;
        }
        ?>

    class_console.php

        <?php
        class Console
        {
            // the connection resource
            private $connection = NULL;

            /**
             * When this object is instantiated a connection will be made to the console
             */
            public function __construct($location, $wsdl, $username, $password, $proxyHost = NULL, $proxyPort = NULL)
            {
                if (is_null($proxyHost) || is_null($proxyPort))
                    $connection = new SoapClient($wsdl, array('login' => $username, 'password' => $password));
                else
                    $connection = new SoapClient($wsdl, array('login' => $username, 'password' => $password, 'proxy_host' => $proxyHost, 'proxy_port' => $proxyPort));

                $connection->__setLocation($location);
                $this->connection = $connection;
                return $this->connection;
            }

            /**
             * Will print any type of data to screen, where supported by print_r
             *
             * @param $var - The data to print to screen
             * @return $this->connection - The connection resource
             **/
            public function screen($var)
            {
                print '<pre>';
                print_r($var);
                print '</pre>';
                return $this->connection;
            }

            /**
             * Returns a server / connection resource
             *
             * @return $this->connection - The connection resource
             */
            public function srv()
            {
                return $this->connection;
            }
        }
        ?>

    Read the article

  • Efficient list compacting

    - by Patrik
    Suppose you have a list of unsigned ints. Suppose some elements are equal to 0 and you want to push them to the back. Currently I use this code (list is a pointer to a list of unsigned ints of size n):

        for (i = 0; i < n; ++i)
        {
            if (list[i]) continue;
            int j;
            for (j = i + 1; j < n && !list[j]; ++j);
            int z;
            for (z = j + 1; z < n && list[z]; ++z);
            if (j == n) break;
            memmove(&(list[i]), &(list[j]), sizeof(unsigned int) * (z - j));
            int s = z - j + i;
            for (j = s; j < z; ++j) list[j] = 0;
            i = s - 1;
        }

    Can you think of a more efficient way to perform this task? The snippet is purely theoretical; in the production code, each element of list is a 64-byte struct.

    EDIT: I'll post my solution. Many thanks to Jonathan Leffler.

        void RemoveDeadParticles(int * list, int * n)
        {
            int i, j = *n - 1;
            for (; j >= 0 && list[j] == 0; --j);
            for (i = 0; i < j; ++i)
            {
                if (list[i]) continue;
                memcpy(&(list[i]), &(list[j]), sizeof(int));
                list[j] = 0;
                for (; j >= 0 && list[j] == 0; --j);
                if (i == j) break;
            }
            *n = i + 1;
        }

    Read the article

  • Ruby: Why is Array.sort slow for large objects?

    - by David Waller
    A colleague needed to sort an array of ActiveRecord objects in a Rails app. He tried the obvious Array.sort! but it seemed surprisingly slow, taking 32s for an array of 3700 objects. So just in case it was these big fat objects slowing things down, he reimplemented the sort by sorting an array of small objects, then reordering the original array of ActiveRecord objects to match - as shown in the code below. Tada! The sort now takes 700ms. That really surprised me. Does Ruby's sort method end up copying objects about the place rather than just references? He's using Ruby 1.8.6/7. def self.sort_events(events) event_sorters = Array.new(events.length) {|i| EventSorter.new(i, events[i])} event_sorters.sort! event_sorters.collect {|es| events[es.index]} end private # Class used by sort_events class EventSorter attr_reader :sqn attr_reader :time attr_reader :index def initialize(index, event) @index = index @sqn = event.sqn @time = event.time end def <=>(b) @time != b.time ? @time <=> b.time : @sqn <=> b.sqn end end

    Read the article

  • Most efficient approach for multilingual PHP website

    - by alexteg
    I am working on a large multilingual website and I am considering different approaches for making it multilingual. The possible alternatives I can think of are:

        The Gettext functions with generation of .po files
        One MySQL table with the translations and a unique string ID for each text
        PHP files with arrays containing the different translations with unique string IDs

    As far as I have understood, the Gettext functions should be most efficient, but my requirement is that it should be possible to change a text string in the original reference language (English) without the other translations of that string automatically reverting back to English just because a couple of words changed. Is this possible with Gettext? What is the least resource demanding solution? Is using the Gettext functions or PHP files with arrays more or less equally resource demanding? Any other suggestions for more efficient solutions?

    Read the article

  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on google app engine returns content items (just text) and comments on them. It works like this (pseudo-ish code): query: get keys of latest content #query to datastore for each item in content if item_dict in memcache: use item_dict else: build_item_dict(item) #by fetching from datastore store item_dict in memcache send all item_dicts to template Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the webpage. My problem is that if the memcache has expired, for each item I want to display, I have to (1) lookup item in memcache, (2) since no memcache exists I must fetch item from the datastore, and (3) store the item in memcache. These calls build up quickly. I don't set an expire time for the entries to the memcache, so this really only happens once in the morning, but the webpage takes long enough to load (~1 sec) that the browser reports it as not existing. Regularly, my webpages take about 50ms to load. This approach works decently for frequent visits, but it has its flaws as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance

    Read the article

  • The best way to do :not in jQuery?

    - by Smickie
    Hi, I have a menu in jQuery: when you click on a link it opens up, but I want it so that when you click somewhere else (anywhere that is not the menu) it becomes hidden. At the moment I'm binding a click event to $(':not(#the_menu)'), but this seems like I'm binding a click event to the entire document minus the menu. Is there a more efficient way of doing something like this?
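
    A common alternative, sketched here: bind a single click handler to the document and close the menu unless the click landed inside it (the #the_menu selector is carried over from the question).

        $(document).click(function (e) {
            if ($(e.target).closest('#the_menu').length === 0) {
                $('#the_menu').hide();   // click was outside the menu
            }
        });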

    Read the article

  • Efficient algorithm for Next button on a MySQL result set

    - by David Grayson
    I have a website that lets people view rows in a table (each row is a picture). There are more than 100,000 rows. You can view different subsets of the rows, and you can view them with different sort orders. While you are viewing one of the rows, you can click the "Next" or "Previous" buttons to go to the next/previous row in the list. How would you implement the "Next" and "Previous" features of the website? More specifically, if you have an arbitrary query that returns a list of up to 100,000+ rows, and you know some information about the current row someone is viewing, how do you determine the NEXT row efficiently? Here is the pseudo-code of the solution I came up with when the website was young. It worked well when there were only 1000 rows, but now that there are 100,000 rows I think it is eating up too much memory.

        int nextRowId(string query, int currentRowId)
        {
            array allRowIds = mysql_query(query);                     // Takes up a lot of memory!
            int currentIndex = (index of currentRowId in allRowIds);  // Takes time!
            return allRowIds[currentIndex + 1];
        }

    While you are thinking about this problem, remember that the website can store more information about the current row than just its ID (for example, the position of the current row in the result set), and this information can be used as a hint to help determine the ID of the next row.

    Edit: Sorry for not mentioning this earlier, but this isn't just a static website: rows can often be added to the list, and rows can be re-ordered in the list. (Much more rarely, rows can be removed from the list.) I think that I should worry about that kind of thing, but maybe you can convince me otherwise.
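
    A hedged sketch of the keyset ("seek") approach often used here: instead of materialising the whole ordered ID list, ask the database for the single row that sorts immediately after the current one under the same filter and sort order. The table, columns and :placeholders below are invented for illustration; "Previous" is the same query with the comparisons and ordering reversed.

        SELECT id
        FROM pictures
        WHERE category = :category                      -- stands in for whatever filter built the list
          AND (taken_at > :current_taken_at             -- current row's value in the sort column
               OR (taken_at = :current_taken_at AND id > :current_id))   -- tie-breaker
        ORDER BY taken_at, id
        LIMIT 1;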

    Read the article

  • C++: Efficiently adding integers to strings

    - by Shinka
    I know how to add integers to strings, but I'm not sure I'm doing it in an efficient matters. I have a class where I often have to return a string plus an integer (a different integer each time), in Java I would do something like public class MyClass { final static String S = "MYSTRING"; private int id = 0; public String getString() { return S + (id++); } } But in C++ I have to do; class MyClass { private: std::string S; // For some reason I can't do const std::string S = "MYSTRING"; int id; public: MyClass() { S = "MYSTRING"; id = 0; } std::string getString() { std::ostringstream oss; oss << S << id++; return oss.str(); } } An additional constraint: I don't want (in fact, in can't) use Boost or any other librairies, I'll have to work with the standard library. So the thing is; the code works, but in C++ I have to create a bunch of ostringstream objects, so it seems inefficient. To be fair, perhaps Java do the same and I just don't notice it, I say it's inefficient mostly because I know very little about strings. Is there a more efficient way to do this ?

    Read the article
