Search Results

Search found 88709 results on 3549 pages for 'code efficiency'.


  • computing hash values, integral types versus struct/class

    - by aaa
    Hello, I would like to know if there is a difference in speed between computing the hash value (for example, for a std::map key) of a primitive integral type, such as int64_t, and of a POD type, for example struct { int16_t v[4]; }. I know this is going to be implementation-specific, so my question ultimately pertains to the GNU standard library. Thanks

    Read the article

  • How do I get code coverage of Perl cgi script when executed by Selenium?

    - by Kurt W. Leucht
    I'm using the Eclipse EPIC IDE to write some Perl CGI scripts which call some Perl modules that I have also written. The EPIC IDE lets me configure a Perl CGI "run configuration" which runs my CGI script. I've also got Selenium set up, and one of my unit test files runs some Selenium commands to put my CGI script through its paces. But the coverage report from Module::Build's dispatch('testcover') doesn't show that any of my module code has been executed. It has been executed by my CGI script, but I guess the CGI script was run externally and not executed directly by my unit test file, so maybe that's why the coverage isn't being recognized. Is there a way to do this right, so that I can integrate Selenium, unit test files, and code coverage all together?

    Read the article

  • Looking for an efficient query

    - by user278618
    I want an efficient query that gets certain rows from my table. Here is, I think, the best presentation of my table:

      - somedate is never duplicated; it is the modifiedon date of the row
      - a, b, c are parent ids, say countrycode
      - 1, 2, 3, 4 ... are sub-parent ids, say citycode
      - the GUIDs are row ids
      - true/false are the row values; call this column freshair

      a  1  GUID  somedate  true
      a  1  GUID  somedate  true
      a  2  GUID  somedate  false
      a  2  GUID  somedate  false
      b  3  GUID  somedate  false
      b  3  GUID  somedate  false
      b  3  GUID  somedate  false
      b  4  GUID  somedate  false
      c  5  GUID  somedate  true
      c  6  GUID  somedate  true
      c  6  GUID  somedate  false
      c  6  GUID  somedate  false
      c  7  GUID  somedate  false

    I want the most recent row (MAX(modifiedon)) per countrycode and citycode group, and among those I only want the groups whose rows carry differing values (true and false). So the result should be:

      a  1  GUID  somedate  true
      a  2  GUID  somedate  false
      c  5  GUID  somedate  true
      c  6  GUID  somedate  false
      c  7  GUID  somedate  false

    Note that the result contains no "b" records, because all of b's rows have the same value (false). Best regards
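
    For illustration, a minimal Python sketch of the selection logic being asked for, assuming the rows have already been fetched as (country, city, guid, modified_on, value) tuples sorted by country and city (all names here are hypothetical):

      from itertools import groupby
      from operator import itemgetter

      def pick(rows):
          # keep only the most recent row of every (country, city) group
          latest = [max(grp, key=itemgetter(3))
                    for _, grp in groupby(rows, key=itemgetter(0, 1))]
          # collect the distinct values each country still carries
          values = {}
          for row in latest:
              values.setdefault(row[0], set()).add(row[4])
          # drop countries (like "b") whose rows all share one value
          return [row for row in latest if len(values[row[0]]) > 1]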

    Read the article

  • How can I efficiently group a large list of URLs by their host name in Perl?

    - by jesper
    I have a text file that contains over one million URLs. I have to process this file in order to assign the URLs to groups, based on host address:

      {
        'http://www.ex1.com' => ['http://www.ex1.com/...', 'http://www.ex1.com/...', ...],
        'http://www.ex2.com' => ['http://www.ex2.com/...', 'http://www.ex2.com/...', ...]
      }

    My current basic solution takes about 600 MB of RAM to do this (the file itself is about 300 MB). Could you suggest some more efficient approaches? My current solution simply reads the file line by line, extracts the host address with a regex, and pushes the URL into a hash. EDIT: Here is my implementation (I've cut out the irrelevant parts):

      while ($line = <STDIN>) {
          chomp($line);
          $line =~ /(http:\/\/.+?)(\/|$)/i;   # capture the scheme and host
          $host = $1;
          push @{$urls{$host}}, $line;        # group full URLs under their host
      }
      store \%urls, 'out.hash';               # Storable::store
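
    For illustration, a minimal sketch of the same grouping in Python with one memory-saving idea: since the host is already the hash key, store only the part of each URL after the host instead of the full line (reading from stdin as above; a sketch of the idea, not a drop-in replacement):

      import re
      import sys
      from collections import defaultdict

      host_re = re.compile(r'(http://.+?)(/|$)', re.I)

      groups = defaultdict(list)
      for line in sys.stdin:
          line = line.rstrip('\n')
          m = host_re.search(line)
          if not m:
              continue
          host = m.group(1)
          # keep only the path suffix; the full URL is host + suffix
          groups[host].append(line[len(host):])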

    Read the article

  • ASP.NET NamingContainer naming convention

    - by EOLeary
    The Background: Hello! I'm working on a project in which the client has required a lot of things to happen on a single page, and this has resulted in a rather large blob of HTML being rendered to the client browser. The main issue is with input tags (where the runat="server" attribute is set); these tend to cause a drastic increase in markup size due to validation, UpdatePanel triggers, viewstate, and the control markup itself. I've done what I can to reduce the number of triggers I'm using, I'm compressing the viewstate (to something like 8% of its original size), I've gotten rid of a lot of ASP.NET validators and rolled my own, and I've been using ClientIDMode to reduce the length of the ID attributes of many ASP.NET elements. All of these combined significantly reduce the amount of HTML being sent to the client (for example, going from 2 megabytes for a request down to 500-600 KB; these are HUGE pages, mind you). The Issue: One area where I've had trouble making reductions is the auto-generated 'name' attribute of input elements:

      <input name="ctl00$ctl00$ctl00$_main$_main$_bodyMatterPhase$_phaseTree$ctl00$_taskTree$ctl00$_taskDetails$_detailList$ctrl0$_row$_descriptionText" type="text" value="Investigation Week 1" maxlength="100" id="_taskTree_0__taskDetails_0__detailList_0__row_0__descriptionText_0" style="width:170px;">

    As you can see above, the name attribute is 139 of the tag's 297 characters; that's almost 50% of the tag markup taken up by that HUGE name. Does anyone have any ideas on how to hook into ASP.NET somewhere so I can translate these names or generate them differently? Say, instead of ctl00$ctl00$ctl00$_main$_main$_bodyMatterPhase$_phaseTree$ctl00$_taskTree$ctl00$_taskDetails$_detailList$ctrl0$_row$_descriptionText, it could be a GUID like 0x0AEED4B6445A11E08F873606E0D72085, which is 105 characters shorter. Any help would be greatly appreciated!

    Read the article

  • MySQL slow query

    - by andrhamm
      SELECT items.item_id, items.category_id, items.title, items.description,
             items.quality, items.type, items.status, items.price, items.posted,
             items.modified, zip_code.state_prefix, zip_code.city,
             books.isbn13, books.isbn10, books.authors, books.publisher
      FROM ( ( items
      LEFT JOIN bookitems ON items.item_id = bookitems.item_id )
      LEFT JOIN books ON books.isbn13 = bookitems.isbn13 )
      LEFT JOIN zip_code ON zip_code.zip_code = items.item_zip
      WHERE items.rid = $rid

    I am running this query to get the list of a user's items and their location. The zip_code table has over 40k records, and this might be the issue. It currently takes up to 15 seconds to return a list of about 20 items! What can I do to make this query more efficient?

    Read the article

  • Objective C -- property lists or text files?

    - by William Jockusch
    I need to import a list of about 40,000 words into my iPhone app. The list will be the same every time the app starts. It seems that property lists and text files are both reasonable options; is there any reason to prefer one over the other? For reasons I don't understand, Finder says the property list on my Mac is 1 MB, while the text file is only 328 KB. The property list is an NSMutableArray of NSMutableArrays of NSStrings. The text file is a plain txt file. The amount of time the app takes to start up is also important: if I read in a text file, my app would have to do some simple processing on it each time it starts. Thanks.

    Read the article

  • How can I graph the Lines of Code history for git repo?

    - by dbr
    Basically I want to get the number of lines of code in the repository after each commit. The only (really crappy) ways I have found are to use git filter-branch to run "wc -l *", or a script that runs git reset --hard on each commit and then runs wc -l. To make it a bit clearer: when the tool is run, it would output the lines of code of the very first commit, then the second, and so on. This is what I want the tool to output (as an example):

      me@something:~/$ gitsloc --branch master
      10
      48
      153
      450
      1734
      1542

    I've played around with the Ruby 'git' library, but the closest thing I found was using the .lines() method on a diff, which seems like it should give the number of added lines (but does not: it returns 0 when you delete lines, for example).

      require 'rubygems'
      require 'git'

      total = 0
      g = Git.open(working_dir = '/Users/dbr/Desktop/code_projects/tvdb_api')

      last = nil
      g.log.each do |cur|
        diff = g.diff(last, cur)
        total = total + diff.lines
        puts total
        last = cur
      end
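
    For illustration, a rough Python sketch of one way to get per-commit totals with plain git plumbing commands, assuming a master branch and mostly text files (binary files would need filtering, and this is slow on large histories):

      import subprocess

      def run(*args):
          return subprocess.check_output(args, text=True, errors='replace')

      def lines_at(commit):
          # list every file in the commit's tree, then count its lines
          files = run('git', 'ls-tree', '-r', '--name-only', commit).splitlines()
          return sum(run('git', 'show', f'{commit}:{path}').count('\n')
                     for path in files)

      # oldest commit first, like the example output above
      for commit in run('git', 'rev-list', '--reverse', 'master').split():
          print(lines_at(commit))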

    Read the article

  • How do I effectively copy an array in Java?

    - by Tony
    In ArrayList's toArray method, Bloch uses both System.arraycopy and Arrays.copyOf to copy an array:

      public <T> T[] toArray(T[] a) {
          if (a.length < size)
              // Make a new array of a's runtime type, but my contents:
              return (T[]) Arrays.copyOf(elementData, size, a.getClass());
          System.arraycopy(elementData, 0, a, 0, size);
          if (a.length > size)
              a[size] = null;
          return a;
      }

    How do these two copy methods compare, and when should I use which? Thanks.

    Read the article

  • ASP.NET MVC reminds me of old Classic ASP spaghetti code...

    - by EdenMachine
    I just went through some MVC tutorials after checking this site out for a while. Is it just me, or do MVC View pages bring back HORRIBLE flashbacks of Classic ASP spaghetti code, with all the jumping in and out of HTML and ASP.NET with yellow delimiters everywhere making it impossible to read? Whatever happened to the importance of code/design separation? I was really sold on the new technology until the tutorials hit the View page development section. Or am I missing something? (And don't say you can use the template to help, because that just moves the spaghetti to another location - sweeps it under the rug - it doesn't fix the problem.)

    Read the article

  • How do you energize yourself when working alone on a project?

    - by Stephane
    I am working in an environment with a very small team (only 3 developers) and each of us has been assigned a different project, not counting support tasks. I know this is a bad business practice and that we should all work on a single project at a time before moving on to the next one (I've already explained to management how much this sucks). So please don't answer that we should all work on one project at a time. Energizing the work when in a team mostly meant pair programming; we did that when fewer projects were thrown at us, and it was great. What I would like to know is how you energize your work when working alone on a project. Do you follow any particular practice?

    Read the article

  • C# String.Replace with a start index (added my (slow) implementation)

    - by Chris T
    I'd like an efficient method that would work something like this. EDIT: Sorry, I didn't originally post what I'd tried; I've updated the example now.

      // Method signature; only replaces the first instance, or as many as specified in max
      public int MyReplace(ref string source, string org, string replace, int start, int max)
      {
          int ret = 0;
          int len = replace.Length;
          int olen = org.Length;
          int from = start;                    // honor the caller-supplied start index
          for (int i = 0; i < max; i++)
          {
              // Find the next instance of the search string
              int x = source.IndexOf(org, from);
              if (x < 0) break;
              ret = x;
              // Insert the replacement...
              source = source.Insert(x, replace);
              // ...and remove the original string
              source = source.Remove(x + len, olen);
              from = x + len;                  // resume searching after the replacement
          }
          return ret;
      }

      string source = "The cat can fly but only if he is the cat in the hat";
      int i = MyReplace(ref source, "cat", "giraffe", 8, 1);
      // Results in the string "The cat can fly but only if he is the giraffe in the hat"
      // i contains the index of the first letter of "giraffe" in the new string

    The only reason I'm asking is because I'd imagine my implementation getting slow with 1,000s of replaces.
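
    For comparison, a minimal single-pass sketch of the same operation in Python: instead of repeatedly inserting into and removing from the string (each of which copies the whole string), it collects segments and joins them once at the end (the function and its names are illustrative):

      def my_replace(source, org, repl, start, maxn):
          out, i, first_hit = [source[:start]], start, -1
          for _ in range(maxn):
              x = source.find(org, i)
              if x < 0:
                  break
              if first_hit < 0:
                  first_hit = x              # index of the first replacement
              out.append(source[i:x])
              out.append(repl)
              i = x + len(org)
          out.append(source[i:])
          return ''.join(out), first_hit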

    Read the article

  • How do I efficiently write a "toggle database value" function in AJAX?

    - by AmbroseChapel
    Say I have a website which shows the user ten images and asks them to categorise each image by clicking on buttons: a button for "funny", a button for "scary", a button for "pretty" and so on. These buttons aren't exclusive; a picture can be both funny and scary. The user clicks the "funny" button. An AJAX request is sent off to the database to mark that image as funny, and the "funny" button lights up, by assigning a class in the DOM to mark it as "on". But the user made a mistake; they meant to hit the next button over. They should click "funny" again to turn it off, right? At this point I'm not sure what's the most efficient way to proceed. The database knows that the "funny" flag is set, but it seems inefficient to query the database every time a button is clicked to ask whether the flag is set, and then make a second database call to toggle it. Should I infer the state of the database flag from the DOM, i.e. if the button has the class "on" then the flag must be set, and branch at that point? Or would it be better to have a data structure in JavaScript on the page which duplicates the state of each image in the database, so that every time I set the database flag to true I also set the value in the JavaScript data to true, and so on?
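
    One way to sidestep the read-then-write round trip entirely is to let the database flip the flag atomically and have the server respond with the new state for the DOM to mirror. A minimal sketch in Python with SQLite (the images table and its 0/1 funny column are hypothetical):

      import sqlite3

      def toggle_funny(conn, image_id):
          # flip the flag in one statement; no prior SELECT needed
          conn.execute("UPDATE images SET funny = 1 - funny WHERE id = ?",
                       (image_id,))
          conn.commit()
          # return the new state so the client can set or clear the "on" class
          (state,) = conn.execute("SELECT funny FROM images WHERE id = ?",
                                  (image_id,)).fetchone()
          return bool(state)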

    Read the article

  • How do I make the following interaction with MySQL more efficient?

    - by Travis
    I've got an array that contains combinations of unique MySQL IDs. For example:

      [ [1,10,11], [2,10], [3,10,12], [3,12,13,20], [4,12] ]

    In total there are a couple hundred different combinations of IDs. Some of these combinations are "valid" and some are not. For example, [1,10,11] may be a valid combination, whereas [3,10,12] may be invalid. Combinations are valid or invalid depending on how the data is arranged in the database. Currently I am using a SELECT statement to determine whether or not a specific combination of IDs is valid. It looks something like this:

      SELECT id1
      FROM table
      WHERE id2 IN ($combination)
      GROUP BY id1
      HAVING COUNT(DISTINCT id2) = $number

    ...where $combination is one possible combination of IDs (e.g. 1,10,11) and $number is the number of IDs in that combination (in this case, 3). An invalid combination will return 0 rows; a valid combination will return 1 or more rows. However, checking the entire set of possible combinations means looping over a couple hundred SELECT statements, which I would rather not do. I am wondering: are there any tricks for making this more efficient? Is it possible to submit the entire dataset to MySQL in one go and have MySQL iterate through it? Any suggestions would be much appreciated. Thanks in advance!
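
    For illustration, a rough sketch of one way to collapse the loop into a single round trip: tag each combination's check with its index and UNION the checks together. Built in Python here; the table and column names follow the question, everything else is hypothetical (and the inlining is only safe because the IDs are integers):

      combos = [[1, 10, 11], [2, 10], [3, 10, 12], [3, 12, 13, 20], [4, 12]]

      parts = []
      for n, ids in enumerate(combos):
          id_list = ','.join(str(i) for i in ids)
          parts.append(
              f"SELECT {n} AS combo_index, COUNT(*) AS matches FROM ("
              f" SELECT id1 FROM `table` WHERE id2 IN ({id_list})"
              f" GROUP BY id1 HAVING COUNT(DISTINCT id2) = {len(ids)}"
              f") AS t{n}")

      sql = '\nUNION ALL\n'.join(parts)
      # one result row per combination: matches = 0 means invalid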

    Read the article

  • What container type provides better (average) performance than std::map?

    - by Truncheon
    In the following example a std::map structure is filled with 26 entries, keyed a - z with values 0 - 25. The time taken (on my system) to look up the last entry 10,000,000 times is roughly 250 ms for the vector and 125 ms for the map. (I compiled in release mode, with the -O3 option turned on, for g++ 4.4.) But if for some odd reason I wanted better performance than std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance-critical aspects of C++ programming. UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real-world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps a map) to store the names of variables created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names.

      #include <ctime>
      #include <map>
      #include <vector>
      #include <iostream>

      struct mystruct
      {
          char key;
          int value;
          mystruct(char k = 0, int v = 0) : key(k), value(v) { }
      };

      int find(const std::vector<mystruct>& ref, char key)
      {
          for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i)
              if (i->key == key) return i->value;
          return -1;
      }

      int main()
      {
          std::map<char, int> mymap;
          std::vector<mystruct> myvec;

          for (int i = 'a'; i < 'a' + 26; ++i)
          {
              mymap[i] = i - 'a';
              myvec.push_back(mystruct(i, i - 'a'));
          }

          int pre = clock();
          for (int i = 0; i < 10000000; ++i)
              find(myvec, 'z');
          std::cout << "linear scan: milli " << clock() - pre << "\n";

          pre = clock();
          for (int i = 0; i < 10000000; ++i)
              mymap['z'];
          std::cout << "map scan: milli " << clock() - pre << "\n";

          return 0;
      }

    Read the article

  • MySQLi Prepared Statement Fetch Row

    - by pws5068
    I'm looking for an efficient way to select a specific row with a PHP prepared statement.

      $iDB = new mysqliDB();
      $stmt = $iDB->prepare($sql);
      $stmt->bind_param('s', $searchStr);
      $stmt->execute();
      $stmt->store_result();
      $stmt->bind_result($objID, $typeID, $year, $updated);
      // Now jump to row $rowNumber
      echo("Obj ID = {$objID}");

    Any suggestions? Thanks!
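
    For illustration, one common way to let the database do the skipping instead of fetching every row into the client: ask for exactly the row you want with LIMIT/OFFSET. A minimal sketch in Python with SQLite (the table and column names are illustrative):

      import sqlite3

      def fetch_row(conn, search_str, row_number):
          # LIMIT 1 OFFSET n returns just the (n+1)-th matching row
          return conn.execute(
              "SELECT obj_id, type_id, year, updated"
              " FROM objects WHERE name = ?"
              " LIMIT 1 OFFSET ?",
              (search_str, row_number)).fetchone()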

    Read the article

  • Database design and foreign keys: Where should they be added in related tables?

    - by Carvell Fenton
    I have a relatively simple subset of tables in my database for tracking something called sessions. These are academic sessions (think offerings of a particular program). The tables that represent a session's information are:

      sessions
      session_terms
      session_subjects
      session_mark_item_info
      session_marks

    All of these tables have their own primary keys, and they form a tree: sessions have terms, terms have subjects, subjects have mark items, and so on. So each one would have at least its "parent's" foreign key. My question is: design-wise, is it a good idea to also include the session's primary key in the other tables as a foreign key, to easily select related session items, or is that too much redundancy? If I include the session foreign key (or all parent foreign keys from tables further up the hierarchy) in all the tables, I can easily select all the marks for a session with something like:

      SELECT mark FROM session_marks WHERE sessionID = ...

    If I don't, then I would have to combine selects with something like WHERE something IN (SELECT ... Which approach is "more correct" or efficient? Thanks in advance!

    Read the article

  • Searching with LINQ

    - by Phil
    I have a collection of objects, each with an int Frame property. Given an int, I want to find the object in the collection that has the closest Frame. Here is what I'm doing so far:

      public static void Search(int frameNumber)
      {
          var differences = (from rec in _records
                             select new
                             {
                                 FrameDiff = Math.Abs(rec.Frame - frameNumber),
                                 Record = rec
                             }).OrderBy(x => x.FrameDiff);

          var closestRecord = differences.FirstOrDefault().Record;

          //continue work...
      }

    This is great and everything, except there are 200,000 items in my collection and I call this method very frequently. Is there a relatively easy, more efficient way to do this?
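
    For illustration, the usual way to avoid ordering 200,000 differences on every call: keep the frame numbers sorted once, then binary-search per lookup. A minimal sketch in Python using bisect (names are illustrative):

      import bisect

      def closest_frame(sorted_frames, target):
          # O(log n) per call once the list is sorted
          i = bisect.bisect_left(sorted_frames, target)
          neighbours = sorted_frames[max(i - 1, 0):i + 1]
          return min(neighbours, key=lambda f: abs(f - target))

      frames = sorted([3, 90, 17, 42])
      print(closest_frame(frames, 40))   # -> 42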

    Read the article

  • PHP News Feed Database & Design

    - by pws5068
    I'm designing a news feed system using PHP/MySQL, similar to Facebook's. I have asked a similar question before, but now I've changed the design and I'm looking for feedback. Example news items:

      User_A commented on User_B's new album: "Hey man nice pictures!"
      User_B added a new Photo to [his/her] profile. [show photo thumbnail]

    Initially, I implemented this using excessive columns: Obj1:Type1 | Obj2:Type2 | etc. Now the design is set up using a couple of special keywords and actor/receiver relationships. My database uses a table of messages joined on a table containing userid, actionid, receiverid, receiverObjectTypeID. Here's a condensed version of what it will look like once joined:

      News_ID | User_ID | Message                                 | Timestamp
      2643    | A       | %a commented on %o's new %r.            | SomeTimestamp
      2644    | B       | %a added a new %r to [his/her] profile. | SomeTimestamp

      %a = the User_ID of the person doing the action
      %r = the receiving object
      %o = the owner of the receiving object (for example the owner of the album; NULL if %r is a user)

    Questions: Is this a smart (efficient/scalable) way to move forward? How can I show messages like "User_B added 4 new photos to his profile."?
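
    For illustration, a minimal sketch of how the placeholder expansion described above could work, in Python (resolving each ID to a display name is assumed to happen elsewhere):

      def render(message, actor, receiver, owner=''):
          # substitute the %a / %r / %o tokens from the joined row
          return (message.replace('%a', actor)
                         .replace('%r', receiver)
                         .replace('%o', owner))

      print(render("%a commented on %o's new %r.",
                   actor='User_A', receiver='album', owner='User_B'))
      # -> User_A commented on User_B's new album.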

    Read the article

  • Technique to remove common words (and their plural versions) from a string

    - by Jake M
    I am attempting to find tags (keywords) for a recipe by parsing a long string of text. The text contains the recipe ingredients, directions and a short blurb. What do you think would be the most efficient way to remove common words from the tag list? By common words, I mean words like 'the', 'at', 'there', 'their' etc. I have 2 methodologies I can use. Which do you think is more efficient in terms of speed, and do you know of a more efficient way I could do this?

    Methodology 1: Determine the number of times each word occurs (using Counter from the collections library), keep a list of common words, and remove every common word from the Counter object by deleting that key if it exists. The speed will therefore be determined by the length of the list delims:

      from collections import Counter

      delims = ['there', "there's", 'theres', 'they', "they're"]
      # the above will end up being a really long list!

      def tags_v1(recipe_str):
          word_freq = Counter(recipe_str.lower().split())
          for delim in set(delims):
              del word_freq[delim]   # Counter ignores deletion of a missing key
          return word_freq.most_common()

    Methodology 2: For common words that can be plural, look at each word in the recipe string and check whether it partially contains the non-plural version of a common word. E.g. for the string "There's a test", check each word to see if it contains "there" and delete it if it does:

      delims = ['this', 'at', 'them']       # words that can't be plural
      partial_delims = ['there', 'they']    # words that could occur in many forms

      def tags_v2(recipe_str):
          word_freq = Counter(recipe_str.lower().split())
          for delim in set(delims):
              del word_freq[delim]
          # really slow
          for delim in set(partial_delims):
              for word in list(word_freq):  # iterate over a copy while deleting
                  if word.find(delim) != -1:
                      del word_freq[word]
          return word_freq.most_common()
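
    For comparison, a sketch of a third option that stays single-pass: normalise each word before the stop-word test, so plural and possessive forms are caught without enumerating every variant (a crude heuristic, not a real stemmer):

      from collections import Counter

      STOP_WORDS = {'the', 'at', 'this', 'them', 'there', 'they'}

      def candidate_forms(word):
          # "cats" -> "cat", "there's" -> "there"; O(1) work per word
          forms = {word}
          if word.endswith("'s"):
              forms.add(word[:-2])
          if word.endswith('s'):
              forms.add(word[:-1])
          return forms

      def tags_v3(recipe_str):
          words = recipe_str.lower().split()
          kept = (w for w in words if not (candidate_forms(w) & STOP_WORDS))
          return Counter(kept).most_common()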

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: it has many Contributions, and it has many Payments. In my result set, I need the following aggregate values: the number of unique contributors (DonorID on the Contribution table), the total contributed (SUM of Amount on the Contribution table), and the total paid (SUM of PaymentAmount on the Payment table). Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions in the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options.

    Using subqueries:

      SELECT Project.ID AS PROJECT_ID,
             (SELECT SUM(PaymentAmount) FROM Payment
              WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
             (SELECT COUNT(DISTINCT DonorID) FROM Contribution
              WHERE RecipientID = PROJECT_ID) AS ContributorCount,
             (SELECT SUM(Amount) FROM Contribution
              WHERE RecipientID = PROJECT_ID) AS TotalReceived
      FROM Project;

    Using a temporary table:

      DROP TABLE IF EXISTS Project_Temp;
      CREATE TEMPORARY TABLE Project_Temp (
          project_id INT NOT NULL,
          total_payments INT,
          total_donors INT,
          total_received INT,
          PRIMARY KEY (project_id)
      ) ENGINE=MEMORY;

      INSERT INTO Project_Temp (project_id, total_payments)
      SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
      FROM `Project`
      LEFT JOIN `Payment` ON ProjectID = `Project`.ID
      GROUP BY 1;

      INSERT INTO Project_Temp (project_id, total_donors, total_received)
      SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
      FROM `Project`
      LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
      GROUP BY 1
      ON DUPLICATE KEY UPDATE
          total_donors = VALUES(total_donors),
          total_received = VALUES(total_received);

      SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7 - 0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?

    Read the article

  • How Can I Improve/SpeedUp This FrequentFunction in C?

    - by Peter Lee
    Hi folks, how can I improve / speed up this frequently-called function?

      #include <math.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      // Assumptions: x, y, z, a, b and c are all arrays of 10.
      // Requirement: return the value of ret;
      //              get all elements of array c
      float fnFrequentFunction(const float* x, const float* y, const float* z,
                               const float* a, const float* b, float* c, int M)
      {
          register float tmp;
          register float sum;
          register float ret = 0;
          int i;
          for (i = 0; i < M; i++) // M == 1, 2, 4, or 8
          {
              sum = 0;
              tmp = x[0] - y[0]; sum += tmp * tmp * z[0];
              tmp = x[1] - y[1]; sum += tmp * tmp * z[1];
              tmp = x[2] - y[2]; sum += tmp * tmp * z[2];
              tmp = x[3] - y[3]; sum += tmp * tmp * z[3];
              tmp = x[4] - y[4]; sum += tmp * tmp * z[4];
              tmp = x[5] - y[5]; sum += tmp * tmp * z[5];
              tmp = x[6] - y[6]; sum += tmp * tmp * z[6];
              tmp = x[7] - y[7]; sum += tmp * tmp * z[7];
              tmp = x[8] - y[8]; sum += tmp * tmp * z[8];
              tmp = x[9] - y[9]; sum += tmp * tmp * z[9];
              ret += (c[i] = log(a[i] * b[i]) + sum);
          }
          return ret;
      }

      int main()
      {
          float x[10] = {0.001251f, 0.563585f, 0.193304f, 0.808741f, 0.585009f,
                         0.479873f, 0.350291f, 0.895962f, 0.622840f, 0.746605f};
          float y[10] = {0.864406f, 0.709006f, 0.091433f, 0.995727f, 0.227180f,
                         0.902585f, 0.659047f, 0.865627f, 0.846767f, 0.514359f};
          float z[10] = {0.866817f, 0.581347f, 0.175542f, 0.620197f, 0.781823f,
                         0.778588f, 0.938688f, 0.721610f, 0.940214f, 0.811353f};
          float a[10] = {0.870205f, 0.733879f, 0.711386f, 0.588244f, 0.484176f,
                         0.852962f, 0.168126f, 0.684286f, 0.072573f, 0.632160f};
          float b[10] = {0.871487f, 0.998108f, 0.798608f, 0.134831f, 0.576281f,
                         0.410779f, 0.402936f, 0.522935f, 0.623218f, 0.193030f};
          float c[8];
          int i;
          int n = 10000000;
          long start;

          // Speed test here:
          start = clock();
          while (--n)
              fnFrequentFunction(x, y, z, a, b, c, 8);
          printf("Time used: %ld\n", clock() - start);

          printf("fnFrequentFunction == %f\n", fnFrequentFunction(x, y, z, a, b, c, 8));
          for (i = 0; i < 8; ++i)
              printf("  c[%d] == %f\n", i, c[i]);
          printf("\n");
          return 0;
      }

    Any suggestions are welcome :-)

    Read the article

  • MySQL - One large query vs. individual AJAX queries

    - by Mark
    Hi guys, I guess no one will have a definitive answer to this, but considered predictions would be appreciated. I am in the process of developing a MySQL database for a web application, and my question is: is it more efficient to make a single AJAX query that returns a single row, or to request 100 - 700 rows when the user will likely only ever use the results of two or three? Really I am asking which is heavier for the server: 2-3 requests with one result each, or 1 request with 100 - 700 results? Thanks, Mark

    Read the article
