Search Results

Search found 3512 results on 141 pages for 'premature optimization'.

Page 6/141 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • web page db query optimisation

    - by morpheous
    I am putting together a web page which is quite 'expensive' in terms of DB hits. I don't want to start optimizing at this stage - though, with me trying to hit a deadline, I may end up not optimizing at all. Currently the page requires 18 (that's right, eighteen) hits to the DB. I am already using joins, and some of the queries are UNIONed to minimize the trips to the DB. My local dev machine can handle this (the page is not slow); however, I feel that if I release this into the wild, the number of queries will quickly overwhelm my database (MySQL). I could always use memcache or something similar, but I would much rather continue with my other dev work that needs to be completed before the deadline - at least retrieving the page works, it's simply a matter of optimization. My question therefore is: is 18 DB queries for a single page retrieval completely outrageous (i.e. should I put everything on hold and optimize the hell out of the retrieval logic), or shall I continue as normal, meet the deadline, release on schedule and see what happens?

    Read the article

  • Best way to handle MySQL date for performance with thousands of users

    - by bitLost
    I am currently part of a team designing a site that will potentially have thousands of users who will be doing a number of date-related searches. During the design phase we have been trying to determine which makes more sense for performance optimization. Should we store the datetime field as a MySQL DATETIME, or should we break it up into a number of fields (year, month, day, hour, minute, ...)? The question is: with a large data set and a potentially large set of users, would we gain, performance-wise, by breaking the datetime into multiple fields and avoiding reliance on MySQL date functions? Or is MySQL already optimized for this?
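
    A hedged sketch of the single-column approach for comparison; the table and column names below (events, happened) are hypothetical, not from the question. The key point is that an indexed DATETIME supports range searches directly, as long as the column is not wrapped in a function in the WHERE clause:

        -- Hypothetical table: one indexed DATETIME column instead of split fields.
        CREATE TABLE events (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
            user_id  INT UNSIGNED NOT NULL,
            happened DATETIME NOT NULL,
            PRIMARY KEY (id),
            KEY idx_happened (happened)
        );

        -- Range search that can use idx_happened:
        SELECT id FROM events
        WHERE happened >= '2010-01-01 00:00:00'
          AND happened <  '2010-02-01 00:00:00';

        -- By contrast, wrapping the column in a function defeats the index:
        -- WHERE YEAR(happened) = 2010 AND MONTH(happened) = 1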

    Read the article

  • Shadow volume shader optimization (GLSL)

    - by Soubok
    I am wondering if there is a way to optimize this vertex shader. This vertex shader projects (in the light direction) a vertex to the far plane if it is in the shadow. void main(void) { vec3 lightDir = (gl_ModelViewMatrix * gl_Vertex - gl_LightSource[0].position).xyz; // if the vertex is lit if ( dot(lightDir, gl_NormalMatrix * gl_Normal) < 0.01 ) { // don't move it gl_Position = ftransform(); } else { // move it far, in the light direction vec4 fin = gl_ProjectionMatrix * ( gl_ModelViewMatrix * gl_Vertex + vec4(normalize(lightDir) * 100000.0, 0.0) ); if ( fin.z > fin.w ) // if fin is behind the far plane fin.z = fin.w; // move to the far plane (needed for z-fail algo.) gl_Position = fin; } }

    Read the article

  • MySql query optimization help

    - by rohitgu
    I have few queries and am not able to figure out how to optimize them, QUERY 1 select * from t_twitter_tracking where classified is null and tweetType='ENGLISH' order by id limit 500; QUERY 2 Select count(*) as cnt, DATE_FORMAT(CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30'),'%Y-%m-%d') as dat from t_twitter_tracking wrdTrk where wrdTrk.word like ('dell') and CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30') between '2010-12-12 00:00:00' and '2010-12-26 00:00:00' group by dat; Both these queries run on the same table, CREATE TABLE `t_twitter_tracking` ( `id` BIGINT(20) NOT NULL AUTO_INCREMENT, `word` VARCHAR(200) NOT NULL, `tweetId` BIGINT(100) NOT NULL, `twtText` VARCHAR(800) NULL DEFAULT NULL, `language` TEXT NULL, `links` TEXT NULL, `tweetType` VARCHAR(20) NULL DEFAULT NULL, `source` TEXT NULL, `sourceStripped` TEXT NULL, `isTruncated` VARCHAR(40) NULL DEFAULT NULL, `inReplyToStatusId` BIGINT(30) NULL DEFAULT NULL, `inReplyToUserId` INT(11) NULL DEFAULT NULL, `rtUsrProfilePicUrl` TEXT NULL, `isFavorited` VARCHAR(40) NULL DEFAULT NULL, `inReplyToScreenName` VARCHAR(40) NULL DEFAULT NULL, `latitude` BIGINT(100) NOT NULL, `longitude` BIGINT(100) NOT NULL, `retweetedStatus` VARCHAR(40) NULL DEFAULT NULL, `statusInReplyToStatusId` BIGINT(100) NOT NULL, `statusInReplyToUserId` BIGINT(100) NOT NULL, `statusFavorited` VARCHAR(40) NULL DEFAULT NULL, `statusInReplyToScreenName` TEXT NULL, `screenName` TEXT NULL, `profilePicUrl` TEXT NULL, `twitterId` BIGINT(100) NOT NULL, `name` TEXT NULL, `location` VARCHAR(100) NULL DEFAULT NULL, `bio` TEXT NULL, `url` TEXT NULL COLLATE 'latin1_swedish_ci', `utcOffset` INT(11) NULL DEFAULT NULL, `timeZone` VARCHAR(100) NULL DEFAULT NULL, `frenCnt` BIGINT(20) NULL DEFAULT '0', `createdAt` DATETIME NULL DEFAULT NULL, `createdOnGMT` VARCHAR(40) NULL DEFAULT NULL, `createdOnServerTime` DATETIME NULL DEFAULT NULL, `follCnt` BIGINT(20) NULL DEFAULT '0', `favCnt` BIGINT(20) NULL DEFAULT '0', `totStatusCnt` BIGINT(20) NULL DEFAULT NULL, `usrCrtDate` VARCHAR(200) NULL DEFAULT NULL, `humanSentiment` VARCHAR(30) NULL DEFAULT NULL, `replied` BIT(1) NULL DEFAULT NULL, `replyMsg` TEXT NULL, `classified` INT(32) NULL DEFAULT NULL, `createdOnGMTDate` DATETIME NULL DEFAULT NULL, `locationDetail` TEXT NULL, `geonameid` INT(11) NULL DEFAULT NULL, `country` VARCHAR(255) NULL DEFAULT NULL, `continent` CHAR(2) NULL DEFAULT NULL, `placeLongitude` FLOAT NULL DEFAULT NULL, `placeLatitude` FLOAT NULL DEFAULT NULL, PRIMARY KEY (`id`), INDEX `id` (`id`, `word`), INDEX `createdOnGMT_index` (`createdOnGMT`) USING BTREE, INDEX `word_index` (`word`) USING BTREE, INDEX `location_index` (`location`) USING BTREE, INDEX `classified_index` (`classified`) USING BTREE, INDEX `tweetType_index` (`tweetType`) USING BTREE, INDEX `getunclassified_index` (`classified`, `tweetType`) USING BTREE, INDEX `timeline_index` (`word`, `createdOnGMTDate`, `classified`) USING BTREE, INDEX `createdOnGMTDate_index` (`createdOnGMTDate`) USING BTREE, INDEX `locdetail_index` (`country`, `id`) USING BTREE, FULLTEXT INDEX `twtText_index` (`twtText`) ) COLLATE='utf8_general_ci' ENGINE=MyISAM ROW_FORMAT=DEFAULT AUTO_INCREMENT=12608048; The table has more than 10 million records. How can I optimize it?
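
    Two hedged suggestions based on the schema and queries above (index name is made up; verify with EXPLAIN on your data):

        -- Query 1: a composite index whose leading columns match the equality /
        -- IS NULL filters and whose last column matches ORDER BY id can avoid a
        -- filesort over the whole table.
        ALTER TABLE t_twitter_tracking
            ADD INDEX idx_type_classified_id (tweetType, classified, id);

        -- Query 2: CONVERT_TZ() wrapped around createdOnGMTDate prevents the
        -- existing timeline_index (word, createdOnGMTDate, ...) from being used.
        -- Converting the constants instead compares the column directly; LIKE
        -- without wildcards behaves like equality, so = is used here.
        SELECT COUNT(*) AS cnt,
               DATE_FORMAT(CONVERT_TZ(createdOnGMTDate, '+00:00', '+05:30'), '%Y-%m-%d') AS dat
        FROM t_twitter_tracking
        WHERE word = 'dell'
          AND createdOnGMTDate BETWEEN CONVERT_TZ('2010-12-12 00:00:00', '+05:30', '+00:00')
                                   AND CONVERT_TZ('2010-12-26 00:00:00', '+05:30', '+00:00')
        GROUP BY dat;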

    Read the article

  • Mysql Sub Select Query Optimization

    - by Matt
    I'm running a query daily to compile stats, but it seems really inefficient. This is the query: SELECT a.id, tstamp, label_id, (SELECT author_id FROM b WHERE b.tid = a.id ORDER BY b.tstamp DESC LIMIT 1) AS author_id FROM a, b WHERE (status = '2' OR status = '3') AND category != 6 AND a.id = b.tid AND (b.type = 'C' OR b.type = 'R') AND a.tstamp1 BETWEEN {$timestamp_start} AND {$timestamp_end} ORDER BY b.tstamp DESC LIMIT 500 This query seems to run really slowly. Apologies for the crap naming - I've been asked not to reveal the actual table names. The reason there is a subselect is that the outer select gets one row from table a and one row from table b, but I also need to know the latest author_id from table b as well, so I run a subselect to return that one. I don't want to run another select inside a PHP loop, as that is also inefficient. It works correctly - I just need to find a much faster way of getting this data set.
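
    One hedged rewrite, using the placeholder table names from the question: compute the latest author per thread once in a derived table and join it, instead of running a correlated subselect for every returned row. The question leaves tstamp, label_id, status and category unqualified, so the a. prefixes below are an assumption and may need adjusting:

        SELECT a.id, a.tstamp, a.label_id, latest.author_id
        FROM a
        JOIN b ON a.id = b.tid
        JOIN (
            -- newest b.tstamp per thread, computed once for all threads
            SELECT tid, MAX(tstamp) AS max_tstamp
            FROM b
            GROUP BY tid
        ) AS lastb ON lastb.tid = a.id
        JOIN b AS latest
            ON latest.tid = lastb.tid AND latest.tstamp = lastb.max_tstamp
        WHERE (a.status = '2' OR a.status = '3')
          AND a.category != 6
          AND (b.type = 'C' OR b.type = 'R')
          AND a.tstamp1 BETWEEN {$timestamp_start} AND {$timestamp_end}
        ORDER BY b.tstamp DESC
        LIMIT 500;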

    Read the article

  • Java code optimization on matrix windowing computes in more time

    - by rano
    I have a matrix which represents an image and I need to cycle over each pixel and for each one of those I have to compute the sum of all its neighbors, ie the pixels that belong to a window of radius rad centered on the pixel. I came up with three alternatives: The simplest way, the one that recomputes the window for each pixel The more optimized way that uses a queue to store the sums of the window columns and cycling through the columns of the matrix updates this queue by adding a new element and removing the oldes The even more optimized way that does not need to recompute the queue for each row but incrementally adjusts a previously saved one I implemented them in c++ using a queue for the second method and a combination of deques for the third (I need to iterate through their elements without destructing them) and scored their times to see if there was an actual improvement. it appears that the third method is indeed faster. Then I tried to port the code to Java (and I must admit that I'm not very comfortable with it). I used ArrayDeque for the second method and LinkedLists for the third resulting in the third being inefficient in time. Here is the simplest method in C++ (I'm not posting the java version since it is almost identical): void normalWindowing(int mat[][MAX], int cols, int rows, int rad){ int i, j; int h = 0; for (i = 0; i < rows; ++i) { for (j = 0; j < cols; j++) { h = 0; for (int ry =- rad; ry <= rad; ry++) { int y = i + ry; if (y >= 0 && y < rows) { for (int rx =- rad; rx <= rad; rx++) { int x = j + rx; if (x >= 0 && x < cols) { h += mat[y][x]; } } } } } } } Here is the second method (the one optimized through columns) in C++: void opt1Windowing(int mat[][MAX], int cols, int rows, int rad){ int i, j, h, y, col; queue<int>* q = NULL; for (i = 0; i < rows; ++i) { if (q != NULL) delete(q); q = new queue<int>(); h = 0; for (int rx = 0; rx <= rad; rx++) { if (rx < cols) { int mem = 0; for (int ry =- rad; ry <= rad; ry++) { y = i + ry; if (y >= 0 && y < rows) { mem += mat[y][rx]; } } q->push(mem); h += mem; } } for (j = 1; j < cols; j++) { col = j + rad; if (j - rad > 0) { h -= q->front(); q->pop(); } if (j + rad < cols) { int mem = 0; for (int ry =- rad; ry <= rad; ry++) { y = i + ry; if (y >= 0 && y < rows) { mem += mat[y][col]; } } q->push(mem); h += mem; } } } } And here is the Java version: public static void opt1Windowing(int [][] mat, int rad){ int i, j = 0, h, y, col; int cols = mat[0].length; int rows = mat.length; ArrayDeque<Integer> q = null; for (i = 0; i < rows; ++i) { q = new ArrayDeque<Integer>(); h = 0; for (int rx = 0; rx <= rad; rx++) { if (rx < cols) { int mem = 0; for (int ry =- rad; ry <= rad; ry++) { y = i + ry; if (y >= 0 && y < rows) { mem += mat[y][rx]; } } q.addLast(mem); h += mem; } } j = 0; for (j = 1; j < cols; j++) { col = j + rad; if (j - rad > 0) { h -= q.peekFirst(); q.pop(); } if (j + rad < cols) { int mem = 0; for (int ry =- rad; ry <= rad; ry++) { y = i + ry; if (y >= 0 && y < rows) { mem += mat[y][col]; } } q.addLast(mem); h += mem; } } } } I recognize this post will be a wall of text. 
Here is the third method in C++: void opt2Windowing(int mat[][MAX], int cols, int rows, int rad){ int i = 0; int j = 0; int h = 0; int hh = 0; deque< deque<int> *> * M = new deque< deque<int> *>(); for (int ry = 0; ry <= rad; ry++) { if (ry < rows) { deque<int> * q = new deque<int>(); M->push_back(q); for (int rx = 0; rx <= rad; rx++) { if (rx < cols) { int val = mat[ry][rx]; q->push_back(val); h += val; } } } } deque<int> * C = new deque<int>(M->front()->size()); deque<int> * Q = new deque<int>(M->front()->size()); deque<int> * R = new deque<int>(M->size()); deque< deque<int> *>::iterator mit; deque< deque<int> *>::iterator mstart = M->begin(); deque< deque<int> *>::iterator mend = M->end(); deque<int>::iterator rit; deque<int>::iterator rstart = R->begin(); deque<int>::iterator rend = R->end(); deque<int>::iterator cit; deque<int>::iterator cstart = C->begin(); deque<int>::iterator cend = C->end(); for (mit = mstart, rit = rstart; mit != mend, rit != rend; ++mit, ++rit) { deque<int>::iterator pit; deque<int>::iterator pstart = (* mit)->begin(); deque<int>::iterator pend = (* mit)->end(); for(cit = cstart, pit = pstart; cit != cend && pit != pend; ++cit, ++pit) { (* cit) += (* pit); (* rit) += (* pit); } } for (i = 0; i < rows; ++i) { j = 0; if (i - rad > 0) { deque<int>::iterator cit; deque<int>::iterator cstart = C->begin(); deque<int>::iterator cend = C->end(); deque<int>::iterator pit; deque<int>::iterator pstart = (M->front())->begin(); deque<int>::iterator pend = (M->front())->end(); for(cit = cstart, pit = pstart; cit != cend; ++cit, ++pit) { (* cit) -= (* pit); } deque<int> * k = M->front(); M->pop_front(); delete k; h -= R->front(); R->pop_front(); } int row = i + rad; if (row < rows && i > 0) { deque<int> * newQ = new deque<int>(); M->push_back(newQ); deque<int>::iterator cit; deque<int>::iterator cstart = C->begin(); deque<int>::iterator cend = C->end(); int rx; int tot = 0; for (rx = 0, cit = cstart; rx <= rad; rx++, ++cit) { if (rx < cols) { int val = mat[row][rx]; newQ->push_back(val); (* cit) += val; tot += val; } } R->push_back(tot); h += tot; } hh = h; copy(C->begin(), C->end(), Q->begin()); for (j = 1; j < cols; j++) { int col = j + rad; if (j - rad > 0) { hh -= Q->front(); Q->pop_front(); } if (j + rad < cols) { int val = 0; for (int ry =- rad; ry <= rad; ry++) { int y = i + ry; if (y >= 0 && y < rows) { val += mat[y][col]; } } hh += val; Q->push_back(val); } } } } And finally its Java version: public static void opt2Windowing(int [][] mat, int rad){ int cols = mat[0].length; int rows = mat.length; int i = 0; int j = 0; int h = 0; int hh = 0; LinkedList<LinkedList<Integer>> M = new LinkedList<LinkedList<Integer>>(); for (int ry = 0; ry <= rad; ry++) { if (ry < rows) { LinkedList<Integer> q = new LinkedList<Integer>(); M.addLast(q); for (int rx = 0; rx <= rad; rx++) { if (rx < cols) { int val = mat[ry][rx]; q.addLast(val); h += val; } } } } int firstSize = M.getFirst().size(); int mSize = M.size(); LinkedList<Integer> C = new LinkedList<Integer>(); LinkedList<Integer> Q = null; LinkedList<Integer> R = new LinkedList<Integer>(); for (int k = 0; k < firstSize; k++) { C.add(0); } for (int k = 0; k < mSize; k++) { R.add(0); } ListIterator<LinkedList<Integer>> mit; ListIterator<Integer> rit; ListIterator<Integer> cit; ListIterator<Integer> pit; for (mit = M.listIterator(), rit = R.listIterator(); mit.hasNext();) { Integer r = rit.next(); int rsum = 0; for (cit = C.listIterator(), pit = (mit.next()).listIterator(); cit.hasNext();) { Integer c = cit.next(); Integer p = 
pit.next(); rsum += p; cit.set(c + p); } rit.set(r + rsum); } for (i = 0; i < rows; ++i) { j = 0; if (i - rad > 0) { for(cit = C.listIterator(), pit = M.getFirst().listIterator(); cit.hasNext();) { Integer c = cit.next(); Integer p = pit.next(); cit.set(c - p); } M.removeFirst(); h -= R.getFirst(); R.removeFirst(); } int row = i + rad; if (row < rows && i > 0) { LinkedList<Integer> newQ = new LinkedList<Integer>(); M.addLast(newQ); int rx; int tot = 0; for (rx = 0, cit = C.listIterator(); rx <= rad; rx++) { if (rx < cols) { Integer c = cit.next(); int val = mat[row][rx]; newQ.addLast(val); cit.set(c + val); tot += val; } } R.addLast(tot); h += tot; } hh = h; Q = new LinkedList<Integer>(); Q.addAll(C); for (j = 1; j < cols; j++) { int col = j + rad; if (j - rad > 0) { hh -= Q.getFirst(); Q.pop(); } if (j + rad < cols) { int val = 0; for (int ry =- rad; ry <= rad; ry++) { int y = i + ry; if (y >= 0 && y < rows) { val += mat[y][col]; } } hh += val; Q.addLast(val); } } } } I guess that most is due to the poor choice of the LinkedList in Java and to the lack of an efficient (not shallow) copy method between two LinkedList. How can I improve the third Java method? Am I doing some conceptual error? As always, any criticisms is welcome. UPDATE Even if it does not solve the issue, using ArrayLists, as being suggested, instead of LinkedList improves the third method. The second one performs still better (but when the number of rows and columns of the matrix is lower than 300 and the window radius is small the first unoptimized method is the fastest in Java)

    Read the article

  • Matlab: Optimization by perturbing variable

    - by S_H
    My main script contains following code: %# Grid and model parameters nModel=50; nModel_want=1; nI_grid1=5; Nth=1; nRow.Scale1=5; nCol.Scale1=5; nRow.Scale2=5^2; nCol.Scale2=5^2; theta = 90; % degrees a_minor = 2; % range along minor direction a_major = 5; % range along major direction sill = var(reshape(Deff_matrix_NthModel,nCell.Scale1,1)); % variance of the coarse data matrix of size nRow.Scale1 X nCol.Scale1 %# Covariance computation % Scale 1 for ihRow = 1:nRow.Scale1 for ihCol = 1:nCol.Scale1 [cov.Scale1(ihRow,ihCol),heff.Scale1(ihRow,ihCol)] = general_CovModel(theta, ihCol, ihRow, a_minor, a_major, sill, 'Exp'); end end % Scale 2 for ihRow = 1:nRow.Scale2 for ihCol = 1:nCol.Scale2 [cov.Scale2(ihRow,ihCol),heff.Scale2(ihRow,ihCol)] = general_CovModel(theta, ihCol/(nCol.Scale2/nCol.Scale1), ihRow/(nRow.Scale2/nRow.Scale1), a_minor, a_major, sill/(nRow.Scale2*nCol.Scale2), 'Exp'); end end %# Scale-up of fine scale values by averaging [covAvg.Scale2,var_covAvg.Scale2,varNorm_covAvg.Scale2] = general_AverageProperty(nRow.Scale2/nRow.Scale1,nCol.Scale2/nCol.Scale1,1,nRow.Scale1,nCol.Scale1,1,cov.Scale2,1); I am using two functions, general_CovModel() and general_AverageProperty(), in my main script which are given as following: function [cov,h_eff] = general_CovModel(theta, hx, hy, a_minor, a_major, sill, mod_type) % mod_type should be in strings angle_rad = theta*(pi/180); % theta in degrees, angle_rad in radians R_theta = [sin(angle_rad) cos(angle_rad); -cos(angle_rad) sin(angle_rad)]; h = [hx; hy]; lambda = a_minor/a_major; D_lambda = [lambda 0; 0 1]; h_2prime = D_lambda*R_theta*h; h_eff = sqrt((h_2prime(1)^2)+(h_2prime(2)^2)); if strcmp(mod_type,'Sph')==1 || strcmp(mod_type,'sph') ==1 if h_eff<=a cov = sill - sill.*(1.5*(h_eff/a_minor)-0.5*((h_eff/a_minor)^3)); else cov = sill; end elseif strcmp(mod_type,'Exp')==1 || strcmp(mod_type,'exp') ==1 cov = sill-(sill.*(1-exp(-(3*h_eff)/a_minor))); elseif strcmp(mod_type,'Gauss')==1 || strcmp(mod_type,'gauss') ==1 cov = sill-(sill.*(1-exp(-((3*h_eff)^2/(a_minor^2))))); end and function [PropertyAvg,variance_PropertyAvg,NormVariance_PropertyAvg]=... general_AverageProperty(blocksize_row,blocksize_col,blocksize_t,... nUpscaledRow,nUpscaledCol,nUpscaledT,PropertyArray,omega) % This function computes average of a property and variance of that averaged % property using power averaging PropertyAvg=zeros(nUpscaledRow,nUpscaledCol,nUpscaledT); %# Average of property for k=1:nUpscaledT, for j=1:nUpscaledCol, for i=1:nUpscaledRow, sum=0; for a=1:blocksize_row, for b=1:blocksize_col, for c=1:blocksize_t, sum=sum+(PropertyArray((i-1)*blocksize_row+a,(j-1)*blocksize_col+b,(k-1)*blocksize_t+c).^omega); % add all the property values in 'blocksize_x','blocksize_y','blocksize_t' to one variable end end end PropertyAvg(i,j,k)=(sum/(blocksize_row*blocksize_col*blocksize_t)).^(1/omega); % take average of the summed property end end end %# Variance of averageed property variance_PropertyAvg=var(reshape(PropertyAvg,... nUpscaledRow*nUpscaledCol*nUpscaledT,1),1,1); %# Normalized variance of averageed property NormVariance_PropertyAvg=variance_PropertyAvg./(var(reshape(... PropertyArray,numel(PropertyArray),1),1,1)); Question: Using Matlab, I would like to optimize covAvg.Scale2 such that it matches closely with cov.Scale1 by perturbing/varying any (or all) of the following variables 1) a_minor 2) a_major 3) theta Thanks.

    Read the article

  • Compiler optimization of repeated accessor calls

    - by apocalypse9
    I've found recently that for some types of financial calculations the following pattern is much easier to follow and test, especially in situations where we may need to get numbers from various stages of the computation. public class nonsensical_calculator { ... double _rate; int _term; int _days; double monthlyRate { get { return _rate / 12; }} public double days { get { return (1 - i); }} double ar { get { return (1 + days) / (monthlyRate * days); }} double bleh { get { return Math.Pow(ar - days, _term); }} public double raar { get { return bleh * ar/2 * ar / days; }} .... } Obviously this often results in multiple calls to the same accessor within a given formula. I was curious as to whether or not the compiler is smart enough to optimize away these repeated calls with no intervening change in state, or whether this style is causing a decent performance hit. Further reading suggestions are always appreciated.

    Read the article

  • Project Euler: Programmatic Optimization for Problem 7?

    - by bmucklow
    So I would call myself a fairly novice programmer as I focused mostly on hardware in my schooling and not a lot of Computer Science courses. So I solved Problem 7 of Project Euler: By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13. What is the 10001st prime number? I managed to solve this without problem in Java, but when I ran my solution it took 8 and change seconds to run. I was wondering how this could be optimized from a programming standpoint, not a mathematical standpoint. Is the array looping and while statements the main things eating up processing time? And how could this be optimized? Again not looking for a fancy mathematical equation..there are plenty of those in the solution thread. SPOILER My solution is listed below. public class PrimeNumberList { private ArrayList<BigInteger> primesList = new ArrayList<BigInteger>(); public void fillList(int numberOfPrimes) { primesList.add(new BigInteger("2")); primesList.add(new BigInteger("3")); while (primesList.size() < numberOfPrimes){ getNextPrime(); } } private void getNextPrime() { BigInteger lastPrime = primesList.get(primesList.size()-1); BigInteger currentTestNumber = lastPrime; BigInteger modulusResult; boolean prime = false; while(!prime){ prime = true; currentTestNumber = currentTestNumber.add(new BigInteger("2")); for (BigInteger bi : primesList){ modulusResult = currentTestNumber.mod(bi); if (modulusResult.equals(BigInteger.ZERO)){ prime = false; break; } } if(prime){ primesList.add(currentTestNumber); } } } public BigInteger get(int primeTerm) { return primesList.get(primeTerm - 1); } }

    Read the article

  • Python optimization

    - by Rami Jarrar
    Hi, I do it like this: f = open('wl4.txt', 'w') hh = 0 ###################################### for n in range(1,5): for l in range(33,127): if n==1: b = chr(l) + '\n' f.write(b) hh += 1 elif n==2: for s0 in range(33, 127): b = chr(l) + chr(s0) + '\n' f.write(b) hh += 1 elif n==3: for s0 in range(33, 127): for s1 in range(33, 127): b = chr(l) + chr(s0) + chr(s1) + '\n' f.write(b) hh += 1 elif n==4: for s0 in range(33, 127): for s1 in range(33, 127): for s2 in range(33,127): b = chr(l) + chr(s0) + chr(s1) + chr(s2) + '\n' f.write(b) hh += 1 ###################################### print "We Made %d Words." %(hh) ###################################### f.close() So, is there any method to make it faster?

    Read the article

  • Open space sitting optimization algorithm

    - by Georgy Bolyuba
    As a result of changes in the company, we have to rearrange our seating plan: one room with 10 desks in it. Some desks are more popular than others for a number of reasons. One solution would be to draw a desk number from a hat. We think there is a better way to do it. We have 10 desks and 10 people. Let's give every person in this contest 50 hypothetical tokens to bid on the desks. There is no limit on how much you bid on one desk; you can put all 50 on it, which would be saying "I want to sit only here, period". You can also say "I do not care" by giving every desk 5 tokens. Important note: nobody knows what other people are doing. Everyone has to decide based only on his/her best interest (sounds familiar?). Now let's say we obtained these hypothetical results: # | Desk# >| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 1 | Alise | 30 | 2 | 2 | 1 | 0 | 0 | 0 | 15 | 0 | 0 | = 50 2 | Bob | 20 | 15 | 0 | 10 | 1 | 1 | 1 | 1 | 1 | 0 | = 50 ... 10 | Zed | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | = 50 Now, what we need to find is the one (or more) configuration(s) that gives us maximum satisfaction (i.e. people get the desks they wanted, taking into account all the bids and maximizing the total for the group; naturally the assumption is that the more one bid on a desk the more he/she wants it). Since there are only 10 people, I think we can brute-force it by looking at all possible configurations, but I was wondering if there is a better algorithm for solving this kind of problem?

    Read the article

  • Optimization of running total calculation in SQL for multiple values per join condition

    - by Kiril
    I have the following table (test_table): date value --------------- d1 10.0 d1 20.0 d2 60.0 d2 10.0 d2 -20.0 d3 40.0 I calculate the running total as follows. I use the same query twice, because first I need to calculate the values for a specific date, and afterwards I can calculate the running total. Otherwise, joining the two tables where date is not unique, I would get too many results from the join: SELECT t1.date, SUM(t2.value) AS total FROM (SELECT date, SUM(value) AS value FROM test_table GROUP BY date) AS t1 JOIN (SELECT date, SUM(value) AS value FROM test_table GROUP BY date) AS t2 ON t1.date >= t2.date GROUP BY t1.date ORDER BY t1.date This gives me (which is fine): date total ------------- d1 30.0 d2 80.0 d3 120.0 BUT, this query isn't very efficient, because I need to change conditions in two places, if necessary. In production, the test_table is a lot bigger (> 4 million rows), and the query takes too much time to complete. Question: How can I avoid using the same query twice?
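
    A hedged sketch, assuming the database supports window functions (e.g. MySQL 8.0+ or PostgreSQL): aggregate per date once and let a window function accumulate the running total, so the table is scanned only once and the filter conditions live in a single place:

        SELECT date,
               SUM(SUM(value)) OVER (ORDER BY date) AS total
        FROM test_table
        GROUP BY date
        ORDER BY date;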

    Read the article

  • optimization mvc code

    - by user276640
    I have code like this: var prj = _dataContext.Project.FirstOrDefault(p => p.isPopular == true); if (prj != null) { prj.isPopular = false; _dataContext.SaveChanges(); } prj = Details(id); prj.isPopular = true; _dataContext.SaveChanges(); The idea: I have only one record with the value true in the isPopular field, so I get it and set it to false, then I get the object by id and set its isPopular to true. I don't like making 2 calls to SaveChanges. Any ideas?

    Read the article

  • Python: Memory usage and optimization when modifying lists

    - by xApple
    The problem My concern is the following: I am storing a relativity large dataset in a classical python list and in order to process the data I must iterate over the list several times, perform some operations on the elements, and often pop an item out of the list. It seems that deleting one item out of a Python list costs O(N) since Python has to copy all the items above the element at hand down one place. Furthermore, since the number of items to delete is approximately proportional to the number of elements in the list this results in an O(N^2) algorithm. I am hoping to find a solution that is cost effective (time and memory-wise). I have studied what I could find on the internet and have summarized my different options below. Which one is the best candidate ? Keeping a local index: while processingdata: index = 0 while index < len(somelist): item = somelist[index] dosomestuff(item) if somecondition(item): del somelist[index] else: index += 1 This is the original solution I came up with. Not only is this not very elegant, but I am hoping there is better way to do it that remains time and memory efficient. Walking the list backwards: while processingdata: for i in xrange(len(somelist) - 1, -1, -1): dosomestuff(item) if somecondition(somelist, i): somelist.pop(i) This avoids incrementing an index variable but ultimately has the same cost as the original version. It also breaks the logic of dosomestuff(item) that wishes to process them in the same order as they appear in the original list. Making a new list: while processingdata: for i, item in enumerate(somelist): dosomestuff(item) newlist = [] for item in somelist: if somecondition(item): newlist.append(item) somelist = newlist gc.collect() This is a very naive strategy for eliminating elements from a list and requires lots of memory since an almost full copy of the list must be made. Using list comprehensions: while processingdata: for i, item in enumerate(somelist): dosomestuff(item) somelist[:] = [x for x in somelist if somecondition(x)] This is very elegant but under-the-cover it walks the whole list one more time and must copy most of the elements in it. My intuition is that this operation probably costs more than the original del statement at least memory wise. Keep in mind that somelist can be huge and that any solution that will iterate through it only once per run will probably always win. Using the filter function: while processingdata: for i, item in enumerate(somelist): dosomestuff(item) somelist = filter(lambda x: not subtle_condition(x), somelist) This also creates a new list occupying lots of RAM. Using the itertools' filter function: from itertools import ifilterfalse while processingdata: for item in itertools.ifilterfalse(somecondtion, somelist): dosomestuff(item) This version of the filter call does not create a new list but will not call dosomestuff on every item breaking the logic of the algorithm. I am including this example only for the purpose of creating an exhaustive list. Moving items up the list while walking while processingdata: index = 0 for item in somelist: dosomestuff(item) if not somecondition(item): somelist[index] = item index += 1 del somelist[index:] This is a subtle method that seems cost effective. I think it will move each item (or the pointer to each item ?) exactly once resulting in an O(N) algorithm. Finally, I hope Python will be intelligent enough to resize the list at the end without allocating memory for a new copy of the list. Not sure though. 
Abandoning Python lists: class Doubly_Linked_List: def __init__(self): self.first = None self.last = None self.n = 0 def __len__(self): return self.n def __iter__(self): return DLLIter(self) def iterator(self): return self.__iter__() def append(self, x): x = DLLElement(x) x.next = None if self.last is None: x.prev = None self.last = x self.first = x self.n = 1 else: x.prev = self.last x.prev.next = x self.last = x self.n += 1 class DLLElement: def __init__(self, x): self.next = None self.data = x self.prev = None class DLLIter: etc... This type of object resembles a python list in a limited way. However, deletion of an element is guaranteed O(1). I would not like to go here since this would require massive amounts of code refactoring almost everywhere.

    Read the article

  • MySQL query optimization JOIN

    - by Pierre
    Hi, I need your help to optimize these MySQL queries; both are in my slow query log. SELECT a.nom, c.id_apps, c.id_commentaire, c.id_utilisateur, c.note_commentaire, u.nom_utilisateur FROM comments AS c LEFT JOIN apps AS a ON c.id_apps = a.id_apps LEFT JOIN users AS u ON c.id_utilisateur = u.id_utilisateur ORDER BY c.date_commentaire DESC LIMIT 5; There is a MySQL index on c.id_apps, a.id_apps, c.id_utilisateur, u.id_utilisateur and c.date_commentaire. SELECT a.id_apps, a.id_itunes, a.nom, a.prix, a.resume, c.nom_fr_cat, e.nom_edit FROM apps AS a LEFT JOIN cat AS c ON a.categorie = c.id_cat LEFT JOIN edit AS e ON a.editeur = e.id_edit ORDER BY a.id_apps DESC LIMIT 20; There is a MySQL index on a.categorie, c.id_cat, a.editeur, e.id_edit and a.id_apps. Thanks
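
    One hedged rewrite of the first query (same tables and columns as above): pick the 5 newest comments in a derived table first, so the index on date_commentaire limits the work to 5 rows before any joins happen, instead of joining every comment and sorting afterwards. Whether this beats the original depends on the optimizer, so compare with EXPLAIN:

        SELECT a.nom, c.id_apps, c.id_commentaire, c.id_utilisateur,
               c.note_commentaire, u.nom_utilisateur
        FROM (
            SELECT id_commentaire, id_apps, id_utilisateur, note_commentaire, date_commentaire
            FROM comments
            ORDER BY date_commentaire DESC
            LIMIT 5
        ) AS c
        LEFT JOIN apps  AS a ON c.id_apps = a.id_apps
        LEFT JOIN users AS u ON c.id_utilisateur = u.id_utilisateur
        ORDER BY c.date_commentaire DESC;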

    Read the article

  • Mysql Server Optimization

    - by Ish Kumar
    Hi Geeks, We are having serious MySQL (InnoDB) performance issues at the moments when we do: (10-20) insertions on TABLE1 and (10-20) updates on TABLE2. Note: both of the above operations happen within a fraction of a second, and this occurs every few (10-15) minutes. Meanwhile, all online users (approx. 400-600) are doing a read operation on a join of TABLE1 & TABLE2 every 1 second. Here is our MySQL configuration info: http://docs.google.com/View?id=dfrswh7c_117fmgcmb44 Issues: Lots of queries wait and expire later (seen in phpMyAdmin / processes). My poor MySQL server sometimes crashes. Questions: Q1: Any suggestions to optimize at the MySQL level? Q2: I am thinking of using persistent connections at the application level; is that right? Info added later: Database engine: InnoDB. TABLE1: 400,000 rows (inserting 8,000 daily) & TABLE2: 8,000 rows. 1-second query: SELECT b.id, b.user_id, b.description, b.debit, b.created, b.price, u.username, u.email, u.mobile FROM TABLE1 b, TABLE2 u WHERE b.credit = 0 AND b.user_id = u.id AND b.auction_id = "12345" ORDER BY b.id DESC LIMIT 10; // there are a few more but they are not so critical. Indexing is good; we are using indexes wisely. In the above query all ids are indexed. TABLE1 has frequent insertions and TABLE2 has frequent updates.
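
    A hedged sketch for the 1-second query above (index name invented; TABLE1 is the placeholder name from the question): a composite index whose leading columns match the equality filters and whose trailing column matches ORDER BY id DESC lets InnoDB read just the 10 newest matching rows instead of filtering and sorting a larger set.

        ALTER TABLE TABLE1
            ADD INDEX idx_auction_credit_id (auction_id, credit, id);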

    Read the article

  • Solver Foundation Optimization - 1D Bin Packing

    - by Val Nolav
    I want to optimize loading marbles into trucks. I do not know if I can use the Solver Foundation classes for that purpose. Before I start writing code, I wanted to ask here. 1 - Marbles can be of any weight between 1 and 24 tons. 2 - A truck can hold a maximum of 24 tons. 3 - A truck can be loaded with as many marble cubes as it can take, up to 24 tons, which means there is no volume limitation. 4 - There can be between 200 and 500 different marbles, depending on time. GOAL - The goal is to load the marbles in the minimum number of truck shipments. How can I do that without writing a lot of if conditions and for loops? Can I use Microsoft Solver Foundation for that purpose? I read the documentation provided by Microsoft; however, I could not find a scenario similar to mine. M1 + M2 + M3 + .... + Mn <= 24 - this is for one truck shipment. Let's say there are 200 different marbles and the marble weights are floats. Thanks
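
    This is the classic one-dimensional bin-packing problem. A hedged sketch of the standard mixed-integer formulation (the kind of model a MIP solver can accept; the symbols below are not Solver Foundation API names):

        \min \sum_{j=1}^{m} y_j
        \text{subject to} \quad \sum_{i=1}^{n} w_i \, x_{ij} \le 24\, y_j \quad \forall j,
        \qquad \sum_{j=1}^{m} x_{ij} = 1 \quad \forall i,
        \qquad x_{ij},\, y_j \in \{0,1\}

    Here w_i is the weight of marble i, x_{ij} = 1 if marble i goes on truck j, y_j = 1 if truck j is used, and m is any upper bound on the number of trucks (e.g. the number of marbles).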

    Read the article

  • query optimization

    - by Gaurav
    I have a query of the form SELECT uid1,uid2 FROM friend WHERE uid1 IN (SELECT uid2 FROM friend WHERE uid1='.$user_id.') and uid2 IN (SELECT uid2 FROM friend WHERE uid1='.$user_id.') The problem now is that the nested query SELECT uid2 FROM friend WHERE uid1='.$user_id.' returns a very large number of ids(approx. 5000). The table structure of the friend table is uid1(int), uid2(int). This table is used to determine whether two users are linked together as friends. Any workaround? Can I write the query in a different way? Or is there some other way to solve this issue. I'm sure I am not the first person to face such a problem. Any help would be greatly appreciated.
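
    One hedged rewrite using the friend table from the question: join the table to itself instead of using two large IN subqueries, so each side becomes an indexed lookup against the user's friend list. This assumes (uid1, uid2) pairs are unique; add DISTINCT if they are not.

        ALTER TABLE friend ADD INDEX idx_uid1_uid2 (uid1, uid2);

        SELECT f.uid1, f.uid2
        FROM friend AS f
        JOIN friend AS a ON a.uid1 = '$user_id' AND a.uid2 = f.uid1
        JOIN friend AS b ON b.uid1 = '$user_id' AND b.uid2 = f.uid2;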

    Read the article

  • Haskell optimization of a function looking for a bytestring terminator

    - by me2
    Profiling of some code showed that about 65% of the time I was inside the following code. What it does is use the Data.Binary.Get monad to walk through a bytestring looking for the terminator. If it detects 0xff, it checks if the next byte is 0x00. If it is, it drops the 0x00 and continues. If it is not 0x00, then it drops both bytes and the resulting list of bytes is converted to a bytestring and returned. Any obvious ways to optimize this? I can't see it. parseECS = f [] False where f acc ff = do b <- getWord8 if ff then if b == 0x00 then f (0xff:acc) False else return $ L.pack (reverse acc) else if b == 0xff then f acc True else f (b:acc) False

    Read the article

  • Mysql InnoDB performance optimization and indexing

    - by Davide C
    Hello everybody, I have 2 databases and I need to link information between two big tables (more than 3M entries each, continuously growing). The 1st database has a table 'pages' that stores various information about web pages, and includes the URL of each one. The column 'URL' is a varchar(512) and has no index. The 2nd database has a table 'urlHops' defined as: CREATE TABLE urlHops ( dest varchar(512) NOT NULL, src varchar(512) DEFAULT NULL, timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, KEY dest_key (dest), KEY src_key (src) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 Now, I basically need to issue (efficiently) queries like this: select p.id,p.URL from db1.pages p, db2.urlHops u where u.src=p.URL and u.dest=? At first, I thought of adding an index on pages(URL). But it's a very long column, and I already issue a lot of INSERTs and UPDATEs on the same table (way more than the number of SELECTs I would do using this index). Other possible solutions I thought of are: -adding a column to pages, storing the md5 hash of the URL and indexing it; this way I could do queries using the md5 of the URL, with the advantage of an index on a smaller column. -adding another table that contains only page id and page URL, indexing both columns. But this is perhaps a waste of space, with the only advantage of not slowing down the inserts and updates I execute on 'pages'. I don't want to slow down the inserts and updates, but at the same time I would like to be able to do the queries on the URL efficiently. Any advice? My primary concern is performance; if needed, wasting some disk space is not a problem. Thank you, regards Davide
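
    A hedged sketch of the md5-column idea from the question (column and index names are made up): store a fixed-width hash of the URL, index that instead of the 512-byte varchar, and re-check the full URL in the join to guard against the rare hash collision.

        ALTER TABLE pages
            ADD COLUMN url_md5 CHAR(32) NOT NULL DEFAULT '',
            ADD INDEX idx_url_md5 (url_md5);

        UPDATE pages SET url_md5 = MD5(URL);

        -- The join then compares short indexed hashes instead of long varchars:
        SELECT p.id, p.URL
        FROM db2.urlHops u
        JOIN db1.pages p ON p.url_md5 = MD5(u.src) AND p.URL = u.src
        WHERE u.dest = ?;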

    Read the article

  • Database query optimization

    - by hdx
    Ok my giant friends, once again I seek a little space on your shoulders :P Here is the issue: I have a Python script that is fixing some database issues, but it is taking way too long. The main update statement is this: cursor.execute("UPDATE jiveuser SET username = '%s' WHERE userid = %d" % (newName,userId)) That gets called about 9500 times with different newName and userId pairs... Any suggestions on how to speed up the process? Maybe there is a way to do all the updates with just one query? Any help will be much appreciated! PS: Postgres is the DB being used.
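
    A hedged sketch of the one-query approach on PostgreSQL (the sample names and ids are placeholders): send all the pairs in a single statement by joining the table against a VALUES list, instead of issuing 9500 separate UPDATEs.

        UPDATE jiveuser AS j
        SET username = v.newname
        FROM (VALUES
            ('alice_new', 101),
            ('bob_new',   102)
            -- ... one row per (newName, userId) pair ...
        ) AS v(newname, userid)
        WHERE j.userid = v.userid;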

    Read the article

  • (x86) Assembler Optimization

    - by Pindatjuh
    I'm building a compiler/assembler/linker in Java for the x86-32 (IA32) processor targeting Windows. High-level concepts of a "language" (in essential a Java API for creating executables) are translated into opcodes, which then are wrapped and outputted to a file. The translation process has several phases, one is the translation between languages: the highest-level code is translated into the medium-level code which is then translated into the lowest-level code (probably more than 3 levels). My problem is the following; if I have higher-level code (X and Y) translated to lower-level code (x, y, U and V), then an example of such a translation is, in pseudo-code: x + U(f) // generated by X + V(f) + y // generated by Y (An easy example) where V is the opposite of U (compare with a stack push as U and a pop as V). This needs to be 'optimized' into: x + y (essentially removing the "useless" code) My idea was to use regular expressions. For the above case, it'll be a regular expression looking like this: x:(U(x)+V(x)):null, meaning for all x find U(x) followed by V(x) and replace by null. Imagine more complex regular expressions, for more complex optimizations. This should work on all levels. What do you suggest? What would be a good approach to optimize in these situations?

    Read the article

  • Execution Plan Optimization when where clause is removed then added back

    - by nmushov
    I have a stored procedure that uses a table valued function which executes in 9 seconds. If I alter the table valued function and remove the where clause, the stored procedure executes in 3 seconds. If I add the where clause back, the query still executes in 3 seconds. I took a look at the execution plans and it appears that after I remove the where clause, the execution plan includes parallelism and the scan count for 2 of my tables drops for 50000 and 65000 down to 5 and 3. After I add the where clause back, the optimized execution plan still runs unless I run DBCC FREEPROCCACHE. Questions 1. Why would SQL Server start using the optimized execution plan for both queries only when I first remove the where clause? Is there a way to force SQL Server to use this execution plan? Also, this is a paramaterized all-in-one query that uses the (Parameter is null or Parameter) in the where clause, which I believe is bad for performance. RETURNS TABLE AS RETURN ( SELECT TOP (@PageNumber * @PageSize) CASE WHEN @SortOrder = 'Expensive' THEN ROW_NUMBER() OVER (ORDER BY SellingPrice DESC) WHEN @SortOrder = 'Inexpensive' THEN ROW_NUMBER() OVER (ORDER BY SellingPrice ASC) WHEN @SortOrder = 'LowMiles' THEN ROW_NUMBER() OVER (ORDER BY Mileage ASC) WHEN @SortOrder = 'HighMiles' THEN ROW_NUMBER() OVER (ORDER BY Mileage DESC) WHEN @SortOrder = 'Closest' THEN ROW_NUMBER() OVER (ORDER BY P1.Distance ASC) WHEN @SortOrder = 'Newest' THEN ROW_NUMBER() OVER (ORDER BY [Year] DESC) WHEN @SortOrder = 'Oldest' THEN ROW_NUMBER() OVER (ORDER BY [Year] ASC) ELSE ROW_NUMBER() OVER (ORDER BY InventoryID ASC) END as rn, P1.InventoryID, P1.SellingPrice, P1.Distance, P1.Mileage, Count(*) OVER () RESULT_COUNT, dimCarStatus.[year] FROM (SELECT InventoryID, SellingPrice, Zip.Distance, Mileage, ColorKey, CarStatusKey, CarKey FROM facInventory JOIN @ZipCodes Zip ON Zip.DealerKey = facInventory.DealerKey) as P1 JOIN dimColor ON dimColor.ColorKey = P1.ColorKey JOIN dimCarStatus ON dimCarStatus.CarStatusKey = P1.CarStatusKey JOIN dimCar ON dimCar.CarKey = P1.CarKey WHERE (@ExteriorColor is NULL OR dimColor.ExteriorColor like @ExteriorColor) AND (@InteriorColor is NULL OR dimColor.InteriorColor like @InteriorColor) AND (@Condition is NULL OR dimCarStatus.Condition like @Condition) AND (@Year is NULL OR dimCarStatus.[Year] like @Year) AND (@Certified is NULL OR dimCarStatus.Certified like @Certified) AND (@Make is NULL OR dimCar.Make like @Make) AND (@ModelCategory is NULL OR dimCar.ModelCategory like @ModelCategory) AND (@Model is NULL OR dimCar.Model like @Model) AND (@Trim is NULL OR dimCar.Trim like @Trim) AND (@BodyType is NULL OR dimCar.BodyType like @BodyType) AND (@VehicleTypeCode is NULL OR dimCar.VehicleTypeCode like @VehicleTypeCode) AND (@MinPrice is NULL OR P1.SellingPrice >= @MinPrice) AND (@MaxPrice is NULL OR P1.SellingPrice < @MaxPrice) AND (@Mileage is NULL OR P1.Mileage < @Mileage) ORDER BY CASE WHEN @SortOrder = 'Expensive' THEN -SellingPrice WHEN @SortOrder = 'Inexpensive' THEN SellingPrice WHEN @SortOrder = 'LowMiles' THEN Mileage WHEN @SortOrder = 'HighMiles' THEN -Mileage WHEN @SortOrder = 'Closest' THEN P1.Distance WHEN @SortOrder = 'Newest' THEN -[YEAR] WHEN @SortOrder = 'Oldest' THEN [YEAR] ELSE InventoryID END )
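
    A hedged sketch of one common workaround for catch-all "(@Parameter IS NULL OR column = @Parameter)" predicates on SQL Server: add OPTION (RECOMPILE) to the statement that calls the inline table-valued function (the function name and parameter list below are placeholders, not from the question), so the optimizer builds a plan for the actual parameter values on each execution instead of reusing one cached plan for every combination.

        SELECT *
        FROM dbo.GetInventoryPage(@PageNumber, @PageSize, @SortOrder,
                                  @Make, @Model, @MinPrice, @MaxPrice, @Mileage)
        OPTION (RECOMPILE);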

    Read the article

  • Java variables -> replace? RAM optimization

    - by poeschlorn
    Hi guys, I just wanted to know what happens behind the scenes in my program when I declare and initialize a variable and later initialize it again with other values, e.g. an ArrayList or something similar. What happens in my RAM when I write, for example, this: ArrayList<String> al = new ArrayList<String>(); ...add values, work with it and so on.... al = new ArrayList<String>(); So is my first ArrayList kept in RAM, will the second ArrayList be stored in the same position where the first one was before, or will it just change the reference of "al"? If it is not replaced... is there a way to manually free the RAM which was occupied by the first ArrayList (without waiting for the garbage collector)? Would it help to set it to null first? Nice greetings, poeschlorn

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >