Search Results

Search found 20224 results on 809 pages for 'query optimization'.

Page 17 of 809.

  • Algorithm for generating an array of non-equal costs for a transport problem optimization

    - by Carlos
    I have an optimizer that solves a transportation problem, using a cost matrix of all the possible paths. The optimizer works fine, but if two of the costs are equal, the solution contains one more path than the minimum number of paths. (Think of it as load-balancing routers; if two routes have the same cost, you'll use them both.) I would like the minimum number of routes, and to do that I need a cost matrix that doesn't have two costs that are equal within a certain tolerance. At the moment, I'm passing the cost matrix through a baking function which tests every entry for equality to each of the other entries, and moves it by a fixed percentage if it matches. However, this approach seems to require N^2 comparisons, and if the starting values are all the same, the last cost will be r^N bigger (r is the arbitrary fixed percentage). Also there is the problem that by multiplying by the percentage, you end up on top of another value. So the problem seems to have an element of recursion, or at least repeated checking, which bloats the code. The current implementation is basically not very good (I won't paste my GOTO-using code here for you all to mock), and I'd like to improve it. Is there a name for what I'm after, and is there a standard implementation? Example: {1,1,2,3,4,5} (tol = 0.05) becomes {1,1.05,2,3,4,5}
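
    A minimal Python sketch of one way to get the described behaviour without N^2 comparisons: sort the costs once, then sweep them in order and bump any value that sits within the tolerance of its predecessor. The function name and the assumptions here (positive costs, a relative tolerance, nudging upward being acceptable) are illustrative, not taken from the question.

    def spread_costs(costs, tol=0.05):
        # Sort indices by cost so near-equal values become neighbours,
        # then walk them once, pushing each one just past its predecessor.
        # Assumes positive costs and that nudging upward is acceptable.
        order = sorted(range(len(costs)), key=lambda i: costs[i])
        out = list(costs)
        prev = None
        for i in order:
            if prev is not None and out[i] <= prev * (1 + tol):
                out[i] = prev * (1 + tol)
            prev = out[i]
        return out

    # Reproduces the example from the question:
    # [1, 1, 2, 3, 4, 5] with tol = 0.05 becomes [1, 1.05, 2, 3, 4, 5]
    print(spread_costs([1, 1, 2, 3, 4, 5]))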

    Read the article

  • Insert takes too long, code optimization needed

    - by Pentium10
    I have some code I use to transfer table1's values to another table, table2; they sit in different databases. It's slow when I have 100,000 records; it takes forever to finish, 10+ minutes (Windows Mobile smartphone). What can I do? cmd.CommandText = "insert into " + TableName + " select * from sync2." + TableName+""; cmd.ExecuteNonQuery(); EDIT: The problem is not resolved. I am still after answers.

    Read the article

  • Image/"most resembling pixel" search optimization?

    - by SigTerm
    The situation: Let's say I have an image A, say, 512x512 pixels, and image B, 5x5 or 7x7 pixels. Both images are 24bit rgb, and B have 1bit alpha mask (so each pixel is either completely transparent or completely solid). I need to find within image A a pixel which (with its' neighbors) most closely resembles image B, OR the pixel that probably most closely resembles image B. Resemblance is calculated as "distance" which is sum of "distances" between non-transparent B's pixels and A's pixels divided by number of non-transparent B's pixels. Here is a sample SDL code for explanation: struct Pixel{ unsigned char b, g, r, a; }; void fillPixel(int x, int y, SDL_Surface* dst, SDL_Surface* src, int dstMaskX, int dstMaskY){ Pixel& dstPix = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*x + dst->pitch*y)); int xMin = x + texWidth - searchWidth; int xMax = xMin + searchWidth*2; int yMin = y + texHeight - searchHeight; int yMax = yMin + searchHeight*2; int numFilled = 0; for (int curY = yMin; curY < yMax; curY++) for (int curX = xMin; curX < xMax; curX++){ Pixel& cur = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*(curX & texMaskX) + dst->pitch*(curY & texMaskY))); if (cur.a != 0) numFilled++; } if (numFilled == 0){ int srcX = rand() % src->w; int srcY = rand() % src->h; dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*srcX + src->pitch*srcY)); dstPix.a = 0xFF; return; } int storedSrcX = rand() % src->w; int storedSrcY = rand() % src->h; float lastDifference = 3.40282347e+37F; //unsigned char mask = for (int srcY = searchHeight; srcY < (src->h - searchHeight); srcY++) for (int srcX = searchWidth; srcX < (src->w - searchWidth); srcX++){ float curDifference = 0; int numPixels = 0; for (int tmpY = -searchHeight; tmpY < searchHeight; tmpY++) for(int tmpX = -searchWidth; tmpX < searchWidth; tmpX++){ Pixel& tmpSrc = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*(srcX+tmpX) + src->pitch*(srcY+tmpY))); Pixel& tmpDst = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*((x + dst->w + tmpX) & dstMaskX) + dst->pitch*((y + dst->h + tmpY) & dstMaskY))); if (tmpDst.a){ numPixels++; int dr = tmpSrc.r - tmpDst.r; int dg = tmpSrc.g - tmpDst.g; int db = tmpSrc.g - tmpDst.g; curDifference += dr*dr + dg*dg + db*db; } } if (numPixels) curDifference /= (float)numPixels; if (curDifference < lastDifference){ lastDifference = curDifference; storedSrcX = srcX; storedSrcY = srcY; } } dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*storedSrcX + src->pitch*storedSrcY)); dstPix.a = 0xFF; } This thing is supposed to be used for texture generation. Now, the question: The easiest way to do this is brute force search (which is used in example routine). But it is slow - even using GPU acceleration and dual core cpu won't make it much faster. It looks like I can't use modified binary search because of B's mask. So, how can I find desired pixel faster? Additional Info: It is allowed to use 2 cores, GPU acceleration, CUDA, and 1.5..2 gigabytes of RAM for the task. I would prefer to avoid some kind of lengthy preprocessing phase that will take 30 minutes to finish. Ideas?
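
    One standard way to speed up the brute force in the excerpt, short of a heavy preprocessing pass, is early abandoning: stop accumulating a candidate's distance as soon as it already exceeds the best score found so far. Below is a hedged Python/NumPy sketch of that idea; the array names and shapes are assumptions, and it searches plain top-left offsets rather than reproducing the wrap-around (& texMaskX) addressing of the original code.

    import numpy as np

    def best_match(src, patch, mask):
        # src:   (H, W, 3) uint8 image to search in (stand-in for image A)
        # patch: (h, w, 3) uint8 template (stand-in for image B)
        # mask:  (h, w) bool, True where the template pixel is solid
        H, W, _ = src.shape
        h, w, _ = patch.shape
        n = int(mask.sum()) or 1
        p = patch.astype(np.int64)
        best, best_pos = np.inf, (0, 0)
        for y in range(H - h + 1):
            for x in range(W - w + 1):
                win = src[y:y + h, x:x + w].astype(np.int64)
                d = ((win - p) ** 2).sum(axis=2)[mask]
                score = 0.0
                # Early abandoning: give up on this candidate as soon as its
                # partial (masked) distance already exceeds the best so far.
                for chunk in np.array_split(d, 4):
                    score += chunk.sum()
                    if score / n >= best:
                        break
                else:
                    best, best_pos = score / n, (y, x)
        return best_pos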

    Read the article

  • Optimization in Common Declaration

    - by Pratik
    It's a 3-tier ASP.NET Website Project. In the Data Layer there is a class "Common Declaration" in which a lot of common things are declared. Something like this: public class CommonDeclartion { #region Common Messages public const string RECORD_INSERT_MSG = "Record Inserted Successfully "; public const string RECORD_UPDATE_MSG = "Record Updated Successfully"; public const string RECORD_DELETE_MSG = "Record Deleted Successfully"; public const string ERROR_MSG = "Error Ocuured while Perfoming This Action."; public const string UserID_Incorrect = "Please Enter The Correct User ID."; public const string RECORD_ALREADY_EXIT = "Record Already Exit"; public const string NO_RECORD = "No Record found."; #endregion } Can this be more optimized in terms of: 1. Performance 2. Security (if any) 3. Code Readability or Reusability? I thought of using an enum but can't figure that out: enum CommonMessages { RECORD_INSERT_MSG "Record Inserted Successfully.", RECORD_UPDATE_MSG "Record Updated Successfully.", RECORD_DELETE_MSG "Record Deleted Successfully.", ERROR_MSG "Error Ocuured while Perfoming This Action.", UserID_Incorrect "Please Enter The Correct User ID.", RECORD_ALREADY_EXIT "Record Already Exit.", NO_RECORD "No Record found.", } Or else, should I keep them in some collection like a Dictionary/NameValueCollection, or do I have to keep them in XML as key/value pairs and retrieve them from there? What would be the better way, keeping in mind: 1. Performance 2. Security (if any) 3. Code Readability or Reusability

    Read the article

  • Optimization of Function with Dictionary and Zip()

    - by eWizardII
    Hello, I have the following function: def filetxt(): word_freq = {} lvl1 = [] lvl2 = [] total_t = 0 users = 0 text = [] for l in range(0,500): # Open File if os.path.exists("C:/Twitter/json/user_" + str(l) + ".json") == True: with open("C:/Twitter/json/user_" + str(l) + ".json", "r") as f: text_f = json.load(f) users = users + 1 for i in range(len(text_f)): text.append(text_f[str(i)]['text']) total_t = total_t + 1 else: pass # Filter occ = 0 import string for i in range(len(text)): s = text[i] # Sample string a = re.findall(r'(RT)',s) b = re.findall(r'(@)',s) occ = len(a) + len(b) + occ s = s.encode('utf-8') out = s.translate(string.maketrans("",""), string.punctuation) # Create Wordlist/Dictionary word_list = text[i].lower().split(None) for word in word_list: word_freq[word] = word_freq.get(word, 0) + 1 keys = word_freq.keys() numbo = range(1,len(keys)+1) WList = ', '.join(keys) NList = str(numbo).strip('[]') WList = WList.split(", ") NList = NList.split(", ") W2N = dict(zip(WList, NList)) for k in range (0,len(word_list)): word_list[k] = W2N[word_list[k]] for i in range (0,len(word_list)-1): lvl1.append(word_list[i]) lvl2.append(word_list[i+1]) I have used the profiler to find that it seems the greatest CPU time is spent on the zip() function and the join and split parts of the code, I'm looking to see if there is any way I have overlooked that I could potentially clean up the code to make it more optimized, since the greatest lag seems to be in how I am working with the dictionaries and the zip() function. Any help would be appreciated thanks!
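
    Most of the cost flagged by the profiler comes from rebuilding W2N on every pass: the dictionary keys are joined into one string, split again, zipped against a stringified range, and then every word is looked up in that throwaway dict. A hedged Python sketch of just that section, assuming a stable word-to-id mapping is what is actually wanted (the original renumbers words from the dictionary's current key order on each file, so its ids can shift as new words appear; ids here are plain ints rather than the strings the original produced):

    from collections import defaultdict

    word_freq = defaultdict(int)
    word_to_id = {}            # maps each word to a numeric id the first time it is seen
    lvl1, lvl2 = [], []

    def index_words(word_list):
        # Count frequencies and assign ids directly, with no join/split/zip round-trip.
        for word in word_list:
            word_freq[word] += 1
            if word not in word_to_id:
                word_to_id[word] = len(word_to_id) + 1   # ids start at 1, as before
        ids = [word_to_id[w] for w in word_list]
        lvl1.extend(ids[:-1])   # consecutive pairs, as in the original loops
        lvl2.extend(ids[1:])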

    Read the article

  • C optimization breaks algorithm

    - by Halpo
    I am programming an algorithm that contains 4 nested for loops. The problem is that at each level a pointer is updated. The innermost loop only uses 1 of the pointers. The algorithm does a complicated count. When I include a debugging statement that logs the combination of the indexes and the results of the count, I get the correct answer. When the debugging statement is omitted, the count is incorrect. The program is compiled with the -O3 option on gcc. Why would this happen?

    Read the article

  • Seeking free ODBC database optimization tool for non-experts

    - by mawg
    I'm a database n00b and am reading as many books as I can. I have been given responsibility for an ODBC tool where the databases were designed by a hardware engineer with some VB experience - which made him a s/w guru in the small firm at that time. Things are running slowly and I suspect that the db could have been designed better. I hope to learn enough to use Explain/Describe, etc., and maybe add some indices, but, in the meantime, is there any free-for-commercial-use tool which can examine an ODBC database and suggest improvements? I'm just talking about the db schema here, but maybe I should also be looking at optimizing Selects with Joins? Is there a tool for that? ODBC compliant.

    Read the article

  • SQL Server procedure optimization

    - by stackoverflow
    SQl Server 2005: Option: 1 CREATE TABLE #test (customerid, orderdate, field1 INT, field2 INT, field3 INT) CREATE UNIQUE CLUSTERED INDEX Idx1 ON #test(customerid) CREATE INDEX Idx2 ON #test(field1 DESC) CREATE INDEX Idx3 ON #test(field2 DESC) CREATE INDEX Idx4 ON #test(field3 DESC) INSERT INTO #test (customerid, orderdate, field1 INT, field2 INT, field3 INT) SELECT customerid, orderdate, field1, field2, field3 FROM ATABLERETURNING4000000ROWS compared to Option: 2 CREATE TABLE #test (customerid, orderdate, field1 INT, field2 INT, field3 INT) INSERT INTO #test (customerid, orderdate, field1 INT, field2 INT, field3 INT) SELECT customerid, orderdate, field1, field2, field3 FROM ATABLERETURNING4000000ROWS CREATE UNIQUE CLUSTERED INDEX Idx1 ON #test(customerid) CREATE INDEX Idx2 ON #test(field1 DESC) CREATE INDEX Idx3 ON #test(field2 DESC) CREATE INDEX Idx4 ON #test(field3 DESC) When we use the second option it runs close to 50% faster. Why is this?

    Read the article

  • Help with code optimization

    - by Ockonal
    Hello, I've written a little particle system for my 2d-application. Here is raining code: // HPP ----------------------------------- struct Data { float x, y, x_speed, y_speed; int timeout; Data(); }; std::vector<Data> mData; bool mFirstTime; void processDrops(float windPower, int i); // CPP ----------------------------------- Data::Data() : x(rand()%ScreenResolutionX), y(0) , x_speed(0), y_speed(0), timeout(rand()%130) { } void Rain::processDrops(float windPower, int i) { int posX = rand() % mWindowWidth; mData[i].x = posX; mData[i].x_speed = WindPower*0.1; // WindPower is float mData[i].y_speed = Gravity*0.1; // Gravity is 9.8 * 19.2 // If that is first time, process drops randomly with window height if (mFirstTime) { mData[i].timeout = 0; mData[i].y = rand() % mWindowHeight; } else { mData[i].timeout = rand() % 130; mData[i].y = 0; } } void update(float windPower, float elapsed) { // If this is first time - create array with new Data structure objects if (mFirstTime) { for (int i=0; i < mMaxObjects; ++i) { mData.push_back(Data()); processDrops(windPower, i); } mFirstTime = false; } for (int i=0; i < mMaxObjects; i++) { // Sleep until uptime > 0 (To make drops fall with randomly timeout) if (mData[i].timeout > 0) { mData[i].timeout--; } else { // Find new x/y positions mData[i].x += mData[i].x_speed * elapsed; mData[i].y += mData[i].y_speed * elapsed; // Find new speeds mData[i].x_speed += windPower * elapsed; mData[i].y_speed += Gravity * elapsed; // Drawing here ... // If drop has been falled out of the screen if (mData[i].y > mWindowHeight) processDrops(windPower, i); } } } So the main idea is: I have some structure which consist of drop position, speed. I have a function for processing drops at some index in the vector-array. Now if that's first time of running I'm making array with max size and process it in cycle. But this code works slower that all another I have. Please, help me to optimize it. I tried to replace all int with uint16_t but I think it doesn't matter.

    Read the article

  • WPF PathGeometry/RotateTransform optimization

    - by devinb
    I am having performance issues when rendering/rotating WPF triangles If I had a WPF triangle being displayed and it will be rotated to some degree around a centrepoint, I can do it one of two ways: Programatically determine the points and their offset in the backend, use XAML to simply place them on the canvas where they belong, it would look like this: <Path Stroke="Black"> <Path.Data> <PathGeometry> <PathFigure StartPoint ="{Binding CalculatedPointA, Mode=OneWay}"> <LineSegment Point="{Binding CalculatedPointB, Mode=OneWay}" /> <LineSegment Point="{Binding CalculatedPointC, Mode=OneWay}" /> <LineSegment Point="{Binding CalculatedPointA, Mode=OneWay}" /> </PathFigure> </PathGeometry> </Path.Data> </Path> Generate the 'same' triangle every time, and then use a RenderTransform (Rotate) to put it where it belongs. In this case, the rotation calculations are being obfuscated, because I don't have any access to how they are being done. <Path Stroke="Black"> <Path.Data> <PathGeometry> <PathFigure StartPoint ="{Binding TriPointA, Mode=OneWay}"> <LineSegment Point="{Binding TriPointB, Mode=OneWay}" /> <LineSegment Point="{Binding TriPointC, Mode=OneWay}" /> <LineSegment Point="{Binding TriPointA, Mode=OneWay}" /> </PathFigure> </PathGeometry> </Path.Data> <Path.RenderTransform> <RotateTransform CenterX="{Binding Centre.X, Mode=OneWay}" CenterY="{Binding Centre.Y, Mode=OneWay}" Angle="{Binding Orientation, Mode=OneWay}" /> </Path.RenderTransform> </Path> My question is which one is faster? I know I should test it myself but how do I measure the render time of objects with such granularity. I would need to be able to time how long the actual rendering time is for the form, but since I'm not the one that's kicking off the redraw, I don't know how to capture the start time.

    Read the article

  • Website optimization

    - by MB1
    Hi, how can I speed up the loading of images - especially when I open the website for the first time, it takes some time for the images to load... Is there anything I can do to improve this (HTML, CSS)? link

    Read the article

  • Does my basic PHP Socket Server need optimization?

    - by Tom
    Like many people, I can do a lot of things with PHP. One problem I do face constantly is that other people can do it much cleaner, much more organized and much more structured. This also results in much faster execution times and much less bugs. I just finished writing a basic PHP Socket Server (the real core), and am asking you if you can tell me what I should do different before I start expanding the core. I'm not asking about improvements such as encrypted data, authentication or multi-threading. I'm more wondering about questions like "should I maybe do it in a more object oriented way (using PHP5)?", or "is the general structure of the way the script works good, or should some things be done different?". Basically, "is this how the core of a socket server should work?" In fact, I think that if I just show you the code here many of you will immediately see room for improvements. Please be so kind to tell me. Thanks! #!/usr/bin/php -q <? // config $timelimit = 180; // amount of seconds the server should run for, 0 = run indefintely $address = $_SERVER['SERVER_ADDR']; // the server's external IP $port = 9000; // the port to listen on $backlog = SOMAXCONN; // the maximum of backlog incoming connections that will be queued for processing // configure custom PHP settings error_reporting(1); // report all errors ini_set('display_errors', 1); // display all errors set_time_limit($timelimit); // timeout after x seconds ob_implicit_flush(); // results in a flush operation after every output call //create master IPv4 based TCP socket if (!($master = socket_create(AF_INET, SOCK_STREAM, SOL_TCP))) die("Could not create master socket, error: ".socket_strerror(socket_last_error())); // set socket options (local addresses can be reused) if (!socket_set_option($master, SOL_SOCKET, SO_REUSEADDR, 1)) die("Could not set socket options, error: ".socket_strerror(socket_last_error())); // bind to socket server if (!socket_bind($master, $address, $port)) die("Could not bind to socket server, error: ".socket_strerror(socket_last_error())); // start listening if (!socket_listen($master, $backlog)) die("Could not start listening to socket, error: ".socket_strerror(socket_last_error())); //display startup information echo "[".date('Y-m-d H:i:s')."] SERVER CREATED (MAXCONN: ".SOMAXCONN.").\n"; //max connections is a kernel variable and can be adjusted with sysctl echo "[".date('Y-m-d H:i:s')."] Listening on ".$address.":".$port.".\n"; $time = time(); //set startup timestamp // init read sockets array $read_sockets = array($master); // continuously handle incoming socket messages, or close if time limit has been reached while ((!$timelimit) or (time() - $time < $timelimit)) { $changed_sockets = $read_sockets; socket_select($changed_sockets, $write = null, $except = null, null); foreach($changed_sockets as $socket) { if ($socket == $master) { if (($client = socket_accept($master)) < 0) { echo "[".date('Y-m-d H:i:s')."] Socket_accept() failed, error: ".socket_strerror(socket_last_error())."\n"; continue; } else { array_push($read_sockets, $client); echo "[".date('Y-m-d H:i:s')."] Client #".count($read_sockets)." connected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n"; } } else { $data = @socket_read($socket, 1024, PHP_NORMAL_READ); //read a maximum of 1024 bytes until a new line has been sent if ($data === false) { //the client disconnected $index = array_search($socket, $read_sockets); unset($read_sockets[$index]); socket_close($socket); echo "[".date('Y-m-d H:i:s')."] Client #".($index-1)." 
disconnected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n"; } else { if ($data = trim($data)) { //remove whitespace and continue only if the message is not empty switch ($data) { case "exit": //close connection when exit command is given $index = array_search($socket, $read_sockets); unset($read_sockets[$index]); socket_close($socket); echo "[".date('Y-m-d H:i:s')."] Client #".($index-1)." disconnected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n"; break; default: //for experimental purposes, write the given data back socket_write($socket, "\n you wrote: ".$data); } } } } } } socket_close($master); //close the socket echo "[".date('Y-m-d H:i:s')."] SERVER CLOSED.\n"; ?>

    Read the article

  • MySQL query paralyzes site

    - by nute
    Once in a while, at random intervals, our website gets completely paralyzed. Looking at SHOW FULL PROCESSLIST;, I've noticed that when this happens, there is a specific query that is "Copying to tmp table" for a loooong time (sometimes 350 seconds), and almost all the other queries are "Locked". The part I don't understand is that 90% of the time, this query runs fine. I see it going through in the process list and it finishes pretty quickly most of the time. This query is being called by an AJAX call on our homepage to display product recommendations based on your browsing history (à la Amazon). Just sometimes, randomly (but too often), it gets stuck at "copying to tmp table". Here is a caught instance of the query that had been running for 109 seconds when I looked: SELECT DISTINCT product_product.id, product_product.name, product_product.retailprice, product_product.imageurl, product_product.thumbnailurl, product_product.msrp FROM product_product, product_xref, product_viewhistory WHERE ( (product_viewhistory.productId = product_xref.product_id_1 AND product_xref.product_id_2 = product_product.id) OR (product_viewhistory.productId = product_xref.product_id_2 AND product_xref.product_id_1 = product_product.id) ) AND product_product.outofstock='N' AND product_viewhistory.cookieId = '188af1efad392c2adf82' AND product_viewhistory.productId IN (24976, 25873, 26067, 26073, 44949, 16209, 70528, 69784, 75171, 75172) ORDER BY product_xref.hits DESC LIMIT 10 Of course the "cookieId" and the list of "productId" values change dynamically depending on the request. I use PHP with PDO.

    Read the article

  • PHP custom function code optimization

    - by Alex
    Now comes the hard part. How do you optimize this function: function coin_matrix($test, $revs) { $coin = array(); for ($i = 0; $i < count($test); $i++) { foreach ($revs as $j => $rev) { foreach ($revs as $k => $rev) { if ($j != $k && $test[$i][$j] != null && $test[$i][$k] != null) { if(!isset($coin[$test[$i][$j]])) { $coin[$test[$i][$j]] = array(); } if(!isset($coin[$test[$i][$j]][$test[$i][$k]])) { $coin[$test[$i][$j]][$test[$i][$k]] = 0; } $coin[$test[$i][$j]][$test[$i][$k]] += 1 / ($some_var - 1); } } } } return $coin; } I'm not that good at this and if the arrays are large, it runs forever. The function is supposed to find all pairs of values from a two-dim array and sum them like this: $coin[$i][$j] += sum_of_pairs_in_array_row / [count(elements_of_row) - 1] Thanks a lot!
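
    The function is PHP, but the computation it describes is easier to see restated in Python: for every row, every ordered pair of non-empty values taken from the columns named in $revs adds 1 / ($some_var - 1) to the co-occurrence matrix. A hedged sketch of that restatement follows; $some_var is undefined in the excerpt, so it is a parameter here, and only missing/None entries are treated as null.

    from collections import defaultdict
    from itertools import permutations

    def coin_matrix(test, revs, some_var):
        # test: list of rows (dicts keyed like revs); revs: the column keys.
        coin = defaultdict(lambda: defaultdict(float))
        weight = 1.0 / (some_var - 1)
        for row in test:
            values = [row[j] for j in revs if row.get(j) is not None]
            for a, b in permutations(values, 2):   # ordered pairs, j != k
                coin[a][b] += weight                # defaultdict replaces the isset checks
        return coin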

    Read the article

  • C++ iterators & loop optimization

    - by Quantum7
    I see a lot of C++ code that looks like this: for( const_iterator it = list.begin(), const_iterator ite = list.end(); it != ite; ++it) As opposed to the more concise version: for( const_iterator it = list.begin(); it != list.end(); ++it) Will there be any difference in speed between these two conventions? Naively, the first will be slightly faster since list.end() is only called once. But since the iterator is const, it seems like the compiler will pull this test out of the loop, generating equivalent assembly for both.

    Read the article

  • MySQL optimization

    - by Jens
    I have this MySQL table called comments, which looks like this: commentID parentID type userID date comment The commentID is set as the primary key, but most of the time I fetch the data using the parentID. How should I set my indexes? Should I just add an index on parentID and let commentID remain the primary key?

    Read the article

  • Constant embedded for loop condition optimization in C++ with gcc

    - by solinent
    Will a compiler optimize this: bool someCondition = someVeryTimeConsumingTask(/* ... */); for (int i=0; i<HUGE_INNER_LOOP; ++i) { if (someCondition) doCondition(i); else bacon(i); } into: bool someCondition = someVeryTimeConsumingTask(/* ... */); if (someCondition) for (int i=0; i<HUGE_INNER_LOOP; ++i) doCondition(i); else for (int i=0; i<HUGE_INNER_LOOP; ++i) bacon(i); someCondition is trivially constant within the for loop. This may seem obvious, and perhaps I should just do it myself, but if you have more than one condition then you are dealing with permutations of for loops, so the code would get quite a bit longer. I am deciding on whether to do it (I am already optimizing) or whether it will be a waste of my time.

    Read the article

  • MySQL: optimization of table (indexing, foreign key) with no primary keys

    - by Haradzieniec
    Each member has 0 or more orders. Each order contains at least 1 item. memberid - varchar, not integer - that's OK (please do not mention that's not very good, I can't change it). So, thera 3 tables: members, orders and order_items. Orders and order_items are below: CREATE TABLE `orders` ( `orderid` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT, `memberid` VARCHAR( 20 ), `Time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP , `info` VARCHAR( 3200 ) NULL , PRIMARY KEY (orderid) , FOREIGN KEY (memberid) REFERENCES members(memberid) ) ENGINE = InnoDB; CREATE TABLE `order_items` ( `orderid` INT(11) UNSIGNED NOT NULL, `item_number_in_cart` tinyint(1) NOT NULL , --- 5 items in cart= 5 rows `price` DECIMAL (6,2) NOT NULL, FOREIGN KEY (orderid) REFERENCES orders(orderid) ) ENGINE = InnoDB; So, order_items table looks like: orderid - item_number_in_cart - price: ... 1000456 - 1 - 24.99 1000456 - 2 - 39.99 1000456 - 3 - 4.99 1000456 - 4 - 17.97 1000457 - 1 - 20.00 1000458 - 1 - 99.99 1000459 - 1 - 2.99 1000459 - 2 - 69.99 1000460 - 1 - 4.99 ... As you see, order_items table has no primary keys (and I think there is no sense to create an auto_increment id for this table, because once we want to extract data, we always extract it as WHERE orderid='1000456' order by item_number_in_card asc - the whole block, id woudn't be helpful in queries). Once data is inserted into order_items, it's not UPDATEd, just SELECTed. The questions are: I think it's a good idea to put index on item_number_in_cart. Could anybody please confirm that? Is there anything else I have to do with order_items to increase the performance, or that looks pretty good? I could miss something because I'm a newbie. Thank you in advance.

    Read the article

  • Help on MySQL table indexing when GROUP BY is used in a query

    - by Silver Light
    Thank you for your attention. There are two INNODB tables: Table authors id INT nickname VARCHAR(50) status ENUM('active', 'blocked') about TEXT Table books author_id INT title VARCHAR(150) I'm running a query against these tables, to get each author and a count of books he has: SELECT a. * , COUNT( b.id ) AS book_count FROM authors AS a, books AS b WHERE a.status != 'blocked' AND b.author_id = a.id GROUP BY a.id ORDER BY a.nickname This query is very slow (takes about 6 seconds to execute). I have an index on books.author_id and it works perfectly, but I do not know how to create an index on authors table, so that this query could use it. Here is how current EXPLAIN looks: id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE a ALL PRIMARY,id_status_nickname NULL NULL NULL 3305 Using where; Using temporary; Using filesort 1 SIMPLE b ref key_author_id key_author_id 5 a.id 2 Using where; Using index I've looked at MySQL manual on optimizing queries with group by, but could not figure out how I can apply it on my query. I'll appreciate any help and hints on this - what must be the index structure, so that MySQL could use it?

    Read the article

  • LINQ optimization

    - by Budda
    Here is a piece of code: void MyFunc(List<MyObj> objects) { MyFunc1(objects); foreach( MyObj obj in objects.Where(obj1=>obj1.Good)) { // Do Action With Good Object } } void MyFunc1(List<MyObj> objects) { int iGoodCount = objects.Where(obj1=>obj1.Good).Count(); BeHappy(iGoodCount); // do other stuff with 'objects' collection } Here we see that the collection is analyzed twice, and each time the value of the 'Good' property is checked for each member: the 1st time when calculating the count of good objects, the 2nd when iterating through all the good objects. It is desirable to have that optimized, and here is a straightforward solution: before the call to MyFunc1, create an additional temporary collection of good objects only (goodObjects, it can be IEnumerable); get the count of these objects and pass it as an additional parameter to MyFunc1; in the 'MyFunc' method, iterate not through 'objects.Where(...)' but through the 'goodObjects' collection. Not a bad approach (as far as I can see), but an additional parameter has to be passed. Question: is there any LINQ out-of-the-box functionality that allows caching during the 1st Where().Count(), remembering the processed collection and using it in the next iteration? Any thoughts are welcome. Thanks.
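
    The "straightforward solution" described above is easy to sketch outside of LINQ as well; in Python terms it amounts to materializing the filtered collection once and reusing it for both the count and the iteration (the names below are translations of the C# ones, not part of the question):

    def my_func(objects):
        # Filter once (the "goodObjects" collection described above) ...
        good_objects = [o for o in objects if o.good]
        # ... pass its count as the extra parameter ...
        my_func1(objects, len(good_objects))
        # ... and iterate the already-filtered list instead of re-filtering.
        for obj in good_objects:
            pass  # do action with good object

    def my_func1(objects, good_count):
        be_happy(good_count)
        # do other stuff with the 'objects' collection

    def be_happy(count):
        print("good objects:", count)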

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrated on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a wide-spread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular data stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbs. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a of number network packets (according to the packet size settings) – and will travel over the network. 
On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, allegorizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have converted to packages (depending on the network size package settings) and will have to reach the application server. There will also be waits, however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is captures the physical flow characteristics of the query between the client(Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics. Number of server roundtrips–a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements, and they are separated by ‘GO’ command, then there will be three different roundtrips. TDS Packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS Packets sent from the client is the number of packets sent from the client; in case the request is large, then it may need more buffers, and eventually might even need more server roundtrips. TDS packets received from server –is the TDS packets sent by the server to the client during the query execution. Bytes sent from client – is the volume of the data set to our SQL Server, measured in bytes; i.e. how big of a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a name of procedure + parameters, and this will minimize the network pressure. Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server. Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client. Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client. 
Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time) Here is an illustration of the Client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual on queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. However a long ‘client processing time’ does not necessarily imply the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully of the resource utilization in the client-server environment. From experience I can say that so far I have seen only cases when the application developers and the Database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client. Here is another example: think about similar setup as above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it: Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow-developers: ‘why?’. Monitor your network counters in Perfmon: Network Interface:Output queue length, Redirector:Network errors/sec, TCPv4: Segments retransmitted/sec and so on. Make sure to establish a good friendship with your network administrator (buy them coffee, for example J ) and get into a conversation about the network settings. Have them explain to you how the network cards are setup – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on. Find some time to read a bit about networking. In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings. 
As a further reading I would still highly recommend the Wait Stats series on this blog, also I would recommend you have the coffee break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the post in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • Slow INFORMATION_SCHEMA query

    - by Thomas
    We have a .NET Windows application that runs the following query on login to get some information about the database: SELECT t.TABLE_NAME, ISNULL(pk_ccu.COLUMN_NAME,'') PK, ISNULL(fk_ccu.COLUMN_NAME,'') FK FROM INFORMATION_SCHEMA.TABLES t LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk_tc ON pk_tc.TABLE_NAME = t.TABLE_NAME AND pk_tc.CONSTRAINT_TYPE = 'PRIMARY KEY' LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE pk_ccu ON pk_ccu.CONSTRAINT_NAME = pk_tc.CONSTRAINT_NAME LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk_tc ON fk_tc.TABLE_NAME = t.TABLE_NAME AND fk_tc.CONSTRAINT_TYPE = 'FOREIGN KEY' LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE fk_ccu ON fk_ccu.CONSTRAINT_NAME = fk_tc.CONSTRAINT_NAME Usually this runs in a couple seconds, but on one server running SQL Server 2000, it is taking over four minutes to run. I ran it with the execution plan enabled, and the results are huge, but this part caught my eye (it won't let me post an image): http://img35.imageshack.us/i/plank.png/ I then updated the statistics on all of the tables that were mentioned in the execution plan: update statistics sysobjects update statistics syscolumns update statistics systypes update statistics master..spt_values update statistics sysreferences But that didn't help. The index tuning wizard doesn't help either, because it doesn't let me select system tables. There is nothing else running on this server, so nothing else could be slowing it down. What else can I do to diagnose or fix the problem on that server?

    Read the article

  • Search Engine Optimization Crucial For Site Page Rank

    Search engine optimization is a process for driving traffic to your blog or site. Search engines are the best way to bring you the traffic that will boost your product sales, and as far as internet marketing is concerned, search engine optimization is the best way to do it. The rewards are numerous, but the two that stand out are: your blog will rank higher, and you will generate traffic directly proportional to higher sales of your product. For a long time now, sitemaps have helped online business people achieve website optimization.

    Read the article

  • Optimizing Oracle query

    - by Omnipresent
    SELECT MAX(verification_id) FROM VERIFICATION_TABLE WHERE head = 687422 AND mbr = 23102 AND RTRIM(LTRIM(lname)) = '.iq bzw' AND TO_CHAR(dob,'MM/DD/YYYY')= '08/10/2004' AND system_code = 'M'; This query is taking 153 seconds to run. There are millions of rows in VERIFICATION_TABLE. I think the query is taking long because of the functions in the WHERE clause. However, I need to do LTRIM/RTRIM on the columns, and the date also has to be matched in MM/DD/YYYY format. How can I optimize this query?

    Read the article
