Search Results

Search found 3568 results on 143 pages for 'markdown optimization'.

Page 66/143 | < Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >

  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query:

        SELECT `posts`.*
        FROM `posts`
        INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id
        WHERE (((`posts_tags`.tag_id = 1)))
        ORDER BY posts.created_at DESC;

    The tables have 38k and 31k rows respectively, and MySQL uses "filesort", so the query gets pretty slow. I have tried different indexes, with no luck.

        CREATE TABLE `posts` (
          `id` int(11) NOT NULL auto_increment,
          `created_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_on_created_at` (`created_at`),
          KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`)
        ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        CREATE TABLE `posts_tags` (
          `id` int(11) NOT NULL auto_increment,
          `post_id` int(11) default NULL,
          `tag_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8

    EXPLAIN output:

        id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
        1 | SIMPLE | posts_tags | index | index_post_id_and_tag_id | index_post_id_and_tag_id | 10 | NULL | 24159 | Using where; Using index; Using temporary; Using filesort
        1 | SIMPLE | posts | eq_ref | PRIMARY | PRIMARY | 4 | .posts_tags.post_id | 1 |
        2 rows in set (0.00 sec)

    What kind of index do I need to define to avoid MySQL using filesort? Is it possible when the ORDER BY field is not in the WHERE clause?
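
    One approach that is often suggested for this shape of query (a sketch, not a verified fix for this schema; the index names below are made up): give posts_tags an index that starts with tag_id, and, if posts_tags.created_at mirrors the post's creation time, sort on that column so a single index can satisfy both the filter and the ORDER BY.

        -- Lets the WHERE clause be resolved from an index instead of a scan;
        -- the filesort then only touches the rows for one tag.
        ALTER TABLE posts_tags ADD KEY idx_tag_post (tag_id, post_id);

        -- If posts_tags.created_at tracks the post's creation time, this variant
        -- can avoid the filesort entirely:
        ALTER TABLE posts_tags ADD KEY idx_tag_created (tag_id, created_at);

        SELECT posts.*
        FROM posts
        INNER JOIN posts_tags ON posts.id = posts_tags.post_id
        WHERE posts_tags.tag_id = 1
        ORDER BY posts_tags.created_at DESC;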

    Read the article

  • glDrawArrays() slow on iPad?

    - by Nick
    Hey guys, I was wondering how to speed up my iPad application using OpenGL ES 2.0. At the moment we have every drawable object draw itself with a call to glDrawArrays(). Blend mode is on; we really need it. Without disabling blending, how would we improve performance for this app? For instance, if we draw one texture across the whole screen, the app only gets 15 FPS, which seems really slow. Are we doing something terribly wrong? Our drawing code (for each drawable) is as follows:

        - (void) draw {
            GLuint textureAvailable = 0;
            if (texture != nil) {
                textureAvailable = 1;
            }
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, texture.name);

            glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices);
            glEnableVertexAttribArray(ATTRIB_VERTEX);
            glVertexAttribPointer(ATTRIB_COLOR, 4, GL_FLOAT, 1, 0, colorsWithMultipliedAlpha);
            glEnableVertexAttribArray(ATTRIB_COLOR);
            glVertexAttribPointer(ATTRIB_TEXTUREMAP, 2, GL_FLOAT, 1, 0, textureMapping);
            glEnableVertexAttribArray(ATTRIB_TEXTUREMAP);

            // Note that we are NOT using position.z here because that is only used to determine drawing order
            int *jnUniforms = JNOpenGLConstants::getInstance().uniforms;
            glUniform4f(jnUniforms[UNIFORM_TRANSLATE], position.x, position.y, 0.0, 0.0);
            glUniform4f(jnUniforms[UNIFORM_SCALE], scale.x, scale.y, 1.0, 1.0);
            glUniform1f(jnUniforms[UNIFORM_ROTATION], rotation);
            glUniform1i(jnUniforms[UNIFORM_TEXTURE_SAMPLE], 0);
            glUniform2f(jnUniforms[UNIFORM_TEXTURE_REPEAT], textureRepeat.x, textureRepeat.y);
            glUniform1i(jnUniforms[UNIFORM_TEXTURE_AVAILABLE], textureAvailable);

            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }
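
    For a full-screen blended quad, 15 FPS usually points at fill rate rather than at glDrawArrays itself, but one cheap structural improvement is to stop streaming client-side vertex arrays on every call. A sketch (assumes the same attribute layout as above; the VBO setup shown here is hypothetical, not the asker's code):

        // One-time setup: upload the quad once into a vertex buffer object.
        GLuint quadVBO;
        glGenBuffers(1, &quadVBO);
        glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

        // Per draw: bind the buffer and point the attribute at offset 0,
        // instead of copying the array from client memory every frame.
        glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
        glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
        glEnableVertexAttribArray(ATTRIB_VERTEX);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);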

    Read the article

  • Eliminate full table scan due to BETWEEN (and GROUP BY)

    - by Dave Jarvis
    Description: According to the explain command, there is a range that is causing a query to perform a full table scan (160k rows). How do I keep the range condition and reduce the scanning? I expect the culprit to be:

        Y.YEAR BETWEEN 1900 AND 2009 AND

    Code: Here is the code that has the range condition (the STATION_DISTRICT is likely superfluous).

        SELECT
          COUNT(1) as MEASUREMENTS,
          AVG(D.AMOUNT) as AMOUNT,
          Y.YEAR as YEAR,
          MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
          CITY C,
          STATION S,
          STATION_DISTRICT SD,
          YEAR_REF Y FORCE INDEX(YEAR_IDX),
          MONTH_REF M,
          DAILY D
        WHERE
          -- For a specific city ... --
          C.ID = 10663 AND
          -- Find all the stations within a specific unit radius ... --
          6371.009 * SQRT(
            POW(RADIANS(C.LATITUDE_DECIMAL - S.LATITUDE_DECIMAL), 2) +
            (COS(RADIANS(C.LATITUDE_DECIMAL + S.LATITUDE_DECIMAL) / 2) *
             POW(RADIANS(C.LONGITUDE_DECIMAL - S.LONGITUDE_DECIMAL), 2))
          ) <= 50 AND
          -- Get the station district identification for the matching station. --
          S.STATION_DISTRICT_ID = SD.ID AND
          -- Gather all known years for that station ... --
          Y.STATION_DISTRICT_ID = SD.ID AND
          -- The data before 1900 is shaky; insufficient after 2009. --
          Y.YEAR BETWEEN 1900 AND 2009 AND
          -- Filtered by all known months ... --
          M.YEAR_REF_ID = Y.ID AND
          -- Whittled down by category ... --
          M.CATEGORY_ID = '003' AND
          -- Into the valid daily climate data. --
          M.ID = D.MONTH_REF_ID AND
          D.DAILY_FLAG_ID <> 'M'
        GROUP BY
          Y.YEAR

    Update: The SQL is performing a full table scan, which results in MySQL performing a "copy to tmp table", as shown here:

        id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
        1 | SIMPLE | C | const | PRIMARY | PRIMARY | 4 | const | 1 |
        1 | SIMPLE | Y | range | YEAR_IDX | YEAR_IDX | 4 | NULL | 160422 | Using where
        1 | SIMPLE | SD | eq_ref | PRIMARY | PRIMARY | 4 | climate.Y.STATION_DISTRICT_ID | 1 | Using index
        1 | SIMPLE | S | eq_ref | PRIMARY | PRIMARY | 4 | climate.SD.ID | 1 | Using where
        1 | SIMPLE | M | ref | PRIMARY,YEAR_REF_IDX,CATEGORY_IDX | YEAR_REF_IDX | 8 | climate.Y.ID | 54 | Using where
        1 | SIMPLE | D | ref | INDEX | INDEX | 8 | climate.M.ID | 11 | Using where

    Related:

        http://dev.mysql.com/doc/refman/5.0/en/how-to-avoid-table-scan.html
        http://dev.mysql.com/doc/refman/5.0/en/where-optimizations.html
        http://stackoverflow.com/questions/557425/optimize-sql-that-uses-between-clause

    Thank you!
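
    One frequently suggested change for this plan (a sketch; the index name is made up and assumes YEAR_REF has no such composite index yet): let the optimizer reach YEAR_REF through the station district first, so the BETWEEN becomes a narrow range per district rather than a scan of all 160k rows, and drop the FORCE INDEX hint so it is free to do so.

        -- Composite index: equality column first, range column second.
        ALTER TABLE YEAR_REF
          ADD INDEX STATION_DISTRICT_YEAR_IDX (STATION_DISTRICT_ID, YEAR);

        -- Then in the query, replace
        --   YEAR_REF Y FORCE INDEX(YEAR_IDX)
        -- with simply
        --   YEAR_REF Y
        -- and keep the predicate Y.YEAR BETWEEN 1900 AND 2009 unchanged.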

    Read the article

  • Optimizing this "Boundarize" method for Numerics in Ruby

    - by mstksg
    I'm extending Numerics with a method I call "Boundarize" for lack of a better name; I'm sure there are real names for this. Its basic purpose is to reset a given point to be within a boundary, i.e. to "wrap" the point around the boundary: if the area is between 0 and 100 and the point goes to -1, then -1.boundarize(0,100) == 99 (going one too far to the negative "wraps" the point around to one below the max), and 102.boundarize(0,100) == 2. It's a very simple function to implement: when the number is below the minimum, simply add (max-min) until it's in the boundary; when it is above the maximum, simply subtract (max-min) until it's in the boundary. One thing I also need to account for is that there are cases where I don't want to include the minimum in the range, and cases where I don't want to include the maximum; this is specified as an argument. However, I fear that my current implementation is horribly, terribly, grossly inefficient, and because it has to re-run every time something moves on the screen, it is one of the bottlenecks of my application. Anyone have any ideas?

        module Boundarizer
          def boundarize min=0, max=1, allow_min=true, allow_max=false
            raise "Improper boundaries #{min}/#{max}" if min >= max
            new_num = self

            if allow_min
              while new_num < min
                new_num += (max - min)
              end
            else
              while new_num <= min
                new_num += (max - min)
              end
            end

            if allow_max
              while new_num > max
                new_num -= (max - min)
              end
            else
              while new_num >= max
                new_num -= (max - min)
              end
            end

            return new_num
          end
        end

        class Numeric
          include Boundarizer
        end
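
    The loops can be replaced with a single modulo, which runs in constant time no matter how far outside the range the value is. A sketch (this matches the default allow_min=true, allow_max=false semantics, i.e. a half-open range; the other boundary combinations would need the same kind of care the original code takes):

        module Boundarizer
          # Constant-time wrap-around. Ruby's % returns a result with the sign of
          # the divisor, so negative inputs already wrap the right way:
          #   (-1 - 0) % (100 - 0)  # => 99
          def boundarize(min = 0, max = 1)
            raise "Improper boundaries #{min}/#{max}" if min >= max
            min + (self - min) % (max - min)
          end
        end

        class Numeric
          include Boundarizer
        end

        -1.boundarize(0, 100)   # => 99
        102.boundarize(0, 100)  # => 2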

    Read the article

  • STL vectors with uninitialized storage?

    - by Jim Hunziker
    I'm writing an inner loop that needs to place structs in contiguous storage. I don't know how many of these structs there will be ahead of time. My problem is that STL's vector initializes its values to 0, so no matter what I do, I incur the cost of the initialization plus the cost of setting the struct's members to their values. Is there any way to prevent the initialization, or is there an STL-like container out there with resizeable contiguous storage and uninitialized elements? (I'm certain that this part of the code needs to be optimized, and I'm certain that the initialization is a significant cost.) Also, see my comments below for a clarification about when the initialization occurs. Some code:

        void GetsCalledALot(int* data1, int* data2, int count) {
            int mvSize = memberVector.size();
            memberVector.resize(mvSize + count); // causes 0-initialization
            for (int i = 0; i < count; ++i) {
                memberVector[mvSize + i].d1 = data1[i];
                memberVector[mvSize + i].d2 = data2[i];
            }
        }
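
    A commonly suggested alternative (a sketch; the element type Elem is made up to stand in for the real struct): reserve() the extra capacity instead of resize(), then push_back each element, so every struct is written exactly once and nothing is value-initialized just to be overwritten.

        #include <vector>

        struct Elem { int d1; int d2; };

        std::vector<Elem> memberVector;

        void GetsCalledALot(int* data1, int* data2, int count) {
            memberVector.reserve(memberVector.size() + count); // allocates, constructs nothing
            for (int i = 0; i < count; ++i) {
                Elem e;          // left uninitialized on purpose
                e.d1 = data1[i];
                e.d2 = data2[i];
                memberVector.push_back(e); // one copy, no zero-fill
            }
        }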

    Read the article

  • Is "for(;;)" faster than "while (TRUE)"? If not, why do people use it?

    - by Chris Cooper
        for (;;) {
            // Something to be done repeatedly
        }

    I have seen this sort of thing used a lot, but I think it is rather strange... Wouldn't it be much clearer to say while (TRUE), or something along those lines? I'm guessing that (as is the reason for many a programmer to resort to cryptic code) this is a tiny margin faster? Why, and is it REALLY worth it? If so, why not just define it this way:

        #DEFINE while(TRUE) for(;;)
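
    For what it's worth, modern compilers emit the same unconditional jump for both spellings, so any difference is style rather than speed. And a macro along those lines would have to be written the other way around, since the macro name comes first (a small sketch, purely illustrative):

        /* A macro cannot be named "while(TRUE)"; define a new name instead. */
        #define forever for (;;)

        int main(void) {
            forever {
                break;  /* exit immediately so this example terminates */
            }
            return 0;
        }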

    Read the article

  • optimize python code

    - by user283405
    I have code that uses the BeautifulSoup library for parsing, but it is very slow. The code is written in such a way that threads cannot be used. Can anyone help me with this? I am using BeautifulSoup for parsing and then saving into a DB; if I comment out the save statement it still takes a long time, so there is no problem with the database.

        def parse(self, text):
            soup = BeautifulSoup(text)
            arr = soup.findAll('tbody')
            for i in range(0, len(arr)-1):
                data = Data()
                soup2 = BeautifulSoup(str(arr[i]))
                arr2 = soup2.findAll('td')
                c = 0
                for j in arr2:
                    if str(j).find("<a href=") > 0:
                        data.sourceURL = self.getAttributeValue(str(j), '<a href="')
                    else:
                        if c == 2:
                            data.Hits = j.renderContents()
                        # and a few others...
                        # ...
                    c = c + 1
                data.save()

    Any suggestions? Note: I already asked this question here, but it was closed due to incomplete information.
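
    Most of the time here usually goes into BeautifulSoup(str(arr[i])), which re-serializes and re-parses every tbody that has already been parsed once. A sketch of the same loop querying the existing parse tree directly (Data and the skipped last tbody are kept from the question; the href handling is an approximation of the original string search):

        def parse(self, text):
            soup = BeautifulSoup(text)
            for tbody in soup.findAll('tbody')[:-1]:   # original code skipped the last tbody
                data = Data()
                for c, td in enumerate(tbody.findAll('td')):
                    link = td.find('a')
                    if link is not None and link.get('href'):
                        data.sourceURL = link['href']   # no str() round-trip, no second parse
                    elif c == 2:
                        data.Hits = td.renderContents()
                    # and a few others...
                data.save()

    Switching the parser itself (for example lxml, or a SoupStrainer so only the tbody elements are parsed at all) is the other lever people usually reach for.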

    Read the article

  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros vs. inline functions and found the following discussion: Pros and Cons of Different macro function / inline methods in C ...but it didn't answer my primary burning question. Namely: what is the overhead in C of using a macro function (with variables, possibly other function calls) vs. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. The code snippet I'm modularizing is:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm = AttractiveTerm * AttractiveTerm;
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    My reason for turning it into an inline function/macro is so I can drop it into a C file and then conditionally compile other similar, but slightly different, functions/macros, e.g.:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm = pow(SigmaSquared/RadialDistanceSquared,9);
        EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    (note the difference in the second line...) This function is central to my code and gets called thousands of times per step in my program, and my program performs millions of steps. Thus I want the LEAST overhead possible, hence why I'm spending time worrying about the overhead of inlining vs. transforming the code into a macro. Based on the prior discussion I already realize the other pros/cons of macros (type independence and the errors that result from it)... but what I want to know most, and don't currently know, is the PERFORMANCE. I know some of you C veterans will have some great insight for me!!
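
    For reference, a sketch of the inline-function version (names taken from the snippet above; at -O2 both gcc and icc normally expand a static inline function exactly like a macro, so the measurable difference is usually zero):

        #include <math.h>

        /* One pairwise energy contribution. 'static inline' keeps the definition
           local to the translation unit, so it can live in a header or in a
           conditionally compiled C file. */
        static inline double energy_contribution(double SigmaSquared,
                                                 double RadialDistanceSquared,
                                                 double Epsilon)
        {
            double AttractiveTerm = pow(SigmaSquared / RadialDistanceSquared, 3);
            double RepulsiveTerm  = AttractiveTerm * AttractiveTerm;
            return 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);
        }

        /* usage:  EnergyContribution += energy_contribution(s2, r2, eps);  */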

    Read the article

  • Animate screen while loading textures

    - by Omega
    My RPG-like game has random battles. When the player enters a random battle, the game has to load the textures used within that battle (animated monsters, animations, etc). There are quite a lot of textures, and they are rather big (the battles are very graphically intensive), so the loading takes significant time, and while it runs the whole screen freezes. The game's map freezes, and the wait is long enough that I personally find it annoying. I can't afford to preload the textures because, after doing some math, I realized: if I preload all the textures at the beginning of the game, the application will definitely crash; if I preload the textures used in a specific map when the player enters that map, the application is very likely to crash as well. I can only afford to load the textures when I need them, and dispose of them as soon as the battle ends. I'd prefer not to use a "loading screen" image because it affects my game's design and concept; I want to avoid that approach. If I could run some kind of animation while loading the textures, it would be great, which leads to my question: is that possible? What kind of animation, you ask? Well, remember how Final Fantasy used to distort the screen while apparently loading the textures? Something like that. But distorting is quite a time-consuming process as well, so maybe just a cool frame-by-frame animation or something. While writing this, I realized that I could make small pauses between textures (there are multiple textures), and during such pauses update the screen to represent the animation's state. However, each texture is 2048x2048, so the animation would refresh at a rather laggy (and annoying) rate. I'd prefer to avoid that as well.
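
    One pattern that keeps the screen animating (a sketch; it assumes iOS with Grand Central Dispatch, and that GL texture creation stays on the thread that owns the GL context — the helper method names here are hypothetical):

        // Decode the big images on a background queue; only the final GL upload
        // happens on the GL/main thread, one texture per run-loop pass, so the
        // transition animation keeps rendering in between uploads.
        - (void)loadBattleTexturesAsync:(NSArray *)fileNames {
            dispatch_queue_t bg = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
            for (NSString *name in fileNames) {
                dispatch_async(bg, ^{
                    NSData *pixels = [self decodePixelDataForFile:name]; // CPU-only, safe off-thread
                    dispatch_async(dispatch_get_main_queue(), ^{
                        [self createTextureNamed:name fromPixels:pixels]; // GL calls on the GL thread
                        [self advanceLoadingAnimation];                   // redraw the effect
                    });
                });
            }
        }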

    Read the article

  • PHP error handling : my code is not optimized

    - by Tristan
    Hello. I must warn you, this code will hurt your eyes, so please don't judge me; I'm trying to improve the way I handle errors. All my tests look like this:

        if ($something < 27) {
            $error_IP = '<div class="error_message">something bad</div> ';
        } else {
            $erreur_IP = '';
        }

    and here's the ugliest part:

        if( !isset($_POST)
            || ($erreur_captcha != '')
            || ($erreur_email != '')
            || ($erreur_hebergeurVide != '')
            || ($erreur_paysVide != '')
            || ($erreur_slotVide != '')
            || ($erreur_rconVide != '')
            || ($erreur_tick != '')
            // + a lot more :d
        )

    What do you suggest to improve my error handling? Thank you
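
    A common refactoring (a sketch; the individual checks are placeholders): collect every message in one $errors array, then test that single array instead of a dozen separately named variables.

        <?php
        $errors = array();

        // Each validation appends a message instead of managing its own variable.
        if ($something < 27) {
            $errors['ip'] = 'something bad';
        }
        if (empty($_POST['email'])) {
            $errors['email'] = 'missing e-mail address';
        }
        // ... more checks ...

        if (empty($errors)) {
            // process the form
        } else {
            foreach ($errors as $message) {
                echo '<div class="error_message">' . htmlspecialchars($message) . '</div>';
            }
        }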

    Read the article

  • How to batch retrieve documents with mongoDB?

    - by edude05
    Hello everyone. I have an application that queries data from MongoDB using the MongoDB C# driver, something like this:

        public void main() {
            foreach (int i in listOfKey) {
                list.Add(getObjFromDb(i));
            }
        }

        public myObject getObjFromDb(int primaryKey) {
            Document query = new Document();
            query["primKey"] = primaryKey;
            Document result = mongo["myDatabase"]["myCollection"].FindOne(query);
            return parseObject(result);
        }

    On my local (development) machine, getting 100 objects this way takes less than a second. However, I recently moved the database to a server on the internet, and the same query takes about 30 seconds for the same number of objects. Furthermore, looking at the MongoDB log, it seems to open about 8-10 connections to the DB to perform this query. So what I'd like to do is query the database with an array of primary keys and get them all back at once, then do the parsing in a loop afterwards, using one connection if possible. How could I optimize my query to do so? Thanks, --Michael
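
    MongoDB's $in operator covers exactly this "fetch a batch of keys in one round trip" case. A sketch in the same Document-based driver style as the question (the cursor/enumeration details can differ between driver versions, so treat this as an outline rather than exact API):

        public List<myObject> getObjectsFromDb(List<int> primaryKeys) {
            // Single query: { primKey: { $in: [k1, k2, ...] } }
            Document inClause = new Document();
            inClause["$in"] = primaryKeys.ToArray();
            Document query = new Document();
            query["primKey"] = inClause;

            List<myObject> results = new List<myObject>();
            foreach (Document doc in mongo["myDatabase"]["myCollection"].Find(query).Documents) {
                results.Add(parseObject(doc));
            }
            return results;
        }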

    Read the article

  • How to simplify my code... 2D array in Objective C...?

    - by Tattat
        self.myArray = [NSArray arrayWithObjects:
                           [NSArray arrayWithObjects:
                               [self d],
                               [self generateMySecretObject], nil],
                           [NSArray arrayWithObjects:
                               [self generateMySecretObject],
                               [self generateMySecretObject], nil],
                           nil];

        for (int k = 0; k < [self.myArray count]; k++) {
            for (int s = 0; s < [[self.myArray objectAtIndex:k] count]; s++) {
                [[[self.myArray objectAtIndex:k] objectAtIndex:s]
                    setAttribute:[self generateSecertAttribute]];
            }
        }

    As you can see, this is a simple 2*2 array, but it takes a lot of code to build the NSArray in the first place, because I found that an NSArray can't be given its size at the beginning. Also, I want to set the attributes one by one. I can't imagine how long this would get if the array grew to 10*10. I hope you can give me some suggestions on shortening the code and making it more readable. Thanks.
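
    A sketch of the same setup built with two loops over NSMutableArray (method names reused from the question; NSMutableArray is a subclass of NSArray, so it can still be assigned to the property), which makes the 2*2 and the 10*10 cases the same amount of code:

        NSUInteger rows = 2, cols = 2;  // change to 10 and 10 without touching anything else

        NSMutableArray *grid = [NSMutableArray arrayWithCapacity:rows];
        for (NSUInteger r = 0; r < rows; r++) {
            NSMutableArray *row = [NSMutableArray arrayWithCapacity:cols];
            for (NSUInteger c = 0; c < cols; c++) {
                id obj = [self generateMySecretObject];
                [obj setAttribute:[self generateSecertAttribute]];  // set the attribute as it is created
                [row addObject:obj];
            }
            [grid addObject:row];
        }
        self.myArray = grid;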

    Read the article

  • 50 million+ Rows of Data - CSV or MySQL

    - by eWizardII
    Hello, I have a CSV file which is about 1 GB and contains about 50 million rows of data. I am wondering whether it is better to keep it as a CSV file or to store it in some form of database. I don't know a great deal about MySQL, so I can't argue for why I should use it (or another database framework) over just keeping the CSV file. I am basically doing a breadth-first search with this dataset, so once I get the initial "seed" set out of the 50 million rows, I use those as the first values in my queue. Thanks,
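
    If the data does end up in MySQL, the import itself is one statement, and the payoff is that each BFS step becomes an indexed lookup instead of a pass over the 1 GB file. A sketch (the table layout, column names, and file path are made up; adjust them to the real CSV):

        -- Hypothetical schema for one CSV row describing an edge.
        CREATE TABLE edges (
            node_id     INT NOT NULL,
            neighbor_id INT NOT NULL,
            KEY idx_node (node_id)
        ) ENGINE=InnoDB;

        -- Bulk-load the whole file in one pass.
        LOAD DATA INFILE '/path/to/data.csv'
        INTO TABLE edges
        FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

        -- One BFS expansion step:
        SELECT neighbor_id FROM edges WHERE node_id = 12345;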

    Read the article

  • Write file need to optimised for heavy traffic part 2

    - by Clayton Leung
    For anyone interested in where I'm coming from, you can refer to part 1, but it is not necessary: write file need to optimised for heavy traffic. Below is a snippet of code I have written to capture some financial tick data from the broker API. The code runs without error, but I need to optimize it, because in peak hours the zf_TickEvent method will be called more than 10000 times a second. I use a MemoryStream to hold the data until it reaches a certain size, then I write it out to a text file. The broker API is only single threaded.

        void zf_TickEvent(object sender, ZenFire.TickEventArgs e)
        {
            outputString = string.Format("{0},{1},{2},{3},{4}\r\n",
                e.TimeStamp.ToString(timeFmt),
                e.Product.ToString(),
                Enum.GetName(typeof(ZenFire.TickType), e.Type),
                e.Price,
                e.Volume);
            fillBuffer(outputString);
        }

        public class memoryStreamClass
        {
            public static MemoryStream ms = new MemoryStream();
        }

        void fillBuffer(string outputString)
        {
            byte[] outputByte = Encoding.ASCII.GetBytes(outputString);
            memoryStreamClass.ms.Write(outputByte, 0, outputByte.Length);
            if (memoryStreamClass.ms.Length > 8192)
            {
                emptyBuffer(memoryStreamClass.ms);
                memoryStreamClass.ms.SetLength(0);
                memoryStreamClass.ms.Position = 0;
            }
        }

        void emptyBuffer(MemoryStream ms)
        {
            FileStream outStream = new FileStream("c:\\test.txt", FileMode.Append);
            ms.WriteTo(outStream);
            outStream.Flush();
            outStream.Close();
        }

    Questions: Any suggestions to make this even faster? I will try varying the buffer length, but in terms of code structure, is this (almost) the fastest? When the MemoryStream fills up and I am emptying it to the file, what happens to new data coming in? Do I need a second buffer to hold that data while I am emptying the first one, or is C# smart enough to figure it out? Thanks for any advice.
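
    Because the broker API is single threaded, no tick can arrive while emptyBuffer is running, so a second buffer isn't needed for correctness. The cheaper structural win is usually to stop reopening c:\test.txt on every flush and let one buffered writer do the batching. A sketch (class and method names are made up; the event handler would just call TickWriter.WriteTick(outputString)):

        using System.IO;
        using System.Text;

        public static class TickWriter
        {
            // Opened once; the 64 KB FileStream buffer plays the role the
            // MemoryStream played, without a new FileStream per flush.
            private static readonly StreamWriter writer =
                new StreamWriter(
                    new FileStream(@"c:\test.txt", FileMode.Append, FileAccess.Write,
                                   FileShare.Read, 1 << 16),
                    Encoding.ASCII);

            public static void WriteTick(string line)
            {
                writer.Write(line);   // buffered in memory until the buffer fills
            }

            public static void Shutdown()
            {
                writer.Flush();       // make sure the tail of the session hits disk
                writer.Close();
            }
        }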

    Read the article

  • Will unused destructors be optimized out?

    - by Brendan Long
    Assuming MyClass uses the default destructor (or no destructor), and this code:

        MyClass* buffer = new MyClass[N];
        // Construct N objects using placement new
        for (size_t i = 0; i < N; i++) {
            buffer[i].~MyClass();   // explicit destructor call
        }
        delete[] buffer;

    Is there any optimizer that would be able to remove this loop? Also, is there any way for my code to detect whether MyClass has an empty/default destructor?
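
    For the detection part, the standard answer is a type trait (shown here with the C++11 trait; Boost's has_trivial_destructor is the pre-C++11 spelling). When the destructor is trivial, the loop can be skipped outright, and most optimizers will also delete such a loop on their own once the empty destructor has been inlined. A sketch:

        #include <cstddef>
        #include <type_traits>

        template <typename T>
        void destroy_range(T* buffer, std::size_t n) {
            // Compile-time check: a trivial destructor means there is nothing to do.
            if (!std::is_trivially_destructible<T>::value) {
                for (std::size_t i = 0; i < n; i++) {
                    buffer[i].~T();
                }
            }
        }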

    Read the article

  • PHP Increasing writing to page speed.

    - by Frederico
    I'm currently writing out XML and have done the following:

        header("content-type: text/xml");
        header("content-length: ".strlen($xml));

    $xml being the XML to be written out. It is about 1.8 MB of text (which I found via Firebug), and it seems the writing takes more time than the rest of the script. Is there a way to increase this write speed? Thank you in advance.
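
    If the time really goes into pushing 1.8 MB to the client rather than building it, compressing the output usually helps more than anything else on the PHP side. A minimal sketch (assumes zlib is available and nothing has been sent yet; the Content-Length header is left to the output handler):

        <?php
        // Gzip the response when the client supports it; 1.8 MB of XML
        // typically compresses to a small fraction of its size on the wire.
        ob_start('ob_gzhandler');

        header('Content-Type: text/xml');
        echo $xml;

        ob_end_flush();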

    Read the article

  • In SQL Server what is most efficient way to compare records to other records for duplicates with in

    - by Glenn
    We have a SQL Server that gets daily imports of data files from clients. This data is interrelated, and we are always scrubbing it and having to look for suspect duplicate records across these files. Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is with a cursor-based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINs, but I have to admit I'm having a lot of trouble with this one. For a concrete example, suppose we have one "orders" table with the following six fields:

        order_id, customer_id, product_id, quantity, sale_date, price

    We want to look through the records to find suspect duplicates on the following example criteria (these get increasingly harder):

    1. Records that have the same product_id, sale_date, and quantity but different customer_ids should be marked as suspect duplicates for review.
    2. Records that have the same customer_id, product_id, and quantity, and have sale_dates within five days of each other, should be marked as suspect duplicates for review.
    3. Records that have the same customer_id and product_id, but quantities within 20 units of each other and sale_dates within five days of each other, should be considered suspect.

    Is it possible to satisfy each of these criteria with a single SQL query that uses JOINs? Is this the most efficient way to do it?
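
    A sketch of the set-based version of criterion 2 in T-SQL (table and column names taken from the question; criteria 1 and 3 follow the same self-join pattern with the equality and range predicates adjusted):

        -- Pair up orders by the same customer for the same product and quantity
        -- whose sale dates fall within five days of each other.
        SELECT a.order_id, b.order_id AS suspect_duplicate_of
        FROM orders AS a
        JOIN orders AS b
          ON  a.customer_id = b.customer_id
          AND a.product_id  = b.product_id
          AND a.quantity    = b.quantity
          AND a.order_id    < b.order_id   -- report each pair only once
          AND ABS(DATEDIFF(day, a.sale_date, b.sale_date)) <= 5;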

    Read the article

< Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >