Search Results

Search found 3422 results on 137 pages for 'optimization'.

Page 62/137 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • "variable tracking" is eating my compile time!

    - by wowus
    I have an auto-generated file which looks something like this:

      static void do_SomeFunc1(void* parameter) {
          // Do stuff.
      }
      // Continues on for another 4000 functions...

      void dispatch(int id, void* parameter) {
          switch(id) {
              case ::SomeClass1::id: return do_SomeFunc1(parameter);
              case ::SomeClass2::id: return do_SomeFunc2(parameter);
              // This continues for the next 4000 cases...
          }
      }

    When I build it like this, the build time is enormous. If I inline all the functions automagically into their respective cases using my script, the build time is cut in half. According to -ftime-report, GCC 4.5.0 spends ~50% of the build time on "variable tracking". What does this mean, and how can I speed up compilation while still keeping the superior cache locality of pulling the functions out of the switch?

    EDIT: Interestingly enough, the build time has exploded only on debug builds, per the following profiling information for the whole project (which is more than just the file in question, but still a good metric; the file in question takes the most time to build):

      Debug: 8 minutes, 50 seconds
      Release: 4 minutes, 25 seconds
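
    A possible workaround (my suggestion, not from the question): "variable tracking" is the debug-info pass GCC enables via -fvar-tracking, typically when -g is combined with optimization. It can be disabled for just the generated file with -fno-var-tracking, at the cost of coarser variable locations in the debugger. A minimal sketch, with a hypothetical file name:

      # Compile only the generated dispatcher without the variable-tracking
      # pass; debug info is still emitted, just less precise.
      g++ -g -O2 -fno-var-tracking -c generated_dispatch.cpp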

    Read the article

  • Can I make this two LINQ queries into one query only?

    - by Holli
    From a List of builtAgents I need all items with OptimPriority == 1 and only 5 items with OptimPriority == 0. I do this with two separate queries, but I wonder if I could do it with only one query.

      IEnumerable<Agent> priorityAgents = from pri in builtAgents
                                          where pri.OptimPriority == 1
                                          select pri;

      IEnumerable<Agent> otherAgents = (from oth in builtAgents
                                        where oth.OptimPriority == 0
                                        select oth).Take(5);
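
    One way to fold these into a single statement (a sketch; it assumes it is acceptable for the priority-1 agents to come first in the combined sequence) is to concatenate the two filters:

      IEnumerable<Agent> agents =
          builtAgents.Where(a => a.OptimPriority == 1)
                     .Concat(builtAgents.Where(a => a.OptimPriority == 0).Take(5));

    Note that this is one query expression but still two passes over builtAgents when enumerated.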

    Read the article

  • What algorithm can I use to determine points within a semi-circle?

    - by khayman218
    I have a list of two-dimensional points and I want to find which of them fall within a semi-circle. Originally, the target shape was a rectangle aligned with the x and y axes, so the current algorithm sorts the pairs by their x coordinate and binary searches to the first one that could fall within the rectangle. It then iterates over each point sequentially and stops when it hits one that is beyond both the x and y upper bounds of the target rectangle. This does not work for a semi-circle, as you cannot determine effective upper/lower x and y bounds for it; the semi-circle can have any orientation. Worst case, I will find the least value of a dimension (say x) in the semi-circle, binary search to the first point beyond it, and then sequentially test the points until I get beyond the upper bound of that dimension - basically testing an entire band's worth of points on the grid. The problem is that this ends up checking many points which are not within the bounds.
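
    For the containment test itself (a sketch, not from the question): describe the semi-circle by its center c, radius r, and a unit vector d perpendicular to the flat edge, pointing into the curved half. A point is inside iff it lies within r of c and on the d side of the diameter:

      // Is p inside the semi-circle (c, r, d)? d must be a unit vector
      // pointing from the flat edge into the rounded half.
      bool inSemicircle(double px, double py, double cx, double cy,
                        double r, double dx, double dy) {
          double vx = px - cx, vy = py - cy;
          if (vx * vx + vy * vy > r * r) return false; // outside the circle
          return vx * dx + vy * dy >= 0.0;             // on the kept side
      }

    With that as the inner test, a coarse prefilter such as the axis-aligned bounding box [cx - r, cx + r] x [cy - r, cy + r] restores the sorted/binary-search pruning regardless of the semi-circle's orientation.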

    Read the article

  • What to do with a big image that's slowing website loading down significantly

    - by Dave
    Hi, I'm working on a website that was designed by someone else. The designer used a big image (900x700, 100KB) which contains a big logo right across the top, then the background for two columns. This image loads on every page, as it forms the basis for the website. What should I do with it to improve loading time? I'm considering splitting it up into two or more images, especially the logo on the top. Does splitting up images like that decrease loading time in any significant way? Thanks.

    EDIT: Also, all the images are .jpg; would changing this to .gif or .png help anything?
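
    One common approach (a sketch with hypothetical file names, not from the question): serve the logo as its own small image and let CSS tile thin slices for the column backgrounds, so the browser downloads a few KB instead of the full 900x700 composite:

      <!-- logo cropped to its real height; columns tile a 1px-tall slice -->
      <img src="logo.png" alt="Site logo" width="900" height="150">
      <style>
        .col-left  { background: url(col-left-slice.jpg) repeat-y; }
        .col-right { background: url(col-right-slice.jpg) repeat-y; }
      </style>

    Splitting by itself does not reduce total bytes much, but tiling and choosing the right format per part (PNG/GIF for flat-color logos, JPEG for photographic areas) usually does.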

    Read the article

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot, because I feel making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered subqueries, because they felt easier and quicker for me to write and maintain. Commonly, I'll do subqueries that may contain one or more subqueries from related tables. Consider this example:

      SELECT
          (SELECT username FROM users WHERE records.user_id = user_id) AS username,
          (SELECT last_name||', '||first_name FROM users WHERE records.user_id = user_id) AS name,
          in_timestamp,
          out_timestamp
      FROM records
      ORDER BY in_timestamp

    Rarely, I'll do subqueries after the WHERE clause. Consider this example:

      SELECT
          user_id,
          (SELECT name FROM organizations
            WHERE (SELECT organization FROM locations
                    WHERE records.location = location_id) = organization_id) AS organization_name
      FROM records
      ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I rewrote the queries using a JOIN? As more of a blanket question, what are the advantages/disadvantages of using subqueries or a JOIN? Is one way more correct or accepted than the other?
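
    For comparison, a sketch of the first example rewritten with a JOIN (a LEFT JOIN preserves the behavior of the correlated subqueries when a records row has no matching user):

      SELECT u.username,
             u.last_name || ', ' || u.first_name AS name,
             r.in_timestamp,
             r.out_timestamp
      FROM records r
      LEFT JOIN users u ON u.user_id = r.user_id
      ORDER BY r.in_timestamp

    The JOIN form reads users once per row instead of once per correlated subquery, and most query optimizers handle it better.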

    Read the article

  • Building a static (but complicated) lookup table using templates

    - by MarkD
    I am currently in the process of optimizing a numerical analysis code. Within the code, there is a 200x150-element lookup table (currently a static std::vector<std::vector<double> >) that is constructed at the beginning of every run. The construction of the lookup table is actually quite complex: the values are computed using an iterative secant method on a complicated set of equations. Currently, for a simulation, the construction of the lookup table is 20% of the run time (run times are on the order of 25 seconds; lookup table construction takes 5 seconds). While 5 seconds might not seem to be a lot, when running our MC simulations, where we are running 50k+ simulations, it suddenly becomes a big chunk of time. Along with some other ideas, one thing that has been floated: can we construct this lookup table using templates at compile time? The table itself never changes. Hard-coding a large array isn't a maintainable solution (the equations that go into generating the table are constantly being tweaked), but it seems that if the table can be generated at compile time, it would give us the best of both worlds (easily maintainable, no overhead during runtime). So, I propose the following (much simplified) scenario. Let's say you wanted to generate a static array (use whatever container suits you best: 2D C array, vector of vectors, etc.) at compile time. You have a function defined:

      double f(int row, int col);

    where the return value is the entry in the table, row is the lookup table row, and col is the lookup table column. Is it possible to generate this static array at compile time using templates, and how?
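
    A minimal sketch of one route, assuming a C++17 compiler is available and that f can be written as constexpr (an iterative secant method with a bounded iteration count can be, provided it avoids non-constexpr library calls). The f below is a placeholder, not the real equations:

      #include <array>

      constexpr double f(int row, int col) {
          return row * 0.5 + col * 0.25;   // placeholder math
      }

      constexpr std::array<std::array<double, 150>, 200> makeTable() {
          std::array<std::array<double, 150>, 200> t{};
          for (int r = 0; r < 200; ++r)
              for (int c = 0; c < 150; ++c)
                  t[r][c] = f(r, c);
          return t;
      }

      constexpr auto TABLE = makeTable();  // evaluated entirely at compile time

    With a pre-C++11 compiler (as in the question's era), the same effect requires recursive template instantiation, which is considerably more painful and runs into instantiation-depth limits around tables this size.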

    Read the article

  • Need help on nested loop of queries in php and mysql?

    - by mysqllearner
    Hi, I am trying to do this:

      <?php
      $good_customer = 0;

      $q = mysql_query("SELECT user FROM users WHERE activated = '1'"); // this gives me about 40k users

      while ($r = mysql_fetch_assoc($q)) {
          $money_spent = 0;
          $user = $r['user'];

          // Do queries on another 20 tables
          for ($i = 1; $i <= 20; $i++) {
              $tbl_name = 'data' . $i;
              $q2 = mysql_query("SELECT money_spent FROM $tbl_name WHERE user = '{$user}'");
              while ($r2 = mysql_fetch_assoc($q2)) {
                  $money_spent += $r2['money_spent'];
              }
              if ($money_spent > 1000000) {
                  $good_customer += 1;
              }
          }
      }

    This is just an example. I am testing on localhost; for a single user it returns very fast, but when I try 1000 it takes forever, never mind 40k users. Is there any way to optimise/improve this code?

    EDIT: By the way, each of the other 20 tables has ~20-40k records.
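
    One way to cut the query count by a factor of 20 (a sketch; 'someuser' is a placeholder, and it assumes the 20 data tables share the same schema) is to let MySQL do the summing across all the data tables in a single statement:

      SELECT SUM(money_spent) AS total
      FROM (
          SELECT money_spent FROM data1 WHERE user = 'someuser'
          UNION ALL
          SELECT money_spent FROM data2 WHERE user = 'someuser'
          -- ... data3 through data20 ...
      ) AS all_spend;

    That turns 20 round-trips per user into one, and an index on user in each data table would help further.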

    Read the article

  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query:

      SELECT `posts`.*
      FROM `posts`
      INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id
      WHERE (((`posts_tags`.tag_id = 1)))
      ORDER BY posts.created_at DESC;

    The tables have 38k and 31k rows respectively, and MySQL uses "filesort", so it gets pretty slow. I tried to use different indexes, with no luck.

      CREATE TABLE `posts` (
        `id` int(11) NOT NULL auto_increment,
        `created_at` datetime default NULL,
        PRIMARY KEY (`id`),
        KEY `index_posts_on_created_at` (`created_at`),
        KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`)
      ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

      CREATE TABLE `posts_tags` (
        `id` int(11) NOT NULL auto_increment,
        `post_id` int(11) default NULL,
        `tag_id` int(11) default NULL,
        `created_at` datetime default NULL,
        `updated_at` datetime default NULL,
        PRIMARY KEY (`id`),
        KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`)
      ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8

      +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
      | id | select_type | table      | type   | possible_keys            | key                      | key_len | ref                 | rows  | Extra                                                     |
      +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
      |  1 | SIMPLE      | posts_tags | index  | index_post_id_and_tag_id | index_post_id_and_tag_id | 10      | NULL                | 24159 | Using where; Using index; Using temporary; Using filesort |
      |  1 | SIMPLE      | posts      | eq_ref | PRIMARY                  | PRIMARY                  | 4       | .posts_tags.post_id |     1 |                                                           |
      +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
      2 rows in set (0.00 sec)

    What kind of index do I need to define to avoid MySQL using filesort? Is that possible when the ORDER BY field is not in the WHERE clause?
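
    One commonly suggested fix (a sketch, not a guaranteed cure): give MySQL an index it can drive the tag filter from, and, if the filesort persists, denormalize created_at into posts_tags so the ORDER BY can also be satisfied from an index:

      -- Let the tag filter use an index seek:
      ALTER TABLE posts_tags ADD KEY idx_tag_post (tag_id, post_id);

      -- Optional second step: copy posts.created_at into posts_tags
      -- (kept in sync by the application) and index the pair:
      ALTER TABLE posts_tags ADD KEY idx_tag_created (tag_id, created_at);

    With the second index, ORDER BY posts_tags.created_at DESC can walk the index in order and avoid both the temporary table and the filesort.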

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

    - Use GL face culling (the OpenGL function, not the scene logic)
    - Send only one matrix to the GPU (the projection/model/view combination), decreasing the MVP calculations from per vertex to once per model (as it should be)
    - Use interleaved vertices
    - Minimize the number of GL calls, and batch where appropriate

    and possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and received almost no performance change. While I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling around 20-50% during rendering, therefore I assume I am GPU bound for increasing performance. Note: I am looking into gDEBugger at the moment. Cross-posted at Game Development.
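
    One way to confirm the GPU-bound assumption (a sketch using the standard GL 3.3 / ARB_timer_query API; drawScene() stands in for the existing render path):

      GLuint q;
      glGenQueries(1, &q);
      glBeginQuery(GL_TIME_ELAPSED, q);
      drawScene();                        // hypothetical render call
      glEndQuery(GL_TIME_ELAPSED);
      GLuint64 ns = 0;
      glGetQueryObjectui64v(q, GL_QUERY_RESULT, &ns);  // blocks until ready
      printf("GPU frame time: %.2f ms\n", ns / 1e6);

    If the GPU time is close to the full frame time, the CPU-side tips in the list above cannot help much; the wins would have to come from reducing vertex or fragment work (LOD, culling whole objects, smaller vertex formats).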

    Read the article

  • Database indexes - what should they be

    - by WebweaverD
    Most of my database tables have a clear unique index through which lookups are done 90% of the time, but I am a bit unsure about this one. I have a table which keeps track of user rating totals for items in my database. I now want to add another table to track individual ratings, with an IP address column to make sure no one can rate something twice. Since I can see this becoming a big, high-use table, it is important to optimize it correctly. (MySQL table.) This table will have the following fields:

      rating_id (always - unique), item_id (always - not unique), user_id (optional - not unique),
      ip_address (always - not unique), rating_value (always - not unique), has_review (bool)

    Now I envision 90% of the queries going something like this:

    - When a user rates something: select where item_id = x and ip_address = y; (if rows = 0) insert rating
    - On user account pages: select where ip_address = x or username = y

    Now, none of the fields searched on are unique. Can I still use them as indexes (for example item_id and ip_address)? Can I have two indexes, and will this still improve performance over a non-indexed table?
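
    Non-unique columns can absolutely be indexed, and a table can carry several indexes. A sketch of one possible layout (column types are my assumptions; the composite UNIQUE also enforces the one-rating-per-IP rule at the database level):

      CREATE TABLE item_ratings (
          rating_id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
          item_id      INT UNSIGNED NOT NULL,
          user_id      INT UNSIGNED NULL,
          ip_address   VARCHAR(45)  NOT NULL,   -- long enough for IPv6 text form
          rating_value TINYINT      NOT NULL,
          has_review   BOOL         NOT NULL DEFAULT 0,
          PRIMARY KEY (rating_id),
          UNIQUE KEY uniq_item_ip (item_id, ip_address),  -- serves the "has this IP rated?" lookup
          KEY idx_ip (ip_address),                        -- serves the account-page lookup
          KEY idx_user (user_id)
      );

    The trade-off is that every extra index slows inserts slightly, which matters on a high-write table, so keep only the indexes the real queries use.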

    Read the article

  • Feedback on Optimizing C# NET Code Block

    - by Brett Powell
    I just spent quite a few hours reading up on TCP servers and the protocol I was trying to implement, and finally got everything working great. I noticed the code looks like absolute bollocks (is that the correct usage? I'm not a Brit) and would like some feedback on optimizing it, mostly for reuse and readability. The packet format is always int, int, int, string, string.

      try
      {
          BinaryReader reader = new BinaryReader(clientStream);
          int packetsize = reader.ReadInt32();
          int requestid = reader.ReadInt32();
          int serverdata = reader.ReadInt32();
          Console.WriteLine("Packet Size: {0} RequestID: {1} ServerData: {2}",
                            packetsize, requestid, serverdata);

          List<byte> str = new List<byte>();
          byte nextByte = reader.ReadByte();
          while (nextByte != 0)
          {
              str.Add(nextByte);
              nextByte = reader.ReadByte();
          }

          // Password sent to be authenticated
          string string1 = Encoding.UTF8.GetString(str.ToArray());

          str.Clear();
          nextByte = reader.ReadByte();
          while (nextByte != 0)
          {
              str.Add(nextByte);
              nextByte = reader.ReadByte();
          }

          // NULL string
          string string2 = Encoding.UTF8.GetString(str.ToArray());
          Console.WriteLine("String1: {0} String2: {1}", string1, string2);

          // Reply to authentication request
          MemoryStream stream = new MemoryStream();
          BinaryWriter writer = new BinaryWriter(stream);
          writer.Write((int)(1));          // Packet size
          writer.Write((int)(requestid));  // Mirror RequestID if authenticated, -1 if failed
          byte[] buffer = stream.ToArray();
          clientStream.Write(buffer, 0, buffer.Length);
          clientStream.Flush();
      }

    I am going to be dealing with other packet types as well that are formatted the same (int/int/int/str/str) but with different values. I could probably create a packet class, but this is a bit outside my scope of knowledge for how to apply it to this scenario. If it makes any difference, this is the protocol I am implementing: http://developer.valvesoftware.com/wiki/Source_RCON_Protocol
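
    An easy first refactoring (a sketch; the helper name is mine, not from the protocol docs): the two read-until-NUL loops are identical, so pull them into a method, which also makes the packet-class idea more approachable later:

      static string ReadNullTerminatedString(BinaryReader reader)
      {
          var bytes = new List<byte>();
          byte b;
          while ((b = reader.ReadByte()) != 0)
              bytes.Add(b);
          return Encoding.UTF8.GetString(bytes.ToArray());
      }

    The body of the try block then shrinks to two calls: string1 = ReadNullTerminatedString(reader); string2 = ReadNullTerminatedString(reader);.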

    Read the article

  • Efficient implementation of natural logarithm (ln) and exponentiation

    - by Donotalo
    Basically, I'm looking for implementations of the log() and exp() functions provided in the C library <math.h>. I'm working with 8-bit microcontrollers (OKI 411 and 431) and need to calculate Mean Kinetic Temperature. The requirement is that we should be able to calculate MKT as fast as possible and with as little code memory as possible. The compiler comes with log() and exp() functions in <math.h>, but calling either function and linking with the library causes the code size to increase by 5 kilobytes, which will not fit in one of the micros we work with (OKI 411), because our code has already consumed ~12K of the available ~15K code memory. The implementation I'm looking for should not use any other C library functions (like pow(), sqrt(), etc.). This is because all the library functions are packed in one library, and even if one function is called, the linker will bring the whole 5K library into code memory.
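
    For flavor, a sketch of the usual shape of such an implementation (illustrative only: the range reduction is exact, but a real version needs more polynomial terms, or a minimax polynomial, to hit a given accuracy):

      /* ln(x) = ln(m) + e*ln(2), with x reduced to m in [1,2).
         Requires x > 0; behavior for x <= 0 is undefined here. */
      float ln_approx(float x) {
          int e = 0;
          while (x >= 2.0f) { x *= 0.5f; ++e; }   /* scale down */
          while (x <  1.0f) { x *= 2.0f; --e; }   /* scale up   */
          float t = x - 1.0f;                     /* ln(1+t), 0 <= t < 1 */
          float p = t - 0.5f*t*t + (1.0f/3.0f)*t*t*t - 0.25f*t*t*t*t;
          return p + e * 0.69314718f;             /* + e*ln(2) */
      }

    exp() is typically built the same way in reverse: split the argument into an integer multiple of ln(2) plus a small remainder, evaluate a short polynomial on the remainder, and scale by adjusting the exponent.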

    Read the article

  • Optimizing this "Boundarize" method for Numerics in Ruby

    - by mstksg
    I'm extending Numeric with a method I call "boundarize", for lack of a better name; I'm sure there are actual names for this. Its basic purpose is to reset a given point to be within a boundary; that is, "wrapping" a point around the boundary. If the area is between 0 and 100 and the point goes to -1, then -1.boundarize(0, 100) = 99 (going one too far to the negative "wraps" the point around to one from the max), and 102.boundarize(0, 100) = 2. It's a very simple function to implement: when the number is below the minimum, add (max-min) until it's in the boundary; when it's above the maximum, subtract (max-min) until it's in the boundary. I also need to account for cases where I don't want to include the minimum in the range, and cases where I don't want to include the maximum; this is specified as an argument. However, I fear that my current implementation is horribly, terribly, grossly inefficient, and because it has to re-run every time something moves on the screen, it is one of the bottlenecks of my application. Anyone have any ideas?

      module Boundarizer
        def boundarize min=0, max=1, allow_min=true, allow_max=false
          raise "Improper boundaries #{min}/#{max}" if min >= max
          new_num = self

          if allow_min
            while new_num < min
              new_num += (max-min)
            end
          else
            while new_num <= min
              new_num += (max-min)
            end
          end

          if allow_max
            while new_num > max
              new_num -= (max-min)
            end
          else
            while new_num >= max
              new_num -= (max-min)
            end
          end

          return new_num
        end
      end

      class Numeric
        include Boundarizer
      end
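
    A sketch of a constant-time version using modular arithmetic (Ruby's % already returns a non-negative result for a positive modulus, which does the wrapping in one step). One behavioral caveat: unlike the loop version, a value exactly equal to max maps to min even when allow_max is true:

      module Boundarizer
        def boundarize(min = 0, max = 1, allow_min = true, allow_max = false)
          raise "Improper boundaries #{min}/#{max}" if min >= max
          span = max - min
          v = (self - min) % span + min       # now in [min, max)
          v += span if v == min && !allow_min # min excluded: wrap up to max
          v
        end
      end

    The while loops cost O(distance/span); the modulo is O(1) no matter how far the point has strayed.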

    Read the article

  • Optimize slow ranking query

    - by Juan Pablo Califano
    I need to optimize a query for a ranking that is taking forever (the query itself works, but I know it's awful, and when I tried it with a good number of records it timed out). I'll briefly explain the model. I have 3 tables: player, team and player_team. Players are stored in the player table and teams in team, and each player can belong to a team. In my app, each player can switch teams at any time, and a log has to be maintained; however, a player is considered to belong to only one team at a given time. The current team of a player is the last one he's joined. The structure of player and team is not relevant, I think; I have an id column (PK) in each. In player_team I have:

      id (PK)
      player_id (FK -> player.id)
      team_id (FK -> team.id)

    Now, each team is assigned a point for each player that has joined, and I want to get a ranking of the first N teams with the biggest number of players. My first idea was to get the current players from player_team first (that is, at most one record per player: the player's current team). I failed to find a simple way to do it (I tried GROUP BY player_team.player_id HAVING player_team.id = MAX(player_team.id), but that didn't cut it). I tried a number of queries that didn't work, but managed to get this working:

      SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
      FROM player_team pt
      JOIN player p ON (p.id = pt.player_id)
      JOIN team t ON (t.id = pt.team_id)
      WHERE pt.id IN (
          SELECT max(J.id)
          FROM player_team J
          GROUP BY J.player_id
      )
      GROUP BY pt.team_id
      ORDER BY total DESC
      LIMIT 50

    As I said, it works but looks very bad and performs worse, so I'm sure there must be a better way to go. Anyone have any ideas for optimizing this? I'm using MySQL, by the way. Thanks in advance. Adding the EXPLAIN:

      id  select_type         table  type    possible_keys                                 key               key_len  ref           rows    Extra
      1   PRIMARY             t      ALL     PRIMARY                                       NULL              NULL     NULL          5000    Using temporary; Using filesort
      1   PRIMARY             pt     ref     FKplayer_pt77082,FKplayer_pt265938,new_index  FKplayer_pt77082  4        t.id          30      Using where
      1   PRIMARY             p      eq_ref  PRIMARY                                       PRIMARY           4        pt.player_id  1
      2   DEPENDENT SUBQUERY  J      index   NULL                                          new_index         8        NULL          150000  Using index
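
    MySQL of that era executes IN (SELECT ...) as a dependent subquery, re-evaluated per row, which matches the 150000-row DEPENDENT SUBQUERY line in the EXPLAIN. A common rewrite (a sketch) is to materialize the per-player maximum once in a derived table and join against it:

      SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
      FROM (
          SELECT MAX(id) AS id
          FROM player_team
          GROUP BY player_id
      ) cur
      JOIN player_team pt ON pt.id = cur.id
      JOIN player p ON p.id = pt.player_id
      JOIN team t ON t.id = pt.team_id
      GROUP BY pt.team_id
      ORDER BY total DESC
      LIMIT 50;

    An index on player_team (player_id, id) lets the derived table be computed from the index alone. If that is still too slow, the structural fix is a current_team_id column on player (or an is_current flag on player_team), maintained on every team switch.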

    Read the article

  • improving drawing pythagoras tree

    - by sasquatch90
    Hello. I have written a program for drawing the Pythagoras tree fractal. Can anybody see any way of improving it? It is now 120 LOC; I was hoping to shorten it to ~100...

      import javax.swing.*;
      import java.util.Scanner;
      import java.awt.Color;
      import java.awt.Dimension;
      import java.awt.Graphics;
      import javax.swing.JComponent;

      public class Main extends JFrame {

          public Main(int n) {
              setSize(900, 900);
              setTitle("Pythagoras tree");
              Draw d = new Draw(n);
              add(d);
              setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
              setVisible(true);
          }

          private int pow(int n) {
              int pow = 2;
              for (int i = 1; i < n; i++) {
                  if (n == 0) {
                      pow = 1;
                  }
                  pow = pow * 2;
              }
              return pow;
          }

          public static void main(String[] args) {
              Scanner sc = new Scanner(System.in);
              System.out.print("Give amount of steps: ");
              int steps = sc.nextInt();
              new Main(steps);
          }
      }

      class Draw extends JComponent {

          private int height;
          private int width;
          private int steps;

          public Draw(int n) {
              height = 800;
              width = 800;
              steps = n;
              Dimension d = new Dimension(width, height);
              setMinimumSize(d);
              setPreferredSize(new Dimension(d));
              setMaximumSize(d);
          }

          @Override
          public void paintComponent(Graphics g) {
              super.paintComponent(g);
              g.setColor(Color.white);
              g.fillRect(0, 0, width, height);
              g.setColor(Color.black);
              int w = width;
              int h = height;
              int x1, x2, x3, x4, x5, y1, y2, y3, y4, y5;
              int base = w / 7;
              x1 = (w / 2) - (base / 2);
              x2 = x1;
              x3 = (w / 2) + (base / 2);
              x4 = x3;
              x5 = w / 2;
              y1 = (h - (h / 15)) - base;
              y2 = h - (h / 15);
              y3 = y2;
              y4 = y1;
              y5 = (h - (h / 15)) - (base + (base / 2));
              // paint
              g.drawLine(x1, y1, x2, y2);
              g.drawLine(x2, y2, x3, y3);
              g.drawLine(x3, y3, x4, y4);
              g.drawLine(x1, y1, x4, y4);
              int n1 = steps;
              n1--;
              if (n1 > 0) {
                  g.drawLine(x1, y1, x5, y5);
                  g.drawLine(x4, y4, x5, y5);
                  paintMore(n1, g, x1, x5, x4, y1, y5, y4);
                  paintMore(n1, g, x4, x5, x1, y4, y5, y1);
              }
          }

          public void paintMore(int n1, Graphics g, double x1_1, double x2_1, double x3_1,
                                double y1_1, double y2_1, double y3_1) {
              double x1, x2, x3, x4, x5, y1, y2, y3, y4, y5;
              // counting
              x1 = x1_1 + (x2_1 - x3_1);
              x2 = x1_1;
              x3 = x2_1;
              x4 = x2_1 + (x2_1 - x3_1);
              x5 = ((x2_1 + (x2_1 - x3_1)) + ((x2_1 - x3_1) / 2)) + ((x1_1 - x2_1) / 2);
              y1 = y1_1 + (y2_1 - y3_1);
              y2 = y1_1;
              y3 = y2_1;
              y4 = y2_1 + (y2_1 - y3_1);
              y5 = ((y1_1 + (y2_1 - y3_1)) + ((y2_1 - y1_1) / 2)) + ((y2_1 - y3_1) / 2);
              // paint
              g.setColor(Color.green);
              g.drawLine((int) x1, (int) y1, (int) x2, (int) y2);
              g.drawLine((int) x3, (int) y3, (int) x4, (int) y4);
              g.drawLine((int) x1, (int) y1, (int) x4, (int) y4);
              n1--;
              if (n1 > 0) {
                  g.drawLine((int) x1, (int) y1, (int) x5, (int) y5);
                  g.drawLine((int) x4, (int) y4, (int) x5, (int) y5);
                  paintMore(n1, g, x1, x5, x4, y1, y5, y4);
                  paintMore(n1, g, x4, x5, x1, y4, y5, y1);
              }
          }
      }

    Read the article

  • Which field is explain telling me to index?

    - by shady
    I don't understand what this EXPLAIN statement is saying. Which field needs an index? The first line is confusing to me because ref is NULL. Here's the query I'm using:

      SELECT pp.property_id AS 'good_prop_id',
             pr.site_number AS 'pr.site_number',
             CONCAT(pr.site_street_name, ' ', pr.site_street_type) AS 'pr.partial_addr',
             pr.county
      FROM realval_newdb.preforeclosures AS pr
      INNER JOIN realval_newdb.properties_preforeclosures AS pp USE INDEX (mee_id)
              ON (pr.mee_id = pp.mee_id)
      INNER JOIN listings_copy AS lc
              ON (pr.site_number = lc.site_number)
             AND (lc.site_street_name = CONCAT(pr.site_street_name, ' ', pr.site_street_type))
      WHERE lc.site_county = pr.county
      LIMIT 1;

    Can anyone help me optimize this query?
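
    A general observation about queries of this shape (a sketch; the index names are mine, and without the actual EXPLAIN output this is guesswork): in EXPLAIN, the first table is usually read with a scan to drive the join, so its ref being NULL is normal by itself. What matters is that every subsequent table can be looked up through an index on the columns used in its join conditions. Candidate indexes here:

      ALTER TABLE listings_copy
          ADD KEY idx_lc_join (site_number, site_street_name, site_county);
      ALTER TABLE realval_newdb.preforeclosures
          ADD KEY idx_pr_mee (mee_id);

    The CONCAT() is computed from pr's row and then looked up in listings_copy, so an index on lc.site_street_name can still be used; re-run EXPLAIN after adding the indexes to confirm.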

    Read the article

  • Optimize master-detail insert statements

    - by Dave Jarvis
    Quest: After a day of running (against nearly 1 GB of data), a set of statements has slowed to 40 inserts per second. I am looking to increase that by an order of magnitude or two.

    SQL Code: The code to insert the information comes in two parts: a master record and detail records.

    The master record:

      INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
      VALUES ('101', '0066', '010', 1984, 07);

    The detail records:

      INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
      VALUES ((SELECT ID FROM MONTH_REF M
                WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                  AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
              0, ' ', 1);

      INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
      VALUES ((SELECT ID FROM MONTH_REF M
                WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                  AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
              0.5, ' ', 2);

      INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
      VALUES ((SELECT ID FROM MONTH_REF M
                WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                  AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
              0, 'T', 3);

    Proposed Solution:

      INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
      VALUES ('101', '0066', '010', 1984, 07);

      SET @month_ref_id := (SELECT LAST_INSERT_ID());

      INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, ' ', 1);
      INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0.5, ' ', 2);
      INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, 'T', 3);

    Constraints: The MONTH_REF table has an AUTO_INCREMENT primary key and is indexed on it. The DAILY table has no index and no primary key. A primary key can be added to the DAILY table if it would help.

    Question: Is there a more efficient way to execute the (billion or so) insert statements than the proposed solution? Thank you!
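
    Building on the proposed solution, a sketch of two further standard batching techniques (MySQL syntax; the row values are the question's examples): collapse the detail rows into one multi-row INSERT and wrap each master/detail group in a transaction, so the log is flushed once per group instead of once per statement:

      START TRANSACTION;

      INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
      VALUES ('101', '0066', '010', 1984, 07);

      -- DAILY has no AUTO_INCREMENT column, so LAST_INSERT_ID() keeps
      -- returning the MONTH_REF id for every row below.
      INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES
          (LAST_INSERT_ID(), 0,   ' ', 1),
          (LAST_INSERT_ID(), 0.5, ' ', 2),
          (LAST_INSERT_ID(), 0,   'T', 3);

      COMMIT;

    For a billion rows, LOAD DATA INFILE into DAILY (with the month_ref ids pre-resolved in the input file) is usually faster still.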

    Read the article

  • Windows XP GUI programming language

    - by bobir
    I need to write a Windows XP/Vista application. The main requirements: just one .exe file, without an extra runtime like AIR or .NET (possibly a couple of DLLs), and a very small file size. The application is for network-centric usage, similar to ICQ or Gtalk clients. Thanks in advance.

    Read the article

  • Will unused destructors be optimized out?

    - by Brendan Long
    Assuming MyClass uses the default destructor (or no destructor), and this code:

      MyClass* buffer = static_cast<MyClass*>(operator new[](N * sizeof(MyClass)));
      // Construct N objects using placement new ...

      for (size_t i = 0; i < N; i++) {
          buffer[i].~MyClass();   // explicit destructor call
      }
      operator delete[](buffer);

    Is there any optimizer that would be able to remove this loop? Also, is there any way for my code to detect whether MyClass has an empty/default destructor?
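
    On the detection question, a sketch using C++11 <type_traits> (pre-C++11 compilers offered similar vendor-specific intrinsics, such as __has_trivial_destructor on GCC and MSVC):

      #include <cstddef>
      #include <type_traits>

      // Skip the destructor loop entirely for trivially destructible types.
      template <typename T>
      void destroy_all(T* buf, std::size_t n) {
          if (!std::is_trivially_destructible<T>::value) {
              for (std::size_t i = 0; i < n; ++i)
                  buf[i].~T();
          }
      }

    With a trivial destructor the loop body is a no-op, so any mainstream optimizer removes the loop as dead code even without the trait check; the trait just makes the intent explicit.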

    Read the article

  • How to increase query speed without using full-text search?

    - by andre matos
    This is my simple query; by searching for "selectnothing" I'm sure I'll have no hits.

      SELECT nome_t FROM myTable WHERE nome_t ILIKE '%selectnothing%';

    This is the EXPLAIN ANALYZE VERBOSE:

      Seq Scan on myTable  (cost=0.00..15259.04 rows=37 width=29) (actual time=2153.061..2153.061 rows=0 loops=1)
        Output: nome_t
        Filter: (nome_t ~~* '%selectnothing%'::text)
      Total runtime: 2153.116 ms

    myTable has around 350k rows and the table definition is something like:

      CREATE TABLE myTable (
          nome_t text NOT NULL
      );

    I have an index on nome_t as stated below:

      CREATE INDEX idx_m_nome_t ON myTable USING btree (nome_t);

    Although this is clearly a good candidate for full-text search, I would like to rule that option out for now. This query is meant to be run from a web application, and it currently takes around 2 seconds, which is obviously too much. Is there anything I can do, like using other index methods, to improve the speed of this query?
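
    One option that stays short of full-text search (a sketch; it requires the pg_trgm contrib module, and LIKE/ILIKE support for trigram indexes arrived in PostgreSQL 9.1): a trigram GIN index can serve unanchored '%...%' patterns, which a btree index never can:

      -- PostgreSQL 9.1+; on older versions, install contrib/pg_trgm by script
      CREATE EXTENSION pg_trgm;
      CREATE INDEX idx_m_nome_trgm ON myTable USING gin (nome_t gin_trgm_ops);

    The existing btree index only helps prefix patterns (e.g. 'selectnothing%', and then only with C locale or text_pattern_ops); for a pattern with a leading wildcard the planner has no choice but the seq scan shown in the EXPLAIN.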

    Read the article
