Search Results

Search found 7672 results on 307 pages for 'compiler optimization'.

Page 94 of 307

  • Python profiling in Windows: how do you ignore built-in functions?

    - by Tim McJilton
    I have not been able to find this anywhere online. I am using a profiler to work out how to better optimize my code, but when sorting by which functions use the most cumulative time, things like str(), print, and other widely used built-ins eat up much of the profile. What is the best way to profile a Python program so that only user-defined functions are shown, and I can see which areas of the code can be optimized? I hope that makes sense; any light you can shed on this subject would be much appreciated.
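
    A minimal sketch of one way to do this with the standard cProfile/pstats modules (the function and the "profile_demo" path fragment are placeholders, not from the question): built-ins are reported with '~' as their "file", so restricting the printout to your own source path hides them.

        import cProfile
        import pstats

        def work():
            # stand-in for the user-defined code being profiled
            return "".join(str(i) for i in range(100000))

        cProfile.run("work()", "profile.out")

        stats = pstats.Stats("profile.out")
        stats.sort_stats("cumulative")
        # Restrict the report to entries whose file path matches your own source tree;
        # "profile_demo" is a placeholder for whatever your script or package is called.
        stats.print_stats("profile_demo")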

    Read the article

  • Ruby Challenge - efficiently change the last character of every word in a sentence to a capital

    - by emson
    Hi all, I was recently challenged to write some Ruby code to change the last character of every word in a sentence into a capital, such that the string "script to convert the last letter of every word to a capital" becomes "scripT tO converT thE lasT letteR oF everY worD tO A capitaL". This was my optimal solution, but I'm sure you wizards have much better solutions and I would be really interested to hear them.

        "script to convert the last letter of every word to a capital".split.map{|w|w<<w.slice!(-1).chr.upcase}.join' '

    For those interested, here is an explanation of what is going on: split splits the sentence up into an array (the default delimiter is a space, and in Ruby you don't need brackets here). The array from split is passed to map, which opens a block and processes each word (w) in the array. The block slice!s off the last character of the word, converts it to a character with chr (a character, not an ASCII code) and capitalises it with upcase. This character is then appended with << to the word, which is missing the sliced-off last letter. Finally the array of words is joined back together with ' ' to reform the sentence. Enjoy.
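
    An alternative sketch using gsub with a block (assumes Ruby 1.9+ string indexing, where w[-1] returns a one-character string):

        sentence = "script to convert the last letter of every word to a capital"
        # replace each whitespace-delimited word with the same word, last character upcased
        puts sentence.gsub(/\S+/) { |w| w[0..-2] + w[-1].upcase }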

    Read the article

  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query:

        SELECT `posts`.* FROM `posts`
        INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id
        WHERE (((`posts_tags`.tag_id = 1)))
        ORDER BY posts.created_at DESC;

    The tables have 38k and 31k rows, and MySQL uses "filesort", so the query gets pretty slow. I tried different indexes, with no luck.

        CREATE TABLE `posts` (
          `id` int(11) NOT NULL auto_increment,
          `created_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_on_created_at` (`created_at`),
          KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`)
        ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        CREATE TABLE `posts_tags` (
          `id` int(11) NOT NULL auto_increment,
          `post_id` int(11) default NULL,
          `tag_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8

    EXPLAIN output (2 rows in set, 0.00 sec):

        id: 1  select_type: SIMPLE  table: posts_tags  type: index  possible_keys: index_post_id_and_tag_id  key: index_post_id_and_tag_id  key_len: 10  ref: NULL  rows: 24159  Extra: Using where; Using index; Using temporary; Using filesort
        id: 1  select_type: SIMPLE  table: posts  type: eq_ref  possible_keys: PRIMARY  key: PRIMARY  key_len: 4  ref: .posts_tags.post_id  rows: 1  Extra:

    What kind of index do I need to define to avoid the filesort? Is that even possible when the ORDER BY field is not in the WHERE clause?
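
    A hedged sketch of two things commonly tried for this shape of query (untested against the real data): an index on posts_tags that matches the WHERE clause, and a forced join order that walks posts in created_at index order so no sort is needed, at the cost of checking each post against posts_tags.

        ALTER TABLE posts_tags ADD KEY index_posts_tags_on_tag_id_and_post_id (tag_id, post_id);

        SELECT posts.*
        FROM posts
        STRAIGHT_JOIN posts_tags ON posts.id = posts_tags.post_id
        WHERE posts_tags.tag_id = 1
        ORDER BY posts.created_at DESC;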

    Read the article

  • Getting the errors for code in unopened .aspx pages

    - by Glennular
    Is there a way to check for errors in unopened *.aspx pages? For example, if you change the name of a function, Visual Studio will catch the error on the page and list it in the "Error List", but only if the page is open and being validated. I guess the question is: is there a validation option, as opposed to the compile option, to check for errors? (Yes, I know code should go into the pre-compiled code-behind pages.) How do I find out about the following without running the page through the web server or opening the page to be validated in VS?

        <script runat="server">
            Public Sub MyFunciton()
                Undefined_FUNCTION()
            End Sub
        </script>

    Read the article

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot, because I feel making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered subqueries, because they felt easier and quicker for me to write and maintain. Commonly, I'll write subqueries that may contain one or more subqueries against related tables. Consider this example:

        SELECT (SELECT username FROM users WHERE records.user_id = user_id) AS username,
               (SELECT last_name||', '||first_name FROM users WHERE records.user_id = user_id) AS name,
               in_timestamp,
               out_timestamp
        FROM records
        ORDER BY in_timestamp

    More rarely, I'll use subqueries after the WHERE clause. Consider this example:

        SELECT user_id,
               (SELECT name FROM organizations
                 WHERE (SELECT organization FROM locations WHERE records.location = location_id) = organization_id) AS organization_name
        FROM records
        ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I decided to rewrite the queries using a JOIN? As more of a blanket question, what are the advantages and disadvantages of using subqueries versus a JOIN? Is one way more correct or accepted than the other?
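
    A hedged sketch of the first query rewritten with a JOIN (column names taken from the question): whether it is faster depends on the optimizer, but correlated subqueries in the SELECT list are typically re-executed per row, while a join is planned once.

        SELECT u.username,
               u.last_name || ', ' || u.first_name AS name,
               r.in_timestamp,
               r.out_timestamp
        FROM records r
        JOIN users u ON u.user_id = r.user_id
        ORDER BY r.in_timestamp;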

    Read the article

  • implementing type inference

    - by deepblue
    Well, I see some interesting discussions here about static vs. dynamic typing. I generally prefer static typing, due to compile-time checking, better-documented code, etc. However, I do agree that it clutters up the code if done the way Java does it, for example. So I'm about to start building a language of my own, and type inference is one of the things that I want to implement, in a functional-style language. I do understand that it is a big subject, and I'm not trying to create something that has not been done before, just basic inference... Any pointers on what to read up on that will help me with this? Preferably something more pragmatic/practical as opposed to more theoretical category-theory/type-theory texts. If there's an implementation-discussion text out there, with data structures and algorithms, that would just be lovely. Much appreciated.
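
    A minimal sketch, purely for illustration, of the core data structures behind Hindley-Milner-style inference (in OCaml): type terms, substitution, the occurs check, and a naive unifier. Real implementations add type environments, generalization for let-polymorphism, and far better substitution handling.

        (* type terms *)
        type ty =
          | TVar of int                (* a type variable, e.g. 'a *)
          | TArrow of ty * ty          (* a function type: t1 -> t2 *)
          | TCon of string             (* a base type: "int", "bool", ... *)

        (* apply a substitution (an assoc list from variable id to type) to a type *)
        let rec apply s = function
          | TVar n -> (try apply s (List.assoc n s) with Not_found -> TVar n)
          | TArrow (a, b) -> TArrow (apply s a, apply s b)
          | TCon c -> TCon c

        (* does variable n occur inside type t? (prevents building infinite types) *)
        let rec occurs n = function
          | TVar m -> n = m
          | TArrow (a, b) -> occurs n a || occurs n b
          | TCon _ -> false

        (* unify two types under substitution s, returning the extended substitution *)
        let rec unify s a b =
          match apply s a, apply s b with
          | TVar n, TVar m when n = m -> s
          | TVar n, t | t, TVar n ->
              if occurs n t then failwith "occurs check" else (n, t) :: s
          | TArrow (a1, a2), TArrow (b1, b2) -> unify (unify s a1 b1) a2 b2
          | TCon c1, TCon c2 when c1 = c2 -> s
          | _ -> failwith "type mismatch"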

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

      - Use GL face culling (the OpenGL function, not the scene logic)
      - Send only one matrix to the GPU (the combined projection-model-view), decreasing the MVP calculations from per vertex to once per model (as it should be)
      - Use interleaved vertices
      - Make as few GL calls as possible, batching where appropriate

    And possibly a few/many others. I am (for curiosity's sake) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and received almost no performance change. While I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling around 20-50% during rendering, so I assume I am GPU-bound. Note: I am looking into gDEBugger at the moment. Cross-posted at Game Development.

    Read the article

  • Can I make this two LINQ queries into one query only?

    - by Holli
    From a List of builtAgents I need all items with OptimPriority == 1 and only 5 items with OptimPriority == 0. I do this with two separate queries, but I wonder if I could do it with only one query.

        IEnumerable<Agent> priorityAgents = from pri in builtAgents
                                            where pri.OptimPriority == 1
                                            select pri;

        IEnumerable<Agent> otherAgents = (from oth in builtAgents
                                          where oth.OptimPriority == 0
                                          select oth).Take(5);
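
    A sketch of one way to fold both filters into a single statement (type and property names taken from the question): Concat keeps every priority-1 agent and appends at most five priority-0 agents.

        IEnumerable<Agent> agents =
            builtAgents.Where(a => a.OptimPriority == 1)
                       .Concat(builtAgents.Where(a => a.OptimPriority == 0).Take(5));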

    Read the article

  • Is "for(;;)" faster than "while (TRUE)"? If not, why do people use it?

    - by Chris Cooper
        for (;;) {
            // Something to be done repeatedly
        }

    I have seen this sort of thing used a lot, but I think it is rather strange... Wouldn't it be much clearer to say while (TRUE), or something along those lines? I'm guessing that (as is the reason for many a programmer resorting to cryptic code) this is a tiny margin faster? Why, and is it REALLY worth it? If so, why not just define it this way:

        #DEFINE while(TRUE) for(;;)
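
    For what it's worth, a small C sketch: on mainstream compilers both spellings produce the same unconditional loop, so the difference is purely stylistic. The macro shown here is the spelling people usually reach for when they want the idiom to read naturally (the question's #DEFINE would need to be lowercase #define in any case).

        #include <stdio.h>

        #define EVER ;;     /* lets the loop read as: for (EVER) */

        int main(void) {
            for (EVER) {
                puts("looping with for (;;)");
                break;                  /* break out so this sketch terminates */
            }
            while (1) {
                puts("looping with while (1)");
                break;
            }
            return 0;
        }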

    Read the article

  • Building static (but complicated) lookup table using templates.

    - by MarkD
    I am currently in the process of optimizing a numerical analysis code. Within the code there is a 200x150-element lookup table (currently a static std::vector<std::vector<double>>) that is constructed at the beginning of every run. The construction of the lookup table is actually quite complex: the values are computed using an iterative secant method on a complicated set of equations. Currently, for a simulation, the construction of the lookup table is 20% of the run time (run times are on the order of 25 seconds; lookup table construction takes 5 seconds). While 5 seconds might not seem like a lot, when running our MC simulations, where we run 50k+ simulations, it suddenly becomes a big chunk of time. Along with some other ideas, one thing that has been floated: can we construct this lookup table using templates at compile time? The table itself never changes. Hard-coding a large array isn't a maintainable solution (the equations that go into generating the table are constantly being tweaked), but it seems that if the table could be generated at compile time, it would give us the best of both worlds (easily maintainable, no overhead at runtime). So, I propose the following (much simplified) scenario. Let's say you wanted to generate a static array (use whatever container suits you best: 2D C array, vector of vectors, etc.) at compile time. You have a function defined, double f(int row, int col), where the return value is the entry in the table, row is the lookup table row, and col is the lookup table column. Is it possible to generate this static array at compile time using templates, and how?
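
    A hedged sketch of what this can look like if a C++17 compiler is acceptable: constexpr does the job much more directly than classic template metaprogramming, provided the real secant-method computation can itself be written as constexpr. (f() below is a stand-in, not the real equation set.)

        #include <array>
        #include <cstddef>

        constexpr std::size_t ROWS = 200, COLS = 150;

        // stand-in for the real iterative secant-method computation
        constexpr double f(std::size_t row, std::size_t col) {
            return 0.5 * static_cast<double>(row) + static_cast<double>(col);
        }

        constexpr auto make_table() {
            std::array<std::array<double, COLS>, ROWS> t{};
            for (std::size_t r = 0; r < ROWS; ++r)
                for (std::size_t c = 0; c < COLS; ++c)
                    t[r][c] = f(r, c);
            return t;
        }

        // computed entirely at compile time; no construction cost at run time
        constexpr auto lookup_table = make_table();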

    Read the article

  • FMOD Compiling trouble

    - by CptAJ
    I'm trying to get started with FMOD, but I'm having some issues compiling the example code in this tutorial: http://www.gamedev.net/reference/articles/article2098.asp I'm using MinGW. I placed the libfmodex.a file in MinGW's include folder (I also tried linking directly to the filename), but it doesn't work. Here's the output:

        C:\>g++ -o test1.exe test1.cpp -lfmodex
        test1.cpp:4:1: error: 'FSOUND_SAMPLE' does not name a type
        test1.cpp: In function 'int main()':
        test1.cpp:9:29: error: 'FSOUND_Init' was not declared in this scope
        test1.cpp:12:4: error: 'handle' was not declared in this scope
        test1.cpp:12:53: error: 'FSOUND_Sample_Load' was not declared in this scope
        test1.cpp:13:30: error: 'FSOUND_PlaySound' was not declared in this scope
        test1.cpp:21:30: error: 'FSOUND_Sample_Free' was not declared in this scope
        test1.cpp:22:17: error: 'FSOUND_Close' was not declared in this scope

    This is the particular example I'm using:

        #include <conio.h>
        #include "inc/fmod.h"

        FSOUND_SAMPLE* handle;

        int main ()
        {
            // init FMOD sound system
            FSOUND_Init (44100, 32, 0);

            // load and play sample
            handle=FSOUND_Sample_Load (0,"sample.mp3",0, 0, 0);
            FSOUND_PlaySound (0,handle);

            // wait until the user hits a key to end the app
            while (!_kbhit()) { }

            // clean up
            FSOUND_Sample_Free (handle);
            FSOUND_Close();
        }

    I have the header files in the "inc" path where my code is. Any ideas as to what I'm doing wrong?

    Read the article

  • Need help on nested loop of queries in php and mysql?

    - by mysqllearner
    Hi, I am trying to do this:

        <?php
        $good_customer = 0;

        $q = mysql_query("SELECT user FROM users WHERE activated = '1'"); // this gives me about 40k users

        while($r = mysql_fetch_assoc($q)){
            $money_spent = 0;
            $user = $r['user'];

            // Do queries on another 20 tables
            for($i = 1; $i<=20 ; $i++){
                $tbl_name = 'data' . $i;
                $q2 = mysql_query("SELECT money_spent FROM $tbl_name WHERE user = '{$user}'");
                while($r2 = mysql_fetch_assoc($q2)){
                    $money_spend += $r2['money_spent'];
                }
                if($money_spend > 1000000){
                    $good_customer += 1;
                }
            }
        }

    This is just an example. I am testing on localhost; for a single user it returns very fast, but when I try 1000 users it takes forever, not to mention 40k users. Is there any way to optimise/improve this code? EDIT: By the way, each of the other 20 tables has ~20-40k records.
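
    A hedged sketch of pushing the aggregation into MySQL instead of issuing 40k x 20 point queries from PHP (column names assumed from the snippet; with 20 data tables the per-table SELECTs can be combined with UNION ALL or repeated per table):

        SELECT u.user, SUM(d.money_spent) AS total_spent
        FROM users u
        JOIN data1 d ON d.user = u.user
        WHERE u.activated = '1'
        GROUP BY u.user
        HAVING SUM(d.money_spent) > 1000000;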

    Read the article

  • Optimize slow ranking query

    - by Juan Pablo Califano
    I need to optimize a query for a ranking that is taking forever (the query itself works, but I know it's awful, and when I tried it with a good number of records it gave a timeout). I'll briefly explain the model. I have 3 tables: player, team and player_team. Players can belong to a team; obvious as it sounds, players are stored in the player table and teams in team. In my app, each player can switch teams at any time, and a log has to be maintained. However, a player is considered to belong to only one team at a given time: the current team of a player is the last one he's joined. The structure of player and team is not relevant, I think; I have an id column (PK) in each. In player_team I have:

        id (PK)
        player_id (FK -> player.id)
        team_id (FK -> team.id)

    Now, each team is assigned a point for each player that has joined, and I want a ranking of the first N teams with the biggest number of players. My first idea was to get the current players from player_team (that is, at most one record per player; this record must be the player's current team). I failed to find a simple way to do it (I tried GROUP BY player_team.player_id HAVING player_team.id = MAX(player_team.id), but that didn't cut it). I tried a number of queries that didn't work, but managed to get this working:

        SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
        FROM player_team pt
        JOIN player p ON (p.id = pt.player_id)
        JOIN team t ON (t.id = pt.team_id)
        WHERE pt.id IN (
            SELECT max(J.id)
            FROM player_team J
            GROUP BY J.player_id
        )
        GROUP BY pt.team_id
        ORDER BY total DESC
        LIMIT 50

    As I said, it works but looks very bad and performs worse, so I'm sure there must be a better way to go. Does anyone have any ideas for optimizing this? I'm using MySQL, by the way. Thanks in advance. Adding the EXPLAIN:

        id: 1  select_type: PRIMARY  table: t  type: ALL  possible_keys: PRIMARY  key: NULL  key_len: NULL  ref: NULL  rows: 5000  Extra: Using temporary; Using filesort
        id: 1  select_type: PRIMARY  table: pt  type: ref  possible_keys: FKplayer_pt77082,FKplayer_pt265938,new_index  key: FKplayer_pt77082  key_len: 4  ref: t.id  rows: 30  Extra: Using where
        id: 1  select_type: PRIMARY  table: p  type: eq_ref  possible_keys: PRIMARY  key: PRIMARY  key_len: 4  ref: pt.player_id  rows: 1  Extra:
        id: 2  select_type: DEPENDENT SUBQUERY  table: J  type: index  possible_keys: NULL  key: new_index  key_len: 8  ref: NULL  rows: 150000  Extra: Using index
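
    A hedged sketch of the usual rewrite for "latest row per player": materialize the MAX(id) set once as a derived table and join to it, instead of the IN subquery that the EXPLAIN shows running as a DEPENDENT SUBQUERY (same tables and columns as the question; worth comparing with EXPLAIN before trusting it):

        SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
        FROM (SELECT player_id, MAX(id) AS id
                FROM player_team
               GROUP BY player_id) latest
        JOIN player_team pt ON pt.id = latest.id
        JOIN player p ON p.id = pt.player_id
        JOIN team t ON t.id = pt.team_id
        GROUP BY pt.team_id
        ORDER BY total DESC
        LIMIT 50;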

    Read the article

  • Improving code for drawing a Pythagoras tree

    - by sasquatch90
    Hello. I have written a program for drawing the Pythagoras tree fractal. Can anybody see any way of improving it? It is currently 120 LOC; I was hoping to shorten it to ~100.

        import javax.swing.*;
        import java.util.Scanner;
        import java.awt.Color;
        import java.awt.Dimension;
        import java.awt.Graphics;
        import javax.swing.JComponent;

        public class Main extends JFrame {;

            public Main(int n) {
                setSize(900, 900);
                setTitle("Pythagoras tree");
                Draw d = new Draw(n);
                add(d);
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                setVisible(true);
            }

            private int pow(int n){
                int pow = 2;
                for(int i = 1; i < n; i++){
                    if(n==0){ pow = 1; }
                    pow = pow*2;
                }
                return pow;
            }

            public static void main(String[] args) {
                Scanner sc = new Scanner(System.in);
                System.out.print("Give amount of steps: ");
                int steps = sc.nextInt();
                new Main(steps);
            }
        }

        class Draw extends JComponent {

            private int height;
            private int width;
            private int steps;

            public Draw(int n) {
                height = 800;
                width = 800;
                steps = n;
                Dimension d = new Dimension(width, height);
                setMinimumSize(d);
                setPreferredSize(new Dimension(d));
                setMaximumSize(d);
            }

            @Override
            public void paintComponent(Graphics g) {
                super.paintComponent(g);
                g.setColor(Color.white);
                g.fillRect(0, 0, width, height);
                g.setColor(Color.black);
                int w = width;
                int h = height;
                int x1, x2, x3, x4, x5, y1, y2, y3, y4, y5;
                int base = w/7;
                x1 = (w/2)-(base/2);
                x2 = x1;
                x3 = (w/2)+(base/2);
                x4 = x3;
                x5 = w/2;
                y1 = (h-(h/15))-base;
                y2 = h-(h/15);
                y3 = y2;
                y4 = y1;
                y5 = (h-(h/15))-(base+(base/2));
                //paint
                g.drawLine(x1, y1, x2, y2);
                g.drawLine(x2, y2, x3, y3);
                g.drawLine(x3, y3, x4, y4);
                g.drawLine(x1, y1, x4, y4);
                int n1 = steps;
                n1--;
                if(n1>0){
                    g.drawLine(x1, y1, x5, y5);
                    g.drawLine(x4, y4, x5, y5);
                    paintMore(n1, g, x1, x5, x4, y1, y5, y4);
                    paintMore(n1, g, x4, x5, x1, y4, y5, y1);
                }
            }

            public void paintMore(int n1, Graphics g, double x1_1, double x2_1, double x3_1, double y1_1, double y2_1, double y3_1){
                double x1, x2, x3, x4, x5, y1, y2, y3, y4, y5;
                //counting
                x1 = x1_1 + (x2_1-x3_1);
                x2 = x1_1;
                x3 = x2_1;
                x4 = x2_1 + (x2_1-x3_1);
                x5 = ((x2_1 + (x2_1-x3_1)) + ((x2_1-x3_1)/2)) + ((x1_1-x2_1)/2);
                y1 = y1_1 + (y2_1-y3_1);
                y2 = y1_1;
                y3 = y2_1;
                y4 = y2_1 + (y2_1-y3_1);
                y5 = ((y1_1 + (y2_1-y3_1)) + ((y2_1-y1_1)/2)) + ((y2_1-y3_1)/2);
                //paint
                g.setColor(Color.green);
                g.drawLine((int)x1, (int)y1, (int)x2, (int)y2);
                g.drawLine((int)x3, (int)y3, (int)x4, (int)y4);
                g.drawLine((int)x1, (int)y1, (int)x4, (int)y4);
                n1--;
                if(n1>0){
                    g.drawLine((int)x1, (int)y1, (int)x5, (int)y5);
                    g.drawLine((int)x4, (int)y4, (int)x5, (int)y5);
                    paintMore(n1, g, x1, x5, x4, y1, y5, y4);
                    paintMore(n1, g, x4, x5, x1, y4, y5, y1);
                }
            }
        }

    Read the article

  • <function> referenced from; symbol(s) not found.

    - by jfm429
    I have a piece of C code that is used from a C++ function. At the top of my C++ file I have the line:

        #include "prediction.h"

    In prediction.h I have this:

        #ifndef prediction
        #define prediction

        #include "structs.h"

        typedef struct {
            double estimation;
            double variance;
        } response;

        response runPrediction(int obs, location* positions, double* observations, int targets, location* targetPositions);

        #endif

    I also have prediction.c, which has:

        #include "prediction.h"

        response runPrediction(int obs, location* positions, double* observations, int targets, location* targetPositions)
        {
            // code here
        }

    Now, when I call that function in my C++ file (which, as I said, includes prediction.h) and then compile (through Xcode), I get this error:

        "runPrediction(int, location*, double*, int, location*)", referenced from:
        mainFrame::respondTo(char*, int) in mainFrame.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    prediction.c is marked for compilation for the current target. I don't have any problems with other .cpp files not being compiled. Any thoughts here?
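
    A hedged reading of the error: this pattern (a C++ caller, a definition in a .c file, and a C++-mangled name in the "referenced from" line) usually means a linkage mismatch; the C++ side expects a mangled symbol while prediction.c produced a plain C one. The common fix is to give the declarations C linkage when the header is read by a C++ compiler:

        /* prediction.h */
        #ifdef __cplusplus
        extern "C" {
        #endif

        response runPrediction(int obs, location* positions, double* observations,
                               int targets, location* targetPositions);

        #ifdef __cplusplus
        }
        #endif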

    Read the article

  • Optimize master-detail insert statements

    - by Dave Jarvis
    Quest: After a day of running (against nearly 1 GB of data), a set of statements slows to 40 inserts per second. I am looking to increase that by an order of magnitude or two.

    SQL Code: The code to insert the information comes in two parts: a master record and detail records. The master record:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

    The detail records:

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0, ' ', 1);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0.5, ' ', 2);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0, 'T', 3);

    Proposed Solution:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);
        SET @month_ref_id := (SELECT LAST_INSERT_ID());
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, ' ', 1);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0.5, ' ', 2);
        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, 'T', 3);

    Constraints: The MONTH_REF table has an AUTO_INCREMENT primary key and is indexed on it. The DAILY table has no index and no primary key; a primary key can be added to DAILY if it would help.

    Question: Is there a more efficient way to execute the (billion or so) insert statements than the proposed solution? Thank you!
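
    A hedged sketch of one common speed-up on top of the proposed solution: batch the detail rows into a single multi-row INSERT (and, beyond what is shown here, wrap many masters in one transaction or switch to LOAD DATA INFILE). The values are the sample ones from the question:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

        SET @month_ref_id := LAST_INSERT_ID();

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES
          (@month_ref_id, 0,   ' ', 1),
          (@month_ref_id, 0.5, ' ', 2),
          (@month_ref_id, 0,   'T', 3);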

    Read the article

  • Is it possible in Perl to require a subroutine call is made?

    - by MitchelWB
    I don't know enough about Perl to even know what I'm asking for exactly, but I'm writing a series of subroutines to be available to many individual scripts that all process different incoming flat files. The process is far from perfect, but it's what I've got to deal with, and I'm trying to build myself a small library of subs that make it easier for me to manage it all. Each script handles a different incoming flat file with its own formatting, sorting, grouping and outputting requirements. One common aspect is that we have small text files that house counters used to name the output files, so that we have no duplicate file names. Because the processing of the data is different for each file, I need to open the counter file to get my counter value. Since this is a common operation, I'd like to put it in a sub that retrieves the counter; then the script writes its specific code to process the data, and I'd like a second sub that lets me update the counter file once I've processed the data. Is there a way to make the second sub call a requirement if the first one is called? Ideally it could even be an error that would prevent the script from running at all, much like a syntax error. EDIT: Here is a little [ugly and simplified] pseudo-code to give a better feel for what the current process is:

        require "importLibrary.plx";

        # open data source file
        open DataIn, $filename;

        # call getCounterInfo from importLibrary.plx to get the counter value from the counter file
        $counter = &getCounterInfo($counterFileName);

        while (<DataIn>) {
            # Process data based on unique formatting and requirements
            # output to task files based on requirements and name files using the $counter
            # increment $counter
        }

        # update counter file with new value of $counter
        &updateCounterInfo($counter);

    Read the article

  • "variable tracking" is eating my compile time!

    - by wowus
    I have an auto-generated file which looks something like this:

        static void do_SomeFunc1(void* parameter) {
            // Do stuff.
        }
        // Continues on for another 4000 functions...

        void dispatch(int id, void* parameter) {
            switch(id) {
                case ::SomeClass1::id: return do_SomeFunc1(parameter);
                case ::SomeClass2::id: return do_SomeFunc2(parameter);
                // This continues for the next 4000 cases...
            }
        }

    When I build it like this, the build time is enormous. If I inline all the functions automagically into their respective cases using my script, the build time is cut in half. GCC 4.5.0 says ~50% of the build time is taken up by "variable tracking" when I use -ftime-report. What does this mean, and how can I speed up compilation while still keeping the superior cache locality of pulling the functions out of the switch? EDIT: Interestingly enough, the build time has exploded only on debug builds, as per the following profiling information for the whole project (which isn't just the file in question, but is still a good metric; the file in question takes the most time to build): Debug: 8 minutes 50 seconds. Release: 4 minutes 25 seconds.
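
    A hedged suggestion: at -g, GCC 4.5 enables variable tracking (and variable-tracking assignments), which is known to blow up on very large machine-generated functions. If that is what is happening here, disabling it just for this one file (or passing -fno-var-tracking to disable tracking entirely) is a cheap experiment; the file name below is a placeholder.

        g++ -g -O0 -fno-var-tracking-assignments -c generated_dispatch.cpp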

    Read the article

  • What is the .NET attribute to not compile a method in release mode

    - by Russ
    I know that if I have a block of code I don't want compiled in release mode, I can wrap that code block in:

        #if DEBUG
        while(true)
        {
            Console.WriteLine("StackOverflow rules");
        }
        #endif

    This will keep the code block from compiling in any mode other than DEBUG. I know there is an attribute that can be placed on an entire method that will do the same, but for the life of me I can't remember what that attribute is. I believe it's down in the System.Diagnostics namespace, but I'm not really sure. BTW: I'm using .NET 4, but I know this attribute existed in .NET 2 because I have used it in old projects. Thanks
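
    The attribute being described is almost certainly System.Diagnostics.ConditionalAttribute. A short sketch: the method body still compiles, but the compiler removes every call to it unless the named symbol (here DEBUG) is defined for the build.

        using System;
        using System.Diagnostics;

        static class DebugOnly
        {
            [Conditional("DEBUG")]
            public static void Spam()
            {
                Console.WriteLine("StackOverflow rules");
            }
        }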

    Read the article

  • How to increase query speed without using full-text search?

    - by andre matos
    This is my simple query; by searching for 'selectnothing' I'm sure I'll have no hits.

        SELECT nome_t FROM myTable WHERE nome_t ILIKE '%selectnothing%';

    This is the EXPLAIN ANALYZE VERBOSE:

        Seq Scan on myTable  (cost=0.00..15259.04 rows=37 width=29) (actual time=2153.061..2153.061 rows=0 loops=1)
          Output: nome_t
          Filter: (nome_t ~~* '%selectnothing%'::text)
        Total runtime: 2153.116 ms

    myTable has around 350k rows and the table definition is something like:

        CREATE TABLE myTable (
            nome_t text NOT NULL,
        )

    I have an index on nome_t as stated below:

        CREATE INDEX idx_m_nome_t ON myTable USING btree (nome_t);

    Although this is clearly a good candidate for full-text search, I would like to rule that option out for now. This query is meant to be run from a web application, and it currently takes around 2 seconds, which is obviously too much. Is there anything I can do, like using other index methods, to improve the speed of this query?
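
    A hedged option if upgrading is possible: on PostgreSQL 9.1 or later, the pg_trgm contrib extension lets a GIN trigram index serve LIKE/ILIKE '%...%' searches without resorting to full-text search.

        CREATE EXTENSION pg_trgm;
        CREATE INDEX idx_m_nome_t_trgm ON myTable USING gin (nome_t gin_trgm_ops);
        -- the original query can then use the trigram index instead of a sequential scan
        SELECT nome_t FROM myTable WHERE nome_t ILIKE '%selectnothing%';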

    Read the article

  • va_list has not been declared

    - by David Doria
    When compiling some working code on Fedora 11, I am getting this error:

        /usr/include/c++/4.4.1/cstdarg:56: error: '::va_list' has not been declared

    I am using:

        [doriad@davedesktop VTK]$ g++ --version
        g++ (GCC) 4.4.1 20090725 (Red Hat 4.4.1-2)

    Does anyone know what the problem could be? Thanks, Dave

    Read the article

  • Compilation problem in the standard x86_64 libraries

    - by user350282
    Hi everyone, I am having trouble compiling a program I have written. I have two different files with the same includes, but only one generates the following error when compiled with g++:

        /usr/lib/gcc/x86_64-linux-gnu/4.4.1/../../../../lib/crt1.o: In function `_start':
        /build/buildd/eglibc-2.10.1/csu/../sysdeps/x86_64/elf/start.S:109: undefined reference to `main'
        collect2: ld returned 1 exit status

    The files I am including in my header are as follows:

        #include <google/sparse_hash_map>
        using google::sparse_hash_map;

        #include <ext/hash_map>
        #include <math.h>
        #include <iostream>
        #include <queue>
        #include <vector>
        #include <stack>

        using std::priority_queue;
        using std::stack;
        using std::vector;
        using __gnu_cxx::hash_map;
        using __gnu_cxx::hash;
        using namespace std;

    Searching the internet for those two lines hasn't resulted in anything to help me. I would be very grateful for any advice. Thank you
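
    A hedged reading of the error: crt1.o complaining about an undefined reference to main means the file being compiled defines no main() yet is being linked into an executable. If this source file is meant to provide support code for the other one, compile it to an object file and link the two together (file names below are placeholders):

        g++ -c helper.cpp -o helper.o
        g++ main.cpp helper.o -o program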

    Read the article
