Search Results

Search found 7672 results on 307 pages for 'compiler optimization'.


  • Prevent full table scan for query with multiple where clauses

    - by Dave Jarvis
    A while ago I posted a message about optimizing a query in MySQL. I have since ported the data and query to PostgreSQL, but now PostgreSQL has the same problem. The solution in MySQL was to force the optimizer to not optimize using STRAIGHT_JOIN. PostgreSQL offers no such option. Here is the explain: Here is the query: SELECT avg(d.amount) AS amount, y.year FROM station s, station_district sd, year_ref y, month_ref m, daily d LEFT JOIN city c ON c.id = 10663 WHERE -- Find all the stations within a specific unit radius ... -- 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 AND -- Ignore stations outside the given elevations -- s.elevation BETWEEN 0 AND 2000 AND sd.id = s.station_district_id AND -- Gather all known years for that station ... -- y.station_district_id = sd.id AND -- The data before 1900 is shaky; insufficient after 2009. -- y.year BETWEEN 1980 AND 2000 AND -- Filtered by all known months ... -- m.year_ref_id = y.id AND m.month = 12 AND -- Whittled down by category ... -- m.category_id = '001' AND -- Into the valid daily climate data. -- m.id = d.month_ref_id AND d.daily_flag_id <> 'M' GROUP BY y.year It appears as though PostgreSQL is looking at the DAILY table first, which is simply not the right way to go about this query as there are nearly 300 million rows. How do I force PostgreSQL to start at the CITY table? Thank you!

    Read the article

  • Optimizing Levenshtein Distance Algorithm

    - by Matt
    I have a stored procedure that uses Levenshtein Distance to determine the result closest to what the user typed. The only thing really affecting the speed is the function that calculates the Levenshtein Distance for all the records before selecting the record with the lowest distance (I've verified this by putting a 0 in place of the call to the Levenshtein function). The table has 1.5 million records, so even the slightest adjustment may shave off a few seconds. Right now the entire thing runs over 10 minutes. Here's the method I'm using: ALTER function dbo.Levenshtein ( @Source nvarchar(200), @Target nvarchar(200) ) RETURNS int AS BEGIN DECLARE @Source_len int, @Target_len int, @i int, @j int, @Source_char nchar, @Dist int, @Dist_temp int, @Distv0 varbinary(8000), @Distv1 varbinary(8000) SELECT @Source_len = LEN(@Source), @Target_len = LEN(@Target), @Distv1 = 0x0000, @j = 1, @i = 1, @Dist = 0 WHILE @j <= @Target_len BEGIN SELECT @Distv1 = @Distv1 + CAST(@j AS binary(2)), @j = @j + 1 END WHILE @i <= @Source_len BEGIN SELECT @Source_char = SUBSTRING(@Source, @i, 1), @Dist = @i, @Distv0 = CAST(@i AS binary(2)), @j = 1 WHILE @j <= @Target_len BEGIN SET @Dist = @Dist + 1 SET @Dist_temp = CAST(SUBSTRING(@Distv1, @j+@j-1, 2) AS int) + CASE WHEN @Source_char = SUBSTRING(@Target, @j, 1) THEN 0 ELSE 1 END IF @Dist > @Dist_temp BEGIN SET @Dist = @Dist_temp END SET @Dist_temp = CAST(SUBSTRING(@Distv1, @j+@j+1, 2) AS int)+1 IF @Dist > @Dist_temp SET @Dist = @Dist_temp BEGIN SELECT @Distv0 = @Distv0 + CAST(@Dist AS binary(2)), @j = @j + 1 END END SELECT @Distv1 = @Distv0, @i = @i + 1 END RETURN @Dist END Anyone have any ideas? Any input is appreciated. Thanks, Matt
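
    For reference, the procedure above is the standard two-row dynamic-programming formulation of Levenshtein distance. A minimal sketch of the same idea in Python (purely illustrative, not a drop-in replacement for the T-SQL; the function name is mine):

        def levenshtein(source, target):
            """Two-row dynamic-programming Levenshtein distance."""
            if len(source) < len(target):
                source, target = target, source
            previous = list(range(len(target) + 1))        # row for the empty source prefix
            for i, s_char in enumerate(source, start=1):
                current = [i]                              # cost of deleting i source chars
                for j, t_char in enumerate(target, start=1):
                    cost = 0 if s_char == t_char else 1
                    current.append(min(previous[j] + 1,        # deletion
                                       current[j - 1] + 1,     # insertion
                                       previous[j - 1] + cost  # substitution
                                       ))
                previous = current
            return previous[-1]

        # levenshtein("kitten", "sitting") == 3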

    Read the article

  • MySQL Query performance - huge difference in time

    - by Damo
    I have a query that is returning in vastly different amounts of time between 2 datasets. For one set (database A) it returns in a few seconds, for the other (database B)....well I haven't waited long enough yet, but over 10 minutes. I have dumped both of these databases to my local machine where I can reproduce the issue running MySQL 5.1.37. Curiously, database B is smaller than database A. A stripped down version of the query that reproduces the problem is: SELECT * FROM po_shipment ps JOIN po_shipment_item psi USING (ship_id) JOIN po_alloc pa ON ps.ship_id = pa.ship_id AND pa.UID_items = psi.UID_items JOIN po_header ph ON pa.hdr_id = ph.hdr_id LEFT JOIN EVENT_TABLE ev0 ON ev0.TABLE_ID1 = ps.ship_id AND ev0.EVENT_TYPE = 'MAS0' LEFT JOIN EVENT_TABLE ev1 ON ev1.TABLE_ID1 = ps.ship_id AND ev1.EVENT_TYPE = 'MAS1' LEFT JOIN EVENT_TABLE ev2 ON ev2.TABLE_ID1 = ps.ship_id AND ev2.EVENT_TYPE = 'MAS2' LEFT JOIN EVENT_TABLE ev3 ON ev3.TABLE_ID1 = ps.ship_id AND ev3.EVENT_TYPE = 'MAS3' LEFT JOIN EVENT_TABLE ev4 ON ev4.TABLE_ID1 = ps.ship_id AND ev4.EVENT_TYPE = 'MAS4' LEFT JOIN EVENT_TABLE ev5 ON ev5.TABLE_ID1 = ps.ship_id AND ev5.EVENT_TYPE = 'MAS5' WHERE ps.eta >= '2010-03-22' GROUP BY ps.ship_id LIMIT 100; The EXPLAIN query plan for the first database (A) that returns in ~2 seconds is: +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+------------------------------+------+----------------------------------------------+ | 1 | SIMPLE | ps | range | PRIMARY,IX_ETA_DATE | IX_ETA_DATE | 4 | NULL | 174 | Using where; Using temporary; Using filesort | | 1 | SIMPLE | ev0 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | | | 1 | SIMPLE | ev1 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | | | 1 | SIMPLE | ev2 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | | | 1 | SIMPLE | ev3 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | | | 1 | SIMPLE | ev4 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | | | 1 | SIMPLE | ev5 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_PROD.ps.ship_id,const | 1 | | | 1 | SIMPLE | psi | ref | PRIMARY,IX_po_shipment_item_po_shipment1,FK_po_shipment_item_po_shipment1 | IX_po_shipment_item_po_shipment1 | 4 | UNIVIS_PROD.ps.ship_id | 1 | | | 1 | SIMPLE | pa | ref | IX_po_alloc_po_shipment_item2,IX_po_alloc_po_details_old,FK_po_alloc_po_shipment1,FK_po_alloc_po_shipment_item1,FK_po_alloc_po_header1 | FK_po_alloc_po_shipment1 | 4 | UNIVIS_PROD.psi.ship_id | 5 | Using where | | 1 | SIMPLE | ph | eq_ref | PRIMARY,IX_HDR_ID | PRIMARY | 4 | UNIVIS_PROD.pa.hdr_id | 1 | | 
+----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+------------------------------+------+----------------------------------------------+ The EXPLAIN query plan for the second database (B) that returns in 600 seconds is: +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+--------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+--------------------------------+------+----------------------------------------------+ | 1 | SIMPLE | ps | range | PRIMARY,IX_ETA_DATE | IX_ETA_DATE | 4 | NULL | 38 | Using where; Using temporary; Using filesort | | 1 | SIMPLE | psi | ref | PRIMARY,IX_po_shipment_item_po_shipment1,FK_po_shipment_item_po_shipment1 | IX_po_shipment_item_po_shipment1 | 4 | UNIVIS_DEV01.ps.ship_id | 1 | | | 1 | SIMPLE | ev0 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.psi.ship_id,const | 1 | | | 1 | SIMPLE | ev1 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.psi.ship_id,const | 1 | | | 1 | SIMPLE | ev2 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.ps.ship_id,const | 1 | | | 1 | SIMPLE | ev3 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.psi.ship_id,const | 1 | | | 1 | SIMPLE | ev4 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.psi.ship_id,const | 1 | | | 1 | SIMPLE | ev5 | ref | IX_EVENT_ID_EVENT_TYPE | IX_EVENT_ID_EVENT_TYPE | 36 | UNIVIS_DEV01.ps.ship_id,const | 1 | | | 1 | SIMPLE | pa | ref | IX_po_alloc_po_shipment_item2,IX_po_alloc_po_details_old,FK_po_alloc_po_shipment1,FK_po_alloc_po_shipment_item1,FK_po_alloc_po_header1 | IX_po_alloc_po_shipment_item2 | 4 | UNIVIS_DEV01.ps.ship_id | 4 | Using where | | 1 | SIMPLE | ph | eq_ref | PRIMARY,IX_HDR_ID | PRIMARY | 4 | UNIVIS_DEV01.pa.hdr_id | 1 | | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+--------------------------------+------+----------------------------------------------+ When database B is running I can look at the MySQL Administrator and the state remains at "Copying to tmp table" indefinitely. Database A also has this state but for only a second or so. There are no differences in the table structure, indexes, keys etc between these databases (I have done show create tables and diff'd them). The sizes of the tables are: database A: po_shipment 1776 po_shipment_item 1945 po_alloc 36298 po_header 71642 EVENT_TABLE 1608 database B: po_shipment 463 po_shipment_item 470 po_alloc 3291 po_header 56149 EVENT_TABLE 1089 Some points to note: Removing the WHERE clause makes the query return < 1 sec. Removing the GROUP BY makes the query return < 1 sec. Removing ev5, ev4, ev3 etc makes the query get faster for each one removed. 
Can anyone suggest how to resolve this issue? What have I missed? Many Thanks.

    Read the article

  • Speed improvements for Perl's chameneos-redux in the Computer Language Benchmarks Game

    - by Robert P
    Ever looked at the Computer Language Benchmarks Game (formerly known as the Great Language Shootout)? Perl has some pretty healthy competition there at the moment. It also occurs to me that there's probably some places that Perl's scores could be improved. The biggest one is in the chameneos-redux script right now—the Perl version runs the worst out of any language: 1,626 times slower than the C baseline solution! There are some restrictions on how the programs can be made and optimized, and there is Perl's interpreted runtime penalty, but 1,626 times? There's got to be something that can get the runtime of this program way down. Taking a look at the source code and the challenge, how can the speed be improved?

    Read the article

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read / written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512 byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs, I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fastish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness and performance)? Thanks in advance for any ideas, Adam

    Read the article

  • How to write the Visitor Pattern for Abstract Syntax Tree in Python?

    - by bodacydo
    My colleague suggested that I write a visitor pattern to navigate the AST. Can anyone tell me more about how I would start writing it? As far as I understand, each Node in the AST would have a visit() method (?) that would somehow get called (from where?). That about concludes my understanding. To simplify everything, suppose I have nodes Root, Expression, Number and Op, and the tree looks like this: a Root whose child is Op(+), whose children are Number(5) and Op(*), whose children in turn are Number(2) and Number(444). Can anyone think of how the visitor pattern would visit this tree to produce the output 5 + 2 * 444? Thanks, Boda Cydo.
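
    A minimal sketch of one common way to write this in Python, using double dispatch through an accept() method plus per-node-type visit_* methods (class and method names here are illustrative, not taken from any particular answer):

        class Node:
            def accept(self, visitor):
                # Double dispatch: each node hands itself to the visitor.
                return visitor.visit(self)

        class Number(Node):
            def __init__(self, value):
                self.value = value

        class Op(Node):
            def __init__(self, symbol, left, right):
                self.symbol, self.left, self.right = symbol, left, right

        class InfixPrinter:
            def visit(self, node):
                # Dispatch on the node's class name: visit_Number, visit_Op, ...
                method = getattr(self, 'visit_' + type(node).__name__)
                return method(node)

            def visit_Number(self, node):
                return str(node.value)

            def visit_Op(self, node):
                return '%s %s %s' % (node.left.accept(self), node.symbol, node.right.accept(self))

        tree = Op('+', Number(5), Op('*', Number(2), Number(444)))
        print(tree.accept(InfixPrinter()))   # 5 + 2 * 444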

    Read the article

  • How to optimize this MySQL query

    - by James Simpson
    This query was working fine when the database was small, but now that there are millions of rows in the database, I am realizing I should have looked at optimizing this earlier. It is looking at over 600,000 rows and is Using where; Using temporary; Using filesort (which leads to an execution time of 5-10 seconds). It is using an index on the field 'battle_type.' SELECT username, SUM( outcome ) AS wins, COUNT( * ) - SUM( outcome ) AS losses FROM tblBattleHistory WHERE battle_type = '0' && outcome < '2' GROUP BY username ORDER BY wins DESC , losses ASC , username ASC LIMIT 0 , 50

    Read the article

  • Commands to compile programs using .NET

    - by Arjun Vasudevan
    Suppose I have the .NET Framework installed on my computer, plus all the necessary support for other languages (a Perl interpreter, etc.). What are the commands I should give at the console to compile programs in the following languages: 1. C 2. C++ 3. Java 4. Python 5. VB 6. C# 7. Perl 8. Ruby? Just as we have *vbc program_name.vb* for VB, what are the commands to compile programs in the other languages?

    Read the article

  • Database design in blogging systems

    - by Peter
    As a learning exercise I'm trying to build myself a blogging system. The goal is to code something that will let me create multiple blogs, like blogger.com or wordpress.com, but much simplified. I would like to ask: what do you think is the best database design for this type of script? Is it better to have one big table containing posts from all blogs of all users (like FriendFeed), or would it be better to create a separate table for each blog's posts? Big thanks in advance for your help, Peter.

    Read the article

  • Ever any performance difference between Java >> and >>> right shift operators?

    - by Sean Owen
    Is there ever reason to think the >> (signed) and >>> (unsigned) right bit-shift operators in Java would perform differently? I can't detect any difference on my machine. This is purely an academic question; it's never going to be the bottleneck, I'm sure. I know: it's best to write what you mean foremost; use / for division by 2, for example. I assume it comes down to which architectures have which operations implemented as an instruction.

    Read the article

  • Fastest image iteration in Python

    - by Greg
    I am creating a simple green screen app with Python 2.7.4 but am getting quite slow results. I am currently using PIL 1.1.7 to load and iterate the images and saw huge speed-ups changing from the old getpixel() to the newer load() and pixel access object indexing. However the following loop still takes around 2.5 seconds to run for an image of around 720p resolution:

        def colorclose(Cb_p, Cr_p, Cb_key, Cr_key, tola, tolb):
            temp = math.sqrt((Cb_key-Cb_p)**2+(Cr_key-Cr_p)**2)
            if temp < tola:
                return 0.0
            else:
                if temp < tolb:
                    return (temp-tola)/(tolb-tola)
                else:
                    return 1.0

        ....

        for x in range(width):
            for y in range(height):
                Y, cb, cr = fg_cbcr_list[x, y]
                mask = colorclose(cb, cr, cb_key, cr_key, tola, tolb)
                mask = 1 - mask
                bgr, bgg, bgb = bg_list[x,y]
                fgr, fgg, fgb = fg_list[x,y]
                pixels[x,y] = (
                    (int)(fgr - mask*key_color[0] + mask*bgr),
                    (int)(fgg - mask*key_color[1] + mask*bgg),
                    (int)(fgb - mask*key_color[2] + mask*bgb))

    Am I doing anything hugely inefficient here which makes it run so slow? I have seen similar, simpler examples where the loop is replaced by a boolean matrix for instance, but for this case I can't see a way to replace the loop. The pixels[x,y] assignment seems to take the most amount of time but not knowing Python very well I am unsure of a more efficient way to do this. Any help would be appreciated.
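
    One way to avoid the per-pixel Python loop entirely is to do the arithmetic on whole NumPy arrays. A rough sketch under the assumption that the Cb/Cr planes and the foreground/background images are available as float arrays (array names and shapes are assumptions, not from the question):

        import numpy as np

        def color_close(cb, cr, cb_key, cr_key, tola, tolb):
            """Vectorized colorclose(): returns a float mask in [0, 1] per pixel."""
            dist = np.sqrt((cb_key - cb) ** 2 + (cr_key - cr) ** 2)
            # < tola clips to 0, >= tolb clips to 1, linear ramp in between
            return np.clip((dist - tola) / (tolb - tola), 0.0, 1.0)

        def composite(fg, bg, cb, cr, cb_key, cr_key, tola, tolb, key_color):
            # fg, bg: H x W x 3 float arrays; cb, cr: H x W chroma planes of the foreground
            mask = 1.0 - color_close(cb, cr, cb_key, cr_key, tola, tolb)   # 1 near the key colour
            mask = mask[..., None]                                         # broadcast over RGB
            out = fg - mask * np.asarray(key_color, dtype=float) + mask * bg
            return np.clip(out, 0, 255).astype(np.uint8)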

    Read the article

  • Can you precompile and merge part of an ASP.NET website and then continue development?

    - by michielvoo
    A big part of the web site is precompiled and merged, since it's almost never going to change. The precompiled bits can be replaced in case of updates to the original. I want to continue development of new pages, but when I browse to a new page I get the following error: The file '/Website/Test/Default.aspx' has not been pre-compiled, and cannot be requested. Is there any way around this? Edit: If I remove the precompileApp.config file I get the contents of the marker files when I browse them: This is a marker file generated by the precompilation tool, and should not be deleted!

    Read the article

  • MySQL queries - how expensive are they really?

    - by incrediman
    I've heard that mysql queries are very expensive, and that you should avoid at all costs making too many of them. I'm developing a site that will be used by quite a few people, and I'm wondering: How expensive are mysql queries actually? If I have 400,000 people in my database, how expensive is it to query it for one of them? How close attention do I need to pay that I don't make too many queries per client request?

    Read the article

  • Speed up bitstring/bit operations in Python?

    - by Xavier Ho
    I wrote a prime number generator using Sieve of Eratosthenes and Python 3.1. The code runs correctly and gracefully at 0.32 seconds on ideone.com to generate prime numbers up to 1,000,000.

        # from bitstring import BitString

        def prime_numbers(limit=1000000):
            '''Prime number generator. Yields the series
            2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ...
            using Sieve of Eratosthenes.
            '''
            yield 2
            sub_limit = int(limit**0.5)
            flags = [False, False] + [True] * (limit - 2)
            # flags = BitString(limit)
            # Step through all the odd numbers
            for i in range(3, limit, 2):
                if flags[i] is False:       # if flags[i] is True:
                    continue
                yield i
                # Exclude further multiples of the current prime number
                if i <= sub_limit:
                    for j in range(i*3, limit, i<<1):
                        flags[j] = False    # flags[j] = True

    The problem is, I run out of memory when I try to generate numbers up to 1,000,000,000:

        flags = [False, False] + [True] * (limit - 2)
        MemoryError

    As you can imagine, allocating 1 billion boolean values (4 or 8 bytes each in Python, not 1 byte; see comment) is really not feasible, so I looked into bitstring. I figured using 1 bit for each flag would be much more memory-efficient. However, the program's performance dropped drastically: 24 seconds runtime for prime numbers up to 1,000,000. This is probably due to the internal implementation of bitstring. You can comment/uncomment the three lines to see what I changed to use BitString, as in the code snippet above. My question is: is there a way to speed up my program, with or without bitstring?
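
    One possible middle ground, offered as a sketch rather than a measured fix: a bytearray keeps one byte per flag (instead of a full Python object per list entry) and supports slice assignment, so the multiple-clearing loop stays in C:

        def prime_numbers(limit=1000000):
            """Yield primes up to limit using a bytearray-backed sieve (1 byte per flag)."""
            yield 2
            sub_limit = int(limit ** 0.5)
            flags = bytearray([0, 0]) + bytearray([1]) * (limit - 2)
            for i in range(3, limit, 2):
                if not flags[i]:
                    continue
                yield i
                if i <= sub_limit:
                    # Start at i*i (smaller multiples were cleared by smaller primes);
                    # slice assignment clears every 2*i-th flag without a Python loop.
                    flags[i*i::2*i] = bytearray(len(range(i*i, limit, 2*i)))
            # For very large limits, a numpy bool array or a C bit-array goes further still.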

    Read the article

  • Java JUnit: The method X is ambiguous for type Y

    - by Rosarch
    I had some tests working fine. Then, I moved it to a different package, and am now getting errors. Here is the code: import static org.junit.Assert.*; import java.util.HashSet; import java.util.Map; import java.util.Set; import org.jgrapht.Graphs; import org.jgrapht.WeightedGraph; import org.jgrapht.graph.DefaultWeightedEdge; import org.jgrapht.graph.SimpleWeightedGraph; import org.junit.*; @Test public void testEccentricity() { WeightedGraph<String, DefaultWeightedEdge> g = generateSimpleCaseGraph(); Map<String, Double> eccen = JGraphtUtilities.eccentricities(g); assertEquals(70, eccen.get("alpha")); assertEquals(80, eccen.get("l")); assertEquals(130, eccen.get("l-0")); assertEquals(100, eccen.get("l-1")); assertEquals(90, eccen.get("r")); assertEquals(120, eccen.get("r-0")); assertEquals(130, eccen.get("r-1")); } The error message is this: The method assertEquals(Object, Object) is ambiguous for the type JGraphtUtilitiesTest How can I fix this? Why did this problem occur as I moved the class to a different package?

    Read the article

  • 3 dimensional bin packing algorithms

    - by BuschnicK
    I'm faced with a 3 dimensional bin packing problem and am currently conducting some preliminary research as to which algorithms/heuristics are currently yielding the best results. Since the problem is NP hard I do not expect to find the optimal solution in every case, but I was wondering: 1) what are the best exact solvers? Branch and Bound? What problem instance sizes can I expect to solve with reasonable computing resources? 2) what are the best heuristic solvers? 3) What off-the-shelf solutions exist to conduct some experiments with?
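
    For a feel of what the simplest heuristic family looks like, here is a sketch of first-fit decreasing applied to volumes only. It ignores orientation and geometric placement entirely, so it is only a rough baseline, not a feasible 3-D packing:

        def first_fit_decreasing_by_volume(items, bin_dims):
            """Crude baseline: assign boxes to bins by volume only (first-fit decreasing).

            items    -- list of (w, h, d) tuples; assumes every box fits in one bin by volume
            bin_dims -- (W, H, D) of a single bin
            """
            bin_volume = bin_dims[0] * bin_dims[1] * bin_dims[2]
            free = []        # remaining volume per open bin
            assignment = []  # parallel list of item lists
            for item in sorted(items, key=lambda b: b[0] * b[1] * b[2], reverse=True):
                vol = item[0] * item[1] * item[2]
                for i, remaining in enumerate(free):
                    if vol <= remaining:       # first open bin with enough volume left
                        free[i] -= vol
                        assignment[i].append(item)
                        break
                else:                          # no open bin fits: open a new one
                    free.append(bin_volume - vol)
                    assignment.append([item])
            return assignment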

    Read the article

  • What is the most platform- and Python-version-independent way to make a fast loop for use in Python?

    - by Statto
    I'm writing a scientific application in Python with a very processor-intensive loop at its core. I would like to optimise this as far as possible, at minimum inconvenience to end users, who will probably use it as an uncompiled collection of Python scripts, and will be using Windows, Mac, and (mainly Ubuntu) Linux. It is currently written in Python with a dash of NumPy, and I've included the code below. Is there a solution which would be reasonably fast which would not require compilation? This would seem to be the easiest way to maintain platform-independence. If using something like Pyrex, which does require compilation, is there an easy way to bundle many modules and have Python choose between them depending on detected OS and Python version? Is there an easy way to build the collection of modules without needing access to every system with every version of Python? Does one method lend itself particularly to multi-processor optimisation? (If you're interested, the loop is to calculate the magnetic field at a given point inside a crystal by adding together the contributions of a large number of nearby magnetic ions, treated as tiny bar magnets. Basically, a massive sum of these.)

        # calculate_dipole
        # -------------------------
        # calculate_dipole works out the dipole field at a given point within the crystal unit cell
        # ---
        # INPUT
        # mu = position at which to calculate the dipole field
        # r_i = array of atomic positions
        # mom_i = corresponding array of magnetic moments
        # ---
        # OUTPUT
        # B = the B-field at this point
        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            # 4pi / mu0 (at the front of the dipole eqn)
            A = 1e-7
            # initalise dipole field
            B = zeros(3,float)
            for i in range(len(relative)):
                # work out the dipole field and add it to the estimate so far
                B += A*(3*dot(mom_i[i],r_unit[i])*r_unit[i] - mom_i[i]) / sqrt(dot(relative[i],relative[i]))**3
            return B
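
    Since NumPy is already a dependency, one compilation-free option is to vectorise the sum over ions. A sketch, assuming r_i and mom_i are N x 3 arrays and mu is a length-3 array (the function name is mine):

        import numpy as np

        def calculate_dipole_vec(mu, r_i, mom_i):
            """Vectorised dipole sum: same maths as the loop, done on whole arrays."""
            A = 1e-7                                   # mu0 / 4pi factor
            relative = mu - r_i                        # (N, 3) displacement vectors
            r2 = np.sum(relative * relative, axis=1)   # squared distances, shape (N,)
            r_unit = relative / np.sqrt(r2)[:, None]   # row-normalised unit vectors
            m_dot_r = np.sum(mom_i * r_unit, axis=1)   # dot(mom_i[i], r_unit[i]) for every i
            terms = (3.0 * m_dot_r[:, None] * r_unit - mom_i) / (r2 ** 1.5)[:, None]
            return A * terms.sum(axis=0)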

    Read the article

  • Indentation control while developing a small Python-like language

    - by sap
    Hello, I'm developing a small Python-like language using flex and byacc (for lexing and parsing) and C++, but I have a few questions regarding scope control. Just as in Python, it uses white space (or tabs) for indentation; not only that, but I want to implement indexed breaking: for instance, if you type "break 2" inside a while loop that's inside another while loop, it would break not only from the innermost loop but from the outer loop as well (hence the number 2 after break), and so on. Example:

        while 1
            while 1
                break 2
            end
        end
        # after break 2 it would jump right here

    But since I don't have an "anti-tab" character to check when a scope ends (in C, for example, I would just use the '}' char), I was wondering if this method would be the best: I would define a global variable, like "int tabIndex", in my yacc file that I would access in my lex file using extern. Then every time I find a tab character in my lex file I would increment that variable by 1. When parsing in my yacc file, if I find a "break" keyword I would decrement the tabIndex variable by the amount typed after it, and if I reach EOF after compiling with tabIndex != 0, I would output a compilation error. Now the problem is: what's the best way to see if the indentation got reduced? Should I read \b (backspace) chars from lex and then reduce the tabIndex variable (when the user doesn't use break)? Is there another method to achieve this? Also, just another small question: I want every executable to have its starting point in a function called start(); should I hardcode this into my yacc file? Sorry for the long question; any help is greatly appreciated. Also, if someone could provide a yacc file for Python it would be nice as a guideline (I tried looking on Google and had no luck). Thanks in advance.
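
    On the indentation question specifically, a common approach (roughly what Python's own tokenizer does) is to keep a stack of indentation widths in the lexer and emit synthetic INDENT/DEDENT tokens, rather than counting tabs in the parser. A language-agnostic sketch of the idea in Python (names are illustrative):

        def indent_tokens(lines, tab_width=4):
            """Turn leading whitespace into INDENT/DEDENT tokens, Python-tokenizer style."""
            stack = [0]                                   # currently open indentation widths
            for line in lines:
                if not line.strip():
                    continue                              # blank lines don't affect indentation
                expanded = line.expandtabs(tab_width)
                width = len(expanded) - len(expanded.lstrip(' '))
                if width > stack[-1]:
                    stack.append(width)
                    yield ('INDENT', width)
                while width < stack[-1]:
                    stack.pop()
                    yield ('DEDENT', width)               # one DEDENT per block that closes
                if width != stack[-1]:
                    raise SyntaxError('inconsistent indentation: %r' % line)
                yield ('LINE', line.strip())
            while len(stack) > 1:                         # close anything still open at EOF
                stack.pop()
                yield ('DEDENT', 0)

        # "break 2" can then be handled in the parser by popping two enclosing loop scopes,
        # validated against how many loops are currently open.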

    Read the article

  • ANTLR lexer mismatches tokens

    - by Barry Brown
    I have a simple ANTLR grammar, which I have stripped down to its bare essentials to demonstrate this problem I'm having. I am using ANTLRworks 1.3.1. grammar sample; assignment : IDENT ':=' NUM ';' ; IDENT : ('a'..'z')+ ; NUM : ('0'..'9')+ ; WS : (' '|'\n'|'\t'|'\r')+ {$channel=HIDDEN;} ; Obviously, this statement is accepted by the grammar: x := 99; But this one also is: x := @!$()()%99***; Output from the ANTLRworks Interpreter: What am I doing wrong? Even other sample grammars that come with ANTLR (such as the CMinus grammar) exhibit this behavior.

    Read the article

  • Mysql - Help me change this single complex query to use temporary tables

    - by sandeepan-nath
    About the system: - There are tutors who create classes and packs - A tags based search approach is being followed.Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searcheable). For details please check the section How tags work in this system? below. Following is the concerned query Can anybody help me suggest an approach using temporary tables. We have indexed all the relevant fields and it looks like this is the least time possible with this approach:- SELECT SUM(DISTINCT( t.tag LIKE "%Dictatorship%" OR tt.tag LIKE "%Dictatorship%" OR ttt.tag LIKE "%Dictatorship%" )) AS key_1_total_matches , SUM(DISTINCT( t.tag LIKE "%democracy%" OR tt.tag LIKE "%democracy%" OR ttt.tag LIKE "%democracy%" )) AS key_2_total_matches , COUNT(DISTINCT( od.id_od )) AS tutor_popularity, CASE WHEN ( IF(( wc.id_wc > 0 ), ( wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'classes_published' , CASE WHEN ( IF(( lp.id_lp > 0 ), ( lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'packs_published', td . *, u . * FROM tutor_details AS td JOIN users AS u ON u.id_user = td.id_user LEFT JOIN learning_packs_tag_relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor LEFT JOIN learning_packs AS lp ON lptagrels.id_lp = lp.id_lp LEFT JOIN learning_packs_categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat LEFT JOIN learning_packs_categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent LEFT JOIN learning_pack_content AS lpct ON ( lp.id_lp = lpct.id_lp ) LEFT JOIN webclasses_tag_relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor LEFT JOIN webclasses AS wc ON wtagrels.id_wc = wc.id_wc LEFT JOIN learning_packs_categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat LEFT JOIN learning_packs_categories AS wccp ON wccp.id_lp_cat = wcc.id_parent LEFT JOIN order_details AS od ON td.id_tutor = od.id_author LEFT JOIN orders AS o ON od.id_order = o.id_order LEFT JOIN tutors_tag_relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor LEFT JOIN tags AS t ON t.id_tag = ttagrels.id_tag LEFT JOIN tags AS tt ON tt.id_tag = lptagrels.id_tag LEFT JOIN tags AS ttt ON ttt.id_tag = wtagrels.id_tag WHERE ( u.country = 'IE' OR u.country IN ( 'INT' ) ) AND CASE WHEN ( ( tt.id_tag = lptagrels.id_tag ) AND ( lp.id_lp > 0 ) ) THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ELSE 1 END AND CASE WHEN ( ( ttt.id_tag = wtagrels.id_tag ) AND ( wc.id_wc > 0 ) ) THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ELSE 1 END AND CASE WHEN ( od.id_od > 0 ) THEN od.id_author = td.id_tutor AND o.order_status = 'paid' AND CASE WHEN ( od.id_wc > 0 ) THEN od.can_attend_class = 1 ELSE 1 END ELSE 1 END AND ( t.tag LIKE "%Dictatorship%" OR t.tag LIKE "%democracy%" OR tt.tag LIKE "%Dictatorship%" OR tt.tag LIKE "%democracy%" OR ttt.tag LIKE "%Dictatorship%" OR ttt.tag LIKE "%democracy%" ) GROUP BY td.id_tutor HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 ORDER BY tutor_popularity DESC, u.surname ASC, u.name ASC LIMIT 0, 20 The problem The results returned by the above query are correct (AND logic working as per expectation), but the time taken by 
the query rises alarmingly for heavier data and for the current data I have it is like 10 seconds as against normal query timings of the order of 0.005 - 0.0002 seconds, which makes it totally unusable. Somebody suggested in my previous question to do the following:- create a temporary table and insert here all relevant data that might end up in the final result set run several updates on this table, joining the required tables one at a time instead of all of them at the same time finally perform a query on this temporary table to extract the end result All this was done in a stored procedure, the end result has passed unit tests, and is blazing fast. I have never worked with temporary tables till now. Only if I could get some hints, kind of schematic representations so that I can start with... Is there something faulty with the query? What can be the reason behind 10+ seconds of execution time? How tags work in this system? When a tutor registers, tags are entered and tag relations are created with respect to tutor's details like name, surname etc. When a Tutors create packs, again tags are entered and tag relations are created with respect to pack's details like pack name, description etc. tag relations for tutors stored in tutors_tag_relations and those for packs stored in learning_packs_tag_relations. All individual tags are stored in tags table. The explain query output:- Please see this screenshot - http://www.test.examvillage.com/Explain_query_improved.jpg

    Read the article

  • C# Improvement on a Fire-and-Forget

    - by adam
    Greetings I have a program that creates multiples instances of a class, runs the same long-running Update method on all instances and waits for completion. I'm following Kev's approach from this question of adding the Update to ThreadPool.QueueUserWorkItem. In the main prog., I'm sleeping for a few minutes and checking a Boolean in the last child to see if done while(!child[child.Length-1].isFinished){ Thread.Sleep(...); } This solution is working the way I want, but is there a better way to do this? Both for the independent instances and checking if all work is done. Thanks

    Read the article

  • Explanation of Pingdom Results

    - by Computer Guru
    Hi, I'm trying to optimize my page load times, and I'm using Pingdom to test the site response times. However, I'm not exactly sure what the various components of the "time bar" mean. Example link: http://tools.pingdom.com/fpt/?url=http://neosmart.net/forums//&id=2230361 According to them, the portion of the bar that is yellow is the time between "start" and "connect" and the portion of the bar that is green is the time between "connect" and "first byte" with the blue section being the actual transfer time (time between "first byte" and "last byte"). If I'm trying to reduce the first two (which take very long in my case), what's the recommended course of action? Thanks.

    Read the article

  • sql: Group by x,y,z; return grouped by x,y with lowest f(z)

    - by Sai Emrys
    This is for http://cssfingerprint.com I collect timing stats about how fast the different methods I use perform on different browsers, etc., so that I can optimize the scraping speed. Separately, I have a report about what each method returns for a handful of URLs with known-correct values, so that I can tell which methods are bogus on which browsers. (Each is different, alas.) The related tables look like this: CREATE TABLE `browser_tests` ( `id` int(11) NOT NULL AUTO_INCREMENT, `bogus` tinyint(1) DEFAULT NULL, `result` tinyint(1) DEFAULT NULL, `method` varchar(255) DEFAULT NULL, `url` varchar(255) DEFAULT NULL, `os` varchar(255) DEFAULT NULL, `browser` varchar(255) DEFAULT NULL, `version` varchar(255) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=33784 DEFAULT CHARSET=latin1 CREATE TABLE `method_timings` ( `id` int(11) NOT NULL AUTO_INCREMENT, `method` varchar(255) DEFAULT NULL, `batch_size` int(11) DEFAULT NULL, `timing` int(11) DEFAULT NULL, `os` varchar(255) DEFAULT NULL, `browser` varchar(255) DEFAULT NULL, `version` varchar(255) DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=28849 DEFAULT CHARSET=latin1 (user_agent is broken down pre-insert into browser, version, and os from a small list of recognized values using regex; I keep the original user-agent string just in case.) I have a query like this that tells me the average timing for every non-bogus browser / version / method tuple: select c, avg(bogus) as bog, timing, method, browser, version from browser_tests as b inner join ( select count(*) as c, round(avg(timing)) as timing, method, browser, version from method_timings group by browser, version, method having c > 10 order by browser, version, timing ) as t using (browser, version, method) group by browser, version, method having bog < 1 order by browser, version, timing; Which returns something like: c bog tim method browser version 88 0.8333 184 reuse_insert Chrome 4.0.249.89 18 0.0000 238 mass_insert_width Chrome 4.0.249.89 70 0.0400 246 mass_insert Chrome 4.0.249.89 70 0.0400 327 mass_noinsert Chrome 4.0.249.89 88 0.0556 367 reuse_reinsert Chrome 4.0.249.89 88 0.0556 383 jquery Chrome 4.0.249.89 88 0.0556 863 full_reinsert Chrome 4.0.249.89 187 0.0000 105 jquery Chrome 5.0.307.11 187 0.8806 109 reuse_insert Chrome 5.0.307.11 123 0.0000 110 mass_insert_width Chrome 5.0.307.11 176 0.0000 231 mass_noinsert Chrome 5.0.307.11 176 0.0000 237 mass_insert Chrome 5.0.307.11 187 0.0000 314 reuse_reinsert Chrome 5.0.307.11 187 0.0000 372 full_reinsert Chrome 5.0.307.11 12 0.7500 82 reuse_insert Chrome 5.0.335.0 12 0.2500 102 jquery Chrome 5.0.335.0 [...] I want to modify this query to return only the browser/version/method with the lowest timing - i.e. something like: 88 0.8333 184 reuse_insert Chrome 4.0.249.89 187 0.0000 105 jquery Chrome 5.0.307.11 12 0.7500 82 reuse_insert Chrome 5.0.335.0 [...] How can I do this, while still returning the method that goes with that lowest timing? I could filter it app-side, but I'd rather do this in mysql since it'd work better with my caching.

    Read the article
