Search Results

Search found 2480 results on 100 pages for 'dave jarvis'.

Page 28/100 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Optimal two variable linear regression calculation

    - by Dave Jarvis
    Problem
    I am looking to apply the y = mx + b equation (where m is SLOPE, b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are: SLOPE = 0.0276653965651912, INTERCEPT = -57.2338357550468.
    SQL Code
      SELECT
        ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
        ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
          (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
      FROM (
        SELECT D.AMOUNT, Y.YEAR
        FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
        WHERE
          -- For a specific city ... --
          C.ID = 8590 AND
          -- Find all the stations within a 15 unit radius ... --
          SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
          -- Gather all known years for that station ... --
          S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
          -- The data before 1900 is shaky; insufficient after 2009. --
          Y.YEAR BETWEEN 1900 AND 2009 AND
          -- Filtered by all known months ... --
          M.YEAR_REF_ID = Y.ID AND
          -- Whittled down by category ... --
          M.CATEGORY_ID = '001' AND
          -- Into the valid daily climate data. --
          M.ID = D.MONTH_REF_ID AND
          D.DAILY_FLAG_ID <> 'M'
        GROUP BY Y.YEAR
        ORDER BY Y.YEAR
      ) t
    Data
    The data is visualized here:
    Question
    The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?
      (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
      (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406
    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65).
    Related Sites
    Least absolute deviations
    Robust regression
    Thank you!
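
    As a sanity check, here is a minimal, self-contained sketch of the textbook least-squares form (n·Σxy − Σx·Σy over n·Σx² − (Σx)²), which is algebraically equivalent to the expressions above; the two inline rows are made-up values purely for illustration and would be replaced by the real derived table:

    ```sql
    -- Expected output for these two sample rows: SLOPE ≈ 0.0138, INTERCEPT ≈ -16.15
    SELECT (COUNT(1) * SUM(t.YEAR * t.AMOUNT) - SUM(t.YEAR) * SUM(t.AMOUNT))
             / (COUNT(1) * SUM(POWER(t.YEAR, 2)) - POWER(SUM(t.YEAR), 2)) AS SLOPE,
           (SUM(t.AMOUNT) * SUM(POWER(t.YEAR, 2)) - SUM(t.YEAR) * SUM(t.YEAR * t.AMOUNT))
             / (COUNT(1) * SUM(POWER(t.YEAR, 2)) - POWER(SUM(t.YEAR), 2)) AS INTERCEPT
    FROM (SELECT 1900 AS YEAR, 10.0 AS AMOUNT
          UNION ALL
          SELECT 2009, 11.5) t;  -- replace with the real per-year derived table
    ```

    Because this form matches the formulas in the question, it mainly helps confirm whether the inputs (for example, the ungrouped D.AMOUNT in the derived table), rather than the slope/intercept math, are pulling the line down.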

    Read the article

  • Unchecked call to compareTo

    - by Dave Jarvis
    Background Create a Map that can be sorted by value. Problem The code executes as expected, but does not compile cleanly: http://pastebin.com/bWhbHQmT The syntax for passing Comparable as a generic parameter along to the Map.Entry<K, V> (where V must be Comparable?) -- so that the (Comparable) typecast shown in the warning can be dropped -- eludes me. Warning Compiler's cantankerous complaint: SortableValueMap.java:24: warning: [unchecked] unchecked call to compareTo(T) as a member of the raw type java.lang.Comparable return ((Comparable)entry1.getValue()).compareTo( entry2.getValue() ); Question How can the code be changed to compile without any warnings (without suppressing them while compiling with -Xlint:unchecked)? Related TreeMap sort by value How to sort a Map on the values in Java? http://paaloliver.wordpress.com/2006/01/24/sorting-maps-in-java/ Thank you!

    Read the article

  • Web framework with JasperReports integration?

    - by Dave Jarvis
    What web development frameworks natively support JasperReports? Consider the following form as an example: <form name="report" method="post"> <input type="hidden" name="REPORT_PATH" value="reports/Names" /> <input type="hidden" name="REPORT_FILE" value="List" /> <input type="hidden" name="REPORT_FORMAT" value="pdf" /> <input type="hidden" name="REPORT_EMBED" value="false" /> Name: <input type="text" name="report_Name" value="" /><br /> Date: <input type="text" name="report_Date" value="" /><br /> <input type="submit" name="View" value="View" /> </form> The framework would pass the report_ parameters to JasperReports, which in turn runs reports/Names/List.jasper, and then sends a PDF attachment to the browser. In general: Ability to configure report (i.e., the hidden REPORT_ variables) Web FORM for setting report parameters (i.e., the report_ variables) Framework handles configuring database connection, report execution, etc. I don't care about the technical minutiae of how the integration works. The example above is just one possibility of many.

    Read the article

  • Get latitude and longitude using map interface

    - by Dave Jarvis
    Problem Allow website users to enter four latitude and longitude coordinates. Proposed Solution Integrate Google Maps API, and add a click event handler, similar to: http://itouchmap.com/latlong.html http://stackoverflow.com/questions/2770421/how-retrieve-latitude-and-longitude-via-google-maps-api http://marcgrabanski.com/article/jquery-google-maps-tutorial-basics The data would populate into a hidden form field. Questions What other ways (besides <input type='text' ... />) outside of Google's API are available to solve the problem? How would you restrict the number of lat/long points the user can choose? Would using those coordinates violate Google's Terms of Service? Thank you!

    Read the article

  • Sort latitude and longitude coordinates into clockwise quadrangle

    - by Dave Jarvis
    Problem
    Users can provide up to four latitude and longitude coordinates, in any order. They do so with Google Maps. Using Google's Polygon API (v3), the coordinates they select should highlight the selected area between the four coordinates.
    Solutions and Searches
    http://stackoverflow.com/questions/242404/sort-four-points-in-clockwise-order
    Graham's scan seems too complicated for four coordinates
    Sort the coordinates into two arrays (one by latitude, the other longitude) ... then?
    Question
    How do you sort the coordinates in (counter-)clockwise order, using JavaScript?
    Code
    Here is what I have so far:
      // Ensures the markers are sorted: NW, NE, SE, SW
      function sortMarkers() {
        var ns = markers.slice( 0 );
        var ew = markers.slice( 0 );
        ew.sort( function( a, b ) {
          if( a.lat() < b.lat() ) { return -1; }
          else if( a.lat() > b.lat() ) { return 1; }
          return 0;
        });
        ns.sort( function( a, b ) {
          if( a.lng() < b.lng() ) { return -1; }
          else if( a.lng() > b.lng() ) { return 1; }
          return 0;
        });
      }
    What is a better approach? Thank you.

    Read the article

  • Remove redundant SQL code

    - by Dave Jarvis
    Code The following code calculates the slope and intercept for a linear regression against a slathering of data. It then applies the equation y = mx + b against the same result set to calculate the value of the regression line for each row. Can the two separate sub-selects be joined so that the data and its slope/intercept are calculated without executing the data gathering part of the query twice? SELECT AVG(D.AMOUNT) as AMOUNT, Y.YEAR * ymxb.SLOPE + ymxb.INTERCEPT as REGRESSION_LINE, Y.YEAR as YEAR, MAKEDATE(Y.YEAR,1) as AMOUNT_DATE FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D, (SELECT ((avg(t.AMOUNT * t.YEAR)) - avg(t.AMOUNT) * avg(t.YEAR)) / (stddev( t.AMOUNT ) * stddev( t.YEAR )) as CORRELATION, ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE, ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT FROM ( SELECT AVG(D.AMOUNT) as AMOUNT, Y.YEAR as YEAR, MAKEDATE(Y.YEAR,1) as AMOUNT_DATE FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D WHERE $X{ IN, C.ID, CityCode } AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND Y.YEAR BETWEEN 1900 AND 2009 AND M.YEAR_REF_ID = Y.ID AND M.CATEGORY_ID = $P{CategoryCode} AND M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' GROUP BY Y.YEAR ) t ) ymxb WHERE $X{ IN, C.ID, CityCode } AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND Y.YEAR BETWEEN 1900 AND 2009 AND M.YEAR_REF_ID = Y.ID AND M.CATEGORY_ID = $P{CategoryCode} AND M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' GROUP BY Y.YEAR Question How do I execute the duplicate bits only once per query, instead of twice? The duplicate bit is the WHERE clause: $X{ IN, C.ID, CityCode } AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND Y.YEAR BETWEEN 1900 AND 2009 AND M.YEAR_REF_ID = Y.ID AND M.CATEGORY_ID = $P{CategoryCode} AND M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' Related http://stackoverflow.com/questions/1595659/how-to-eliminate-duplicate-calculation-in-sql Thank you!
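
    One way to run the shared data-gathering step only once is to stage the per-year aggregates first and then read the staging table twice. The sketch below uses a regular table rather than a TEMPORARY table because MySQL does not allow a TEMPORARY table to be referenced more than once in a single statement; the $X{...} and $P{...} placeholders are the JasperReports parameters from the question:

    ```sql
    -- Stage the shared per-year aggregate exactly once.
    CREATE TABLE year_amount AS
      SELECT AVG(D.AMOUNT) AS AMOUNT, Y.YEAR AS YEAR, MAKEDATE(Y.YEAR, 1) AS AMOUNT_DATE
      FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
      WHERE $X{ IN, C.ID, CityCode }
        AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius}
        AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
        AND Y.YEAR BETWEEN 1900 AND 2009
        AND M.YEAR_REF_ID = Y.ID
        AND M.CATEGORY_ID = $P{CategoryCode}
        AND M.ID = D.MONTH_REF_ID
        AND D.DAILY_FLAG_ID <> 'M'
      GROUP BY Y.YEAR;

    -- Read the staged rows twice: once for the data, once for the regression line.
    SELECT t.AMOUNT,
           t.YEAR * ymxb.SLOPE + ymxb.INTERCEPT AS REGRESSION_LINE,
           t.YEAR,
           t.AMOUNT_DATE
    FROM year_amount t
    CROSS JOIN (
      SELECT ((SUM(YEAR) * SUM(AMOUNT)) - (COUNT(1) * SUM(YEAR * AMOUNT)))
               / (POWER(SUM(YEAR), 2) - COUNT(1) * SUM(POWER(YEAR, 2))) AS SLOPE,
             ((SUM(YEAR) * SUM(YEAR * AMOUNT)) - (SUM(AMOUNT) * SUM(POWER(YEAR, 2))))
               / (POWER(SUM(YEAR), 2) - COUNT(1) * SUM(POWER(YEAR, 2))) AS INTERCEPT
      FROM year_amount
    ) ymxb;

    DROP TABLE year_amount;
    ```

    This is only a sketch: it assumes the session may create and drop a scratch table, and the staging/drop steps would need to live wherever the report runs its queries.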

    Read the article

  • Calculate the year for ending month/day?

    - by Dave Jarvis
    Given: Start Year Start Month & Start Day End Month & End Day What SQL statement results in TRUE if a date lands between the Start and End days? 1st example: Start Date = 11-22 End Date = 01-17 Start Year = 2009 Specific Date = 2010-01-14 TRUE 2nd example: Start Date = 11-22 End Date = 11-16 Start Year = 2009 Specific Date = 2010-11-20 FALSE 3rd example: Start Date = 02-25 End Date = 03-19 Start Year = 2004 Specific Date = 2004-02-29 TRUE I was thinking of using the MySQL functions datediff and sign plus a CASE condition to determine whether the year wraps, but it seems rather expensive. I am looking for a simple, efficient calculation. Update 1 The problem is the end date cannot simply use the year. The year must be increased if the end month/day combination happens before the start date. The start date is easy: Start Date = date( concat_ws( '-', year, Start Month, Start Day ) ) The end date is not so simple. Update 2 Here is what I was thinking about for obtaining the end year: end_year = case sign( diff( date( concat_ws( year, start_month, start_day ) ), date( concat_ws( year, end_month, end_day ) ) ) ) when -1 then Start_Year + 1 else Start_Year end case Then wrap that expression (once syntactically correct) inside of another date, followed by BETWEEN statement. Update 3 To clear up some confusion: there is no end year. The end year must be calculated. Thank you!
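
    Here is a sketch of one way to compute the wrapped end year in MySQL: add 1 to the start year whenever the end month/day falls before the start month/day. The table name season and the column names start_year, start_month, start_day, end_month and end_day are placeholders, not taken from a real schema:

    ```sql
    -- Example 1 from the question: should return 1 (TRUE).
    SET @specific_date := '2010-01-14';

    SELECT @specific_date BETWEEN
             STR_TO_DATE(CONCAT_WS('-', start_year, start_month, start_day), '%Y-%c-%e')
             AND
             STR_TO_DATE(CONCAT_WS('-',
                 start_year + IF(end_month < start_month
                                 OR (end_month = start_month AND end_day < start_day), 1, 0),
                 end_month, end_day), '%Y-%c-%e') AS within_range
    FROM season;
    ```

    Checked against the three examples above, the IF() yields end years 2010, 2010 and 2004 respectively, giving TRUE, FALSE and TRUE.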

    Read the article

  • What tricks do you use to get yourself "in the zone"?

    - by Tim Jarvis
    Once I am "in the zone" I am extremely productive and code just flows out of me; often I can get 2 or 3 days of coding done in 1 day. But I find that it's often hard to get to that place, and I find myself procrastinating, getting distracted by other things (SO, for example). Is this experience common? How do you force yourself into that state of mind? Is it simply something you can't force?

    Read the article

  • Slope requires a real as parameter 2?

    - by Dave Jarvis
    Question How do you pass the correct value to udf_slope's second parameter type? Attempts CAST(Y.YEAR AS FLOAT), but that failed (SQL error). Y.YEAR + 0.0, but that failed, too (see error message). slope(D.AMOUNT, 1.0), failed as well Error Message Using udf_slope fails due to: Can't initialize function 'slope'; slope() requires a real as parameter 2 Code SELECT D.AMOUNT, Y.YEAR, slope(D.AMOUNT, Y.YEAR + 0.0) as SLOPE, intercept(D.AMOUNT, Y.YEAR + 0.0) as INTERCEPT FROM YEAR_REF Y, DAILY D Here, D.AMOUNT is a FLOAT and Y.YEAR is an INTEGER. Create Function The slope function was created as follows: CREATE AGGREGATE FUNCTION slope RETURNS REAL SONAME 'udf_slope.so'; Function Signature From udf_slope.cc: double slope( UDF_INIT* initid, UDF_ARGS* args, char* is_null, char* is_error ) Example Usages Reading the fine manual reveals: UDF intercept() Calculates the intercept of the linear regression of two sets of variables. Function name intercept Input parameter(s) 2 (dependent variable: REAL, independent variable: REAL) Examples SELECT intercept(income,age) FROM customers UDF slope() Calculates the slope of the linear regression of two sets of variables. Function name slope Input parameter(s) 2 (dependent variable: REAL, independent variable: REAL) Examples SELECT slope(income,age) FROM customers Thoughts? Thank you!
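
    One workaround worth trying (a sketch only, not verified against udf_slope itself): in MySQL, a literal such as 0.0 is an exact DECIMAL value, so Y.YEAR + 0.0 produces a DECIMAL rather than a REAL, while a literal in scientific notation such as 1e0 is an approximate DOUBLE, so multiplying by it should hand the UDF a true REAL argument:

    ```sql
    SELECT D.AMOUNT,
           Y.YEAR,
           slope(D.AMOUNT, Y.YEAR * 1e0)     AS SLOPE,      -- 1e0 forces a DOUBLE (REAL) value
           intercept(D.AMOUNT, Y.YEAR * 1e0) AS INTERCEPT
    FROM YEAR_REF Y, DAILY D;
    ```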

    Read the article

  • Eliminate subquery for average numeric value

    - by Dave Jarvis
    Quest A query selects locations that begin with Vancouver, which are in a 5 minute radius from one another. SQL Code The following SQL abomination does the trick: SELECT NAME FROM STATION WHERE DISTRICT_ID = '110' AND NAME LIKE 'Vancouver%' AND LATITUDE BETWEEN (SELECT round((min(LATITUDE) + max(LATITUDE)) / 2)-5 FROM STATION WHERE DISTRICT_ID = '110' AND NAME LIKE 'Vancouver%') and (SELECT round((min(LATITUDE) + max(LATITUDE)) / 2)+5 FROM STATION WHERE DISTRICT_ID = '110' AND NAME LIKE 'Vancouver%') AND LONGITUDE BETWEEN (SELECT round((min(LONGITUDE) + max(LONGITUDE)) / 2)-5 FROM STATION WHERE DISTRICT_ID = '110' AND NAME LIKE 'Vancouver%') and (SELECT round((min(LONGITUDE) + max(LONGITUDE)) / 2)+5 FROM STATION WHERE DISTRICT_ID = '110' AND NAME LIKE 'Vancouver%') ORDER BY LATITUDE Question How can this query be simplified to remove the redundancy, without using a view? Restrictions The database is MySQL, but ANSI SQL is always nice. Thank you!
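
    One possible rewrite, sketched below: compute the two midpoints once in a derived table and cross join it against STATION, so the filtering subquery is written once instead of four times:

    ```sql
    SELECT s.NAME
    FROM STATION s
    CROSS JOIN (
      SELECT ROUND((MIN(LATITUDE)  + MAX(LATITUDE))  / 2) AS MID_LAT,
             ROUND((MIN(LONGITUDE) + MAX(LONGITUDE)) / 2) AS MID_LON
      FROM STATION
      WHERE DISTRICT_ID = '110'
        AND NAME LIKE 'Vancouver%'
    ) m
    WHERE s.DISTRICT_ID = '110'
      AND s.NAME LIKE 'Vancouver%'
      AND s.LATITUDE  BETWEEN m.MID_LAT - 5 AND m.MID_LAT + 5
      AND s.LONGITUDE BETWEEN m.MID_LON - 5 AND m.MID_LON + 5
    ORDER BY s.LATITUDE;
    ```

    The derived table produces a single row of midpoints, so the cross join adds no extra rows; the same shape works in MySQL and in ANSI SQL.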

    Read the article

  • Fill data gaps - UNION, PARTITION BY, or JOIN?

    - by Dave Jarvis
    Problem
    There are data gaps that need to be filled. I would like to avoid UNION or PARTITION BY if possible.
    Query Statement
    The select statement reads as follows:
      SELECT count( r.incident_id ) AS incident_tally, r.severity_cd, r.incident_typ_cd
      FROM report_vw r
      GROUP BY r.severity_cd, r.incident_typ_cd
      ORDER BY r.severity_cd, r.incident_typ_cd
    Data Sources
    The severity codes and incident type codes are from: severity_vw, incident_type_vw. The columns are: incident_tally, severity_cd, incident_typ_cd.
    Actual Result Data
      36 0 ENVIRONMENT
       1 1 DISASTER
      27 1 ENVIRONMENT
       4 2 SAFETY
       1 3 SAFETY
    Required Result Data
      36 0 ENVIRONMENT
       0 0 DISASTER
       0 0 SAFETY
      27 1 ENVIRONMENT
       0 1 DISASTER
       0 1 SAFETY
       0 2 ENVIRONMENT
       0 2 DISASTER
       4 2 SAFETY
       0 3 ENVIRONMENT
       0 3 DISASTER
       1 3 SAFETY
    Question
    How would you use UNION, PARTITION BY, or LEFT JOIN to fill in the zero counts?
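
    A sketch of the LEFT JOIN approach: cross join the two lookup views to build every severity/type combination, then left join the aggregate onto it and coalesce the missing counts to zero. It assumes severity_vw exposes a severity_cd column and incident_type_vw exposes an incident_typ_cd column, which the question implies but does not state outright:

    ```sql
    SELECT COALESCE(t.incident_tally, 0) AS incident_tally,
           s.severity_cd,
           it.incident_typ_cd
    FROM severity_vw s
    CROSS JOIN incident_type_vw it          -- every combination, including the empty ones
    LEFT JOIN (
      SELECT COUNT(r.incident_id) AS incident_tally,
             r.severity_cd,
             r.incident_typ_cd
      FROM report_vw r
      GROUP BY r.severity_cd, r.incident_typ_cd
    ) t ON t.severity_cd    = s.severity_cd
       AND t.incident_typ_cd = it.incident_typ_cd
    ORDER BY s.severity_cd, it.incident_typ_cd;
    ```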

    Read the article

  • How to fix my jQuery code in IE? It works in Firefox.

    - by scott jarvis
    I am using jQuery to show/hide a div container (#pluginOptionsContainer), and load a page (./plugin_options.php) inside it with the required POST vars sent. What POST data is sent is based on the value of a select list (#pluginDD) and the click of a button (#pluginOptionsBtn). It works fine in Firefox, but doesn't work in IE. The '$("#pluginOptionsContainer").load()' request never seems to finish in IE; I only see the loading message forever. bind(), empty() and append() all seem to work fine in IE, but not load(). Here is my code:
      // wait for the DOM to be loaded
      $(document).ready(function() {
        // hide the plugin options
        $('#pluginOptionsContainer').hide();
        // This is the hack for IE
        if ($.browser.msie) {
          $("#pluginDD").click(function() {
            this.blur();
            this.focus();
          });
        }
        // set the main function
        $(function() {
          // the button shows hides the plugin options page (and its container)
          $("#pluginOptionsBtn")
            .click(function() {
              // show the container of the plugin options page
              $('#pluginOptionsContainer').empty().append('<div style="text-align:center;width:99%;">Loading...</div>');
              $('#pluginOptionsContainer').toggle();
            });
          // set the loading message if user changes selection with either the dropdown or button
          $("#pluginDD,#pluginOptionsBtn").bind('change', function() {
            $('#pluginOptionsContainer').empty().append('<div style="text-align:center;width:99%;">Loading...</div>');
          });
          // then update the page when the plugin is changed when EITHER the plugin button or dropdown or clicked or changed
          $("#pluginDD,#pluginOptionsBtn").bind('change click', function() {
            // set form fields as vars in js
            var pid = <?=$pid;?>;
            var cid = <?=$contentid;?>;
            var pDD = $("#pluginDD").val();
            // add post vars (must use JSON) to be sent into the js var 'dataString'
            var dataString = {plugin_options: true, pageid: pid, contentid: cid, pluginDD: pDD };
            // include the plugin option page inside the container, with the required values already added into the query string
            $("#pluginOptionsContainer").load("/admin/inc/edit/content/plugin_options.php#pluginTop", dataString);
            // add this to stop page refresh
            return false;
          }); // end submit function
        }); // end main function
      }); // on DOM load
    Any help would be GREATLY appreciated! I hate IE!

    Read the article

  • Optimize master-detail insert statements

    - by Dave Jarvis
    Quest After a day of running (against nearly 1 GB of data), a set of statements is tumbling down to 40 inserts per second. I am looking to increase that by an order of magnitude or two. SQL Code The code to insert the information comes in two parts: a master record and detail records. The master record: INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH) VALUES ('101', '0066', '010', 1984, 07); The detail records: INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0, ' ', 1); INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0.5, ' ', 2); INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES ((SELECT ID FROM MONTH_REF M WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066' AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07), 0, 'T', 3); Proposed Solution INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH) VALUES ('101', '0066', '010', 1984, 07); SET @month_ref_id := (SELECT LAST_INSERT_ID()); INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, ' ', 1); INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0.5, ' ', 2); INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES (@month_ref_id, 0, 'T', 3); Constraints The MONTH_REF table has an AUTO_INCREMENT primary key and is indexed on it. The DAILY table has no index and no primary key. A primary key can be added to the DAILY table, if it would help. Question Is there a more efficient way to execute the (billion or so) insert statements than the proposed solution? Thank you!
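
    Building on the proposed solution, here is a sketch that keeps the LAST_INSERT_ID() lookup (so the repeated correlated subquery disappears) and also batches the detail rows into a single multi-row INSERT per month, assuming MONTH_REF.ID is the AUTO_INCREMENT key described in the constraints:

    ```sql
    INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
    VALUES ('101', '0066', '010', 1984, 07);

    SET @month_ref_id := LAST_INSERT_ID();

    -- One extended INSERT for the whole month instead of one statement per day.
    INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY) VALUES
      (@month_ref_id, 0,   ' ', 1),
      (@month_ref_id, 0.5, ' ', 2),
      (@month_ref_id, 0,   'T', 3);
    ```

    Wrapping many such batches in a single transaction, or loading the detail rows with LOAD DATA INFILE, is the usual next step if this alone is not fast enough.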

    Read the article

  • Update multiple table column values using single query

    - by Dave Jarvis
    How would you update data in multiple tables using a single query? In MySQL it would be: UPDATE party p LEFT JOIN party_name n ON p.party_id = n.party_id LEFT JOIN party_details d ON p.party_id = d.party_id LEFT JOIN incident_participant ip ON ip.party_id = p.party_id LEFT JOIN incident i ON ip.incident_id = i.incident_id SET p.employee_id = NULL, c.em_address = '[email protected]', c.ad_postal = 'x', n.first_name = 'x', n.last_name = 'x' WHERE i.confidential_dt IS NOT NULL What is the equivalent SQL statement using Oracle 11g? (Is ANSI SQL possible?) Thank you!
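
    Oracle has no multi-table UPDATE, so one common approach is a separate correlated UPDATE per table, guarded by EXISTS, all inside the same transaction. A sketch for just one of the tables (the others would get analogous statements):

    ```sql
    UPDATE party_name n
       SET n.first_name = 'x',
           n.last_name  = 'x'
     WHERE EXISTS (
             SELECT 1
               FROM incident_participant ip
               JOIN incident i ON i.incident_id = ip.incident_id
              WHERE ip.party_id = n.party_id
                AND i.confidential_dt IS NOT NULL
           );
    ```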

    Read the article

  • Auto scale and rotate images

    - by Dave Jarvis
    Given: two images of the same subject matter; the images have the same resolution, colour depth, and file format; the images differ in size and rotation; and two lists of (x, y) co-ordinates that correlate the images. I would like to know: How do you transform the larger image so that it visually aligns to the second image? (Optional.) What are the minimum number of points needed to get an accurate transformation? (Optional.) How far apart do the points need to be to get an accurate transformation? The transformation would need to rotate, scale, and possibly shear the larger image. Essentially, I want to create (or find) a program that does the following: Input two images (e.g., TIFFs). Click several anchor points on the small image. Click the several corresponding anchor points on the large image. Transform the large image such that it maps to the small image by aligning the anchor points. This would help align pictures of the same stellar object. (For example, a hand-drawn picture from 1855 mapped to a photograph taken by Hubble in 2000.) Many thanks in advance for any algorithms (preferably Java or similar pseudo-code), ideas or links to related open-source software packages.

    Read the article

  • Eliminate full table scan due to BETWEEN (and GROUP BY)

    - by Dave Jarvis
    Description
    According to the explain command, there is a range that is causing a query to perform a full table scan (160k rows). How do I keep the range condition and reduce the scanning? I expect the culprit to be:
      Y.YEAR BETWEEN 1900 AND 2009 AND
    Code
    Here is the code that has the range condition (the STATION_DISTRICT is likely superfluous).
      SELECT COUNT(1) as MEASUREMENTS,
             AVG(D.AMOUNT) as AMOUNT,
             Y.YEAR as YEAR,
             MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
      FROM CITY C,
           STATION S,
           STATION_DISTRICT SD,
           YEAR_REF Y FORCE INDEX(YEAR_IDX),
           MONTH_REF M,
           DAILY D
      WHERE
        -- For a specific city ... --
        C.ID = 10663 AND
        -- Find all the stations within a specific unit radius ... --
        6371.009 * SQRT(
          POW(RADIANS(C.LATITUDE_DECIMAL - S.LATITUDE_DECIMAL), 2) +
          (COS(RADIANS(C.LATITUDE_DECIMAL + S.LATITUDE_DECIMAL) / 2) *
           POW(RADIANS(C.LONGITUDE_DECIMAL - S.LONGITUDE_DECIMAL), 2)) ) <= 50 AND
        -- Get the station district identification for the matching station. --
        S.STATION_DISTRICT_ID = SD.ID AND
        -- Gather all known years for that station ... --
        Y.STATION_DISTRICT_ID = SD.ID AND
        -- The data before 1900 is shaky; insufficient after 2009. --
        Y.YEAR BETWEEN 1900 AND 2009 AND
        -- Filtered by all known months ... --
        M.YEAR_REF_ID = Y.ID AND
        -- Whittled down by category ... --
        M.CATEGORY_ID = '003' AND
        -- Into the valid daily climate data. --
        M.ID = D.MONTH_REF_ID AND
        D.DAILY_FLAG_ID <> 'M'
      GROUP BY Y.YEAR
    Update
    The SQL is performing a full table scan, which results in MySQL performing a "copy to tmp table", as shown here:
      +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+
      | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
      +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+
      | 1 | SIMPLE | C | const | PRIMARY | PRIMARY | 4 | const | 1 | |
      | 1 | SIMPLE | Y | range | YEAR_IDX | YEAR_IDX | 4 | NULL | 160422 | Using where |
      | 1 | SIMPLE | SD | eq_ref | PRIMARY | PRIMARY | 4 | climate.Y.STATION_DISTRICT_ID | 1 | Using index |
      | 1 | SIMPLE | S | eq_ref | PRIMARY | PRIMARY | 4 | climate.SD.ID | 1 | Using where |
      | 1 | SIMPLE | M | ref | PRIMARY,YEAR_REF_IDX,CATEGORY_IDX | YEAR_REF_IDX | 8 | climate.Y.ID | 54 | Using where |
      | 1 | SIMPLE | D | ref | INDEX | INDEX | 8 | climate.M.ID | 11 | Using where |
      +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+
    Related
    http://dev.mysql.com/doc/refman/5.0/en/how-to-avoid-table-scan.html
    http://dev.mysql.com/doc/refman/5.0/en/where-optimizations.html
    http://stackoverflow.com/questions/557425/optimize-sql-that-uses-between-clause
    Thank you!
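
    One thing worth trying (a sketch, not a verified fix for this schema): give YEAR_REF a composite index so the optimizer can resolve both the STATION_DISTRICT_ID join and the YEAR range from a single index, then remove the FORCE INDEX hint so it is free to use it:

    ```sql
    -- Composite index: equality column first, range column second.
    ALTER TABLE YEAR_REF
      ADD INDEX STATION_DISTRICT_YEAR_IDX (STATION_DISTRICT_ID, YEAR);
    ```

    With such an index the range on YEAR is applied only within each matching STATION_DISTRICT_ID, rather than across the entire 160k-row YEAR_IDX range.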

    Read the article

  • Resort (reorder) table data

    - by Dave Jarvis
    What is the fastest way to reorder (resort) a table? In MySQL, you can do: CREATE TABLE table2 LIKE table1; INSERT INTO table2 SELECT * FROM table1 ORDER BY name; DROP TABLE table1; RENAME TABLE table2 TO table1; What about PostgreSQL? Is there a faster way?
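
    In PostgreSQL, physical reordering is usually done with CLUSTER. A minimal sketch, assuming the sort column is called name and no suitable index exists yet (note that CLUSTER takes an exclusive lock while it rewrites the table):

    ```sql
    CREATE INDEX table1_name_idx ON table1 (name);
    CLUSTER table1 USING table1_name_idx;  -- rewrite the table in index order
    ANALYZE table1;                        -- refresh planner statistics afterwards
    ```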

    Read the article

  • Reporting Trend Line

    - by Dave Jarvis
    Hi, How would you create a line in JasperReports that follows the trend for the data, in addition to showing the data points? Here are before and after shots: Before After The Time Series report does not appear to have any such option to draw the orange line. (The orange line should be smooth, and thinner, but that's the general idea.) Any ideas how to craft such a report?

    Read the article

  • Transmogrify into date from multiple columns

    - by Dave Jarvis
    What is a better way to program the following SQL: str_to_date( concat(convert(D.DAY, CHAR(2)), '-', convert(M.MONTH, CHAR(2)), '-', convert(M.YEAR, CHAR(4))), '%e-%m-%Y' ) as AMOUNT_DATE I want to convert the columns D.DAY, M.MONTH, and M.YEAR to a date field using MySQL. The above works, but seems much more complicated than it needs to be. (If there's an ANSI SQL way to do it, that would be even better.) Thank you!
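
    Two shorter MySQL forms to try, sketched as drop-in replacements for the expression above; the first relies on MySQL's implicit integer-to-string conversion inside CONCAT_WS and on %c/%e accepting one- or two-digit months and days, the second uses pure date arithmetic:

    ```sql
    STR_TO_DATE(CONCAT_WS('-', M.YEAR, M.MONTH, D.DAY), '%Y-%c-%e') AS AMOUNT_DATE

    -- or, without any string formatting:
    MAKEDATE(M.YEAR, 1) + INTERVAL (M.MONTH - 1) MONTH + INTERVAL (D.DAY - 1) DAY AS AMOUNT_DATE
    ```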

    Read the article

  • MySQL Query still executing after a day?

    - by Matt Jarvis
    Hi - I'm trying to isolate duplicates in a 500MB database and have tried two ways to do it. One creates a new table and groups: CREATE TABLE test_table as SELECT * FROM items WHERE 1 GROUP BY title; But it's been running for an hour and in MySQL Admin it says the status is Locked. The other way I tried was to delete duplicates with this: DELETE bad_rows.* from items as bad_rows inner join ( select post_title, MIN(id) as min_id from items group by title having count(*) > 1 ) as good_rows on good_rows.post_title = bad_rows.post_title; ...and this has been running for 24 hours now, with Admin telling me it's Sending data... Do you think either of these queries are actually still running? How can I find out if it's hung? (with Apple OS X 10.5.7)
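
    A quick way to check whether a statement is still doing work is to look at its thread in the process list; the Time and State columns show how long it has been in its current state, and the Id column is what you would pass to KILL if you decide to abandon it:

    ```sql
    SHOW FULL PROCESSLIST;   -- inspect the State and Time columns for the long-running thread
    -- KILL <thread_id>;     -- only if you decide the statement is not worth waiting for
    ```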

    Read the article

  • Word frequency tally script is too slow

    - by Dave Jarvis
    Background Created a script to count the frequency of words in a plain text file. The script performs the following steps: Count the frequency of words from a corpus. Retain each word in the corpus found in a dictionary. Create a comma-separated file of the frequencies. The script is at: http://pastebin.com/VAZdeKXs Problem The following lines continually cycle through the dictionary to match words: for i in $(awk '{if( $2 ) print $2}' frequency.txt); do grep -m 1 ^$i\$ dictionary.txt >> corpus-lexicon.txt; done It works, but it is slow because it is scanning the words it found to remove any that are not in the dictionary. The code performs this task by scanning the dictionary for every single word. (The -m 1 parameter stops the scan when the match is found.) Question How would you optimize the script so that the dictionary is not scanned from start to finish for every single word? The majority of the words will not be in the dictionary. Thank you!

    Read the article

  • Client Server Assembly Missing? Db4Objects 7.4

    - by Tim Jarvis
    I have downloaded the current version of Db4Objects (7.4) and installed it. It appears to be missing the Client Server assembly Db4objects.Db4o.CS.dll Does anyone know if Client Server has changed with this version? If it has, does anyone have some details about creating a simple Server?

    Read the article

  • Optimize Duplicate Detection

    - by Dave Jarvis
    Background
    This is an optimization problem. Oracle Forms XML files have elements such as:
      <Trigger TriggerName="name" TriggerText="SELECT * FROM DUAL" ... />
    Where the TriggerText is arbitrary SQL code. Each SQL statement has been extracted into uniquely named files such as:
      sql/module=DIAL_ACCESS+trigger=KEY-LISTVAL+filename=d_access.fmb.sql
      sql/module=REP_PAT_SEEN+trigger=KEY-LISTVAL+filename=rep_pat_seen.fmb.sql
    I wrote a script to generate a list of exact duplicates using a brute force approach.
    Problem
    There are 37,497 files to compare against each other; it takes 8 minutes to compare one file against all the others. Logically, if A = B and A = C, then there is no need to check if B = C. So the problem is: how do you eliminate the redundant comparisons? The script will complete in approximately 208 days.
    Script Source Code
    The comparison script is as follows:
      #!/bin/bash
      echo Loading directory ...
      for i in $(find sql/ -type f -name \*.sql); do
        echo Comparing $i ...
        for j in $(find sql/ -type f -name \*.sql); do
          if [ "$i" = "$j" ]; then continue; fi
          # Case insensitive compare, ignore spaces
          diff -IEbwBaq $i $j > /dev/null
          # 0 = no difference (i.e., duplicate code)
          if [ $? = 0 ]; then
            echo $i :: $j >> clones.txt
          fi
        done
      done
    Question
    How would you optimize the script so that checking for cloned code is a few orders of magnitude faster?
    System Constraints
    Using a quad-core CPU with an SSD; trying to avoid using cloud services if possible. The system is a Windows-based machine with Cygwin installed -- algorithms or solutions in other languages are welcome. Thank you!

    Read the article
