Search Results

Search found 4685 results on 188 pages for 'queries'.

Page 23/188 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • How do I update a cumulative field in a Rails database (using ActiveRecord or Mongoid)?

    - by picardo
    I want to update a field in a database table that has to hold a cumulative value. So basically I need to find the current value of the field and update it using a new number. My first inefficient try at this (in Mongoid) is:

        v = Landlord.where(:name => "Lorem")
        v.update_attributes(:violations => v.violations + 10)

    Is there a simpler method than making one query to read, summing up, and another query to write?
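
    If the data sits in a relational table, the read-then-write round trip can be avoided with a single atomic update; a minimal sketch, with illustrative table and column names:

        UPDATE landlords
        SET violations = violations + 10
        WHERE name = 'Lorem';

    Mongoid (inc) and ActiveRecord (increment!, update_counters) both expose atomic increment helpers that achieve the same thing without loading the current value into the application first.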

    Read the article

  • Need help with Binary Columns

    - by nusrath
    Hi, I have to query a binary column with data like ®â{Õ¦K!Eòû¦?;#§ø. How can I query this column if I want something like:

        SELECT Name FROM Users WHERE ID = ®â{Õ¦K!Eòû¦?;#§ø

    Thanks
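
    Rather than pasting the raw bytes into the statement, a binary value is usually compared against a hexadecimal literal (or passed as a bound parameter); a sketch assuming SQL Server/MySQL-style 0x syntax, with illustrative bytes:

        SELECT Name
        FROM Users
        WHERE ID = 0x8AE27BD5A64B2145;  -- hex literal standing in for the binary value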

    Read the article

  • Convert charset in mysql query

    - by Yousf
    Hi, I have a question about converting the charset from inside a MySQL query. I have two databases: one for the website (Joomla), the other for a forum (IPB). I am querying from inside Joomla, which by default has "SET NAMES UTF8". I want to query a table inside the forum database called "ibf_topics". This table has latin1 encoding. I do the following to select anything from the non-UTF-8 table:

        //convert connection to handle latin1.
        $query = "SET NAMES latin1";
        $db->setQuery($query);
        $db->query();

        $query = "select id, title from other_database.ibf_topics";
        $db->setQuery($query);
        $db->query();
        //read result into an array.

        //return connection to handle UTF8.
        $query = "SET NAMES UTF8";
        $db->setQuery($query);
        $db->query();

    After that, when I want to use the selected title, I use the following:

        echo iconv("CP1256", "UTF-8", $topic['title']);

    The question is: is there any way to avoid all this hassle? For now, I can't change the forum database to UTF-8 and I can't change the Joomla database to latin1 :S
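
    MySQL can reinterpret the column inside the query itself, which lets the connection stay in UTF-8; a sketch, assuming (as the iconv call suggests) that the bytes stored in the latin1-declared column are really Windows-1256:

        SELECT id,
               CONVERT(CAST(title AS BINARY) USING cp1256) AS title
        FROM other_database.ibf_topics;

    With the connection still under SET NAMES UTF8, MySQL should transcode the reinterpreted value to UTF-8 on the way out, so neither the SET NAMES switching nor the iconv call ought to be needed.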

    Read the article

  • In MYSQL is it better to have one big table or many smaller tables

    - by user307922
    Hi All, I am making a database of my clients' customers to send email promotions to. The database will cover about 12 of my clients, and each of them has an average of 2,100 customers. I was wondering if it would be better to have a table in the DB for each one of my clients that contains a list of their customers, or if I should just make one big table... The customers will be queried daily. I know it is a broad question but any advice would be appreciated. Cheers, Chuck
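
    At this scale (roughly 25,000 rows in total), the conventional design is a single customers table with a client_id column rather than one table per client; a minimal sketch with illustrative columns:

        CREATE TABLE customers (
            id        INT AUTO_INCREMENT PRIMARY KEY,
            client_id INT NOT NULL,
            name      VARCHAR(100),
            email     VARCHAR(255) NOT NULL,
            INDEX idx_customers_client (client_id)
        );

    Per-client mailings then filter on client_id, and taking on a thirteenth client is a new row rather than a schema change.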

    Read the article

  • Can I query inside a SQLite Transaction?

    - by CSharperWithJava
    I'm using the SQLite3 database system in the Android library. I need to execute a query during a transaction to see if there is a similar entry already there. If there is, I have to perform some other logic and adjustments before I add a new row. Can I execute a query within a transaction and get the result back immediately?
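
    In plain SQLite a SELECT issued inside an open transaction returns its result immediately, and the same applies on Android when the query() and insert() calls sit between beginTransaction() and setTransactionSuccessful()/endTransaction(). A sketch of the pattern in raw SQL, with illustrative table and column names:

        BEGIN TRANSACTION;
        SELECT id FROM entries WHERE title = 'foo';        -- check for a similar entry
        -- application logic inspects the result and adjusts the new row accordingly
        INSERT INTO entries (title) VALUES ('foo (2)');
        COMMIT;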

    Read the article

  • Look over my C# SQLite Query, what am I doing wrong?

    - by CODe
    I'm writing a WinForms database application using SQLite and C#. I have a SQLite query that is failing, and I'm unsure as to where I'm going wrong, as I've tried everything I could think of.

        public DataTable searchSubs(String businessName, String contactName)
        {
            string SQL = null;
            if ((businessName != null && businessName != "") && (contactName != null && contactName != ""))
            {
                // provided business name and contact name for search
                SQL = "SELECT * FROM SUBCONTRACTOR WHERE BusinessName LIKE %@BusinessName% AND Contact LIKE %@ContactName%";
            }
            else if ((businessName != null && businessName != "") && (contactName == null || contactName == ""))
            {
                // provided business name only for search
                SQL = "SELECT * FROM SUBCONTRACTOR WHERE BusinessName LIKE %@BusinessName%";
            }
            else if ((businessName == null || businessName == "") && (contactName != null && contactName != ""))
            {
                // provided contact name only for search
                SQL = "SELECT * FROM SUBCONTRACTOR WHERE Contact LIKE %@ContactName%";
            }
            else if ((businessName == null || businessName == "") && (contactName == null || contactName == ""))
            {
                // provided no search information
                SQL = "SELECT * FROM SUBCONTRACTOR";
            }

            SQLiteCommand cmd = new SQLiteCommand(SQL);
            cmd.Parameters.AddWithValue("@BusinessName", businessName);
            cmd.Parameters.AddWithValue("@ContactName", contactName);
            cmd.Connection = connection;

            SQLiteDataAdapter da = new SQLiteDataAdapter(cmd);
            DataSet ds = new DataSet();
            try
            {
                da.Fill(ds);
                DataTable dt = ds.Tables[0];
                return dt;
            }
            catch (Exception e)
            {
                MessageBox.Show(e.ToString());
                return null;
            }
            finally
            {
                cmd.Dispose();
                connection.Close();
            }
        }

    I continually get an error saying that it is failing near the %'s. That's all fine and dandy, but I guess I'm structuring it wrong, and I don't know where! I tried adding apostrophes around the "like" variables, like this:

        SQL = "SELECT * FROM SUBCONTRACTOR WHERE Contact LIKE '%@ContactName%'";

    and quite honestly, that is all I can think of. Anyone have any ideas?
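
    The parser chokes because % cannot wrap a parameter placeholder directly in the SQL text, and '%@ContactName%' inside quotes is just a literal string rather than a parameter. One fix is to splice the wildcards in with SQLite's || string concatenation and leave the placeholder bare; a sketch of the first branch:

        SELECT * FROM SUBCONTRACTOR
        WHERE BusinessName LIKE '%' || @BusinessName || '%'
          AND Contact      LIKE '%' || @ContactName  || '%';

    The other common route is to keep "LIKE @BusinessName" in the SQL and pass "%" + businessName + "%" as the parameter value from C#.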

    Read the article

  • How to insert an n:m-relationship with technical primary keys generated by a sequence?

    - by bitschnau
    Let's say I have two tables with several fields, and in every table there is a primary key which is a technical id generated by a database sequence:

        table1              table2
        -------------       -------------
        field11 <pk>        field21 <pk>
        field12             field22

    field11 and field21 are generated by sequences. Also there is an n:m-relationship between table1 and table2, designed in table3:

        table3
        -------------
        field11 <fk>
        field21 <fk>

    The ids in table1 and table2 are generated during the insert statement:

        INSERT INTO table1 VALUES (table1_seq1.NEXTVAL, ...
        INSERT INTO table2 VALUES (table2_seq1.NEXTVAL, ...

    Therefore I don't know the primary key of the added row in the data-access layer of my program, because the generation of the pk happens completely in the database. What's the best practice to update table3 now? How can I gain access to the primary keys of the rows I just inserted?
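
    The NEXTVAL syntax suggests Oracle, where the generated key can be handed straight back to the caller with a RETURNING clause (alternatively, fetch the sequence's NEXTVAL first and reuse the value in both the insert and the link-table row); a sketch with illustrative bind variables:

        INSERT INTO table1 (field11, field12)
        VALUES (table1_seq1.NEXTVAL, :v12)
        RETURNING field11 INTO :id1;

        INSERT INTO table2 (field21, field22)
        VALUES (table2_seq1.NEXTVAL, :v22)
        RETURNING field21 INTO :id2;

        INSERT INTO table3 (field11, field21)
        VALUES (:id1, :id2);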

    Read the article

  • sql combine two subqueries

    - by Claudiu
    I have two tables. Table A has an id column. Table B has an Aid column and a type column. Example data:

        A:
        id
        --
        1
        2

        B:
        Aid | type
        ----+-----
          1 |    1
          1 |    1
          1 |    3
          1 |    1
          1 |    4
          1 |    5
          1 |    4
          2 |    2
          2 |    4
          2 |    3

    I want to get all the IDs from table A where there is a certain amount of type 1 and type 3 actions. My query looks like this:

        SELECT id FROM A
        WHERE (SELECT COUNT(type) FROM B WHERE B.Aid = A.id AND B.type = 1) = 3
          AND (SELECT COUNT(type) FROM B WHERE B.Aid = A.id AND B.type = 3) = 1

    So on the data above, just the id 1 should be returned. Can I combine the 2 subqueries somehow?
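
    Both counts can come from a single pass over B by using conditional aggregation; a sketch:

        SELECT A.id
        FROM A
        JOIN B ON B.Aid = A.id
        GROUP BY A.id
        HAVING SUM(CASE WHEN B.type = 1 THEN 1 ELSE 0 END) = 3
           AND SUM(CASE WHEN B.type = 3 THEN 1 ELSE 0 END) = 1;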

    Read the article

  • Find maximum number of logged on users in SQL

    - by lleto
    Hi, I want to keep tabs on the number of concurrent users of my application. I therefore log a time_start and a time_stop. If I now want to query the database for the maximum number of logged-on users and return the start date, how would I do that? The table looks like this:

        id | time_start          | time_stop
        ---+---------------------+---------------------
         1 | 2010-03-07 05:40:59 | 2010-03-07 05:41:33
         2 | 2010-03-07 06:50:51 | 2010-03-07 10:50:51
         3 | 2010-02-21 05:20:00 | 2010-03-07 12:23:44
         4 | 2010-02-19 08:21:12 | 2010-03-07 12:37:28
         5 | 2010-02-13 05:52:13 |

    Where time_stop is empty the user is still logged on. In this case I would expect to see 2010-03-07 returned, since all users (5) were logged on at that moment. However, if I ran the query with "where time_start BETWEEN '2010-02-17' AND '2010-02-23'" I would expect to see 2010-02-21 with a maximum of 2. Is this possible directly in SQL (using postgres) or do I need to parse the results in PHP? Thanks, lleto
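
    The concurrency peak always coincides with some user's login instant, so one approach is to count, for each time_start, how many sessions overlap that moment and keep the highest; a sketch assuming the table is called sessions (add the same time_start filter at both levels to restrict the reporting window):

        SELECT s.time_start,
               (SELECT COUNT(*)
                FROM sessions o
                WHERE o.time_start <= s.time_start
                  AND (o.time_stop IS NULL OR o.time_stop >= s.time_start)) AS logged_on
        FROM sessions s
        ORDER BY logged_on DESC, s.time_start
        LIMIT 1;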

    Read the article

  • Nested and complicated select statement

    - by Selase
    What I want to do here is simple... display an investigator's ID and his corresponding name. That can easily be done from the users table by selecting based on the user type. However, I want to select only certain investigators. The idea is that investigators are assigned to an exhibit for them to investigate, and one investigator can be assigned to a maximum of 3 cases only. Now, during the assigning of investigators, I want to write a select statement that retrieves only the investigator IDs that have been assigned to 2 or fewer cases. I have included the exhibit and users tables that show sample data below. I sort of have an idea that I will first have to pick out all the investigators by their ID from the users list, then filter them through the exhibit table by dropping those assigned to 3 cases and leaving just those with two or fewer, and afterwards use these IDs to select the investigators' names. The big question is: how do I write the statement?
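
    One common shape for this is a grouped LEFT JOIN with a HAVING filter; a sketch, with illustrative table and column names since the sample data did not come through:

        SELECT u.id, u.name
        FROM users u
        LEFT JOIN exhibit e ON e.investigator_id = u.id
        WHERE u.user_type = 'investigator'
        GROUP BY u.id, u.name
        HAVING COUNT(e.investigator_id) <= 2;

    The LEFT JOIN keeps investigators with no assignments at all, since COUNT over the join column is 0 for them.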

    Read the article

  • mobile device - different background - media query not working

    - by AMC
    Live site. This is my first attempt at utilizing a different CSS for mobile devices vs. regular screens. To do this, I'm using:

        @media only screen and (min-device-width : 320px) and (max-device-width : 480px) {
            background: url('img/background.jpg') no-repeat center center fixed;
            background-size: cover;
            height: 100%;
        }

    However, it doesn't seem to be working (I can only test on iPhones). Any ideas as to why that may be? I've also tried @media all to no avail.

    Read the article

  • Recommended book for Sql Server query optimisation

    - by Patrick Honorez
    Even though I have passed a certification exam on SQL Server design and implementation, I have no clue about how to trace/debug/optimise performance in SQL Server. Now the database I built is really business critical, and getting big, so it is time for me to dig into optimisation, especially regarding when/where to add indexes. Can you recommend a good book on this subject? (Smaller is better :) Just in case: I am using SQL Server 2008. Thanks

    Read the article

  • Is it possible to merge two MySQL databases into one?

    - by Mike L.
    Let's say I have two MySQL databases with some complex table structures, and no table name appears in both. Let's say these tables contain no rows (they do, but I could truncate the tables; the data is not important right now, just testing stuff). Let's say I need these 2 databases merged into one. For instance:

        DB1:
          cities
          states

        DB2:
          index
          subindex
          posts

    I want to end up with a single DB that contains:

        cities
        states
        index
        subindex
        posts

    Is this possible?
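
    Because the table names don't collide, the tables from one database can simply be moved into the other; a sketch using MySQL's RENAME TABLE, which works across databases on the same server (index needs quoting because it is a reserved word):

        RENAME TABLE db2.`index`  TO db1.`index`,
                     db2.subindex TO db1.subindex,
                     db2.posts    TO db1.posts;

    Restoring a mysqldump of one database into the other achieves the same end and also carries the data along.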

    Read the article

  • Any dynamic database frontend tool from which you can update directly?

    - by Enjoy coding
    Hi Gurus, This is with reference to this question where I got one tabular report format. I need to update the user-entered values correctly back to the same table rows. I am in the process of doing this with general form-post methods, using some logic which I think will not be easy to maintain. So, just out of curiosity: is there a front-end creator JavaScript library or framework which can build the front end for any query's result set and update the corresponding rows when the user edits them from the front end? It need not have full functionality; any usable, customizable tool will reduce the code maintenance problems. I have googled for JavaScript libraries for this but am not able to tell which would be suitable. Please suggest any useful tools. My environment is MySQL, PHP, jQuery, and XAMPP server on Windows. Does jQuery provide one? Thanks in advance.

    Read the article

  • Calculate value from two field in third field

    - by terence6
    I'm trying to create a query that will calculate the sum of products on an invoice. I have 3 tables:

        Product (with the product's price)
        Invoice (with an invoice id)
        Products on invoice (with invoice id, product id and the number of particular products)

    So in my query I take invoice_id (from Invoice), price (from Product), the number of products sold and invoice_id (from Products on invoice), and calculate their product in a fourth column. I know I should use 'Totals', but how do I achieve that? Model:
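
    In SQL terms the 'Totals' step is a GROUP BY over the invoice id with a SUM of price times quantity; a sketch with illustrative table and column names (in the Access query designer this corresponds to "Group By" on the invoice id and "Sum" on the price*quantity expression):

        SELECT poi.invoice_id,
               SUM(p.price * poi.quantity) AS invoice_total
        FROM products_on_invoice AS poi
        INNER JOIN product AS p ON p.product_id = poi.product_id
        GROUP BY poi.invoice_id;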

    Read the article

  • Conditional PIVOT/transform problem

    - by IanC
    Hi folks, I have a table with three columns, which we'll call ID, ID1, and Value. Sample data:

        ID  ID1  Value
         1    1      0
         1    2      1
         1    3      1
         1    3      2
         1    4      0
         1    4      1
         1    5      0
         1    5      2
         2    1      2

    Value is limited to 0, 1, or 2. What I need to do is pivot/transform this data into a column-based count of how many times each possible Value appears, grouped by ID, ID1. The output of the above should be:

        ID  ID1  Val0  Val1  Val2
         1    1     1     0     0
         1    2     0     2     0
         1    3     0     1     1
         1    4     1     1     0
         1    5     1     0     1
         2    1     0     0     1

    I'm using SQL Server 2008. How do I do this?
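
    On SQL Server 2008 this maps naturally onto conditional aggregation (a PIVOT would also work); a sketch, assuming the table is called dbo.SampleData:

        SELECT ID, ID1,
               SUM(CASE WHEN [Value] = 0 THEN 1 ELSE 0 END) AS Val0,
               SUM(CASE WHEN [Value] = 1 THEN 1 ELSE 0 END) AS Val1,
               SUM(CASE WHEN [Value] = 2 THEN 1 ELSE 0 END) AS Val2
        FROM dbo.SampleData
        GROUP BY ID, ID1
        ORDER BY ID, ID1;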

    Read the article

  • Deleting unneeded rows from a table with 2 criteria

    - by stormbreaker
    Hello. I have a many-to-many relations table and I need to DELETE the unneeded rows. The lastviews table's structure is:

        | user (int) | document (int) | time (datetime) |

    This table logs the last users who viewed a document, and (user, document) is unique. I show only the last 10 views of a document, and until now I deleted the unneeded rows like this:

        DELETE FROM `lastviews`
        WHERE `document` = ?
          AND `user` NOT IN (
              SELECT * FROM (
                  SELECT `user` FROM `lastviews`
                  WHERE `document` = ?
                  ORDER BY `time` DESC LIMIT 10
              ) AS TAB)

    However, now I also need to show the last 5 documents a user has viewed. This means I can no longer delete rows using the previous query, because it might delete information I need (say a user hasn't viewed any documents in 5 minutes and the rows get deleted). To sum up, I need to delete all the records that don't fit either of these 2 criteria:

        SELECT ... FROM `lastviews` WHERE `document` = ? ORDER BY `time` DESC LIMIT 10

    and

        SELECT * FROM `lastviews` WHERE `user` = ? ORDER BY `time` DESC LIMIT 0, 5

    I need the logic. Thanks in advance.
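
    The keep-rule is: a row survives if it is among the newest 10 for its document or among the newest 5 for its user; everything else may go. A sketch assuming MySQL 8.0+ for the window functions (the ranking subquery is materialized as a derived table, which sidesteps the usual restriction on re-selecting the delete target), relying on (user, document) being unique:

        DELETE lv
        FROM lastviews AS lv
        JOIN (
            SELECT `user`, `document`,
                   ROW_NUMBER() OVER (PARTITION BY `document` ORDER BY `time` DESC) AS doc_rank,
                   ROW_NUMBER() OVER (PARTITION BY `user`     ORDER BY `time` DESC) AS user_rank
            FROM lastviews
        ) AS ranked
          ON ranked.`user` = lv.`user` AND ranked.`document` = lv.`document`
        WHERE ranked.doc_rank > 10
          AND ranked.user_rank > 5;

    On older MySQL versions the same two ranks can be computed with user variables or in PHP, and the qualifying (user, document) pairs deleted by key.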

    Read the article

  • When to use CTEs to encapsulate sub-results, and when to let the RDBMS worry about massive joins.

    - by IanC
    This is a SQL theory question. I can provide an example, but I don't think it's needed to make my point. Anyone experienced with SQL will immediately know what I'm talking about. Usually we use joins to minimize the number of records by matching the left and right rows. However, under certain conditions, joining tables causes a multiplication of results, where the result is all permutations of the left and right records. I have a database which has 3 or 4 such joins. This turns what would be a few records into a multitude. My concern is that the tables will be large in production, so the number of these joined rows will be immense. Further, heavy math is performed on each row, and the idea of performing math on duplicate rows is enough to make anyone shudder. I have two questions. The first is: is this something I should care about, or will SQL Server intelligently realize these rows are all duplicates and optimize all processing accordingly? The second is: is there any advantage to grouping each part of the query so as to get only the distinct values going into the next part of the query, using something like:

        WITH t1 AS (
            SELECT DISTINCT ...  -- [or GROUP BY]
        ), t2 AS (
            SELECT DISTINCT ...
        ), t3 AS (
            SELECT DISTINCT ...
        )
        SELECT ...

    I have often seen the use of DISTINCT applied to subqueries. There is obviously a reason for doing this. However, I'm talking about something a little different and perhaps more subtle and tricky.

    Read the article

  • Minimizing calls to database in rails

    - by ming yeow
    Hi guys, I am familiar with memcached and eager loading, but neither seems to solve the problem I am facing. My main performance lag comes from hundreds of data-retrieval calls to the database. The tricky thing is that I do not know which set of users I need to retrieve until I have done several steps of computation. I can refactor my code, but I was wondering how you experts handle this situation? I think it should be a fairly common one:

        def newsfeed
          # find out which users I need
          # retrieve those users via the DB
          # find out which events happened for these users
          # for each of those events
          #   retrieve a new set of users
          #   find out which groups are relevant
          #   for each of those groups
          #     retrieve a new set of users
          #   etc., etc.
        end

    Read the article

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE DATE(timestamp) >= '20100101'
          AND DATE(timestamp) < '20110101'
        GROUP BY day;

    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

        GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
          ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
                Sort Key: (date("timestamp"))
                Sort Method: external merge  Disk: 372496kB
                ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                      Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried to index on the date aggregate:

        CREATE INDEX ON actions (DATE(timestamp));

    which cuts the runtime by about 50%:

        HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
          ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
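
    One common tweak is to leave the raw timestamp column in the WHERE clause (so a plain btree index on it can serve the range scan) and only apply DATE() when grouping; a sketch:

        CREATE INDEX actions_timestamp_idx ON actions (timestamp);

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE timestamp >= '2010-01-01'
          AND timestamp < '2011-01-01'
        GROUP BY day;

    That said, roughly 17 of the 20 million rows fall inside this particular range, so the planner may reasonably stick with a sequential scan; if the per-day counts are needed often, pre-aggregating them into a small summary table (maintained by a trigger or a periodic job) is the other usual route.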

    Read the article

  • Implementing "select distinct ... from ..." over a list of Python dictionaries

    - by daveslab
    Hi folks, Here is my problem: I have a list of Python dictionaries of identical form that are meant to represent the rows of a table in a database, something like this:

        [ {'ID': 1, 'NAME': 'Joe', 'CLASS': '8th',  ... },
          {'ID': 1, 'NAME': 'Joe', 'CLASS': '11th', ... },
          ... ]

    I have already written a function to get the unique values for a particular field in this list of dictionaries, which was trivial. That function implements something like:

        select distinct NAME from ...

    However, I want to be able to get the list of multiple unique fields, similar to:

        select distinct NAME, CLASS from ...

    which I am finding to be non-trivial. Is there an algorithm or built-in Python function to help me with this quandary? Before you suggest loading the CSV files into a SQLite table or something similar, that is not an option in the environment I'm in, and trust me, that was my first thought.

    Read the article

  • How to make the most of GWT's "Search queries"?

    - by DisgruntledGoat
    I've been looking at the "Search queries" section in Google Webmaster Tools recently, and it seems like there is a lot of potential there in finding which pages on a site need improvement. I'm trying to figure out exactly what to sort or filter on. Do I look at pages with a low average position? Low impressions but high clicks? Pages that are rising up/falling down the rankings? What is the low-hanging fruit here?

    Read the article

  • how can I stop my sonicwall TZ-210 (SonicOS Enhanced 5.5.1.0-5o) from responding to arp queries on the wan subnet?

    - by IsaacB
    My SonicWall TZ-210 is answering ARP queries on the WAN subnet (which my ISP doesn't like), basically mapping all the WAN IPs to its own MAC address and causing network havoc, since it is not set to route those back to the main ISP gateway. How the heck can I turn this behavior off? I have already entered all the WAN subnet IPs in the static ARP cache and left them 'unpublished', which I presumed would mean it would not bother answering ARP queries for them. Apparently that did not do the trick; ARP queries are still being answered, unfortunately. What can I do? Any suggestions?

    Read the article

  • How can I tell if my ISP is redirecting my DNS queries?

    - by Nack
    I've attempted to use some DNS services like OpenDNS, and no matter what I do the DNS queries don't return the expected results. Watching the packet traffic on my firewall, I can see the queries go out to the intended DNS server address and responses coming back, but the results are not as expected, for example, the OpenDNS test page always fails even though the requests appear to be going to their servers. I suspect my ISP is intercepting DNS queries and sending them to their own servers. Is there a way to verify this? Is there something else I might be missing? I'm using 3G wireless service from Sprint.

    Read the article

  • Slow queries in Rails- not sure if my indexes are being used.

    - by Max Williams
    I'm doing a quite complicated find with lots of includes, which Rails is splitting into a sequence of discrete queries rather than doing a single big join. The queries are really slow - my dataset isn't massive, with none of the tables having more than a few thousand records. I have indexed all of the fields which are examined in the queries, but I'm worried that the indexes aren't helping for some reason: I installed a plugin called "query_reviewer" which looks at the queries used to build a page and lists problems with them. It states that indexes AREN'T being used, and it features the results of calling EXPLAIN on the query, which lists various problems. Here's an example find call:

        Question.paginate(:all, {:page=>1,
          :include=>[:answers, :quizzes, :subject, {:taggings=>:tag}, {:gradings=>[:age_group, :difficulty]}],
          :conditions=>["((questions.subject_id = ?) or (questions.subject_id = ? and tags.name = ?))", "1", 19, "English"],
          :order=>"subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id",
          :per_page=>30})

    And here are the generated SQL queries:

        SELECT DISTINCT `questions`.id
        FROM `questions`
        LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question'
        LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id
        LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id
        LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id
        WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English')))
        ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id
        LIMIT 0, 30

        SELECT `questions`.`id` AS t0_r0 <..etc...>
        FROM `questions`
        LEFT OUTER JOIN `answers` ON answers.question_id = questions.id
        LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`)
        LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`)
        LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id
        LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question'
        LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id
        LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id
        LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id
        LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id
        WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English')))
          AND `questions`.id IN (602, 634, 666, 698, 730, 762, 613, 645, 677, 709, 741, 592, 624, 656, 688, 720, 752, 603, 635, 667, 699, 731, 763, 614, 646, 678, 710, 742, 593, 625)
        ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id

        SELECT count(DISTINCT `questions`.id) AS count_all
        FROM `questions`
        LEFT OUTER JOIN `answers` ON answers.question_id = questions.id
        LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`)
        LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`)
        LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id
        LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question'
        LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id
        LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id
        LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id
        LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id
        WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English')))

    Actually, looking at these all nicely formatted here, there's a crazy amount of joining going on. This can't be optimal, surely. Anyway, it looks like I have two questions.

    1) I have an index on each of the id and foreign key fields referred to here. The second of the above queries is the slowest, and calling EXPLAIN on it (directly in MySQL) gives me the following:

        id | select_type | table          | type   | possible_keys                                                                    | key                                              | key_len | ref                                            | rows | Extra
        1  | SIMPLE      | questions      | range  | PRIMARY,index_questions_on_subject_id                                            | PRIMARY                                          | 4       | NULL                                           | 30   | Using where; Using temporary; Using filesort
        1  | SIMPLE      | answers        | ref    | index_answers_on_question_id                                                     | index_answers_on_question_id                     | 5       | millionaire_development.questions.id           | 2    |
        1  | SIMPLE      | quiz_questions | ref    | index_quiz_questions_on_question_id                                              | index_quiz_questions_on_question_id              | 5       | millionaire_development.questions.id           | 1    |
        1  | SIMPLE      | quizzes        | eq_ref | PRIMARY                                                                          | PRIMARY                                          | 4       | millionaire_development.quiz_questions.quiz_id | 1    |
        1  | SIMPLE      | subjects       | eq_ref | PRIMARY                                                                          | PRIMARY                                          | 4       | millionaire_development.questions.subject_id   | 1    |
        1  | SIMPLE      | taggings       | ref    | index_taggings_on_taggable_id_and_taggable_type,index_taggings_on_taggable_type  | index_taggings_on_taggable_id_and_taggable_type  | 263     | millionaire_development.questions.id,const     | 1    |
        1  | SIMPLE      | tags           | eq_ref | PRIMARY                                                                          | PRIMARY                                          | 4       | millionaire_development.taggings.tag_id        | 1    | Using where
        1  | SIMPLE      | gradings       | ref    | index_gradings_on_question_id                                                    | index_gradings_on_question_id                    | 5       | millionaire_development.questions.id           | 2    |
        1  | SIMPLE      | age_groups     | eq_ref | PRIMARY                                                                          | PRIMARY                                          | 4       | millionaire_development.gradings.age_group_id  | 1    |
        1  | SIMPLE      | difficulties   | eq_ref | PRIMARY                                                                          | PRIMARY                                          | 4       | millionaire_development.gradings.difficulty_id | 1    |

    The query_reviewer plugin has this to say about it - it lists several problems for table questions (Using temporary table, Long key length (263), Using filesort):

        MySQL must do an extra pass to find out how to retrieve the rows in sorted order.
        To resolve the query, MySQL needs to create a temporary table to hold the result.
        The key used for the index was rather long, potentially affecting indices in memory.

    2) It looks like Rails isn't splitting this find up in a very optimal way. Is it, do you think? Am I better off doing several find queries manually rather than one big combined one?

    Grateful for any advice, max

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >