Search Results

Search found 33585 results on 1344 pages for 'sql execution plan'.

Page 680/1344

  • Getting "too many rows" error inside a "for" cursor loop

    - by Will
    I have a trigger that contains two cursor loops, one nested inside the other, like this:

      FOR outer_rec IN outer_cursor LOOP
          FOR inner_rec IN inner_cursor LOOP
              -- Do some calculations
          END LOOP;
      END LOOP;

    Somewhere in this it throws the following error:

      ORA-01422: exact fetch returns more than requested number of rows

    I've been trying to determine where it's coming from for an hour or so, but shouldn't this error be impossible here? Also, I am assuming the inner loop automatically closes and reopens itself every time the outer loop advances to the next record; I hope this is correct.
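
    For reference, cursor FOR loops do open and close implicitly on each pass, and they do not raise ORA-01422 themselves; that error comes from an exact fetch, i.e. a SELECT ... INTO (in the loop body, or in a procedure or trigger it calls) that returns more than one row. A minimal sketch of the usual culprit, using hypothetical table and column names:

      DECLARE
          v_price order_lines.price%TYPE;
      BEGIN
          FOR outer_rec IN (SELECT customer_id FROM customers) LOOP
              FOR inner_rec IN (SELECT order_id FROM orders
                                 WHERE customer_id = outer_rec.customer_id) LOOP
                  -- Raises ORA-01422 as soon as more than one row matches
                  SELECT price
                    INTO v_price
                    FROM order_lines
                   WHERE order_id = inner_rec.order_id;
              END LOOP;
          END LOOP;
      END;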

    Read the article

  • Subsonic SELECT FROM msdb

    - by Lukasz Lysik
    Hi, I want to execute the following query using Subsonic:

      SELECT MAX([restore_date]) FROM [msdb].[dbo].[restorehistory]

    While the aggregate part is easy for me, the problem is the name of the table. How can I force Subsonic to select from a different database than the default one?

    Read the article

  • How do I select a random record efficiently in MySQL?

    - by user198729
    mysql> EXPLAIN SELECT * FROM urls ORDER BY RAND() LIMIT 1;

      | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                           |
      | 1  | SIMPLE      | urls  | ALL  | NULL          | NULL | NULL    | NULL | 62228 | Using temporary; Using filesort |

    The above doesn't qualify as efficient; how should I do it properly?
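
    A sketch of one common workaround, assuming urls has an indexed auto-increment id column named id (gaps in the id sequence skew the selection slightly): pick a random value between MIN(id) and MAX(id), then take the first row at or above it, which is a cheap index range read instead of a full sort.

      SELECT u.*
        FROM urls AS u
        JOIN (SELECT FLOOR(MIN(id) + RAND() * (MAX(id) - MIN(id))) AS rand_id
                FROM urls) AS r
          ON u.id >= r.rand_id
       ORDER BY u.id
       LIMIT 1;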

    Read the article

  • getting complete sql query in jython

    - by kdev
    This is what I have so far:

      result = sqlstring.executeQuery("select distinct table_name, owner from all_tables")
      rs.append(str(i) + ' , ' + result.getString("table_name") + ' , ' + result.getString("owner"))

    If I want to run the query "select * from all_tables" or "select count(*) from all_tables" instead, how can I get the output to display? Please suggest. Thanks.

    Read the article

  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result set:

      mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
          -> JOIN slip s ON s.cust_id = c.cust_id
          -> JOIN line l ON l.slip_id = s.slip_id
          -> JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah'
          -> GROUP BY c.cust_name
          -> HAVING SUM(l.line_subtotal) > 49999
          -> ORDER BY c.cust_name;

      | id | select_type | table | type   | possible_keys                   | key           | key_len | ref                  | rows | Extra                                        |
      | 1  | SIMPLE      | v     | ref    | PRIMARY,idx_vend_name           | idx_vend_name | 12      | const                | 1    | Using where; Using temporary; Using filesort |
      | 1  | SIMPLE      | l     | ref    | idx_vend_id                     | idx_vend_id   | 4       | csv_import.v.vend_id | 446  |                                              |
      | 1  | SIMPLE      | s     | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY       | 4       | csv_import.l.slip_id | 1    |                                              |
      | 1  | SIMPLE      | c     | eq_ref | PRIMARY,cIndex                  | PRIMARY       | 4       | csv_import.s.cust_id | 1    |                                              |
      4 rows in set (0.04 sec)

    I'm a bit baffled as to why the query referenced by this EXPLAIN statement still takes about a minute to execute. Isn't it true that this query only has to search through 449 rows? Does anyone have any idea what could be slowing it down so much?

    Read the article

  • multi-row update table with "different" data

    - by kralco626
    I think the best way to explain this is to tell you what I have. I have two tables, A and B, and both have columns Field1 and Field2. However, Field2 is not populated in table B. I want to populate Field2 of table B with Field2 of table A where Field1 of table A matches Field1 of table B, something like:

      update tableB set Field2 = tableA.Field2 where tableA.Field1 = tableB.Field1

    The reason this may seem odd and obscure is that I'm trying to do an initial data load from an old database to a new one. Please let me know if you need clarification.
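
    The exact syntax for a join-update varies by engine; two common forms, assuming the table and column names above:

      -- SQL Server / T-SQL
      UPDATE b
         SET b.Field2 = a.Field2
        FROM tableB AS b
        JOIN tableA AS a ON a.Field1 = b.Field1;

      -- MySQL
      UPDATE tableB AS b
        JOIN tableA AS a ON a.Field1 = b.Field1
         SET b.Field2 = a.Field2;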

    Read the article

  • How to avoid geometric slowdown with large Linq transactions?

    - by Shaul
    I've written some really nice, funky libraries for use in LinqToSql. (Some day when I have time to think about it I might make it open source... :) )

    Anyway, I'm not sure if this is related to my libraries or not, but I've discovered that when I have a large number of changed objects in one transaction and then call DataContext.GetChangeSet(), things start getting reaalllly slooowwwww. When I break into the code, I find that my program is spinning its wheels doing an awful lot of Equals() comparisons between the objects in the change set. I can't guarantee this is true, but I suspect that if there are n objects in the change set, then the call to GetChangeSet() is causing every object to be compared to every other object for equivalence, i.e. at best (n^2-n)/2 calls to Equals()...

    Yes, of course I could commit each object separately, but that kinda defeats the purpose of transactions. And in the program I'm writing, I could have a batch job containing 100,000 separate items that all need to be committed together. Around 5 billion comparisons there.

    So the question is: (1) is my assessment of the situation correct? Do you get this behavior in pure, textbook LinqToSql, or is this something my libraries are doing? And (2) is there a standard/reasonable workaround so that I can create my batch without making the program geometrically slower with every extra object in the change set?

    Read the article

  • Query to add a column depending on the outcome of other columns

    - by Tam
    I have a user table 'users' that has fields like:

      id
      first_name
      last_name
      ...

    and another table that determines relationships:

      user_id
      friend_id
      user_accepted
      friend_accepted
      ...

    I would like to generate a query that selects all the users but also adds another field/column, say 'network_status', that depends on the values of user_accepted and friend_accepted. For example, if user_accepted is true and friend_accepted is false, I want the 'network_status' field to say 'request sent'. Can I possibly do this in one query? (I would prefer not to use if/else inside the query, but if that's the only way, so be it.)
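
    A CASE expression in the select list is the usual way to derive such a column. A sketch, assuming the relationship table is named relationships, the flags are stored as 0/1, and the other status labels are placeholders:

      SELECT u.*,
             CASE
                 WHEN r.user_accepted = 1 AND r.friend_accepted = 0 THEN 'request sent'
                 WHEN r.user_accepted = 0 AND r.friend_accepted = 1 THEN 'request received'
                 WHEN r.user_accepted = 1 AND r.friend_accepted = 1 THEN 'connected'
                 ELSE 'none'
             END AS network_status
        FROM users u
        LEFT JOIN relationships r ON r.user_id = u.id;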

    Read the article

  • how do I integrate the aspnet_users table (asp.net membership) into my existing database

    - by ooo
    I have a database that already has a users table with these columns:

      userID - int
      loginName - string
      First - string
      Last - string

    I just installed the ASP.NET membership tables. Right now all of my tables are joined to my users table, foreign keyed on the "userID" field. How do I integrate the aspnet_Users table into my schema? Here are the ideas I thought of:

    1. Add a membership_id field to my users table and, on new inserts, populate that new field. This seems like the cleanest way, as I don't need to break any existing relationships.

    2. Break all existing relationships and move all of the fields in my users table into the aspnet_Users table. This seems like a pain, but ultimately will lead to the simplest, most normalized solution.

    Any thoughts?
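
    For what it's worth, option 1 usually amounts to adding a nullable GUID column that references aspnet_Users.UserId (a uniqueidentifier in the standard membership schema). A hedged sketch, assuming the default membership tables and the users table above:

      ALTER TABLE users ADD membership_id uniqueidentifier NULL;

      ALTER TABLE users
        ADD CONSTRAINT FK_users_aspnet_Users
        FOREIGN KEY (membership_id) REFERENCES dbo.aspnet_Users (UserId);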

    Read the article

  • how can I add an extra select in this query?

    - by BulgedSnowy
    I've got three related tables.

      images: id | filename | filesize | ...
      nodes:  image_id | tag_id
      tags:   id | name

    And I'm using this query to search for images containing x tags:

      SELECT images.*
      FROM images
      INNER JOIN nodes ON images.id = nodes.image_id
      WHERE tag_id IN (SELECT tags.id FROM tags WHERE tags.tag IN ("tag1","tag2"))
      GROUP BY images.id
      HAVING COUNT(*) = 2

    The problem is that I also need to retrieve all the tags attached to each retrieved image, and I need this in the same query. This is the query which currently retrieves all tags attached to an image:

      SELECT tag
      FROM nodes
      JOIN tags ON nodes.tag_id = tags.id
      WHERE image_id = images.id AND nodes.private = images.private
      ORDER BY tag

    How can I mix these two to have only one query?
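
    One common way to fold the second query into the first is a correlated subquery with GROUP_CONCAT in the select list. A sketch assuming MySQL and the schema shown above (the schema lists the tag column as name although the original query uses tags.tag, and this ignores the private flag, which is not in the schema shown):

      SELECT images.*,
             (SELECT GROUP_CONCAT(t2.name ORDER BY t2.name)
                FROM nodes n2
                JOIN tags  t2 ON t2.id = n2.tag_id
               WHERE n2.image_id = images.id) AS all_tags
        FROM images
        INNER JOIN nodes ON images.id = nodes.image_id
        INNER JOIN tags  ON tags.id = nodes.tag_id
       WHERE tags.name IN ('tag1', 'tag2')
       GROUP BY images.id
      HAVING COUNT(DISTINCT tags.id) = 2;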

    Read the article

  • PHP MSSQL: How to display output when a query returns no rows

    - by vamps
    I have a problem with my PHP-MSSQL query. I have a join query that needs to give a result something like this:

      Department   Group A                Group B                Total A+B
                   WORKHOUR A  OTHOUR A   WORKHOUR B  OTHOUR B   WORKHOUR  OTHOUR
      HR           10          15         25          0          35        15
      IT           5           5          5           5
      Admin        12          12         12          12

    The query aggregates work hours and overtime hours per department and employee group for a given date range (an admin enters the dates and, once submitted, the query gives the above result). The problem is that the final output is a mess when a group has no row to display: the columns get shifted, e.g. when there is only Group A data for IT and only Group B data for Admin, as in the IT and Admin rows above. My question is: how do I prevent this from happening? I've tried everything with while and if/else, but the result is still the same. How do I display "0" (echo "0";) when there are no rows to return?

    This is my query:

      SELECT DD.DEPT_ID, DPT.DEPARTMENT_NAME, TU.EMP_GROUP,
             SUM(DD.WORK_HOUR) AS WORK_HOUR,
             SUM(DD.OT_HOUR)   AS OT_HOUR
      FROM DEPARTMENT_DETAIL DD
      LEFT JOIN DEPARTMENT DPT ON (DD.DEPT_ID = DPT.DEPT_ID)
      LEFT JOIN TBL_USERS  TU  ON (TU.EMP_ID  = DD.EMP_ID)
      WHERE DD_DATE >= '2012-01-01' AND DD_DATE <= '2012-01-31' AND TU.EMP_GROUP != 2
      GROUP BY DD.DEPT_ID, DPT.DEPARTMENT_NAME, TU.EMP_GROUP
      ORDER BY DPT.DEPARTMENT_NAME

    This is one of the approaches I've tried, but it doesn't return the result I want:

      while ($row = mssql_fetch_array($displayResult)) {
          if ((!$row["WORK_HOUR"]) && (!$row["OT_HOUR"])) {
              echo "<td>";
              echo "empty";
              echo "&nbsp;</td>";
              echo "<td>";
              echo "empty";
              echo "&nbsp;</td>";
          } else {
              echo "<td>";
              echo $row["WORK_HOUR"];
              echo "&nbsp;</td>";
              echo "<td>";
              echo $row["OT_HOUR"];
              echo "&nbsp;</td>";
          }
      }

    Please help. I've been at this for two days. @__@
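
    If fixing this on the SQL side is an option, conditional aggregation (a manual pivot) returns exactly one row per department with both groups always present, so missing data comes back as 0 instead of shifting columns. A sketch assuming the tables from the query above and that Group A and Group B correspond to EMP_GROUP values 0 and 1 (adjust to the real codes):

      SELECT DPT.DEPARTMENT_NAME,
             SUM(CASE WHEN TU.EMP_GROUP = 0 THEN DD.WORK_HOUR ELSE 0 END) AS WORKHOUR_A,
             SUM(CASE WHEN TU.EMP_GROUP = 0 THEN DD.OT_HOUR   ELSE 0 END) AS OTHOUR_A,
             SUM(CASE WHEN TU.EMP_GROUP = 1 THEN DD.WORK_HOUR ELSE 0 END) AS WORKHOUR_B,
             SUM(CASE WHEN TU.EMP_GROUP = 1 THEN DD.OT_HOUR   ELSE 0 END) AS OTHOUR_B,
             SUM(DD.WORK_HOUR) AS WORKHOUR_TOTAL,
             SUM(DD.OT_HOUR)   AS OTHOUR_TOTAL
      FROM DEPARTMENT_DETAIL DD
      LEFT JOIN DEPARTMENT DPT ON DD.DEPT_ID = DPT.DEPT_ID
      LEFT JOIN TBL_USERS  TU  ON TU.EMP_ID  = DD.EMP_ID
      WHERE DD_DATE >= '2012-01-01' AND DD_DATE <= '2012-01-31'
        AND TU.EMP_GROUP != 2
      GROUP BY DPT.DEPARTMENT_NAME
      ORDER BY DPT.DEPARTMENT_NAME;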

    Read the article

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer.

    I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert I do above, I combine the data from rows which belong together and create another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to the correlated table (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing.

    All of the above is working perfectly. However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set.

    So I guess after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's cache, but due to having limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcache:

      - No blocking on my correlated table after the query has been cached.
      - Greater flexibility in sharing the collected data between the backend collector and the front end processor (i.e. custom reports could be written in the backend and the results of these stored in the cache under a key which then gets shared with anyone who would want to see the data of this report).
      - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantages I can see of using something like memcache:

      - Data is not persistent if the machine is rebooted / the cache is flushed.

    The main advantages of using MySQL:

      - Persistent data.
      - Fewer code changes (although adding something like memcache is trivial anyway).

    The main disadvantages of using MySQL:

      - Have to define table templates every time I want to store a new set of grouped data.
      - Have to write a program which loops through the correlated data and fills these new tables.
      - Potentially will still grow slower as the data continues to be filled.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan

    Read the article

  • Extract time part from TimeStamp column in ORACLE

    - by RRUZ
    Currently I am using MyTimeStampField - TRUNC(MyTimeStampField) to extract the time part from a timestamp column in Oracle:

      SELECT CURRENT_TIMESTAMP - TRUNC(CURRENT_TIMESTAMP) FROM DUAL

    This returns +00 13:12:07.100729, which works fine for me to extract the time part from a timestamp field, but I'm wondering if there is a better way (maybe a built-in Oracle function) to do this? Thanks.
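
    For reference, when a string result is acceptable, TO_CHAR with a time-only format mask is the usual built-in route; FF prints the fractional seconds of a TIMESTAMP. A small sketch (my_table is a hypothetical table name):

      SELECT TO_CHAR(CURRENT_TIMESTAMP, 'HH24:MI:SS.FF6') AS time_part FROM DUAL;

      -- and against a column:
      -- SELECT TO_CHAR(MyTimeStampField, 'HH24:MI:SS.FF6') FROM my_table;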

    Read the article

  • Select rows where column LIKE dictionary word

    - by Gerve
    I have 2 tables:

    Dictionary - contains roughly 36,000 words:

      CREATE TABLE IF NOT EXISTS `dictionary` (
        `word` varchar(255) NOT NULL,
        PRIMARY KEY (`word`)
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Datas - contains roughly 100,000 rows:

      CREATE TABLE IF NOT EXISTS `datas` (
        `ID` int(11) NOT NULL AUTO_INCREMENT,
        `hash` varchar(32) NOT NULL,
        `data` varchar(255) NOT NULL,
        `length` int(11) NOT NULL,
        `time` int(11) NOT NULL,
        PRIMARY KEY (`ID`),
        UNIQUE KEY `hash` (`hash`),
        KEY `data` (`data`),
        KEY `length` (`length`),
        KEY `time` (`time`)
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=105316 ;

    I would like to somehow select all the rows from datas where the column data contains 1 or more dictionary words. I understand this is a big ask; it would need to match all of these rows together in every combination possible, so it needs the best optimization. I have tried the query below, but it just hangs for ages:

      SELECT `datas`.*, `dictionary`.`word`
      FROM `datas`, `dictionary`
      WHERE `datas`.`data` LIKE CONCAT('%', `dictionary`.`word`, '%')
        AND LENGTH(`dictionary`.`word`) > 3
      ORDER BY `length` ASC
      LIMIT 15

    I have also tried something similar to the above with a LEFT JOIN and an ON clause containing the LIKE condition.

    Read the article

  • Which MySQL query is faster?

    - by Camran
    I have a classified_id variable which matches one document in a MySQL table. I am currently fetching the information about that one record like this:

      SELECT * FROM table WHERE table.classified_id = $classified_id

    I wonder if there is a faster approach, for example like this:

      SELECT 1 FROM table WHERE table.classified_id = $classified_id

    Won't the last one only select 1 record, which is exactly what I need, so that it doesn't have to scan the entire table but instead stops searching for records after 1 is found? Or am I dreaming this? Thanks.
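
    For what it's worth, SELECT 1 only changes what is returned, not how many rows are examined. The usual way to stop after the first match is LIMIT 1, and an index on the lookup column is what avoids scanning the table. A sketch, assuming the table really is named `table`:

      -- index the lookup column (skip if classified_id is already the primary key)
      CREATE INDEX idx_classified_id ON `table` (classified_id);

      SELECT * FROM `table` WHERE classified_id = ? LIMIT 1;  -- bind $classified_id here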

    Read the article

  • mysql: inserting data and autoincrement

    - by every_answer_gets_a_point
    I am converting from Access to MySQL. I have a table in Access where one of the columns is an AutoNumber. When I transfer the data into the MySQL database (where I also have a column that is AUTO_INCREMENT), should I be transferring the AutoNumber data into the AUTO_INCREMENT column, or will it auto-increment itself? How do I ensure that, if I do not transfer the AutoNumber data from Access, it auto-increments properly?
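
    For reference, MySQL lets you insert explicit values into an AUTO_INCREMENT column, and the counter then moves past the highest inserted value; this is usually what you want when other tables reference the old AutoNumber keys. Omitting the column makes MySQL assign fresh values instead. A small sketch with hypothetical names:

      -- keep the old AutoNumber values (preserves existing references)
      INSERT INTO customers (id, name) VALUES (17, 'Alice'), (42, 'Bob');

      -- or let MySQL assign new ids; the next auto value continues above 42
      INSERT INTO customers (name) VALUES ('Carol');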

    Read the article

  • I want to do a SQL update loop statement, using do-while in PHP

    - by Jean
    Hello, I want to loop the update statement, but it only loops once. Here is the code I am using:

      do {
          mysql_select_db($database_ll, $ll);
          $query_query = "update table set ex='$71[1]' where field='val'";
          $query = mysql_query($query_query, $ll) or die(mysql_error());
          $row_domain_all = mysql_fetch_assoc($query);
      } while ($row_query = mysql_fetch_assoc($query));

    Thanks, Jean

    Read the article

  • Getting count of matches in a query on a large table is very slow

    - by Roy Roes
    I have a mysql table "items" with 2 integer fields: seid and tiid The table has about 35000000 records, so it's very large. seid tiid ----------- 1 1 2 2 2 3 2 4 3 4 4 1 4 2 The table has a primary key on both fields, an index on seid and an index on tiid. Someone types in 1 or more tiid values and now I would like to get the seid with most results. For example when someone types 1,2,3, I would like to get seid 2 and 4 as result. They both have 2 matches on the tiid values. My query so far: SELECT COUNT(*) as c, seid FROM items WHERE tiid IN (1,2,3) GROUP BY seid HAVING c = (SELECT COUNT(*) as c, seid FROM items WHERE tiid IN (1,2,3) GROUP BY seid ORDER BY c DESC LIMIT 1) But this query is extremly slow, because of the large table. Does anyone know how to construct a better query for this purpose?

    Read the article

  • how to find end of quarter given a date in the quarter

    - by Ramy
    If I'm given a date (say @d = '11-25-2010'), how can I determine the end of the quarter from that date? I'd like to use a timestamp one second before midnight. I can get the quarter start with:

      select dateadd(qq, datediff(qq, 0, getdate()), 0) as quarterStart

    which gives me '10-1-2010', and I use this for one second before midnight of a given day:

      select dateadd(second, -1, dateadd(day, datediff(day, 0, @d) + 1, 0)) as DayEnd

    In the end, a quarterEnd method would give me '12-31-2010 23:59:59'.
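
    Combining the two snippets gives the quarter end directly: jump to the start of the next quarter and subtract one second. A sketch in the same T-SQL style as above:

      declare @d datetime;
      set @d = '2010-11-25';

      select dateadd(second, -1, dateadd(qq, datediff(qq, 0, @d) + 1, 0)) as quarterEnd;
      -- returns 2010-12-31 23:59:59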

    Read the article

  • join condition in SQL Server

    - by Pallavi
    After applying a join condition on two tables, I want the records which have the maximum srno among the records of the left table. My query:

      SELECT a1.*, t.*,
             (a1.trnratefrom - t.trnratefrom) AS minrate,
             (a1.trnrateto   - t.trnrateto)   AS maxrate
      FROM (SELECT a.srno, trndate, b.trnsrno,
                   Upper(Rtrim(Ltrim(b.trnstate)))   AS trnstate,
                   Upper(Rtrim(Ltrim(b.trnarea)))    AS trnarea,
                   Upper(Rtrim(Ltrim(b.trnquality))) AS trnquality,
                   Upper(Rtrim(Ltrim(b.trnlength)))  AS trnlength,
                   Upper(Rtrim(Ltrim(b.trnunit)))    AS trnunit,
                   b.trnratefrom, b.trnrateto, a.remark, entdate
            FROM mstprodrates a
            INNER JOIN trnprodrates b ON a.srno = b.srno) a1
      INNER JOIN (SELECT c.srno, trndate, d.trnsrno,
                         Upper(Rtrim(Ltrim(d.trnstate)))   AS trnstate,
                         Upper(Rtrim(Ltrim(d.trnarea)))    AS trnarea,
                         Upper(Rtrim(Ltrim(d.trnquality))) AS trnquality,
                         Upper(Rtrim(Ltrim(d.trnlength)))  AS trnlength,
                         Upper(Rtrim(Ltrim(d.trnunit)))    AS trnunit,
                         d.trnratefrom, d.trnrateto, c.remark, entdate
                  FROM mstprodrates c
                  INNER JOIN trnprodrates d ON c.srno = d.srno) AS t
        ON  a1.trnstate   = t.trnstate
        AND a1.trnquality = t.trnquality
        AND a1.trnunit    = t.trnunit
        AND a1.trnlength  = t.trnlength
        AND a1.trnarea    = t.trnarea
        AND a1.remark     = t.remark
      WHERE t.srno = (SELECT MAX(srno) FROM a1 WHERE srno < a1.srno)

    Read the article

  • Manual Linq to SQL entity framework mapping

    - by kprobst
    I've been playing with the O/R designer in VS and I was wondering if someone could shed some light on this. I'm used to O/R mappers that are largely manual (homegrown and, e.g., NHibernate). I don't mind coding the entity classes myself, since they don't change all that often to begin with, and I have this irrational fear of designers and auto-generated code as it is.

    I have noticed that the generated entity classes contain a lot of boilerplate extensibility methods, e.g. On[Property]Changed() and so on, where [Property] is a mapped member of the class. These are placed in the setters of the property accessors. I assume it's OK if I don't include these when I do my hand coding, correct? They would be nice if I needed some sort of interception pattern, but that's certainly not the case.

    I guess I just need to know if any of those methods are required by the entity framework to keep track of changes to the mapping types in order for things to work when updating the database. Thanks!

    Read the article
