Search Results

Search found 7311 results on 293 pages for 'rows'.

Page 209 of 293

  • Paginating with SQL ROW_NUMBER

    - by Kelsey Thorpe
    I want to return search results in paginated form, but I can't seem to get the first 10 results of my query. The problem is that the RowNum values returned are like 405, 687, 1024, etc. I want them renumbered as 1, 2, 3, 4, 5, and so on, so that when I ask for rows 1 to 20 I get the first 20 search results. Instead, because the numbers are larger, I get no results between 1 and 10. If I change the RowNum condition to AND RowNum < 20000 I get plenty of results. Here's the SQL:

        SELECT *
        FROM (
            SELECT ROW_NUMBER() OVER (ORDER BY DocumentID) AS RowNum, *
            FROM Table
        ) AS RowConstrainedResult
        WHERE RowNum >= 1 AND RowNum < 20
          AND Title LIKE '%diabetes%'
          AND Title LIKE '%risk%'

    Any help appreciated.
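
    A likely cause is that ROW_NUMBER() numbers every row in the table, while the LIKE filters are applied afterwards, so the matching rows keep their original large numbers. Below is a minimal rewrite sketch (assuming the SQL Server syntax, table and column names from the question) that moves the filters inside the derived table so the numbering starts at 1 for the filtered set:

        SELECT *
        FROM (
            SELECT ROW_NUMBER() OVER (ORDER BY DocumentID) AS RowNum, *
            FROM Table
            WHERE Title LIKE '%diabetes%'   -- filter first...
              AND Title LIKE '%risk%'
        ) AS RowConstrainedResult
        WHERE RowNum >= 1 AND RowNum < 20   -- ...then page over the renumbered results
        ORDER BY RowNum;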

    Read the article

  • Unique constraint on more than 10 columns

    - by tk
    I have a time-series simulation model with more than 10 input variables. There will be more than 1 million distinct simulation instances, and each instance generates a few output rows every day. To save the simulation results in a relational database, I designed tables like this:

        Table SimulationModel {
            simul_id : integer (primary key),
            input0   : string or numeric,
            input1   : string or numeric,
            ...
        }

        Table SimulationOutput {
            dt       : DateTime (primary key),
            simul_id : integer (primary key),
            output0  : numeric,
            ...
        }

    My question is: is it fine to put a unique constraint on all of the input columns of the SimulationModel table? If it is not a good idea, what other options do I have to make sure each model is unique?
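
    One common alternative, if the database balks at a wide unique key, is to enforce uniqueness on a hash of the inputs rather than on the columns themselves. The sketch below assumes MySQL 5.7+ generated-column syntax and shows only four input columns for illustration; the IFNULL wrapping is there because CONCAT_WS silently skips NULLs, at the cost of treating NULL and '' as the same value.

        ALTER TABLE SimulationModel
          ADD COLUMN input_hash CHAR(32)
              AS (MD5(CONCAT_WS('|',
                    IFNULL(input0, ''), IFNULL(input1, ''),
                    IFNULL(input2, ''), IFNULL(input3, '')))) STORED,
          ADD UNIQUE KEY uq_simulation_inputs (input_hash);

    A hash collision is theoretically possible but vanishingly unlikely at a million rows, and the single 32-character key stays small no matter how many input columns the model grows.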

    Read the article

  • Why is a MySQL multiple-column index overpopulated?

    - by actual
    Consider the following MySQL table:

        CREATE TABLE `log` (
            `what` enum('add', 'edit', 'remove') CHARACTER SET ascii COLLATE ascii_bin NOT NULL,
            `with` int(10) unsigned NOT NULL,
            KEY `with_what` (`with`, `what`)
        ) ENGINE=InnoDB;

        INSERT INTO `log` (`what`, `with`) VALUES
            ('add', 1), ('edit', 1), ('add', 2), ('remove', 2);

    As I understand it, the with_what index should have 2 unique entries at its first (with) level and 3 unique entries in the what "subindex". But MySQL reports 4 unique entries for each level; in other words, the number of unique elements reported for each level is always equal to the number of rows in the log table. Is that a bug, a feature, or my misunderstanding?
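
    If the numbers come from the Cardinality column of SHOW INDEX, note that InnoDB does not keep exact distinct counts; it estimates them from a small sample of index pages, and on a four-row table the estimate can simply equal the row count. A quick way to compare the estimate with the real figures (a sketch using the table above):

        ANALYZE TABLE log;        -- refresh InnoDB's sampled index statistics
        SHOW INDEX FROM log;      -- Cardinality here is an estimate, not an exact count

        SELECT COUNT(DISTINCT `with`)         AS distinct_with,
               COUNT(DISTINCT `with`, `what`) AS distinct_with_what
        FROM   log;               -- exact distinct counts for comparison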

    Read the article

  • Export MySQL Data as Insert Statements

    - by gav
    Hi all, I'm working in Ubuntu with MySQL, and I also have Query Browser and Administrator installed; I'm not afraid of the command line either, if it helps. I simply want to be able to run a query and see a result set, and then convert that result set into a series of INSERT commands that could be used to create the same rows in a table with an identical schema. I hope the question makes sense; it's quite a simple problem and one that must have been solved, but I can't for the life of me work out where this kind of conversion is made available. Thanks in advance, Gav
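
    One way to do this without leaving SQL is to have the query build the INSERT statements itself, using CONCAT and QUOTE (which escapes strings and renders NULL as the bare keyword). The sketch below uses hypothetical table and column names (my_table, my_copy, id, name); mysqldump with a --where filter is the usual command-line alternative.

        SELECT CONCAT('INSERT INTO my_copy (id, name) VALUES (',
                      QUOTE(id), ', ', QUOTE(name), ');') AS insert_stmt
        FROM   my_table
        WHERE  name LIKE '%example%';   -- whatever filter the original query used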

    Read the article

  • Getting DateTimeOffset value from SQL 2008 to C#

    - by Darvis Lombardo
    I have a SQL 2008 table with a field called RecDate of type DateTimeOffset. For a given record the value is '2010-04-01 17:19:23.62 -05:00'. In C# I create a DataTable and fill it with the results of "SELECT RecDate FROM MyTable". I need to get the milliseconds, but if I do the following, the milliseconds are always 0:

        DateTimeOffset dto = DateTimeOffset.Parse(dt.Rows[0][0].ToString());

    What is the proper way to get the value in the RecDate column into the dto variable? Thanks! Darvis

    Read the article

  • Oracle: slow performance when using a subselect

    - by Wyass
    I have a view that is very slow if you fetch all rows, but if I select a subset (providing an ID in the WHERE clause) the performance is very good. I cannot hardcode the ID, so I use a subselect to get the ID from another table. The subselect only returns one ID, but now the performance is very slow again, and it seems like Oracle is evaluating the whole view before applying the WHERE clause. Can I somehow help Oracle so that queries 2 and 3 below have the same performance? I'm using Oracle 10g.

        -- 1: slow
        select * from ci.my_slow_view;
        -- 2: fast
        select * from ci.my_slow_view where id = 1;
        -- 3: slow
        select * from ci.my_slow_view where id in (select id from active_ids);
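
    One thing worth trying (a sketch, not a guaranteed fix, since the behaviour depends on the view definition and the statistics) is rewriting the IN subquery as a join, optionally with the PUSH_PRED hint, to encourage the optimizer to push the id predicate into the view instead of evaluating the whole view first:

        SELECT /*+ PUSH_PRED(v) */ v.*
        FROM   ci.my_slow_view v
               JOIN active_ids a ON a.id = v.id;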

    Read the article

  • Dynamic columns in C# rdlc report

    - by Mugume David
    Suppose I have a report that lists employees (as rows) with their respective taxes charged (in columns). It is possible for a new tax to come up. My rdlc report file is currently designed (from XML, of course) to generate the columns statically, so a future change would require me to alter the rdlc file and add a new column. How can I do this dynamically? I intend to avoid opening the rdlc file and adding XML code.

    Read the article

  • Alter multiple tables' column lengths

    - by gdoron
    So, we just found out that 254 tables in our Oracle database have one column named "Foo" with the wrong length: NUMBER(10) instead of NUMBER(3). That Foo column is part of the primary key of those tables, and other tables have foreign keys referencing it. What I did so far was: back up the table with a temp table, disable the foreign keys to the table, disable the PK containing the Foo column, null the Foo column for all rows, and then restore all of the above. But now we've found out it's not just a couple of tables but 254 of them. Is there an easy way (or at least an easier way than this) to alter the column length? P.S. I have DBA permissions.
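
    Rather than editing 254 tables by hand, one option is to generate the DDL from the data dictionary and spool it to a script. A sketch (assuming the column was created unquoted, so the dictionary stores it as FOO):

        SELECT 'ALTER TABLE ' || table_name || ' MODIFY (FOO NUMBER(3));' AS ddl_stmt
        FROM   user_tab_columns
        WHERE  column_name    = 'FOO'
          AND  data_type      = 'NUMBER'
          AND  data_precision = 10;

    Note that Oracle raises ORA-01440 when you shrink precision on a non-empty column, so the disable-constraints and null-the-column steps (or a copy through a temporary column) are still needed around the generated statements.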

    Read the article

  • MEMORY(HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (fewer than 10,000 rows) that will be under heavy read (scan) and write (update and some insert/delete) load. I am really talking about 10,000 updates or selects per second. These statements will be executed on only a few (fewer than 10) open MySQL connections. The table is small and does not contain any data that needs to be stored on disk. So I ask: which is faster, InnoDB or MEMORY (HEAP)? My thoughts are: both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs (innodb_flush_log_at_trx_commit)? My main concern is the locking behavior: InnoDB row locks vs. the MEMORY engine's table locks. Will that be the bottleneck in the MEMORY implementation? Thanks for your thoughts!

    Read the article

  • Rails fixtures seem to be adding extra unexpected data

    - by Mason Jones
    Hello, all. I've got a dynamic fixture CSV file that generates predictable data for a table so that my unit tests can do their thing. It works as expected and fills the table with the data, but when I check the table after the tests run, I see a number of additional rows of "blank" data (all zeros, etc.). Those aren't being created by the fixture, and the unit tests are read-only (they just do selects), so I can't blame the code. There doesn't seem to be any logging during fixture setup, so I can't see when the "blank" data is being inserted. Has anyone run across this before, or does anyone have ideas on how to log or otherwise see what the fixture setup is doing, in order to trace down the source of the blank data?

    Read the article

  • MySQL select query optimization

    - by Saharsh Shah
    I have two tables, testa and testb:

        CREATE TABLE `testa` (
            `id`   INT(10) NOT NULL AUTO_INCREMENT,
            `name` VARCHAR(50) DEFAULT NULL,
            PRIMARY KEY (`id`)
        );

        CREATE TABLE `testb` (
            `id`   INT(10) NOT NULL AUTO_INCREMENT,
            `name` VARCHAR(50) DEFAULT NULL,
            `aid1` INT(10) DEFAULT NULL,
            `aid2` INT(10) DEFAULT NULL,
            `aid3` INT(10) DEFAULT NULL,
            PRIMARY KEY (`id`)
        );

    Currently I am running the query below to retrieve all rows where the id in testa matches any of the aid1, aid2, aid3 columns in testb. The query returns the correct result, but it takes at least 30 seconds to execute, which is too much. I have also tried to optimise the query using UNION but failed to do so.

        SELECT a.id, a.name, b.name, b.id
        FROM testb b
        INNER JOIN testa a
            ON b.aid1 = a.id OR b.aid2 = a.id OR b.aid3 = a.id;

    How do I optimize the query so its total execution time is within 2-3 seconds? Thanks in advance...
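
    A common rewrite for an OR-based join like this (a sketch; the index names are made up) is to index each aidN column and split the query into a UNION, so each branch can use its own index. Plain UNION also removes duplicate rows, which keeps the result equivalent to the original OR join:

        ALTER TABLE testb
          ADD KEY idx_aid1 (aid1),
          ADD KEY idx_aid2 (aid2),
          ADD KEY idx_aid3 (aid3);

        SELECT a.id, a.name, b.name, b.id FROM testb b JOIN testa a ON b.aid1 = a.id
        UNION
        SELECT a.id, a.name, b.name, b.id FROM testb b JOIN testa a ON b.aid2 = a.id
        UNION
        SELECT a.id, a.name, b.name, b.id FROM testb b JOIN testa a ON b.aid3 = a.id;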

    Read the article

  • SQLite (C/C++ interface) - How to commit a transaction

    - by AJ
    I am using the SQLite C/C++ interface. Here is my scenario: I have 3 related tables, say A, B, and C. There is a function called Set which, based on its inputs, inserts rows into these three tables (sometimes it can be an update in one of the tables). I need two things. First, I don't want the autocommit feature; basically I would like to commit after every 1000 calls to the Set function. Second, within the Set function itself, if I find that after inserting into two tables the third insert fails, then I have to revert those particular changes made in that Set call. I don't see any sqlite3_commit function exposed; I only see a function called sqlite3_commit_hook(), which is slightly different according to the documentation. Is there any function exposed for this purpose, or what is the way to achieve this behaviour? Can you help me with the best approach? Regards, Arjun
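
    SQLite does not expose a dedicated commit API; transactions are controlled by executing SQL statements (for example through sqlite3_exec), and SAVEPOINTs give the per-call rollback behaviour described above. A sketch of the statement sequence, with the savepoint name chosen here purely for illustration:

        BEGIN;                       -- start the batch; nothing is committed until COMMIT
        SAVEPOINT set_call;          -- issued at the start of each Set() call
        -- ... the INSERT/UPDATE statements for tables A, B and C ...
        ROLLBACK TO set_call;        -- if the third insert fails: undo only this call
        RELEASE set_call;            -- on success: keep this call's changes pending
        COMMIT;                      -- issued once after every 1000 Set() calls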

    Read the article

  • Inner join 2 tables, one to many, with 2 where clauses

    - by user2892350
    I'm a relative rookie at this, so please bear with me... I have 2 tables, OrderDetail and OrderMaster; both have a column named SalesOrder. The OrderDetail table has multiple rows per unique SalesOrder, and the OrderMaster table has one row per unique SalesOrder. OrderDetail has a column named LineType, and OrderMaster has a column named OrderStatus. I want to select all records from OrderDetail that have a LineType of "1" AND whose matching SalesOrder row in the OrderMaster table has an OrderStatus value of "4". In plain English, orders with status 4 are open and ready to ship, and a LineType value of 1 means the detail line is a product code. How should this query be structured? It's going into VS 2008 (VB). Many thanks in advance!!! Mike
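
    A sketch of the query (assuming the values are stored as text, as quoted in the question; drop the quotes if LineType and OrderStatus are numeric columns):

        SELECT d.*
        FROM   OrderDetail d
               INNER JOIN OrderMaster m ON m.SalesOrder = d.SalesOrder
        WHERE  d.LineType    = '1'   -- product-code detail lines
          AND  m.OrderStatus = '4';  -- open orders, ready to ship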

    Read the article

  • How to implement partial refresh like Facebook likes/comments?

    - by shillong
    We have a Java web application. A summary page displays a list of rows; for each row, the user can vote and add comments. A vote or a new comment is committed immediately, and the total vote count and comment count are refreshed. We want to refresh only the current row instead of the whole table, just like Facebook does. If needed, we can show the list of data in a form layout (iterating over the list of data) instead of a table. How can we implement this feature with JSF?

    Read the article

  • Weird Animation when inserting a row in UITableView

    - by svveet
    I have a table view that holds a list of data. When the user presses "Edit", I add an extra row at the bottom saying "add new":

        - (void)setEditing:(BOOL)editing animated:(BOOL)animated {
            [super setEditing:editing animated:animated];
            NSArray *paths = [NSArray arrayWithObject:
                [NSIndexPath indexPathForRow:1 inSection:0]];
            if (editing) {
                [[self tableView] insertRowsAtIndexPaths:paths
                                        withRowAnimation:UITableViewRowAnimationTop];
            } else {
                [[self tableView] deleteRowsAtIndexPaths:paths
                                        withRowAnimation:UITableViewRowAnimationTop];
            }
        }

    The UITableView animates the change as expected, but the weird thing is that the row just before the one I added has a different animation from all the others. All rows perform a "slide in" animation, but that second-to-last one does a "fade in" animation, and I didn't set any animation on the second-to-last row (or any other row). If I don't add the new row, everything slides in as normal when I switch to editing mode. I can't find an answer; I checked the Contacts app on my phone, and it doesn't have that weird animation when adding a new row in editing mode. Any help would be appreciated. Thanks

    Read the article

  • Select records by comparing subsets

    - by devnull
    Given two tables (the rows in each table are distinct):

        Case 1:                      Case 2:
          x | y        z               x | y        z
          -----       ---              -----       ---
          1 | a        a               1 | a        a
          1 | b        b               1 | b        b
          2 | a                        1 | c
          2 | b                        2 | a
          2 | c                        2 | b
                                       2 | c

    Is there a way to select the values in the x column of the first table for which all the values in the y column (for that x) are found in the z column of the second table? In case 1, the expected result is 1; if c is added to the second table, the expected result is 2. In case 2, the expected result is no record, since neither of the subsets in the first table matches the subset in the second table; if c is added to the second table, the expected result is 1, 2. I've tried using EXCEPT and INTERSECT to compare subsets of the first table with the second table, which works fine, but it takes too long on the INTERSECT part and I can't figure out why (the first table has about 10,000 records and the second has around 10). EDIT: I've updated the question to provide an extra scenario.
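
    The expected outputs suggest that the y values of a qualifying x must match the z column exactly (no missing values and no extras), which is classic relational division. A sketch, using hypothetical names table1(x, y) and table2(z) and relying on the rows being distinct as stated:

        SELECT t1.x
        FROM   table1 t1
               LEFT JOIN table2 t2 ON t2.z = t1.y
        GROUP  BY t1.x
        HAVING COUNT(t2.z) = COUNT(*)                        -- every y of this x exists in z
           AND COUNT(t2.z) = (SELECT COUNT(*) FROM table2);  -- and z contains nothing extra

    Dropping the second HAVING condition relaxes this to "all y values are found in z", if extra z values should be allowed.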

    Read the article

  • Get best streak with all details of each row

    - by Pritesh Gupta
    I have the following data:

        ID  user_id  win
         1     1      1
         2     1      1
         3     1      0
         4     1      1
         5     2      1
         6     2      0
         7     2      0
         8     2      1
         9     1      0
        10     1      1
        11     1      1
        12     1      1
        13     1      1
        14     3      1

    I need to get the complete rows for the best streak. I am able to get the best streak number for each user using this post: get consecutive records in mysql. Now I need the data of every row involved in the best streak, i.e. for user_id = 1 I need the complete rows for id = 10, 11, 12, 13 (the id, user_id and win values of the best-streak rows), using MySQL. Kindly help me with this.
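
    If MySQL 8.0+ is available, the gaps-and-islands pattern with window functions returns the rows themselves, not just the streak length. A sketch, assuming the data lives in a hypothetical table named wins(id, user_id, win); if a user has several streaks tied for longest, all of them are returned:

        WITH runs AS (
            SELECT id, user_id, win,
                   ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY id)
                 - ROW_NUMBER() OVER (PARTITION BY user_id, win ORDER BY id) AS grp
            FROM   wins
        ),
        streaks AS (
            SELECT user_id, grp, COUNT(*) AS streak_len
            FROM   runs
            WHERE  win = 1
            GROUP  BY user_id, grp
        ),
        best AS (
            SELECT user_id, grp
            FROM   streaks s
            WHERE  streak_len = (SELECT MAX(streak_len)
                                 FROM   streaks m
                                 WHERE  m.user_id = s.user_id)
        )
        SELECT r.id, r.user_id, r.win
        FROM   runs r
               JOIN best b ON b.user_id = r.user_id AND b.grp = r.grp
        WHERE  r.win = 1
        ORDER  BY r.user_id, r.id;     -- for user_id = 1 this yields ids 10, 11, 12, 13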

    Read the article

  • Converting code to a Perl sub, but not sure I'm doing it right

    - by Ben Dauphinee
    I'm working from a question I posted earlier (here) and trying to convert the answer into a sub so I can use it multiple times, but I'm not sure it's done right. Can anyone provide a better or cleaner sub?

        sub search_for_key {
            my ($args) = @_;
            foreach $row (@{$args->{search_ary}}) {
                print "@$row[0] : @$row[1]\n";
            }
            my $thiskey = NULL;
            my @result =
                map  { $args->{search_ary}[$_][0] }                           # Get the 0th column...
                grep { @$args->{search_in} =~ /$args->{search_ary}[$_][1]/ }  # ... of rows where the
                0 .. $#array;                                                 # first row matches
            $thiskey = @result;
            print "\nReturning: " . $thiskey . "\n";
            return $thiskey;
        }

        search_for_key({
            'search_ary' => $ref_cam_make,
            'search_in'  => 'Canon EOS Rebel XSi'
        });

    Read the article

  • Query filter design for string field

    - by Midhat
    A field in my table can contain arbitrary strings. On the UI there is a drop-down with options like All, Value1, Value2, and the results are filtered by the selected option value. So far this is easy, and adding new filters to the UI is not a problem; it needs no changes in my stored procedure. Now I want to have an "Others" option as well, which should return rows whose column value is neither Value1 nor Value2. Apparently this will require a NOT IN operator in my query, and it will make maintenance difficult, as the list of values is likely to change. Any suggestions or design tips?
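
    One way to avoid hard-coding the NOT IN list is to keep the named drop-down values in a lookup table and let "Others" mean "not in that table". A sketch with entirely hypothetical names (Items.Category is the filtered field, FilterValues holds the known values, @Filter is the procedure parameter, SQL Server-style syntax):

        SELECT i.*
        FROM   Items i
        WHERE  @Filter = 'All'
           OR  i.Category = @Filter
           OR (@Filter = 'Others'
               AND NOT EXISTS (SELECT 1 FROM FilterValues f
                               WHERE  f.Value = i.Category));

    Adding a new named filter then only means inserting a row into FilterValues; the stored procedure stays unchanged.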

    Read the article

  • Java: writing a matrix to a file using column information

    - by Dmitry
    Hello, everybody! I have a file in which a matrix is stored; the file is accessed through a RandomAccessFile. The matrix is stored by columns, meaning that the i-th row of the file holds the i-th column of the real matrix. For example, if the i-th row in the file is "1 2 3 4", then the real matrix has (1 2 3 4) transposed as its i-th column. I need to save this matrix in the natural way (by rows) in a new file, which I will then open with a FileReader and display in a TextArea. Do you know how to do that? If so, please help =)

    Read the article

  • Voting Script, Possibility of Simplifying Database Queries

    - by Sev
    I have a voting script which stores the post_id and the user_id in a table, to determine whether a particular user has already voted on a post and disallow them in the future. To do that, I am running the following 3 queries:

        SELECT user_id, post_id FROM votes_table WHERE post_id = ? AND user_id = ?

    If that returns no rows, then:

        UPDATE post_table SET votecount = votecount - 1 WHERE post_id = ?

    and then:

        SELECT votecount FROM post_table WHERE post_id = ?

    to display the new vote count on the web page. Is there any better way to do this? Three queries are seriously slowing down the user's voting experience. Edit: in the votes table, vote_id is a primary key; in the post table, post_id is a primary key. Any other suggestions to speed things up?
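
    A sketch of one way to trim the round trips, assuming MySQL and a unique key added over (post_id, user_id): the existence check and the vote insert collapse into a single INSERT IGNORE (wherever the script currently records the vote), and the application only runs the UPDATE when the affected-row count is 1.

        ALTER TABLE votes_table ADD UNIQUE KEY uq_post_user (post_id, user_id);

        -- one statement instead of SELECT-then-INSERT; 0 affected rows means
        -- this user already voted, so the application skips the UPDATE below
        INSERT IGNORE INTO votes_table (post_id, user_id) VALUES (?, ?);

        UPDATE post_table SET votecount = votecount - 1 WHERE post_id = ?;
        SELECT votecount FROM post_table WHERE post_id = ?;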

    Read the article

  • Oracle: delete suddenly taking a long time

    - by Damo
    Hi, we have a feed process which runs every day of the year. As part of it, we delete every row from a table (approx. 1 million rows) every day, repopulate it using 5 different stored procedures, and then commit the transaction; this is the only commit statement that we call. All of a sudden the delete has started taking about 2 hours to complete. The delete itself is very simple (delete from T_PROFILE_WORK). This has worked perfectly well for the past year, but in the past week I have noticed this issue. Any help with this is greatly appreciated. Thanks, Damien

    Read the article

  • Scroll bar in textareas

    - by Hulk
    In the following code, the scroll bar appears in IE but not in Mozilla. How can this be fixed? The scroll bar should not appear when there is not much data.

        <script>
        var row = '<table><tr>';
        row = '<tr class="display_row">';
        row += '<td class="display_col" wrap width="75"><b>' +
               '<textarea rows="8" cols="18" border="1" class="input" ' +
               'style="border: none; overflow: visible; width: 95%;" ' +
               'readonly maxlength="5">Name selected is Tom</textarea>';
        row += '</td></tr></table>';
        </script>

    Read the article

  • MySQL trigger gets deleted automatically

    - by Mirage
    I am using MySQL 5.1 with cPanel/WHM on CentOS. I had to use a trigger for one of my websites, so I installed the trigger as root so that when something is inserted into one table, some more rows are inserted into another table. Everything was working fine, but now I see that the trigger is no longer in my database. How could it have been deleted from the DB? I am a bit worried; the site is not live yet, but this could cause problems if it happens on a live site. Does any MySQL update cause triggers to be deleted? I have not updated anything. How can I make sure this doesn't happen in the future? Thanks

    Read the article

  • MySQL Query WHERE Including CASE or IF?

    - by handfix
    Strange problem. My query looks like:

        SELECT DISTINCT ID, `etcetc`, `if/elses over multiple joined tables`
        FROM table1 AS `t1`
        # some joins, eventually unrelated in this context
        WHERE
            # some standard where statements, they work
            CASE
                WHEN `t1`.`field` = "foo" THEN
                    (`t1`.`anOtherField` != 123
                     AND `t1`.`anOtherField` != 456
                     AND `t1`.`anOtherOtherField` != "some String")
                WHEN `t1`.`field` = "bar" THEN
                    `t1`.`aSecondOtherField` != 12345
            END
        # ORDER BY CASE etc. Standard stuff

    Apparently MySQL returns a wrong row count, and I think my problem is in the logic of the WHERE ... CASE. Maybe it's the brackets? Maybe I should go for the OR operator instead of AND? Should my second WHEN include brackets as well, even though I only compare one field? Should I use IF instead of CASE? Basically I want to exclude some rows with specific values if the field holds the value foo or bar. I would try it all out, but it takes a huge amount of time to complete that query... :(
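
    One thing to note: a CASE with no matching WHEN and no ELSE evaluates to NULL, so any row where `field` is neither "foo" nor "bar" is silently filtered out, which may explain the unexpected row count. A rewrite sketch of just that condition as plain boolean logic (keep or drop the last line depending on whether such rows should be returned):

        (
            (`t1`.`field` = 'foo'
             AND `t1`.`anOtherField` NOT IN (123, 456)
             AND `t1`.`anOtherOtherField` <> 'some String')
         OR (`t1`.`field` = 'bar'
             AND `t1`.`aSecondOtherField` <> 12345)
         OR  `t1`.`field` NOT IN ('foo', 'bar')   -- rows that match neither case
        )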

    Read the article
