Search Results

Search found 7311 results on 293 pages for 'rows'.

Page 156 of 293

  • What is the performance of "Merge" clause in sql server 2008?

    - by ziang
    Hi, MERGE can perform insert, update, or delete operations on a target table based on the results of a join with a source table. For example, you can synchronize two tables by inserting, updating, or deleting rows in one table based on differences found in the other. Is anyone familiar with how the performance of MERGE compares with the traditional logic of checking for existence and then deciding between an update and an insert? Thanks!
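
    For reference, a minimal MERGE upsert of the kind being compared might look like this (a sketch only; the table and column names are made up):

        MERGE dbo.TargetTable AS t
        USING dbo.SourceTable AS s
            ON t.ID = s.ID
        WHEN MATCHED THEN
            UPDATE SET t.Name = s.Name
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (ID, Name) VALUES (s.ID, s.Name)
        WHEN NOT MATCHED BY SOURCE THEN
            DELETE;

    The traditional alternative is an UPDATE of the matching rows followed by an INSERT of the non-matching ones (or a per-row IF EXISTS check), so the comparison is essentially one joined statement versus two passes over the data.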


  • To get data from table in script from sql server 2005

    - by Zerotoinfinite
    Hi experts, I am using SQL Server 2005. I have a table (say tblHistory) that contains 100 rows. I have created the same table on the server, but that copy doesn't have the data. I want to convert the data from tblHistory into INSERT INTO tblHistory ... statements so that I can run the script on the server to fill the database. Please help, it's urgent.
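
    One common way to do this, sketched here for a hypothetical two-column table (an int ID and an nvarchar Name), is to have SQL Server build the INSERT statements for you:

        SELECT 'INSERT INTO tblHistory (ID, Name) VALUES ('
               + CAST(ID AS varchar(20)) + ', N'''
               + REPLACE(Name, '''', '''''') + ''');'
        FROM tblHistory;

    The REPLACE doubles any embedded single quotes; NULL values and other data types would need extra handling. Save the result set as a .sql file and run it on the server.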


  • JDBC Code Change From SQL Server to Oracle

    - by BeginnerAmongBeginners
    In the JDBC code, I have the following, which works with SQL Server:

        CallableStatement stmt = connection.prepareCall("{ call getName() }");
        ResultSet rs = stmt.executeQuery();
        if (rs != null) {
            while (rs.next()) {
                // do something with rs.getString("name")
            }
        }

    Multiple rows are returned in this situation. I understand that a cursor is required to loop through the table in Oracle, but is there any way to keep the above code the same and accomplish the same thing? Thanks in advance.
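
    On the Oracle side, a procedure that "returns rows" generally has to hand back a cursor explicitly, for example as a function returning SYS_REFCURSOR (a sketch; the underlying table and column are assumptions):

        CREATE OR REPLACE FUNCTION getName
        RETURN SYS_REFCURSOR
        AS
          c SYS_REFCURSOR;
        BEGIN
          OPEN c FOR SELECT name FROM some_table;
          RETURN c;
        END;

    The JDBC call then usually becomes "{ ? = call getName() }" with the first parameter registered as OracleTypes.CURSOR and cast to a ResultSet, after which the while (rs.next()) loop can stay exactly as it is.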


  • Reset SQL variable inside SELECT statement

    - by Jason McCreary
    I am trying to number some rows on a bridge table with a single UPDATE/SELECT statement using a counter variable @row. For example:

        UPDATE teamrank
        JOIN (SELECT @row := @row + 1 AS position, name FROM members) USING (teamID, memberID)
        SET rank = position

    Is something like this possible or do I need to create a cursor? If it helps, I am using MySQL 5.
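
    A variant that usually works on MySQL 5 is to initialize @row inside the statement with a cross join, number the rows in a derived table, and join that back for the update (a sketch; it assumes both tables carry teamID and memberID and that members should be numbered by name):

        UPDATE teamrank AS tr
        JOIN (
            SELECT m.teamID, m.memberID, (@row := @row + 1) AS position
            FROM members AS m
            CROSS JOIN (SELECT @row := 0) AS init
            ORDER BY m.name
        ) AS ranked
          ON ranked.teamID = tr.teamID AND ranked.memberID = tr.memberID
        SET tr.rank = ranked.position;

    The important part is the (SELECT @row := 0) derived table, which resets the counter within the statement itself instead of relying on a separate SET @row = 0 beforehand.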


  • Zend_Paginator / Doctrine 2

    - by Kevin
    I'm using Doctrine 2 with my Zend Framework application, and a typical query could yield a million (or more) search results. I want to use Zend_Paginator with this result set. However, I don't want to return all the results as an array and use the Array adapter, as this would be inefficient; instead, I would like to supply the paginator with the total number of rows and an array of results based on limit/offset amounts. Is this doable using the Array adapter, or would I need to create my own pagination adapter?


  • How to get only the first row from a java.sql.ResultSet?

    - by llm
    I have a ResultSet object containing all the rows returned from an SQL query. I want to take that ResultSet and, in the Java code (NOT by forcing it in the SQL), transform it so that it only contains one row (the first). What would be the way to achieve this? Also, is there another appropriate class (somewhere in java.sql or elsewhere) for storing just a single row rather than trimming my ResultSet? Thanks!


  • How efficient is a details table?

    - by Jeffrey Lott
    At my job, we have a pseudo-standard of creating one table to hold the "standard" information for an entity and a second table, named like 'TableNameDetails', which holds optional data elements. On average, every row in the main table will have about 8-10 detail rows. My question is: what kind of performance impact does this have compared with adding these details as additional nullable columns on the main table?
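
    For concreteness, the pattern being described is roughly the following (hypothetical names; the details table is assumed to be a name/value layout, since each main row has several detail rows), as opposed to folding the optional elements into nullable columns on the main table:

        CREATE TABLE Customer (
            CustomerID int NOT NULL PRIMARY KEY,
            Name       nvarchar(100) NOT NULL
        );

        CREATE TABLE CustomerDetails (
            CustomerID  int NOT NULL REFERENCES Customer (CustomerID),
            DetailName  nvarchar(50) NOT NULL,
            DetailValue nvarchar(500) NULL,
            PRIMARY KEY (CustomerID, DetailName)
        );

    With 8-10 detail rows per main row, every read that needs the optional data pays for a join and returns several rows per entity, which is the trade-off the question is asking about.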


  • MySQL Query to find consecutive available times of variable length

    - by Armaconn
    I have an events table that has user_id, date ('2013-10-01'), time ('04:15:00'), and status_id. What I am looking for is a solution similar to http://stackoverflow.com/questions/2665574/find-consecutive-rows-calculate-duration, but I need two additional components:

    1) Take date into consideration, so 10/1/2013 at 11:00 PM through 10/2/2013 at 3:00 AM counts as one run. Feel free to just put in a fake date range (like '2013-10-01' to '2013-10-31').
    2) Limit the output to runs of 4+ consecutive times (each event is 15 minutes and I want to display minimum blocks of an hour, but I would also like to be able to switch this restriction to 1.5 hours or some other duration if possible).

    SUMMARY - Looking for a query that provides the start and end times for sets of events that have the same user_id and status_id and form a continuous series based on date and time, with results restricted by date range and minimum series duration. So the output should have: user_id, date_start, time_start, date_end, time_end, status_id, duration.

        CREATE TABLE `events` (
          `event_id` int(11) NOT NULL auto_increment COMMENT 'ID',
          `user_id` int(11) NOT NULL,
          `date` date NOT NULL,
          `time` time NOT NULL,
          `status_id` int(11) default NULL,
          PRIMARY KEY (`event_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1568;

        INSERT INTO `events` VALUES(1, 101, '2013-08-14', '23:00:00', 2);
        INSERT INTO `events` VALUES(2, 101, '2013-08-14', '23:15:00', 2);
        INSERT INTO `events` VALUES(3, 101, '2013-08-14', '23:30:00', 2);
        INSERT INTO `events` VALUES(4, 101, '2013-08-14', '23:45:00', 2);
        INSERT INTO `events` VALUES(5, 101, '2013-08-15', '00:00:00', 2);
        INSERT INTO `events` VALUES(6, 101, '2013-08-15', '00:15:00', 1);
        INSERT INTO `events` VALUES(7, 500, '2013-08-14', '23:45:00', 1);
        INSERT INTO `events` VALUES(8, 500, '2013-08-15', '00:00:00', 1);
        INSERT INTO `events` VALUES(9, 500, '2013-08-15', '00:15:00', 2);
        INSERT INTO `events` VALUES(10, 500, '2013-08-15', '00:30:00', 2);
        INSERT INTO `events` VALUES(11, 500, '2013-08-15', '00:45:00', 1);

    Desired output:

        row | user_id | date_start   | time_start | date_end     | time_end   | status_id | duration
        1   | 101     | '2013-08-14' | '23:00:00' | '2013-08-15' | '00:15:00' | 2         | 5
        2   | 101     | '2013-08-15' | '00:00:15' | '2013-08-15' | '00:30:00' | 1         | 1
        3   | 500     | '2013-08-14' | '00:23:45' | '2013-08-15' | '00:15:00' | 1         | 2
        4   | 500     | '2013-08-15' | '00:00:15' | '2013-08-15' | '00:45:00' | 2         | 2
        5   | 500     | '2013-08-15' | '00:00:45' | '2013-08-15' | '01:00:00' | 2         | 1

    (*except that rows 2 and 5 wouldn't appear if duration had to be greater than 30 minutes.) Thanks for any help that you can provide! And please let me know if there is anything I can further clarify!!
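
    One possible shape for this on MySQL 5 (a sketch, not verified against the data above) is the user-variable "islands" technique: combine date and time into one timestamp, start a new group whenever the current row is not exactly 15 minutes after the previous row for the same user_id and status_id, then aggregate each group. The date range in the WHERE clause is a placeholder.

        SELECT user_id, status_id,
               DATE(MIN(evt_ts)) AS date_start,
               TIME(MIN(evt_ts)) AS time_start,
               DATE(MAX(evt_ts) + INTERVAL 15 MINUTE) AS date_end,
               TIME(MAX(evt_ts) + INTERVAL 15 MINUTE) AS time_end,
               COUNT(*) AS duration   -- number of 15-minute events in the run
        FROM (
            SELECT e.user_id, e.status_id,
                   TIMESTAMP(e.date, e.time) AS evt_ts,
                   @grp := IF(@prev_user = e.user_id
                              AND @prev_status = e.status_id
                              AND TIMESTAMP(e.date, e.time) = @prev_ts + INTERVAL 15 MINUTE,
                              @grp, @grp + 1) AS grp,
                   @prev_user   := e.user_id,
                   @prev_status := e.status_id,
                   @prev_ts     := TIMESTAMP(e.date, e.time)
            FROM events e
            CROSS JOIN (SELECT @grp := 0, @prev_user := NULL,
                               @prev_status := NULL, @prev_ts := NULL) init
            WHERE e.date BETWEEN '2013-08-01' AND '2013-08-31'
            ORDER BY e.user_id, e.date, e.time
        ) runs
        GROUP BY user_id, status_id, grp
        HAVING COUNT(*) >= 4;

    Changing HAVING COUNT(*) >= 4 to >= 6 would switch the minimum block from 1 hour to 1.5 hours. Note that relying on the evaluation order of user variables within a SELECT is technically undefined in MySQL, so this approach needs to be verified on the target version.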


  • How to access Excel Max Column value?

    - by Phsika
    I am trying to create a table from Excel rows. However, the Excel columns vary: column1 has at most 200 characters in a row, column2 has at most 300, and column3 has at most 500. So I need to generate the SQL CREATE TABLE statement (e.g. MyTable with column3 nvarchar(500)) according to the maximum character length found in each Excel column.
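
    If the goal is just the generated DDL, it would presumably look something like this (a sketch based on the three lengths quoted above; the table and column names are placeholders):

        CREATE TABLE MyTable (
            column1 nvarchar(200) NULL,
            column2 nvarchar(300) NULL,
            column3 nvarchar(500) NULL
        );

    The lengths would come from scanning each Excel column for its longest cell value before emitting the statement.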


  • Setting time to 23:59:59

    - by Mike Wills
    I need to compare a date range and am missing rows whose date equals the upper comparison date but whose time is later than midnight. Is there a way to set the upper comparison's time to 23:59:59?
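
    One way to avoid chasing the exact end-of-day value (a sketch in generic SQL; the table, column, and dates are made up) is to use an exclusive upper bound on the following day:

        SELECT *
        FROM Orders
        WHERE OrderDate >= '2013-10-01'
          AND OrderDate <  '2013-10-08';  -- up to, but not including, midnight of the 8th

    Compared with BETWEEN ... AND '2013-10-07 23:59:59', this also catches rows whose time component has fractional seconds such as 23:59:59.997.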


  • Reading a file in C++ which has integers

    - by Avinash
    I want to read the following file in C++:

        000001011100110
        100000010101100
        001001001001100
        110110000000011
        000000010110011
        011000110101110
        111010011011110
        011001010010000

    I already know how many rows and columns there are in the file. I want to read each digit and store it in a 2-D matrix of ints; each digit is a separate entry, so 0 is one entry and 1 is another. In the example above, each line contains 15 0's and 1's.


  • Is it faster to filter and get data, or filter then get data?

    - by remi bourgarel
    Hi, I have this kind of request:

        SELECT myTable.ID,
               myTable.Adress,
               -- 20 more columns of all kinds of types
        FROM myTable
        WHERE EXISTS (SELECT * FROM myLink
                      WHERE myLink.FID = myTable.ID AND myLink.FID2 = 666)

    myLink has a lot of rows. Do you think it's faster to do it like this:

        SELECT myLink.FID INTO @result FROM myLink WHERE myLink.FID2 = 666

        UPDATE @result
        SET Adress = myTable.Adress,
            -- 20 more columns of all kinds of types
        FROM myTable
        WHERE myTable.ID = @result.ID
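
    For what it's worth, the second variant written as valid T-SQL usually ends up as a temp table plus a join rather than an UPDATE, roughly like this sketch (column list abbreviated):

        SELECT myLink.FID
        INTO #result
        FROM myLink
        WHERE myLink.FID2 = 666;

        SELECT t.ID, t.Adress  -- plus the other 20 columns
        FROM myTable AS t
        INNER JOIN #result AS r ON r.FID = t.ID;

    Whether that beats the single EXISTS query depends on the indexes and row counts involved, which is what a comparison of the execution plans would show.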


  • Rewrite the foreach using lambda + C#3.0

    - by Newbie
    I am trying the following:

        foreach (DataRow dr in dt.Rows)
        {
            if (dr["TABLE_NAME"].ToString().Contains(sheetName))
            {
                tableName = dr["TABLE_NAME"].ToString();
            }
        }

    using a lambda, like this:

        string tableName = "";
        DataTableExtensions.AsEnumerable(dt).ToList().ForEach(i =>
        {
            tableName = i["TABLE_NAME"].ToString().Contains(sheetName);
        });

    but I am getting the compile-time error "cannot implicitly convert bool to string". So how do I achieve the same thing? Thanks (C# 3.0)


  • MySQL MATCH AGAINST functionality....

    - by Webnet
    Currently I have the following query:

        SELECT id, LOWER(title) AS title, LOWER(sub_title) AS sub_title
        FROM ebay_archive_listing
        WHERE MATCH(title, sub_title) AGAINST ("key" IN BOOLEAN MODE)

    However it is not finding rows where the title contains the word "key". "key" is generated dynamically based on a set of keywords, so sometimes it contains + and - symbols.
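
    One likely explanation (an assumption, but a common one with MyISAM full-text indexes): words shorter than ft_min_word_len, which defaults to 4, are never indexed, so a three-letter term like "key" can't match no matter how the query is written. The usual fix is a configuration change plus an index rebuild, sketched here:

        -- my.cnf, [mysqld] section (server restart required):
        --   ft_min_word_len = 3

        -- then rebuild the full-text index, e.g. for a MyISAM table:
        REPAIR TABLE ebay_archive_listing QUICK;

    The dynamically generated + and - symbols are a separate concern: in boolean mode, +word requires the word and -word excludes it, so a stray "-" in front of a wanted keyword would also hide rows.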


  • Increment my id in my insert request

    - by Mercer
    Hello, I have a table with some rows: idClient, name, adress, country, ... I want to know how I can do an insert into this table that auto-increments idClient in my SQL request. Thanks.

    Edit: I want to do a request like this:

        insert into Client values((select max(idClient),...)
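
    Assuming the database supports it, the usual answer is to let the column number itself instead of selecting MAX(idClient) (a sketch in MySQL syntax; the column sizes and sample values are made up):

        CREATE TABLE Client (
            idClient INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name     VARCHAR(100),
            adress   VARCHAR(200),
            country  VARCHAR(100)
        );

        -- idClient is filled in automatically:
        INSERT INTO Client (name, adress, country)
        VALUES ('Dupont', '1 rue Exemple', 'France');

    A SELECT MAX(idClient) + 1 inside the INSERT can hand the same id to two concurrent inserts, which is the main reason to prefer an auto-increment / identity / sequence column.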


  • Why is sys+user > real in "time command"?

    - by shadyabhi
    I have a program that uses the pthread library to multiply 500x500 matrices. Each thread calculates 50 rows of the result matrix. When I run the time command:

        shadyabhi@shadyabhi-desktop:~$ time ./a.out

        real    0m0.383s
        user    0m0.810s
        sys     0m0.000s
        shadyabhi@shadyabhi-desktop:~$

    How come sys + user is greater than real?


  • Getting duplicate count when executing INSERT IGNORE via JDBC

    - by Nickolay Komar
    Is it possible to get the duplicate count when executing a MySQL "INSERT IGNORE" statement via JDBC? For example, when I execute an INSERT IGNORE statement on the mysql command line and there are duplicates, I get something like:

        Query OK, 0 rows affected (0.02 sec)
        Records: 1  Duplicates: 1  Warnings: 0

    Note where it says "Duplicates: 1", indicating that there were duplicates that were ignored. Is it possible to get the same information when executing the query via JDBC? Thanks.


  • SQL Server PIVOT table: transform rows into columns

    - by Matt
    I am trying to convert rows into columns. Here is my query:

        SELECT M.urn, M.eventdate, M.eventlocation, M.eventroom, M.eventbed, N.time
        FROM admpatevents M
        INNER JOIN admpattransferindex N
            ON M.urn = N.urn
            AND M.eventseqno = N.eventseqno
            AND M.eventdate = N.eventdate
        WHERE M.urn = 'F1002754364'
          AND M.eventcode = 'TFRADMIN'

    Current result:

        URN          Date      Location    Room      Bed  Time
        F1002754364  20121101  EDEXPRESS   4-152     02   0724
        F1002754364  20121101  CARDSURG    3-110     02   1455
        F1002754364  20121102  CHEST UNIT  6-129-GL  04   1757

    Required result:

        F1002754364  20121101  EDEXPRESS   4-152     02   0724
                     20121101  CARDSURG    3-110     02   1455
                     20121102  CHEST UNIT  6-129-GL  04   1757

    Thanks for your help.
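
    The required result looks less like a PIVOT than like suppressing the repeated urn on all but the first row of each group. If that really has to happen in SQL rather than in the reporting layer, one sketch (SQL Server 2005 or later) is:

        SELECT CASE WHEN ROW_NUMBER() OVER (PARTITION BY M.urn
                                            ORDER BY M.eventdate, N.time) = 1
                    THEN M.urn ELSE '' END AS urn,
               M.eventdate, M.eventlocation, M.eventroom, M.eventbed, N.time
        FROM admpatevents M
        INNER JOIN admpattransferindex N
            ON M.urn = N.urn
            AND M.eventseqno = N.eventseqno
            AND M.eventdate = N.eventdate
        WHERE M.urn = 'F1002754364'
          AND M.eventcode = 'TFRADMIN';

    Most reporting tools can do this kind of "hide duplicate values" formatting themselves, which keeps the query simpler.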


  • Getting a count of users each day in Mondrian MDX

    - by user1874144
    I'm trying to write a query to give me the total number of users for each customer per day. Here is what I have so far, which for each customer/day combination is giving the total number of user dimension entries without splitting them up by customer/day:

        WITH MEMBER [Measures].[MyUserCount] AS
            COUNT(Descendants([User].CurrentMember, [User].[User Name]), INCLUDEEMPTY)
        SELECT
            NON EMPTY CrossJoin([Date].[Date].Members, [Customer].[Customer Name].Members) ON ROWS,
            {[Measures].[MyUserCount]} ON COLUMNS
        FROM [Users]


  • Is Cassandra database row size limited by available memory?

    - by Adam Hollidge
    I'm working with very long time series -- hundreds of millions of data points in one series -- and am considering Cassandra as a data store. In this question, one of the Cassandra committers (the über helpful jbellis) says that Cassandra rows can be very large, and that column slicing operations are faster than row slices, hence my question: Is the row size still limited by available memory?

