Search Results

Search found 7311 results on 293 pages for 'rows'.

Page 209/293

  • How to remove duplicate records in a table?

    - by Mason Wheeler
    I've got a table in a testing DB that someone apparently got a little trigger-happy on when running the INSERT scripts to set it up. The schema looks like this:

        ID            UNIQUEIDENTIFIER
        TYPE_INT      SMALLINT
        SYSTEM_VALUE  SMALLINT
        NAME          VARCHAR
        MAPPED_VALUE  VARCHAR

    It's supposed to have a few dozen rows. It has about 200,000, most of which are duplicates in which TYPE_INT, SYSTEM_VALUE, NAME and MAPPED_VALUE are all identical and only ID differs. Now, I could probably write a cleanup script that creates a temporary table in memory, uses INSERT .. SELECT DISTINCT to grab all the unique values, TRUNCATEs the original table, and then copies everything back. But is there a simpler way to do it, like a DELETE query with something special in the WHERE clause?
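    A hedged sketch of that DELETE approach: on SQL Server 2005 and later, a ROW_NUMBER() CTE can target the duplicates directly (the table name here is a placeholder):

        WITH numbered AS (
            SELECT ROW_NUMBER() OVER (
                       PARTITION BY TYPE_INT, SYSTEM_VALUE, NAME, MAPPED_VALUE
                       ORDER BY ID) AS rn
            FROM MyTable
        )
        DELETE FROM numbered WHERE rn > 1;  -- keeps one arbitrary row per duplicate group

    Deleting through the CTE removes the rows from the underlying table.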

    Read the article

  • Fastest way to move records from an Oracle DB into MS SQL Server after processing

    - by user347748
    Hi. OK, this is the scenario: I have a table in Oracle that acts like a queue. A VB.NET program reads the queue and calls a stored proc in MS SQL Server that processes the message, inserts it into another SQL Server table, and then deletes the record from the Oracle table. We use a data reader to read the records from Oracle and then call the stored proc for each of the records. The program seems to be a little slow. The stored procedure itself isn't slow: called by itself in a loop, the SP can process about 2000 records in 20 seconds, but when called from the .NET program the rate is about 5 records per second. I have seen that most of the time is consumed in calling the stored procedure and waiting for it to return. Is there a better way of doing this? Here is a snippet of the actual code:

        Function StartDataXfer() As Boolean
            Dim status As Boolean = False
            Try
                SqlConn.Open()
                OraConn.Open()
                c.ErrorLog(Now.ToString & "--Going to Get the messages from oracle", 1)
                If GetMsgsFromOracle() Then
                    c.ErrorLog(Now.ToString & "--Got messages from oracle", 1)
                    If ProcessMessages() Then
                        c.ErrorLog(Now.ToString & "--Finished Processing all messages in the queue", 0)
                        status = True
                    Else
                        c.ErrorLog(Now.ToString & "--Failed to Process all messages in the queue", 0)
                        status = False
                    End If
                Else
                    status = True
                End If
                StartDataXfer = status
            Catch ex As Exception
            Finally
                SqlConn.Close()
                OraConn.Close()
            End Try
        End Function

        Private Function GetMsgsFromOracle() As Boolean
            Try
                OraDataAdapter = New OleDb.OleDbDataAdapter
                OraDataTable = New System.Data.DataTable
                OraSelCmd = New OleDb.OleDbCommand
                GetMsgsFromOracle = False
                With OraSelCmd
                    .CommandType = CommandType.Text
                    .Connection = OraConn
                    .CommandText = GetMsgSql
                End With
                OraDataAdapter.SelectCommand = OraSelCmd
                OraDataAdapter.Fill(OraDataTable)
                If OraDataTable.Rows.Count > 0 Then
                    GetMsgsFromOracle = True
                End If
            Catch ex As Exception
                GetMsgsFromOracle = False
            End Try
        End Function

        Private Function ProcessMessages() As Boolean
            Try
                ProcessMessages = False
                PrepareSQLInsert()
                PrepOraDel()
                i = 0
                Dim Method As Integer
                Dim OraDataRow As DataRow
                c.ErrorLog(Now.ToString & "--Going to call message sending procedure", 2)
                For Each OraDataRow In OraDataTable.Rows
                    With OraDataRow
                        Method = GetMethod(.Item(0))
                        SQLInsCmd.Parameters("RelLifeTime").Value = c.RelLifetime
                        SQLInsCmd.Parameters("Param1").Value = Nothing
                        SQLInsCmd.Parameters("ID").Value = GenerateTransactionID() ' Nothing
                        SQLInsCmd.Parameters("UID").Value = Nothing
                        SQLInsCmd.Parameters("Param").Value = Nothing
                        SQLInsCmd.Parameters("Credit").Value = 0
                        SQLInsCmd.ExecuteNonQuery()
                        'check the return value
                        If SQLInsCmd.Parameters("ReturnValue").Value = 1 And SQLInsCmd.Parameters("OutPutParam").Value = 0 Then
                            'success: delete the input record from the source table once it is logged
                            c.ErrorLog(Now.ToString & "--Moved record successfully", 2)
                            OraDataAdapter.DeleteCommand.Parameters("P(0)").Value = OraDataRow.Item(6)
                            OraDataAdapter.DeleteCommand.ExecuteNonQuery()
                            c.ErrorLog(Now.ToString & "--Deleted record successfully", 2)
                            OraDataAdapter.Update(OraDataTable)
                            c.ErrorLog(Now.ToString & "--Committed record successfully", 2)
                            i = i + 1
                        Else
                            'failure
                            c.ErrorLog(Now.ToString & "--Failed to exec: " & c.DestIns & "Status: " & SQLInsCmd.Parameters("OutPutParam").Value & " and TrackId: " & SQLInsCmd.Parameters("TrackID").Value.ToString, 0)
                        End If
                        If File.Exists("stop.txt") Then
                            c.ErrorLog(Now.ToString & "--Stop File Found", 1)
                            'ProcessMessages = True
                            'Exit Function
                            Exit For
                        End If
                    End With
                Next
                OraDataAdapter.Update(OraDataTable)
                c.ErrorLog(Now.ToString & "--Updated Oracle Table", 1)
                c.ErrorLog(Now.ToString & "--Moved " & i & " records from Oracle to SQL Table", 1)
                ProcessMessages = True
            Catch ex As Exception
                ProcessMessages = False
                c.ErrorLog(Now.ToString & "--MoveMsgsToSQL: " & ex.Message, 0)
            Finally
                OraDataTable.Clear()
                OraDataTable.Dispose()
                OraDataAdapter.Dispose()
                OraDelCmd.Dispose()
                OraDelCmd = Nothing
                OraSelCmd = Nothing
                OraDataTable = Nothing
                OraDataAdapter = Nothing
            End Try
        End Function

        Public Function GenerateTransactionID() As Int64
            Dim SeqNo As Int64
            Dim qry As String
            Dim SqlTransCmd As New OleDb.OleDbCommand
            qry = " select seqno from StoreSeqNo"
            SqlTransCmd.CommandType = CommandType.Text
            SqlTransCmd.Connection = SqlConn
            SqlTransCmd.CommandText = qry
            SeqNo = SqlTransCmd.ExecuteScalar
            If SeqNo > 2147483647 Then
                qry = "update StoreSeqNo set seqno=1"
                SqlTransCmd.CommandText = qry
                SqlTransCmd.ExecuteNonQuery()
                GenerateTransactionID = 1
            Else
                qry = "update StoreSeqNo set seqno=" & SeqNo + 1
                SqlTransCmd.CommandText = qry
                SqlTransCmd.ExecuteNonQuery()
                GenerateTransactionID = SeqNo
            End If
        End Function

        Private Function PrepareSQLInsert() As Boolean
            'Prepare the insert command for the SQL stored procedure SMSProcessAndDispatch
            Try
                SQLInsCmd = New OleDb.OleDbCommand
                With SQLInsCmd
                    .CommandType = CommandType.StoredProcedure
                    .Connection = SqlConn
                    .CommandText = SQLInsProc
                    .Parameters.Add("ReturnValue", OleDb.OleDbType.Integer)
                    .Parameters("ReturnValue").Direction = ParameterDirection.ReturnValue
                    .Parameters.Add("OutPutParam", OleDb.OleDbType.Integer)
                    .Parameters("OutPutParam").Direction = ParameterDirection.Output
                    .Parameters.Add("TrackID", OleDb.OleDbType.VarChar, 70)
                    .Parameters.Add("RelLifeTime", OleDb.OleDbType.TinyInt)
                    .Parameters("RelLifeTime").Direction = ParameterDirection.Input
                    .Parameters.Add("Param1", OleDb.OleDbType.VarChar, 160)
                    .Parameters("Param1").Direction = ParameterDirection.Input
                    .Parameters.Add("TransID", OleDb.OleDbType.VarChar, 70)
                    .Parameters("TransID").Direction = ParameterDirection.Input
                    .Parameters.Add("UID", OleDb.OleDbType.VarChar, 20)
                    .Parameters("UID").Direction = ParameterDirection.Input
                    .Parameters.Add("Param", OleDb.OleDbType.VarChar, 160)
                    .Parameters("Param").Direction = ParameterDirection.Input
                    .Parameters.Add("CheckCredit", OleDb.OleDbType.Integer)
                    .Parameters("CheckCredit").Direction = ParameterDirection.Input
                    .Prepare()
                End With
            Catch ex As Exception
                c.ErrorLog(Now.ToString & "--PrepareSQLInsert: " & ex.Message)
            End Try
        End Function

        Private Function PrepOraDel() As Boolean
            OraDelCmd = New OleDb.OleDbCommand
            Try
                PrepOraDel = False
                With OraDelCmd
                    .CommandType = CommandType.Text
                    .Connection = OraConn
                    .CommandText = DelSrcSQL
                    .Parameters.Add("P(0)", OleDb.OleDbType.VarChar, 160) 'RowID
                    .Parameters("P(0)").Direction = ParameterDirection.Input
                    .Prepare()
                End With
                OraDataAdapter.DeleteCommand = OraDelCmd
                PrepOraDel = True
            Catch ex As Exception
                PrepOraDel = False
            End Try
        End Function

    What I would like to know is whether there is any way to speed up this program. Any ideas/suggestions would be highly appreciated... Regards, Chetan
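    One direction worth testing (a sketch only, not the poster's code; the connection string and staging-table name are hypothetical): push the whole DataTable across in one SqlBulkCopy call into a staging table, then have a single set-based stored procedure process the batch, so the per-row round trip to SQL Server disappears:

        ' Sketch: assumes a System.Data.SqlClient connection string; "MsgStaging" is made up.
        Using bulk As New System.Data.SqlClient.SqlBulkCopy(sqlConnString)
            bulk.DestinationTableName = "MsgStaging"
            bulk.WriteToServer(OraDataTable) ' one round trip for the whole batch
        End Using
        ' ...then execute one stored procedure that processes MsgStaging set-based.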

    Read the article

  • Inner join 2 tables, one-to-many, with 2 WHERE clauses

    - by user2892350
    I'm a relative rookie at this, so please bear with me... I have two tables: OrderDetail and OrderMaster, both of which have a column named SalesOrder. OrderDetail has multiple rows per unique SalesOrder; OrderMaster has one row per unique SalesOrder. OrderDetail has a column named LineType, and OrderMaster has a column named OrderStatus. I want to select all records from OrderDetail that have a LineType of "1" AND whose matching SalesOrder row in the OrderMaster table has an OrderStatus value of "4". In plain English: orders with status 4 are open and ready to ship, and a LineType value of 1 means the detail line is a product code. How should this query be structured? It's going into VS 2008 (VB). Many thanks in advance!!! Mike
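    A minimal sketch of the join (assuming LineType and OrderStatus are numeric; quote the literals if they are character columns):

        SELECT d.*
        FROM OrderDetail d
        INNER JOIN OrderMaster m
                ON m.SalesOrder = d.SalesOrder
        WHERE d.LineType = 1
          AND m.OrderStatus = 4;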

    Read the article

  • Why is a MySQL multiple-column index overpopulated?

    - by actual
    Consider the following MySQL table:

        CREATE TABLE `log` (
          `what` enum('add', 'edit', 'remove') CHARACTER SET ascii COLLATE ascii_bin NOT NULL,
          `with` int(10) unsigned NOT NULL,
          KEY `with_what` (`with`,`what`)
        ) ENGINE=InnoDB;

        INSERT INTO `log` (`what`, `with`) VALUES
          ('add', 1), ('edit', 1), ('add', 2), ('remove', 2);

    As I understand it, the with_what index must have 2 unique entries at its first (with) level and 3 unique entries in the what "subindex". But MySQL reports 4 unique entries for each level. In other words, the number of unique elements at each level is always equal to the number of rows in the log table. Is that a bug, a feature, or my misunderstanding?
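    InnoDB reports estimated rather than exact cardinality, and the estimate is refreshed by ANALYZE TABLE, so one quick check is:

        ANALYZE TABLE log;    -- refresh InnoDB's index statistics
        SHOW INDEX FROM log;  -- the Cardinality column is an estimate, not an exact count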

    Read the article

  • Getting a DateTimeOffset value from SQL 2008 to C#

    - by Darvis Lombardo
    I have a SQL 2008 table with a field called RecDate of type DateTimeOffset. For a given record the value is '2010-04-01 17:19:23.62 -05:00'. In C# I create a DataTable and fill it with the results of "SELECT RecDate FROM MyTable". I need to get the milliseconds, but if I do the following, the milliseconds are always 0:

        DateTimeOffset dto = DateTimeOffset.Parse(dt.Rows[0][0].ToString());

    What is the proper way to get the value in the RecDate column into the dto variable? Thanks! Darvis
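    A likely fix (a sketch): ADO.NET hands the column back as a boxed DateTimeOffset, so casting it directly avoids the string round trip, whose default format drops the fractional seconds:

        // Cast the boxed value; no ToString()/Parse round trip.
        DateTimeOffset dto = (DateTimeOffset)dt.Rows[0][0];
        int ms = dto.Millisecond;  // 620 for '2010-04-01 17:19:23.62 -05:00'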

    Read the article

  • Does an index affect only the WHERE clause?

    - by andre matos
    If I have something like:

        CREATE INDEX idx_myTable_field_x ON myTable USING btree (field_x);

        SELECT COUNT(field_x), field_x
        FROM myTable
        GROUP BY field_x
        ORDER BY field_x;

    Imagine myTable with around 500,000 rows and most field_x values unique. Since I don't use any WHERE clause, will the created index have any effect at all on my query? Edit: I'm asking this question because I don't see any relevant difference between query times before and after creating the index; they always take about 8 seconds (which, of course, is too much time!). Is this behaviour expected?
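    One way to check is to ask the planner directly; EXPLAIN ANALYZE (PostgreSQL syntax, matching the USING btree in the question) shows whether the index is used at all:

        EXPLAIN ANALYZE
        SELECT COUNT(field_x), field_x
        FROM myTable
        GROUP BY field_x
        ORDER BY field_x;

    If the plan shows a sequential scan followed by a sort rather than an index scan, the index is not helping this query.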

    Read the article

  • Dividing a 9x9 2d array into 9 sub-grids (like in sudoku)? (C++)

    - by kevin
    I'm trying to code a sudoku solver, and the way I attempted to do so was to have a 9x9 grid of pointers that hold the address of "set" objects that possess either the solution or the valid possible values. I was able to go through the array with two for loops, going through each column first and then moving on to the next row and repeating. However, I'm having a hard time imagining how I would designate which sub-grid (or box, block, etc.) a specific cell belongs to. My initial impression was to have if statements in the for loops, such as: if row < 2 (rows start at 0) and col < 2, then we're in the 1st block. But that seems to get messy. Would there be a better way of doing this?
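    A sketch of the usual arithmetic, which replaces the cascade of if statements: integer division by 3 maps each coordinate to its band of three:

        // Map a cell (row, col), both 0-8, to its 3x3 block index 0-8,
        // numbered left to right, top to bottom.
        inline int blockIndex(int row, int col) {
            return (row / 3) * 3 + (col / 3);  // integer division floors
        }

        // Top-left cell of a block, handy for iterating its nine cells.
        inline void blockOrigin(int block, int& row0, int& col0) {
            row0 = (block / 3) * 3;
            col0 = (block % 3) * 3;
        }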

    Read the article

  • SQLite (C/C++ interface) - How to commit a transaction

    - by AJ
    I am using the SQLite C/C++ interface. Here is my scenario: I have 3 related tables, say A, B, C. There is a function called Set, which gets some inputs and, based on those inputs, inserts rows into these three tables (sometimes it can be an update on one of the tables). Now I need two things. First, I don't want the autocommit feature; basically, I would like to commit after every 1000 calls to the Set function. Secondly, within the Set function itself, if I find that after inserting into two tables the third insert fails, then I have to revert those particular changes made in that Set function call. Now, I don't see any sqlite3_commit function exposed; I only see a function called sqlite3_commit_hook(), which is slightly different according to the documentation. Are there any functions exposed for this purpose, or what is the way to achieve this behaviour? Can you help me with the best approach of doing this? Regards, Arjun
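    There is indeed no sqlite3_commit() in the C API; transactions are driven by ordinary SQL through sqlite3_exec(). A sketch of both requirements (error handling omitted):

        /* Batching: open a transaction, call Set() up to 1000 times, then commit. */
        sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
        /* ... calls to Set() ... */
        sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);

        /* Inside Set(): wrap the three inserts in a savepoint so a failure on the
           third table undoes only this call, not the whole outer transaction. */
        sqlite3_exec(db, "SAVEPOINT set_fn", NULL, NULL, NULL);
        /* ... inserts into A, B, C; on failure: */
        sqlite3_exec(db, "ROLLBACK TO set_fn", NULL, NULL, NULL);
        /* on success (or after the rollback) release the savepoint: */
        sqlite3_exec(db, "RELEASE set_fn", NULL, NULL, NULL);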

    Read the article

  • How to insert a null value for a numeric field of a DataTable in C#?

    - by Pandiya Chendur
    Consider that my dynamically generated DataTable contains the following fields: Id, Name, Mob1, Mob2. If my DataTable has this, it gets inserted successfully:

        Id Name Mob1       Mob2
        1  acp  9994564564 9568848526

    But when it is like this, it fails:

        Id Name Mob1       Mob2
        1  acp  9994564564

    saying: "The given value of type String from the data source cannot be converted to type decimal of the specified target column." I generate my DataTable by reading a CSV file:

        CSVReader reader = new CSVReader(CSVFile.PostedFile.InputStream);
        string[] headers = reader.GetCSVLine();
        DataTable dt = new DataTable();
        foreach (string strHeader in headers)
        {
            dt.Columns.Add(strHeader);
        }
        string[] data;
        while ((data = reader.GetCSVLine()) != null)
        {
            dt.Rows.Add(data);
        }

    Any suggestion how to insert a null value for a numeric field during BulkCopy in C#? EDIT: I tried dt.Columns["Mob2"].AllowDBNull = true; but it doesn't seem to work...
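    A sketch of one fix: the columns here are all typed as strings, so a missing CSV field arrives as an empty string, which cannot be converted to decimal during the bulk copy. Substituting DBNull.Value while filling the table (assuming a blank field really means null) avoids that:

        string[] data;
        while ((data = reader.GetCSVLine()) != null)
        {
            object[] values = new object[data.Length];
            for (int i = 0; i < data.Length; i++)
                values[i] = string.IsNullOrEmpty(data[i]) ? (object)DBNull.Value : data[i];
            dt.Rows.Add(values);
        }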

    Read the article

  • How can I do this with MySQL partitions

    - by Uffo
    I have a table with millions of rows and I want to create some partitions, but I really don't know how to do this. I mean, I want the rows with IDs 1-10000 to be in partition one, the rows with IDs 10001-20000 to be in partition two, and so on. Can you give me an example of how to do it? I have searched a lot on the internet and read a lot of documentation, but I still don't understand how it needs to be done! Best Regards,
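    A hedged sketch of RANGE partitioning on the id column (MySQL requires the partitioning column to be part of every unique key, including the primary key; the table name is made up):

        ALTER TABLE mytable
        PARTITION BY RANGE (id) (
            PARTITION p0   VALUES LESS THAN (10001),   -- ids 1 - 10000
            PARTITION p1   VALUES LESS THAN (20001),   -- ids 10001 - 20000
            PARTITION pmax VALUES LESS THAN MAXVALUE   -- everything above
        );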

    Read the article

  • MEMORY(HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (less than 10,000 rows) that will be under heavy read (scan) and write (update and some insert/delete) load. I am really speaking of 10,000 updates or selects per second. These statements will be executed on only a few (less than 10) open MySQL connections. The table is small and does not contain any data that needs to be stored on disk. So I ask: which is faster, InnoDB or MEMORY (HEAP)? My thoughts are: both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs? (innodb_flush_log_at_trx_commit?) My main concern is the locking behavior: InnoDB row locks vs. the MEMORY engine's table locks. Will this be the bottleneck in the MEMORY implementation? Thanks for your thoughts!
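    For the InnoDB side of that comparison, the two knobs the question touches look like this in my.cnf (values are illustrative only, not recommendations):

        # Illustrative sketch
        innodb_buffer_pool_size        = 256M  # small table stays fully cached in memory
        innodb_flush_log_at_trx_commit = 2     # write log at commit, fsync about once a second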

    Read the article

  • Export MySQL Data as Insert Statements

    - by gav
    Hi All, I'm working in Ubuntu with MySQL, and I also have Query Browser and Administrator installed; I'm not afraid of the command line either, if it helps. I simply want to be able to run a query and see a result set, but then convert that result set into a series of commands that could be used to create the same rows in a table of an identical schema. I hope the question makes sense; it's quite a simple problem and one that must have been solved, but I can't for the life of me work out where this kind of conversion is made available. Thanks in advance, Gav
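    From the command line, mysqldump can emit exactly this kind of INSERT script, and its --where option narrows it to the rows a query would return (database, table and condition are placeholders):

        mysqldump --no-create-info --where="some_column = 'some_value'" mydb mytable > rows.sql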

    Read the article

  • Processing a variable number of form fields

    - by a_m0d
    I am working on a form which displays information about orders. Each order has a unique id, but they are not necessarily sequential on the form. Also, the number of fields can vary (one field per row on the form). The input into the form will not be mapped straight into the database, but will be added to the current value in the database, and then saved. An example of the form is in the picture below - the callout on the right shows the id for each row. I know how to generate the form like this, but I can't work out how I can easily process each of these rows reliably. I also know how to give each of the fields a unique identifier, like name="row-23", but how can I translate that name so that I can update the related record in the database?
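    Server-side, the usual trick is to loop over the submitted parameters and strip the prefix to recover each id. A rough sketch (in Python, since the question names no language; form and add_to_order are stand-ins for the real request object and data layer):

        import re

        def process_form(form):  # form: dict of submitted field name -> value
            """Add each submitted amount onto the matching order row."""
            for name, value in form.items():
                match = re.fullmatch(r"row-(\d+)", name)
                if match and value.strip():
                    order_id = int(match.group(1))       # the id encoded in name="row-23"
                    add_to_order(order_id, int(value))   # e.g. UPDATE ... SET qty = qty + ?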

    Read the article

  • Unique constraint on more than 10 columns

    - by tk
    I have a time-series simulation model which has more than 10 input variables. The number of distinct simulation instances would be more than 1 million, and each simulation instance generates a few output rows every day. To save the simulation results in a relational database, I designed tables like this:

        Table SimulationModel {
            simul_id : integer (primary key),
            input0   : string or numeric,
            input1   : string or numeric,
            ...
        }

        Table SimulationOutput {
            dt       : DateTime (primary key),
            simul_id : integer (primary key),
            output0  : numeric,
            ...
        }

    My question is: is it fine to put a unique constraint on all of the input columns of the SimulationModel table? If it is not a good idea, then what other options do I have to make sure each model is unique?
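    Many engines limit how many columns (or bytes) an index can cover, so a common workaround, sketched here in MySQL flavour with illustrative names, is to put the unique constraint on a hash of the inputs rather than on the inputs themselves:

        ALTER TABLE SimulationModel
            ADD COLUMN input_hash CHAR(32) NOT NULL,
            ADD UNIQUE KEY uq_inputs (input_hash);

        -- maintained by the application (or a trigger), e.g.:
        -- input_hash = MD5(CONCAT_WS('|', input0, input1, ...))

    A 32-character MD5 keeps the unique index narrow no matter how many input columns there are.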

    Read the article

  • Get the best streak with all details of each row

    - by Pritesh Gupta
    ID  user_id  win
    1   1        1
    2   1        1
    3   1        0
    4   1        1
    5   2        1
    6   2        0
    7   2        0
    8   2        1
    9   1        0
    10  1        1
    11  1        1
    12  1        1
    13  1        1
    14  3        1

    I need to get the complete rows for the best streak. I am able to get the best streak number for each user using this post: get consecutive records in mysql. Now I need the data of each row involved in the best streak, i.e. for user_id = 1, I need the complete rows for id = 10, 11, 12, 13 (the id, user_id and win values for the rows in the best streak), using MySQL. Kindly help me on this.
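    On MySQL 8+ (an assumption; older versions need the user-variable approach from the linked question), the standard gaps-and-islands trick recovers the streak's rows themselves. A sketch, with the table name made up:

        WITH runs AS (
            SELECT id, user_id, win,
                   ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY id)
                 - ROW_NUMBER() OVER (PARTITION BY user_id, win ORDER BY id) AS grp
            FROM streaks
        ),
        best AS (
            SELECT user_id, grp
            FROM runs
            WHERE win = 1
            GROUP BY user_id, grp
            ORDER BY COUNT(*) DESC
            LIMIT 1
        )
        SELECT r.id, r.user_id, r.win
        FROM runs r
        JOIN best b ON r.user_id = b.user_id AND r.grp = b.grp
        WHERE r.win = 1;  -- for the sample data, returns ids 10 through 13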

    Read the article

  • Reading a text file in MATLAB line by line

    - by Hossein
    Hi, I have a CSV file. I want to read this file line by line and do some pre-calculations on each row to see, for example, whether that row is useful for me or not, and if yes, save it to a new CSV file. Can someone give me an example? In more detail, this is how my data looks (string, float, float); the numbers are coordinates:

        ABC,51.9358183333333,4.183255
        ABC,51.9353866666667,4.1841
        ABC,51.9351716666667,4.184565
        ABC,51.9343083333333,4.186425
        ABC,51.9343083333333,4.186425
        ABC,51.9340916666667,4.18688333333333

    Basically, I want to save the rows whose distances are 50 or more to a new file; the string field should also be copied. Thanks
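    A line-by-line sketch in MATLAB (file names are placeholders, and it assumes the threshold test applies to the second column):

        fin  = fopen('input.csv', 'r');
        fout = fopen('output.csv', 'w');
        line = fgetl(fin);
        while ischar(line)
            parts = textscan(line, '%s %f %f', 'Delimiter', ',');
            name = parts{1}{1};  lat = parts{2};  lon = parts{3};
            if lat >= 50                          % keep rows with a value of 50 or more
                fprintf(fout, '%s,%.13f,%.13f\n', name, lat, lon);
            end
            line = fgetl(fin);
        end
        fclose(fin);
        fclose(fout);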

    Read the article

  • What is the most efficient way to delete all selected items in a ListViewItem collection

    - by Andrew
    My user is able to select multiple items in a ListView collection that is configured to show details (that is, a list of rows). What I want to do is add a Delete button that will delete all of the selected items from the ListViewItem collection associated with the ListView. The collection of selected items is available in ListView.SelectedItems, but ListView.Items doesn't appear to have a single method that lets me delete the entire range. I have to iterate through the range and delete them one by one, which will potentially modify a collection I'm iterating over. Any hints? Edit: What I'm basically after is the opposite of AddRange().
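    A sketch of one approach: snapshot the selection first (removing items mutates SelectedItems while you iterate), then remove, with BeginUpdate/EndUpdate suppressing a repaint per deletion:

        // Copy the selection out before touching the Items collection.
        var doomed = new ListViewItem[listView1.SelectedItems.Count];
        listView1.SelectedItems.CopyTo(doomed, 0);

        listView1.BeginUpdate();              // one repaint instead of one per removal
        foreach (ListViewItem item in doomed)
            listView1.Items.Remove(item);
        listView1.EndUpdate();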

    Read the article

  • How to implement partial refresh like Facebook likes/comments?

    - by shillong
    We have a Java web application. A summary page displays a list of rows, and for each row a user can vote and add comments. A vote or new comment commits immediately and refreshes the total vote count and comment count. We want to refresh just the current row instead of the whole table, like Facebook does. If needed, we can show the list of data in form format (iterating over the list of data) instead of table format. How can this feature be implemented with JSF?
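    In JSF 2, <f:ajax> gives exactly this per-row partial render; a hedged sketch (the bean, its methods and the ids are made up):

        <ui:repeat value="#{orderBean.rows}" var="row">
            <h:panelGroup id="voteBox" layout="block">
                <h:outputText value="#{row.voteCount} votes, #{row.commentCount} comments"/>
                <h:commandButton value="Vote" action="#{orderBean.vote(row)}">
                    <!-- re-renders only this row's voteBox, not the whole list -->
                    <f:ajax execute="@this" render="voteBox"/>
                </h:commandButton>
            </h:panelGroup>
        </ui:repeat>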

    Read the article

  • Java: writing a matrix to a file using column information

    - by Dmitry
    Hello, everybody! I have a file, of type RandomAccessFile, in which a matrix is stored. The matrix is stored by columns: the i-th row of the file holds the i-th column of the real matrix. For example, if the i-th row in the file is 1 2 3 4, that means the real matrix has (1 2 3 4) transposed as its i-th column. I need to save this matrix the natural way (by rows) in a new file, which I will then open with a FileReader and display in a TextArea. Do you know how to do that? If so, please help =)
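    A sketch of the transpose (assuming whitespace-separated values and a file small enough to load fully; class and file names are made up):

        import java.io.*;
        import java.util.*;

        public class TransposeMatrix {
            public static void main(String[] args) throws IOException {
                // Each line of the input file holds one column of the real matrix.
                List<String[]> columns = new ArrayList<String[]>();
                BufferedReader in = new BufferedReader(new FileReader("matrix_by_columns.txt"));
                String line;
                while ((line = in.readLine()) != null) {
                    columns.add(line.trim().split("\\s+"));
                }
                in.close();

                PrintWriter out = new PrintWriter(new FileWriter("matrix_by_rows.txt"));
                int rows = columns.get(0).length;  // entries per stored column
                for (int r = 0; r < rows; r++) {
                    StringBuilder sb = new StringBuilder();
                    for (String[] col : columns) {
                        sb.append(col[r]).append(' ');
                    }
                    out.println(sb.toString().trim());
                }
                out.close();
            }
        }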

    Read the article

  • insert data into several tables

    - by csetzkorn
    Let us say I have a table (everything is very much simplified):

        create table OriginalData (
            bla char(10) not null
        )

    And I would like to insert its data (set-based!) into two tables which model inheritance:

        create table Statements (
            Id int IDENTITY NOT NULL,
            ProposalDateTime DATETIME null
        )

        create table Items (
            StatementFk INT not null,
            ItemName NVARCHAR(255) null,
            primary key (StatementFk)
        )

    Statements is the parent table and Items is the child table. I have no problem doing this with one row, which involves the use of IDENT_CURRENT, but I have no idea how to do this set-based (i.e. enter several rows into both tables). Thanks. Best wishes, Christian
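    On SQL Server 2008 and later (an assumption, based on the IDENTITY/IDENT_CURRENT syntax), the usual set-based answer is the MERGE ... OUTPUT trick, because OUTPUT on a plain INSERT cannot see source columns, which are needed to pair each new identity with its child row. A sketch:

        DECLARE @map TABLE (StatementId int, bla char(10));

        MERGE INTO Statements AS t
        USING OriginalData AS s ON 1 = 0  -- never matches, so every source row inserts
        WHEN NOT MATCHED THEN
            INSERT (ProposalDateTime) VALUES (GETDATE())
        OUTPUT inserted.Id, s.bla INTO @map (StatementId, bla);

        INSERT INTO Items (StatementFk, ItemName)
        SELECT StatementId, bla FROM @map;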

    Read the article

  • Select records by comparing subsets

    - by devnull
    Given two tables (the rows in each table are distinct):

        1)  x | y        z        2)  x | y        z
            -----        ---          -----        ---
            1 | a        a            1 | a        a
            1 | b        b            1 | b        b
            2 | a                     1 | c
            2 | b                     2 | a
            2 | c                     2 | b
                                      2 | c

    Is there a way to select the values in the x column of the first table for which all the values in the y column (for that x) are found in the z column of the second table? In case 1), the expected result is 1. If c is added to the second table, then the expected result is 2. In case 2), the expected result is no record, since neither of the subsets in the first table matches the subset in the second table. If c is added to the second table, then the expected result is 1, 2. I've tried using EXCEPT and INTERSECT to compare subsets of the first table with the second table, which works fine, but it takes too long on the INTERSECT part and I can't figure out why (the first table has about 10,000 records and the second has around 10). EDIT: I've updated the question to provide an extra scenario.
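    A set-based sketch that avoids per-x EXCEPT/INTERSECT work: left-join y against z and keep only the x groups where every row found a match:

        SELECT t1.x
        FROM t1
        LEFT JOIN t2 ON t2.z = t1.y
        GROUP BY t1.x
        HAVING COUNT(*) = COUNT(t2.z);  -- no unmatched y for this x (case 1 returns 1)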

    Read the article

  • Converting code to a Perl sub, but not sure I'm doing it right

    - by Ben Dauphinee
    I'm working from a question I posted earlier (here) and trying to convert the answer to a sub so I can use it multiple times. Not sure that it's done right, though. Can anyone provide a better or cleaner sub?

        sub search_for_key {
            my ($args) = @_;

            foreach my $row (@{ $args->{search_ary} }) {
                print "$row->[0] : $row->[1]\n";
            }

            my @result =
                map  { $args->{search_ary}[$_][0] }   # get the 0th column...
                grep { $args->{search_in} =~ /$args->{search_ary}[$_][1]/ }   # ...of rows whose name matches
                0 .. $#{ $args->{search_ary} };

            my $thiskey = $result[0];   # first match, or undef if none
            print "\nReturning: " . (defined $thiskey ? $thiskey : '(none)') . "\n";
            return $thiskey;
        }

        search_for_key({
            search_ary => $ref_cam_make,
            search_in  => 'Canon EOS Rebel XSi',
        });

    Read the article

  • k-means clustering in R on very large, sparse matrix?

    - by movingabout
    Hello, I am trying to do some k-means clustering on a very large matrix. The matrix is approximately 500000 rows x 4000 cols yet very sparse (only a couple of "1" values per row). The whole thing does not fit into memory, so I converted it into a sparse ARFF file. But R obviously can't read the sparse ARFF file format. I also have the data as a plain CSV file. Is there any package available in R for loading such sparse matrices efficiently? I'd then use the regular k-means algorithm from the cluster package to proceed. Many thanks
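    The Matrix package handles this scale comfortably. A sketch that assumes the CSV holds row,column pairs of the 1-positions (adjust to the real layout):

        library(Matrix)

        # Assumed layout: two columns, "row,col", one line per 1-entry.
        ij <- read.csv("data.csv", header = FALSE, col.names = c("i", "j"))
        m  <- sparseMatrix(i = ij$i, j = ij$j, x = 1, dims = c(500000, 4000))

        # Note: stats::kmeans() coerces its input to a dense matrix, so the
        # clustering step still needs a sparse-aware method or dimensionality reduction.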

    Read the article

  • Reading a part of an alphanumeric string in SQL

    - by novice
    I have a table with one column, otname. table1.otname contains multiple rows of alphanumeric strings resembling the following data sample:

        11.10.32.12.U.A.F.3.2.21.249.1
        2001.1.1003.8281.A.LE.P.P
        2010.1.1003.8261.A.LE.B.B

    I want to read the fourth field of every string (e.g. 8281 in the second sample) and write a query in Oracle 10g to read its description, which is stored in another table. My dilemma is writing the first part of the query, i.e. choosing the fourth number of every string in a table. My second query will be something like this:

        select description_text
        from table2
        where sncode = 8281  -- fourth part of the second sample string

    Many thanks. novice
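    Oracle 10g's REGEXP_SUBSTR can pull the fourth dot-separated field directly, which collapses both steps into one query (using the table and column names from the question):

        SELECT t2.description_text
        FROM table1 t1
        JOIN table2 t2
          ON t2.sncode = TO_NUMBER(REGEXP_SUBSTR(t1.otname, '[^.]+', 1, 4));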

    Read the article

  • Extracting Data Daily from MySQL to a Local MySQL DB

    - by Sunny Juneja
    I'm doing some experiments locally that require some data from a production MySQL DB that I only have read access to. The schemas are nearly identical, with the exception of the omission of one column. My goal is to write a script that I can run every day that extracts the previous day's data and imports it into my local table. The part I'm most confused about is how to download the data. I've seen names like mysqldump tossed around, but that seems to be a way to replicate the entire database. I would love to avoid using PHP, seeing as I have no experience with it. I've been creating CSVs, but I'm worried about data integrity (what if there is a comma or a \n in a field?) as well as the size of the CSV (there are several hundred thousand rows per day).
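    mysqldump addresses both worries: it emits INSERT statements rather than delimited text (so embedded commas and newlines are escaped for you), and its --where option limits the dump to one day's rows. A sketch, where the host, credentials and timestamp column are all hypothetical; since the local table omits one column, it may be safest to load into a staging table first:

        mysqldump -h prod.example.com -u readonly -p \
          --no-create-info \
          --where="created_at >= CURDATE() - INTERVAL 1 DAY AND created_at < CURDATE()" \
          proddb mytable > yesterday.sql

        mysql -u me -p localdb < yesterday.sql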

    Read the article
