Search Results

Search found 454 results on 19 pages for 'tablename'.

Page 2/19 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • FindBugs - how to solve EQ_COMPARETO_USE_OBJECT_EQUALS

    - by Trick
    I am clueless here... 1: private static class ForeignKeyConstraint implements Comparable<ForeignKeyConstraint> { 2: String tableName; 3: String fkFieldName; 4: 5: public int compareTo(ForeignKeyConstraint o) { 6: if (this.tableName.compareTo(o.tableName) == 0) { 7: return this.fkFieldName.compareTo(o.fkFieldName); 8: } 9: return this.tableName.compareTo(o.tableName); 10: } 11: } In line 6 I get from FindBugs: Bug: net.blabla.SqlFixer$ForeignKeyConstraint defines compareTo(SqlFixer$ForeignKeyConstraint) and uses Object.equals() Link to definition I don't know how to correct this :S

    Read the article

  • Lua - Iterate Through Table With nil Values

    - by Tony Trozzo
    My lua function receives a table that is of the array form: { field1, field2, nil, field3, } No keys, only values. I'm trying to convert this to a pseudo CSV form by grabbing all the fields and concatenating them into a string. Here is my function: function ArenaRewind:ConvertToCSV(tableName) csvRecord = "\n" for i,v in pairs(tableName) do if v == nil then v = "nil" end csvRecord = csvRecord .. "\"" .. v .. "\"" if i ~= #tableName then csvRecord = csvRecord .. "," end end return csvRecord end Not the prettiest code by any means, but it seems to iterate through them and grab all the non-nil values. The other table iteration function is ipairs() which stops as soon as it hits a nil value. Is there any easy way to grab all of these fields including the nil values? The tables are various sizes, so I hope to refrain from accessing each part like an array [i.e., tableName[1] through tableName[4]) and just grabbing the nil values that way. Thanks in advance.

    Read the article

  • problem in jdbc preparestatement

    - by akshay
    I am getting an error when I try to use the following; why is it so? ResultSet findByUsername(String tablename,String field,String value) { pStmt = cn.prepareStatement("SELECT * FROM" + tablename +" WHERE ? = ? "); pStmt.setString(1, tablename); pStmt.setString(2,field); pStmt.setString(3,value); return(pStmt.executeQuery()); } I also tried the following, but it's not working either: ResultSet findByUsername(String tablename,String field,String value) { String sqlQueryString = " SELECT * FROM " + tablename +" WHERE " + filed + "= ? ") cn.prepareStatement(sqlQuery); pStmt.setString(1, value); return(pStmt.executeQuery()); }
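
    Whatever the driver, the underlying rule is the same: placeholders can only stand in for values, never for table or column names, and binding three parameters against two ? markers will also fail. A rough C#/ADO.NET sketch of the pattern (the connection string and the whitelists are assumptions, not part of the question):

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        class IdentifierSafeQuery
        {
            // identifiers cannot be parameterized, so restrict them to a known list
            static readonly HashSet<string> AllowedTables = new HashSet<string> { "Users", "Accounts" };
            static readonly HashSet<string> AllowedColumns = new HashSet<string> { "username", "email" };

            public static DataTable FindBy(string connectionString, string table, string column, string value)
            {
                if (!AllowedTables.Contains(table) || !AllowedColumns.Contains(column))
                    throw new ArgumentException("Unknown table or column name.");

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT * FROM [" + table + "] WHERE [" + column + "] = @value", conn))
                {
                    cmd.Parameters.AddWithValue("@value", value);   // only the value is bound
                    var result = new DataTable();
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        result.Load(reader);
                    return result;
                }
            }
        }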

    Read the article

  • asp.net Dynamic Data Site with own MetaData

    - by loviji
    Hello, I'm looking for information about configuring my own metadata in an ASP.NET Dynamic Data site. For example, I have a table in MS SQL Server with the structure shown below: CREATE TABLE [dbo].[someTable]( [id] [int] NOT NULL, [pname] [nvarchar](20) NULL, [FullName] [nvarchar](50) NULL, [age] [int] NULL) and there are 2 MS SQL tables I've created, sysTables and sysColumns. sysTables: ID sysTableName TableName TableDescription 1 | someTable |Persons |All Data about Persons in system sysColumns: ID TableName sysColumnName ColumnName ColumnDesc ColumnType MUnit 1 |someTable | sometable_pname| Name | Persona Name(ex. John)| nvarchar(20) | null 2 |someTable | sometable_Fullname| Full Name | Persona Name(ex. John Black)| nvarchar(50) | null 3 |someTable | sometable_age| age | Person age| int | null I want the Details/Edit/Insert/List/ListDetails pages to use sysColumns and sysTables as metadata, because, for example, in the Details page fullName is not as readable as Full Name. Any idea, is it possible? Thanks. Updated: In the List page, to display data from sysTables (the metadata table) I've modified <h2 class="DDSubHeader"><%= tableName%></h2>. public string tableName; protected void Page_Init(object sender, EventArgs e) { table = DynamicDataRouteHandler.GetRequestMetaTable(Context); //added by me uqsikDataContext sd=new uqsikDataContext(); tableName = sd.sysTables.Where(n => n.sysTableName == table.DisplayName).FirstOrDefault().TableName; //end GridView1.SetMetaTable(table, table.GetColumnValuesFromRoute(Context)); GridDataSource.EntityTypeName = table.EntityType.AssemblyQualifiedName; if (table.EntityType != table.RootEntityType) { GridQueryExtender.Expressions.Add(new OfTypeExpression(table.EntityType)); } } So, what about sysColumns? How can I get data from my sysColumns table?
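
    The page already pulls the friendly table name out of sysTables in Page_Init; the column labels can be pulled from sysColumns the same way and then applied to the field templates or column headers. A minimal sketch, reusing the question's own LINQ to SQL context and property names (the dictionary shape is just an assumption for illustration):

        // map sysColumnName -> friendly ColumnName for the current table
        protected Dictionary<string, string> LoadColumnLabels(string sysTableName)
        {
            var sd = new uqsikDataContext();
            return sd.sysColumns
                     .Where(c => c.TableName == sysTableName)
                     .ToDictionary(c => c.sysColumnName, c => c.ColumnName);
        }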

    Read the article

  • SQLiteDataAdapter Update method returning 0 C#

    - by Lirik
    I loaded 83 rows from my CSV file, but when I try to update the SQLite database I get 0 rows... I can't figure out what I'm doing wrong. The program outputs: Num rows loaded is 83 Num rows updated is 0 The source code is: public void InsertData(String csvFileName, String tableName) { String dir = Path.GetDirectoryName(csvFileName); String name = Path.GetFileName(csvFileName); using (OleDbConnection conn = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dir + @";Extended Properties=""Text;HDR=Yes;FMT=Delimited\""")) { conn.Open(); using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * FROM " + name, conn)) { QuoteDataSet ds = new QuoteDataSet(); adapter.Fill(ds, tableName); Console.WriteLine("Num rows loaded is " + ds.Tags.Rows.Count); InsertData(ds, tableName); } } } public void InsertData(QuoteDataSet data, String tableName) { using (SQLiteConnection conn = new SQLiteConnection(_connectionString)) { using (SQLiteDataAdapter sqliteAdapter = new SQLiteDataAdapter("SELECT * FROM " + tableName, conn)) { using (new SQLiteCommandBuilder(sqliteAdapter)) { conn.Open(); Console.WriteLine("Num rows updated is " + sqliteAdapter.Update(data, tableName)); } } } } Any hints on why it's not updating the correct number of rows?
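
    A likely explanation for the zero: Fill() leaves every loaded row in the Unchanged state, so when the SQLite adapter's Update() looks for pending changes there is nothing to send. A small sketch of two common fixes, reusing the question's variable names (either one should do):

        // Option 1: keep the CSV rows in the Added state by not accepting changes during Fill
        adapter.AcceptChangesDuringFill = false;
        adapter.Fill(ds, tableName);

        // Option 2: flip already-loaded rows back to Added before handing them to Update()
        foreach (DataRow row in ds.Tables[tableName].Rows)
            row.SetAdded();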

    Read the article

  • Query to MySQL from c# returns System.Byte[]

    - by karthik
    I am using the SP below to return the value of the generated insert statement, and it works fine when executed in the query browser. When I try to get the value from C#, it gives me "System.Byte[]" as the return value. When I try to get the value from the MySQL query browser, it gives me the return value as: 'insert into admindb.accounts values("54321","2","karthik2","karthik2","1");' I guess the problem is with the single quotes of the returned value. Is that so? DELIMITER $$ DROP PROCEDURE IF EXISTS `admindb`.`InsGen` $$ CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen`( in_db varchar(20), in_table varchar(20), in_ColumnName varchar(20), in_ColumnValue varchar(20) ) BEGIN declare Whrs varchar(500); declare Sels varchar(500); declare Inserts varchar(2000); declare tablename varchar(20); declare ColName varchar(20); set tablename=in_table; # Comma separated column names - used for Select select group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')')) INTO @Sels from information_schema.columns where table_schema=in_db and table_name=tablename; # Comma separated column names - used for Group By select group_concat('`',column_name,'`') INTO @Whrs from information_schema.columns where table_schema=in_db and table_name=tablename; #Main Select Statement for fetching comma separated table values set @Inserts=concat("select concat('insert into ", in_db,".",tablename," values(',concat_ws(',',",@Sels,"),');') as MyColumn from ", in_db,".",tablename, " where ", in_ColumnName, " = " , in_ColumnValue, " group by ",@Whrs, ";"); PREPARE Inserts FROM @Inserts; EXECUTE Inserts; END $$ DELIMITER ;
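
    The single quotes are unlikely to be the issue; Connector/NET tends to hand GROUP_CONCAT/prepared-statement output back as a BLOB, which .NET surfaces as byte[] (printing it gives the type name "System.Byte[]"). A defensive conversion sketch on the C# side (the reader/command names are assumptions):

        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                object raw = reader[0];
                string insertStatement = raw is byte[] bytes
                    ? System.Text.Encoding.UTF8.GetString(bytes)   // BLOB -> string
                    : Convert.ToString(raw);
                Console.WriteLine(insertStatement);
            }
        }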

    Read the article

  • How can I synchronise two datatables and update the target in the database?

    - by Craig
    I am trying to synchronise two tables between two databases. I thought that the best way to do this would be to use the DataTable.Merge method. This seems to pick up the changes, but nothing ever gets committed to the database. So far I have: string tableName = "[Table_1]"; string sql = "select * from " + tableName; using (SqlConnection sourceConn = new SqlConnection(ConfigurationManager.ConnectionStrings["source"].ConnectionString)) { SqlDataAdapter sourceAdapter = new SqlDataAdapter(); sourceAdapter.SelectCommand = new SqlCommand(sql, sourceConn); sourceConn.Open(); DataSet sourceDs = new DataSet(); sourceAdapter.Fill(sourceDs); using (SqlConnection targetConn = new SqlConnection(ConfigurationManager.ConnectionStrings["target"].ConnectionString)) { SqlDataAdapter targetAdapter = new SqlDataAdapter(); targetAdapter.SelectCommand = new SqlCommand(sql, targetConn); SqlCommandBuilder builder = new SqlCommandBuilder(targetAdapter); targetAdapter.InsertCommand = builder.GetInsertCommand(); targetAdapter.UpdateCommand = builder.GetUpdateCommand(); targetAdapter.DeleteCommand = builder.GetDeleteCommand(); targetConn.Open(); DataSet targetDs = new DataSet(); targetAdapter.Fill(targetDs); targetDs.Tables[0].TableName = tableName; sourceDs.Tables[0].TableName = tableName; targetDs.Tables[0].Merge(sourceDs.Tables[0]); targetAdapter.Update(targetDs.Tables[0]); } } At the present time, there is one row in the source that is not in the target. This row is never transferred. I have also tried it with an empty target, and nothing is transferred.
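
    DataAdapter.Fill() calls AcceptChanges on the rows it loads, and Merge() preserves that Unchanged state, so after the merge the target adapter sees no pending changes and Update() sends nothing. One workaround, sketched below under the assumption that the first column is the primary key, is to copy the missing rows across explicitly; rows added this way carry RowState.Added, which the command builder turns into INSERTs.

        DataTable targetTable = targetDs.Tables[0];
        targetTable.PrimaryKey = new[] { targetTable.Columns[0] };

        foreach (DataRow sourceRow in sourceDs.Tables[0].Rows)
        {
            if (targetTable.Rows.Find(sourceRow[0]) == null)        // not in the target yet
            {
                targetTable.Rows.Add(sourceRow.ItemArray);          // arrives as RowState.Added
            }
        }
        targetAdapter.Update(targetTable);                          // builder emits the INSERTs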

    Read the article

  • MySQL db Audit Trail Trigger

    - by Natkeeran
    I need to track changes (audit trail) in certain tables in a MySql Db. I am trying to implement the solution suggested here. I have an AuditLog Table with the following columns: AuditLogID, TableName, RowPK, FieldName, OldValue, NewValue, TimeStamp. The mysql stored procedure is the following (this executes fine, and creates the procedure): The call to the procedure such as: CALL addLogTrigger('ProductTypes', 'ProductTypeID'); executes, but does not create any triggers (see the image). SHOW TRIGGERS returns empty set. Please let me know what could be the issue, or an alternate way to implement this. DROP PROCEDURE IF EXISTS addLogTrigger; DELIMITER $ CREATE PROCEDURE addLogTrigger(IN tableName VARCHAR(255), IN pkField VARCHAR(255)) BEGIN SELECT CONCAT( 'DELIMITER $\n', 'CREATE TRIGGER ', tableName, '_AU AFTER UPDATE ON ', tableName, ' FOR EACH ROW BEGIN ', GROUP_CONCAT( CONCAT( 'IF NOT( OLD.', column_name, ' <=> NEW.', column_name, ') THEN INSERT INTO AuditLog (', 'TableName, ', 'RowPK, ', 'FieldName, ', 'OldValue, ', 'NewValue' ') VALUES ( ''', table_name, ''', NEW.', pkField, ', ''', column_name, ''', OLD.', column_name, ', NEW.', column_name, '); END IF;' ) SEPARATOR ' ' ), ' END;$' ) FROM information_schema.columns WHERE table_schema = database() AND table_name = tableName; END$ DELIMITER ;
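
    MySQL does not allow CREATE TRIGGER to be run through a prepared statement, and inside a stored procedure dynamic SQL can only go through PREPARE/EXECUTE, so the procedure above can do no more than SELECT the generated DDL text; nothing on the server side ever creates the trigger. One alternative is to fetch that text and execute it from the application. A hedged C# sketch with Connector/NET (it assumes the procedure is adjusted to emit a plain CREATE TRIGGER statement without the DELIMITER lines, which are a mysql-CLI construct rather than SQL, and the connection string is a placeholder):

        // requires: using MySql.Data.MySqlClient;
        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            string triggerDdl;
            using (var call = new MySqlCommand("CALL addLogTrigger(@t, @pk)", conn))
            {
                call.Parameters.AddWithValue("@t", "ProductTypes");
                call.Parameters.AddWithValue("@pk", "ProductTypeID");
                triggerDdl = Convert.ToString(call.ExecuteScalar());   // the SELECTed DDL text
            }
            using (var create = new MySqlCommand(triggerDdl, conn))
                create.ExecuteNonQuery();                              // actually creates the trigger
        }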

    Read the article

  • Dataset holds a table called "Table", not the table I pass in?

    - by dotnetdev
    Hi, I have the code below: string SQL = "select * from " + TableName; using (DS = new DataSet()) using (SqlDataAdapter adapter = new SqlDataAdapter()) using (SqlConnection sqlconn = new SqlConnection(connectionStringBuilder.ToString())) using (SqlCommand objCommand = new SqlCommand(SQL, sqlconn)) { sqlconn.Open(); adapter.SelectCommand = objCommand; adapter.Fill(DS); } System.Windows.Forms.MessageBox.Show(DS.Tables[0].TableName); return DS; However, every time I run this code, the dataset (DS) is filled with one table called "Table". It does not represent the table name I pass in as the parameter TableName and this parameter does not get mutated so I don't know where the name Table comes from. I'd expect the table to be the same as the tableName parameter I pass in? Any idea why this is not so? EDIT: Important fact: This code needs to return a dataset because I use the dataRelation object in another method, which is dependent on this, and without using a dataset, that method throws an exception. The code for that method is: DataRelation PartToIntersection = new DataRelation("XYZ", this.LoadDataToTable(tableName).Tables[tableName].Columns[0], // Treating the PartStat table as the parent - .N this.LoadDataToTable("PartProducts").Tables["PartProducts"].Columns[0]); // 1 // PartsProducts (intersection) to ProductMaterial DataRelation ProductMaterialToIntersection = new DataRelation("", ds.Tables["ProductMaterial"].Columns[0], ds.Tables["PartsProducts"].Columns[1]); Thanks
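
    Fill() has no idea which table the SELECT came from, so it always names the first result "Table" ("Table1", "Table2", ... for further result sets). Either pass the name to Fill or add a table mapping; a short sketch against the question's variables:

        adapter.Fill(DS, TableName);                 // fills DS.Tables[TableName]

        // or, equivalently, map the default name before filling:
        adapter.TableMappings.Add("Table", TableName);
        adapter.Fill(DS);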

    Read the article

  • How to use AS keyword in MySql ?

    - by karthik
    In the below SP i will be getting result in One single column. How can i name the column of the output ? DELIMITER $$ DROP PROCEDURE IF EXISTS `InsGen` $$ CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen` ( in_db varchar(20), in_table varchar(20), in_ColumnName varchar(20), in_ColumnValue varchar(20) ) BEGIN declare Whrs varchar(500); declare Sels varchar(500); declare Inserts varchar(2000); declare tablename varchar(20); declare ColName varchar(20); set tablename=in_table; # Comma separated column names - used for Select select group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')')) INTO @Sels from information_schema.columns where table_schema=in_db and table_name=tablename; # Comma separated column names - used for Group By select group_concat('`',column_name,'`') INTO @Whrs from information_schema.columns where table_schema=in_db and table_name=tablename; #Main Select Statement for fetching comma separated table values set @Inserts=concat("select concat('insert into ", in_db,".",tablename," values(',concat_ws(',',",@Sels,"),');') from ", in_db,".",tablename, " where ", in_ColumnName, " = " , in_ColumnValue, " group by ",@Whrs, ";"); PREPARE Inserts FROM @Inserts; EXECUTE Inserts; END $$ DELIMITER ;

    Read the article

  • if statement inside array : codeigniter

    - by ahmad
    Hello, I have this function to edit all fields that come from the form, and it works fine: function editRow($tableName,$id) { $fieldsData = $this->db->field_data($tableName); $data = array(); foreach ($fieldsData as $key => $field) { $data[ $field->name ] = $this->input->post($field->name); } $this->db->where('id', $id); $this->db->update($tableName, $data); } Now I want to add a condition for the password field: if the field is empty, keep the old password. I did something like this: function editRow($tableName,$id) { $fieldsData = $this->db->field_data($tableName); $data = array(); foreach ($fieldsData as $key => $field) { if ($data[ $field->name ] == 'password' && $this->input->post('password') == '' ) { $data[ 'password' ] => $this->input->post('hide_password'), //'password' => $this->input->post('hide_password'), } else { $data[ $field->name ] => $this->input->post($field->name) } } $this->db->where('id', $id); $this->db->update($tableName, $data); } but I get an error (Parse error: syntax error, unexpected T_DOUBLE_ARROW in ...). The HTML is something like this: <input type="text" name="password" value=""> <input type="hidden" name="hide_password" value="$row->$password" /> Any help? Thanks.

    Read the article

  • Why won't C# accept a (seemingly) perfectly good Sql Server CE Query?

    - by VoidKing
    By perfectly good sql query, I mean to say that, inside WebMatrix, if I execute the following query, it works to perfection: SELECT page AS location, (len(page) - len(replace(UPPER(page), UPPER('o'), ''))) / len('o') AS occurences, 'pageSettings' AS tableName FROM PageSettings WHERE page LIKE '%o%' UNION SELECT pageTitle AS location, (len(pageTitle) - len(replace(UPPER(pageTitle), UPPER('o'), ''))) / len('o') AS occurences, 'ExternalSecondaryPages' AS tableName FROM ExternalSecondaryPages WHERE pageTitle LIKE '%o%' UNION SELECT eventTitle AS location, (len(eventTitle) - len(replace(UPPER(eventTitle), UPPER('o'), ''))) / len('o') AS occurences, 'MainStreetEvents' AS tableName FROM MainStreetEvents WHERE eventTitle LIKE '%o%' Here i am using 'o' as a static search string to search upon. No problem, but not exeactly very dynamic. Now, when I write this query as a string in C# and as I think it should be (and even as I have done before) I get a server-side error indicating that the string was not in the correct format. Here is a pic of that error: And (although I am only testing the output, should I get it to quit erring), here is the actual C# (i.e., the .cshtml) page that queries the database: @{ Layout = "~/Layouts/_secondaryMainLayout.cshtml"; var db = Database.Open("Content"); string searchText = Request.Unvalidated["searchText"]; string selectQueryString = "SELECT page AS location, (len(page) - len(replace(UPPER(page), UPPER(@0), ''))) / len(@0) AS occurences, 'pageSettings' AS tableName FROM PageSettings WHERE page LIKE '%' + @0 + '%' "; selectQueryString += "UNION "; selectQueryString += "SELECT pageTitle AS location, (len(pageTitle) - len(replace(UPPER(pageTitle), UPPER(@0), ''))) / len(@0) AS occurences, 'ExternalSecondaryPages' AS tableName FROM ExternalSecondaryPages WHERE pageTitle LIKE '%' + @0 + '%' "; selectQueryString += "UNION "; selectQueryString += "SELECT eventTitle AS location, (len(eventTitle) - len(replace(UPPER(eventTitle), UPPER(@0), ''))) / len(@0) AS occurences, 'MainStreetEvents' AS tableName FROM MainStreetEvents WHERE eventTitle LIKE '%' + @0 + '%'"; @:beginning <br/> foreach (var row in db.Query(selectQueryString, searchText)) { @:entry @:@row.location &nbsp; @:@row.occurences &nbsp; @:@row.tableName <br/> } } Since it is erring on the foreach (var row in db.Query(selectQueryString, searchText)) line, that heavily suggests that something is wrong with my query, however, everything seems right to me about the syntax here and it even executes to perfection if I query the database (mind you, un-parameterized) directly. Logically, I would assume that I have erred somewhere with the syntax involved in parameterizing this query, however, my double and triple checking (as well as, my past experience at doing this) insists that everything looks fine here. Have I messed up the syntax involved with parameterizing this query, or is something else at play here that I am overlooking? I know I can tell you, for sure, as it has been previously tested, that the value I am getting from the query string is, indeed, what I would expect it to be, but as there really isn't much else on the .cshtml page yet, that is about all I can tell you.
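
    Whatever the CE provider is objecting to, the query gets easier to reason about (and to parameterize) if the '%' wildcards are folded into the parameter value in C# instead of being concatenated around @0 inside every SELECT. A hedged sketch, with the extra parameter numbering as an assumption:

        // in the SQL, use "... WHERE page LIKE @1 ..." and keep @0 for the len/replace parts
        var pattern = "%" + searchText + "%";
        foreach (var row in db.Query(selectQueryString, searchText, pattern))
        {
            // @0 = searchText, @1 = pattern
        }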

    Read the article

  • SQL Server – Undelete a Table and Restore a Single Table from Backup

    - by Mladen Prajdic
    This post is part of the monthly community event called T-SQL Tuesday started by Adam Machanic (blog|twitter) and hosted by someone else each month. This month the host is Sankar Reddy (blog|twitter) and the topic is Misconceptions in SQL Server. You can follow posts for this theme on Twitter by looking at the #TSQL2sDay hashtag. Let me start by saying: this code is a crazy hack that is never to be used unless you really, really have to. Really! And I don’t think there’s a time when you would really have to use it for real. Because it’s a hack there are a number of things that can go wrong, so play with it knowing that. I’ve managed to totally corrupt one database. :) Oh… and for those saying: yeah yeah.. you have a single table in a file group and you’re restoring that, I say “nay nay” to you. As we all know, SQL Server can’t do single table restores from backup. This is kind of an obvious thing due to relational integrity (RI) concerns. Since we have to maintain that, we have to restore all tables represented in an RI graph. For this exercise I say BAH! to those concerns. Note that this method “works” only for simple tables that don’t have LOB and off-row data. The code can be expanded to include those, but I’ve tried to leave things “simple”. Note that for this to work our table needs to be relatively static data-wise. This doesn’t work for an OLTP table. Products are a perfect example of static data. They don’t change much between backups, pretty much everything depends on them, and their table is one of those tables that are relatively easy to accidentally delete everything from. This only works if the database is in Full or Bulk-Logged recovery mode, for tables where the contents have been deleted or truncated but NOT when a table was dropped. Everything we’ll talk about has to be done before the data pages are reused for other purposes. After deletion or truncation the pages are marked as reusable, so you have to act fast. The best thing probably is to put the database into single user mode ASAP while you’re performing this procedure and return it to multi user after you’re done. How do we do it? We will be using an undocumented but known DBCC command, DBCC PAGE, an undocumented function, sys.fn_dblog, and a little-known DATABASE RESTORE PAGE option. All tests will be on a copy of the Production.Product table in the AdventureWorks database called Production.Product1, because the original table has FK constraints that prevent us from truncating it for testing. -- create a duplicate table. This doesn't preserve indexes! SELECT * INTO AdventureWorks.Production.Product1 FROM AdventureWorks.Production.Product   After we run this code, take a full backup to perform further testing.   First let’s see what the difference between DELETE and TRUNCATE is when it comes to logging. With DELETE every row deletion is logged in the transaction log. With TRUNCATE only whole data page deallocations are logged in the transaction log. Getting deleted data pages is simple. All we have to look for is the row delete entry in the sys.fn_dblog output. But getting data pages that were truncated from the transaction log presents a bit of an interesting problem. I will not go into the depths of IAM (Index Allocation Map) and PFS (Page Free Space) pages, but suffice it to say that every IAM page has intervals that tell us which data pages are allocated for a table and which aren’t.
    If we deep dive into the sys.fn_dblog output we can see that once you truncate a table all the pages in all the intervals are deallocated, and this is shown in the PFS page transaction log entry as a deallocation of pages. For every 8 pages in the same extent there is one PFS page row in the transaction log. This row holds information about all 8 pages in CSV format, which means we can get to this data with some parsing. A great help for parsing this stuff is Peter Debetta’s handy function dbo.HexStrToVarBin, which converts a hexadecimal string into a varbinary value that can be easily converted to an integer, thus giving us a readable page number. The shortened (columns removed) sys.fn_dblog output for a PFS page with CSV data for 1 extent (8 data pages) looks like this: -- [Page ID] is displayed in hex format. -- To convert it to readable int we'll use dbo.HexStrToVarBin function found at -- http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx -- This function must be installed in the master database SELECT Context, AllocUnitName, [Page ID], Description FROM sys.fn_dblog(NULL, NULL) WHERE [Current LSN] = '00000031:00000a46:007d' The pages at the end marked with 0x00--> are pages that are allocated in the extent but are not part of a table. We can inspect the raw content of each data page with a DBCC PAGE command: -- we need this trace flag to redirect output to the query window. DBCC TRACEON (3604); -- WITH TABLERESULTS gives us data in table format instead of message format -- we use format option 3 because it's the easiest to read and manipulate further on DBCC PAGE (AdventureWorks, 1, 613, 3) WITH TABLERESULTS   Since the DBCC PAGE output can be quite extensive I won’t put it here. You can see an example of it in the link at the beginning of this section. Getting deleted data back When we run a delete statement every row to be deleted is marked as a ghost record. A background process periodically cleans up those rows. A huge misconception is that the data is actually removed. It’s not. Only the pointers to the rows are removed, while the data itself is still on the data page. We just can’t access it with normal means. To get those pointers back we need to restore every deleted page using the RESTORE PAGE option mentioned above. This restore must be done from a full backup, followed by any differential and log backups that you may have. This is necessary to bring the pages up to the same point in time as the rest of the data. However, the restore doesn’t magically connect the restored page back to the original table. It simply replaces the current page with the one from the backup. After the restore we use DBCC PAGE to read data directly from all data pages and insert that data into a temporary table. To finish the RESTORE PAGE procedure we finally have to take a tail log backup (a simple backup of the transaction log) and restore it back. We can now insert data from the temporary table into our original table by hand. Getting truncated data back When we run a truncate, the truncated data pages aren’t touched at all. Even the pointers to rows stay unchanged. Because of this, getting data back from a truncated table is simple: we just have to find out which pages belonged to our table and use DBCC PAGE to read data off of them. No restore is necessary. It turns out that the problem we had with finding the data pages is alleviated by not having to do a RESTORE PAGE procedure. Stop stalling… show me The Code!
This is the code for getting back deleted and truncated data back. It’s commented in all the right places so don’t be afraid to take a closer look. Make sure you have a full backup before trying this out. Also I suggest that the last step of backing and restoring the tail log is performed by hand. USE masterGOIF OBJECT_ID('dbo.HexStrToVarBin') IS NULL RAISERROR ('No dbo.HexStrToVarBin installed. Go to http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx and install it in master database' , 18, 1) SET NOCOUNT ONBEGIN TRY DECLARE @dbName VARCHAR(1000), @schemaName VARCHAR(1000), @tableName VARCHAR(1000), @fullBackupName VARCHAR(1000), @undeletedTableName VARCHAR(1000), @sql VARCHAR(MAX), @tableWasTruncated bit; /* THE FIRST LINE ARE OUR INPUT PARAMETERS In this case we're trying to recover Production.Product1 table in AdventureWorks database. My full backup of AdventureWorks database is at e:\AW.bak */ SELECT @dbName = 'AdventureWorks', @schemaName = 'Production', @tableName = 'Product1', @fullBackupName = 'e:\AW.bak', @undeletedTableName = '##' + @tableName + '_Undeleted', @tableWasTruncated = 0, -- copy the structure from original table to a temp table that we'll fill with restored data @sql = 'IF OBJECT_ID(''tempdb..' + @undeletedTableName + ''') IS NOT NULL DROP TABLE ' + @undeletedTableName + ' SELECT *' + ' INTO ' + @undeletedTableName + ' FROM [' + @dbName + '].[' + @schemaName + '].[' + @tableName + ']' + ' WHERE 1 = 0' EXEC (@sql) IF OBJECT_ID('tempdb..#PagesToRestore') IS NOT NULL DROP TABLE #PagesToRestore /* FIND DATA PAGES WE NEED TO RESTORE*/ CREATE TABLE #PagesToRestore ([ID] INT IDENTITY(1,1), [FileID] INT, [PageID] INT, [SQLtoExec] VARCHAR(1000)) -- DBCC PACE statement to run later RAISERROR ('Looking for deleted pages...', 10, 1) -- use T-LOG direct read to get deleted data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) EXEC('USE [' + @dbName + '];SELECT FileID, PageID, ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), ' + 'CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageIDFROM sys.fn_dblog(NULL, NULL)WHERE AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'' ' + 'AND Context IN (''LCX_MARK_AS_GHOST'', ''LCX_HEAP'') AND Operation in (''LOP_DELETE_ROWS''))t');SELECT *FROM #PagesToRestore -- if upper EXEC returns 0 rows it means the table was truncated so find truncated pages IF (SELECT COUNT(*) FROM #PagesToRestore) = 0 BEGIN RAISERROR ('No deleted pages found. Looking for truncated pages...', 10, 1) -- use T-LOG read to get truncated data pages INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec]) -- dark magic happens here -- because truncation simply deallocates pages we have to find out which pages were deallocated. -- we can find this out by looking at the PFS page row's Description column. -- for every deallocated extent the Description has a CSV of 8 pages in that extent. -- then it's just a matter of parsing it. 
-- we also remove the pages in the extent that weren't allocated to the table itself -- marked with '0x00-->00' EXEC ('USE [' + @dbName + '];DECLARE @truncatedPages TABLE(DeallocatedPages VARCHAR(8000), IsMultipleDeallocs BIT);INSERT INTO @truncatedPagesSELECT REPLACE(REPLACE(Description, ''Deallocated '', ''Y''), ''0x00-->00 '', ''N'') + '';'' AS DeallocatedPages, CHARINDEX('';'', Description) AS IsMultipleDeallocsFROM (SELECT DISTINCT LEFT([Page ID], 4) AS FileID, CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID, DescriptionFROM sys.fn_dblog(NULL, NULL)WHERE Context IN (''LCX_PFS'') AND Description LIKE ''Deallocated%'' AND AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'') t;SELECT FileID, PageID , ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' as SQLToExecFROM (SELECT LEFT(PageAndFile, 1) as WasPageAllocatedToTable , SUBSTRING(PageAndFile, 2, CHARINDEX('':'', PageAndFile) - 2 ) as FileID , CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING(PageAndFile, CHARINDEX('':'', PageAndFile) + 1, LEN(PageAndFile))))) as PageIDFROM ( SELECT SUBSTRING(DeallocatedPages, delimPosStart, delimPosEnd - delimPosStart) as PageAndFile, IsMultipleDeallocs FROM ( SELECT *, CHARINDEX('';'', DeallocatedPages)*(N-1) + 1 AS delimPosStart, CHARINDEX('';'', DeallocatedPages)*N AS delimPosEnd FROM @truncatedPages t1 CROSS APPLY (SELECT TOP (case when t1.IsMultipleDeallocs = 1 then 8 else 1 end) ROW_NUMBER() OVER(ORDER BY number) as N FROM master..spt_values) t2 )t)t)tWHERE WasPageAllocatedToTable = ''Y''') SELECT @tableWasTruncated = 1 END DECLARE @lastID INT, @pagesCount INT SELECT @lastID = 1, @pagesCount = COUNT(*) FROM #PagesToRestore SELECT @sql = 'Number of pages to restore: ' + CONVERT(VARCHAR(10), @pagesCount) IF @pagesCount = 0 RAISERROR ('No data pages to restore.', 18, 1) ELSE RAISERROR (@sql, 10, 1) -- If the table was truncated we'll read the data directly from data pages without restoring from backup IF @tableWasTruncated = 0 BEGIN -- RESTORE DATA PAGES FROM FULL BACKUP IN BATCHES OF 200 WHILE @lastID <= @pagesCount BEGIN -- create CSV string of pages to restore SELECT @sql = STUFF((SELECT ',' + CONVERT(VARCHAR(100), FileID) + ':' + CONVERT(VARCHAR(100), PageID) FROM #PagesToRestore WHERE ID BETWEEN @lastID AND @lastID + 200 ORDER BY ID FOR XML PATH('')), 1, 1, '') SELECT @sql = 'RESTORE DATABASE [' + @dbName + '] PAGE = ''' + @sql + ''' FROM DISK = ''' + @fullBackupName + '''' RAISERROR ('Starting RESTORE command:' , 10, 1) WITH NOWAIT; RAISERROR (@sql , 10, 1) WITH NOWAIT; EXEC(@sql); RAISERROR ('Restore DONE' , 10, 1) WITH NOWAIT; SELECT @lastID = @lastID + 200 END /* If you have any differential or transaction log backups you should restore them here to bring the previously restored data pages up to date */ END DECLARE @dbccSinglePage TABLE ( [ParentObject] NVARCHAR(500), [Object] NVARCHAR(500), [Field] NVARCHAR(500), [VALUE] NVARCHAR(MAX) ) DECLARE @cols NVARCHAR(MAX), @paramDefinition NVARCHAR(500), @SQLtoExec VARCHAR(1000), @FileID VARCHAR(100), @PageID VARCHAR(100), @i INT = 1 -- Get deleted table columns from information_schema view -- Need sp_executeSQL because database name can't be passed in as variable SELECT @cols = 'select @cols = STUFF((SELECT '', ['' + COLUMN_NAME + '']''FROM ' + @dbName + '.INFORMATION_SCHEMA.COLUMNSWHERE TABLE_NAME = ''' + @tableName + ''' AND TABLE_SCHEMA = ''' + @schemaName + '''ORDER BY ORDINAL_POSITIONFOR XML 
PATH('''')), 1, 2, '''')', @paramDefinition = N'@cols nvarchar(max) OUTPUT' EXECUTE sp_executesql @cols, @paramDefinition, @cols = @cols OUTPUT -- Loop through all the restored data pages, -- read data from them and insert them into temp table -- which you can then insert into the orignial deleted table DECLARE dbccPageCursor CURSOR GLOBAL FORWARD_ONLY FOR SELECT [FileID], [PageID], [SQLtoExec] FROM #PagesToRestore ORDER BY [FileID], [PageID] OPEN dbccPageCursor; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; WHILE @@FETCH_STATUS = 0 BEGIN RAISERROR ('---------------------------------------------', 10, 1) WITH NOWAIT; SELECT @sql = 'Loop iteration: ' + CONVERT(VARCHAR(10), @i); RAISERROR (@sql, 10, 1) WITH NOWAIT; SELECT @sql = 'Running: ' + @SQLtoExec RAISERROR (@sql, 10, 1) WITH NOWAIT; -- if something goes wrong with DBCC execution or data gathering, skip it but print error BEGIN TRY INSERT INTO @dbccSinglePage EXEC (@SQLtoExec) -- make the data insert magic happen here IF (SELECT CONVERT(BIGINT, [VALUE]) FROM @dbccSinglePage WHERE [Field] LIKE '%Metadata: ObjectId%') = OBJECT_ID('['+@dbName+'].['+@schemaName +'].['+@tableName+']') BEGIN DELETE @dbccSinglePage WHERE NOT ([ParentObject] LIKE 'Slot % Offset %' AND [Object] LIKE 'Slot % Column %') SELECT @sql = 'USE tempdb; ' + 'IF (OBJECTPROPERTY(object_id(''' + @undeletedTableName + '''), ''TableHasIdentity'') = 1) ' + 'SET IDENTITY_INSERT ' + @undeletedTableName + ' ON; ' + 'INSERT INTO ' + @undeletedTableName + '(' + @cols + ') ' + STUFF((SELECT ' UNION ALL SELECT ' + STUFF((SELECT ', ' + CASE WHEN VALUE = '[NULL]' THEN 'NULL' ELSE '''' + [VALUE] + '''' END FROM ( -- the unicorn help here to correctly set ordinal numbers of columns in a data page -- it's turning STRING order into INT order (1,10,11,2,21 into 1,2,..10,11...21) SELECT [ParentObject], [Object], Field, VALUE, RIGHT('00000' + O1, 6) AS ParentObjectOrder, RIGHT('00000' + REVERSE(LEFT(O2, CHARINDEX(' ', O2)-1)), 6) AS ObjectOrder FROM ( SELECT [ParentObject], [Object], Field, VALUE, REPLACE(LEFT([ParentObject], CHARINDEX('Offset', [ParentObject])-1), 'Slot ', '') AS O1, REVERSE(LEFT([Object], CHARINDEX('Offset ', [Object])-2)) AS O2 FROM @dbccSinglePage WHERE t.ParentObject = ParentObject )t)t ORDER BY ParentObjectOrder, ObjectOrder FOR XML PATH('')), 1, 2, '') FROM @dbccSinglePage t GROUP BY ParentObject FOR XML PATH('') ), 1, 11, '') + ';' RAISERROR (@sql, 10, 1) WITH NOWAIT; EXEC (@sql) END END TRY BEGIN CATCH SELECT @sql = 'ERROR!!!' 
+ CHAR(10) + CHAR(13) + 'ErrorNumber: ' + ERROR_NUMBER() + '; ErrorMessage' + ERROR_MESSAGE() + CHAR(10) + CHAR(13) + 'FileID: ' + @FileID + '; PageID: ' + @PageID RAISERROR (@sql, 10, 1) WITH NOWAIT; END CATCH DELETE @dbccSinglePage SELECT @sql = 'Pages left to process: ' + CONVERT(VARCHAR(10), @pagesCount - @i) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13), @i = @i+1 RAISERROR (@sql, 10, 1) WITH NOWAIT; FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec; END CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; EXEC ('SELECT ''' + @undeletedTableName + ''' as TableName; SELECT * FROM ' + @undeletedTableName)END TRYBEGIN CATCH SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage IF CURSOR_STATUS ('global', 'dbccPageCursor') >= 0 BEGIN CLOSE dbccPageCursor; DEALLOCATE dbccPageCursor; ENDEND CATCH-- if the table was deleted we need to finish the restore page sequenceIF @tableWasTruncated = 0BEGIN -- take a log tail backup and then restore it to complete page restore process DECLARE @currentDate VARCHAR(30) SELECT @currentDate = CONVERT(VARCHAR(30), GETDATE(), 112) RAISERROR ('Starting Log Tail backup to c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail backup done.', 10, 1) WITH NOWAIT; RAISERROR ('Starting Log Tail restore from c:\Temp ...', 10, 1) WITH NOWAIT; PRINT ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') EXEC ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''') RAISERROR ('Log Tail restore done.', 10, 1) WITH NOWAIT;END-- The last step is manual. Insert data from our temporary table to the original deleted table The misconception here is that you can do a single table restore properly in SQL Server. You can't. But with little experimentation you can get pretty close to it. One way to possible remove a dependency on a backup to retrieve deleted pages is to quickly run a similar script to the upper one that gets data directly from data pages while the rows are still marked as ghost records. It could be done if we could beat the ghost record cleanup task.

    Read the article

  • sybase bcp import fails

    - by chromeplatedbanana
    We're trying to export some tables from our production database to our test database using bcp. The bcp export seems to work fine, but the import always fails with a data type error (see below). We tested on our test database exporting the table content to a file, then importing it in again immediately, but that failed too. e.g., bcp TABLENAME out ~/tempfile -S servername -U username generates a file as expected. If we use -c option then the number of lines is as expected. However, bcp TABLENAME in ~/tempfile -S servername -U username fails with CTLIB Message: - L0/0D/S0/N0/0/0: blk_int(): blk_layer: CT library error: Cannot find an equivalent CS_TYPE for this TDS data type 49 blk_init failed. We get this whenever we try to copy into TABLENAME, whether from the production or test table dump file. I don't understand why export and import for the same TABLENAME is generating a data type error. What am I doing wrong here? Thanks

    Read the article

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40GB) SQL Server database at the moment.  Moving such a database between data centers over the Internet is not without its challenges.  In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data.  To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so: EXEC sp_spaceused lq_ActivityLog This results in output showing the table's row count and its reserved, data, and index sizes. Of course this only shows one table; if you have a lot of tables and need to know which ones are taking up the most space, it would be nice if you could run a query to list all of the tables, perhaps ordered by the space they're taking up.  Thanks to Mitchel Sellers (and Gregg Stark's CURSOR template) and a tiny bit of my own edits, now you can!  Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space. -- Lists Space Used for all user tables CREATE PROCEDURE GetAllTableSizes AS DECLARE @TableName VARCHAR(100) DECLARE tableCursor CURSOR FORWARD_ONLY FOR select [name] from dbo.sysobjects where OBJECTPROPERTY(id, N'IsUserTable') = 1 FOR READ ONLY CREATE TABLE #TempTable( tableName varchar(100), numberofRows varchar(100), reservedSize varchar(50), dataSize varchar(50), indexSize varchar(50), unusedSize varchar(50)) OPEN tableCursor WHILE (1=1) BEGIN FETCH NEXT FROM tableCursor INTO @TableName IF(@@FETCH_STATUS<>0) BREAK; INSERT #TempTable EXEC sp_spaceused @TableName END CLOSE tableCursor DEALLOCATE tableCursor UPDATE #TempTable SET reservedSize = REPLACE(reservedSize, ' KB', '') SELECT tableName 'Table Name', numberofRows 'Total Rows', reservedSize 'Reserved KB', dataSize 'Data Size', indexSize 'Index Size', unusedSize 'Unused Size' FROM #TempTable ORDER BY CONVERT(bigint,reservedSize) DESC DROP TABLE #TempTable GO

    Read the article

  • Table and Column Checksums

    - by Ricardo Peres
    Following my last posts on Change Data Capture and Change Tracking, here is another tip regarding tracking changes: table and column checksums. The concept is simple: each time a column value changes, the checksum also changes, so you can use this method to see very easily whether a table has changed. Beware, however: different column values may generate the same checksum. Here's the SQL: -- table checksum SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM TableName -- column checksum SELECT CHECKSUM_AGG(BINARY_CHECKSUM(ColumnName)) FROM TableName -- integer column checksum SELECT CHECKSUM_AGG(IntegerColumnName) FROM TableName Here are the reference links on the CHECKSUM, CHECKSUM_AGG and BINARY_CHECKSUM functions: CHECKSUM CHECKSUM_AGG BINARY_CHECKSUM
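
    To act on the checksum from application code, you can read it with a scalar query and compare it with a previously stored value. A minimal C# sketch (the connection string and table name are placeholders, and the caveat about colliding checksums still applies):

        using System;
        using System.Data.SqlClient;

        static long GetTableChecksum(string connectionString, string tableName)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM " + tableName, conn))
            {
                conn.Open();
                object result = cmd.ExecuteScalar();                 // NULL for an empty table
                return result == DBNull.Value ? 0L : Convert.ToInt64(result);
            }
        }

        // usage: bool changed = storedChecksum != GetTableChecksum(cs, "TableName");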

    Read the article

  • Sqlite catalog and schema settings (for MyBatis generator)

    - by user1769754
    I have been trying to use the MyBatis generator on a Sqlite database but can't seem to find what settings to use for the XML attributes catalog and schema. The generator errors with "Generation Warnings Occured: Table configuration with catalog main, schema sqlite_master, and table testTable did not resolve to any tables" I can't find much from the sqlite website other than this which gives something like catalog = main, schema = sqlite or sqlite_master My generator XML file is <?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE generatorConfiguration PUBLIC "-//mybatis.org//DTD MyBatis Generator Configuration 1.0//EN" "http://mybatis.org/dtd/mybatis-generator-config_1_0.dtd" > <generatorConfiguration > <context id="context"> <jdbcConnection driverClass="org.sqlite.JDBC" connectionURL="jdbc:sqlite:testDB.sqlite" userId="" password="" ></jdbcConnection> <javaModelGenerator targetPackage="model" targetProject="test/src" ></javaModelGenerator> <sqlMapGenerator targetPackage="model" targetProject="test/src" ></sqlMapGenerator> <javaClientGenerator targetPackage="model" targetProject="test" type="XMLMAPPER" ></javaClientGenerator> <table catalog="main" schema="sqlite_master" tableName="testTable" > <property name="useActualColumnNames" value="true"/> </table> </context> </generatorConfiguration> I have also tried other combinations like <table catalog="main" schema="sqlite" tableName="testTable" > <table schema="sqlite" tableName="testTable" > <table schema="sqlite_master" tableName="testTable" > <table schema="main.testTable" tableName="testTable" > Anyone here know the right settings?

    Read the article

  • SQLiteDataAdapter Fill exception C# ADO.NET

    - by Lirik
    I'm trying to use the OleDb CSV parser to load some data from a CSV file and insert it into a SQLite database, but I get an exception with the OleDbAdapter.Fill method and it's frustrating: An unhandled exception of type 'System.Data.ConstraintException' occurred in System.Data.dll Additional information: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints. Here is the source code: public void InsertData(String csvFileName, String tableName) { String dir = Path.GetDirectoryName(csvFileName); String name = Path.GetFileName(csvFileName); using (OleDbConnection conn = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dir + @";Extended Properties=""Text;HDR=No;FMT=Delimited""")) { conn.Open(); using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * FROM " + name, conn)) { QuoteDataSet ds = new QuoteDataSet(); adapter.Fill(ds, tableName); // <-- Exception here InsertData(ds, tableName); // <-- Inserts the data into the my SQLite db } } } class Program { static void Main(string[] args) { SQLiteDatabase target = new SQLiteDatabase(); string csvFileName = "D:\\Innovations\\Finch\\dev\\DataFeed\\YahooTagsInfo.csv"; string tableName = "Tags"; target.InsertData(csvFileName, tableName); Console.ReadKey(); } } The "YahooTagsInfo.csv" file looks like this: tagId,tagName,description,colName,dataType,realTime 1,s,Symbol,symbol,VARCHAR,FALSE 2,c8,After Hours Change,afterhours,DOUBLE,TRUE 3,g3,Annualized Gain,annualizedGain,DOUBLE,FALSE 4,a,Ask,ask,DOUBLE,FALSE 5,a5,Ask Size,askSize,DOUBLE,FALSE 6,a2,Average Daily Volume,avgDailyVolume,DOUBLE,FALSE 7,b,Bid,bid,DOUBLE,FALSE 8,b6,Bid Size,bidSize,DOUBLE,FALSE 9,b4,Book Value,bookValue,DOUBLE,FALSE I've tried the following: Removing the first line in the CSV file so it doesn't confuse it for real data. Changing the TRUE/FALSE realTime flag to 1/0. I've tried 1 and 2 together (i.e. removed the first line and changed the flag). None of these things helped... One constraint is that the tagId is supposed to be unique. Here is what the table look like in design view: Can anybody help me figure out what is the problem here?
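
    One way to see exactly which rows are violating which constraint is to load the data with constraints switched off and then inspect the row errors; the typed dataset's unique key on tagId is the usual suspect when the header row or a duplicate id sneaks in. A small diagnostic sketch using the question's variables:

        QuoteDataSet ds = new QuoteDataSet();
        ds.EnforceConstraints = false;          // let Fill load everything first
        adapter.Fill(ds, tableName);
        try
        {
            ds.EnforceConstraints = true;       // re-applying the constraints flags the offending rows
        }
        catch (ConstraintException)
        {
            foreach (DataTable table in ds.Tables)
                foreach (DataRow row in table.GetErrors())
                    Console.WriteLine("{0}: {1}", row[0], row.RowError);
        }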

    Read the article

  • mysql PDO how to bind LIKE

    - by dmontain
    In this query select wrd from tablename WHERE wrd LIKE '$partial%' I'm trying to bind the variable '$partial%' with PDO. Not sure how this works with the % at the end. Would it be select wrd from tablename WHERE wrd LIKE ':partial%' where :partial is bound to $partial="somet" or would it be select wrd from tablename WHERE wrd LIKE ':partial' where :partial is bound to $partial="somet%" or would it be something entirely different?

    Read the article

  • How to get the Output value of SP using C#

    - by karthik
    I am using the following Code to execute the SP of MySql and get the output value. I need to get the output value to my c# after SP is executed. How ? Thanks. Code : public static string GetInsertStatement(string DBName, string TblName, string ColName, string ColValue) { string strData = ""; MySqlConnection conn = new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]); MySqlCommand cmd = conn.CreateCommand(); try { cmd.CommandType = System.Data.CommandType.StoredProcedure; cmd.CommandText = "InsGen"; cmd.Parameters.Clear(); cmd.Parameters.Add("in_db", MySqlDbType.VarChar, 20); cmd.Parameters["in_db"].Value = DBName; cmd.Parameters.Add("in_table", MySqlDbType.VarChar, 20); cmd.Parameters["in_table"].Value = TblName; cmd.Parameters.Add("in_ColumnName", MySqlDbType.VarChar, 20); cmd.Parameters["in_ColumnName"].Value = ColName; cmd.Parameters.Add("in_ColumnValue", MySqlDbType.VarChar, 20); cmd.Parameters["in_ColumnValue"].Value = ColValue; conn.Open(); cmd.ExecuteNonQuery(); conn.Close(); } catch (System.Exception e) { Console.WriteLine(e.Message); } return strData; } SP : DELIMITER $$ DROP PROCEDURE IF EXISTS `InsGen` $$ CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen` ( in_db varchar(20), in_table varchar(20), in_ColumnName varchar(20), in_ColumnValue varchar(20) ) BEGIN declare Whrs varchar(500); declare Sels varchar(500); declare Inserts varchar(2000); declare tablename varchar(20); declare ColName varchar(20); set tablename=in_table; # Comma separated column names - used for Select select group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')')) INTO @Sels from information_schema.columns where table_schema=in_db and table_name=tablename; # Comma separated column names - used for Group By select group_concat('`',column_name,'`') INTO @Whrs from information_schema.columns where table_schema=in_db and table_name=tablename; #Main Select Statement for fetching comma separated table values set @Inserts=concat("select concat('insert into ", in_db,".",tablename," values(',concat_ws(',',",@Sels,"),');') from ", in_db,".",tablename, " where ", in_ColumnName, " = " , in_ColumnValue, " group by ",@Whrs, ";"); PREPARE Inserts FROM @Inserts; select Inserts; EXECUTE Inserts; END $$ DELIMITER ;
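
    ExecuteNonQuery() discards result sets, and this procedure hands its output back as a result set (the EXECUTE of the prepared SELECT), so it has to be read instead. A sketch of the change inside the try block, keeping the question's variable names (the byte[] check covers Connector/NET returning the concatenated text as a BLOB):

        conn.Open();
        object raw = cmd.ExecuteScalar();       // first column of the first row the SP returns
        strData = raw is byte[] bytes
            ? System.Text.Encoding.UTF8.GetString(bytes)
            : Convert.ToString(raw);
        conn.Close();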

    Read the article

  • Stored procedure to remove FK of a given table

    - by Nicole
    I need to create a stored procedure that: Accepts a table name as a parameter Find its dependencies (FKs) Removes them Truncate the table I created the following so far based on http://www.mssqltips.com/sqlservertip/1376/disable-enable-drop-and-recreate-sql-server-foreign-keys/ . My problem is that the following script successfully does 1 and 2 and generates queries to alter tables but does not actually execute them. In another word how can execute the resulting "Alter Table ..." queries to actually remove FKs? CREATE PROCEDURE DropDependencies(@TableName VARCHAR(50)) AS BEGIN SELECT 'ALTER TABLE ' + OBJECT_SCHEMA_NAME(parent_object_id) + '.[' + OBJECT_NAME(parent_object_id) + '] DROP CONSTRAINT ' + name FROM sys.foreign_keys WHERE referenced_object_id=object_id(@TableName) END EXEC DropDependencies 'TableName' Any idea is appreciated! Update: I added the cursor to the SP but I still get and error: "Msg 203, Level 16, State 2, Procedure DropRestoreDependencies, Line 75 The name 'ALTER TABLE [dbo].[ChildTable] DROP CONSTRAINT [FK__ChileTable__ParentTable__745C7C5D]' is not a valid identifier." Here is the updated SP: CREATE PROCEDURE DropRestoreDependencies(@schemaName sysname, @tableName sysname) AS BEGIN SET NOCOUNT ON DECLARE @operation VARCHAR(10) SET @operation = 'DROP' --ENABLE, DISABLE, DROP DECLARE @cmd NVARCHAR(1000) DECLARE @FK_NAME sysname, @FK_OBJECTID INT, @FK_DISABLED INT, @FK_NOT_FOR_REPLICATION INT, @DELETE_RULE smallint, @UPDATE_RULE smallint, @FKTABLE_NAME sysname, @FKTABLE_OWNER sysname, @PKTABLE_NAME sysname, @PKTABLE_OWNER sysname, @FKCOLUMN_NAME sysname, @PKCOLUMN_NAME sysname, @CONSTRAINT_COLID INT DECLARE cursor_fkeys CURSOR FOR SELECT Fk.name, Fk.OBJECT_ID, Fk.is_disabled, Fk.is_not_for_replication, Fk.delete_referential_action, Fk.update_referential_action, OBJECT_NAME(Fk.parent_object_id) AS Fk_table_name, schema_name(Fk.schema_id) AS Fk_table_schema, TbR.name AS Pk_table_name, schema_name(TbR.schema_id) Pk_table_schema FROM sys.foreign_keys Fk LEFT OUTER JOIN sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id --inner join WHERE TbR.name = @tableName AND schema_name(TbR.schema_id) = @schemaName OPEN cursor_fkeys FETCH NEXT FROM cursor_fkeys INTO @FK_NAME,@FK_OBJECTID, @FK_DISABLED, @FK_NOT_FOR_REPLICATION, @DELETE_RULE, @UPDATE_RULE, @FKTABLE_NAME, @FKTABLE_OWNER, @PKTABLE_NAME, @PKTABLE_OWNER WHILE @@FETCH_STATUS = 0 BEGIN -- create statement for dropping FK and also for recreating FK IF @operation = 'DROP' BEGIN -- drop statement SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME + '] DROP CONSTRAINT [' + @FK_NAME + ']' EXEC @cmd -- create process DECLARE @FKCOLUMNS VARCHAR(1000), @PKCOLUMNS VARCHAR(1000), @COUNTER INT -- create cursor to get FK columns DECLARE cursor_fkeyCols CURSOR FOR SELECT COL_NAME(Fk.parent_object_id, Fk_Cl.parent_column_id) AS Fk_col_name, COL_NAME(Fk.referenced_object_id, Fk_Cl.referenced_column_id) AS Pk_col_name FROM sys.foreign_keys Fk LEFT OUTER JOIN sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id INNER JOIN sys.foreign_key_columns Fk_Cl ON Fk_Cl.constraint_object_id = Fk.OBJECT_ID WHERE TbR.name = @tableName AND schema_name(TbR.schema_id) = @schemaName AND Fk_Cl.constraint_object_id = @FK_OBJECTID -- added 6/12/2008 ORDER BY Fk_Cl.constraint_column_id OPEN cursor_fkeyCols FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME SET @COUNTER = 1 SET @FKCOLUMNS = '' SET @PKCOLUMNS = '' WHILE @@FETCH_STATUS = 0 BEGIN IF @COUNTER > 1 BEGIN SET @FKCOLUMNS = @FKCOLUMNS + ',' SET @PKCOLUMNS = 
@PKCOLUMNS + ',' END SET @FKCOLUMNS = @FKCOLUMNS + '[' + @FKCOLUMN_NAME + ']' SET @PKCOLUMNS = @PKCOLUMNS + '[' + @PKCOLUMN_NAME + ']' SET @COUNTER = @COUNTER + 1 FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME END CLOSE cursor_fkeyCols DEALLOCATE cursor_fkeyCols END FETCH NEXT FROM cursor_fkeys INTO @FK_NAME,@FK_OBJECTID, @FK_DISABLED, @FK_NOT_FOR_REPLICATION, @DELETE_RULE, @UPDATE_RULE, @FKTABLE_NAME, @FKTABLE_OWNER, @PKTABLE_NAME, @PKTABLE_OWNER END CLOSE cursor_fkeys DEALLOCATE cursor_fkeys END For running use: EXEC DropRestoreDependencies dbo, ParentTable

    Read the article

  • How to Access value of dynamically created control?

    - by sharad
    I have an ASP.NET web application in which I need to create a form dynamically depending on the table the user selected. However, although I create the form dynamically with its controls like text boxes, drop-down lists, etc., I am not able to access those controls in a particular button click event. My form scenario is like this: I have a drop-down list in which I display the list of tables, a button control, say Display (on its click event I redirect the page with some addition to the query string so that at Page_Init it dynamically generates the controls), and another button control, say Submit, which is used to access those controls' values and store them in the database. This is my code. At the Display button click event: Response.Redirect("dynamic_main.aspx?mode=edit&tablename=" + CMBTableList.SelectedItem.ToString()); This is the code at the Page_Init level: if (Request.QueryString["mode"] != null) { if (Request.QueryString["mode"].ToString() == "edit") { Session["tablename"] = Request.QueryString["tablename"].ToString(); // if (Session["tablename"].ToString() == CMBTableList.SelectedItem.ToString()) //{ createcontrol(Session["tablename"].ToString()); // } } } I have used createcontrol(), which generates the controls. Code for the Submit button: string[] contid=controlid.Value.Split(','); MasterPage ctl00 = FindControl("ctl00") as MasterPage; ContentPlaceHolder cplacehld = ctl00.FindControl("ContentPlaceHolder2") as ContentPlaceHolder; Panel panel1 = cplacehld.FindControl("plnall") as Panel; Table tabl1 = panel1.FindControl("id1") as Table; foreach (string sin in contid) { string splitcvalu = sin.Split('_')[0].ToString(); ControlsAccess contaccess = new ControlsAccess(); if (splitcvalu.ToString() == "txt") { TextBox txt = (TextBox)tabl1.FindControl(sin.ToString()); //TextBox txt = cplacehld.FindControl(sin.ToString()) as TextBox; val = txt.ID; } else { DropDownList DDL = cplacehld.FindControl(sin.ToString()) as DropDownList; } } Here I am not able to access the textbox or drop-down list. I am getting the id of a particular control from the hidden field controlid, in which I stored it while generating the controls. I have used FindControl(); it works for finding the master page, content placeholder, table, etc., but for a textbox or other control it doesn't work: it returns null. Can anyone help me? Thanks
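
    Two things usually bite in this situation: (1) dynamically created controls must be recreated with the same IDs in Page_Init on every postback, including the Submit postback, before their values can be read; (2) FindControl only searches a single naming container and does not recurse into child containers such as the rows and cells of the generated table. A small recursive helper (a hedged sketch, not tied to the exact page above):

        using System.Web.UI;

        static Control FindControlRecursive(Control root, string id)
        {
            if (root.ID == id)
                return root;
            foreach (Control child in root.Controls)
            {
                Control found = FindControlRecursive(child, id);
                if (found != null)
                    return found;
            }
            return null;
        }

        // usage inside the Submit handler, after createcontrol() has rebuilt the form:
        // TextBox txt = FindControlRecursive(Page, sin) as TextBox;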

    Read the article

  • mysql query - syntax error - cannot find out why

    - by Phil Jackson
    Hi all, I'm tearing my hair out over this one. A query is throwing an error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'FROM, SUBJECT, DATE, READ, MAIL ) VALUES ( 'EJackson', 'dfdf', '1270974101', 'fa' at line 1 I printed out the query to see what could be the problem: INSERT INTO db.tablename ( FROM, SUBJECT, DATE, READ, MAIL ) VALUES ( 'EJackson', 'dfdf', '1270974299', 'false', 'dfdsfdsfd' ) and finally the structure consists of: CREATE TABLE db.tablename ( `ID` int(12) NOT NULL auto_increment, `FROM` varchar(255) NOT NULL, `SUBJECT` varchar(255) NOT NULL, `DATE` varchar(255) NOT NULL, `READ` varchar(255) NOT NULL, `MAIL` varchar(255) NOT NULL, PRIMARY KEY (`ID`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ; I can't find anything wrong. Any help would be much appreciated. (db.tablename is a replacement for the actual table name.)

    Read the article

  • MySQL Trigger with dynamic table name

    - by Thomas
    I've looked around a bit and can't quite find an answer to my problem: I want a trigger to execute after an insert on a table, take the data that is being inserted, and do two things: (1) create a new table from the client id and partner id, and (2) insert the 'data' that was just inserted into the new table. I am fairly new to stored procedures and triggers, so I came up with this but am having difficulty debugging it: delimiter $$ CREATE TRIGGER trg_creas_insert BEFORE INSERT ON tracking.creas for each row BEGIN DECLARE @tableName varchar(40); DECLARE @createStmnt mediumtext; SET @tableName = concat('crea_','_', NEW.idClient_crea,'_',NEW.idPartenaire_crea); SET @createStmnt = concat('CREATE TABLE IF NOT EXISTS', @tableName, '( `data_crea` mediumtext NOT NULL ) ENGINE=MyISAM AUTO_INCREMENT=29483330 DEFAULT CHARSET=utf8 PACK_KEYS=0'); PREPARE stmt FROM @createStmnt; EXECUTE stmt; INSERT INTO @tableName (data_crea) values (NEW.data_crea); END$$ delimiter ; Thoughts?

    Read the article
