Search Results

Search found 11052 results on 443 pages for 'linked tables'.


  • Status of Data in Rollback of Large Transaction in SQL Server

    - by Lloyd Banks
    I have a data warehousing procedure that downloads and replaces dozens of tables from a linked server to a local database. Every once in a while, the code gets stuck on one of the tables on the linked server because that table is in a state of transition. I am under the assumption that since the entire procedure is considered one transaction, when the procedure gets stuck, none of the changes made by the procedure so far would have committed. But the opposite seems to be true: tables that were "downloaded" before the procedure got stuck have already been updated with today's versions on the local server. Shouldn't SQL Server wait for the entire procedure to finish before the changes are durable?
        CREATE PROCEDURE MYIMPORT AS
        BEGIN
            SET NOCOUNT ON
            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE1')
                DROP TABLE TABLE1
            SELECT COLUMN1, COLUMN2, COLUMN3 INTO TABLE1
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE1')
            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE2')
                DROP TABLE TABLE2
            SELECT COLUMN1, COLUMN2, COLUMN3 INTO TABLE2
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE2')
            -- IF THE PROCEDURE GETS STUCK HERE, THEN CHANGES TO TABLE1 HAVE ALREADY BEEN MADE
            -- ON THE LOCAL SERVER, WHILE NO CHANGES HAVE BEEN MADE TO TABLE3
            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE3')
                DROP TABLE TABLE3
            SELECT COLUMN1, COLUMN2, COLUMN3 INTO TABLE3
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE3')
        END
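
    By default, each statement in the procedure commits on its own (autocommit); nothing in the code above opens a transaction, which would explain the behaviour. A minimal sketch of making the whole load atomic, reusing the placeholder names from the question; note that a transaction spanning linked-server reads may get promoted to a distributed (MSDTC) transaction:

        -- Sketch: wrap the whole load in one explicit transaction so an error
        -- rolls back every table, not just the statement that failed.
        CREATE PROCEDURE MYIMPORT AS
        BEGIN
            SET NOCOUNT ON;
            SET XACT_ABORT ON;   -- any runtime error aborts and rolls back the transaction
            BEGIN TRY
                BEGIN TRANSACTION;
                IF OBJECT_ID('dbo.TABLE1') IS NOT NULL DROP TABLE dbo.TABLE1;
                SELECT COLUMN1, COLUMN2, COLUMN3 INTO dbo.TABLE1
                FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE1');
                -- ... repeat for the remaining tables ...
                COMMIT TRANSACTION;
            END TRY
            BEGIN CATCH
                IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
                THROW;   -- re-raise the original error (SQL Server 2012+; use RAISERROR on older versions)
            END CATCH
        END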

    Read the article

  • fill combobox value in datagridview based on other combobox in datagridview

    - by Purohit Raghu
    I'm creating a web application in C#. I have one DataGridView containing two ComboBox columns, and I'm trying to bind the second combo box based on the first one. The first combo box has the values Shirt and T-shirt, so if Shirt is selected the second combo box should show Slim and Regular, and if T-shirt is selected it should show V neck and Round collar. I have a separate table for shirt types and T-shirt types. It works fine the first time, but when I go to the second row and change the first combo box, it also changes the value of the second combo box in the previous row. What do I need to change so the upper rows are not affected? I have the following code:
        private void dataGridView1_EditingControlShowing(object sender, DataGridViewEditingControlShowingEventArgs e)
        {
            if (dataGridView1.CurrentCell.ColumnIndex == 0)
            {
                ComboBox cbx = e.Control as ComboBox;
                cbx.SelectionChangeCommitted -= new EventHandler(cbx_SelectionChangeCommitted);
                cbx.SelectionChangeCommitted += new EventHandler(cbx_SelectionChangeCommitted);
            }
        }

        private void cbx_SelectionChangeCommitted(object sender, EventArgs e)
        {
            int selectedIndex = ((ComboBox)sender).SelectedIndex;
            if (selectedIndex == 1) // true if Shirt was selected in the first combobox
            {
                ShirtType();
            }
            if (selectedIndex == 2)
            {
                Tshirtype();
            }
        }

        void ShirtType()
        {
            try
            {
                string str = "select ShirtType_name,ShirtType_Id from ShirtType_master";
                ds = new DataSet();
                ds = cn.readdata(str);
                Type.DataSource = ds.Tables[0];   // Type is the name of the second combobox column
                Type.DisplayMember = ds.Tables[0].Columns[0].ToString();
                Type.ValueMember = ds.Tables[0].Columns[1].ToString();
            }
            catch (Exception ee) { }
        }

        void Tshirtype()
        {
            try
            {
                string str = "select TShirtType_name,TshirtType_Id from TshirtType_Master";
                ds = new DataSet();
                ds = cn.readdata(str);
                Type.DataSource = ds.Tables[0];
                Type.DisplayMember = ds.Tables[0].Columns[0].ToString();
                Type.ValueMember = ds.Tables[0].Columns[1].ToString();
            }
            catch (Exception ee) { }
        }

    Read the article

  • InvalidArgument=Value of '3' is not valid for 'rowIndex'

    - by Devendra Dwivedi
    Hi all. This is my code. It works when a terminal has up to 3 services, but it fails for more than 3 services with the following error message: InvalidArgument=Value of '3' is not valid for 'rowIndex'. I have tried hard to track this down but couldn't find a solution. Can anybody please help me?
        MySqlCommand command = new MySqlCommand("VTerminalsLoad"); // procedure
        MySqlDataAdapter terminalAdapter = this.Database.ExecuteCommand(command);
        terminalAdapter.Fill(dataSet, "Terminals");

        command = new MySqlCommand("VTServicesLoad"); // procedure
        command.Parameters.Add(new MySqlParameter("pVesselID", 1));
        MySqlDataAdapter serviceAdapter = this.Database.ExecuteCommand(command); // returns adapter
        serviceAdapter.Fill(dataSet, "Services");

        DataColumn[] parentColumns = { dataSet.Tables[0].Columns["SerialNo"], dataSet.Tables[0].Columns["VesselID"], dataSet.Tables[0].Columns["TerminalID"] };
        DataColumn[] childColumns = { dataSet.Tables[1].Columns["SerialNo"], dataSet.Tables[1].Columns["VesselID"], dataSet.Tables[1].Columns["TerminalID"] };
        DataRelation relationTS = new DataRelation("TerminalsServices", parentColumns, childColumns);
        dataSet.Relations.Add(relationTS);

        // Parent table
        ListTerminal.DataSource = dataSet;   // ListTerminal: parent DataGridView
        ListTerminal.DataMember = "Terminals";
        // Child table
        ListServices.DataSource = dataSet;   // ListServices: child DataGridView
        ListServices.DataMember = "Terminals.TerminalsServices";

    Read the article

  • SQL Server SELECT stored procedure according to combobox.selectedvalue

    - by Jay
    In order to fill a datagridview according to the selectedvalue of a combobox I've tried creating a stored procedure. However, as I'm not 100% sure what I'm doing, depending on the WHERE statement at the end of my stored procedure, it either returns everything within the table or nothing at all. This is what's in my class: Public Function GetAankoopDetails(ByRef DisplayMember As String, ByRef ValueMember As String) As DataTable DisplayMember = "AankoopDetailsID" ValueMember = "AankoopDetailsID" If DS.Tables.Count > 0 Then DS.Tables.Remove(DT) End If DT = DAC.ExecuteDataTable(My.Resources.S_AankoopDetails, _Result, _ DAC.Parameter(Const_AankoopID, AankoopID), _ DAC.Parameter("@ReturnValue", 0)) DS.Tables.Add(DT) Return DT End Function Public Function GetAankoopDetails() As DataTable If DS.Tables.Count > 0 Then DS.Tables.Remove(DT) End If DT = DAC.ExecuteDataTable(My.Resources.S_AankoopDetails, _Result, _ DAC.Parameter(Const_AankoopID, AankoopID), _ DAC.Parameter("@ReturnValue", 0)) DS.Tables.Add(DT) Return DT End Function This is the function in the code behind the form I've written in order to fill the datagridview: Private Sub GridAankoopDetails_Fill() Try Me.Cursor = Cursors.WaitCursor dgvAankoopDetails.DataSource = Nothing _clsAankoopDetails.AankoopDetailsID = cboKeuze.SelectedValue dgvAankoopDetails.DataSource = _clsAankoopDetails.GetAankoopDetails Catch ex As Exception MessageBox.Show("An error occurred while trying to fill the data grid: " & ex.Message, "Oops!", MessageBoxButtons.OK) Finally Me.Cursor = Cursors.Default End Try End Sub And finally, this is my stored procedure: (do note that I'm not sure what I'm doing here) USE [Budget] GO /****** Object: StoredProcedure [dbo].[S_AankoopDetails] Script Date: 04/12/2010 03:10:52 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[S_AankoopDetails] ( @AankoopID int, @ReturnValue int output ) AS declare @Value int set @Value =@@rowcount if @Value = 0 begin SELECT dbo.tblAankoopDetails.AankoopDetailsID, dbo.tblAankoopDetails.AankoopID, dbo.tblAankoopDetails.ArtikelID, dbo.tblAankoopDetails.Aantal, dbo.tblAankoopDetails.Prijs, dbo.tblAankoopDetails.Korting, dbo.tblAankoopDetails.SoortKorting, dbo.tblAankoopDetails.UitgavenDeelGroepID FROM dbo.tblAankoopDetails INNER JOIN dbo.tblAankoop ON dbo.tblAankoopDetails.AankoopID = dbo.tblAankoop.AankoopID INNER JOIN dbo.tblArtikel ON dbo.tblAankoopDetails.ArtikelID = dbo.tblArtikel.ArtikelID INNER JOIN dbo.tblUitgavenDeelGroep ON dbo.tblAankoopDetails.UitgavenDeelGroepID = dbo.tblUitgavenDeelGroep.UitgavenDeelGroepID WHERE dbo.tblAankoopDetails.Deleted = 0 and dbo.tblAankoopDetails.AankoopID = @AankoopID ORDER BY AankoopID if @@rowcount >0 begin set @ReturnValue=999 end else begin set @ReturnValue=997 end end if @Value >0 begin --Dit wil zeggen dat ik een gebruiker wil ingeven die reeds bestaat. (998) set @ReturnValue=998 end Does anyone know what I need to do to resolve this?
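
    The @Value = @@ROWCOUNT assignment at the top of the procedure picks up the row count of whatever statement happened to run previously, not anything inside this procedure, so the IF branches fire unpredictably. A pared-down sketch that simply filters on the parameter (table and column names taken from the question):

        -- Sketch: return the detail rows for one AankoopID and report whether any were found.
        ALTER PROCEDURE dbo.S_AankoopDetails
        (
            @AankoopID   int,
            @ReturnValue int OUTPUT
        )
        AS
        BEGIN
            SELECT d.AankoopDetailsID, d.AankoopID, d.ArtikelID, d.Aantal,
                   d.Prijs, d.Korting, d.SoortKorting, d.UitgavenDeelGroepID
            FROM dbo.tblAankoopDetails AS d
            WHERE d.Deleted = 0
              AND d.AankoopID = @AankoopID
            ORDER BY d.AankoopID;

            -- evaluated immediately after the SELECT, so it reflects that statement
            SET @ReturnValue = CASE WHEN @@ROWCOUNT > 0 THEN 999 ELSE 997 END;
        END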

    Read the article

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long join queries, which forces me to learn about tables outside of my module, but I think I should not concern myself with tables not directly related to my module and should instead use data access functions (written by those responsible for the other modules) when I need data from them. Let me clarify: I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate the details (for example i18n, activation, product availability, etc.). Now I need to show the product title of some product related to some conversation between the vendor and customers. I may either write a long query that retrieves the product info along with the conversation data in one shot (which forces me to learn about the Product tables), OR I may pass the relevant product_id to the get_product_info(int) function. The first approach is obviously demanding and introduces many things I normally consider bad practice. The problem with the second approach is the countless mini queries these access functions cause; the performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each perform a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things? UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decided to change the way they are doing things, or for some reason modified the schema? It means other modules would break or malfunction until the change is integrated into them. The usual ripple effect problem.

    Read the article

  • How can I synchronise two datatables and update the target in the database?

    - by Craig
    I am trying to synchronise two tables between two databases. I thought that the best way to do this would be to use the DataTable.Merge method. This seems to pick up the changes, but nothing ever gets committed to the database. So far I have: string tableName = "[Table_1]"; string sql = "select * from " + tableName; using (SqlConnection sourceConn = new SqlConnection(ConfigurationManager.ConnectionStrings["source"].ConnectionString)) { SqlDataAdapter sourceAdapter = new SqlDataAdapter(); sourceAdapter.SelectCommand = new SqlCommand(sql, sourceConn); sourceConn.Open(); DataSet sourceDs = new DataSet(); sourceAdapter.Fill(sourceDs); using (SqlConnection targetConn = new SqlConnection(ConfigurationManager.ConnectionStrings["target"].ConnectionString)) { SqlDataAdapter targetAdapter = new SqlDataAdapter(); targetAdapter.SelectCommand = new SqlCommand(sql, targetConn); SqlCommandBuilder builder = new SqlCommandBuilder(targetAdapter); targetAdapter.InsertCommand = builder.GetInsertCommand(); targetAdapter.UpdateCommand = builder.GetUpdateCommand(); targetAdapter.DeleteCommand = builder.GetDeleteCommand(); targetConn.Open(); DataSet targetDs = new DataSet(); targetAdapter.Fill(targetDs); targetDs.Tables[0].TableName = tableName; sourceDs.Tables[0].TableName = tableName; targetDs.Tables[0].Merge(sourceDs.Tables[0]); targetAdapter.Update(targetDs.Tables[0]); } } At the present time, there is one row in the source that is not in the target. This row is never transferred. I have also tried it with an empty target, and nothing is transferred.

    Read the article

  • VB dataset issue

    - by Gabriel
    Hi. The idea was to create a message box that stores the user name, message, and post datetime in the database as messages are sent. I soon came to realise: what if the user changes his name? So I decided to use the user id (icn) to identify the message poster instead. However, my chunk of code keeps giving me the same error. It says that there are no rows in the dataset ds2. I've tried my query directly against SQL and it works perfectly, so I really need help spotting the error in my code here.
        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            Dim name As String
            Dim icn As String
            Dim message As String
            Dim time As String
            Dim tags As String = ""
            Dim strConn As System.Configuration.ConnectionStringSettings
            strConn = ConfigurationManager.ConnectionStrings("ufadb")
            Dim conn As SqlConnection = New SqlConnection(strConn.ToString())
            Dim cmd As New SqlCommand("Select * From Message", conn)
            Dim daMessages As SqlDataAdapter = New SqlDataAdapter(cmd)
            Dim ds As New DataSet
            cmd.Connection.Open()
            daMessages.Fill(ds, "Messages")
            cmd.Connection.Close()
            If ds.Tables("Messages").Rows.Count > 0 Then
                Dim n As Integer = ds.Tables("Messages").Rows.Count
                Dim i As Integer
                For i = 0 To n - 1
                    icn = ds.Tables("Messages").Rows(i).Item("icn")
                    Dim cmd2 As New SqlCommand("SELECT name FROM Member inner join Message ON Member.icn = Message.icn WHERE message.icn = @icn", conn)
                    cmd2.Parameters.AddWithValue("@icn", icn)
                    Dim daName As SqlDataAdapter = New SqlDataAdapter(cmd2)
                    Dim ds2 As New DataSet
                    cmd2.Connection.Open()
                    daName.Fill(ds2, "PosterName")
                    cmd2.Connection.Close()
                    name = ds2.Tables("PosterName").Rows(0).Item("name")
                    message = ds.Tables("Messages").Rows(i).Item("message")
                    time = ds.Tables("Messages").Rows(i).Item("timePosted")
                    tags = time + vbCrLf + name + ": " + vbCrLf + message + vbCrLf + tags
                Next
                txtBoard.Text = tags
            Else
                txtBoard.Text = "nothing to display"
            End If
        End Sub
    Help will be very much appreciated, as I have been stuck on this simple problem for 2 days.

    Read the article

  • php and mysql listing databases and looping through results

    - by Jacksta
    Beginner help needed :) I am doing an example from a PHP book which lists the tables in each database. I am getting an error on line 36: $db_list .= "$table_list";
        <?php
        //connect to database
        $connection = mysql_connect("localhost", "admin_cantsayno", "cantsayno") or die(mysql_error());
        //list databases
        $dbs = @mysql_list_dbs($connection) or die(mysql_error());
        //start first bullet list
        $db_list = "<ul>";
        $db_num = 0;
        //loop through results of functions
        while ($db_num < mysql_num_rows($dbs)) {
            //get database names and make each a list point
            $db_names[$db_num] = mysql_tablename($dbs, $db_num);
            $db_list .= "<li>$db_names[$db_num]";
            //get table names and make another list
            $tables = @mysql_list_tables($db_names[$db_num]) or die(mysql_error());
            $table_list = "<ul>";
            $table_num = 0;
            //loop through results of function
            while ($table_num < mysql_num_rows($tables)){
                //get table names and make each bullet point
                $table_names[$table_num] = mysql_tablename($tables, $table_num);
                $table_list .= "<li>$table_names[$table_num]";
                $table_num++;
            }
            //close inner bullet list and increment number to continue
            $table_list .= "</ul>"
            $db_list .= "$table_list";
            $db_num++;
        }
        //close outer bullet list
        $db_list .= "</ul>";
        ?>
        <html>
        <head>
        <title>MySQL Tables</title>
        </head>
        <body>
        <p><strong>Data bases and tables on local host</strong></p>
        <? echo "$db_list"; ?>
        </body>

    Read the article

  • Guid Primary/Foreign Key dilemma in SQL Server

    - by Xience
    Hi guys, I am faced with the dilemma of changing my primary keys from int identities to GUIDs. I'll put my problem straight up. It's a typical retail management app, with POS and back office functionality, and about 100 tables. The database synchronizes with other databases and receives/sends new data. Most tables don't have frequent inserts, updates or select statements executing against them. However, some do have frequent inserts and selects, e.g. the products and orders tables. Some tables have up to 4 foreign keys in them. If I changed my primary keys from int to GUID, would there be a performance issue when inserting or querying data in tables that have many foreign keys? I know people have said that indexes will become fragmented and that 16 bytes per key is an issue. Space wouldn't be an issue in my case, and apparently index fragmentation can be taken care of using the NEWSEQUENTIALID() function. Can someone tell me, from their experience, whether GUIDs will be problematic in tables with many foreign keys? I'd be much appreciative of your thoughts on it...
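
    For reference, NEWSEQUENTIALID() can only be used in a DEFAULT constraint, not called directly in an INSERT or SELECT; a minimal sketch of a pair of GUID-keyed tables done that way (table and column names are made up for illustration):

        -- Sketch: sequential GUID keys to limit page splits / index fragmentation.
        CREATE TABLE dbo.Orders
        (
            OrderID   uniqueidentifier NOT NULL
                CONSTRAINT DF_Orders_OrderID DEFAULT NEWSEQUENTIALID(),
            OrderDate datetime NOT NULL,
            CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
        );

        CREATE TABLE dbo.OrderLines
        (
            OrderLineID uniqueidentifier NOT NULL
                CONSTRAINT DF_OrderLines_OrderLineID DEFAULT NEWSEQUENTIALID(),
            OrderID     uniqueidentifier NOT NULL
                CONSTRAINT FK_OrderLines_Orders REFERENCES dbo.Orders (OrderID),
            Quantity    int NOT NULL,
            CONSTRAINT PK_OrderLines PRIMARY KEY CLUSTERED (OrderLineID)
        );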

    Read the article

  • CSS Table Formatting for an HTML Table

    - by Rurigok
    I am attempting to apply CSS formatting to two HTML tables, but I cannot. I am setting up a webpage in HTML and CSS (with the CSS in an external sheet) and the layout of the website depends on the tables. There are 2 tables, one for the head and another for the body. They are set up so that the content sits in one middle column of 60% width, with one column on each side of the center of 20% width each, along with other table formatting. My question is: how can I format the tables in CSS? I successfully formatted them in HTML, but this will not do. This is the CSS code for the tables - each table has the id layouttable:
        #layouttable{border:0px;width:100%;}
        #layouttable td{width:20%;vertical-align:top;}
        #layouttable td{width:60%;vertical-align:top;background-color:#E8E8E8;}
        #layouttable td{width:20%;vertical-align:top;}
    The tables in the HTML document each have, in this order, these elements (with the content inside not shown):
        <table id="layouttable"><tr><td></td><td></td><td></td></tr></table>
    Does anyone have any idea why this CSS is not working, or can you write some code to fix it? If further explanation is needed, please ask.

    Read the article

  • Sql Compact and __sysobjects

    - by Scott Wisniewski
    I have some SQL Compact queries that create tables inside a transaction. This is mainly because I need to simulate temporary tables, which SQL Compact does not support. I do this by creating a real table and then dropping it at the end of the transaction. This mostly works. Sometimes, however, when creating the tables SQL Compact will try to acquire PAGE level locks on the __sysobjects table. If there are several concurrent queries running that create "temp" tables, the attempt to acquire a page lock can result in a deadlock followed by a SqlLockTimeout exception. For normal tables I could fix this using a "with (rowlock)" hint. However, because I'm not the one writing the query that inserts into __sysobjects (SQL Server does that in response to "create table"), I can't do this. Does anyone know of a way I could get around this? I've thought about pulling the table creation out of the transaction, but that opens up the possibility of phantom temporary tables that I'd then need to clean up regularly. Ideally I'd like to avoid that if possible.

    Read the article

  • How to get an array to check in the in_array function? See description below...

    - by Parth
    I am fetching the data using mysql_fetch_array like this: while($row = mysql_fetch_array($select)) { $tables[] = $row; } Now I need this $tables array as a one-dimensional array only, so that I can use it in if(in_array($val['table_name'], $tables)) { //Some Code } to check whether $val['table_name'] is in $tables or not. At the moment I am getting the $tables array as Array ( [0] => Array ( [TABLE_NAME] => jos_audittrail ) [1] => Array ( [TABLE_NAME] => jos_banner ) [2] => Array ( [TABLE_NAME] => jos_bannerclient ) .. .. .. .. } But I need $tables in the form of.. Array ( [0] => jos_audittrail [TABLE_NAME] => jos_audittrail [1] => jos_banner [TABLE_NAME] => jos_banner [2] => jos_bannerclient [TABLE_NAME] => jos_bannerclient .. .. .. } How can I get the above array after applying the while loop to $row?

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article

  • Can I split a single SQL 2008 DB Table into multiple filegroups, based on a discriminator column?

    - by Pure.Krome
    Hi folks, I've got a SQL Server 2008 R2 database with a number of tables. Two of these tables contain a lot of large data, mainly because one of them holds VARBINARY(MAX) and the sister table holds GEOGRAPHY. (Why two tables? Read below if you're interested***.) The data in these tables is geospatial shapes, such as zipcode boundaries. The first 70K-odd rows are for DataType = 1; the remaining 5 million rows are for DataType = 2. Now, is it possible to split the table data into two files, so that all rows with DataType != 2 go into File_A and DataType = 2 goes into File_B? This way, when I back up the DB, I can skip adding File_B so my download is waaaaay smaller. Is this possible? I'm guessing you might be thinking: why not keep them as TWO extra tables? Mainly because in the code the data is conceptually the same; it just happens that I want to split the storage of this model data. It really messes up my model if I now have two aggregates instead of one. ***Entity Framework doesn't like tables with GEOGRAPHY, so I have to create a new table which transforms the GEOGRAPHY to VARBINARY, and then drop that into EF.
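
    Partitioning the table on the DataType column is one way to map rows to separate filegroups and then take filegroup-level backups; a rough sketch, assuming Enterprise Edition and hypothetical filegroup names FG_A and FG_B that have already been added to the database:

        -- Sketch only: partition function/scheme route rows by DataType value.
        CREATE PARTITION FUNCTION pfDataType (int)
            AS RANGE LEFT FOR VALUES (1);          -- partition 1: DataType <= 1, partition 2: DataType >= 2

        CREATE PARTITION SCHEME psDataType
            AS PARTITION pfDataType TO (FG_A, FG_B);

        -- The partitioning column has to be part of the clustered key.
        CREATE TABLE dbo.Shapes
        (
            ShapeID  int            NOT NULL,
            DataType int            NOT NULL,
            Payload  varbinary(max) NULL,
            CONSTRAINT PK_Shapes PRIMARY KEY CLUSTERED (ShapeID, DataType)
        ) ON psDataType (DataType);

        -- Back up only the primary filegroup and FG_A, skipping the bulky FG_B.
        BACKUP DATABASE MyDb
            FILEGROUP = 'PRIMARY', FILEGROUP = 'FG_A'
            TO DISK = 'D:\Backups\MyDb_partial.bak';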

    Read the article

  • SQL inner join from field defined table?

    - by Wolftousen
    I currently have a total of 6 tables that are part of this question. The primary table, TableA, contains the columns that all the entries in the other 5 tables have in common. The other 5 tables have columns which define the entry in TableA in more detail. For example:
        TableA
        ID|Name|Volumn|Weight|Description
        0 |T1  |0.4   |0.1   |Random text
        1 |R1  |5.3   |25    |Random text
        TableB
        ID|Color|Shape
        0 |Blue |Sphere
        TableC
        ID|Direction|Velocity
        1 |North    |3.4
    (column names are just examples; don't take them for what they mean...) The ID field in TableA is unique across all the other tables (i.e. TableB will have 0, but TableC will not, nor will any other table). What I would like to do is select all the fields from TableA and the corresponding (according to the ID field) detail table (TableB-F). What I have currently done, and not tested, is add a field to TableA so it looks like this:
        TableA
        ID|Name|Volumn|Weight|Description|Table
        0 |T1  |0.4   |0.1   |Random text|TableB
        1 |R1  |5.3   |25    |Random text|TableC
    I have a few questions about this: 1. Is it proper to do such a thing to TableA, as foreign keys won't work in this situation since they all need to link to different tables? 2. If this is proper, would the SQL query look like this (ID would be input by the user)? SELECT * FROM TableA AS a INNER JOIN a.Table AS t ON a.ID = ID; 3. Is there a better way to do this? Thanks for the help.
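
    A JOIN target can't be taken from a column value in static SQL, so a design like this usually ends up with dynamic SQL; a rough sketch, assuming the Table column holds a valid table name:

        -- Sketch: look up the detail table name for the requested ID, then build and
        -- run the join dynamically. QUOTENAME guards against injection via the column value.
        DECLARE @ID int = 0;                       -- supplied by the user
        DECLARE @detailTable sysname, @sql nvarchar(max);

        SELECT @detailTable = [Table] FROM TableA WHERE ID = @ID;

        SET @sql = N'SELECT a.*, d.* FROM TableA AS a '
                 + N'INNER JOIN ' + QUOTENAME(@detailTable) + N' AS d ON d.ID = a.ID '
                 + N'WHERE a.ID = @ID;';

        EXEC sp_executesql @sql, N'@ID int', @ID = @ID;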

    Read the article

  • DB Strategy for inserting into a high read table (Sql Server)

    - by Tom
    Looking for strategies for a very large table whose data is maintained for reporting and historical purposes, while a very small subset of that data is used in daily operations. Background: We have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back end site allows management of the visitors (leads) from the front end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our visitor and visit tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem; we have done all we can with indexing and keeping our indexes clean, small, and not fragmented. PS: We do not currently have the budget or expertise for a data warehouse. The problem: We would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant data. I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data (for instance a Visitor_Archive and a Visitor_Small table), where the larger visitor and visit tables exist for inserts and history/reporting, and the smaller Visitor_Small table would exist for managing leads, sending a lead an email, getting a lead's phone number, getting my list of leads, etc. The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and the Visitor_Small tables in sync... Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?
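
    One further option worth weighing is a filtered indexed view over the big table, so the small "lead" subset is materialized and kept in sync by the engine itself; a sketch under assumed column names (IsLead, AssignedUserID, etc.):

        -- Sketch: the database maintains this materialized subset on every insert/update,
        -- so there is nothing to keep in sync manually. Column names are assumptions.
        CREATE VIEW dbo.Visitor_Leads
        WITH SCHEMABINDING
        AS
        SELECT VisitorID, FirstName, LastName, Email, Phone, AssignedUserID
        FROM dbo.Visitor
        WHERE IsLead = 1;
        GO
        -- The unique clustered index is what materializes the view.
        -- On non-Enterprise editions, query it WITH (NOEXPAND) to use the index.
        CREATE UNIQUE CLUSTERED INDEX IX_Visitor_Leads ON dbo.Visitor_Leads (VisitorID);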

    Read the article

  • Using CTAS & Exchange Partition to Replace IAS for Copying a Partition on Exadata

    - by Bandari Huang
    Usage Scenario: Copy data & indexes from one partition to another partition in a partitioned table.
    Solution:
    - Create a partition definition
    - Copy data from one partition to another partition by 'Insert as select (IAS)'
    - Create a nonpartitioned table by 'Create table as select (CTAS)'
    - Convert the nonpartitioned table into a partition of the partitioned table by exchanging their data segments
    - Rebuild any unusable indexes
    Exchange Partition Conversion
    - Mutual conversion between a partition (or subpartition) and a nonpartitioned table
    - Mutual conversion between a hash-partitioned table and a partition of a composite *-hash partitioned table
    - Mutual conversion between a [range | list]-partitioned table and a partition of a composite *-[range | list] partitioned table
    Exchange Partition Usage Scenario
    - High-speed loading of new, incremental data into an existing partitioned table in a DW environment
    - Exchanging old data partitions out of a partitioned table: the data is purged from the partitioned table without actually being deleted and can be archived separately
    Exchange Partition Syntax
        ALTER TABLE schema.table
          EXCHANGE [PARTITION|SUBPARTITION] [partition|subpartition]
          WITH TABLE schema.table
          [INCLUDING|EXCLUDING] INDEXES
          [WITH|WITHOUT] VALIDATION
          UPDATE [INDEXES|GLOBAL INDEXES]
    INCLUDING | EXCLUDING INDEXES: Specify INCLUDING INDEXES if you want local index partitions or subpartitions to be exchanged with the corresponding table index (for a nonpartitioned table) or local indexes (for a hash-partitioned table). Specify EXCLUDING INDEXES if you want all index partitions or subpartitions corresponding to the partition, and all the regular indexes and index partitions on the exchanged table, to be marked UNUSABLE. If you omit this clause, the default is EXCLUDING INDEXES.
    WITH | WITHOUT VALIDATION: Specify WITH VALIDATION if you want Oracle Database to return an error if any rows in the exchanged table do not map into the partitions or subpartitions being exchanged. Specify WITHOUT VALIDATION if you do not want Oracle Database to check the proper mapping of rows in the exchanged table. If you omit this clause, the default is WITH VALIDATION.
    UPDATE INDEXES | GLOBAL INDEXES: Unless you specify UPDATE INDEXES, the database marks UNUSABLE the global indexes or all global index partitions on the table whose partition is being exchanged. Global indexes or global index partitions on the table being exchanged remain invalidated. (You cannot use UPDATE INDEXES for index-organized tables; use UPDATE GLOBAL INDEXES instead.)
    Exchanging Partitions & Subpartitions Notes
    - Both tables involved in the exchange must have the same primary key, and no validated foreign keys can reference either of the tables unless the referenced table is empty.
    - When exchanging partitioned index-organized tables:
      - The source and target table or partition must have their primary key set on the same columns, in the same order.
      - If key compression is enabled, it must be enabled for both the source and the target, and with the same prefix length.
      - Both the source and target must be index organized.
      - Both the source and target must have overflow segments, or neither can have overflow segments. Also, both the source and target must have mapping tables, or neither can have a mapping table.
      - Both the source and target must have identical storage attributes for any LOB columns.
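
    A small worked example of the CTAS / IAS / exchange pattern, with invented names (a range-partitioned SALES table whose partition P_2012_Q1 is to be replaced with data copied from another partition):

        -- 1. CTAS an empty staging table with the same shape as the partitioned table.
        CREATE TABLE sales_stage AS SELECT * FROM sales WHERE 1 = 0;

        -- 2. IAS: load the staging table (here, copy rows from another partition).
        INSERT /*+ APPEND */ INTO sales_stage
        SELECT * FROM sales PARTITION (p_2011_q1);
        COMMIT;

        -- 3. Build matching local-style indexes on the staging table, then swap the segments.
        ALTER TABLE sales
          EXCHANGE PARTITION p_2012_q1 WITH TABLE sales_stage
          INCLUDING INDEXES WITH VALIDATION UPDATE GLOBAL INDEXES;

        -- 4. Rebuild anything left unusable.
        ALTER INDEX sales_gidx REBUILD;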

    Read the article

  • Ad-hoc reporting similar to Microstrategy/Pentaho - is OLAP really the only choice (is OLAP even sufficient)?

    - by TheBeefMightBeTough
    So I'm getting ready to develop an API in Java that will provide all dimensions, metrics, hierarchies, etc. to a user such that they can pick and choose what they want (say, e.g., dimensions of Location (a store) and Weekly, and the metric Product Sales $), provide their choices to the API, and have it spit out an object that contains the answer to their question (the object would probably be a set of cells). I don't even believe there will be much drill up/down. The data warehouse the API will interface with is in a standard form (FACT tables, dimensions, star schema format). My question is, is an OLAP framework such as Mondrian the only way to achieve something akin to ad-hoc reporting? I can envisage a really large Cube (or VirtualCube) that contains most of the dimensions and metrics the user could ever want, which would give the illusion of ad-hoc reporting. The problem is that there is a ton of setup to do (so much XML) to get the framework to work with the data. Further, it requires specific knowledge, such as MDX, and even more so learning the framework's peculiarities (the Mondrian API). Finally, I am not positive it will scale much better than simply making queries against a SQL database. OLAP to me feels like very old technology. Is performance really an issue anymore? The alternative I can think of would be dynamic SQL. If the existing tables in the data warehouse conform to a naming scheme (FACT_, DIM_, etc.), or if a very simple config file / database table existed that stored which tables are fact tables, which are dimensions, and what metrics are available, then couldn't the API read from that and assemble the appropriate SQL query? Would this necessarily be harder than learning MDX, Mondrian (or another OLAP framework), and creating all the cubes? In general, I feel that OLAP is at the same time too powerful (supports drill up/down, complex functions) and outdated, and am reluctant to base my architecture on it. However, I am unsure whether the alternative(s), such as rolling my own ad-hoc reporting framework using dynamic SQL, would remove any complexity while still fulfilling requirements, both functional and non-functional (e.g., scalability; some FACT tables have many millions of rows). I also wonder about other techniques (e.g., Hive). Has anyone here tried to do ad-hoc reporting? Any advice? I expect this project to take a pretty long time (3 months min, but probably longer), so I just do not want to commit to an architecture without being absolutely sure of its pros and cons. Thanks so much.
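
    As a toy illustration of the dynamic-SQL alternative described above, a sketch that assembles a star join from a naming convention; every table and column name here is an assumption, not a real schema:

        -- Toy sketch: given a chosen dimension and metric, build the star join by convention.
        DECLARE @dimension sysname = N'DIM_Location',
                @dimLabel  sysname = N'StoreName',
                @fact      sysname = N'FACT_ProductSales',
                @metric    sysname = N'SalesAmount',
                @sql       nvarchar(max);

        SET @sql = N'SELECT d.' + QUOTENAME(@dimLabel) + N', SUM(f.' + QUOTENAME(@metric) + N') AS MetricValue '
                 + N'FROM ' + QUOTENAME(@fact) + N' AS f '
                 + N'JOIN ' + QUOTENAME(@dimension) + N' AS d ON d.LocationKey = f.LocationKey '
                 + N'GROUP BY d.' + QUOTENAME(@dimLabel) + N';';

        EXEC sp_executesql @sql;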

    Read the article

  • SSRS Report from Oracle DB - Use stored procedure

    - by Emtucifor
    I am developing a report in Sql Server Reporting Services 2005, connecting to an Oracle 11g database. As you post replies perhaps it will help to know that I'm skilled in MSSQL Server and inexperienced in Oracle. I have multiple nested subreports and need to use summary data in outer reports and the same data but in detail in the inner reports. In order to spare the DB server from multiple executions, I thought to populate some temp tables at the beginning and then query just them the multiple times in the report and the subreports. In SSRS, Datasets are evidently executed in the order they appear in the RDL file. And you can have a dataset that doesn't return a rowset. So I created a stored procedure to populate my four temp tables and made this the first Dataset in my report. This SP works when I run it from SQLDeveloper and I can query the data from the temp tables. However, this didn't appear to work out because SSRS was apparently not reusing the same session, so even though the global temporary tables were created with ON COMMIT PRESERVE ROWS my Datasets were empty. I switched to using "real" tables and am now passing in an additional parameter, a GUID in string form, uniquely generated on each new execution, that is part of the primary key of each table, so I can get back just the rows for this execution. Running this from Sql Developer works fine, example: DECLARE ActivityCode varchar2(15) := '1208-0916 '; ExecutionID varchar2(32) := SYS_GUID(); BEGIN CIPProjectBudget (ActivityCode, ExecutionID); END; Never mind that in this example I don't know the GUID, this simply proves it works because rows are inserted to my four tables. But in the SSRS report, I'm still getting no rows in my Datasets and SQL Developer confirms no rows are being inserted. So I'm thinking along the lines of: Oracle uses implicit transactions and my changes aren't getting committed? Even though I can prove that the non-rowset returning SP is executing (because if I leave out the parameter mapping it complains at report rendering time about not having enough parameters) perhaps it's not really executing. Somehow. Wrong execution order isn't the problem or rows would appear in the tables, and they aren't. I'm interested in any ideas about how to accomplish this (especially the part about not running the main queries multiple times). I'll redesign my whole report. I'll stop using a stored procedure. Suggest anything you like! I just need help getting this working and I am stuck. If you want more details, in my SSRS report I have a List object (it's a container that repeats once for each row in a Dataset) that has some header values and then contains a subreport. Eventually, there will be four total reports: one main report, with three nested subreports. Each subreport will be in a List on the parent report.
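
    One thing worth ruling out first: a PL/SQL procedure does not commit by itself, so if the report's connection never issues a COMMIT, rows inserted by the procedure stay invisible to other sessions and vanish when the session ends. A hedged sketch of committing inside the procedure (parameter names from the question; the table names are invented):

        -- Sketch: commit the staged rows before the procedure returns, so later datasets
        -- (possibly on different connections) can see them.
        CREATE OR REPLACE PROCEDURE CIPProjectBudget (
            p_activity_code IN VARCHAR2,
            p_execution_id  IN VARCHAR2
        ) AS
        BEGIN
            INSERT INTO cip_budget_stage (execution_id, activity_code, amount)
            SELECT p_execution_id, activity_code, amount
            FROM   cip_budget_source
            WHERE  activity_code = p_activity_code;

            COMMIT;   -- make the staged rows visible to the report's other datasets
        END CIPProjectBudget;
        /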

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I’m going to postpone that until my next post, as I’ve had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables: alert.Alert alert.Alert_Cleared alert.Alert_Comment alert.Alert_Severity alert.Alert_Type In this post, I’m only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let’s have a look at it. SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType FROM alert.Alert ORDER BY AlertId DESC;  AlertIdAlertTypeTargetObjectReadSubType 165550397:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,10 265549387:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,10 365548187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 11…     So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to. Okay, now lets look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings use nicely to the next table, data.Alert_Type. Let’s have a look at what’s in this table: SELECT AlertType, Event, Monitoring, Name, Description FROM alert.Alert_Type ORDER BY AlertType;  AlertTypeEventMonitoringNameDescription 1100Processor utilizationProcessor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration 2210SQL Server error log entryAn error is written to the SQL Server error log with a severity level above a specified value. 3310Cluster failoverThe active cluster node fails, causing the SQL Server instance to switch nodes. 4410DeadlockSQL deadlock occurs. 
5500Processor under-utilizationProcessor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration 6610Job failedA job does not complete successfully (the job returns an error code). 7700Machine unreachableHost machine (Windows server) cannot be contacted on the network. 8800SQL Server instance unreachableThe SQL Server instance is not running or cannot be contacted on the network. 9900Disk spaceDisk space used on a logical disk drive is above a defined threshold for longer than a specified duration. 101000Physical memoryPhysical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration. 111100Blocked processSQL process is blocked for longer than a specified duration. 121200Long-running queryA SQL query runs for longer than a specified duration. 131400Backup overdueNo full backup exists, or the last full backup is older than a specified time. 141500Log backup overdueNo log backup exists, or the last log backup is older than a specified time. 151600Database unavailableDatabase changes from Online to any other state. 161700Page verificationTorn Page Detection or Page Checksum is not enabled for a database. 171800Integrity check overdueNo entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time. 181900Fragmented indexesFragmentation level of one or more indexes is above a threshold percentage. 192400Job duration unusualThe duration of a SQL job duration deviates from its baseline duration by more than a threshold percentage. 202501Clock skewSystem clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds. 212700SQL Server Agent Service statusThe SQL Server Agent Service status matches the status specified. 222800SQL Server Reporting Service statusThe SQL Server Reporting Service status matches the status specified. 232900SQL Server Full Text Search Service statusThe SQL Server Full Text Search Service status matches the status specified. 243000SQL Server Analysis Service statusThe SQL Server Analysis Service status matches the status specified. 253100SQL Server Integration Service statusThe SQL Server Integration Service status matches the status specified. 263300SQL Server Browser Service statusThe SQL Server Browser Service status matches the status specified. 273400SQL Server VSS Writer Service statusThe SQL Server VSS Writer status matches the status specified. 283501Deadlock trace flag disabledThe monitored SQL Server’s trace flag cannot be enabled. 293600Monitoring stopped (host machine credentials)SQL Monitor cannot contact the host machine because authentication failed. 303700Monitoring stopped (SQL Server credentials)SQL Monitor cannot contact the SQL Server instance because authentication failed. 313800Monitoring error (host machine data collection)SQL Monitor cannot collect data from the host machine. 323900Monitoring error (SQL Server data collection)SQL Monitor cannot collect data from the SQL Server instance. 334000Custom metricThe custom metric value has passed an alert threshold. 344100Custom metric collection errorSQL Monitor cannot collect custom metric data from the target object. 
Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self evident, and I’m going to skip over the Event and Monitoring columns as they’re not very interesting. The AlertId column is the primary key, and is referenced by AlertId in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts: SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType ORDER BY AlertId DESC;  AlertIdNameTargetObjectReadSubType 165550Monitoring error (SQL Server data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,00 265549Monitoring error (host machine data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,00 365548Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one’s a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it’s probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it’s best to consider an example from the table above. Have a look at the alert with an AlertID of 65543. It’s a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus: Cluster: Name = "granger" SqlServer: Name = "" (an empty string, denoting the default instance) Database: Name = "SqlMonitorData" Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,". 
It is indeed composed of three parts, one for each level in the hierarchy:

- Cluster: "7:Cluster,1,4:Name,s7:granger,"
- SqlServer: "9:SqlServer,1,4:Name,s0:,"
- Database: "8:Database,1,4:Name,s14:SqlMonitorData,"

Each part is handled in exactly the same way, so let's concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following:

- "7:Cluster," – This identifies the level in the hierarchy.
- "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name).
- "4:Name,s7:granger," – This represents the Name property, and its corresponding value, granger. It's split up like this:
  - "4:Name," – Indicates the name of the key property.
  - "s" – Indicates the type of the key property; in this case, it's a string.
  - "7:granger," – Indicates the value of the property.

At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well, an encoding scheme is used, which consists of the following:

- "7" – This is the length of the string "Cluster".
- ":" – This is a delimiter between the length of the string and the actual string's contents.
- "Cluster" – This is the string itself: 7 characters.
- "," – This is a final terminating character that indicates the end of the encoded string.

You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows:

- "I" – Denotes a bigint value. For example, "I65432,".
- "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,".
- "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I'll describe how datetime values are handled in the SQL Monitor data repository in a future post.

I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert.
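If you do want to pick these values apart outside of SQL Monitor, the format is regular enough to decode with a few lines of code. Below is a minimal sketch in Python – purely illustrative, not part of SQL Monitor itself, and assuming only the rules described above (length-prefixed, comma-terminated strings, and raw comma-terminated tokens for the I, g and d value types):

def read_encoded_string(s, pos):
    # Reads "<length>:<chars>," e.g. "7:Cluster," and returns (value, new_pos).
    colon = s.index(":", pos)
    length = int(s[pos:colon])
    start = colon + 1
    value = s[start:start + length]
    assert s[start + length] == ","      # terminating comma
    return value, start + length + 1

def read_int(s, pos):
    # Reads "<digits>," e.g. "1," and returns (value, new_pos).
    comma = s.index(",", pos)
    return int(s[pos:comma]), comma + 1

def parse_target_object(s):
    # Returns a list of (level_name, {property: (type_char, value)}) tuples.
    pos, levels = 0, []
    while pos < len(s):
        level, pos = read_encoded_string(s, pos)
        prop_count, pos = read_int(s, pos)
        props = {}
        for _ in range(prop_count):
            name, pos = read_encoded_string(s, pos)
            type_char, pos = s[pos], pos + 1
            if type_char == "s":         # length-prefixed string value
                value, pos = read_encoded_string(s, pos)
            else:                        # I / g / d: raw token up to the next comma
                comma = s.index(",", pos)
                value, pos = s[pos:comma], comma + 1
            props[name] = (type_char, value)
        levels.append((level, props))
    return levels

# The Backup overdue example from this post:
backup_overdue = ("7:Cluster,1,4:Name,s7:granger,"
                  "9:SqlServer,1,4:Name,s0:,"
                  "8:Database,1,4:Name,s14:SqlMonitorData,")
print(parse_target_object(backup_overdue))
# -> [('Cluster', {'Name': ('s', 'granger')}),
#     ('SqlServer', {'Name': ('s', '')}),
#     ('Database', {'Name': ('s', 'SqlMonitorData')})]

Running it against the example value gives back exactly the Cluster, SqlServer and Database names we worked out by hand above.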
Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I'm interested in:

SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType
FROM alert.Alert a
JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
WHERE AlertId = 65769;

AlertId | AlertType | Name | TargetObject | Read | SubType
65769 | 40 | Custom metric | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn't tell us anything about the specific custom metric that this alert pertains to. That's where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition, which can be found in the settings.CustomAlertDefinitions table. I don't really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert.

SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType
FROM alert.Alert a
JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id
WHERE AlertId = 65769;

AlertId | AlertType | Name | CustomAlertName | TargetObject | Read | SubType
65769 | 40 | Custom metric | CPU pressure (avg runnable task count) | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

The TargetObject value in this case breaks down like this:

- "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger".
- "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance).
- "8:Database,1,4:Name,s6:master," – Database named "master".
- "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2.

Note that the hierarchy for a custom metric is slightly different from that of the earlier Backup overdue alert: it's root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and its value is a bigint (not a string).

Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I'd like to point out that whilst the SubType references a custom alert definition, the MetricId value embedded in the TargetObject references a custom metric definition. Although in this case both the custom metric definition and the custom alert definition share the same Id value of 2, this is not generally the case.

Okay, that's enough for now, not least because as I'm typing this, it's almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I'll either cover the remaining three tables in the alert schema, or I'll delve into the way SQL Monitor stores its monitoring data, as I'd originally planned to cover in this post.

    Read the article

  • mysql server upgrade problem from 5.0 to 5.1

    - by Avinash
    Hi, I have upgraded my MySQL server from 5.0 to 5.1, but I am having a problem with tables that use the InnoDB storage engine. My default engine is InnoDB, so it is enabled on my server, but tables using the InnoDB engine are not displayed in phpMyAdmin. Tables using MyISAM display properly, and I also can't run a query against the InnoDB tables. Thanks, Avinash

    Read the article

  • Is Internet routing (BGP) fully automated?

    - by Adal
    If all the routing tables on the Internet were erased simultaneously, would the routers be able to rediscover them automatically? I'm having an argument with a colleague who says that the RIPE routing tables are essential, but I remember reading that if the tables disappeared, the BGP protocol would allow routers to rediscover working routes between nodes by querying their neighbors, which in turn would query their neighbors, until a working route is found. That route would then be used to repopulate the routing tables, and after a while all the routes would be restored (not necessarily the optimal routes). Is that correct?

    Read the article

  • How can I force a MySQL table to become corrupted?

    - by Rory McCann
    I have written a simple Nagios plugin that calls mysqlcheck (which checks for corrupted tables) and will give a warning if any are corrupt. However, none of my tables are corrupt right now, so I'm not 100% sure that my plugin is working correctly. I have a dev server that's not mission-critical. How can I force one (or any) of the tables there to become corrupt so that I can test my Nagios alert? For the record, the server is Ubuntu Dapper and the MySQL version is 5.0.
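    For context, a check along the lines described might look something like the sketch below (Python, purely illustrative – the question doesn't include the actual plugin, and the credentials and mysqlcheck options here are placeholders; the exit codes follow the usual Nagios 0 = OK, 1 = WARNING, 2 = CRITICAL convention):

    #!/usr/bin/env python
    # Rough sketch of a Nagios-style check: run mysqlcheck and warn if any
    # table does not come back as OK.
    import subprocess
    import sys

    OK, WARNING, CRITICAL = 0, 1, 2

    def main():
        # Credentials and options are placeholders, not taken from the question.
        cmd = ["mysqlcheck", "--all-databases", "-u", "nagios", "-pSECRET"]
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT, universal_newlines=True)
        output, _ = proc.communicate()
        # mysqlcheck prints one line per table; healthy tables end with "OK".
        suspect = [line.strip() for line in output.splitlines()
                   if line.strip() and not line.strip().endswith("OK")]
        if suspect:
            print("WARNING: mysqlcheck reported problems: " + "; ".join(suspect))
            return WARNING
        print("OK: no corrupted tables reported")
        return OK

    if __name__ == "__main__":
        sys.exit(main())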

    Read the article

  • Match Hard Disk Partition Table?

    - by MA1
    What is the most efficient way to match the partition tables on two different hard disks? I have saved the partition tables using the dd command in Linux. The partition tables are from a Windows system.
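    Assuming the saved images are single 512-byte MBR sectors (e.g. taken with something like dd if=/dev/sda of=sda.mbr bs=512 count=1), the partition table itself occupies bytes 446–509 of the sector (four 16-byte entries), so one way to compare just that region is sketched below (Python, illustrative only; the file names are hypothetical):

    # Compare the partition-table region of two saved MBR sectors.
    # A classic MBR is 512 bytes: the four 16-byte partition entries sit at
    # offsets 446-509, followed by the 0x55AA boot signature at 510-511.
    import sys

    PT_OFFSET, PT_LENGTH = 446, 64

    def partition_table(path):
        with open(path, "rb") as f:
            sector = f.read(512)
        if len(sector) < 512:
            sys.exit("%s is shorter than one 512-byte sector" % path)
        return sector[PT_OFFSET:PT_OFFSET + PT_LENGTH]

    # Example file names for dumps taken with dd (bs=512 count=1).
    table_a = partition_table("sda.mbr")
    table_b = partition_table("sdb.mbr")

    if table_a == table_b:
        print("Partition tables match")
    else:
        # Report which of the four 16-byte entries differ.
        for i in range(4):
            if table_a[i*16:(i+1)*16] != table_b[i*16:(i+1)*16]:
                print("Entry %d differs" % (i + 1))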

    Read the article

< Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >