Search Results

Search found 28024 results on 1121 pages for 'sql 2014'.

Page 409/1121 | < Previous Page | 405 406 407 408 409 410 411 412 413 414 415 416  | Next Page >

  • Beware of SQL Server and PerfMon differences in disk latency calculation

    - by Michael Zilberstein
    Recently the sp_blitz procedure on one of my OLTP servers returned an alarming notification about high latency on one of the disks (more than 100ms per IO). Our chief storage guy didn’t understand what I was talking about – according to his measurements, average latency is only about 15ms. In order to investigate the issue, I’ve recorded 2 snapshots of sys.dm_io_virtual_file_stats and calculated latency per read and write separately. Results appeared to be even more alarming: while for read average latency...(read more)
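
    The excerpt is truncated, but the figures come from sys.dm_io_virtual_file_stats. A minimal sketch of the per-file latency calculation using the cumulative counters (the author diffs two snapshots instead, which measures a specific interval rather than everything since instance startup):

        SELECT  DB_NAME(vfs.database_id) AS database_name,
                vfs.file_id,
                -- cumulative stall time divided by cumulative I/O count
                vfs.io_stall_read_ms  * 1.0 / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
                vfs.io_stall_write_ms * 1.0 / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
        FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        ORDER BY avg_read_latency_ms DESC;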

    Read the article

  • SolidQ Journal - free SQL goodness for February

    - by Greg Low
    The SolidQ Journal for February just made it out by the end of February 28th. But again, it's great to see the content appearing. I've included the second part of the article on controlling the execution context of stored procedures. The first part was in December. Also this month, along with Fernando Guerrero's editorial, Analysis Services guru Craig Utley has written about aggregations, Herbert Albert and Gianluca Holz have continued their double-act and described how to automate database migrations,...(read more)

    Read the article

  • Find Duplicate Fields in a Table

    - by Derek Dieter
    A common scenario when querying tables is the need to find duplicate fields within the same table. Doing this is simple: it requires utilizing the GROUP BY clause and counting the number of occurrences. For example, let's take a customers table. Within the customers table, we want to find all the [...]
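
    The excerpt ends before the example, but the pattern it describes looks roughly like this sketch (the table and column names here are hypothetical, not taken from the article):

        SELECT  EmailAddress,
                COUNT(*) AS occurrences
        FROM    dbo.Customers
        GROUP BY EmailAddress
        HAVING  COUNT(*) > 1;

    Joining this result back to the customers table returns the full duplicate rows rather than just the duplicated values.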

    Read the article

  • StreamInsight is in all editions (except express)

    - by simonsabin
    Contrary to many posts and even press releases from Microsoft, StreamInsight is not just for the Data Center edition. It is available in all paid-for editions. If you read the license terms http://go.microsoft.com/fwlink/?LinkID=186261&clcid=0x409 you will see you get StreamInsight in all paid editions. What's confusing is the performance/limitations in each edition. The only reference I could find to these limitations is here http://blogs.msdn.com/b/streaminsight/archive/2010/02/10/streaminsight-versions...(read more)

    Read the article

  • StreamInsight on the Brain - can you help?

    - by sqlartist
    I just came across this guy who is once again in the news as the world's first cyborg. I read all about this research some years back when he implanted a chip into his arm to allow him to open doors in his research lab. Now, without really advancing the research he is claiming that a virus could be implanted onto these implanted devices. Captain Cyborg sidekick implants virus-infected chip - http://www.theregister.co.uk/2010/05/26/captain_cyborg_cyberfud/ This is of interest to me as I actually...(read more)

    Read the article

  • Adding a Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a column to a table either as a nullable column or with a default value. But are these two operations similar internally, and which method is optimal? Let us start this with an example. I created a database and a table using the following script:

        USE master
        GO
        --Drop the database if it exists
        IF EXISTS (SELECT 1 FROM sys.databases WHERE name = 'AddColumn')
            DROP DATABASE AddColumn
        --Create the database
        CREATE DATABASE AddColumn
        GO
        USE AddColumn
        GO
        --Drop the table if it exists
        IF EXISTS (SELECT 1 FROM sys.tables WHERE Name = 'ExistingTable')
            DROP TABLE ExistingTable
        GO
        --Create the table
        CREATE TABLE ExistingTable
        (
            ID        BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
            DateTime1 DATETIME DEFAULT GETDATE(),
            DateTime2 DATETIME DEFAULT GETDATE(),
            DateTime3 DATETIME DEFAULT GETDATE(),
            DateTime4 DATETIME DEFAULT GETDATE(),
            Gendar    CHAR(1) DEFAULT 'M',
            STATUS1   CHAR(1) DEFAULT 'Y'
        )
        GO
        -- Insert 100,000 records with default values
        INSERT INTO ExistingTable DEFAULT VALUES
        GO 100000

    Before adding a Column

    Before adding a column, let us look at some of the details of the database.

        DBCC IND (AddColumn, ExistingTable, 1)

    By running the above query, you will see 637 pages for the created table.

    Adding a Column

    You can add a column to the table with the following statement:

        ALTER TABLE ExistingTable ADD NewColumn INT NULL

    The above will add a column with a NULL value for the existing records. Alternatively, you could add a column with a default value:

        ALTER TABLE ExistingTable ADD NewColumn INT NOT NULL DEFAULT 1

    The above statement will add the column with a value of 1 for the existing records. In the table below I measured the performance difference between the above two statements.

        Parameter | Nullable Column | Default Value
        ----------+-----------------+--------------
        CPU       | 31              | 702
        Duration  | 129 ms          | 6653 ms
        Reads     | 38              | 116,397
        Writes    | 6               | 1329
        Row Count | 0               | 100000

    If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is the reason why it has taken more duration and CPU to add the column with a default value. We can verify this by several methods.

    Number of Pages

    The number of data pages can be obtained by using the DBCC IND command. Though this is an undocumented DBCC command, many experts are comfortable using it in production. However, since there is no official word from Microsoft, use it “at your own risk”.

        DBCC IND (AddColumn, ExistingTable, 1)

        Before adding the columns          | 637
        Adding a column with NULL          | 637
        Adding a column with DEFAULT value | 1270

    This clearly shows that the pages are physically modified. Please note that the higher value for "Adding a column with DEFAULT value" is also partly a result of page splits. Continues…

    Read the article

  • Backup those keys, citizen

    - by BuckWoody
    Periodically I back up the keys within my servers and databases, and when I do, I blog a reminder here. This should be part of your standard backup rotation – the keys should be backed up often enough to have a current copy at hand, and again whenever they change. The first key you need to back up is the Service Master Key, which each Instance already has built-in. You do that with the BACKUP SERVICE MASTER KEY command, which you can read more about here. The second set of keys are the Database Master Keys, stored per database, if you’ve created one. You can back those up with the BACKUP MASTER KEY command, which you can read more about here. Finally, you can use the keys to create certificates and other keys – those should also be backed up. Read more about those here. Anyway, the important part here is the backup. Make sure you keep those keys safe!
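
    The "read more about here" links are stripped in this excerpt, but the commands themselves are standard T-SQL. A sketch of the three backups the post describes, with file paths and passwords as placeholders:

        -- Instance-level service master key
        BACKUP SERVICE MASTER KEY
            TO FILE = 'D:\KeyBackups\ServiceMasterKey.key'
            ENCRYPTION BY PASSWORD = '<StrongPasswordHere>';

        -- Per-database master key (run in the database that owns it)
        USE MyDatabase;
        BACKUP MASTER KEY
            TO FILE = 'D:\KeyBackups\MyDatabase_MasterKey.key'
            ENCRYPTION BY PASSWORD = '<StrongPasswordHere>';

        -- A certificate created from those keys, including its private key
        BACKUP CERTIFICATE MyCertificate
            TO FILE = 'D:\KeyBackups\MyCertificate.cer'
            WITH PRIVATE KEY (
                FILE = 'D:\KeyBackups\MyCertificate.pvk',
                ENCRYPTION BY PASSWORD = '<StrongPasswordHere>');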

    Read the article

  • Data Education: Great Classes Coming to a City Near You

    - by Adam Machanic
    In case you haven't noticed, Data Education (the training company I started a couple of years ago) has expanded beyond the US northeast; we're currently offering courses with top trainers in both St. Louis and Chicago, as well as the Boston area. The courses are starting to fill up fast—not surprising when you consider we’re talking about experienced instructors like Kalen Delaney, Rob Farley, and Allan Hirt—but we still have some room. We’re very excited about bringing the highest quality...(read more)

    Read the article

  • Reproducing a Conversion Deadlock

    - by Alexander Kuznetsov
    Even if two processes compete on only one resource, they still can embrace in a deadlock. The following scripts reproduce such a scenario. In one tab, run this:

        CREATE TABLE dbo.Test ( i INT ) ;
        GO
        INSERT INTO dbo.Test ( i ) VALUES ( 1 ) ;
        GO
        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE ;
        BEGIN TRAN
        SELECT i FROM dbo.Test ;
        --UPDATE dbo.Test SET i=2 ;

    After this script has completed, we have an outstanding transaction holding a shared lock. In another tab, let us have that another connection have...(read more)
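
    The excerpt cuts off before the second script. A sketch of how the conversion deadlock plays out (my reconstruction, not necessarily the author's exact script): take the same shared lock in a second tab, then have both sessions try to convert it to an exclusive lock.

        -- Second tab: take the same shared lock under SERIALIZABLE
        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE ;
        BEGIN TRAN
        SELECT i FROM dbo.Test ;

        -- Now uncomment and run the UPDATE in the first tab (it blocks on this
        -- session's shared lock), then run the UPDATE here: each session waits
        -- for the other's shared lock, and one is chosen as the deadlock victim.
        UPDATE dbo.Test SET i = 2 ;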

    Read the article

  • Great Example of a Simple Cost-Benefit Analysis

    - by BuckWoody
    I saw a post the other day that you should definitely go check out. It’s a cost/benefit decision, and although the author gives it a quick treatment and doesn’t take all points in the decision into account, you should focus on the process he follows. It’s a quick and simple example of the kind of thought process we should have as data professionals when we pick a server, a process, or application and even platform software. The key is to include more than just the price of a piece of software or hardware. You need to think about the “other” costs in the decision, and then make the right one. Sometimes the cheapest option really is the cheapest, and other times, well, it isn’t. I’ve seen this played out not only in the decision to go with a certain selection, but in the options or editions it comes in. You have to put all of the decision points in the analysis to come up with the right answer, and you have to be able to explain your logic to your team and your company. This is the way you become a data professional, not just a DBA. You can check out the post here – it deals with Azure, but the point is the process, not Azure itself: http://blogs.msdn.com/eugeniop/archive/2010/03/19/windows-azure-guidance-a-simplistic-economic-analysis-of-a-expense-migration.aspx

    Read the article

  • Dynamic Number Table

    - by Derek D.
    Using a numbers table is helpful for many things, like finding gaps in a supposed sequence of primary keys, or generating date ranges or any numerical range. In some cases, you will be in a production system that does not already contain a numbers table and you will also be unable to add [...]
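
    The excerpt stops before the technique, but one common way to generate a number table on the fly when you can't create a permanent one looks like this sketch (one of several approaches, not necessarily the one the article uses):

        -- Numbers 1..10000 from stacked CROSS JOINs and ROW_NUMBER()
        WITH Ten AS
        (
            SELECT n FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS t(n)
        ),
        Nums AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
            FROM Ten AS a CROSS JOIN Ten AS b CROSS JOIN Ten AS c CROSS JOIN Ten AS d
        )
        SELECT n
        FROM   Nums
        ORDER BY n;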

    Read the article

  • Slow in the Application but Fast in SQL Server Management Studio - from Erland

    - by Greg Low
    Our MVP buddy Erland Sommarskog doesn't post articles that often but when he does, you should read them. His latest post is here: http://www.sommarskog.se/query-plan-mysteries.html It talks about why a query might be slow when sent from an application but fast when you execute it in SSMS. But it covers way more than that. There is a great deal of good info on how queries are executed and query plans generated. Highly recommended!...(read more)

    Read the article

  • Create an SQL Express 2008 database in C# code, but login fails when trying to connect with a sysadm

    - by Andrés Gonzales
    I have a piece of code that creates a SQL Server Express 2008 database at runtime, and then tries to connect to it to execute a database initialization script in Transact-SQL. The code that creates the database is the following:

        private void CreateDatabase()
        {
            using (var connection = new SqlConnection(
                "Data Source=.\\sqlexpress;Initial Catalog=master;" +
                "Integrated Security=true;User Instance=True;"))
            {
                connection.Open();
                using (var command = connection.CreateCommand())
                {
                    command.CommandText =
                        "CREATE DATABASE " + m_databaseFilename +
                        " ON PRIMARY (NAME=" + m_databaseFilename +
                        ", FILENAME='" + this.m_basePath + m_databaseFilename + ".mdf')";
                    command.ExecuteNonQuery();
                }
            }
        }

    The database is created successfully. After that, I try to connect to the database to run the initialization script, using the following code:

        private void ExecuteQueryFromFile(string filename)
        {
            string queryContent = File.ReadAllText(m_filePath + filename);
            this.m_connectionString = string.Format(
                @"Server=.\SQLExpress; Integrated Security=true;Initial Catalog={0};",
                m_databaseFilename);
            using (var connection = new SqlConnection(m_connectionString))
            {
                connection.Open();
                using (var command = connection.CreateCommand())
                {
                    command.CommandText = queryContent;
                    command.CommandTimeout = 0;
                    command.ExecuteNonQuery();
                }
            }
        }

    However, the connection.Open() statement fails, throwing the following exception:

        Cannot open database "TestData" requested by the login. The login failed.
        Login failed for user 'MYDOMAIN\myusername'.

    I am completely puzzled by this error because the account I am trying to connect with has sysadmin privileges, which should allow me to connect to any database (notice that I use a connection to the master database to create the database in the first place).

    Read the article

  • How to calculate the covariance in T-SQL

    - by Peter Larsson
        DECLARE @Sample TABLE
            (
                x INT NOT NULL,
                y INT NOT NULL
            )

        INSERT  @Sample
        VALUES  (3, 9),
                (2, 7),
                (4, 12),
                (5, 15),
                (6, 17)

        ;WITH cteSource(x, xAvg, y, yAvg, n)
        AS (
                SELECT  1E * x,
                        AVG(1E * x) OVER (PARTITION BY (SELECT NULL)),
                        1E * y,
                        AVG(1E * y) OVER (PARTITION BY (SELECT NULL)),
                        COUNT(*) OVER (PARTITION BY (SELECT NULL))
                FROM    @Sample
        )
        SELECT  SUM((x - xAvg) * (y - yAvg)) / MAX(n) AS [COVAR(x,y)]
        FROM    cteSource

    Read the article

  • SQL statement supposed to return 2 distinct rows, but only 1 is returned (C# Windows)

    - by jello
    Yeah, so I have an SQL statement that is supposed to return 2 rows: the first with psychological_id = 1, and the second with psychological_id = 2. Here is the SQL statement:

        select * from psychological where patient_id = 12 and symptom = 'delire';

    But with this code, with which I populate an ArrayList with what is supposed to be 2 different rows, two rows exist, but with the same values: the second row.

        OneSymptomClass oneSymp = new OneSymptomClass();
        ArrayList oneSympAll = new ArrayList();
        string connStrArrayList = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " +
                                  "Initial Catalog=PatientMonitoringDatabase; " +
                                  "Integrated Security=True";
        string queryStrArrayList = "select * from psychological where patient_id = " + patientID.patient_id +
                                   " and symptom = '" + SymptomComboBoxes[tag].SelectedItem + "';";
        using (var conn = new SqlConnection(connStrArrayList))
        using (var cmd = new SqlCommand(queryStrArrayList, conn))
        {
            conn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    oneSymp.psychological_id = Convert.ToInt32(rdr["psychological_id"]);
                    oneSymp.patient_history_date_psy = (DateTime)rdr["patient_history_date_psy"];
                    oneSymp.strength = Convert.ToInt32(rdr["strength"]);
                    oneSymp.psy_start_date = (DateTime)rdr["psy_start_date"];
                    oneSymp.psy_end_date = (DateTime)rdr["psy_end_date"];
                    oneSympAll.Add(oneSymp);
                }
            }
            conn.Close();
        }
        OneSymptomClass testSymp = oneSympAll[0] as OneSymptomClass;
        MessageBox.Show(testSymp.psychological_id.ToString());

    The message box outputs "2", while it's supposed to output "1". Anyone got an idea what's going on?

    Read the article

  • SQL: Is it possible to set up a column that will contain a value dependent on another column?

    - by Wesley
    I have a table (A) that lists all bundles created off a machine in a day. It lists the date created and the weight of the bundle. I have an ID column, a date column, and a weight column. I also have a table (B) that holds the details related to that machine for the day. In that table (B), I want a column that lists a sum of weights from the other table (A) that the dates match on. So if the machine runs 30 bundles in a day, I'll have 30 rows in table (A) all dated the same day. In table (B) I'll have 1 row detailing other information about the machine for the day plus the column that holds the total bundle weight created for the day. Is there a way to make the total column in table (B) automatically adjust itself whenever a row is added to table (A)? Is this possible to do in the table schema itself rather than in an SQL statement each time a bundle is added? If it's not, what sort of SQL statement do I need? Wes
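
    One answer, sketched with hypothetical table and column names since the question doesn't give a schema: rather than storing the total in table (B), compute it on the fly with a view over table (A); a trigger on (A) that maintains a real column in (B) is the alternative if the value must be materialized.

        -- Hypothetical names: BundleLog = table (A), MachineDay = table (B)
        CREATE VIEW dbo.MachineDayWithTotals
        AS
        SELECT  b.RunDate,
                b.MachineId,
                (SELECT SUM(a.Weight)
                 FROM   dbo.BundleLog AS a
                 WHERE  a.CreatedDate = b.RunDate) AS TotalBundleWeight
        FROM    dbo.MachineDay AS b;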

    Read the article

  • SQL Server v.Next (Denali): Breaking change to fn_virtualfilestats

    - by AaronBertrand
    Yesterday I posted a general warning about changes to Denali that will potentially break your existing code base, with a strong suggestion to grab the summer CTP as soon as it is available and start testing. I posted an example of a breaking change that will not be documented since it affects a commonly-used but undocumented DBCC command (DBCC LOGINFO), and also mentioned a couple of other changes in passing. Today it occurred to me that it may be more useful if, when I come across a potential...(read more)

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone that has an interesting situation. He has 15,000 customers, and he asks if he should have a database for their data per customer. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind. Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) that will be involved, if there are any cross-sections of data (do they share location or product information) and – very important – what are the security requirements? From the answer to these types of questions, you now have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.

    Read the article

  • SQL Server 2008 R2: StreamInsight - User-defined aggregates

    - by Greg Low
    I'd briefly played around with user-defined aggregates in StreamInsight with CTP3 but when I started working with the new Count Windows, I found I had to have one working. I learned a few things along the way that I hope will help someone. The first thing you have to do is define a class:

        public class IntegerAverage : CepAggregate<int, int>
        {
            public override int GenerateOutput(IEnumerable<int> eventData)
            {
                if (eventData.Count() == 0)
                {
                    return 0;
                }
                else
                {
                    return eventData.Sum()...(read more)

    Read the article

  • Approaching events #mstc11 #ppws #sqlbits

    - by Marco Russo (SQLBI)
    The spring season is always full of events and I’m just preparing for a number of them. First of all, we are getting very good interest for the PowerPivot Workshop in Copenhagen on 21-22 March 2011. Tomorrow (Friday March 4) will be the last day to take advantage of the Early Bird rate for this date. We will also participate in an evening meeting of local user groups on March 21 in Copenhagen; more news about this in the next few days. Other scheduled dates are in Dublin (28-29 March 2011) and in...(read more)

    Read the article

  • SQL: How do I INSERT primary key values from two tables INTO a master table?

    - by Stefan
    Hello, I would appreciate some help with an SQL statement I really can't get my head around. What I want to do is fairly simple: I need to take the values from two different tables and copy them into a master table when a new row is inserted into one of the two tables. The problem is perhaps best explained like this: I have three tables, productcategories, regioncategories and mastertable.

        TABLE: PRODUCTCATEGORIES
        COLUMNS: CODE  | DESCRIPTION
        VALUES:  BOOKS | Books

        TABLE: REGIONCATEGORIES
        COLUMNS: CODE | DESCRIPTION
        VALUES:  EU   | European Union

        TABLE: MASTERTABLE
        COLUMNS: REGION | PRODUCT
        VALUES:  EU     | BOOKS

    I want the values to be inserted like this when a new row is created in either productcategories or regioncategories. A new row is created:

        TABLE: PRODUCTCATEGORIES
        COLUMNS: CODE  | DESCRIPTION
        VALUES:  BOOKS | Books
        VALUES:  DVD   | DVDs

    And an SQL statement copies the new values into the mastertable:

        TABLE: MASTERTABLE
        COLUMNS: REGION | PRODUCT
        VALUES:  EU     | BOOKS
        VALUES:  EU     | DVD

    The same goes if a row is created in the regioncategories. New row:

        TABLE: REGIONCATEGORIES
        COLUMNS: CODE | DESCRIPTION
        VALUES:  EU   | European Union
        VALUES:  US   | United States

    Copied to the mastertable:

        TABLE: MASTERTABLE
        COLUMNS: REGION | PRODUCT
        VALUES:  EU     | BOOKS
        VALUES:  EU     | DVD
        VALUES:  US     | BOOKS
        VALUES:  US     | DVD

    I hope it makes sense. Thanks, Stefan
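
    One way to get this behaviour, sketched with the question's table names and assuming SQL Server-style DML triggers (the question doesn't name a product): an AFTER INSERT trigger on each category table that cross joins the newly inserted rows with the other category table.

        CREATE TRIGGER trg_productcategories_insert
        ON productcategories
        AFTER INSERT
        AS
        BEGIN
            -- Pair every newly inserted product with every existing region
            INSERT INTO mastertable (REGION, PRODUCT)
            SELECT r.CODE, i.CODE
            FROM   inserted AS i
            CROSS JOIN regioncategories AS r;
        END;

    A mirror-image trigger on regioncategories would cross join its inserted rows with productcategories.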

    Read the article

  • Powershell, SMO and Database Files

    - by dbaduck
    In response to some questions about renaming a physical file for a database, I have 2 versions of PowerShell scripts that do this for you, including taking the database offline and then online to make the physical change match the meta-data. First, there is an article about this at http://msdn.microsoft.com/en-us/library/ms345483.aspx. This explains that you start by setting the database offline, then alter the database and modify the filename, then set it back online. This particular article does...(read more)
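
    The offline/modify/online sequence the excerpt describes looks roughly like this in plain T-SQL (a sketch, not the post's PowerShell/SMO version; the database name, logical file name, and path are placeholders):

        ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;

        -- Point the metadata at the new physical file name
        ALTER DATABASE MyDb
            MODIFY FILE (NAME = MyDb_Data, FILENAME = 'D:\Data\MyDb_New.mdf');

        -- Rename or move the physical file in the OS to match, then:
        ALTER DATABASE MyDb SET ONLINE;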

    Read the article

  • How to represent and insert into an ordered list in SQL?

    - by Travis
    I want to represent the list "hi", "hello", "goodbye", "good day", "howdy" (with that order) in a SQL table:

        pk | i | val
        ------------
        1  | 0 | hi
        0  | 2 | hello
        2  | 3 | goodbye
        3  | 4 | good day
        5  | 6 | howdy

    'pk' is the primary key column. Disregard its values. 'i' is the "index" that defines the order of the values in the 'val' column. It is only used to establish the order and the values are otherwise unimportant. The problem I'm having is with inserting values into the list while maintaining the order. For example, if I want to insert "hey" and I want it to appear between "hello" and "goodbye", then I have to shift the 'i' values of "goodbye" and "good day" (but preferably not "howdy") to make room for the new entry. So, is there a standard SQL pattern to do the shift operation, but only shift the elements that are necessary? (Note that a simple "UPDATE table SET i=i+1 WHERE i>=3" doesn't work, because it violates the uniqueness constraint on 'i', and also it updates the "howdy" row unnecessarily.) Or, is there a better way to represent the ordered list? I suppose you could make 'i' a floating point value and choose values in between, but then you have to have a separate rebalancing operation when no such value exists. Or, is there some standard algorithm for generating string values between arbitrary other strings, if I were to make 'i' a varchar? Or should I just represent it as a linked list? I was avoiding that because I'd like to also be able to do a SELECT .. ORDER BY to get all the elements in order.
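
    For what it's worth, one common workaround (a sketch with a hypothetical table name, not a definitive answer) is to leave gaps in 'i' so that most inserts don't shift anything, and to renumber with ROW_NUMBER() when a gap runs out. In SQL Server, though not necessarily in every DBMS, the renumbering UPDATE is legal even with a unique constraint on 'i', because the constraint is checked at the end of the statement:

        -- Assuming dbo.OrderedList(pk IDENTITY, i UNIQUE, val), with i values spaced 10 apart
        INSERT INTO dbo.OrderedList (i, val) VALUES (35, 'hey');   -- lands between i = 30 and i = 40

        -- When no gap is available, renumber everything 10 apart in one statement
        WITH ordered AS
        (
            SELECT i, ROW_NUMBER() OVER (ORDER BY i) AS rn
            FROM   dbo.OrderedList
        )
        UPDATE ordered SET i = rn * 10;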

    Read the article
