Search Results

Search found 85825 results on 3433 pages for 'sql data services'.


  • Big Data – Buzz Words: Importance of Relational Database in Big Data World – Day 9 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what HDFS is. In this article we will take a quick look at the importance of the relational database in the Big Data world. A Big Question? Here are a few questions I have often received since the beginning of the Big Data series: Does the relational database have no place in the Big Data story? Is the relational database no longer relevant as Big Data evolves? Is the relational database incapable of handling Big Data? Is it true that one no longer has to learn about relational data if Big Data is the final destination? Well, every single time I hear that someone wants to learn about Big Data and is no longer interested in learning about relational databases, I find that a bit far-fetched. I am not here to give the ambiguous answer of "It Depends". I am personally very clear that anyone aspiring to become a Big Data scientist or Big Data expert should learn about relational databases. NoSQL Movement The recent NoSQL movement happened because of two important advantages of NoSQL databases: performance and flexible schema. In my personal experience I have seen both of these advantages first-hand when using NoSQL databases. There are instances when I have found the relational database too restrictive, because my data was unstructured or used datatypes my relational database does not support, and likewise cases where a NoSQL solution performed much better than a relational database. I must say that I have become a big fan of NoSQL solutions in recent times, but I have also seen occasions and situations where the relational database is still the perfect fit, even though the database is growing rapidly and has all the symptoms of big data. Situations Where the Relational Database Outperforms Ad-hoc reporting is one of the most common scenarios where NoSQL does not have an optimal solution. For example, reporting queries often need to aggregate on columns which are not indexed and are determined only while the report is running; in this kind of scenario, NoSQL databases (document stores, distributed key-value stores) often do not perform well. For ad-hoc reporting I have often found it much easier to work with relational databases. SQL is the most popular computer language of all time. I have been using it for over 10 years and I feel that I will be using it for a long time in the future. There are plenty of tools, connectors and awareness of the SQL language in the industry. Pretty much every programming language has drivers written for SQL, and most developers learned this language during their school or college time. In many cases, writing a query in SQL is much easier than writing queries in NoSQL-supported languages. I believe this is the current situation, but in the future this situation can reverse when NoSQL query languages are equally popular. ACID (Atomicity, Consistency, Isolation, Durability) – Not all NoSQL solutions offer ACID compliance. There are always situations (for example banking transactions, eCommerce shopping carts, etc.) where, without ACID, operations can become invalid and database integrity can be at risk. Even when the data volume indeed qualifies as Big Data, there are always operations in the application which absolutely need ACID compliance from a mature system.
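    To make the ACID point concrete, here is a minimal T-SQL sketch of the kind of operation that needs atomicity. The table and account numbers are hypothetical, invented purely for illustration:
    -- Hypothetical funds transfer: both updates must succeed together or not at all.
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1; -- debit
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2; -- credit
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- Any failure rolls back both changes, keeping the data consistent.
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    END CATCH
    Without ACID guarantees, a failure between the debit and the credit could leave the accounts out of balance, which is exactly the risk described above.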
    The Mixed Bag I have often heard the argument that all the big social media sites nowadays have moved away from relational databases. Actually, this is not entirely true. While researching Big Data and relational databases, I have found that many of the popular social media sites use Big Data solutions along with relational databases. Many use relational databases to deliver results to end users at run time, and many still use a relational database as their major backbone. Here are a few examples: Facebook uses MySQL to display the timeline. (Reference Link) Twitter uses MySQL. (Reference Link) Tumblr uses sharded MySQL. (Reference Link) Wikipedia uses MySQL for data storage. (Reference Link) There are many more prominent organizations running large-scale applications that use relational databases along with various Big Data frameworks to satisfy their various business needs. Summary I believe that the RDBMS is like vanilla ice cream. Everybody loves it and everybody has it. NoSQL and other solutions are like chocolate ice cream or custom ice cream: there is a huge base which loves them and wants them, but not every ice cream maker can make them just right for everyone's taste. No matter how fancy an ice cream store is, plain vanilla ice cream is always available there. In the same way, there are always cases and situations in the Big Data story where the traditional relational database is part of the whole picture. In real-world scenarios there will always be a need for relational database concepts and their ideology. It is extremely important to accept the relational database as one of the key components of Big Data instead of treating it as a substandard technology. Ray of Hope – NewSQL In this module we discussed that there are places where we need ACID compliance from our Big Data application, and NoSQL will not support that out of the box. A new term has been coined for applications/tools which support most of the properties of the traditional RDBMS while also supporting Big Data infrastructure – NewSQL. Tomorrow In tomorrow's blog post we will discuss NewSQL. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • SQL SERVER – Finding Shortest Distance between Two Shapes using Spatial Data Classes – Ramsetu or Adam’s Bridge

    - by pinaldave
    Recently I was reading an excellent blog post by Lenni Lobel on spatial databases. He has written about the very interesting ShortestLineTo function in the spatial data classes. I really love this new feature for finding the shortest distance between two shapes in SQL Server. The following example is the same one Lenni discusses in his blog article.
    DECLARE @Shape1 geometry = 'POLYGON ((-20 -30, -3 -26, 14 -28, 20 -40, -20 -30))'
    DECLARE @Shape2 geometry = 'POLYGON ((-18 -20, 0 -10, 4 -12, 10 -20, 2 -22, -18 -20))'
    SELECT @Shape1
    UNION ALL
    SELECT @Shape2
    UNION ALL
    SELECT @Shape1.ShortestLineTo(@Shape2).STBuffer(.25)
    GO
    When you run this script, SQL Server finds the shortest distance between the two shapes and draws the line. We use STBuffer so we can see the connecting line clearly. Now let us modify one of the objects and see how the connecting shortest line behaves.
    DECLARE @Shape1 geometry = 'POLYGON ((-20 -30, -3 -30, 14 -28, 20 -40, -20 -30))'
    DECLARE @Shape2 geometry = 'POLYGON ((-18 -20, 0 -10, 4 -12, 10 -20, 2 -22, -18 -20))'
    SELECT @Shape1
    UNION ALL
    SELECT @Shape2
    UNION ALL
    SELECT @Shape1.ShortestLineTo(@Shape2).STBuffer(.25)
    GO
    Once again, let us modify the script and see how ShortestLineTo works.
    DECLARE @Shape1 geometry = 'POLYGON ((-20 -30, -3 -30, 14 -28, 20 -40, -20 -30))'
    DECLARE @Shape2 geometry = 'POLYGON ((-18 -20, 0 -10, 4 -12, 10 -20, 2 -18, -18 -20))'
    SELECT @Shape1
    UNION ALL
    SELECT @Shape2
    UNION ALL
    SELECT @Shape1.ShortestLineTo(@Shape2).STBuffer(.25)
    SELECT @Shape1.STDistance(@Shape2)
    GO
    You can see that as the objects change, the shortest lines move to the appropriate places. Even though this is a very small feature, I think it is really cool to know. While working on this example, I suddenly thought about the distance between Sri Lanka and India. The distance is very short; in fact, it is less than 30 km by sea. I decided to map India and Sri Lanka using the spatial data classes. To my surprise, the plotted shortest line is the same as Adam's Bridge, or Ramsetu. Adam's Bridge starts as a chain of shoals from the Dhanushkodi tip of India's Pamban Island and ends at Sri Lanka's Mannar Island. Geological evidence suggests that this bridge is a former land connection between India and Sri Lanka. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Spatial Database, SQL Spatial
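    As a follow-up to the India–Sri Lanka distance mentioned above, here is a rough sketch using the geography type. The coordinates below are my own approximations of points near Dhanushkodi and Mannar Island (they are not from the original post), so treat the result as indicative only:
    -- Approximate points near Dhanushkodi (India) and Mannar Island (Sri Lanka), WGS 84
    DECLARE @Dhanushkodi geography = geography::STGeomFromText('POINT(79.45 9.15)', 4326);
    DECLARE @MannarIsland geography = geography::STGeomFromText('POINT(79.72 9.10)', 4326);
    -- STDistance on geography returns meters; divide to get kilometers
    SELECT @Dhanushkodi.STDistance(@MannarIsland) / 1000.0 AS ApproxDistanceInKm;
    GO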

  • SQL SERVER – Importance of User Without Login – T-SQL Demo Script

    - by pinaldave
    Earlier I wrote a blog post about SQL SERVER – Importance of User Without Login, and my friend and SQL expert Vinod Kumar has written an excellent follow-up blog post about Contained Databases in SQL Server 2012. Many people have asked me to explain the same concept again, so here is a small demonstration. Let me show you how a user without login can help. Before we continue on this subject, I strongly recommend that you read my earlier blog post here. The following demo walks through this sequence: log in using the system admin account; create a user without login; check access; impersonate the user without login; check access; revert impersonation; give permission to the user without login; impersonate the user without login; check access; revert impersonation; clean up.
    USE [AdventureWorks2012]
    GO
    -- Step 1 : Login using the SA account
    -- Step 2 : Create a user without login
    CREATE USER [testguest] WITHOUT LOGIN WITH DEFAULT_SCHEMA=[dbo]
    GO
    -- Step 3 : Checking access to tables
    SELECT * FROM sys.tables;
    -- Step 4 : Changing the execution context
    EXECUTE AS USER = 'testguest';
    GO
    -- Step 5 : Checking access to tables
    SELECT * FROM sys.tables;
    GO
    -- Step 6 : Reverting permissions
    REVERT;
    -- Step 7 : Giving more permissions to the testguest user
    GRANT SELECT ON [dbo].[ErrorLog] TO [testguest];
    GRANT SELECT ON [dbo].[DatabaseLog] TO [testguest];
    GO
    -- Step 8 : Changing the execution context
    EXECUTE AS USER = 'testguest';
    GO
    -- Step 9 : Checking access to tables
    SELECT * FROM sys.tables;
    GO
    -- Step 10 : Reverting permissions
    REVERT;
    GO
    -- Step 11 : Clean up
    DROP USER [testguest]
    GO
    At step 9, notice how a user without login gains access to the data/objects we granted permission on. What does this example prove? Different accounts can carry different rights. Once the login is authenticated, it makes sense to impersonate a user that has only the permissions necessary for further operations. Again, this is a very basic, fundamental example. There are many more points to be discussed in future posts. Just do not take this blog post as a template and implement everything as-is. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology
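    While experimenting with impersonation like the demo above, a quick way to confirm which security context is currently active is the following small sketch (not part of the original demo); run it before and after the EXECUTE AS and REVERT statements:
    -- Shows the current database user and the login that originally connected
    SELECT USER_NAME() AS CurrentDatabaseUser, ORIGINAL_LOGIN() AS OriginalLogin;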

  • SQL SERVER – Monday Morning Puzzle – Query Returns Results Sometimes but Not Always

    - by pinaldave
    The amount of email I receive sometimes makes it impossible for me to answer every message. Nonetheless, I try to answer pretty much every email I receive. Quite often, however, I receive questions I have no answer to, either because the email is incomplete or because it is outside my domain expertise. Recently I received an email of only one or two lines, but it caught my attention. The question was a bit vague, but it made me think. The answer was not straightforward, so I kept writing it down as it came to me; even after writing the answer, I did not feel satisfied. Let me put this question in front of you and see if we can all come up with a comprehensive answer. Question: "I am a beginner with SQL Server. I have one query; sometimes it returns a result and sometimes it does not. Where should I start looking for a solution, and what kind of information should I send you so you can help me solve it? I have no clue, please guide me." Well, if you read the question, it is indeed incomplete and does not contain much information at all. I decided to help him, and here is the answer I started to compose. Answer: As there is not much information in the original question, I am not confident what will solve your problem. However, here are a few things you can look at to see if they solve your problem. Check the parameters passed to the query. Do the parameters change between executions? Check the connection string: is there some kind of logic around it? Do you have a non-deterministic component in your query logic? (In other words, is your result based on the current date/time or any other time-based function?) Are you facing a timeout while running your query? Is there any error in the error log? What is the business logic in your query? Do you have all the valid permissions on all the objects used in the query? Are permissions changing, or is the query accessing a different object in various executions? (Add your suggestions here.) Meanwhile, have you ever faced this situation? If yes, do share your experience in the comments area. I will send a copy of my book SQL Server Interview Questions and Answers to the author of the most interesting comment. The winner will be announced by next Monday. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
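    As a postscript to the checklist above, the non-deterministic case is easy to demonstrate with a hypothetical sketch; the table and columns below are invented for illustration. The same query can return rows at one execution and none at the next, purely because of the clock:
    -- Returns only orders placed in the last five minutes,
    -- so results vary from one execution to the next.
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate > DATEADD(MINUTE, -5, GETDATE());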

  • SQL SERVER – Iridium I/O – SQL Server Deduplication that Shrinks Databases and Improves Performance

    - by Pinal Dave
    Database performance is a common problem for SQL Server DBAs. It seems like we spend more time on performance than just about anything else. In many cases, we use scripts or tools that point out performance bottlenecks, but we don't have any way to fix them. For example, what do you do when you need to speed up a query that is already tuned as well as possible? Or what do you do when you aren't allowed to make changes to a database supporting a purchased application? Iridium I/O for SQL Server was originally built at Confio Software (makers of Ignite) because DBAs kept asking for a way to actually fix performance instead of just pointing out performance problems. The technology is certified by Microsoft and was so promising that it was spun out into a separate company that is now run by the Confio founder/CEO and technology management team. Iridium uses deduplication technology to both shrink databases and boost I/O performance. It is intriguing to see it work. It will deduplicate a live database as it is running transactions. You can watch the database get smaller while user queries are running. Iridium is a simple tool to use. After installing the software, you click an "Analyze" button, which will spend a minute or two on each database and estimate both your storage and performance savings. Next, you click an "Activate" button to turn on Iridium I/O for your selected databases. You don't need to reboot the operating system or restart the database during any part of the process. As part of my test, I also wanted to see if there would be an impact on my databases when Iridium was removed. The 'revert' process (bringing the files back to their SQL Server native format) was executed by a simple click of a button, and completed while the databases were available for normal processing. I was impressed, enjoyed playing with the software, and encourage all of you to try it out. Here is the link to the website to download Iridium for free. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • SQL SERVER – Replace a Column Name in Multiple Stored Procedure all together

    - by pinaldave
    I receive a lot of emails every day. I try to answer each and every email as well as comments on Facebook and Twitter. I prefer communication on social media, as this gives others the opportunity to read the questions and participate along with me. There is always some question which everyone likes to read and remember. Here is one of the questions I received in email. I believe many developers who are beginning with SQL Server will have the same question, so I decided to blog about it so everyone can read it and participate. "I am a beginner in SQL Server and I have a very interesting situation that needs your help. Because I am a beginner, I do not have access to the production server and work entirely on the development server. The project I am working on is in its infancy as well. In the project I had to create multiple tables, and every table had a few columns. Later on, I wrote stored procedures using those tables. During a code review my manager requested a change to one of the columns I used in a table; as per him, the naming convention was not accurate. Now, changing the column name in the table is not a big issue. I figured out that I can do it very quickly either using a T-SQL script or SQL Server Management Studio. The real problem is that I have used this column in 50+ stored procedures. This looks like a very mechanical task. I believe I can go and change it in each of the 50+ stored procedures, but is there a better solution? Someone suggested that I should just go ahead, find the text in the system tables, and update it there. Is that a safe solution? If not, what is your solution? In simple words: how do I replace a column name in multiple stored procedures efficiently and quickly? Please help me, keeping my experience level and non-production server in mind." Well, I found this question very interesting. Honestly, I would have preferred this question to be asked on my social media handles (Facebook and Twitter), as I am very active there, and quite often other experts answer such questions before I even get there. Anyway, I am answering the question on the blog so all of us can participate and come up with an appropriate answer. Here is my answer: "My friend, I do not advise touching the system tables. Please do not go that route. It can be dangerous and is not appropriate. The issue you face today is one I used to face early in my career, and I still face it often. There are two camps I have observed: people who see no value in the name of an object and name objects like obj1, obj2, etc., and people who carefully choose object names so that the name is self-explanatory and almost tells a story. I am not here to take any side in this blog post, so let me go straight to a quick solution for your problem. Note: The following should not be practiced directly on a production server. It should be properly tested on a development server and, once validated, pushed to your production server through your existing deployment practice. The answer here assumes you have regular stored procedures and are working on a development (non-production) server. Go to Server Node >> Databases >> DatabaseName >> Programmability >> Stored Procedures. Now make sure that Object Explorer Details is open (if not, open it by pressing F7). You will see the list of all the stored procedures on the right side.
    Select either all of them or the ones you believe are relevant to your query. Next, right-click the stored procedures >> select DROP and CREATE To >> and choose New Query Editor Window or Clipboard. Paste the complete script into a new window if you selected the Clipboard option. Now press Ctrl+H, which brings up the Find and Replace screen. On this screen, enter the column name to be replaced in the "Find What" box and the new column name in the "Replace With" box. Now execute the whole script. As we selected DROP and CREATE To, it will drop the old procedures and create the new ones. Another method follows the same procedure, but instead of DROP and CREATE, you manually replace the word CREATE with ALTER. The small advantage of doing this is that if, for any reason, an error prevents a new stored procedure from being created, your old stored procedure remains in the system as it is." Well, this was my answer to the question I received. Do you see any other workaround or solution? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology
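    Before doing the scripted replacement described above, it can help to list exactly which stored procedures reference the old column. Here is a minimal sketch using the documented sys.sql_modules catalog view; 'OldColumnName' is a placeholder, and note that a plain LIKE match can return false positives (for example, similarly named columns in other tables):
    -- Find stored procedures whose definition mentions the old column name
    SELECT OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
           OBJECT_NAME(m.object_id) AS ProcedureName
    FROM sys.sql_modules AS m
    INNER JOIN sys.objects AS o ON o.object_id = m.object_id
    WHERE o.type = 'P' -- stored procedures only
      AND m.definition LIKE '%OldColumnName%'; -- placeholder column name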

  • SQL SERVER – Create Primary Key with Specific Name when Creating Table

    - by pinaldave
    It is interesting how sometimes the documentation of simple concepts is not available online. I received an email from a reader asking how to create a primary key with a specific name while creating the table itself. He said he knows the method where he can create the table first and then apply a primary key with a specific name. The attached code was as follows:
    CREATE TABLE [dbo].[TestTable](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [FirstName] [varchar](100) NULL)
    GO
    ALTER TABLE [dbo].[TestTable]
    ADD CONSTRAINT [PK_TestTable] PRIMARY KEY CLUSTERED ([ID] ASC)
    GO
    He wanted to know if we can create the primary key as part of the table creation itself and give it a name at the same time. Though this may look very normal to experienced developers, it can still be confusing to many. Here is the quick code that does the same as the above in one single statement.
    CREATE TABLE [dbo].[TestTable](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [FirstName] [varchar](100) NULL
    CONSTRAINT [PK_TestTable] PRIMARY KEY CLUSTERED ([ID] ASC)
    )
    GO
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Constraint and Keys, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
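    If you want to confirm that the constraint in the example above received the intended name, a quick check against the sys.key_constraints catalog view works; this is a small sketch, not from the original post:
    -- List key constraints on the table, with their system- or user-assigned names
    SELECT name, type_desc
    FROM sys.key_constraints
    WHERE parent_object_id = OBJECT_ID('dbo.TestTable');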

  • SQL SERVER – Difference Between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE

    - by pinaldave
    Today we are going to discuss something very simple but quite commonly confused: two options of ALTER DATABASE. The first one is ALTER DATABASE … ROLLBACK IMMEDIATE, and the second one is WITH NO_WAIT. Many people think they are the same or are not sure of the difference between the two. Before we continue our explanation, let us go through the explanation given by Books Online. ROLLBACK AFTER integer [SECONDS] | ROLLBACK IMMEDIATE: Specifies whether to roll back after a specified number of seconds or immediately. NO_WAIT: Specifies that if the requested database state or option change cannot complete immediately without waiting for transactions to commit or roll back on their own, then the request will fail. If you have understood the difference by now, there is no need to proceed further. If you are still confused, continue with the rest of the post. There is one big difference between ROLLBACK and NO_WAIT. In the case of incomplete transactions, ALTER DATABASE … ROLLBACK IMMEDIATE rolls back those incomplete transactions immediately, whereas ALTER DATABASE … WITH NO_WAIT terminates and rolls back the ALTER DATABASE statement itself. I think it can be clearly explained with the help of the following images.
    Option 1: ALTER DATABASE … ROLLBACK
    Connection 1 – Simulating some operation using WAITFOR DELAY
    WAITFOR DELAY '1:00:00'
    Connection 2
    ALTER DATABASE TestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    Option 2: ALTER DATABASE … NO_WAIT
    Connection 1 – Simulating some operation using WAITFOR DELAY
    WAITFOR DELAY '1:00:00'
    Connection 2
    ALTER DATABASE TestDb SET SINGLE_USER WITH NO_WAIT;
    Let me know if this example was simple enough. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
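    One housekeeping note on the demo above: after trying either option, the TestDb database is left in SINGLE_USER mode, so you will likely want to return it to normal afterwards. A small cleanup sketch:
    -- Return the database to normal multi-user access after the experiment
    ALTER DATABASE TestDb SET MULTI_USER;
    GO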

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I'll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let's describe some disaster recovery scenarios where it's useful.
    About SQL Server disaster recovery Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one's immune. What you can do is take all the actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it's necessary to have a list of steps that will be executed when a disaster occurs and to test them before a disaster. This way, you'll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don't meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes most probably require changing and tweaking the recovery plan itself.
    What is a good SQL Server disaster recovery plan? A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until the next Friday, when you create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans.
    Why are transaction logs important? Transaction log backups contain the transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs enable you to repair a SQL Server database and bring it to its state before the disaster. Be aware that even with regular backups, there will be some data missing: the transactions made between the last transaction log backup and the time of the disaster. In some situations, it's not necessary to re-create the database from its last backup to repair it. The database might still be online, and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore-to-a-point-in-time feature is available in SQL Server, but for large databases it is very time-consuming, as SQL Server first restores a full database backup and then restores the transaction log backups, one after another, up to the recovery point.
    During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it's important that you haven't manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them.
    How to read a SQL Server transaction log? SQL Server doesn't provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge and return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it's necessary to match them to values in the MDF file. When you have finally read the transactions executed, you still have to create a script for them.
    How to easily repair a SQL database? The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files but also automatically create scripts for the transactions it reads. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost but the LDF file is available. First, restore the last full database backup on SQL Server using SQL Server Management Studio. I'll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon to the right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup used to restore the database. In case you don't have transaction log backups but the LDF file wasn't lost during the SQL Server disaster, add it using Add. To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That's why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transaction, operation type (e.g. to read only data inserts), table name, the SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I'll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored and the database will not be properly repaired; the data repair for modified tables will fail. In the Tables tab, I'll make sure all tables are selected. I will uncheck the Show operations on dropped tables option to reduce the number of transactions. Click Next. ApexSQL Log offers three options.
    Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and create a redo script by clicking the Create redo script icon in the menu. For a large number of transactions, and in a critical situation when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be scripted directly into a redo file without being shown in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command-line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but is quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script, using an integrated development environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database in the ApexSQL Solution center. There are solutions for various situations where data needs to be recovered or restored, or transactions rolled back. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
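    For the curious, here is a minimal sketch of the undocumented fn_dblog function mentioned earlier. The columns referenced do exist in its output, but since the function is undocumented, its output can change between SQL Server versions and it should not be relied on in production:
    -- Read the active portion of the current database's transaction log
    SELECT TOP (10)
           [Current LSN],
           Operation,
           [Transaction ID],
           AllocUnitName
    FROM fn_dblog(NULL, NULL); -- NULL, NULL = no start/end LSN filter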

  • SQL SERVER – Copy Statistics from One Server to Another Server

    - by pinaldave
    I was recently working on a performance tuning project in Dubai (yes, I was able to see the tallest tower from the window of my workplace). I had a very interesting learning experience there. There was a situation where we wanted to receive the schema of the original database from a certain client. However, the client was not able to provide us any data due to privacy concerns. The schema alone was not enough, because without access to the underlying data it was difficult to judge the queries. For example, without any real data, all the queries ran in 0 (zero) milliseconds and all used nested loops, as there was no data to be returned. Even though we had CPU-offending queries, they were not doing anything without data in the tables. This was a real challenge: I did not have access to the production server's data, and I could not recreate the production scenarios without it. I was stuck, but Ruben from Solid Quality Mentors, Spain taught me a new trick. He suggested that when the table schema is scripted, we can script the statistics along with it. Here is how we did that: once the statistics are created along with the schema, even without data in the tables, the queries behave the way they do on the production server. This way, without access to the data, we were able to recreate the same scenario as the production server on the development server. When you look at the generated script, you will find that the statistics were scripted along with the schema; you will find the statistics included in the WITH STATS_STREAM clause. What a simple and effective script. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: SQL Statistics, Statistics
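    The generated script itself appeared as an image in the original post, so as a hedged illustration only: the statistics stream behind such a script can be inspected with the STATS_STREAM option of DBCC SHOW_STATISTICS, which, like the clause itself, is undocumented and subject to change. The table and statistics names below are placeholders:
    -- Returns the binary statistics stream for a given statistics object (undocumented)
    DBCC SHOW_STATISTICS ('dbo.TableName', 'PK_TableName') WITH STATS_STREAM;
    -- The returned hex stream is what a scripted UPDATE STATISTICS statement applies, e.g.:
    -- UPDATE STATISTICS dbo.TableName (PK_TableName) WITH STATS_STREAM = <stream from above>;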

  • SQL SERVER – Right Aligning Numerics in SQL Server Management Studio (SSMS)

    - by pinaldave
    SQL Server Management Studio is my favorite tool, and the comfort it provides to users is sometimes amazing. Recently I was retrieving numeric data in SSMS and found it very difficult to read, because the values were all left-aligned. Please pay attention to the following image; you will notice that it is not easy to read the digits, as we are used to reading numbers right-aligned. I immediately thought that before I go looking for any other tricks, I should check the query properties. I opened the query properties and found the following option. I checked the option Right align numeric values, and it just worked fine. Do you have any similar tricks that you practice often? I also prefer to include column headers in the result set, as that gives me the proper perspective on which columns I have selected. Sometimes little tips like this help a lot with productivity. I encourage you to share your tips; I will publish them with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

  • SQL SERVER – Function to Round Up Time to Nearest Minutes Interval

    - by pinaldave
    Though I have written more than 2,300 blog posts, I always find things I have not covered earlier on this blog. Recently I was asked if I had written a function that rounds time up or down based on the minute interval passed to it. Well, not earlier, but here it is today. Here is a very simple example of how one can do the same.
    CREATE FUNCTION [dbo].[RoundTime] (@Time DATETIME, @RoundToMin INT)
    RETURNS DATETIME
    AS
    BEGIN
    RETURN ROUND(CAST(CAST(CONVERT(VARCHAR,@Time,121) AS DATETIME) AS FLOAT) * (1440/@RoundToMin),0)/(1440/@RoundToMin)
    END
    GO
    The above function takes two values: 1) the time which needs to be rounded up or down, and 2) the interval in minutes (the value passed here should be between 1 and 60; if the value is outside that range, the results will be incorrect). The function can be enhanced by adding functionality such as a) validation of the parameters passed and b) accepting values like quarter hour, half hour, etc. Here are a few sample examples.
    SELECT dbo.RoundTime('17:29',30)
    SELECT dbo.RoundTime(GETDATE(),5)
    SELECT dbo.RoundTime('2012-11-02 07:27:07.000',15)
    When you run the above code, it returns the following results. Well, do you have any other way to achieve the same result? If yes, do share it here and I will be glad to share it on the blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
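    Taking up the invitation above, here is one alternative sketch that avoids the VARCHAR round-trip by doing plain date arithmetic. It makes the same assumption that the interval is between 1 and 60 minutes, and it ignores milliseconds:
    -- Alternative rounding function using DATEDIFF/DATEADD arithmetic
    CREATE FUNCTION dbo.RoundTime2 (@Time DATETIME, @RoundToMin INT)
    RETURNS DATETIME
    AS
    BEGIN
        -- Minutes elapsed since the datetime epoch (1900-01-01), including fractional seconds
        DECLARE @TotalMinutes FLOAT = DATEDIFF(MINUTE, 0, @Time) + DATEPART(SECOND, @Time) / 60.0;
        -- Round to the nearest multiple of @RoundToMin and convert back to DATETIME
        RETURN DATEADD(MINUTE, ROUND(@TotalMinutes / @RoundToMin, 0) * @RoundToMin, 0);
    END
    GO
    -- Example: SELECT dbo.RoundTime2('2012-11-02 07:27:07.000', 15) returns 2012-11-02 07:30:00.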

  • SQLAuthority News – Whitepaper Download – Using Star Join and Few-Outer-Row Optimizations to Improve Data Warehousing Queries

    - by pinaldave
    Database sizes are growing every day. Many organizations nowadays have more than a terabyte of data in their systems, and performance is always part of the conversation. Microsoft is paying close attention to this and focusing on improving performance for data warehousing; it has recently released a whitepaper on data warehousing performance tuning. Here is the abstract of the whitepaper from the official site: In this white paper we discuss two of the new features introduced in SQL Server 2008, Star Join and Few-Outer-Row optimizations. These two features are in SQL Server 2008 R2 as well. We test the performance of SQL Server 2008 on a set of complex data warehouse queries designed to highlight the effect of these two features and observed a significant performance gain over SQL Server 2005 (without these two features). The results observed also apply to SQL Server 2008 R2. On average, about 75 percent of the query execution time has been reduced, compared to SQL Server 2005. We also include data that shows a reduction in the number of rows processed and improved balance in parallel queries, both of which highlight the important role the Star Join and Few-Outer-Row features played. I encourage all of those interested in data warehousing to read it and see if they can learn the tricks. Using Star Join and Few-Outer-Row Optimizations to Improve Data Warehousing Queries Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

  • Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the cloud in the Big Data story. In this article we will understand the role of operational databases in supporting the Big Data story. Even though we keep talking about Big Data architecture, it is extremely crucial to understand that a Big Data system can't exist in isolation. Many needs of the business can only be fulfilled with the help of operational databases. Just having a system which can analyze big data may not solve every single data problem.
    Real World Example Think about it this way: you are using Facebook and you have just updated your information about your current relationship status. In the next few seconds the same information is also reflected in the timeline of your partner as well as a few of your immediate friends. After a while you will notice that the same information is now also available to your remote friends. Later on, when someone searches for all the relationship changes among their friends, your change of relationship status will also show up in the same list. Now here is the question: do you think Big Data architecture is doing every single one of these changes? Do you think that the immediate reflection of your relationship change for your family members is also because of the technology used in Big Data? Actually, the answer is that Facebook uses MySQL for the various updates in the timeline as well as the various events we trigger on their homepage. It is really difficult to part with operational databases in any real-world business. Now we will see a few examples of operational databases: Relational Databases (this blog post) NoSQL Databases (this blog post) Key-Value Pair Databases (tomorrow's post) Document Databases (tomorrow's post) Columnar Databases (the day after's post) Graph Databases (the day after's post) Spatial Databases (the day after's post)
    Relational Databases We discussed the RDBMS role in the Big Data story in detail earlier, so we will not cover it extensively here. Relational databases are pretty much everywhere in most businesses which have been around for many years. The importance and existence of the relational database will remain as long as there is meaningful structured data around. There are many different kinds of relational databases, for example Oracle, SQL Server, MySQL and many others. If you are looking for an open source and widely accepted database, I suggest trying MySQL, as it has been very popular in the last few years. I also suggest you try out PostgreSQL. Besides many other essential qualities, PostgreSQL has a very interesting licensing policy. The PostgreSQL license allows modification and distribution of the application in open or closed (source) form. One can make any modifications and keep them private, or contribute them to the community. I believe this one quality makes it much more interesting to use, and it will play a very important role in the future.
    Nonrelational Databases (NoSQL) We have also covered nonrelational databases in earlier blog posts. NoSQL actually stands for Not Only SQL. There are plenty of NoSQL databases out in the market, and selecting the right one is always very challenging. Here are a few of the properties which are essential to consider when selecting the right NoSQL database for operational purposes:
    Data and Query Model Persistence of Data and Design Eventual Consistency Scalability Though all of the above properties are interesting to have in any NoSQL database, the one that most attracts me is eventual consistency.
    Eventual Consistency An RDBMS uses ACID (Atomicity, Consistency, Isolation, Durability) as the key mechanism for ensuring data consistency, whereas a nonrelational DBMS uses BASE for the same purpose. BASE stands for Basically Available, Soft state and Eventual consistency. Eventual consistency is widely deployed in distributed systems. It is a consistency model used in distributed computing, where the unexpected is expected. In a large distributed system there are always various nodes joining and various nodes being removed, as such systems often run on commodity servers. This happens either intentionally or accidentally. Even when one or more nodes are down, the entire system is expected to keep functioning normally. Applications should be able to perform various updates as well as retrievals of the data successfully without any issue. Additionally, this also means that the system is expected to eventually return the same updated data from all the functioning nodes. Irrespective of when a node joins the system, if it is marked to hold some data, it should eventually contain the same updated data. As per Wikipedia, eventual consistency is a consistency model used in distributed computing that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. In other words, informally, if no additional updates are made to a given data item, all reads of that item will eventually return the same value.
    Tomorrow In tomorrow's blog post we will discuss various other operational databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • SQL SERVER – Merge Operations – Insert, Update, Delete in Single Execution

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by Jorge Segarra (aka SQLChicken). I have been very actively using the MERGE operation in my development work. However, I have found from my consultancy work and friends that this amazing operation is not utilized most of the time. Here is my attempt to bring the usefulness of the MERGE operation to the surface one more time. MERGE is a feature that provides an efficient way to perform multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; at present, using the MERGE statement, we can include the logic of such data changes in one statement that checks when the data is matched and simply updates it and, when the data is unmatched, inserts it. One of the most important advantages of the MERGE statement is that all the data is read and processed only once. In earlier versions, three different statements had to be written to process three different activities (INSERT, UPDATE or DELETE); with the MERGE statement, all the update activities can be done in one pass of the database table. I have written about the MERGE operation earlier in my blog post SQL SERVER – 2008 – Introduction to Merge Statement – One Statement for INSERT, UPDATE, DELETE. One reader asked how we know that this operator does everything in a single pass and is not invoked multiple times. Let us run the same example I used earlier; I am listing it here again for convenience.
    -- Let's create StudentDetails and StudentTotalMarks and insert some records.
    USE tempdb
    GO
    CREATE TABLE StudentDetails
    (
    StudentID INTEGER PRIMARY KEY,
    StudentName VARCHAR(15)
    )
    GO
    INSERT INTO StudentDetails VALUES(1,'SMITH')
    INSERT INTO StudentDetails VALUES(2,'ALLEN')
    INSERT INTO StudentDetails VALUES(3,'JONES')
    INSERT INTO StudentDetails VALUES(4,'MARTIN')
    INSERT INTO StudentDetails VALUES(5,'JAMES')
    GO
    CREATE TABLE StudentTotalMarks
    (
    StudentID INTEGER REFERENCES StudentDetails,
    StudentMarks INTEGER
    )
    GO
    INSERT INTO StudentTotalMarks VALUES(1,230)
    INSERT INTO StudentTotalMarks VALUES(2,255)
    INSERT INTO StudentTotalMarks VALUES(3,200)
    GO
    -- Select from tables
    SELECT * FROM StudentDetails
    GO
    SELECT * FROM StudentTotalMarks
    GO
    -- Merge statement
    MERGE StudentTotalMarks AS stm
    USING (SELECT StudentID,StudentName FROM StudentDetails) AS sd
    ON stm.StudentID = sd.StudentID
    WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
    WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25
    WHEN NOT MATCHED THEN INSERT(StudentID,StudentMarks) VALUES(sd.StudentID,25);
    GO
    -- Select from tables
    SELECT * FROM StudentDetails
    GO
    SELECT * FROM StudentTotalMarks
    GO
    -- Clean up (drop the referencing table first because of the foreign key)
    DROP TABLE StudentTotalMarks
    GO
    DROP TABLE StudentDetails
    GO
    The MERGE performs very well, and the following result is obtained. Let us check the execution plan for the merge operator. You can click on the following image to enlarge it. Let us evaluate the execution plan for the Table Merge operator only. We can clearly see that the Number of Executions property shows the value 1, which makes it quite clear that in a single pass, the MERGE operation completes the INSERT, UPDATE and DELETE operations. I strongly suggest that you use this operation, if possible, in your development. I have seen this operation implemented in many data warehousing applications.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Merge

  • SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index

    - by pinaldave
    This blog is in response to ongoing communication with the reader who earlier asked the question in SQL SERVER – Disable Clustered Index and Data Insert. The same reader has asked me about the difference between ALTER INDEX ALL REBUILD and ALTER INDEX REBUILD when a disabled clustered index is involved. Instead of writing out a big theory, we will go over the demo right away. Here are the steps we intend to follow: 1) create a clustered and a nonclustered index, 2) disable the clustered and nonclustered indexes, 3) enable a) all indexes, b) only the clustered index.
    USE tempdb
    GO
    -- Drop table if it exists
    IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[TableName]') AND type IN (N'U'))
    DROP TABLE [dbo].[TableName]
    GO
    -- Create table
    CREATE TABLE [dbo].[TableName](
    [ID] [int] NOT NULL,
    [FirstCol] [varchar](50) NULL
    )
    GO
    -- Create clustered index
    ALTER TABLE [TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
    GO
    -- Create nonclustered index
    CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC)
    GO
    -- Check that all the indexes are enabled
    SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
    FROM sys.indexes
    WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
    GO
    Now let us disable both indexes.
    -- Disable nonclustered index
    ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
    GO
    -- Disable clustered index
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
    GO
    -- Check that all the indexes are disabled
    SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
    FROM sys.indexes
    WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
    GO
    Next, let us rebuild all the indexes and see the output.
    -- Test 1: ALTER INDEX ALL REBUILD
    -- Rebuilding should work fine
    ALTER INDEX ALL ON [dbo].[TableName] REBUILD
    GO
    -- Check that all the indexes are enabled
    SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
    FROM sys.indexes
    WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
    GO
    Now, once again disable the indexes for the second test.
    -- Disable nonclustered index
    ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
    GO
    -- Disable clustered index
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
    GO
    -- Check that all the indexes are disabled
    SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
    FROM sys.indexes
    WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
    GO
    Next, let us rebuild only the clustered index and see the output for all the indexes.
    -- Test 2: ALTER INDEX REBUILD
    -- Rebuilding should work fine
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
    GO
    -- Check that only the clustered index is enabled
    SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
    FROM sys.indexes
    WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
    GO
    Let us do the final clean up.
    -- Clean up
    DROP TABLE [TableName]
    GO
    From the example it is very clear that if you rebuild only the clustered index while the nonclustered index is disabled, the nonclustered index remains disabled. Do let me know if the idea is clear. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQL SERVER – Validating Spatial Object as NULL using IsNULL

    - by pinaldave
    Follow-up questions are the most fun part of writing a blog post. Earlier I wrote about SQL SERVER – Validating Spatial Object with IsValidDetailed Function, and today I received a follow-up question on the same subject. The question was mainly about how NULL is handled by the spatial functions. Well, NULL is NULL. It is very easy to work with. There are two different ways to validate whether the passed-in value is NULL or not.
    1) Using the IsNull Function The IsNull function checks whether the object is null. If the object is not null, it returns the value 0; if the object is NULL, it returns NULL.
    DECLARE @p GEOMETRY = 'Polygon((2 2, 3 3, 4 4, 5 5, 6 6, 2 2))'
    SELECT @p.ISNULL ObjIsNull
    GO
    DECLARE @p GEOMETRY = NULL
    SELECT @p.ISNULL ObjIsNull
    GO
    2) Using the IsValidDetailed Function The IsValidDetailed function validates whether the object is valid. If the object is valid, it returns '24400: Valid'; if the object is not valid, it returns a message with the error number. In case the object is NULL, it returns NULL.
    DECLARE @p GEOMETRY = 'Polygon((2 2, 3 3, 4 4, 5 5, 6 6, 2 2))'
    SELECT @p.IsValidDetailed() IsValid
    GO
    DECLARE @p GEOMETRY = NULL
    SELECT @p.IsValidDetailed() IsValid
    GO
    When to use what? Now you can see that there are two different ways to validate NULL values. I personally have no preference for one over the other. However, there is one clear difference between them. In the case of the IsValidDetailed function, the return value is nvarchar(max), and it is not always convenient to compare an nvarchar(max) value. The IsNull function returns the bit value 0 when the object is not null, which makes it easy to determine whether the object is null. Additionally, the IsNull function does not check whether the object is valid; it returns 0 whenever the object is not NULL. So even though either function can be used in place of the other, each has a very specific use case. Use the one which fits your business case. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Spatial Database, SQL Spatial

  • SQL SERVER – Solution – Puzzle – SELECT * vs SELECT COUNT(*)

    - by pinaldave
    Earlier I published the puzzle Why SELECT * throws an error but SELECT COUNT(*) does not. This question has received many interesting comments. Let us go over a few of the answers which are valid. Before I start, let me acknowledge Rob Farley, who not only answered correctly first but also started an interesting conversation in the same thread. The usual question is: what is the right answer? I would like to point to the official Microsoft Connect items which discuss the same.
    RGarvao: https://connect.microsoft.com/SQLServer/feedback/details/671475/select-test-where-exists-select
    tiberiu utan: http://connect.microsoft.com/SQLServer/feedback/details/338532/count-returns-a-value-1
    Rob Farley: count(*) is about counting rows, not a particular column. It doesn’t even look to see what columns are available; it’ll just count the rows, which in the case of a missing FROM clause is 1. “select *” is designed to return columns, and therefore barfs if there are none available. Even more odd is this one: select ‘blah’ where exists (select *). You might be surprised at the results…
    Koushik: The engine performs a “Constant Scan” for COUNT(*), whereas in the case of SELECT * the engine tries to perform Index/Cluster/Table scans.
    amikolaj: When you query ‘select * from sometable’, SQL replaces * with the current schema of that table. Without a source for the schema, SQL throws an error. So when you query ‘select count(*)’, you are counting the one row; * is just a constant to SQL here. Check out the execution plan. As the description states – ‘Scan an internal table of constants.’ You could do select COUNT(‘my name is adam and this is my answer’) and get the same answer.
    Netra Acharya: SELECT * – here, * represents all columns from a table, so it always looks for a table (as we know, there should be a FROM clause before specifying the table name), and it throws an error whenever this condition is not satisfied. SELECT COUNT(*) – here, COUNT is a function, so it is not mandatory to provide a table. Check this out: DECLARE @cnt INT SET @cnt = COUNT(*) SELECT @cnt SET @cnt = COUNT(‘x’) SELECT @cnt
    Naveen: Select 1 / Select ‘*’ will return 1/* as expected. Select Count(1)/Count(*) will return the count of the result set of the select statement, and Count(1)/Count(*) will have one 1/* for each row in that result set. The result set of Select 1 or Select ‘*’ contains only one row, so the count is 1. Whereas “Select *” is a syntax which expects a table or an equivalent to a table (table functions, etc.); it is effectively a compilation error for that query.
    Ramesh: Count is an aggregate function and it expects rows (a list of records) for a specified single column, or whole rows for *. So when we use ‘select *’ it definitely gives an error, because ‘*’ is meant to stand for all the fields, but there is no table, and without a table it can only raise an error. In the case of ‘Select Count(*)’, the ‘*’ acts as a constant record for the count function, so you will get the result ’1'. Try using Select COUNT(‘RAMESH’) and imagine the error ‘Must specify table to select from.’ in place of ‘RAMESH’. Pinal: if I am wrong, then please clarify this.
    Sachin Nandanwar: Any aggregate function expects a constant or a column name as an expression. Do not be confused by * in an aggregate function. The aggregate function does not treat it as a column name or a set of column names but as a constant value, since * is a keyword in SQL.
    You can use any value in place of * in the COUNT function; for example, Select COUNT(5) will return 1. The error resulting from select * is obvious: it expects an object from which it can extract a result set. To see the whole behavior for yourself, try the sketch below. I sincerely thank you all for the wonderful conversation; I personally enjoyed it and I am sure all of you feel the same. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
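    Here is that sketch – a minimal, runnable illustration of the puzzle's behavior, using nothing beyond standard T-SQL:

        SELECT COUNT(*);                        -- returns 1: COUNT counts the single row of the internal constant table
        -- SELECT *;                            -- fails: Msg 263 - Must specify table to select from.
        SELECT 'blah' WHERE EXISTS (SELECT *);  -- returns 'blah': the EXISTS subquery finds its one constant row

    The last statement is Rob Farley's "odd" example: inside EXISTS, SELECT * is permitted even without a FROM clause, the subquery produces its single constant row, and EXISTS evaluates to true.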

    Read the article

  • SQL SERVER – A Puzzle – Fun with NULL – Fix Error 8117

    - by pinaldave
    During my 8 years of career, I have been involved in many interviews. Quite often, I act as the interviewer. If I am the interviewer, I ask many questions – from easy questions to difficult ones. When I am the interviewee, I frequently get an opportunity to ask the interviewer some questions back. Regardless of my capacity in the interview, I always make it a point to ask the interviewer at least one question. What is NULL? It’s always fun to ask this question during interviews, because in every interview I get a different answer. NULL is often confused with false, absence of value, or infinite value. Honestly, NULL is a very interesting subject, as its behavior depends on server settings. There are a few properties of NULL that are universal, but knowledge of these properties is far from universal. Let us run this simple puzzle. Run the following T-SQL script: SELECT SUM(data) FROM (SELECT NULL AS data) t It will return the following error: Msg 8117, Level 16, State 1, Line 1 Operand data type NULL is invalid for sum operator. Now the error makes it very clear that NULL is invalid for the SUM operator. I have shown this simple query to many folks I came across and asked them if they could modify the subquery so the query returns NULL as the result instead of an error. Even though this is a very simple looking query, so far I have got the correct answer from only 10% of the people I have asked. It was common for me to receive this kind of answer – convert the NULL to some data type. However, doing so usually returns the value as 0 or the integer they passed. SELECT SUM(data) FROM (SELECT ISNULL(NULL,0) AS data) t I also see many people modifying the outer query to get the desired NULL result, but that is not allowed in this simple puzzle. This small puzzle made me wonder how many people have a clear understanding of NULL. Well, here is the answer to my simple puzzle. Just CAST NULL AS INT and it will return the final result as NULL: SELECT SUM(data) FROM (SELECT CAST(NULL AS INT) AS data) t Now that you know the answer, don’t you think it was very simple indeed? This blog post is especially dedicated to my friend Madhivanan, who has written an excellent blog post about NULL. I am confident that after reading the blog post from Madhivanan, you will have no confusion regarding NULL in the future. Read: NULL, NULL, NULL and nothing but NULL. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
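    As a quick addendum to this puzzle – a small sketch showing that once the NULL is given a type, SUM simply ignores it (standard T-SQL, nothing assumed beyond the puzzle's own query):

        SELECT SUM(data) FROM (
            SELECT CAST(NULL AS INT) AS data
            UNION ALL
            SELECT 5
        ) t
        -- Returns 5, along with the usual ANSI warning:
        -- "Null value is eliminated by an aggregate or other SET operation."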

    Read the article

  • SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

    - by Pinal Dave
    The month of June until mid-July was full of sports fever. First it was Wimbledon tennis, and then soccer fever was all over. There are huge numbers of fan followers, and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention the India tour of England in the latter part of July. As these events unfold toward the finals, there are a number of ways the statisticians can slice and dice the numbers. Taking a cue from soccer, I can surely say there is team performance against another team, and then there is how an individual member fares against a particular opponent. Such statistics give us a fair idea of how teams have fared against each other in the past or the recent past – head-to-head stats during the World Cup and during other neutral-venue games. All these statistics are just pointers. In reality, they don’t reflect the calibre of the current team, because the individuals who performed in each of these games are totally different (a typical example being the Brazil vs Germany semi-final match in FIFA 2014). So at times these numbers are misleading. It is worth investigating and getting the next level of information. Similar to these statistics, SQL Server Management Studio is also equipped with a number of reports, like a) the Object Execution Statistics report and b) the Batch Execution Statistics report. As discussed in the example, the team scorecard is like the Batch Execution Statistics, and individual stats are like the Object Level Statistics. The analogy can be taken only this far; trust me, there is no correlation between SQL Server functioning and playing sports – it is like I think about diet all the time except while I am eating. Performance – Batch Execution Statistics Let us view the first report, which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics. Most of the values displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle). This report contains 3 distinctive sections, as outlined below.   Section 1: This is a graphical bar-graph representation of Average CPU Time, Average Logical Reads and Average Logical Writes for individual batches. The batch numbers are indicative, and the details of each individual batch are available in section 3 (detailed below). Section 2: This represents a pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%). This graphical representation tells us which batch consumed the highest CPU and IO since the server started, provided the plan is available in the cache. Section 3: This is the section where we can find the SQL statements associated with each of the batch numbers. It also gives us the details of Average CPU, Average Logical Reads and Average Logical Writes in the system for the given batch, with object details. Expanding the rows, I also get the # Executions and # Plans Generated for each of the queries. Performance – Object Execution Statistics The second report worth a look is Object Execution Statistics. This is a similar report to the previous one, but turned on its head by SQL Server objects. The report has 3 areas to look at, as above. Section 1 gives the Average CPU and Average IO bar charts for specific objects. Section 2 is a graphical representation of Total CPU by objects and Total Logical IO by objects. The final section details the various objects with the Avg. CPU, IO and other details, which are self-explanatory.
    At a high level, both reports are based on queries over two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text) and build their values from calculations on columns in them: SELECT * FROM sys.dm_exec_query_stats s1 CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2 WHERE s2.objectid IS NOT NULL AND DB_NAME(s2.dbid) IS NOT NULL ORDER BY s1.sql_handle; A rough do-it-yourself version of the averages these reports compute is sketched below. These are among the simplest forms of reports, and in future blogs we will look at more complex ones. I truly hope that these reports can give DBAs and developers a hint about possible performance tuning areas. As a closing point, I must emphasize that all the above reports pick up data from the plan cache. If a particular query consumed a lot of resources earlier, but its plan is not available in the cache, none of the above reports will show that bad query. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
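    Here is that sketch – a hedged approximation of the per-query averages the reports display. The column choices are mine, not the reports' exact internal query; total_worker_time is reported in microseconds, and execution_count is at least 1 for any cached plan, so the division is safe:

        SELECT TOP (10)
               SUBSTRING(s2.text, 1, 100)                     AS query_snippet,
               s1.execution_count,
               s1.total_worker_time    / s1.execution_count   AS avg_cpu_microseconds,
               s1.total_logical_reads  / s1.execution_count   AS avg_logical_reads,
               s1.total_logical_writes / s1.execution_count   AS avg_logical_writes
        FROM sys.dm_exec_query_stats s1
        CROSS APPLY sys.dm_exec_sql_text(s1.sql_handle) AS s2
        ORDER BY s1.total_worker_time DESC;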

    Read the article

  • SQL SERVER – Solution of Puzzle – Swap Value of Column Without Case Statement

    - by pinaldave
    Earlier this week I asked a question about how to swap values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself. I had requested the help of the community to come up with alternate solutions, and honestly I am stunned and amazed by the qualified entries. I will not be able to cover every single solution posted as a comment; however, I would like to cover a few interesting entries. I am selecting 5 solutions which are different (not necessarily the most optimal or the best – just different and interesting). Just for clarity, I am including the original problem statement here. USE tempdb GO CREATE TABLE SimpleTable (ID INT, Gender VARCHAR(10)) GO INSERT INTO SimpleTable (ID, Gender) SELECT 1, 'female' UNION ALL SELECT 2, 'male' UNION ALL SELECT 3, 'male' GO SELECT * FROM SimpleTable GO -- Insert Your Solutions here -- Swap value of Column Gender SELECT * FROM SimpleTable GO DROP TABLE SimpleTable GO Here are the five most interesting and different solutions I have received. Solution by Roji P Thomas UPDATE S SET S.Gender = D.Gender FROM SimpleTable S INNER JOIN SimpleTable D ON S.Gender != D.Gender I really loved this solution, as it is very simple and drives the point home – elegant, and it will work for pretty much any pair of values (not necessarily restricted to the options in the original question, ‘male’ or ‘female’). Solution by Aneel CREATE TABLE #temp(id INT, datacolumn CHAR(4)) INSERT INTO #temp VALUES(1,'gent'),(2,'lady'),(3,'lady') DECLARE @value1 CHAR(4), @value2 CHAR(4) SET @value1 = 'lady' SET @value2 = 'gent' UPDATE #temp SET datacolumn = REPLACE(@value1 + @value2,datacolumn,'') Aneel has a very interesting solution where he concatenates both values and uses REPLACE to strip out the current value, leaving the other one. I personally liked the creativity of this solution. Solution by SIJIN KUMAR V P UPDATE SimpleTable SET Gender = RIGHT(('fe'+Gender), DIFFERENCE((Gender),SOUNDEX(Gender))*2) Sijin has amazed me with the DIFFERENCE and SOUNDEX functions. I had never imagined that these two functions could solve the problem. Hats off to you, Sijin. Solution by Nikhildas UPDATE St SET St.Gender = t.Gender FROM SimpleTable St CROSS APPLY (SELECT DISTINCT gender FROM SimpleTable WHERE St.Gender != Gender) t I was expecting that someone would come up with this solution using CROSS APPLY. This is indeed very neat and for sure an interesting exercise. If you do not know how CROSS APPLY works, this is the time to learn – see the sketch at the end of this entry. Solution by mistermagooo UPDATE SimpleTable SET Gender=X.NewGender FROM (VALUES('male','female'),('female','male')) AS X(OldGender,NewGender) WHERE SimpleTable.Gender=X.OldGender As per the author, this is a slow solution, but I love how the syntax is laid out and used here. I will say this is the most beautifully written solution (not necessarily the best). Bonus: Solution by Madhivanan Somehow I was confident that Madhi – a SQL Server MVP – would come up with something I would be compelled to read. He has written a complete blog post on this subject, and I encourage all of you to go ahead and read it. Personally, I wanted to list every single comment here; some are so good that I am just amazed by the creativity. I will write a follow-up to this blog post in the future. However, here is the challenge for you. Challenge: Go over the 50+ solutions listed for this simple problem. Here are my two asks for you.
    1) Pick your best solution and list it here in the comments. This exercise will for sure teach us one or two things. 2) Write your own solution which is not yet covered in the already listed 50 solutions. I am confident that there is no end to creativity. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
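    Here is that sketch of the CROSS APPLY mechanics – a minimal illustration against the puzzle's own SimpleTable, assuming exactly the two values from the puzzle are present. For each outer row, the applied subquery returns the one DISTINCT gender that differs from that row's value:

        SELECT St.ID, St.Gender AS OldGender, t.Gender AS NewGender
        FROM SimpleTable St
        CROSS APPLY (
            SELECT DISTINCT Gender
            FROM SimpleTable
            WHERE Gender != St.Gender
        ) t
        -- Each row pairs with the opposite value; Nikhildas's UPDATE simply writes NewGender back.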

    Read the article

  • SQL SERVER – Updating Data in A Columnstore Index

    - by pinaldave
    So far I have written two articles on Columnstore Indexes, and both of them drew a very interested readership. In fact, just recently I got a query on my previous article on the Columnstore Index. Read the following two articles to get familiar with the Columnstore Index; they provide the background for the question a reader asked: SQL SERVER – Fundamentals of Columnstore Index SQL SERVER – How to Ignore Columnstore Index Usage in Query Here is the reader’s question: “When I tried to update my table after creating the Columnstore index, it gives me an error. What should I do?” When a Columnstore index is created on a table, the table becomes a read-only table and does not allow any insert/update/delete. The basic understanding is that a Columnstore Index is created on a table that is very large and holds lots of data. If a table is small enough, there is no need to create a Columnstore index; a regular index should serve it well. The reason the Columnstore index was needed in the first place is that the table was so big that retrieving the data was taking a really, really long time. Now, updating such a huge table is always a challenge in itself. If a Columnstore Index is created on the table and the table needs to be updated, you should know that there are various ways to do it. The easiest way is to disable the index, update the table, and rebuild the index. Consider the following code: USE AdventureWorks GO -- Create New Table CREATE TABLE [dbo].[MySalesOrderDetail]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [LineTotal] [numeric](38, 6) NOT NULL, [rowguid] [uniqueidentifier] NOT NULL, [ModifiedDate] [datetime] NOT NULL ) ON [PRIMARY] GO -- Create clustered index CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail] ( [SalesOrderDetailID]) GO -- Create Sample Data Table -- WARNING: This query may run up to 2-10 minutes based on your system's resources INSERT INTO [dbo].[MySalesOrderDetail] SELECT S1.* FROM Sales.SalesOrderDetail S1 GO 100 -- Create ColumnStore Index CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore] ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID) GO -- Attempt to Update the table UPDATE [dbo].[MySalesOrderDetail] SET OrderQty = OrderQty +1 WHERE [SalesOrderID] = 43659 GO /* It will throw the following error Msg 35330, Level 15, State 1, Line 2 UPDATE statement failed because data cannot be updated in a table with a columnstore index. Consider disabling the columnstore index before issuing the UPDATE statement, then rebuilding the columnstore index after UPDATE is complete. */ A similar error also shows up for INSERT/DELETE operations. Here is the workaround: disable the Columnstore Index, perform the update, then rebuild the Columnstore Index: -- Disable the Columnstore Index ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] DISABLE GO -- Attempt to Update the table UPDATE [dbo].[MySalesOrderDetail] SET OrderQty = OrderQty +1 WHERE [SalesOrderID] = 43659 GO -- Rebuild the Columnstore Index ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] REBUILD GO This time it will not throw an error, and the update of the table completes successfully.
    Let us do a cleanup of our tables using this code: -- Cleanup DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] GO TRUNCATE TABLE dbo.MySalesOrderDetail GO DROP TABLE dbo.MySalesOrderDetail GO If you would like to verify the index state around the disable/rebuild steps, see the quick catalog check below. In the next post we will see how we can use partitions to update the Columnstore Index. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
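    Here is that catalog check – a small sketch against the standard sys.indexes view. Run it before the cleanup above drops the table; is_disabled flips to 1 after the DISABLE step and back to 0 after REBUILD:

        SELECT name, type_desc, is_disabled
        FROM sys.indexes
        WHERE object_id = OBJECT_ID('dbo.MySalesOrderDetail')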

    Read the article

  • SQL SERVER – Read Only Files and SQL Server Management Studio (SSMS)

    - by pinaldave
    Just like any other developer or DBA, SQL Server Management Studio is my favorite application. At any moment in time I have multiple instances of the application open, and I am working in them. Recently, I came across a very interesting feature in SSMS related to “Read Only” files. I believe it is a little-known feature as well, so I decided to write a blog about it. First, create a read-only SQL file. You can make any file read only via Right Click >> Properties >> select the Read Only attribute. Now open the same file in SQL Server Management Studio. You will find that beside the file name there is a small ‘lock’ icon. This small icon indicates that the file is read only. Now let us attempt to edit the read-only file. SSMS will let us edit the file any way we want; however, when we attempt to save it, it gives the following pop-up. The options in the pop-up are self-explanatory, and I liked them. The goal of a read-only file is to prevent users from making unintended changes; however, a user should still have complete control over his own file. The user should be aware that the file is read only, but if he wants to edit the file or save it as a new file, the choices should be presented to him – and the pop-up menu captures precisely that. Now let us check the option related to this feature in SSMS. Go to Menu >> Options >> Environment >> Documents. You will find the third option, which is “Allow editing of read-only files; warn when attempt to save”. In the above scenario it was already checked. Let us uncheck it and repeat the exercise we did earlier. I closed all the earlier windows to avoid confusion. With the option unchecked, when I attempt to even modify the read-only file, it gives me a totally different pop-up screen, with options like “Edit In-Memory” and “Make Writeable”. When you select “Edit In-Memory”, it allows you to edit the file, and later you can save it as a new file – just like the earlier scenario we discussed. If you click Make Writeable, it will remove the read-only restriction, and the file can be edited as you please. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – BACKUPIO, BACKUPBUFFER – Wait Type – Day 14 of 28

    - by pinaldave
    Backup is the most important task for any database admin. Your data is at risk if you are not performing database backups. Honestly, I have seen many DBAs who know how to take backups but do not know how to restore them. (Sigh!) In this blog post we are going to discuss one of my real experiences with one of my clients – BACKUPIO. When I started to deal with it, I really had no idea how to fix the issue. However, after fixing it at two places, I think I know why this happens, but at the same time I am not sure the fix is the best solution. The reality is that the fix is not a solution but a workaround (which is not optimal, but gets your things done). From Book On-Line: BACKUPIO Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data. This type is not typical, except when a task is waiting for a tape mount. BACKUPBUFFER Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data. This type is not typical, except when a task is waiting for a tape mount. BACKUPIO and BACKUPBUFFER Explanation: These wait stats occur when you are taking a backup to tape or to another extremely slow backup system. You can check the accumulated values for these waits on your own system with the query below. Reducing BACKUPIO and BACKUPBUFFER waits: In my recent consultancy, backup to tape was very slow, probably because the tape system was very old. When I explained the reason for this wait type during the consultancy, the owners immediately decided to replace the tape drive with an alternate system. They had a small SAN enclosure sitting unused on the side, which they decided to repurpose. After a week, I received an email from their DBA saying that the wait stats had reduced drastically. At another location, my client was using a third-party tool (please don’t ask me the name of the tool) to take backups. This tool compressed the backup while taking it. I have had very good experiences with this tool almost all the time, except for this one sparse experience. When I took a backup using the native SQL Server compressed backup, there was a very small value on this wait type and the backup was much faster. However, when I attempted it with the third-party backup tool, this value was very high again, and the backup took much more time. The third-party tool had many other features, but the client was not using them. We ended up using the native SQL Server compressed backup, and it worked very well. If I see this wait type higher in my future consultancies, I will try to understand it in much more detail, and probably then I will be able to come to some solid solution. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
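    Here is that query – a simple check against the standard wait-stats DMV (the DMV and columns are standard; what counts as "high" is, as always, relative to your system and the discussion above):

        SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER')
        ORDER BY wait_time_ms DESC;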

    Read the article

  • SQL SERVER – Identifying Column Data Type of uniqueidentifier without Querying System Tables

    - by pinaldave
    I love interesting conversations related to SQL Server. One of my friends, Madhivanan, always comes up with an interesting point of conversation. Here is one of the conversations between us. I am confident this blog post will teach you something new. Madhi: How do I know if any table has a uniqueidentifier column used in it? Pinal: I am sure you know that you can do it through some DMV or catalogue views. Madhi: I know that, but how can we do it without using DMVs or catalogue views? Pinal: Hm… what can I use? Madhi: You can use the table name. Pinal: Easy, just say SELECT YourUniqueIdentCol FROM Table. Madhi: Hold on, the question does not seem clear to you – you are assuming you know the name of the column. As a matter of fact, you do not even know if the table has a uniqueidentifier column. The only information you have is the table name. Pinal: Madhi, it seems like you are changing the question when I am close to the answer. Madhi: Well, is it clear now? Let me say it again – how do I know if any table has a uniqueidentifier column, and what its value is, without using any DMVs or system catalogues? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Pinal: Do you know the answer? Madhi: Yes. I just wanted to test your knowledge of SQL. Pinal: I will have to think. Let me accept that I do not know it right away. Can you share the answer please? Madhi: I won! Here it goes! Pinal: When I have friends like you – who needs enemies? Madhi: (laughter that did not stop for a minute). CREATE TABLE t ( GuidCol UNIQUEIDENTIFIER DEFAULT newsequentialid() ROWGUIDCOL, data VARCHAR(60) ) INSERT INTO t (data) SELECT 'test' INSERT INTO t (data) SELECT 'test1' SELECT $rowguid FROM t DROP TABLE t This is indeed very interesting to me. Please note that this is not the optimal way, and there are many other ways to retrieve the uniqueidentifier column's name and value. What I learned from this is that if I am in a rush to check whether a table has a uniqueidentifier column and I do not know its name, I can use SELECT TOP (1) $rowguid and quickly know the name of the column. I can later use the same column name in my query. Madhi did teach me this new trick. The failure path – what happens when no such column exists – is sketched below. Did you know this? What are other ways to check for the existence of a uniqueidentifier column in a database? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
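    Here is that sketch of the failure path. On a table with no ROWGUIDCOL, the $rowguid pseudo-column cannot be resolved, so the query errors out – which is exactly the signal the puzzle allows (I am deliberately not quoting an exact message number, as the error text may vary by version):

        CREATE TABLE NoGuid (data VARCHAR(60))
        -- SELECT $rowguid FROM NoGuid  -- fails: no ROWGUIDCOL column exists on this table
        DROP TABLE NoGuid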

    Read the article
