Search Results

Search found 33210 results on 1329 pages for 'dynamic sql'.

  • SQL SERVER – Shard No More – An Innovative Look at Distributed Peer-to-peer SQL Database

    - by pinaldave
    There is no doubt that SQL databases play an important role in modern applications. In an ideal world, a single database can handle hundreds of incoming connections from multiple clients and scale to accommodate the related transactions. However, the world is not ideal, and databases are often the cause of major headaches when applications need to scale to accommodate more connections, transactions, or both. To overcome scaling issues, application developers often resort to administrative acrobatics, also known as database sharding. Sharding helps to improve application performance and throughput by splitting the database into two or more shards. Unfortunately, this practice also requires application developers to code transactional consistency into their applications, and getting transactional consistency across multiple SQL database shards can prove very difficult. Sharding forces developers to handle concerns like rollbacks, constraints, and referential integrity across tables within their applications, when these concerns are best handled by the database. It also makes other common operations such as joins, searches, and memory management very difficult. In short, the very solution implemented to overcome throughput issues becomes a bottleneck in and of itself.
    What if database sharding were no longer required to scale your application? Let me explain. For the past several months I have been following and writing about NuoDB, a hot new SQL database technology out of Cambridge, MA. NuoDB is officially out of beta, and they have recently released their first release candidate, so I decided to dig into the database in a little more detail. Their architecture is very interesting and exciting because it completely eliminates the need to shard a database to achieve higher throughput. Each NuoDB database consists of at least three processes that enable a single database to run across multiple hosts: a Broker, a Transaction Engine, and a Storage Manager. Brokers are responsible for connecting client applications to Transaction Engines, and they maintain a global view of the network to keep track of the Transaction Engines available at any time. Transaction Engines are in-memory processes that client applications connect to for processing SQL transactions. Storage Managers are responsible for persisting data to disk and serving up records to the Transaction Engines when they do not exist in memory.
    The secret to NuoDB's approach to solving the sharding problem is that it is a truly distributed, peer-to-peer SQL database. Each of its processes can be deployed across multiple hosts. When a client application needs to connect to a Transaction Engine, the Broker automatically routes the request to the most available process. Since multiple Transaction Engines and Storage Managers running across multiple host machines represent a single logical database, you never have to resort to sharding to get the throughput your application requires. NuoDB is a new pioneer in the SQL database world. They are making database scalability simple by eliminating the need for acrobatics such as sharding, and they are making general administration of the database simpler as well. To the user, their distributed database appears as a single logical SQL database. With their RC1 release they have also provided a web-based administrative console that they call NuoConsole. This tool makes it extremely easy to deploy and manage NuoDB processes across one or multiple hosts with the click of a mouse button. See for yourself by downloading NuoDB here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: NuoDB

  • SQL SERVER – Working with FileTables in SQL Server 2012 – Part 1 – Setting Up Environment

    - by pinaldave
    FILESTREAM is a very interesting feature, and FileTable, an enhancement built on FILESTREAM, is equally exciting. Today in this post, we will learn how to set up the FileTable environment in SQL Server. The major advantage of FileTable is that it provides Windows API compatibility for file data stored within a SQL Server database. In simpler words, FileTables remove a barrier so that SQL Server can be used for the storage and management of unstructured data that currently resides as files on file servers. Another advantage is Windows application compatibility: existing Windows applications can see this data as files in the file system. This way, you can use SQL Server to access the data using T-SQL enhancements, and Windows can access the files using its applications. As the first step, you will need to enable the FILESTREAM feature on the instance and non-transacted access on the database in order to use FileTables.

    -- Enable Filestream
    EXEC sp_configure filestream_access_level, 2
    RECONFIGURE
    GO
    -- Create Database
    CREATE DATABASE FileTableDB
    ON PRIMARY
        (Name = FileTableDB, FILENAME = 'D:\FileTable\FTDB.mdf'),
    FILEGROUP FTFG CONTAINS FILESTREAM
        (NAME = FileTableFS, FILENAME = 'D:\FileTable\FS')
    LOG ON
        (Name = FileTableDBLog, FILENAME = 'D:\FileTable\FTDBLog.ldf')
    WITH FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'FileTableDB');
    GO

    Now you can run the following code to check whether the FILESTREAM options are enabled at the database level.

    -- Check the Filestream Options
    SELECT DB_NAME(database_id), non_transacted_access, non_transacted_access_desc
    FROM sys.database_filestream_options;
    GO

    The resultset of the above query looks like the following image shows. As you can see, the non-transacted access level is set to 2 (FILESTREAM enabled). Now let us create the FileTable in the newly created database.

    -- Create FileTable Table
    USE FileTableDB
    GO
    CREATE TABLE FileTableTb AS FileTable
    WITH (FileTable_Directory = 'FileTableTb_Dir');
    GO

    Now you can select data using a regular SELECT statement.

    SELECT * FROM FileTableTb
    GO

    It returns all the important columns related to the file, providing details like file size, archive flag, file type, etc. You can also see the FileTable in SQL Server Management Studio: go to Databases >> the newly created database (FileTableDB) >> expand Tables. Here you will see a new folder named "FileTables"; when expanded, it shows the newly created FileTableTb. You can right-click the new table and choose "Explore FileTable Directory". On my local machine, this opens the following folder, where the FileTable data is stored: \\127.0.0.1\mssqlserver\FileTableDB\FileTableTb_Dir In tomorrow's blog post, Part 2, we will go over two methods of inserting data into this FileTable. Reference : Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Filestream
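
    A small addendum: once files exist in the FileTable, you can also compute each file's full UNC path in T-SQL. This is a minimal sketch, assuming the FileTableDB database and FileTableTb table created above; it relies on FileTableRootPath() and the file_stream.GetFileNamespacePath() method that SQL Server 2012 exposes for FileTables.

    -- Build the full UNC path of every file stored in the FileTable
    USE FileTableDB
    GO
    SELECT name,
           FileTableRootPath() + file_stream.GetFileNamespacePath() AS FullUNCPath
    FROM FileTableTb;
    GO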

  • SQLAuthority News – Updates on Contests, Books and SQL Server

    - by pinaldave
    There are lots of things happening on this blog, and I feel it is sometimes difficult to keep up. One suggestion I keep receiving is to have a single page one can visit to see all the updates. I did consider it at some point, but in the era of RSS feeds it is difficult to draw a proper audience to such a page. Here are a few updates on the various contests and book giveaways of recent times. Combo set of 5 Joes 2 Pros Books – 1 for YOU and 1 for a Friend – I have received so many entries for this contest. Many have emailed asking if the contest can be extended by a couple of days, so the deadline is now Nov 10th, 7 AM; you can send your entries by that time. The prize is 2 combo sets of Joes 2 Pros worth USD 444. If you have not taken part in the contest, please take part now. Guess What is in the Box? – There were many entries for this contest, which we ran on the blog as well as on Facebook. The answer was announced two days later, in the blog post announcing my new book. The winner was Manas Dash from Bangalore, who answered "The box will contain SQL book authored by Vinod and Pinal" – the closest answer we received. The Win 5 SQL Programming Books contest will have its winners announced by Nov 15th, and the winners will be notified by email. The Win 5 SQL Wait Stats Books contest is closed, and the winners have been sent their awards. My third book, SQL Server Interview Questions and Answers, ran out of stock in India within 36 hours of its launch. We are working very hard to make it available again. Thank you again for the excellent support! Without your participation, all these giveaways would have no significance. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQL SERVER – Template Browser – A Very Important and Useful Feature of SSMS

    - by pinaldave
    Let me start today's blog post with a direct question: how many of you have ever used Template Browser? Template Browser is a very important and useful feature of SQL Server Management Studio (SSMS). Every time I talk about SQL Server, someone comes up with the question of why SSMS includes no step-by-step procedures for its features. Honestly, every time I get this question, the question I ask back is: how many of you have ever used Template Browser? Most of the time, the answer is either "no" or "we have not heard of the feature." One person asked me back – have you ever written about it on your blog? I had not yet written about it, as there is not much to write: it is a pretty straightforward feature, like any other, and it is indeed difficult to elaborate on. However, I will try to give a quick introduction to it. Templates are like a quick cheat sheet or quick reference. Templates are available for creating objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions; templates are also available for Analysis Services. The template scripts contain parameters to help you customize the code, and you can use the Replace Template Parameters dialog box to insert values into the script. Additionally, users can create new custom templates, complete with folder structures. To open a template from Template Explorer, go to the View menu >> Template Explorer, or press CTRL+ALT+T. You will find a list of categories; click on any category and expand its folder structure. For our example, let us expand the Index folder, where you will notice various T-SQL scripts. These scripts can be opened with a double-click, or dragged into the editor area and modified as needed. The sample template is now available in the query editor area with all the necessary parameter placeholders. You can replace the parameters by pressing CTRL+SHIFT+M or by going to the Query menu >> Specify Values for Template Parameters. This displays the Specify Values for Template Parameters dialog box, where you accept each value or replace it with a new one. This gets your script ready to go; check it one more time and adjust it to fit your requirements. I personally use Template Explorer for two things: the obvious one is templates, but the hidden and important one is learning new features and T-SQL commands. There is so much to learn and so little time. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
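
    A small addendum: custom templates use the same parameter syntax that CTRL+SHIFT+M resolves, namely <parameter_name, data_type, default_value>. Here is a minimal sketch of what a simple custom table template might look like; the schema, table, and column names below are hypothetical placeholders.

    -- Custom template sketch: placeholders are resolved via
    -- Query >> Specify Values for Template Parameters (CTRL+SHIFT+M)
    CREATE TABLE <schema_name, sysname, dbo>.<table_name, sysname, SampleTable>
    (
        <column1_name, sysname, ID> INT NOT NULL,
        <column2_name, sysname, FirstCol> VARCHAR(50) NULL
    );
    GO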

  • SQLAuthority News – Download Whitepaper – SQL Server Analysis Services to Hive

    - by pinaldave
    SQL Server Analysis Services is a very interesting subject, and I have always enjoyed learning about it. You can read my earlier article over here. Big Data is my new interest, and I have been exploring it recently. Over the weekend this white paper caught my attention, and I enjoyed reading it. Big Data is the next big thing, with growth predicted to be 60% per year till 2016. There is no single solution in the market right now that addresses the growing need for big data, nor is there one in the business intelligence eco-system; however, the need for such a solution is ever increasing. I am personally a Klout user – you can see my Klout profile over here. I understand what Klout is trying to achieve: a single place to measure a person's influence. However, it works a bit mysteriously. There are plenty of social media services on the internet today, and the biggest problem they all face is that everybody opens an account but hardly anyone logs back in. To overcome this issue and bring visitors back, Klout has come up with a system where visitors can give 5/10 K+ to other users in a particular area. Looking at all these activities, Klout is indeed a big consumer of Big Data, as well as an early adopter of Big Data and Hadoop-based systems. Klout has close to 1 trillion rows of data to analyze and a warehouse of nearly a thousand terabytes. Although Hive, the query layer used for Big Data, supports ad-hoc queries through HiveQL, there are always better solutions. The alternate solution is to use SQL Server Analysis Services (SSAS) along with HiveQL. As there is no direct method to achieve this, a few common workarounds are already in place. A new ODBC driver from Klout has broken through the limitation, and the SQL Server relational engine can be used as an intermediate stage before SSAS. The white paper discusses these solutions in depth, covering the following important concepts:
    The Klout Big Data solution
    Big Data Analytics based on Analysis Services
    Hadoop/Hive and Analysis Services integration
    Limitations of direct connectivity
    Pass-through queries to linked servers
    Best practices and lessons learned
    The white paper explains all the important concepts which have enabled Klout to go to the next level with all its offerings, as well as improve efficiency with a few out-of-the-box solutions. I personally enjoyed reading this white paper, and I encourage all of you to do the same: SQL Server Analysis Services to Hive. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology
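
    A small addendum: to make the pass-through idea concrete, here is a minimal sketch of querying Hive from SQL Server through a linked server over ODBC. The DSN name HiveDSN and the Hive table weblogs are hypothetical, and the actual driver configuration follows the white paper rather than this sketch.

    -- Create a linked server over a Hive ODBC DSN (names are hypothetical)
    EXEC master.dbo.sp_addlinkedserver
        @server = N'HIVE',
        @srvproduct = N'Hive',
        @provider = N'MSDASQL',
        @datasrc = N'HiveDSN';
    GO
    -- Pass-through query: the HiveQL inside OPENQUERY runs on the Hadoop cluster
    SELECT * FROM OPENQUERY(HIVE, 'SELECT * FROM weblogs LIMIT 10');
    GO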

  • SQL SERVER – A Successful Performance Tuning Seminar – Hyderabad – Nov 27-28, 2010 – Next Pune

    - by pinaldave
    My recent SQL Server Performance Tuning Seminar in Colombo was oversubscribed, with a total of 35 attendees. You can read the details over here: SQLAuthority News – SQL Server Performance Optimizations Seminar – Grand Success – Colombo, Sri Lanka – Oct 4 – 5, 2010. I recently completed another seminar in Hyderabad, which was again a blazing success. We had 25 attendees and a wonderful time together. One thing is very different between usual classroom training and this seminar series: the seminars are 100% demo oriented and dive deep into real-world scenarios; we do not stick to the usual theory talk. The goal of this seminar is to give anybody who attends a jump start and a deep dive into the subject of performance tuning. I share many different examples and scenarios from my years of performance tuning experience. The beginning of the second day is always interesting, as I take an attendee's server as the example for the talk, and together we attempt to identify the bottleneck and see if we can resolve it. So far I have received excellent feedback on this unique session, where we pick an attendee's database and address its issues. I plan to do the same again in the next sessions. The next seminar is in Pune, and I am very excited about it.
    Date and Time: December 4-5, 2010, 10 AM to 6 PM
    The Pride Hotel, 05, University Road, Shivaji Nagar, Pune – 411 005, Tel: 020 255 34567
    Click here for the agenda of the seminar. Instead of writing more details, I will let the photos do the talking for the latest Hyderabad seminar. Photos: Hotel Amrutha Castle; King Arthur's Court; Pinal presenting the seminar; seminar attendees; group photo of Hyderabad seminar attendees; seminar support staff – Nupur and Shaivi. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

  • SQLAuthority News – Download Whitepaper – A Case Study on “Hekaton” against RPM – SQL Server 2014 CTP1

    - by Pinal Dave
    In this new world of social media, apps, and mobile devices, we are all getting impatient. Automatic updates have spoiled a few of our habits: when a new feature is released, everybody wants to adopt it immediately and start using it. Though this is true in the world of apps and smartphones, it is still not how things work in the developer's world. When a new feature arrives, before we start using it we need to spend quite a lot of time understanding and testing it. Once we are sold on the feature, we refer it to our manager, and eventually the entire organization makes a decision on upgrading to use it. Similarly, when the new In-Memory OLTP feature was announced, pretty much every SQL Server DBA wanted to implement it on their servers. Though implementing the feature is not hard, it is not that easy either: one has to do proper research on one's own environment and workload before implementing it. Microsoft has recently released a case study on the In-Memory OLTP feature. Here is the abstract from the white paper itself: "I/O latch can cause session delays that impact application performance. This white paper describes the procedures and common I/O latch issues when migrating to Hekaton in SQL Server 2014. It also includes challenges that occurred during the migration and the performance analysis at different stages." If you are going to implement an In-Memory OLTP database, this is a good case study to refer to. Download the white paper from here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL
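
    A small addendum: for readers who have not yet seen the syntax, here is a minimal sketch of creating an In-Memory OLTP (Hekaton) table in SQL Server 2014; the database, filegroup, and table names below are hypothetical, and a memory-optimized filegroup must exist first.

    -- A memory-optimized filegroup is a prerequisite for in-memory tables
    ALTER DATABASE ImoltpDemo ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE ImoltpDemo ADD FILE (NAME = 'imoltp_dir', FILENAME = 'D:\Data\imoltp_dir')
        TO FILEGROUP imoltp_fg;
    GO
    USE ImoltpDemo
    GO
    -- Memory-optimized tables require a nonclustered hash (or range) index
    CREATE TABLE dbo.SessionState
    (
        SessionID INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Payload VARBINARY(8000)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    GO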

  • SQL SERVER – How to Set Variable and Use Variable in SQLCMD Mode

    - by Pinal Dave
    Here is a question I received the other day on the SQLAuthority Facebook page. Social media is a wonderful thing, and I love the active conversation between blog readers and myself – I think social media adds a human factor to any conversation. Here is the question: "I am using sqlcmd in SSMS, but I am not sure how to declare a variable and pass it. For example, I have a database and it has a table – how can I make the table name dynamic and pass a different value every time?" Fantastic question, and here is its very simple answer. First of all, enable SQLCMD mode in SQL Server Management Studio as described in the following image. Now type the following SQL in the query editor:

    :SETVAR DatabaseName "AdventureWorks2012"
    :SETVAR SchemaName "Person"
    :SETVAR TableName "EmailAddress"
    USE $(DatabaseName);
    SELECT * FROM $(SchemaName).$(TableName);

    Note that I have set the database, schema, and table values as sqlcmd variables and am executing the query using those parameters. Well, that was it: sqlcmd is a very simple language to master, and it aids in doing various tasks easily. If you have any other sqlcmd tips, please leave a comment and I will publish it with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: sqlcmd
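
    A small addendum: the same variables can also be supplied from outside the script with the command-line sqlcmd utility, whose -v switch assigns values to scripting variables. A minimal sketch, assuming the statements above are saved as query.sql (the file name is hypothetical):

    REM Pass sqlcmd scripting variables from the command line
    sqlcmd -S localhost -E -i query.sql -v DatabaseName="AdventureWorks2012" SchemaName="Person" TableName="EmailAddress"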

  • SQL SERVER – Get Directory Structure using Extended Stored Procedure xp_dirtree

    - by pinaldave
    Many years ago I wrote the article SQL SERVER – Get a List of Fixed Hard Drive and Free Space on Server, where I demonstrated the use of an undocumented stored procedure to find the drive letters on the local system and their available free space. I received an email from a reader asking if there is any way to list a directory structure from within T-SQL. When I inquired further, he explained that he needed this because he wanted to set up backups of the data in a certain folder structure. Well, there is one undocumented stored procedure that can do this. However, please be wary of using any undocumented procedure.

    xp_dirtree 'C:\Windows'

    Executing the above stored procedure gives the following result. If you prefer, you can insert the data into a temp table for further use. Here is a quick script that inserts the data into a temp table and retrieves it from there.

    CREATE TABLE #TempTable (Subdirectory VARCHAR(512), Depth INT);
    INSERT INTO #TempTable (Subdirectory, Depth)
    EXEC xp_dirtree 'C:\Windows'
    SELECT Subdirectory, Depth
    FROM #TempTable;
    DROP TABLE #TempTable;

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, SQLServer, T SQL, Technology
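
    A small addendum: xp_dirtree is commonly reported to accept two further undocumented parameters – a recursion depth and a flag to include files – which adds a third column to the result. A minimal sketch based on that commonly reported behavior (being undocumented, it may change between versions):

    -- Depth limited to 1 level, files included; a third column flags files
    CREATE TABLE #DirTree (Subdirectory VARCHAR(512), Depth INT, IsFile BIT);
    INSERT INTO #DirTree (Subdirectory, Depth, IsFile)
    EXEC xp_dirtree 'C:\Windows', 1, 1;
    SELECT Subdirectory, Depth, IsFile FROM #DirTree;
    DROP TABLE #DirTree;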

  • SQL SERVER – Relationship with Parallelism with Locks and Query Wait – Question for You

    - by Pinal Dave
    Today I have one very simple question, based on the following image. Full disclaimer: I have no idea why it is like this. I tried reaching out to a few of my friends who know a lot about SQL Server, but no one had an answer. Here is the question: if you go to Server Properties and click on Advanced, you will see the following screen. Under the Parallelism section there are four options:
    Cost Threshold for Parallelism
    Locks
    Max Degree of Parallelism
    Query Wait
    I can clearly understand why Cost Threshold for Parallelism and Max Degree of Parallelism belong under Parallelism, but I am not sure why the other two options, Locks and Query Wait, belong in the Parallelism section. I can see that the options are ordered alphabetically, but I do not understand the reason for Locks and Query Wait being listed under Parallelism. So here is the question for you: why are the Locks and Query Wait options listed under the Parallelism section in SQL Server's Advanced properties? Please leave a comment with your explanation; I will publish valid answers on this blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
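
    A small addendum: for readers who prefer T-SQL over the properties dialog, the same four settings are visible through sp_configure once advanced options are shown. A minimal sketch:

    -- Show advanced options so all four settings become visible
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    GO
    EXEC sp_configure 'cost threshold for parallelism';
    EXEC sp_configure 'locks';
    EXEC sp_configure 'max degree of parallelism';
    EXEC sp_configure 'query wait (s)';
    GO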

  • SQL – Difference Between INNER JOIN and JOIN

    - by Pinal Dave
    Here is a follow-up to my earlier question, SQL – Difference between != and Operator <> used for NOT EQUAL TO Operation. There was a pretty good discussion about that subject, and lots of people participated with their opinions. Though the answer was very simple, the conversation was delightful and very informative. In this blog post I have another follow-up question for all of you: what is the difference between INNER JOIN and JOIN? If you work with databases, you will find developers using both kinds of joins in their SQL queries. Here is a quick example of each.

    Query using INNER JOIN:

    SELECT *
    FROM Table1
    INNER JOIN Table2
    ON Table1.Col1 = Table2.Col1

    Query using JOIN:

    SELECT *
    FROM Table1
    JOIN Table2
    ON Table1.Col1 = Table2.Col1

    The question is: what is the difference between the above two syntaxes? Here is the answer: they are equal. There is absolutely no difference between them, in performance as well as in implementation; JOIN is simply a shorter version of INNER JOIN. Personally, I prefer to write INNER JOIN because it is much cleaner to read and avoids any confusion about the type of JOIN. For example, if users always wrote INNER JOIN instead of JOIN, there would be no confusion and no need for the original question. Here is the question back to you: which of the two syntaxes do you use when you inner join two tables – INNER JOIN or JOIN – and why? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
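
    A small addendum: the same optional-keyword rule applies to outer joins, where the OUTER keyword can be omitted. A quick sketch using the same hypothetical tables:

    -- These two queries are identical: OUTER is optional, just as INNER is
    SELECT * FROM Table1 LEFT OUTER JOIN Table2 ON Table1.Col1 = Table2.Col1;
    SELECT * FROM Table1 LEFT JOIN Table2 ON Table1.Col1 = Table2.Col1;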

  • SQL SERVER – Finding Size of a Columnstore Index Using DMVs

    - by pinaldave
    The columnstore index is one of my favorite enhancements in SQL Server 2012. A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data has traditionally been stored. In the case of row store indexes, pages contain multiple rows with their columns spanning the pages, whereas in the case of columnstore indexes, each set of pages contains values from a single column. Columnstore indexes are compressed by default and occupy much less space than regular row store indexes. One very common question I see is the need for a list of columnstore indexes along with their sizes and the corresponding table names. I quickly wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there are more advanced scripts to retrieve details of all the components associated with a columnstore index, but I believe the following script is sufficient to start getting an idea of columnstore index sizes.

    SELECT OBJECT_SCHEMA_NAME(i.OBJECT_ID) SchemaName,
           OBJECT_NAME(i.OBJECT_ID) TableName,
           i.name IndexName,
           SUM(s.used_page_count) / 128.0 IndexSizeinMB
    FROM sys.indexes AS i
    INNER JOIN sys.dm_db_partition_stats AS s
        ON i.OBJECT_ID = s.OBJECT_ID AND i.index_id = s.index_id
    WHERE i.type_desc = 'NONCLUSTERED COLUMNSTORE'
    GROUP BY i.OBJECT_ID, i.name

    Here is my introductory article on the subject: SQL Server Fundamentals of Columnstore Index. Create a sample columnstore index based on the script described in that article, and the query above will give the following results. Please feel free to suggest improvements to the script so I can modify it to accommodate enhancements. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ColumnStore Index
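
    A small addendum: if you do not have the earlier article handy, here is a minimal sketch of creating a nonclustered columnstore index to test the script against; the table and column names are hypothetical.

    -- SQL Server 2012 syntax: a nonclustered columnstore index on a fact table
    CREATE NONCLUSTERED COLUMNSTORE INDEX IX_CS_FactSales
    ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount);
    GO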

  • SQL SERVER 2008 – 2011 – Declare and Assign Variable in Single Statement

    - by pinaldave
    Many of us tend to overlook simple things even when we are capable of doing complex work. In SQL Server 2008, inline variable assignment became available. This feature has existed for the last three years, but I hardly ever see it used. One common explanation is that when a project is migrated from an earlier version, the old coding style simply carries over. I totally accept and acknowledge this argument; however, my point is that this new feature should be used in all new coding – what is your opinion? The code we used in SQL Server 2005 and earlier versions is as follows:

    DECLARE @iVariable INT, @vVariable VARCHAR(100), @dDateTime DATETIME
    SET @iVariable = 1
    SET @vVariable = 'myvar'
    SET @dDateTime = GETDATE()
    SELECT @iVariable iVar, @vVariable vVar, @dDateTime dDT
    GO

    The same can be re-written as follows:

    DECLARE @iVariable INT = 1, @vVariable VARCHAR(100) = 'myvar', @dDateTime DATETIME = GETDATE()
    SELECT @iVariable iVar, @vVariable vVar, @dDateTime dDT
    GO

    I have started to use this new method to assign variables, as I personally find it very easy to read as well as write. Do you still use the earlier method to declare and assign variables? If yes, is there a particular reason, or is it just an old routine? I am interested in hearing about this. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQLAuthority News – Download Whitepaper – Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    - by pinaldave
    Power View, a feature of the SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Server 2010 Enterprise Edition, is an interactive data exploration, visualization, and presentation experience. It provides intuitive ad-hoc reporting for business users such as data analysts, business decision makers, and information workers. Microsoft has recently released a very interesting whitepaper which covers a sample scenario that validates the connectivity of Power View reports to both PowerPivot workbooks and tabular models. This white paper covers the following important concepts about Power View:
    Understanding the hardware and software requirements and their download locations
    Installing and configuring the required infrastructure when Power View and its data models are on the same computer and on different computers
    Installing and configuring a computer used for client access to Power View reports, models, SharePoint 2012, and Power View in a workgroup
    Configuring single sign-on access for double-hop scenarios with and without Kerberos
    You can download the whitepaper from here. It covers many interesting scenarios. It would be really interesting to know if you are using Power View in your production environment; if yes, please share your experience here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

  • SQL SERVER – FIX ERROR – Cannot connect to . Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

    - by pinaldave
    Just a day ago, I was attempting to connect to my local SQL Server using the IP 127.0.0.1. The IP is my local machine's, and SQL Server is installed on the local box as well. However, whenever I tried to connect to the server, it gave me the following strange error: Cannot connect to 127.0.0.1. Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452) The reason was indeed strange: I was connecting from the local box to the local box, yet it said my login was from an untrusted domain, even though my system is not part of any domain. This was really confusing. Also, I had always been able to connect to SQL Server using 127.0.0.1, which made it stranger still. I started thinking about what I had changed since the last time I connected to SQL Server, and suddenly remembered that I had modified my computer's hosts file for another purpose. Solution: I opened my hosts file and immediately added the entry 127.0.0.1 localhost. Once I added it, I was able to reconnect to SQL Server as usual. The location of the hosts file is C:\Windows\System32\drivers\etc; you will find a file named hosts there – make sure to open it with Notepad. If you are part of a domain and your organization uses Active Directory, make sure that your account is properly added to Active Directory and has the proper security permissions to execute the task. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQL SERVER – Selecting Domain from Email Address

    - by pinaldave
    Recently I came across a quick need to retrieve the domain of an email address. The email addresses are stored in a database table. I quickly wrote the following script, which extracts the domain and also counts how many email addresses exist with the same domain.

    SELECT RIGHT(Email, LEN(Email) - CHARINDEX('@', Email)) Domain,
           COUNT(Email) EmailCount
    FROM dbo.email
    WHERE LEN(Email) > 0
    GROUP BY RIGHT(Email, LEN(Email) - CHARINDEX('@', Email))
    ORDER BY EmailCount DESC

    The above script selects the domain after the @ character. Please note that if there is more than one @ character in an email address, this script will not work – but such an email address is already invalid. Do you have a similar script which can do the same thing efficiently? Please post it as a comment. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
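
    A small addendum: one way to avoid repeating the RIGHT/CHARINDEX expression in both the SELECT and the GROUP BY is to compute it once with CROSS APPLY. A minimal sketch against the same dbo.email table:

    -- Compute the domain expression once and reuse it by alias
    SELECT ca.Domain, COUNT(*) EmailCount
    FROM dbo.email e
    CROSS APPLY (SELECT RIGHT(e.Email, LEN(e.Email) - CHARINDEX('@', e.Email)) AS Domain) ca
    WHERE LEN(e.Email) > 0
    GROUP BY ca.Domain
    ORDER BY EmailCount DESC;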

  • SQL SERVER – Convert Seconds to Hour : Minute : Seconds Format

    - by Pinal Dave
    Here is another question I received via email: "Hi Pinal, I have a unique requirement. We measure the time spent on any webpage in seconds. I recently had to build a report over it, and I did a few summations grouped by web pages. Now my manager wants to convert the time, which is in seconds, to the format Hour : Minute : Seconds. I researched online and found a solution on Stack Overflow for converting seconds to Minute : Seconds, but could not find a solution for Hour : Minute : Seconds. Would you please help?" Of course – the logic is very simple. Here is the script for this need:

    DECLARE @TimeinSecond INT
    SET @TimeinSecond = 86399 -- Change the seconds
    SELECT RIGHT('0' + CAST(@TimeinSecond / 3600 AS VARCHAR), 2) + ':' +
           RIGHT('0' + CAST((@TimeinSecond / 60) % 60 AS VARCHAR), 2) + ':' +
           RIGHT('0' + CAST(@TimeinSecond % 60 AS VARCHAR), 2)

    Here is the screenshot of the result: Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
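
    A small addendum: for durations under 24 hours, there is also a shorter variant that converts the seconds into a time-of-day string using CONVERT style 108. A minimal sketch:

    -- Works only while @TimeinSecond < 86400 (one day)
    DECLARE @TimeinSecond INT = 86399;
    SELECT CONVERT(VARCHAR(8), DATEADD(SECOND, @TimeinSecond, 0), 108) AS [HH:MM:SS];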

  • SQL SERVER – Not Possible – Delete From Multiple Table – Update Multiple Table in Single Statement

    - by pinaldave
    There are two questions which I get multiple times every single day; in my Gmail, I have created a standard canned reply for them. Here are the questions: I want to delete from multiple tables in a single statement – how do I do it? I want to update multiple tables in a single statement – how do I do it? The answer is: no, you cannot, and you should not. SQL Server does not support deleting from or updating two tables in a single statement. If you want to delete or update two different tables, write two different delete or update statements. A combined statement would raise many issues, from data consistency to SQL syntax. Now here is the real reason for this blog post: yesterday I was asked this question again, and I replied with my canned answer saying it is not possible and should not be implemented that way. In response, I was pointed to my own blog posts, where the user suggested that I had previously said this was possible, complete with demo examples. Let us go over the conversation – you may find it interesting. Let us call the user DJ. DJ: Pinal, can we delete from multiple tables with a single delete statement? Pinal: No, you cannot, and you should not. DJ: Okay, if that is the case, why do you suggest doing exactly that? Pinal: (baffled) I am not suggesting that. I am rather suggesting that it is not possible and it should not be possible. DJ: Hmm… but in that case, why did you blog about it earlier? Pinal: (What?) No, I did not. I am pretty confident. DJ: Well, I am confident as well. You did. Pinal: In that case, it is my word against your word, isn't it? DJ: I have proof. Do you want to see it, that you suggested it is possible? Pinal: Yes, I would be delighted to. (After 10 minutes) DJ: Here are not one but two of your blog posts which talk about it: SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 and SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2. Pinal: Oh! DJ: I know I was correct. Pinal: Well, oh man, I did not mean there what you mean here. DJ: I do not understand – can you explain it further? Pinal: Here we go. The examples in those blog posts are examples of cascading deletes and cascading updates. You may want to understand the concept of foreign keys and cascading updates/deletes. The concept of cascading exists to maintain data integrity: if primary key rows are updated or deleted, the change is reflected in the foreign key tables to maintain key integrity and data consistency. SQL Server follows ANSI Entry SQL with regard to referential integrity between primary key and foreign key columns, which requires inserting, updating, and deleting data in related tables to be restricted to values that preserve the integrity. This is an altogether different concept from deleting from multiple tables in a single statement. When I hear that someone wants to delete or update multiple tables in a single statement, what I assume is something very similar to the following: DELETE/UPDATE Table1 (cols) Table2 (cols) VALUES … which is not a valid statement or syntax, nor is it part of the ANSI standard. I guess, after this discussion with DJ, I realized I needed to write a blog post so I can add its link to my canned answer. Well, it was a fun conversation with DJ, and I hope the message is very clear now.
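    To illustrate the distinction with code, here is a minimal sketch of a cascading delete: a single DELETE statement targets one table, and it is the foreign key constraint, not the statement, that removes the dependent rows. The table names are hypothetical.

    CREATE TABLE dbo.Orders (OrderID INT PRIMARY KEY);
    CREATE TABLE dbo.OrderDetails
    (
        DetailID INT PRIMARY KEY,
        OrderID INT REFERENCES dbo.Orders(OrderID) ON DELETE CASCADE
    );
    GO
    INSERT INTO dbo.Orders VALUES (1);
    INSERT INTO dbo.OrderDetails VALUES (100, 1);
    GO
    -- One statement, one target table; the child row disappears via the constraint
    DELETE FROM dbo.Orders WHERE OrderID = 1;
    SELECT COUNT(*) FROM dbo.OrderDetails; -- returns 0
    GO
    DROP TABLE dbo.OrderDetails, dbo.Orders;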
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQL SERVER – Detect Virtual Log Files (VLF) in LDF

    - by pinaldave
    In one of my recent training engagements, I was asked whether it is true that there are multiple small log files inside the large log file (LDF). I found this question very interesting, as the answer is yes. Multiple small Virtual Log Files, commonly known as VLFs, together make up an LDF file. VLFs are written sequentially, and as a result the LDF file is written sequentially as well. This leads to another point: in most cases, one does not need more than one log file. In short, you can use the following DBCC command to find out how many Virtual Log Files (VLFs) are present in your log file.

    DBCC LOGINFO

    The result of the above command looks something like the following image. In the Status column, a value of 2 indicates an active VLF and 0 an inactive VLF. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
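
    A small addendum: since DBCC LOGINFO returns one row per VLF, you can capture its output and count the rows. A minimal sketch; the column list below matches SQL Server 2008-era output, and the command's result shape changes slightly across versions.

    -- One row per VLF; count them for the current database
    CREATE TABLE #VLFInfo
    (
        FileId INT, FileSize BIGINT, StartOffset BIGINT,
        FSeqNo INT, [Status] INT, Parity TINYINT, CreateLSN NUMERIC(25,0)
    );
    INSERT INTO #VLFInfo
    EXEC ('DBCC LOGINFO');
    SELECT COUNT(*) AS VLFCount FROM #VLFInfo;
    DROP TABLE #VLFInfo;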

  • SQL SERVER – Disable Clustered Index and Data Insert

    - by pinaldave
    Earlier today I received the following email. "Dear Pinal, [Removed unrelated content] We looked at your script and found out that in your script for disabling indexes, you only included the non-clustered indexes during the bulk insert and missed disabling the clustered indexes. Our DBA [name removed] changed your script a bit and included all the clustered indexes. Now our application is not working. When DBA [name removed] tried to enable the clustered indexes again, he faced an incorrect syntax error. We are in deep problem [word replaced] [Removed identity of organization and few unrelated stuff]" I replied to my client and helped them fix the problem. What really caught my attention is the concept of disabling a clustered index, so let us try to learn a lesson from this experience. In this case, there was no need to disable the clustered index at all. I had done the necessary work when I was called in on the tuning project: I removed unused indexes, created a few optimal indexes, and wrote a script to disable a few selected high-cost indexes when bulk insert (and similar) operations are performed. There was another script which rebuilt all the indexes as well. The solution worked until they included the clustered indexes in the disabling script. A clustered index is in fact the original table, physically ordered (among other things – beyond the scope of this article) according to one or more key columns. When a clustered index is disabled, the data rows of the table cannot be accessed, which means no insert is possible. When a non-clustered index is disabled, its data is physically deleted, but the definition of the index is kept in the system. For the same reason, even reorganizing the index is not possible until the disabled clustered index is rebuilt. Now let us come to the second part of the question, regarding the error received when the clustered index is 'enabled'. This is a very common question I receive on the blog. In T-SQL syntax, clustered indexes can be disabled but cannot be 'enabled'; they have to be rebuilt. It is intuitive to think that something which we have 'disabled' can be 'enabled', but the keyword for this is 'rebuild'. This issue has been explained here: SQL SERVER – How to Enable Index – How to Disable Index – Incorrect syntax near 'ENABLE'. Let us go over an example where inserting data is not possible while the clustered index is disabled.

    USE AdventureWorks
    GO
    -- Create Table
    CREATE TABLE [dbo].[TableName](
        [ID] [int] NOT NULL,
        [FirstCol] [varchar](50) NULL,
        CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
    )
    GO
    -- Create Nonclustered Index
    CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName]
    ON [dbo].[TableName] ([FirstCol] ASC)
    GO
    -- Populate Table
    INSERT INTO [dbo].[TableName]
    SELECT 1, 'First'
    UNION ALL
    SELECT 2, 'Second'
    UNION ALL
    SELECT 3, 'Third'
    GO
    -- Disable Nonclustered Index
    ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
    GO
    -- Insert Data should work fine
    INSERT INTO [dbo].[TableName]
    SELECT 4, 'Fourth'
    UNION ALL
    SELECT 5, 'Fifth'
    GO
    -- Disable Clustered Index
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
    GO
    -- Insert Data will fail
    INSERT INTO [dbo].[TableName]
    SELECT 6, 'Sixth'
    UNION ALL
    SELECT 7, 'Seventh'
    GO
    /*
    Error: Msg 8655, Level 16, State 1, Line 1
    The query processor is unable to produce a plan because the index 'PK_TableName' on table or view 'TableName' is disabled.
    */
    -- Reorganizing the index will also throw an error
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] REORGANIZE
    GO
    /*
    Error: Msg 1973, Level 16, State 1, Line 1
    Cannot perform the specified operation on disabled index 'PK_TableName' on table 'dbo.TableName'.
    */
    -- Rebuilding should work fine
    ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
    GO
    -- Insert Data should work fine
    INSERT INTO [dbo].[TableName]
    SELECT 6, 'Sixth'
    UNION ALL
    SELECT 7, 'Seventh'
    GO
    -- Clean Up
    DROP TABLE [dbo].[TableName]
    GO

    I hope this example is clear enough. There are a few additional posts I wrote years ago; I am listing them here: SQL SERVER – Enable and Disable Index Non Clustered Indexes Using T-SQL and SQL SERVER – Enabling Clustered and Non-Clustered Indexes – Interesting Fact. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQL SERVER – How to Compare the Schema of Two Databases with Schema Compare

    - by Pinal Dave
    Earlier I wrote about An Efficiency Tool to Compare and Synchronize SQL Server Databases, and it was very well received. Since that blog post, I have received quite a few questions asking how, just like data, we can also compare schemas and synchronize them. If you think about comparing schemas manually, it is almost impossible to do. Table schemas are just one part of the story; if you really want all the schema of a database (triggers, views, stored procedures, and everything else), it is simply an impossible task. If you are a developer or database administrator working in a production environment, then you know there are many occasions when we have to compare database schemas. Before deploying any changes to the production server, I personally like to make a note of every single schema change and document it, so that in case of any issue I can always go back and refer to my documentation. As discussed earlier, it is practically impossible to do this task without the help of third-party tools. I personally use Devart Schema Compare for this task. It is an extremely easy tool. Let us see how it works. First, I have two different databases: a) AdventureWorks2012 and b) AdventureWorks2012-V1. There are a total of three changes between these databases:
    One of the tables has an additional column
    One of the tables has a new index
    One of the stored procedures has changed
    Now let us see how dbForge Schema Compare works in this scenario. First open dbForge Schema Compare Studio and click on New Schema Comparison. It brings you to the following screen, where we configure the databases to compare; I have selected the AdventureWorks2012 and AdventureWorks2012-V1 databases. On the next screen we can review various options, but for this demonstration we will keep everything as it is. We will not change anything on the schema mapping screen either, as in our case it is not required; generally, though, if you are comparing across schemas you may need it. The next screen is the most important one, as here we select which kinds of objects we want to compare. You can see the available options; the screen lets you select objects from SQL Server 2000 through SQL Server 2012. Once you click Compare, the next screen essentially displays the differences between the two databases selected earlier. As mentioned above, there are three changes between the databases, and they are all listed here: two of the changes belong to tables and one to a stored procedure. Let us click each of them one by one to see the differences. In the very first one, we can see that there is an additional column in one database which did not exist before. In the next example, we can see that the AdventureWorks2012 database has an additional index. The following example is very interesting: in this case we changed the definition of the stored procedure, and the result pane shows the change. dbForge Schema Compare very effectively identifies schema changes and lists them neatly for developers. Here is one more screen: this software not only compares schemas but also provides options to update or drop objects as you choose, which I think is a brilliant option. Well, I have been using Schema Compare for quite a while and have found it very useful. Here are a few of the things dbForge Schema Compare can do for developers and DBAs:
    Compare and synchronize SQL Server database schemas
    Compare schemas of a live database and a SQL Server backup
    Generate comparison reports in Excel and HTML formats
    Eliminate mistakes in schema change propagation across environments
    Track production database changes and customizations
    Automate migration of schema changes using the command line interface
    I suggest that you try out dbForge Schema Compare and let me know what you think of the product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL

  • SQL SERVER – Move Database Files MDF and LDF to Another Location

    - by pinaldave
    When novice DBAs or developers create a database, they use SQL Server Management Studio. The T-SQL script to create a database is very easy as well: you can just write CREATE DATABASE DatabaseName, and it will create a new database for you. The point to remember is that this creates the database at the default location specified for the SQL Server instance (this default location can be changed, and we will see how in a future blog post). Once the database goes into production it will start to grow, and it is not common to keep a database in the same location where the OS is installed. Usually database files are kept on a SAN, a separate disk array, or SSDs, for performance and manageability reasons. The challenge comes up when a database created at the non-preferred default location needs to move to a different location. Here is a quick tutorial on how you can do it. Let us assume we have two folders, loc1 and loc2, and we want to move the database files from loc1 to loc2.

    USE MASTER;
    GO
    -- Take the database into single user mode if you are facing errors.
    -- This may terminate your active transactions for the database.
    ALTER DATABASE TestDB
    SET SINGLE_USER
    WITH ROLLBACK IMMEDIATE;
    GO
    -- Detach DB
    EXEC MASTER.dbo.sp_detach_db @dbname = N'TestDB'
    GO

    Now move the files from loc1 to loc2. You can then reattach the files from their new locations.

    -- Move the MDF and LDF files from loc1 to loc2, then re-attach the DB
    CREATE DATABASE [TestDB] ON
    ( FILENAME = N'F:\loc2\TestDB.mdf' ),
    ( FILENAME = N'F:\loc2\TestDB_log.ldf' )
    FOR ATTACH
    GO

    Well, we are done. A little warning for you: ROLLBACK IMMEDIATE may terminate your active transactions, so do not use it casually. Use it only if you are confident the transactions are not needed, or when, for whatever reason, a connection to the database cannot be killed manually after review. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
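
    A small addendum: an alternative to detach/attach is ALTER DATABASE ... MODIFY FILE: update the catalog with the new paths, take the database offline, move the physical files, and bring it back online. A minimal sketch, assuming the default logical file names TestDB and TestDB_log:

    -- Point the catalog at the new locations (effective when the DB comes back online)
    ALTER DATABASE TestDB MODIFY FILE (NAME = TestDB, FILENAME = N'F:\loc2\TestDB.mdf');
    ALTER DATABASE TestDB MODIFY FILE (NAME = TestDB_log, FILENAME = N'F:\loc2\TestDB_log.ldf');
    GO
    ALTER DATABASE TestDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
    GO
    -- Move the physical files to F:\loc2 now, then:
    ALTER DATABASE TestDB SET ONLINE;
    GO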

  • SQL SERVER – Backup SQL databases to Box or SkyDrive

    - by Pinal Dave
    To ensure your SQL Server or Azure databases remain safe, you should back them up periodically, and it is important to store the backups in a reliable location. Microsoft SkyDrive currently offers 7 GB free and Box offers 5 GB free; both are reliable, and it is simple to send your backups there. SQLBackupAndFTP, in its latest version 9, added options to back up to SkyDrive and Box (in addition to a local/network folder, NAS drive, FTP, Dropbox, Google Drive, and Amazon S3). Just select the databases that you'd like to back up and choose to store the backups in SkyDrive or Box. Below I will show how to do it in detail. Select databases to back up: first connect to your SQL Server or Azure SQL Database, then select the databases you'd like to back up. Connect to the SkyDrive or Box cloud: in the free version of SQLBackupAndFTP, the Box destination is included, but the SkyDrive destination is disabled, as it is available in the Standard version or above; click "Try now" to get a 30-day trial of all options. On the "SkyDrive Settings" form you'll need to authorize SQLBackupAndFTP to access your SkyDrive: click "Authorize…" to open the SkyDrive authorization page in your browser, sign in to your SkyDrive account, and click "Allow". On the next page you will see a field with the authorization code; copy it to the clipboard. The Box flow is just the same. After that, return to SQLBackupAndFTP, paste the authorization code, and click "OK". Once you are authorized, you can enter the path to a backup folder; SQLBackupAndFTP will create the folder if it does not exist. That is all that has to be done to back up to the SkyDrive or Box cloud. You can now click the "Run Now" button to test the job. Conclusion: whatever your preference for storing SQL backups, it is easy with SQLBackupAndFTP. Note that at the time of this writing they are running a very rare promotion on volume licenses:
    5–9 licenses: 20% off
    10–19 licenses: 35% off
    more than 20 licenses: 50% off
    Please let me know your favorite options for storing backups. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • SQLAuthority News – Monthly Roundup of Best SQL Posts

    - by pinaldave
    After receiving lots of requests from different readers for a long time, I have decided to write my first monthly roundup. If you all like it, I will continue writing one every month. In fact, I really like the idea, as it let me go back and read all of my posts written this month. The month started with answering one of the most common questions asked of me: What is AdventureWorks? Many of you know the answer, but to my surprise a larger number of readers did not. There were a few extra blog posts along the same lines:
    SQL SERVER – The Difference between Dual Core vs. Core 2 Duo
    SQLAuthority News – Wireless Router Security and Attached Devices – Complex Password
    SQL SERVER – DATE and TIME in SQL Server 2008
    DMVs are among the handiest tools available in SQL Server; I have written the following blog posts where DMVs are used in scripts:
    SQL SERVER – Get Latest SQL Query for Sessions – DMV
    SQL SERVER – Find Most Expensive Queries Using DMV
    SQL SERVER – List All the DMV and DMF on Server
    I was able to write two follow-ups to my earlier series on finding the size of indexes using different SQL scripts; in fact, one of the articles uses PowerShell as well – my very first attempt at using PowerShell.
    SQL SERVER – Size of Index Table for Each Index – Solution 2
    SQL SERVER – Size of Index Table for Each Index – Solution 3 – Powershell
    SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup
    Without realizing it, I wrote a series of blog posts on disabled indexes; here is the complete list. I plan to write one more follow-up on the same subject.
    SQL SERVER – Disable Clustered Index and Data Insert
    SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index
    SQL SERVER – Disabled Index and Update Statistics
    Two special posts which I found very interesting to write are the following:
    SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008
    SQL SERVER – Simple Example of Snapshot Isolation – Reduce the Blocking Transactions
    In personal adventures, I won the Community Impact Award for last year from Microsoft. Please leave a comment about how I can improve this roundup or what more details I should include. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

  • SQL SERVER – Solution – Puzzle – Challenge – Error While Converting Money to Decimal

    - by pinaldave
    Earlier I posted a quick puzzle and received a wonderful response to it. Today we will go over the solution. The puzzle was posted here: SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal. Run the following code in SSMS:

    DECLARE @mymoney MONEY;
    SET @mymoney = 12345.67;
    SELECT CAST(@mymoney AS DECIMAL(5,2)) MoneyInt;
    GO

    The above code gives the following error:

    Msg 8115, Level 16, State 8, Line 3
    Arithmetic overflow error converting money to data type numeric.

    Why, and what is the solution? The solution is as follows:

    DECLARE @mymoney MONEY;
    SET @mymoney = 12345.67;
    SELECT CAST(@mymoney AS DECIMAL(7,2)) MoneyInt;
    GO

    There were more than 20 valid answers. Here is the reason: the decimal data type is defined as DECIMAL(precision, scale), in other words DECIMAL(total digits, digits after the decimal point), and precision includes scale. So DECIMAL(5,2) actually means we can have 3 digits before the decimal point and 2 digits after it. To accommodate 12345.67, one needs higher precision; the correct answer is DECIMAL(7,2), as it can hold all seven digits. Here is the list of experts who got the correct answer, and I encourage all of you to read their answers over here: Fbncs, Piyush Srivastava, Dheeraj, Abhishek, Anil Gurjar, Keval Patel, Rajan Patel, Himanshu Patel, Anurodh Srivastava, aasim abdullah, Paulo R. Pereira, Chintak Chhapia, Scott Humphrey, Alok Chandra Shahi, Imran Mohammed, SHIVSHANKER. The very first answer was provided by Fbncs, and Dheeraj had a very interesting comment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
