Search Results

Search found 3314 results on 133 pages for 'certificate authority'.


  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point-and-click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data, and then connecting two pre-programmed operators.  There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

        EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
        102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
        101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
        103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
        304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
        333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
        100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
        334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
        400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century, since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file, and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts.

    First I click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file.  Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button.  In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact, and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard.  I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next.

    Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.”  Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
    Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type; for the StartingDate and TerminationDate attributes, I select Datetime as the data type; and for the FinalSalary attribute, I select the Decimal type.

    But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications: a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date).

    As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file.  I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window, in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values.  For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

    And now to the FinalSalary attribute.  I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the pound or euro sign), which should be removed.  And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” text box.  These entries allow the string to be properly converted into a decimal value.  By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

    Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.  Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I also indicate that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.  I then select the Write Table operator and in its Properties panel specify the database connection, Normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

    The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu.  The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine-tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column.  I also provide the name for the table.

    This completes development of the application.  The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point-and-click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables.  For contrast, a hand-coded sketch of the same load follows below.

    Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support reading and writing data in Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
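    For contrast, here is a minimal T-SQL sketch of what a hand-coded equivalent of this load might look like.  The file path, staging table, and column sizes are assumptions for illustration only; they are not part of the CSVexpress workflow described above.

        -- A hand-coded alternative sketch (file path and table are hypothetical).
        -- BULK INSERT loads the raw strings into a staging table; the date and
        -- salary conversions then have to be coded by hand, which is exactly the
        -- work the schema-based directives above avoid.
        CREATE TABLE dbo.TerminatedEmployees_Staging
            (EMP_ID VARCHAR(10), STRT_DATE VARCHAR(20), END_DATE VARCHAR(20),
             JOB_ID VARCHAR(50), DEPT_ID VARCHAR(10), SALARY VARCHAR(20));

        -- Note: quoted fields with embedded commas (such as "$85,000") need
        -- extra handling with plain FIELDTERMINATOR-based parsing.
        BULK INSERT dbo.TerminatedEmployees_Staging
        FROM 'C:\Data\TerminatedEmployees.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

        -- Hand-coded transformations (the part the schema directives replace):
        SELECT CAST(EMP_ID AS INT) AS EmployeeID,
               CAST(REPLACE(REPLACE(SALARY, '$', ''), ',', '') AS DECIMAL(10,2)) AS FinalSalary
        FROM dbo.TerminatedEmployees_Staging;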


  • SQL SERVER – Installing AdventureWorks Sample Database – SQL in Sixty Seconds #010 – Video

    - by pinaldave
    SQL Server has so many enhancements and features that quite often I feel like playing with various features and trying out new things. I often come across situations where I want to try something new but I do not have sample data to experiment with. Also, just like any sane developer, I do not try any of my new experiments on a production server. Additionally, when it is about a new version of SQL Server, there are cases when there is no relevant sample data available even on the development server. In this kind of scenario a sample database can be very handy. Additionally, many SQL books, online blogs and articles contain scripts written using the AdventureWorks database. I often receive requests asking where people can get the sample database, as well as how to restore it; a hedged restore sketch follows below. In this sixty seconds video we have discussed the same. You can get various resources used in this video from http://bit.ly/adw2012.

    More on Errors:
        SQL SERVER – Install Samples Database Adventure Works for SQL Server 2012
        SQL SERVER – 2012 – All Download Links in Single Page – SQL Server 2012
        SQLAuthority News – SQL Server 2012 – Microsoft Learning Training and Certification
        SQLAuthority News – Download Microsoft SQL Server 2012 RTM

    Now I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share with you educational material.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video
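    As a hedged sketch of the restore step discussed in the video: the file paths and logical file names below are assumptions (check yours with RESTORE FILELISTONLY), and the 2012 sample is sometimes distributed as a data file to attach rather than as a backup.

        -- Inspect the logical file names inside the downloaded backup first
        RESTORE FILELISTONLY FROM DISK = 'D:\Backup\AdventureWorks2012.bak';
        GO
        -- Restore, relocating files as needed (paths and names are hypothetical)
        RESTORE DATABASE AdventureWorks2012
        FROM DISK = 'D:\Backup\AdventureWorks2012.bak'
        WITH MOVE 'AdventureWorks2012_Data' TO 'D:\Data\AdventureWorks2012_Data.mdf',
             MOVE 'AdventureWorks2012_Log' TO 'D:\Data\AdventureWorks2012_Log.ldf',
             RECOVERY;
        GO
        -- If you downloaded a data file instead of a backup, attach it (path hypothetical)
        CREATE DATABASE AdventureWorks2012
        ON (FILENAME = 'D:\Data\AdventureWorks2012_Data.mdf')
        FOR ATTACH_REBUILD_LOG;
        GO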


  • MySQL – Introduction to User Defined Variables

    - by Pinal Dave
    MySQL supports user defined variables to hold some data that can be used in a later part of your query. You can save a value to a variable using a SELECT statement and later access its value. Unlike other RDBMSs, you do not need to declare the data type for a variable. The data type is automatically assumed when you assign a value. A value can be assigned to a variable using a SET command as shown below:

        SET @server_type:='MySQL';

    When the above command is executed, the value MySQL is assigned to the variable called @server_type. Now you can use this variable in the later part of the code. If you want to display the value, you can use a SELECT statement.

        SELECT @server_type;

    The result is MySQL. Once the value is assigned it remains for the entire session until changed by later statements. So unlike SQL Server, you do not need to have this as part of the execution code every time. (Because in SQL Server, the variables are execution scoped and dropped after the execution.) You can give the column a name as below:

        SELECT @server_type AS server_type;

    You can also use a SELECT statement to assign and select the value of a variable in a single step.

        SELECT @message:='Welcome to MySQL' AS MESSAGE;

    The result is

        Message
        --------
        Welcome to MySQL

    You can make use of variables to effectively implement many pieces of logic. One useful method is to generate a row number for each row, as shown in the post MySQL – Generating Row Number for Each Row using Variable; a small sketch follows below.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
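    A minimal sketch of that row-number technique; the table and column names here are assumptions for illustration.

        -- Classic MySQL row numbering with a user defined variable
        SET @row_number := 0;
        SELECT (@row_number := @row_number + 1) AS row_num,
               emp_name
        FROM employees
        ORDER BY emp_name;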


  • SQL SERVER – Contest Winner – What Next on SQL in Sixty Seconds – Poll Result

    - by Pinal Dave
    A few days ago, I asked a question on this blog. The question was: what would you like to see in the next episodes of SQL in Sixty Seconds? The poll is still active and posted over here: SQL SERVER – Poll – What would you love to see in SQL in Sixty Seconds? The contest was to suggest the next item of SQL in Sixty Seconds and vote for your choice of subject. There have been plenty of votes in this contest; however, there were only 4 comments on this blog post. Hence, selecting a winner was very simple.

    Result of Poll: It is very clear from the result that most people would like to watch Performance Tuning subjects. I will continue to build videos on this subject in the future.

    Contest Winner: Now is the time for the winner of the contest, who left comments on the blog. The winner is Raelyard. Here is the comment which he left on the blog. raelyard please reach out to me via email and I will send you the gift card.

    Current Contest: Here is the contest which is currently running on this blog. You can take part in the contest and can win a Drone.

    SQL in Sixty Seconds: Here are a few of the episodes of SQL in Sixty Seconds which you can watch. We will have more episodes of SQL in Sixty Seconds from next week which are focused on performance.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Video


  • SQL SERVER – 2011 – SEQUENCE is not IDENTITY

    - by pinaldave
    Yesterday I posted a blog post on the subject SQL SERVER – 2011 – Introduction to SEQUENCE – Simple Example of SEQUENCE, and I received a comment where a user was not clear about the difference between SEQUENCE and IDENTITY. The reality is that SEQUENCE is not like IDENTITY. There is a very clear difference between them. Identity is tied to a single column of a single table. A sequence is always incrementing and is not dependent on any table. Here is a quick example of the same; an IDENTITY sketch for contrast follows below.

        USE AdventureWorks2008R2
        GO
        CREATE SEQUENCE [Seq]
        AS [int]
        START WITH 1
        INCREMENT BY 1
        MAXVALUE 20000
        GO
        -- Run five times
        SELECT NEXT VALUE FOR Seq AS SeqNumber;
        SELECT NEXT VALUE FOR Seq AS SeqNumber;
        SELECT NEXT VALUE FOR Seq AS SeqNumber;
        SELECT NEXT VALUE FOR Seq AS SeqNumber;
        SELECT NEXT VALUE FOR Seq AS SeqNumber;
        GO
        -- Clean Up
        DROP SEQUENCE [Seq]
        GO

    Here is the resultset.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
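    For contrast, a minimal sketch of IDENTITY (the table name is hypothetical), which lives inside one column of one table rather than existing as a standalone object:

        -- IDENTITY values are generated only when rows are inserted into this table
        CREATE TABLE dbo.IdentityDemo (ID INT IDENTITY(1,1), Value VARCHAR(10));
        INSERT INTO dbo.IdentityDemo (Value) VALUES ('First'), ('Second');
        SELECT ID, Value FROM dbo.IdentityDemo; -- ID comes back as 1, 2
        -- There is no way to ask this table for the "next" ID without inserting
        DROP TABLE dbo.IdentityDemo;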


  • SQL SERVER – Microsoft SQL Server 2014 CTP1 Product Guide

    - by Pinal Dave
    Today in a User Group meeting there were lots of questions related to SQL Server 2014. There are plenty of people still using SQL Server 2005, but everybody is curious about what is coming in SQL Server 2014.  Microsoft has officially released the SQL Server 2014 CTP1 Product Guide. You can easily download the product guide and explore various learning resources around SQL Server 2014, as well as explore the new concepts introduced in this latest version. The SQL Server 2014 CTP1 Product Guide contains a few interesting white papers, a datasheet and presentation decks.

    Here is the list of the white papers:
        Mission-Critical Performance and Scale with SQL Server and Windows Server
        Faster Insights from Any Data
        Platform for Hybrid Cloud
        SQL Server In-Memory OLTP Internals Overview for CTP1
        SQL Server 2014 CTP1 Frequently Asked Questions for TechEd 2013 North America

    Here is the list of slide decks:
        SQL Server 2014 Level 100 Deck
        SQL Server 2014 Mission Critical Performance Level 300 Deck
        SQL Server 2014 Faster Insights from Any Data Level 300 Deck
        SQL Server 2014 Platform for Hybrid Cloud Level 100 Deck

    I downloaded the Product Guide earlier and have not yet finished reading everything SQL Server 2014 has to offer. If you want to read about the features I am going to use in SQL Server 2014, you can read over here. Download Microsoft SQL Server 2014 CTP1 Product Guide

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQL SERVER – Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video

    - by pinaldave
    SQL Server Management Studio returns results in Grid View, Text View, and to a file. When we copy results from Grid View to Excel, there is a common complaint that the column headers displayed in the resultset are not copied to Excel. I often spend time performance tuning databases and I run many DMVs in SSMS to get a quick view of the server. In my case it is almost certain that I always need the column headers when I copy my data to Excel or any other place. SQL Server Management Studio has two different ways to do this.

    Method 1: Ad-hoc. When the result is rendered, you can right click on the resultset and click on Copy with Headers. This will copy the headers along with the resultset. Additionally, you can use the shortcut key CTRL+SHIFT+C for copying column headers along with the resultset.

    Method 2: Option setting at SSMS level. This is an SSMS-level setting, and I keep this option always selected as I often need the column headers when I select a resultset. Go to Tools >> Options >> Query Results >> SQL Server >> Results to Grid >> check the box “Include column headers when copying or saving the results.”

    Both of the methods are discussed in the following SQL in Sixty Seconds video. Here is the code used in the video.

    Related Tips in SQL in Sixty Seconds:
        Copy Column Headers in Query Analyzers in Result Set
        Getting Columns Headers without Result Data – SET FMTONLY ON

    I encourage you to submit your ideas for SQL in Sixty Seconds. If we like your idea we promise to share with you educational material.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video


  • SQL SERVER – Download Microsoft SQL Server Compact 4.0 SP1

    - by pinaldave
    Microsoft SQL Server Compact 4.0 is a free, embedded database that software developers can use for building ASP.NET websites and Windows desktop applications. SQL Server Compact 4.0 is the default database for Microsoft WebMatrix. For enhanced development and debugging capabilities, including designer support, Visual Studio can be used to develop ASP.NET web applications and websites using SQL Server Compact 4.0. It is enabled to work in medium or partial trust environments on web servers, and can easily be deployed along with the website to third-party website hosting service providers. SQL Server CE 4.0 also provides stronger data security with the use of SHA2 encryption algorithms for encrypting databases. The latest version also supports a T-SQL syntax enhancement, adding support for OFFSET and FETCH, which can be used to write paging queries; a small sketch follows below. Used with the ADO.NET Entity Framework, SQL Server Compact now supports columns that have server-generated keys, such as identity and rowguid, as well as the code-first programming model. SQL Server Compact 4.0 is freely redistributable under a redistribution license agreement. SQL Server Compact 3.5 and SQL Server Compact 4.0 can be installed and work side by side on a desktop. Download Microsoft SQL Server Compact 4.0 SP1

    Here are my earlier articles on SQL Server CE:
        Difference Between SQL Server Compact Edition (CE) and SQL Server Express Edition
        SQL SERVER – CE – 3 Links to Performance Tuning Compact Edition
        SQL SERVER – CE – List of Information_Schema System Tables
        SQL SERVER – Server Side Paging in SQL Server CE (Compact Edition)
        SQL SERVER – CE – Samples Database for SQL CE 4.0

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
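    A minimal paging sketch using OFFSET and FETCH; the table and column names are assumptions for illustration.

        -- Return the third page of 10 rows, ordered by ProductID
        SELECT ProductID, ProductName
        FROM Products
        ORDER BY ProductID
        OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;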


  • SQL SERVER – Find Details for Statistics of Whole Database – DMV – T-SQL Script

    - by pinaldave
    I was recently asked whether there is a single script which can provide all the necessary details about statistics for any database. This question made me write the following script. I was initially planning to use the sp_helpstats command, but I remembered that it is marked to be deprecated in the future. Again, using DMVs is the right thing to do moving forward. I quickly wrote the following script, which gives a lot more information than sp_helpstats.

        USE AdventureWorks
        GO
        SELECT DISTINCT
            OBJECT_NAME(s.[object_id]) AS TableName,
            c.name AS ColumnName,
            s.name AS StatName,
            s.auto_created,
            s.user_created,
            s.no_recompute,
            s.[object_id],
            s.stats_id,
            sc.stats_column_id,
            sc.column_id,
            STATS_DATE(s.[object_id], s.stats_id) AS LastUpdated
        FROM sys.stats s
        JOIN sys.stats_columns sc ON sc.[object_id] = s.[object_id] AND sc.stats_id = s.stats_id
        JOIN sys.columns c ON c.[object_id] = sc.[object_id] AND c.column_id = sc.column_id
        JOIN sys.partitions par ON par.[object_id] = s.[object_id]
        JOIN sys.objects obj ON par.[object_id] = obj.[object_id]
        WHERE OBJECTPROPERTY(s.OBJECT_ID, 'IsUserTable') = 1
            AND (s.auto_created = 1 OR s.user_created = 1);

    If you have a better script to retrieve information about statistics, please share it here and I will publish it with due credit.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics, Statistics


  • SQL SERVER – Reducing Page Contention on TempDB

    - by pinaldave
    I recently received the following email: “We are using TraceFlag 1118 to reduce the tempDB contention on our servers (2000 and 2005). What is your opinion? We have read lots of material; would you please answer me in a single line?” Wow, this was a very interesting question. What intrigued me was the request at the end, where I am asked to answer in a single line. There is something about this email that makes me feel like blogging about it here.

    I think I can talk about this subject forever; well, there is no clear answer. There are so many caveats about everything. Again, I must stay honest to the request about answering in a single line. I also do not like to give a plain YES/NO answer. What should I do? Let me ask this question to the community today. What would you answer to this email?

    Let me start by answering it myself in one line and taking one side: “I enable this trace flag in SQL Server 2000 without hot patch or service pack, and not in later versions (2005+) onwards, as the code is improved.” What do you do in this case? The best answer will be featured in this blog with due credit.

    For further reading and a hint, here is the Microsoft KB which I think is very helpful. In quick summary (read the KB for accuracy): when an object is first created, its first eight pages are allocated from mixed extents. This trace flag makes SQL Server allocate uniform extents instead, reducing contention. You can enable this trace flag at startup; a hedged sketch follows below.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL TempDB, TempDB
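    A hedged sketch of enabling the flag. DBCC TRACEON applies only until the next restart; the startup-parameter route is the way to persist it.

        -- Enable trace flag 1118 globally on the running instance
        DBCC TRACEON (1118, -1);
        -- Verify its status
        DBCC TRACESTATUS (1118);
        -- To persist it across restarts, add -T1118 to the SQL Server
        -- service startup parameters instead.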


  • SQL SERVER – 2000 – DBCC SQLPERF(waitstats) – Wait Type – Day 24 of 28

    - by pinaldave
    I have received many comments, emails, suggestions and motivation for my current series on wait types and wait statistics. One of the questions which I keep on receiving almost every other day is whether all of the discussions I have presented so far are also applicable to SQL Server 2000. Additionally, I receive another question asking me whether wait statistics matter in SQL Server 2000, and if they do, how to measure wait types for SQL Server 2000. In SQL Server 2000, you can run the following command to get a list of all the wait types:

        DBCC SQLPERF(waitstats)

    The query above will also work in SQL Server 2005/2008/R2 because of backward compatibility. As you might have noticed, I have been discussing everything keeping SQL Server 2005+ in mind, but I have given little consideration to SQL Server 2000. However, I am pretty sure that most of the suggestions I have provided are applicable to SQL Server 2000. The wait types I have been discussing mostly exist in SQL Server 2000 as well; the difference is that the 2000 version lags behind the recent releases, but measuring them is still worth it. Wait types are essential for measuring performance bottlenecks, and it goes without saying that I am a big fan of them for identifying performance bottlenecks; the 2005+ equivalent is sketched below. Please read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
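    On SQL Server 2005 and later, the DMV used throughout this series gives the same information; a minimal sketch:

        -- DMV equivalent of DBCC SQLPERF(waitstats) on SQL Server 2005+
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;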


  • SQL SERVER – Get Date and Time From Current DateTime – SQL in Sixty Seconds #025 – Video

    - by pinaldave
    This is the 25th video of the SQL in Sixty Seconds series, which we started a few months ago. Even though this is the 25th video, it seems like we started just a few days ago. The best part of SQL in Sixty Seconds is that one can learn something new in less than sixty seconds. Many of these concepts are not new to most of us, but all of us have 60 seconds to refresh our memories. In this video I have touched on a very simple question which I receive very frequently on this blog.

    Q1) How to get the current date time?
    Q2) How to get only the date from a datetime?
    Q3) How to get only the time from a datetime?

    I have created a sixty second video on this subject and hopefully this will help many beginners in the SQL Server field. Here is a similar script to the one I used in the video.

        SELECT GETDATE()
        GO
        -- SQL Server 2000/2005
        SELECT CONVERT(VARCHAR(8), GETDATE(), 108) AS HourMinuteSecond,
               CONVERT(VARCHAR(8), GETDATE(), 101) AS DateOnly;
        GO
        -- SQL Server 2008 Onwards
        SELECT CONVERT(TIME, GETDATE()) AS HourMinuteSeconds;
        SELECT CONVERT(DATE, GETDATE()) AS DateOnly;
        GO

    Related Tips in SQL in Sixty Seconds:
        Retrieve Current Date Time in SQL Server CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}
        Get Time in Hour:Minute Format from a Datetime – Get Date Part Only from Datetime
        Get Current System Date Time
        Get Date Time in Any Format – UDF – User Defined Functions
        Date and Time Functions – EOMONTH() – A Quick Introduction
        DATE and TIME in SQL Server 2008

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share with you educational material. Image Credit: Movie Gone in 60 Seconds

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video


  • SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28

    - by pinaldave
    Earlier, I tried to cover some important points about wait stats in detail. Here are some points that we covered earlier:
        DMVs related to wait stats reset when we reset SQL Server services
        DMVs related to wait stats reset when we manually reset the wait types

    However, at times there is a need to make this data persistent so that we can take a look at it later on. Sometimes, performance tuning experts make some modifications to the server and try to measure the wait stats at that point in time and again after some duration. I use the following method to measure the wait stats over time.

        -- Create Table
        CREATE TABLE [MyWaitStatTable](
            [wait_type] [nvarchar](60) NOT NULL,
            [waiting_tasks_count] [bigint] NOT NULL,
            [wait_time_ms] [bigint] NOT NULL,
            [max_wait_time_ms] [bigint] NOT NULL,
            [signal_wait_time_ms] [bigint] NOT NULL,
            [CurrentDateTime] DATETIME NOT NULL,
            [Flag] INT
        )
        GO
        -- Populate Table at Time 1
        INSERT INTO MyWaitStatTable
            ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
        SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 1
        FROM sys.dm_os_wait_stats
        GO
        -- Desired Delay (for one hour)
        WAITFOR DELAY '01:00:00'
        -- Populate Table at Time 2
        INSERT INTO MyWaitStatTable
            ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
        SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 2
        FROM sys.dm_os_wait_stats
        GO
        -- Check the difference between Time 1 and Time 2
        SELECT T1.wait_type,
               T1.wait_time_ms Original_WaitTime,
               T2.wait_time_ms LaterWaitTime,
               (T2.wait_time_ms - T1.wait_time_ms) DifferenceWaitTime
        FROM MyWaitStatTable T1
        INNER JOIN MyWaitStatTable T2 ON T1.wait_type = T2.wait_type
        WHERE T2.wait_time_ms > T1.wait_time_ms
            AND T1.Flag = 1 AND T2.Flag = 2
        ORDER BY DifferenceWaitTime DESC
        GO
        -- Clean up
        DROP TABLE MyWaitStatTable
        GO

    If you notice the script, I have used an additional column called Flag. I use it to record when I captured the wait stats, and then use it in my SELECT query to select the wait stats related to that time group. Many times I capture more than 5 or 6 different sets of wait stats, and I find this method very convenient for finding the difference between them. In a future blog post, we will talk about specific wait stats. Read all the posts in the Wait Types and Queue series.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology


  • SQL SERVER – Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video

    - by pinaldave
    There are a few questions I often get asked. It is interesting how, in our daily work, all of us often need the same kind of information at the same time. Here are examples of such similar questions:
        How many user created tables are there in the database?
        How many non clustered indexes does each of the tables in the database have?
        Is a table a heap, or does it have a clustered index on it?
        How many rows does each of the tables in the database contain?

    I finally wrote down a very quick script (in less than sixty seconds when I originally wrote it) which can answer the above questions. I also created a very quick video to explain the results and how to execute the script. Here is the complete script which I have used in the SQL in Sixty Seconds video.

        SELECT [schema_name] = s.name,
               table_name = o.name,
               MAX(i1.type_desc) ClusteredIndexorHeap,
               COUNT(i.TYPE) NoOfNonClusteredIndex,
               p.rows
        FROM sys.indexes i
        INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
        INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
        LEFT JOIN sys.partitions p ON p.OBJECT_ID = o.OBJECT_ID AND p.index_id IN (0,1)
        LEFT JOIN sys.indexes i1 ON i.OBJECT_ID = i1.OBJECT_ID AND i1.TYPE IN (0,1)
        WHERE o.TYPE IN ('U')
            AND i.TYPE = 2
        GROUP BY s.name, o.name, p.rows
        ORDER BY schema_name, table_name

    Related Tips in SQL in Sixty Seconds:
        Find Row Count in Table – Find Largest Table in Database
        Find Row Count in Table – Find Largest Table in Database – T-SQL
        Identify Numbers of Non Clustered Index on Tables for Entire Database
        Index Levels, Page Count, Record Count and DMV – sys.dm_db_index_physical_stats
        Index Levels and Delete Operations – Page Level Observation

    What would you like to see in the next SQL in Sixty Seconds video?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel


  • SQL SERVER – Get File Statistics Using fn_virtualfilestats

    - by pinaldave
    Quite often when I am staring at my SSMS I wonder what is going on under the hood of my SQL Server. I often want to know which database is very busy and which database is a bit slow because of IO issues. Sometimes I think at the file level as well: I want to know which MDF or NDF is busiest and doing most of the work. The following query gets those results very quickly.

        SELECT DB_NAME(vfs.DbId) DatabaseName,
               mf.name,
               mf.physical_name,
               vfs.BytesRead,
               vfs.BytesWritten,
               vfs.IoStallMS,
               vfs.IoStallReadMS,
               vfs.IoStallWriteMS,
               vfs.NumberReads,
               vfs.NumberWrites,
               (Size*8)/1024 Size_MB
        FROM ::fn_virtualfilestats(NULL,NULL) vfs
        INNER JOIN sys.master_files mf ON mf.database_id = vfs.DbId
            AND mf.FILE_ID = vfs.FileId
        GO

    When you run the above query you will get much valuable information, such as the size of each file as well as how many reads and writes are done against each file. It also displays the read/write data in bytes. If there has been any IO stall (delay) in reads or writes, you can see that as well. I have kept this handy but had not shared it on the blog earlier.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL View, T SQL, Technology Tagged: Statistics


  • SQL SERVER – Three Methods to Insert Multiple Rows into Single Table – SQL in Sixty Seconds #024 – Video

    - by pinaldave
    One of the biggest asks I have always received from developers is whether there is any way to insert multiple rows into a single table in a single statement. Currently, when developers have to insert any values into a table, they have to write multiple insert statements. Not only is this boring, it is very time consuming as well. Additionally, one has to repeat the same syntax so many times that the word boring becomes an understatement. In the following quick video we have demonstrated three different methods to insert multiple values into a single table.

        -- Insert Multiple Values into SQL Server
        CREATE TABLE #SQLAuthority (ID INT, Value VARCHAR(100));

    Method 1: Traditional INSERT…VALUES

        -- Method 1 - Traditional Insert
        INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First');
        INSERT INTO #SQLAuthority (ID, Value) VALUES (2, 'Second');
        INSERT INTO #SQLAuthority (ID, Value) VALUES (3, 'Third');
        -- Clean up
        TRUNCATE TABLE #SQLAuthority;

    Method 2: INSERT…SELECT

        -- Method 2 - Select Union Insert
        INSERT INTO #SQLAuthority (ID, Value)
        SELECT 1, 'First'
        UNION ALL
        SELECT 2, 'Second'
        UNION ALL
        SELECT 3, 'Third';
        -- Clean up
        TRUNCATE TABLE #SQLAuthority;

    Method 3: SQL Server 2008+ Row Construction

        -- Method 3 - SQL Server 2008+ Row Construction
        INSERT INTO #SQLAuthority (ID, Value)
        VALUES (1, 'First'), (2, 'Second'), (3, 'Third');
        -- Clean up
        DROP TABLE #SQLAuthority;

    Related Tips in SQL in Sixty Seconds:
        SQL SERVER – Insert Multiple Records Using One Insert Statement – Use of UNION ALL
        SQL SERVER – 2008 – Insert Multiple Records Using One Insert Statement – Use of Row Constructor

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share with you educational material.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video


  • SQL SERVER – 2014 CTP1 Available for Download – SQL SERVER 2014 Community Technology Preview 1

    - by Pinal Dave
    Microsoft announced at TechEd Europe that SQL Server 2014 CTP 1 is available for download. You can download SQL Server 2014 CTP1 from here. Additionally, there is in-depth documentation of the product in the Product Guide over here. In this blog post I discuss in depth the salient features I was looking forward to in the new version:
        AlwaysOn now supports 8 secondaries instead of 4
        Online indexing at the partition level – a good thing, as index rebuilding can now be done partition by partition
        Statistics at the partition level – this will be a huge improvement in performance
        In-Memory OLTP, which provides in-memory storage for the most often used tables in SQL Server
        Columnstore indexes can be updated – I just can’t wait for this feature
        Resource Governor can control IO along with CPU and memory
        Increased performance by extending the SQL Server in-memory buffer pool to SSDs
        Backup to Azure Storage (a hedged sketch follows below)

    You can read about the new features of SQL Server 2014 in the following links:
        What’s New (Database Engine)
        What’s New in Analysis Services and Business Intelligence
        What’s New (Integration Services)
        What’s New (Replication)
        What’s New (Reporting Services)

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Service Pack, SQL Tips and Tricks, T SQL, Technology Tagged: CTP
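    As a hedged sketch of the backup-to-Azure feature: the storage account name, URL, and credential name below are assumptions for illustration, and the credential must be created first from your storage account key.

        -- One-time setup: a credential holding the storage account key (values hypothetical)
        CREATE CREDENTIAL MyAzureCredential
        WITH IDENTITY = 'mystorageaccount',
             SECRET = '<storage account access key>';
        GO
        -- SQL Server 2014: back up straight to Azure blob storage
        BACKUP DATABASE AdventureWorks2012
        TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2012.bak'
        WITH CREDENTIAL = 'MyAzureCredential';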


  • SQL SERVER – Removing Leading Zeros From Column in Table – Part 2

    - by pinaldave
    Earlier I wrote a blog post about Removing Leading Zeros from a Column in a Table. It was a great coincidence that my friend Madhivanan (no need of an introduction for him) also posted a similar article over on BeyondRelational.com. I strongly suggest reading his blog as well, as he has suggested some cool solutions to the same problem. The original blog post asked two questions: 1) whether my sample for testing was correct, and 2) whether there is any better method to achieve the same. The response was amazing. I am proud of our SQL community, in which we all keep on improving on each other’s contributions. There are some really good suggestions in the comments. Let us go over them right now.

    Improving the ResultSet: I had missed including all-zero values in my sample set, which was an oversight. Here is the new sample, which includes all-zero values as well.

        USE tempdb
        GO
        -- Create sample table
        CREATE TABLE Table1 (Col1 VARCHAR(100))
        INSERT INTO Table1 (Col1)
        SELECT '0001'
        UNION ALL SELECT '000100'
        UNION ALL SELECT '100100'
        UNION ALL SELECT '000 0001'
        UNION ALL SELECT '00.001'
        UNION ALL SELECT '01.001'
        UNION ALL SELECT '0000'
        GO

    Now let us go over some of the fantastic solutions which we have received.

    Response from Rainmaker:

        SELECT CASE PATINDEX('%[^0 ]%', Col1 + ' ')
               WHEN 0 THEN ''
               ELSE SUBSTRING(Col1, PATINDEX('%[^0 ]%', Col1 + ' '), LEN(Col1)) END
        FROM Table1

    Response from Harsh, Solution 1:

        SELECT SUBSTRING(Col1, PATINDEX('%[^0 ]%', Col1 + 'a'), LEN(Col1))
        FROM Table1

    Response from Harsh, Solution 2:

        SELECT RIGHT(Col1, LEN(Col1) + 1 - PATINDEX('%[^0 ]%', Col1 + 'a'))
        FROM Table1

    Response from lucazav:

        SELECT T.Col1,
               label = CAST(CAST(REPLACE(T.Col1, ' ', '') AS FLOAT) AS VARCHAR(10))
        FROM Table1 AS T

    Response from iamAkashSingh:

        SELECT REPLACE(LTRIM(REPLACE(col1, '0', ' ')), ' ', '0')
        FROM table1

    Here is the resultset of the above scripts. Each will remove any leading zeros or spaces and display the number accordingly. If you believe there is a better solution, please leave a comment. I am just glad to see so many varied responses, and all of them teach us something new.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQL SERVER – Installing Data Quality Services (DQS) on SQL Server 2012

    - by pinaldave
    Data Quality Services is a very interesting enhancement in SQL Server 2012. My friend and SQL Server expert Govind Kanshi has written an excellent article on this subject earlier on his blog. Yesterday I stumbled upon his blog one more time and decided to experiment with DQS myself. I have a basic understanding of DQS and MDS, so I knew I needed to start with the DQS client. However, when I tried to find the DQS client I was not able to find it under my SQL Server 2012 installation. I quickly realized that I needed to install the DQS client separately. You will find the DQS installer under the SQL Server 2012 >> Data Quality Services directory. The pre-requisites of DQS are Master Data Services (MDS) and IIS. If you have not installed IIS, you can follow the simple steps and install IIS on your machine. Once the pre-requisites are installed, click on the MDS installer once again and it will install DQS just fine. Be patient with the installer, as it can take a bit longer if your machine has a low-end configuration. Once the installation is over you will be able to expand the SQL Server 2012 >> Data Quality Services directory and you will notice that it has a new item called Data Quality Client.  Click on it and it will open the client. In future blog posts we will go over more details about DQS and detailed practical examples.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: Data Quality Services


  • SQL SERVER – Winners – Contest Win Joes 2 Pros Combo (USD 198)

    - by pinaldave
    Earlier this week we had a contest running on the blog where we were giving away USD 198 worth of Joes 2 Pros books. We had over 500 responses during the five days of the contest. After removing duplicate and incorrect responses we had a total of 416 valid responses across the five days combined. We got the most correct answers on day 2 and the fewest on day 5. Well, enough of the statistics. Let us go over the winners’ names. The winners have been selected randomly by one of the book editors of Joes 2 Pros.

    SQL Server Joes 2 Pros Learning Kit (5 Books):
        Day 1 Winner – USA: Philip Dacosta; India: Sandeep Mittal
        Day 2 Winner – USA: Michael Evans; India: Satyanarayana Raju Pakalapati
        Day 3 Winner – USA: Ratna Pulapaka; India: Sandip Pani
        Day 4 Winner – USA: Ramlal Raghavan; India: Dattatrey Sindol
        Day 5 Winner – USA: David Hall; India: Mohit Garg

    I congratulate all the winners on their participation. All of you will receive emails from us. You will have to reply to the email with your physical address. Once you receive the email, please reply within 3 days so we can ship the 5-book kits to you immediately.

    Bonus Winners: Additionally, I had announced that every day I would select a winner from the readers who left comments with their favorite blog post. Here are the winners with their favorite blog posts:
        Day 1: Prasanna kumar.D [Favorite Post]
        Day 2: Ganesh narim [Favorite Post]
        Day 3: Sreelekha [Favorite Post]
        Day 4: P.Anish Shenoy [Favorite Post]
        Day 5: Rikhil [Favorite Post]

    All the bonus winners will receive my print book SQL Wait Stats if your shipping address is in India, or a Pluralsight subscription if you are outside India. If you are not a winner of the contest but still want to learn SQL Server, you can get the book from here: Amazon | 1 | 2 | 3 | 4 | 5 | Flipkart | Indiaplaza

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQL SERVER – A Puzzle – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value

    - by pinaldave
    Yesterday my friend Vinod Kumar wrote an excellent blog post, SQL Server 2012: Using SEQUENCE. I personally enjoyed reading the content on this subject. While I was reading the blog post, I thought of a very simple new puzzle. Let us see if we can solve it and learn a bit more about SEQUENCE. Here is the script which I executed.

        USE TempDB
        GO
        -- Create sequence
        CREATE SEQUENCE dbo.SequenceID
        AS BIGINT
        START WITH 3
        INCREMENT BY 1
        MINVALUE 1
        MAXVALUE 5
        CYCLE
        NO CACHE;
        GO
        -- Following will return 3
        SELECT next value FOR dbo.SequenceID;
        -- Following will return 4
        SELECT next value FOR dbo.SequenceID;
        -- Following will return 5
        SELECT next value FOR dbo.SequenceID;
        -- Following will return which number?
        SELECT next value FOR dbo.SequenceID;
        -- Clean up
        DROP SEQUENCE dbo.SequenceID;
        GO

    The above script gave me the following resultset. 3 is the starting value and 5 is the maximum value. Once the sequence reaches the maximum value, what happens, and WHY?

    Bonus question: If you combine two of these SELECT statements with a UNION, it also throws an error. What is the reason behind it? Can you attempt to answer this question without running the code in SQL Server 2012? I am very confident that irrespective of the SQL Server version you are running, you will have great learning. I will follow up with the answer in the comments below.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQL SERVER – FIX: ERROR Msg 5169, Level 16: FILEGROWTH cannot be greater than MAXSIZE for file

    - by pinaldave
    I am writing this blog post right after resolving this error on one of the systems. Recently one of my friends, an expert in infrastructure as well as private cloud, was working on a SQL Server installation. Please note he is seriously expert in what he does, but he has never worked with SQL Server before and has absolutely no experience with its installation. He was modifying a database file and kept on getting the following error. As soon as he saw me, he asked me where the max file size setting is so he could change it. Let us quickly re-create the scenario he was facing.

    Error Message:

        Msg 5169, Level 16, State 1, Line 1
        FILEGROWTH cannot be greater than MAXSIZE for file 'NewDB'.

    Creating the Scenario:

        CREATE DATABASE [NewDB]
        ON PRIMARY
            (NAME = N'NewDB', FILENAME = N'D:\NewDB.mdf',
             SIZE = 4096KB, FILEGROWTH = 1024KB, MAXSIZE = 4096KB)
        LOG ON
            (NAME = N'NewDB_log', FILENAME = N'D:\NewDB_log.ldf',
             SIZE = 1024KB, FILEGROWTH = 10%)
        GO

    Now let us see what exact command was creating the error for him.

        USE [master]
        GO
        ALTER DATABASE [NewDB]
        MODIFY FILE (NAME = N'NewDB', FILEGROWTH = 1024MB)
        GO

    Workaround / Fix / Solution: The reason for the error is very simple. He was trying to set the filegrowth to a much higher value than the maximum file size specified for the database. There are two ways we can fix it.

    Method 1: Reduce the filegrowth to a value lower than the file’s maxsize.

        USE [master]
        GO
        ALTER DATABASE [NewDB]
        MODIFY FILE (NAME = N'NewDB', FILEGROWTH = 1024KB)
        GO

    Method 2: Increase the maxsize of the file so it is greater than the new filegrowth.

        USE [master]
        GO
        ALTER DATABASE [NewDB]
        MODIFY FILE (NAME = N'NewDB', FILEGROWTH = 1024MB, MAXSIZE = 4096MB)
        GO

    I think this blog post will help everybody who is facing similar issues.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQL SERVER – DATEDIFF – Accuracy of Various Dateparts

    - by pinaldave
    I recently received the following question through email and I found it very interesting, so I want to share it with you.

    “Hi Pinal, in the SQL statement below the time difference between the two given dates is 3 seconds, but when checked in terms of minutes it says 1 minute (whereas the actual difference is 0.05 minutes):

        SELECT DATEDIFF(MI, '2011-10-14 02:18:58', '2011-10-14 02:19:01') AS MIN_DIFF

    Is this a BUG in SQL Server?”

    The answer is NO. It is not a bug; it is a feature that works like that. Let us understand it in a bit more detail. When you instruct SQL Server to find the time difference in minutes, it counts only the minute boundaries crossed between the two values and completely ignores the hour, second, millisecond, etc. So in terms of difference in minutes, it is indeed 1. The following also makes clear how DATEDIFF works:

        SELECT DATEDIFF(YEAR, '2011-12-31 23:59:59', '2012-01-01 00:00:00') AS YEAR_DIFF

    The difference between the above dates is just 1 second, but in terms of year difference it shows 1. If you want accuracy in seconds, you need to use a different approach. In the first example, the accurate method is to find the number of seconds first and then divide it by 60 to convert it to minutes.

        SELECT DATEDIFF(second, '2011-10-14 02:18:58', '2011-10-14 02:19:01') / 60.0 AS MIN_DIFF

    Even though the concept is very simple, it is always a good idea to refresh it. Please share your related experience with me through your comments.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQL SERVER – SQL Server Performance: Indexing Basics – SQL in Sixty Seconds #006 – Video

    - by pinaldave
    A DBA’s role is critical because a production environment has to run 24×7; hence maintenance, troubleshooting, and quick resolutions are the need of the hour.  The first baby step into any performance tuning exercise in SQL Server involves creating, analyzing, and maintaining indexes; a small sketch of that step follows below. Though we learned indexing concepts in our college days, indexing implementation inside SQL Server can vary.  Understanding this behavior and designing our applications appropriately will make sure the application performs to its highest potential. Vinod Kumar and I have often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexes; however, there is some minimum expertise one should gain if performance is a concern.

    More on Indexes:
        SQL Index
        SQL Performance

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. Here is the interview of Vinod Kumar by myself.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video
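    As a minimal hedged sketch of that first baby step (the table, column, and index names below are hypothetical): create an index, then check whether the engine is actually using it.

        -- Create a nonclustered index on a commonly filtered column
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID);
        GO
        -- Analyze: how often has the index been used since the last restart?
        SELECT OBJECT_NAME(s.[object_id]) AS TableName,
               i.name AS IndexName,
               s.user_seeks, s.user_scans, s.user_updates
        FROM sys.dm_db_index_usage_stats s
        INNER JOIN sys.indexes i
            ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
        WHERE s.database_id = DB_ID()
          AND OBJECT_NAME(s.[object_id]) = 'Orders';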


  • SQL SERVER – Determine if SSRS 2012 is Installed on your SQL Server

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site.

    Determine if SSRS 2012 is Installed on your SQL Server: You may already have SSRS, or you may need to install it. Before doing any installation it makes sense to know where you are now. If you happened to install SQL Server with all features, you have the tools you need. There are two tools you need: SQL Server Data Tools and Reporting Services installed in Native Mode.

    To find out if SQL Server Data Tools (SSDT) is installed, click the Start button, go to All Programs, and expand SQL Server 2012. Look for SQL Server Data Tools.

    Now, let’s check to see if SQL Server Reporting Services is installed. Click Start > All Programs > SQL Server 2012 > Configuration Tools > SQL Server Configuration Manager. Once Configuration Manager is running, select SQL Server Services. Look for SQL Server Reporting Services in the list of services installed. If you have both the SQL Server Reporting Services service and SQL Server Data Tools installed, you will not have to install them again. You may have SQL Server installed but be missing the Data Tools or the SSRS service or both. In tomorrow’s blog post we will go over how to install based on where you are now.

    Tomorrow’s Post: Tomorrow’s blog post will show how to install and configure SSRS. If you want to learn SSRS in easy and simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

