Search Results

Search found 27658 results on 1107 pages for 'sql dba'.


  • Parsing the sqlserver.sql_text Action in Extended Events by Offsets

    - by Jonathan Kehayias
    A couple of weeks back I received an email from a member of the community who was reading the XEvent a Day blog series and had a couple of interesting questions about Extended Events.  This person had created an Event Session that captured the sqlserver.sql_statement_completed and sqlserver.sql_statement_starting Events and wanted to know how to do a correlation between the related Events so that the offset information from the starting Event could be used to find the statement of the completed...(read more)
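
    For readers wanting to see the offset arithmetic in isolation, here is a minimal sketch (variable values are hypothetical; in Extended Events the offset and offset_end columns are byte offsets into the Unicode batch text, hence the division by 2):

        -- Hypothetical stand-ins for values read from the captured events
        DECLARE @sql_text NVARCHAR(MAX) = N'SELECT 1; SELECT 2; SELECT 3;';
        DECLARE @offset INT = 20;      -- byte offset where the statement starts
        DECLARE @offset_end INT = 36;  -- byte offset where it ends (-1 = end of batch)

        SELECT SUBSTRING(@sql_text,
                         (@offset / 2) + 1,
                         ((CASE @offset_end WHEN -1 THEN DATALENGTH(@sql_text)
                                            ELSE @offset_end END - @offset) / 2) + 1)
               AS statement_text;  -- returns: SELECT 2;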

    Read the article

  • An XEvent a Day (30 of 31) – Tracking Session and Statement Level Waits

    - by Jonathan Kehayias
    While attending PASS Summit this year, I got the opportunity to hang out with Brent Ozar ( Blog | Twitter ) one afternoon while he did some work for Yanni Robel ( Blog | Twitter ). After looking at the wait stats information, Brent pointed out some potential problem points, and based on that information I pulled up my code for my PASS session the next day on Wait Statistics and Extended Events and made some changes to one of the demos so that the Event Session only focused on those potentially...(read more)
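
    As a rough illustration of the kind of session involved (a sketch, not the author's exact demo; the session ID is a placeholder), SQL Server 2008 exposes the sqlos.wait_info event, which can be filtered to a single session:

        CREATE EVENT SESSION [TrackSessionWaits] ON SERVER
        ADD EVENT sqlos.wait_info
            (WHERE sqlserver.session_id = 53)  -- hypothetical spid under investigation
        ADD TARGET package0.ring_buffer
        WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);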

    Read the article

  • An XEvent a Day (24 of 31) – What is the package0.callstack Action?

    - by Jonathan Kehayias
    One of the actions inside of Extended Events is the package0.callstack, and the only description provided by sys.dm_xe_objects for the object is "16-frame call stack". If you look back at The system_health Session blog post, you'll notice that the package0.callstack Action has been added to a number of the Events that the PSS team thought were significant enough to include in the Event Session. We can trigger an event that will be logged by our system_health Event Session by raising an error of severity...(read more)
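
    The excerpt is cut off, but the usual way to fire such an event is a high-severity error; a minimal sketch (severity 19+ requires sysadmin and WITH LOG, and severity 20 also terminates the connection, so only try this on a test instance):

        -- Raise an error severe enough for system_health to record it,
        -- along with the package0.callstack action attached to error_reported
        RAISERROR(N'Test error to capture the callstack', 20, 1) WITH LOG;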

    Read the article

  • An XEvent a Day (26 of 31) – Configuring Session Options

    - by Jonathan Kehayias
    There are 7 session-level options that can be configured in Extended Events that affect the way an Event Session operates. These options can impact performance and should be considered when configuring an Event Session. I have made use of a few of these periodically throughout this month's blog posts, and in today's blog post I'll cover each of the options separately and provide further information about their usage. Mike Wachal from the Extended Events team at Microsoft talked...(read more)
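
    For orientation, all seven options surface in the WITH clause of CREATE EVENT SESSION; a sketch showing each one with its default value (the session itself is illustrative):

        CREATE EVENT SESSION [OptionsDemo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.ring_buffer
        WITH (MAX_MEMORY = 4 MB,
              EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
              MAX_DISPATCH_LATENCY = 30 SECONDS,
              MAX_EVENT_SIZE = 0 KB,
              MEMORY_PARTITION_MODE = NONE,
              TRACK_CAUSALITY = OFF,
              STARTUP_STATE = OFF);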

    Read the article

  • Here Comes the FY11 Earmarks Database

    - by Mike C
    I'm really interested in politics (don't worry, I'm not going to start bashing politicians and hammering you with political rage). The point is, when the U.S. FY11 Omnibus Spending Bill (the bill to fund the U.S. Government for another year) was announced, it piqued my interest. I'm fascinated by "earmarks" (also affectionately known as "pork"). For those who aren't familiar with U.S. politics, "earmark" is a slang term for "Congressionally Directed Spending". It's basically the set of provisions...(read more)

    Read the article

  • SQL Server v.Next (Denali) : Why you should start testing early

    - by AaronBertrand
    Denali is coming, whether you like it or not. You may not be an early adopter and you may not have plans on your current calendar, but at some point you will need to move your apps and databases to this release - or one very much like it. There are a lot of great new features you will be able to take advantage of, but not everything is a double rainbow. There are some changes that will break your spirit if you let them. What does it mean? I go over several breaking changes in my presentation that...(read more)

    Read the article

  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allowed for rapid-fire rounds of development and refactoring, while always retaining a "production-ready" state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, much of his frustration directed toward Oracle DBAs in particular.

    In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and "still favored tools I'd have been embarrassed to use in the '80s". DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the "mother ship". I sat blinking in astonishment at the speaker's vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent.

    Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server database professionals, or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen to be holding onto a particular product by the whites of our knuckles, to the exclusion of all else? If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don't conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL Standards, where possible, thanks to considerable lobbying by the community. The implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views.

    Developers who've attempted to apply a standard query that works on MySQL, or some other database, but doesn't produce the expected results on SQL Server are advised to shun the Standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views, out of necessity. And yet, as database professionals committed to supporting the best databases for the business, whatever they are now and in the future, surely our hearts should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause. And they should sink further when we read notes in the Microsoft documentation informing us that we shouldn't rely on INFORMATION_SCHEMA to reliably identify the schema of an object, in SQL Server!
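
    To make the guard-clause example concrete (the table name is hypothetical), the Standards-based and vendor-specific forms look like this:

        -- Standards-based guard clause, portable across RDBMSs (with the caveats above):
        IF EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
                   WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'Widget')
            PRINT 'table exists';

        -- The vendor-specific equivalent SQL Server developers are steered towards:
        IF OBJECT_ID(N'dbo.Widget', N'U') IS NOT NULL
            PRINT 'table exists';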

    Read the article

  • CONTAINSTABLE with wildcard works differently in SQL Server 2005 and SQL Server 2008?

    - by musuk
    I have two identical databases, one on SQL Server 2005 and one on SQL Server 2008; both use the SQL_Latin1_General_CP1_CI_AS collation, and the full-text search catalogs have the same settings. Both databases contain a table with the same data, an NTEXT string: "...kræve en forklaring fra miljøminister Connie Hedegaard..". My problem is that CONTAINSTABLE on SQL Server 2008 finds nothing if the query is: select * from ContainsTable(SearchIndex_7, Content, '"miljø*"') ct but SQL Server 2005 works perfectly and finds the necessary record. SQL Server 2008 finds the necessary record if the query is: select * from ContainsTable(SearchIndex_7, Content, '"milj*"') ct or select * from ContainsTable(SearchIndex_7, Content, '"miljøminister"') What can be the reason for such strange behavior?

    Read the article

  • Two Instances of SQL Server (2005 and 2008)

    - by Felipe
    Hi all, I installed Visual Studio 2008 Professional on my machine, and its setup installed a SQL Server 2005 Express instance, which I was using just fine. I also installed SQL Server Management Studio, and it works great. Then this week I installed Visual Studio 2010 Pro, and its setup installed SQL Server 2008 Express, overwriting my SQL Server 2005 Express instance. Now I'd like to know how I can have two instances of SQL Server Express on my machine, 2005 and 2008. I can only access 2008, not 2005, and my projects use 2005. Somebody help me! Thanks. Bye

    Read the article

  • Cannot open SQL 2005 database in SQL Server Management Studio 2008 R2 on Windows 7

    - by Darryl Lawrence
    I have Windows 7 64-bit as my OS with SQL Server 2008 R2 installed. I can connect to SQL Server 2008 databases, but cannot connect to a SQL Server 2005 database. I can, however, connect to the 2005 SQL database from a PC that has Windows XP as the OS and also has SQL Server 2008 R2 installed. So it seems that it works fine on XP but not on Windows 7 (32- or 64-bit). Is this an operating system issue? Error message: Cannot connect to OMRSQLV016\PRODSQL002. =================================== A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (.Net SqlClient Data Provider) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476 Error Number: -1 Severity: 20 State: 0

    Read the article

  • Which SQL query is faster? Filter on Join criteria or Where clause?

    - by Jon Erickson
    Compare these two queries. Is it faster to put the filter on the join criteria or in the WHERE clause? I have always felt that it is faster on the join criteria because it reduces the result set at the soonest possible moment, but I don't know for sure. I'm going to build some tests to see, but I also wanted to get opinions on which is clearer to read as well.

    Query 1:

    SELECT *
    FROM TableA a
    INNER JOIN TableXRef x ON a.ID = x.TableAID
    INNER JOIN TableB b ON x.TableBID = b.ID
    WHERE a.ID = 1 /* <-- Filter here? */

    Query 2:

    SELECT *
    FROM TableA a
    INNER JOIN TableXRef x ON a.ID = x.TableAID
                          AND a.ID = 1 /* <-- Or filter here? */
    INNER JOIN TableB b ON x.TableBID = b.ID

    Read the article

  • SQL Server 2005 command-line install error: ADD_LOCAL property already installed

    - by Belliez
    I have a silent installation of SQL Server 2005 that works great when installing SQL Server on a machine that does not already have it installed. I use the following parameters when I perform the installation:

    #define SQL_SILENT "/passive /qb"
    #define SQL_USERNAME "username=MyUserName"
    #define SQL_COMPANYNAME "companyname=MyCompanyName"
    #define SQL_ADDLOCAL "ADDLOCAL=SQL_Engine"
    #define SQL_UPGRADE ""
    #define SQL_DISABLENETWORKPROTOCOLS "disablenetworkprotocols=0"
    #define SQL_INSTANCENAME "instancename=MYSQLINSTANCE"
    #define SQL_SQLAUTOSTART "SQLAUTOSTART=1"
    #define SQL_SECURITYMODE "SECURITYMODE=SQL"
    #define SQL_SAPWD "SAPWD=StrongPassword"
    #define SQL_SQLACCOUNT "SQLACCOUNT="""""
    #define SQL_SQLPASSWORD "SQLPASSWORD="""""

    It installs the instance of SQL Server Express without a problem. However, when I attempt to install SQL Server on a machine that already has another instance with components, I get the following error: "A component that you have specified in the ADD_LOCAL property is already installed. To upgrade the existing component, refer to the template.ini and set the UPGRADE property to the name of the component." I have also tried using the UPGRADE method as per the error message: #define SQL_UPGRADE "UPGRADE=SQL_Engine INSTANCENAME=MYSQLINSTANCE" but get the following error: "SQL Server Setup cannot perform the upgrade because the component is not installed on the computer. To proceed, verify that the component to be upgraded is currently installed, and that the component to be upgraded is specified in the ADDLOCAL property." Does anyone have any suggestions? Thank you

    Read the article

  • Need your feedback on our new SQL Server Connectivity portal

    - by The Official Microsoft IIS Site
    SQL Server, as a database product, has grown over the years and there are multiple ways to connect to it. Often, the different ways to connect to the database get documented and discussed in the various technology sections, and the technology choice determines which connectivity method one is going to use. For example, if one is writing a C++ application then one has to go with ODBC whereas a PHP web site developer will choose the PHP driver of course. Until now, this information was scattered all...(read more)

    Read the article

  • How to create a computed column in SQL Server 2008 R2 [closed]

    - by Smartboy
    I have a SQL Server 2008 R2 database with two tables, Picture and PictureUse. The Picture table has the following columns: Id (int), PictureName (nvarchar(max)), CreateDate (datetime). The PictureUse table has the following columns: Id (int), Pictureid (int), CreateDate (datetime). How do I create a computed column in the Picture table that tells me how many times a picture has been clicked?
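
    One common workaround (a sketch using the names from the question; a computed column cannot query another table directly, so the count is wrapped in a scalar UDF, which is evaluated per row and carries a performance cost):

        CREATE FUNCTION dbo.fn_PictureUseCount (@PictureId INT)
        RETURNS INT
        AS
        BEGIN
            -- how many rows in PictureUse reference this picture
            RETURN (SELECT COUNT(*) FROM dbo.PictureUse WHERE Pictureid = @PictureId);
        END;
        GO

        ALTER TABLE dbo.Picture ADD UseCount AS dbo.fn_PictureUseCount(Id);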

    Read the article

  • SQL Server 2008 Unique Problem Bringing DB Online...

    - by Nai
    This is the error I am facing: TITLE: Microsoft.SqlServer.Smo Set offline failed for Database 'Go3D_Retailer' ------------------------------ ADDITIONAL INFORMATION: An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)". Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details. ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120) Background to this error: I've been trying to move my log shipping destination database to another physical server for analysis purposes. Because I do not have Active Directory set up, I had to hack my process by using the same username/password for both the source and destination servers to get the process to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online. I don't have an E drive on my server and I have no idea why it's trying to open a file from the E drive. I have over 100 GB left on my hard disk, so it's definitely not a space issue. This sounds like a bug... Any ideas? I'm running SQL Server 2008 Enterprise edition on Windows Server 2008 R2 64-bit

    Read the article

  • Add the current admin user to SQL Server Express 2008.

    - by BradyKelly
    I have managed (in 'eksperiments') to remove both of my Windows users from my SQL Server Express instance. I am now logged in as the Windows admin and have re-created the corresponding SQL login, but I cannot assign sysadmin rights. I get the error: User does not have permission to perform this action. (Microsoft SQL Server, Error: 15247) If admin can't do this, should I start looking for a small animal to sacrifice?
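
    No sacrifice should be needed: the usual recovery path (a sketch; the account name is a placeholder) is to restart the instance in single-user mode with the -m startup flag, connect as a local administrator, and re-grant the role:

        -- Run while connected to the instance started in single-user mode (-m);
        -- SQL Server 2008 uses the sp_addsrvrolemember procedure for this:
        EXEC sp_addsrvrolemember N'MACHINE\YourWindowsUser', N'sysadmin';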

    Read the article

  • How to configure SSL on an instance of SQL Server to allow dedicated users to remotely access it?

    - by The Good Boy
    I have configured the instance of SQL Server to allow dedicated users to access it remotely. The connection string Data Source = 192.168.1.2,1433\sqlexpress;etc... has been tested and works. However, I have not configured SSL to secure the communication. How do I configure SSL on an instance of SQL Server so that dedicated users can access it remotely over an encrypted connection? Edit 1: The dedicated user will administer their database using SQL Server Management Studio. What I want to do is secure the communication when he/she administers the database using SQL Server Management Studio.

    Read the article

  • How SQL Server 2014 impacts Red Gate’s SQL Compare

    - by Michelle Taylor
    SQL Compare 10.7 successfully connects to SQL Server 2014, but it doesn't yet cover the SQL Server 2014 features that would require major changes to SQL Compare to support. In this post I'm going to talk about the SQL Server 2014 features we've already begun supporting, and which ones we're working on for the next release of SQL Compare (v11). From SQL Compare's perspective, the new memory-optimized table functionality (some might know it as 'Hekaton') has been the most important change. It can't be described as its own object type; instead, the new functionality is split across two existing object types (three if you count indexes), as it also comes with native stored procedures and inline indexes. Along with connectivity support, the SQL Compare team has already implemented the first part of the puzzle – inline specification of indexes. These are essential for memory-optimized tables because it's not possible to alter a memory-optimized table's structure, and so indexes can't be added after the fact without dropping the table. Books Online shows this in more detail in the table_index and column_index clauses of http://msdn.microsoft.com/en-us/library/ms174979(v=sql.120).aspx. SQL Compare 10.7 currently supports reading the new inline index specification from script folders and source control repositories, and will write out inline indexes where it's necessary to do so (i.e. in UDDTs or when attempting to write projects compatible with the SSDT database project format). However, memory-optimized tables themselves are not yet supported in 10.7. The team is actively working on making them available in the v11 release, with full support later in the year and a beta version before that. Fortunately, SQL Compare already has some ways of handling tables that have to be dropped and created rather than altered, which are being adapted to handle this new kind of table. Because it's one of the largest new database engine features, there's an equally large Books Online section on memory-optimized tables, but for us the most important parts of the documentation are the normal table features that are changed or unsupported, and the new syntax found in the T-SQL reference pages. We are treating SQL Compare's support of Natively Compiled Stored Procedures as a separate unit of work, which will be available in a subsequent beta and also feed into the v11 release. This new type of stored procedure is designed to work with memory-optimized tables to maintain the performance improvements gained by them – but you can still access memory-optimized tables from normal stored procedures and ad-hoc queries. To us, they're essentially a limited-syntax stored procedure with a few extra options in the create statement, embodied in the updated CREATE PROCEDURE documentation and its detailed limitations. They should be easier to handle than memory-optimized tables simply because the handling of stored procedures is less sensitive to dropping the object than the handling of tables. However, both share an incompatibility with DDL triggers and Event Notifications, which means we'll need to temporarily disable these during the specific deployment operations that involve them – don't worry, we'll supply a warning if this is the case so that you can check your auditing arrangements can handle the situation. There are also a handful of other improvements in SQL Server 2014 which affect SQL Compare and SQL Data Compare that are not connected to memory-optimized tables.
The largest of these are the improvements to columnstore indexes, with the capability to create clustered columnstore indexes and update columnstore tables through them – for more detail, take a look at the new syntax reference. There’s also a new index option for better compression of columnstores (COLUMNSTORE_ARCHIVE) and a new statistics option for incremental per-partition statistics, plus the 90 compatibility level is being retired. We’re planning to finish up these small clean-up features last, and be ready to release SQL Compare 11 with full SQL 2014 support early in Q3 this year. For a more thorough overview of what’s new in SQL Server 2014, Books Online’s What’s New section is a good place to start (although almost all the changes in this version are in the Database Engine).
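
    As a sketch of the inline index syntax discussed above (SQL Server 2014 syntax; names are illustrative, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup):

        CREATE TABLE dbo.SessionState
        (
            SessionId INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576), -- inline hash index
            LastTouch DATETIME2 NOT NULL,
            INDEX ix_LastTouch NONCLUSTERED (LastTouch)                      -- inline range index
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);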

    Read the article

  • Troubleshooting High-CPU Utilization for SQL Server

    - by Susantha Bathige
    The objective of this FAQ is to outline the basic steps in troubleshooting high CPU utilization on a server hosting a SQL Server instance. The first and most common step, if you suspect high CPU utilization (or are alerted to it), is to log in to the physical server and check the Windows Task Manager. The Performance tab will show the high utilization as shown below: Next, we need to determine which process is responsible for the high CPU consumption. The Processes tab of the Task Manager will show this information: Note that to see all processes you should select Show processes from all users. In this case, SQL Server (sqlservr.exe) is consuming 99% of the CPU (a normal benchmark for max CPU utilization is about 50-60%). Next we examine the scheduler data. The scheduler is a component of SQLOS which evenly distributes load amongst CPUs. The query below returns the important columns for CPU troubleshooting. Note – if your server is under severe stress and you are unable to login via SSMS, you can use another machine's SSMS to login to the server through the DAC – Dedicated Administrator Connection (see http://msdn.microsoft.com/en-us/library/ms189595.aspx for details on using DAC):

    SELECT scheduler_id
          ,cpu_id
          ,status
          ,runnable_tasks_count
          ,active_workers_count
          ,current_tasks_count
          ,load_factor
          ,yield_count
    FROM sys.dm_os_schedulers
    WHERE scheduler_id < 1048576 -- regular query schedulers; higher IDs are internal

    See below for the BOL definitions for the above columns. scheduler_id – ID of the scheduler. All schedulers that are used to run regular queries have ID numbers less than 1048576. Those schedulers that have IDs greater than or equal to 1048576 are used internally by SQL Server, such as the dedicated administrator connection scheduler. cpu_id – ID of the CPU with which this scheduler is associated. status – Indicates the status of the scheduler. runnable_tasks_count – Number of workers, with tasks assigned to them, that are waiting to be scheduled on the runnable queue. active_workers_count – Number of workers that are active. An active worker is never preemptive, must have an associated task, and is either running, runnable, or suspended. current_tasks_count – Number of current tasks that are associated with this scheduler. load_factor – Internal value that indicates the perceived load on this scheduler. yield_count – Internal value that is used to indicate progress on this scheduler. Now to interpret the above data. There are four schedulers, each assigned to a different CPU. All the CPUs are ready to accept user queries, as they are all ONLINE. There are 294 active tasks in the output, as per the current_tasks_count column. This count indicates how many activities are currently associated with the schedulers; when a task is complete, this number is decremented. 294 is quite a high figure and indicates all four schedulers are extremely busy. When a task is enqueued, the load_factor value is incremented. This value is used to determine whether a new task should be put on this scheduler or another scheduler; the new task will be allocated to the less loaded scheduler by SQLOS. The very high value of this column indicates all the schedulers have a high load. There are 268 runnable tasks, which means all these tasks have been assigned a worker and are waiting to be scheduled on the runnable queue. The next step is to identify which queries are demanding a lot of CPU time. The query below is useful for this purpose (note, in its current form, it only shows the top 10 records).
    SELECT TOP 10 st.text
          ,st.dbid
          ,st.objectid
          ,qs.total_worker_time
          ,qs.last_worker_time
          ,qp.query_plan
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
    ORDER BY qs.total_worker_time DESC

    This query uses total_worker_time as the measure of CPU load and sorts in descending order of total_worker_time to show the most expensive queries and their plans at the top. Note the BOL definitions for the important columns: total_worker_time – Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled. last_worker_time – CPU time, in microseconds, that was consumed the last time the plan was executed. I re-ran the same query after a few seconds and was returned the output below. After a few seconds the SP dbo.TestProc1 is shown in fourth place, and once again its last_worker_time is the highest. This means the procedure TestProc1 consumes CPU time continuously, each time it executes. In this case, the primary cause of the high CPU utilization was a stored procedure. You can view the execution plan by clicking on the query_plan column to investigate why it is causing a high CPU load. I have used SQL Server 2008 (SP1) to test all the queries used in this article.
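
    As a supplementary check (a hedged sketch, not from the article), sys.dm_exec_requests shows which currently executing requests are accumulating CPU at this moment:

        -- What is consuming CPU right now?
        SELECT r.session_id,
               r.cpu_time,   -- ms of CPU used by the request so far
               r.status,
               t.text AS current_sql
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.session_id > 50  -- skip most system sessions
        ORDER BY r.cpu_time DESC;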

    Read the article

  • Download SQL Server 2008 R2 Express (Database Size Limit Increased to 10GB!)

    - by Aamir Hasan
    Yesterday I was researching SQL Server 2008 and found the new release, MS SQL Server 2008 R2, which has many new BI features and enhancements. There is one tiny, cute feature that I am sure all of us will appreciate a lot: the product team has increased the database size limit for SQL Server 2008 R2 Express from 4 GB to 10 GB. So if you have a growing SQL Server Express database that is close to the 4 GB limit, hurry, upgrade to R2 Express. See the announcement from the product team. SQL Server 2008 R2 Express download.

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this DMV that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant: this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this DMV can be executed as:

    SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database; simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs – or, more accurately, in order to choose when to have the NULLs – you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the DMV with each of these values you can see some interesting details in the times taken to complete each step.

    DECLARE @Start DATETIME
    DECLARE @First DATETIME
    DECLARE @Second DATETIME
    DECLARE @Third DATETIME
    DECLARE @Finish DATETIME
    SET @Start = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
    SET @First = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
    SET @Second = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
    SET @Third = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
    SET @Finish = GETDATE()
    SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
          ,DATEDIFF(ms, @First, @Second) AS [SAMPLED]
          ,DATEDIFF(ms, @Second, @Third) AS [LIMITED]
          ,DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set shows some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter, and the duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database ID, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory, and we can wrap those in some code to make things a little easier to read:

    SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
          ,OBJECT_NAME([ddips].[object_id]) AS [TableName]
          …
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    which gives us the database and table names. Adding the index name needs a join to sys.indexes:

    SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
          ,OBJECT_NAME([ddips].[object_id]) AS [TableName]
          ,[i].[name] AS [IndexName]
          ,…
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
    INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                   AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the DMV. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.:

    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this DMV, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

    DECLARE @db_id SMALLINT;
    DECLARE @object_id INT;
    SET @db_id = DB_ID(N'AdventureWorks_2008');
    SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
    IF @db_id IS NULL
    BEGIN
        PRINT N'Invalid database';
    END
    ELSE IF @object_id IS NULL
    BEGIN
        PRINT N'Invalid object';
    END
    ELSE
    BEGIN
        SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
    END;
    GO

    In cases where the results of querying this DMV don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the DMV are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the DMV and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are: avg_fragmentation_in_percent – the amount that the index is logically fragmented (shows NULL when the DMV is queried in SAMPLED mode); fragment_count – the number of pieces that the index is broken into (shows NULL in SAMPLED mode); avg_fragment_size_in_pages – the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit (shows NULL in SAMPLED mode); page_count – the total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages: we see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index were bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do to retrieve the data and satisfy the query increases, and performance would start to suffer. This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business, and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this DMV, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the DMV with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8*1024)*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index.
    Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider: the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and the columns used in the index, which should be analysed to avoid new records needing to be inserted in the middle of the index, but rather always added to the end. * – it's approximate, as there are many factors associated with things like the type of data and other database settings that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and, as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • SQL Server 2008 permissions and encryption

    - by Paranjai
    I have encrypted columns in some of the tables in SQL Server 2008. As the db owner I have access to encrypt and decrypt the data using the symmetric key and certificate, but some other users currently have only datareader and datawriter rights, and when they execute any SP that uses the key and certificate they get an error like "User does not have rights on the certificate to execute". What rights / exact permissions should I grant them to solve this problem?
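
    A sketch of the kind of grants typically involved (key, certificate, and user names are placeholders; the exact set depends on how the stored procedures open the key):

        -- Let the user see and open the symmetric key...
        GRANT VIEW DEFINITION ON SYMMETRIC KEY::MySymKey TO SomeUser;
        -- ...and use the certificate that protects it for decryption
        GRANT CONTROL ON CERTIFICATE::MyCert TO SomeUser;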

    Read the article

  • PowerShell transcript is empty when running a script from a SQL Agent Job in SQL Server 2005

    - by Greg Bray
    I have a complex PowerShell script that gets run as part of a SQL Server 2005 Agent Job. The script works fine, but it uses the "Start-Transcript $strLogfile -Append" command to log all of its actions to a transcript file. The problem is that the transcript is always empty. It adds the header and footer to indicate that the transcript is starting and stopping, but it doesn't actually log anything. Example: ********************** Windows PowerShell Transcript Start Start time: 20100304173001 Username : xxxxxxxxxxxx\SYSTEM Machine : xxxxx-xxx (Microsoft Windows NT 5.2.3790 Service Pack 2) ********************** ********************** Windows PowerShell Transcript End End time: 20100304173118 ********************** When I execute the script from a command prompt or Start - Run, everything works just fine. Here is the command used to run the script (the same command used in the Operating system (CmdExec) step of the SQL Agent Job): powershell.exe -File "c:\temp\Backup\backup script.ps1" I first thought it must have something to do with the script running under the System account (the default SQL Agent account), but even when I tried changing the SQL Agent to run under my own personal account it still created a blank transcript. Is there any way to get PowerShell transcripts to work when executing them as part of a SQL Server 2005 Agent Job?

    Read the article
