Search Results

Search found 27337 results on 1094 pages for 't sql'.


  • Code formatter for SSMS

    - by blakmk
      I was searching recently for a code formatter for T-SQL and I came across this nice little utility that I wanted to share: http://www.wangz.net/cgi-bin/pp/gsqlparser/sqlpp/sqlformat.tpl I've been dealing with a lot of legacy code lately and there is nothing I find more infuriating than unformatted code. This tool seems to work quite well. Just one click and it formats everything nicely. There is also a free web version.

    Read the article

  • File Watcher Task

    The task will detect changes to existing files as well as new files; both actions will cause the file to be found when available. A file is available when the task can open it exclusively. This is important for files that take a long time to be written, such as large files, or those that are just written slowly or delivered via a slow network link. It can also be set to look for existing files first (1.2.4.55). The full path of the found file is returned in up to three ways:
    - The ExecValueVariable of the task. This can be set to any String variable.
    - The OutputVariableName when specified. This can be set to any String variable.
    - The FullPath variable within OnFileFoundEvent. This is a File Watcher Task specific event.
    Advance warning of a file having been detected, but not yet available, is returned through the OnFileWatcherEvent. This event does not always coincide with the completion of the task, as completion and the OnFileFoundEvent are delayed until the file is ready for use. This event indicates that a file has been detected, and that file will now be monitored until it becomes available. The task will only detect and report on the first file that is created or changes; any subsequent changes will be ignored. Task properties and their usage are documented below:
    - Filter (String): Default filter *.* will watch all files. Standard Windows wildcards and patterns can be used to restrict the files monitored.
    - FindExistingFiles (Boolean): Indicates whether the task should check for any existing files that match the path and filter criteria, before starting the file watcher.
    - IncludeSubdirectories (Boolean): Indicates whether changes in subdirectories are accepted or ignored.
    - OutputVariableName (String): The name of the variable into which the full file path found will be written on completion of the task. The variable specified should be of type string.
    - Path (String): Path to watch for new files or changes to existing files. The path is a directory, not a full filename. For a specific file, enter the file name in the Filter property and the directory in the Path property.
    - PathInputType (FileWatcherTask.InputType): Three input types are supported for the path: Connection - a File connection manager, of type existing folder; Direct Input - type the path directly into the UI or set on the property as a literal string; Variable - the name of the variable which contains the path.
    - Timeout (Integer): Time in minutes to wait for a file. If no files are detected within the timeout period the task will fail. The default value of 0 means infinite, and will not expire.
    - TimeoutAsWarning (Boolean): The default behaviour is to raise an error and fail the task on timeout. This property allows you to suppress the error on timeout; a warning event is raised instead, and the task succeeds. The default value is false.
    Installation: The task is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. For 2005/2008 only, you will finally have to add the task to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items....
    Select the SSIS Control Flow Items tab, and then check the File Watcher Task in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations.
    Downloads: The File Watcher Task is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    - File Watcher Task for SQL Server 2005
    - File Watcher Task for SQL Server 2008
    - File Watcher Task for SQL Server 2012
    Version History
    SQL Server 2012
    - Version 3.0.0.16 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
    - Version 2.0.0.14 - Fixed user interface bug. A migration problem caused the UI type editors to reference an old SQL 2005 assembly. (17 Nov 2008)
    - Version 2.0.0.7 - SQL Server 2008 release. (20 Oct 2008)
    SQL Server 2005
    - Version 1.2.6.100 - Fixed UI bug with TimeoutAsWarning property not saving correctly. Improved expression support in UI. File availability detection changed to use read-only lock, allowing reduced permissions to be used. Corrected installation issue which prevented installation on 64-bit machines with SSIS runtime only components. (18 Mar 2007)
    - Version 1.2.5.73 - Added TimeoutAsWarning property. Gives the ability to suppress the error on timeout; a warning event is raised instead, and the task succeeds. (Task Version 3) (27 Sep 2006)
    - Version 1.2.4.61 - Fixed a bug which could cause a loop condition with an unexpected exception such as incorrect file permissions. (20 Sep 2006)
    - Version 1.2.4.55 - Added FindExistingFiles property. When true the task will check for an existing file before the file watcher itself actually starts. (Task Version 2) (8 Sep 2006)
    - Version 1.2.3.39 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. Property type validation improved. (12 Jun 2006)
    - Version 1.2.1.0 - SQL Server 2005 IDW 16 Sept CTP. Further UI enhancements, including expression indicator. Fixed bug caused by execution within a loop: subsequent iterations detected the same file as the first iteration. Added IncludeSubdirectories property. Fixed bug when changes made in subdirectories, and folder change was detected, causing task failure. (Task Version 1) (6 Oct 2005)
    - Version 1.2.0.0 - SQL Server 2005 IDW 15 June CTP. Changes made include an enhanced UI, the PathInputType property for greater flexibility with path input, the OutputVariableName property, and the new OnFileFoundEvent event. (7 Sep 2005)
    - Version 1.1.2 - Public Release (16 Nov 2004)
    Screenshots
    Troubleshooting: Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005 and SQL Server 2008. If you get an error when you try to use the task along the lines of The task with the name "File Watcher Task" and the creation name ... is not registered for use on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer.
    You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. The full error message is shown below for reference: TITLE: Microsoft Visual Studio ------------------------------ The task with the name "File Watcher Task" and the creation name "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask, Konesans.Dts.Tasks.FileWatcherTask, Version=1.2.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b" is not registered for use on this computer. Contact Information: File Watcher Task
    A similar error message can be shown when trying to edit the task if the Microsoft Exception Message Box is not installed. This useful component is installed as part of the SQL Server Management Studio tools but occasionally, due to the custom options chosen during SQL Server 2005 setup, it may be absent. If you get an error like Could not load file or assembly 'Microsoft.ExceptionMessageBox.. you can manually download and install the missing component. It is available as part of the Feature Pack for SQL Server 2005 release. The feature packs are occasionally updated by Microsoft so you may like to check for a more recent edition, but you can find the Microsoft Exception Message Box download links here - Feature Pack for Microsoft SQL Server 2005 - April 2006. If you encounter this problem on SQL Server 2008, please check that you have installed the SQL Server client components. The component is no longer available as a separate download for SQL Server 2008, as noted in the Microsoft documentation for Deploying an Exception Message Box Application. The full error message is shown below for reference, although note that the Version will change between SQL Server 2005 and SQL Server 2008: TITLE: Microsoft Visual Studio ------------------------------ Cannot show the editor for this task. ------------------------------ ADDITIONAL INFORMATION: Could not load file or assembly 'Microsoft.ExceptionMessageBox, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. (Konesans.Dts.Tasks.FileWatcherTask)
    Once installation is complete you need to manually add the task to the toolbox before you will see it and are able to add it to packages - How do I install a task or transform component? If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.
    Sample Code: If you want to use the task programmatically then here is some sample code for creating a basic package and configuring the task. It uses a variable to supply the path to watch, and also sets a variable for the OutputVariableName. Once execution is complete it writes out the file found to the console.
    /// <summary>
    /// Create a package with a File Watcher Task
    /// </summary>
    public void FileWatcherTaskBasic()
    {
        // Create the package
        Package package = new Package();
        package.Name = "FileWatcherTaskBasic";

        // Add variable for input path, the folder to look in
        package.Variables.Add("InputPath", false, "User", @"C:\Temp\");

        // Add variable for the file found, to be used on OutputVariableName property
        package.Variables.Add("FileFound", false, "User", "EMPTY");

        // Add the Task
        package.Executables.Add("Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask, " +
            "Konesans.Dts.Tasks.FileWatcherTask, Version=1.2.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b");

        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;

        // Set basic properties
        taskHost.Properties["PathInputType"].SetValue(taskHost, 1); // InputType.Variable
        taskHost.Properties["Path"].SetValue(taskHost, "User::InputPath");
        taskHost.Properties["OutputVariableName"].SetValue(taskHost, "User::FileFound");

    #if DEBUG
        // Save package to disk, DEBUG only
        new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
    #endif

        // Display variable value before execution to check EMPTY
        Console.WriteLine("Result Variable: {0}", package.Variables["User::FileFound"].Value);

        // Execute package
        package.Execute();

        // Display variable value after execution, e.g. C:\Temp\File.txt
        Console.WriteLine("Result Variable: {0}", package.Variables["User::FileFound"].Value);

        // Perform simple check for execution errors
        if (package.Errors.Count > 0)
        {
            foreach (DtsError error in package.Errors)
            {
                Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
                Console.WriteLine(" SubComponent : {0}", error.SubComponent);
                Console.WriteLine(" Description : {0}", error.Description);
            }
        }
        else
        {
            Console.WriteLine("Success - {0}", package.Name);
        }

        // Clean-up
        package.Dispose();
    }

    (Updated installation and troubleshooting sections, and added sample code July 2009)

    Read the article

  • SQL Cruise Alaska 2011

    - by Grant Fritchey
    I had the extreme good fortune to get sent on the last SQL Cruise to Alaska. I love my job. In case you don't know what this is, SQL Cruise is a trip on a cruise ship during which you get to attend classes while on the boat, learning all about SQL Server and related topics, as well as network with the instructors and the other Cruisers. Frankly, it's amazing. Classes ran from Monday, 5/30, to Saturday, 6/4. The networking was constant, between classes, at night on the cruise ship, out on excursions in Alaskan rainforests and while snorkeling in ocean waters. Here's a run down of the experience from my point of view. Because I couldn't travel out 2 days early, I missed the BBQ that occurred the day before the cruise when many of the Cruisers received their swag bags. Some of that swag came from Red Gate. I researched what was useful on a cruise like this and purchased small flashlights and binoculars for all the Cruisers. The flashlights were because, depending on your cabin, ships can be very dark. The binoculars were so that the cruisers could watch all the beautiful landscape as it flowed by. I would have liked to have been there when the bags were opened, but I heard from several people that they appreciated the gifts. Cruisers "In" the hot tub. Pictured: Marjory Woody, Michele Grondin, Kyle Brandt, Grant Fritchey, John Halunen. Sunday I went to board the ship with my wife. We had a bit of an adventure because I messed up our documents. It all worked out and we got on board to meet up at the back of the boat at one of the outdoor bars with the other Cruisers, thanks to tweets letting everyone know where to go. That was the end of electronic coordination on the trip (connectivity in Alaska was horrible for everyone except AT&T). The Cruisers were a great bunch of people and it was a real honor to meet them and get to spend time with them. After everyone settled into their cabins, our very first activity was a contest, sponsored by Red Gate. The Cruisers, in an effort to get to know each other and the ship, were required to go all over taking various photographs, some of them hilarious. The winning team of three would all win prizes. Some of the significant others helped out and I tagged along with a team that tied for first but lost the coin toss. The winning team consisted of Christina Leo (blog|twitter), Ryan Malcolm (twitter), Neil Hambly (blog|twitter). They then had to do math and identify the cabin with the lowest prime number, oh, and get a picture of it and be the first to get back up to the bar where we were waiting. Christina came in first and very happily carried home an iPad 2. Ryan won a 1TB portable hard drive and Neil won a wireless mouse (picture below, note my special SQL Server Central Friday Shirt. Thanks Steve (blog|twitter)). Winners: Christina Leo, Neil Hambly, Ryan Malcolm. Just Lucky: Grant Fritchey. Monday morning classes started. Buck Woody (blog|twitter) was a special guest speaker on this cruise. His theme was "Three C's on the High Seas: Career, Communication and Cloud." The first session was all on Career. I'm not going to type out all my notes from the session, but let's just say, if you get the chance to hear Buck talk about how to manage your career, I suggest you attend. I have a ton of blog posts that I'll be putting together over the next several months (yes, months) both here and over on ScaryDBA. I also have a bunch of work I'm going to be doing to get my career performance bumped up a notch or two (and let's face it, that won't be easy).
    Later on Monday, Tim Ford (blog|twitter) did a session on DMOs. Specifically the session was on Tim's Periodic Table of DMOs that he has put together, and how to use some of the more interesting DMOs in your day to day job. It was a great session, packed with good information. Next, Brent Ozar (blog|twitter) did a session on how to monitor and guide SAN configuration for the DBA that doesn't have access to the SAN. That was some seriously useful information. Tuesday morning we only had a single class. Kendra Little (blog|twitter) taught us all about "No Lock for Yes Fun". It was all about the different transaction isolation levels and how they work. There is so often confusion in this area and Kendra does a great job in clarifying the information. Also, she tosses in her excellent drawings to liven up the presentation. Then it was excursion time in Juneau. My wife and I, along with several other Cruisers, took a hike up around the Mendenhall Glacier. It was absolutely beautiful weather and walking through the Alaskan rain forest was a treat. Our guide, Jason, was a great guy and it was a good day of hiking. Wednesday was an all-day excursion in Skagway. My wife and I took the "Ghost and Good Time Girls" walking tour that ended up at a bar that used to be a brothel, the Red Onion. It was a great history of the town. We went back out and hit a few museums and exhibits. We also hiked up the side of the mountain to see the Dewey Lake and some great views of the town. Finally we hiked out to the far side of town to see the Gold Rush cemetery. Hiking done, we went back to the boat and had a quiet dinner on our own. Thursday we cruised through Glacier Bay and saw at least four different glaciers, including sitting next to the Marjory Glacier for about an hour. It was amazing. Then it got better. We went into class with Buck again, this time to talk about Communication. Again, I've got pages of notes that I'm going to be referring back to for some time to come. This was an excellent opportunity to learn. Snorkelers: Nicole Bertrand, Aaron Bertrand, Grant Fritchey, Neil Hambly, Christina Leo, John Robel, Yanni Robel, Tim Ford. Friday we pulled into Ketchikan. A bunch of us went snorkeling. Yes, snorkeling. Yes, in Alaska. Yes, snorkeling in the ocean in Alaska. It was fantastic. They had us put on 7mm thick wet suits (an adventure all by itself) so it was basically warm the entire time we were in the water (except for the occasional squirt of cold water down my back). Before we got in the water a bald eagle flew up and landed about 15 feet in front of us, which was just an incredible event. Then our guide pointed out about 14 other eagles in the area, hanging out in the trees. Wow! The water was pretty clear and there was a ton of things to see. That was absolutely a blast. Back on the boat I presented a session called Execution Plans: The Deep Dive (note the nautical theme). It seemed to go over well and I had several good questions come out of the session that will lead to new blog posts. After I presented, it was Aaron Bertrand's (blog|twitter) turn. He did a session on "What's New in Denali" that provided a lot of great information. He was able to incorporate new things straight out of Tech-Ed, so this was expanded beyond his usual presentation. The man really knows what he's talking about and communicates it well. Saturday we were travelling so there was time for a bunch of classes.
Jeremiah Peschka (blog|twitter) did a great overview of some of the NoSQL databases and what they should be used for. The session was called "The Database is Dead" but it was really about how there are specific uses for these databases that SQL Server doesn't fill, but also that these databases can't replace SQL Server in other areas. Again, good material. Brent Ozar presented again with a session on Defensive Indexing. It was an overview of how indexes work and a deep dive into how to apply them appropriately in your databases to better support access. A good session, as you would expect. Then we pulled into Victoria, BC, in Canada and had a nice dinner with several of the Cruisers, including Denny Cherry (blog|twitter). After that it was back to Seattle on Sunday. By the way, the Science Fiction Museum in Seattle isn't a Science Fiction Museum any more. I was very disappointed to discover this. Overall, it was a great experience. I'm extremely appreciative of Red Gate for sending me and for Tim, Brent, Kendra and Jeremiah for having me. The other Cruisers were all amazing people and it was an honor & privilege to meet them and spend time with them. While this was a seriously fun time, it was also a very serious training opportunity with solid information coming from seasoned industry pros.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 28 (sys.dm_db_stats_properties)

    - by Tamarick Hill
    The sys.dm_db_stats_properties Dynamic Management Function returns information about the statistics that are currently on your database objects. This function takes two parameters, an object_id and a stats_id. Let’s have a look at the result set from this function against the AdventureWorks2012.Sales.SalesOrderHeader table. To obtain the object_id and stats_id I will use a CROSS APPLY with the sys.stats system table.

    SELECT sp.*
    FROM sys.stats s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
    WHERE sp.object_id = OBJECT_ID('Sales.SalesOrderHeader')

    The first two columns returned by this function are the object_id and the stats_id columns. The next column, ‘last_updated’, gives you the date and the time that a particular statistic was last updated. The next column, ‘rows’, gives you the total number of rows in the table as of the last statistic update date. The ‘rows_sampled’ column gives you the number of rows that were sampled to create the statistic. The ‘steps’ column represents the number of specific value ranges from the statistic histogram. The ‘unfiltered_rows’ column represents the number of rows before any filters are applied. If a particular statistic is not filtered, the ‘unfiltered_rows’ column will always equal the ‘rows’ column. Lastly we have the ‘modification_counter’ column, which represents the number of modifications to the leading column in a given statistic since the last time the statistic was updated. Probably the most important column from this Dynamic Management Function is the ‘last_updated’ column. You want to always ensure that you have accurate and updated statistics on your database objects. Accurate statistics are vital for the query optimizer to generate efficient and reliable query execution plans. Without accurate and updated statistics, the performance of your SQL Server would likely suffer. For more information about this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/jj553546.aspx Follow me on Twitter @PrimeTimeDBA
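    As a small follow-on sketch (my own illustration, not from the article; the 20 percent threshold is arbitrary and purely for demonstration), the ‘rows’ and ‘modification_counter’ columns can be combined to flag statistics whose leading column has changed a lot since the last update:

    SELECT OBJECT_NAME(sp.object_id) AS table_name,
           s.name AS stats_name,
           sp.last_updated,
           sp.rows,
           sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE sp.modification_counter > 0.20 * sp.rows -- illustrative threshold only
    ORDER BY sp.modification_counter DESC;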

    Read the article

  • Remote Access to MSSQL Database From 1&1 Hosting [duplicate]

    - by Zerkey
    This question already has an answer here: How to find web hosting that meets my requirements? 5 answers I just paid ($6 /month) for shared Windows hosting through 1&1 hosting. I was having trouble connecting to my database from home, so I sent an email to support. I received the following response: As we checked your concern here in our end, please be advised that due to limitation of Shared Hosting services, there is no option to connect the database to your SQL Management Studio or through Visual Studio. It is only possible for Dedicated Server package. You may only access the database using MyLittleAdmin at the Control Panel. A dedicated server is like $200 per month! What is the point of having database access only through a web console? I feel I am missing something here, or maybe the support agent is. Is there a way to access my MS SQL database on their servers through Visual Studio or SQL Management Studio from my machine? If not, is there a web host who allows this for less than $200 a month? EDIT: Marked as duplicate... I'm not asking for a list of web hosts, I'm asking how to remotely connect to my MSSQL database through 1&1's services.

    Read the article

  • Unexpected SQL Server 2008 Performance Tip: Avoid local variables in WHERE clause

    - by Jim Duffy
    Sometimes an application needs to have every last drop of performance it can get, others not so much. We’re in the process of converting some legacy Visual FoxPro data into SQL Server 2008 for an application and ran into a situation that required some performance tweaking. I figured the Making Microsoft SQL Server 2008 Fly session that Yavor Angelov (SQL Server Program Manager – Query Processing) presented at PDC 2009 last November would be a good place to start. I was right. One tip among the list of incredibly useful tips Yavor presented was “local variables are bad news for the Query Optimizer and they cause the Query Optimizer to guess”. What that means is you should be avoiding code like this in your stored procs even though it seems such an intuitively good idea.

    DECLARE @StartDate datetime
    SET @StartDate = '20091125'

    SELECT * FROM Orders WHERE OrderDate = @StartDate

    Instead you should be referencing the value directly in the WHERE clause so the Query Optimizer can create a better execution plan.

    SELECT * FROM Orders WHERE OrderDate = '20091125'

    My first thought about this one was we reference variables in the form of passed in parameters in WHERE clauses in many of our stored procs. Not to worry though because parameters ARE available to the Query Optimizer as it compiles the execution plan. I highly recommend checking out Yavor’s session for additional tips to help you squeeze every last drop of performance out of your queries. Have a day. :-|
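    If the value genuinely has to arrive in a local variable, one commonly used workaround (my own addition here, not one of Yavor's tips) is OPTION (RECOMPILE), which lets the Query Optimizer see the variable's actual value at compile time instead of guessing:

    DECLARE @StartDate datetime
    SET @StartDate = '20091125'

    SELECT * FROM Orders WHERE OrderDate = @StartDate
    OPTION (RECOMPILE) -- the statement is recompiled with the current value of @StartDate

    The trade-off is a compile on every execution, so it only makes sense for queries that are not called at a very high rate.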

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed as:

    SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips

    The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLS, or more accurately, in order to choose when to have the NULLS, you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, ‘SAMPLED’, ‘LIMITED’ or ‘DETAILED’. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.

    DECLARE @Start DATETIME
    DECLARE @First DATETIME
    DECLARE @Second DATETIME
    DECLARE @Third DATETIME
    DECLARE @Finish DATETIME

    SET @Start = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
    SET @First = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
    SET @Second = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
    SET @Third = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
    SET @Finish = GETDATE()

    SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
         , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
         , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
         , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLS in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and then NULLS in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let’s look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory and we can wrap those in some code to make things a little easier to read:

    SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
         , OBJECT_NAME([ddips].[object_id]) AS [TableName]
         …
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    gives us the database and table names, and joining to [sys].[indexes] adds the index name:

    SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
         , OBJECT_NAME([ddips].[object_id]) AS [TableName]
         , [i].[name] AS [IndexName]
         , …..
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
    INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
        AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and once they have been validated, use them in the function:

    DECLARE @db_id SMALLINT;
    DECLARE @object_id INT;
    SET @db_id = DB_ID(N'AdventureWorks_2008');
    SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
    IF @db_id IS NULL
    BEGIN
        PRINT N'Invalid database';
    END
    ELSE IF @object_id IS NULL
    BEGIN
        PRINT N'Invalid object';
    END
    ELSE
    BEGIN
        SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
    END;
    GO

    In cases where the results of querying this dmv don’t have any effect on other processes (i.e. simply viewing the results in the SSMS results area), then it will simply be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used. So, now we can relate the values in these columns to something that we recognise in the database, let’s see what those other values in the dmv are all about. The next columns are partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level; we’ll skip these as this is a quick look at the dmv and they are pretty self explanatory. The final columns revealed by querying this view in the DEFAULT mode are:
    - avg_fragmentation_in_percent. This is the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
    - fragment_count. The number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
    - avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
    - page_count. Total number of index or data pages in use.
    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue as each piece would have a significant piece of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, and that means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index taking into account the multiple files, type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind, DBAs need to use an ‘intelligent’ process that assesses key characteristics of an index and decides on the best defragmentation method, if any, that should be applied. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let’s get back on track then. Querying the dmv with the fifth parameter value as ‘DETAILED’ takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (Page_Count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1 * 8) * 1024 * (Col2 / 100)) / Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index.
    Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently then potentially the DBA should consider:
    - the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
    - the columns used in the index, which should be analysed to avoid new records needing to be inserted in the middle of the index but rather always be added to the end.
    * – it’s approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk.
    Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
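    As a quick worked example of acting on this dmv (my own sketch; the 30 percent and 1,000 page thresholds are only the commonly quoted starting points, not rules from this post), the following lists the indexes in the current database that might be candidates for a REBUILD or REORGANIZE:

    SELECT DB_NAME(ddips.database_id) AS DatabaseName
         , OBJECT_NAME(ddips.object_id) AS TableName
         , i.name AS IndexName
         , ddips.avg_fragmentation_in_percent
         , ddips.page_count
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
    INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
        AND [ddips].[object_id] = [i].[object_id]
    WHERE ddips.avg_fragmentation_in_percent > 30 -- illustrative threshold only
      AND ddips.page_count > 1000                 -- ignore very small indexes
    ORDER BY ddips.avg_fragmentation_in_percent DESC;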

    Read the article

  • Tale of an Encrypted SSIS Package in msdb and a Lost Password

    - by Argenis
      Yesterday a Developer at work asked for a copy of an SSIS package in Production so he could work on it (please, dear Reader – withhold judgment on Source Control – I know!). I logged on to the SSIS instance, and when I went to export the package… Oops. I didn’t have that password. The DBA who uploaded the package to Production is long gone; my fellow DBA had no idea either - and the Devs returned a cricket sound when queried. So I posed the obligatory question on #SQLHelp and a bunch of folks jumped in – some to help and some to make fun of me (thanks, @SQLSoldier @crummel4 @maryarcia and @sqljoe). I tried their suggestions to no avail…even ran some queries to see if I could figure out how to extract the package XML from the system tables in msdb:   SELECT CAST(CAST(p.packagedata AS varbinary(max)) AS varchar(max)) FROM msdb.dbo.sysssispackages p WHERE p.name = 'LePackage'   This just returned a bunch of XML with encrypted data on it:  I knew there was a job in SQL Agent scheduled to execute the package, and when I tried to look at details on the job step I got the following: Not very helpful. The password had to be saved somewhere, but where?? All of a sudden I remembered that there was a system table I hadn’t queried yet: SELECT sjs.command FROM msdb.dbo.sysjobs sj JOIN msdb.dbo.sysjobsteps sjs ON sj.job_id = sjs.job_id WHERE sj.name = 'Run LePackage' The result: “Well, that’s really secure”, I thought to myself. Cheers, -Argenis
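    To widen that last check beyond a single job (a follow-on sketch of my own; the /DECRYPT filter is an assumption about how a package password typically ends up embedded in a job step's dtexec-style command line), you can scan every SSIS job step on the instance for a decryption switch in the command text:

    SELECT sj.name AS job_name,
           sjs.step_id,
           sjs.step_name,
           sjs.command
    FROM msdb.dbo.sysjobs AS sj
    JOIN msdb.dbo.sysjobsteps AS sjs
        ON sj.job_id = sjs.job_id
    WHERE sjs.subsystem = 'SSIS'
      AND sjs.command LIKE '%/DECRYPT%';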

    Read the article

  • Colour coding of the status bar in SQL Server Management Studio - Oh dear

    - by simonsabin
    The new feature in SQL Server 2008 to have your query window status bar colour coded to the server you are on is great. It's a nice way to distinguish production from development servers. Unfortunately it was pointed out to me by a client recently that it doesn't always work. To me that sort of makes it pointless. It's a bit like having brakes that work some of the time. Are you going to play Russian roulette every time you execute the query? What's more, the colour doesn't change if you change the connection. So you can flip between dev and production servers but your status bar stays the colour you set for the dev server. It really annoys me to find features that sort of work. The reason I initially gave up on SQLPrompt was that it didn't work 100% of the time, and for the time it didn't work I wasted so much time trying to get it to work that I wasted more time than if I didn't have it. (I will say that was 2-3 years ago.) If you would like to use this feature but aren't using it because of these problems, please vote on these bugs. https://connect.microsoft.com/SQLServer/feedback/details/504418/ssms-make-color-coding-of-query-windows-work-all-the-time https://connect.microsoft.com/SQLServer/feedback/details/361832/update-status-bar-colour-when-changing-connections

    Read the article

  • White Paper on Analysis Services Tabular Large-scale Solution #ssas #tabular

    - by Marco Russo (SQLBI)
    Since the first beta of Analysis Services 2012, I have worked with many companies designing and implementing solutions based on Analysis Services Tabular. I am glad that Microsoft published a white paper about a case study using one of these scenarios: An Analysis Services Case Study: Using Tabular Models in a Large-scale Commercial Solution. Alberto Ferrari is the author of the white paper and many people contributed to it. The final result is a very technical document based on a case study, which provides a level of detail that I don’t see often in other case studies (which are usually more marketing-oriented). This white paper has the following structure: Requirements (data model, capacity planning, client tool); Options considered (SQL Server Columnstore Indexes, SSAS Multidimensional, SSAS Tabular); Data Model optimizations (memory compression, query performance, scalability); Partitioning and Processing strategy for near real-time latency; Hardware selection (NUMA analysis, Azure VM tests); Scalability tests (estimation of maximum users per node). If you are in charge of evaluating Tabular as analytical engine, or if you have to design your solution based on Tabular, this white paper is a must read. But if you just want to increase your knowledge of Analysis Services, you will find a lot of useful technical information. That said, my favorite quote of the document is the following one, funny but true: […] After several trials, the clear winner was a video gaming machine that one guy on the team used at home. That computer outperformed any available server, running twice as fast as the server-class machines we had in house. At that point, it was clear that the criteria for choosing the server would have to be expanded a bit, simply because it would have been impossible to convince the boss to build a cluster of gaming machines and trust it to serve our customers.  But, honestly, if a business has the flexibility to buy gaming machines (assuming the machines can handle capacity) – do this. Owen Graupman, inContact I want to write a longer discussion about how companies are adopting Tabular in scenarios where it is the hidden engine of a more complex solution (and not the classical “BI system”), because it is more frequent than you might expect (and has several advantages over many alternative approaches).

    Read the article

  • Entity Object Based on PL/SQL

    - by Manoj Madhusoodanan
    This blog describes how to create a PL/SQL-based Entity Object. Oracle Applications has a number of APIs, and each API performs numerous tasks. We can create a PL/SQL-based EO which will directly invoke the PL/SQL stored procedure from the EO. Here I am demonstrating using a standard API, FND_USER_PKG.CREATEUSER. This API has x_user_name and x_owner as mandatory parameters. My task is to create a user through an OAF page which will accept User Name and Password. The following steps need to be performed to achieve the above scenario.
    1) Create FndUserEO as follows: include all the API parameters and WHO columns in the EO. Make the UserName and EncryptedUserPassword columns mandatory (here I am not using an encrypted password; the column name is the same as the table column, so I am keeping the same name). Generate the VO.
    2) Edit FndUserEOImpl and add the following.
    3) Attach FndUserVO to the AM.
    4) Create the UI.
    5) Deploy the following files to the middle tier and restart the server.
    Entity Object: xxcust.oracle.apps.fnd.user.schema.server.FndUserEO.xml, xxcust.oracle.apps.fnd.user.schema.server.FndUserEOImpl.java
    View Object: xxcust.oracle.apps.fnd.user.server.FndUserVO.xml, xxcust.oracle.apps.fnd.user.server.FndUserVOImpl.java
    User Interface: xxcust.oracle.apps.fnd.user.webui.CreateFndUserCO.java, xxcust.oracle.apps.fnd.user.webui.CreateFndUserPG.xml
    You can test by giving User Name and Password.

    Read the article

  • SQL Saturday Atlanta: Intro To Performance Tuning

    - by Mike Femenella
    I'm looking forward to speaking in Atlanta on the 24th; it will be fun to get back down that way to visit with some friends and present two topics that I really enjoy. First, an introduction to performance tuning. Performance tuning is a very wide and deep topic and we're staying close to the surface. I direct this class at newbie SQL users who have less than 2 years of experience. It's all the things I wish someone would have told me in my first 2 years about what to look for when the database was slow... or allegedly slow, I should say. We'll cover using Profiler to find slow-performing queries and how to save the data off to a table, as well as a tour of other features. The difference between clustered, non-clustered and covering indexes. How to look at and understand an execution plan (at a high level) and finally the difference between a temp table and a table variable and what the implications are of using either one in your code. That pretty much takes up a full hour. Second presentation, Loading Data in Real Time. It's really a presentation about partitioning, but with a twist that we used at work recently to solve a need to load some data quickly and put it into production with minimal downtime. We'll cover partition functions, schemes, $PARTITION, MERGE, sys.partitions, and show some examples of building a set of partitioned tables and using the switch statement to move it from one table to another. Finally we'll cover the differences in partitioning between 2005 and 2008. Hope to see you there! And if you read my blog please introduce yourself!
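    For anyone who wants a preview of the partitioning pattern before the session, here is a rough sketch (all object names are made up for illustration; the real demo at the event may look different): create a partition function and scheme, stage the new data in a table with a matching check constraint, and switch it in as a metadata-only operation.

    -- Hypothetical objects; February 2010 is the partition being loaded.
    CREATE PARTITION FUNCTION pf_OrderDate (date)
    AS RANGE RIGHT FOR VALUES ('20100101', '20100201', '20100301');

    CREATE PARTITION SCHEME ps_OrderDate
    AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);

    CREATE TABLE dbo.SalesFact
    (
        OrderDate date NOT NULL,
        Amount money NOT NULL
    ) ON ps_OrderDate (OrderDate);

    -- Staging table: same structure and filegroup, plus a check constraint
    -- matching the boundaries of the partition it will be switched into.
    CREATE TABLE dbo.SalesFact_Staging
    (
        OrderDate date NOT NULL,
        Amount money NOT NULL,
        CONSTRAINT ck_Staging_Feb2010
            CHECK (OrderDate >= '20100201' AND OrderDate < '20100301')
    ) ON [PRIMARY];

    -- Bulk load dbo.SalesFact_Staging here, then switch it in.
    -- $PARTITION shows which partition holds February 2010 (3 for this RANGE RIGHT function).
    SELECT $PARTITION.pf_OrderDate('20100201') AS target_partition;

    -- The target partition must be empty; the switch itself is metadata-only, so it is effectively instant.
    ALTER TABLE dbo.SalesFact_Staging
        SWITCH TO dbo.SalesFact PARTITION 3;

    -- sys.partitions confirms where the rows ended up.
    SELECT partition_number, rows
    FROM sys.partitions
    WHERE object_id = OBJECT_ID('dbo.SalesFact');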

    Read the article

  • New SQL Azure Development Accelerator Core promotional offer announced

    - by Eric Nelson
    This is (almost) a straight copy and paste but represents an important announcement worthy of a little more “exposure” :-) Starting August 1, 2010, we will release a new SQL Azure Development Accelerator Core promotional offer. This new offer will give you the flexibility to purchase commitment quantities of SQL Azure Business Edition databases independent of other Windows Azure platform services at a deeply discounted monthly price. The offer is valid only for a six month term. You may purchase in 10 GB increments the amount of our Business Edition relational database that you require (each Business Edition database is capable of storing up to 50 GB). The offer price will be $74.95 per 10 GB per month. This promotional offer represents 25% off of our normal consumption rates. Monthly Business Edition relational database usage exceeding the purchased commitment amount and usage for other Windows Azure platform services for this offer will be charged at our normal consumption rates. Please click here for full details of our new SQL Azure Development Accelerator Core offer.
    Related Links:
    - Details of 5GB and 50GB databases have been released
    - http://ukazure.ning.com UK community site
    - Getting started with the Windows Azure Platform

    Read the article

  • The current state of a MERGE Destination for SSIS

    - by jamiet
    Hugo Tap asked me on Twitter earlier today whether or not there existed an SSIS Dataflow Destination component that enabled one to MERGE data into a table rather than INSERT it. It's a common request so I thought it might be useful to summarise the current state of play as regards a MERGE destination for SSIS. Firstly, there is no MERGE destination component in the box; that is, when you install SSIS no MERGE Destination will be available. That being said, the SSIS team have made available a MERGE destination component via Codeplex which you can get from http://sqlsrvintegrationsrv.codeplex.com/releases/view/19048. I have never used it so cannot vouch for its usefulness, although judging by some of the reviews you might not want to set your expectations too high. Your mileage may vary.   In the past it has occurred to me that a built-in way to provide MERGE from the SSIS pipeline would be highly valuable. I assume that this would have to be provided by the database into which you were merging, hence in March 2010 I submitted the following two requests to Connect: BULK MERGE (111 votes at the time of writing) and [SSIS] BULK MERGE Destination (15 votes). If you think these would be useful feel free to vote them up and add a comment. Lastly, this one is nothing to do with SSIS but if you want to perform a minimally logged MERGE using T-SQL Sunil Agarwal has explained how at Minimal logging and MERGE statement. @Jamiet
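    For anyone wondering what such a destination would actually have to do, this is roughly the T-SQL pattern it would wrap (a hand-written sketch with made-up table and column names, not anything taken from the Codeplex component):

    MERGE dbo.Customer AS tgt
    USING staging.Customer AS src
        ON tgt.CustomerID = src.CustomerID
    WHEN MATCHED AND (tgt.Name <> src.Name OR tgt.Email <> src.Email) THEN
        UPDATE SET tgt.Name = src.Name, tgt.Email = src.Email
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerID, Name, Email)
        VALUES (src.CustomerID, src.Name, src.Email);

    Today the common workaround is exactly this: land the rows in a staging table with an OLE DB Destination and run a statement like the one above from an Execute SQL Task afterwards.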

    Read the article

  • Today I talk about you

    - by BuckWoody
    Some time back I posted a blog entry (mirrored here and here) asking you how you design databases. Out of those responses, my own experience, studies I read, and interviews I conducted, I collected a wealth of data. Thanks for your responses. So what am I going to do with that information? Well, all along I had planned for that to be used today. I am giving a presentation at an event called “TechReady” called “How Your Customers Design Databases”. This is a Microsoft-internal event, where technical professionals like myself, salespeople, and the product team get together to talk about what has been working, what doesn’t, what is coming and hopefully (fingers crossed here) what the product team can do to help us help the SQL Server community. I’ve mentioned before that I teach database design as part of a course I run at the University of Washington. I’m also planning to give a mini-lecture from that series at TechEd 2010, so if you’re coming stop by. I’d love to meet you. So today I talk about you – thanks for the input. I hope you and I can make a difference in the product. Might take a while, but it’s nice to know your voice is being heard.

    Read the article

  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    - by Marco Russo (SQLBI)
    If you use Report Builder with Reporting Services, you can use DAX queries even if the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor of Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use the DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it’s a good idea to show the steps required. First, create a data source using the OLE DB connection type, and provide in the connection string the provider (Provider), the server name (Data Source) and the database name (Initial Catalog), such as: Provider=MSOLAP;Data Source=SERVERNAME\\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012 Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane. You can also use the Query Designer window, which doesn’t provide any particular help in writing the DAX query, but at least can show a preview of the result of the query execution. I hope DAX will get better editors in the future… in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability! If you want to learn the DAX Query Language, I suggest watching my video Data Analysis Expressions as a Query Language on Project Botticelli!

    Read the article

  • Data Generator Source Adapter

    This component needs little explanation. It generates random integer (DT_I4) and string (DT_WSTR) data and places them in the pipeline. You specify how many columns of each you would like, and for any string columns you pass a fixed length value. You then need to specify how many rows in total you require to be generated. This component is used by us to do testing of the pipeline and components downstream. Previously we would have used a script component (as a source) to generate the rows but found ourselves rewriting the code too often, so we created this component.
    Screenshots: SQL Server 2005 Integration Services; SQL Server 2008/2012 Integration Services.
    The component is provided as an MSI file, however to complete the installation you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Data Generator Source from the list.
    Downloads: The Data Generator Source Adapter is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    - Data Generator Source Adapter for SQL Server 2005
    - Data Generator Source Adapter for SQL Server 2008
    - Data Generator Source Adapter for SQL Server 2012
    Version History
    SQL Server 2012
    - Version 3.0.0.30 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
    - Version 2.0.0.29 - SQL Server 2008 February 2008 CTP. Includes support for upgrade of 2005 packages. Simplified user interface. (4 Mar 2008)
    - Version 2.0.0.27 - SQL Server 2008 November 2007 CTP. String columns will now use the default system code page. Previously string columns always used 1252. (15 Feb 2008)
    SQL Server 2005
    - Version 1.1.0.23 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006)
    - Version 1.0.0.0 - SQL Server 2005 IDW 16 Sept CTP. Public release. (6 Oct 2005)

    Read the article

  • sp_ssiscatalog v1.0.1.0 now available for download

    - by jamiet
    13 days ago I wrote a blog post entitled Introducing sp_ssiscatalog (v1.0.0.0) in which I first made mention of sp_ssiscatalog, an open source stored procedure intended to make it easy to query the SSIS Catalog. I have been working on some enhancements since then and hence v1.0.1.0 is now available for download from Codeplex.
    What's new in this release. This release includes the following enhancements:
    - [execution_id] now gets returned in a call to EXEC [dbo].[sp_ssiscatalog] @operation_type='exec';
    - Filter events by specifying packages to ignore: EXEC [dbo].[sp_ssiscatalog] @operation_type='exec',@exec_events_packagesexcluded='SomePackage.dtsx,AnotherPackage.dtsx';
    - [event_message_id] is now returned in a list of events
    - List of executions can now be filtered via a minimum and maximum execution_id: EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_minimum_execution_id=198,@execs_maximum_execution_id=201
    - Events resultsets now have a field, [event_message_context_xml], that contains an XML document containing all [event_message_context] info (if any exists)
    Installation instructions:
    1. Download the zip file at DB v1.0.1.0. It contains two files, SsisReportingPack.dacpac & SSISDB.dacpac
    2. Unzip to a folder of your choosing
    3. Open a command prompt and change to the directory into which you unzipped the files
    4. Execute: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local) (/tsn specifies the target server. Change as appropriate.)
    If everything works OK you'll see output like the screenshots in the original post, which differs slightly depending on whether the target database already exists or not. This will create a database called [SsisReportingPack] which contains [dbo].[sp_ssiscatalog]. Feedback is welcomed! @Jamiet

    Read the article

  • Data Movement and the Decision Matrix

    - by BuckWoody
    Maybe it’s my military background, or maybe I’ve always had this predilection, but I like to use two devices when I need to make a complex decision: A checklist and a decision matrix. I like to use a checklist because it ensures that I remember the big bits of what I need to do, and brings up questions or areas that I didn’t think about when evaluating options for the decision. And the decision matrix – that’s the thing I use to actually lay out those options. It’s simply a spreadsheet-like grid (I use Excel, but paper and pencil works as well) that lays out the requirements or advantages for the decision across the top, and the options I have on the left-hand side. Then in the “cells” I put whether or not that option on the left will meet the requirement in that column. I then simply “weight” each cell to organize the choices by best-fit. The right answer (or answers) will float right to the top. I was asked yesterday about options for moving data in SQL Server to another system. There are just dozens of ways to do this, from bcp to Replication, each with certain advantages and costs. But asking the questions for the top row first helped me show the person that it isn’t a particular technology that is important, it’s laying out those requirements and thinking about which elements are more important than the other. For instance, is it more important to have the data moved all the time, or is it OK if that happens once in a while? Does the data have to move in two directions or just one? All of these will help that answer jump right out. Try it sometime – it’s a great learning exercise, since it will force you to focus on filling out the matrix. The answer is out there, Neo.

    Read the article

  • Never Bet Against the Impossible

    - by BuckWoody
    My uncle used to say “If a man tells you that his car squirts milk in his eye when you lift the hood, don’t bet against that. You’ll end up with milk in your eye.” My friend Allen White tells me this is taken from a play (and was said about playing cards), but I think the sentiment holds, even in database work. I mentioned the other day that you should allow the other person to talk and actively listen before you propose a solution. Well, I saw a consultant “bet against the impossible” the other day – and it bit her. She explained to the person telling her the problem that the situation simply couldn’t exist that way, and he proceeded to show her that it did. She got silent, typed a few things, muttered a little, and then said “well, must be something else.” She just couldn’t admit she was wrong. So don’t go there. If someone explains a problem to you with their database, listen with purpose, and then explore the troubleshooting steps you know to find the problem. But keep your absolutes to yourself. In fact, I have a friend that has recently sent me one of those. He connects to a system with SQL Server Management Studio (SSMS) version 2008 (if I recall correctly) and it shows a certain version number of the target system in the connection tab. Then he connects to it using SSMS 2008 R2 and gets a different number. Now, as far as I know, we didn’t change the connection string information, and that’s provided by the target system, so this is impossible. But I won’t tell him that. Not until I look a little more. :)

    Read the article

  • Process Improvement and the Data Professional

    - by BuckWoody
    Don’t be afraid of that title – I’m not talking about Six Sigma or anything super-formal here. In many organizations, there are more folks in other IT roles than in the Data Professional area. In other words, there are more developers, system administrators and so on than there are the “DBA” role. That means we often have more to do than the time we need to do it. And, oddly enough, the first thing that is sacrificed is process improvement – the little things we need to do to make the day go faster in the first place. Then we get even more behind, the work piles up and…well, you know all about that. Earlier I challenged you to find 10-30 minutes a day to study. Some folks wrote back and asked “where do I start”? Well, why not be super-efficient and combine that time with learning how to make yourself more efficient? Try out a new scripting language, learn a new tool that automates things or find out ways others have automated their systems. In general, find out what you’re doing and how, and then see if that can be improved. It’s kind of like doing a performance tuning gig on yourself! If you’re pressed for time, look for bite-sized articles (like the ones I’ve done here for PowerShell and SQL Server) that you can follow in a “serial” fashion. In a short time you’ll have a new set of knowledge you can use to make your day faster.

    Read the article

  • Update to SQL Server Configuration Scripting Utility

    - by Bill Graziano
    Last spring I released a utility to script SQL Server configuration information on CodePlex.  I’ve been making small changes in this application as my needs have changed.  The application is a .NET 2.0 console application.  This utility serves two needs for me.  First it helps with disaster recovery.  All server level objects (logins, jobs, linked servers, audits) are scripted to a single file per object type.  This enables the scripts to be easily run against a DR server.  If these are checked into source control you can view the history of the script and find out what changed and when. The second goal is to capture what changed inside a database.  Objects inside a database (tables, stored procedures, views, etc.) are each scripted to their own file.  This makes it easier to track the changes to an object over time.  This does include permissions and role membership so you can capture security changes.  My assumption is that a database backup is the primary method of disaster recovery for databases so this utility is designed to capture changes to objects.  You can find the full list of changes from the original on the Downloads page on CodePlex.

    Read the article

  • FILESTREAM in SQL Server 2008 R2

    - by CatherineRussell
    Much data is unstructured, such as text documents, images, and videos. This unstructured data is often stored outside the database, separate from its structured data. This separation can cause data management complexities. Or, if the data is associated with structured storage, the file streaming capabilities and performance can be limited. FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary(max) binary large object (BLOB) data as files on the file system. Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Win32 file system interfaces provide streaming access to the data. FILESTREAM uses the NT system cache for caching file data. This helps reduce any effect that FILESTREAM data might have on Database Engine performance. The SQL Server buffer pool is not used; therefore, this memory is available for query processing. FILESTREAM data is not encrypted even when transparent data encryption is enabled. To read more, go to: http://technet.microsoft.com/en-us/library/bb933993.aspx
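    As a minimal sketch of the pattern described above (the table and column names are hypothetical, and it assumes a database that already has a FILESTREAM filegroup), a FILESTREAM column is an ordinary varbinary(max) column paired with the required ROWGUIDCOL, and is accessed with ordinary Transact-SQL:
        -- Hypothetical example; requires a FILESTREAM-enabled database.
        CREATE TABLE dbo.Documents
        (
            DocumentId   uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
            DocumentName nvarchar(260)    NOT NULL,
            DocumentData varbinary(max)   FILESTREAM NULL  -- BLOB stored as a file on the NTFS file system
        );
        INSERT INTO dbo.Documents (DocumentName, DocumentData)
        VALUES (N'readme.txt', CAST(N'Sample content' AS varbinary(max)));
        -- PathName() returns the file system path used by the Win32 streaming interfaces
        SELECT DocumentName, DocumentData.PathName() AS FileSystemPath
        FROM dbo.Documents;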

    Read the article

< Previous Page | 376 377 378 379 380 381 382 383 384 385 386 387  | Next Page >