Search Results

Search found 1024 results on 41 pages for 'dba'.

Page 7/41 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Recap - SQL Saturday 151 in Orlando

    - by KKline
    It's always a feel-good experience for me to return to SQL Saturday in Orlando, the place where SQL Saturdays were started by Andy Warren ( Twitter | Blog ). On this trip, I delivered a full-day, pre-conference seminar on Troubleshooting and Performance Tuning SQL Server. I also delivered a session on SQL Server Internals and Architecture to a totally packed house. For those of you who emailed me directly, here's the link for the special SQL Sentry offer . I got to attend the extended events session...(read more)

    Read the article

  • Delete Job by Name

    - by Derek D.
    When scripting out jobs using SSMS (SQL Server Management Studio), the default script for a drop statement is to drop the job according to its job_id. This is not beneficial, however, when pushing code to different environments. Job_ids are specific to the Windows environment in which they are created. To get around [...]
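
    The article's full script isn't shown in this excerpt, but as a rough sketch (the job name below is hypothetical), sp_delete_job can be called with @job_name instead of @job_id, which survives moves between environments:

      USE msdb;
      GO
      -- Drop the job by name rather than by its environment-specific job_id
      IF EXISTS (SELECT 1 FROM dbo.sysjobs WHERE [name] = N'NightlyBackup')
          EXEC dbo.sp_delete_job @job_name = N'NightlyBackup';
      GO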

    Read the article

  • using sp_addrolemember

    - by Derek Dieter
    To add a user or group to a role, you need to use sp_addrolemember. This procedure is easy to use as it only accepts two parameters: the role name, and the username (or group). Roles are utilized to provide a layer of abstraction, so that permissions are not applied directly to users. While [...]
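
    For illustration only (the role, user, and group names are invented), the call takes just the two parameters described above:

      -- Add a user to a fixed database role
      EXEC sp_addrolemember @rolename = N'db_datareader', @membername = N'AppUser';
      -- Add a Windows group to a user-defined role
      EXEC sp_addrolemember @rolename = N'SalesReader', @membername = N'CONTOSO\SalesTeam';

    On later versions of SQL Server, ALTER ROLE ... ADD MEMBER replaces this procedure.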

    Read the article

  • Microsoft SQL Server 2008 R2 Administration Cookbook

    - by ssqa.net
    It's been one year since my first book was released. Setting aside the financial gains from the book, I'm happier to have achieved one of the important goals of my career. This is something big in my life to announce, and it gives me immense pleasure and happiness to share that my first book, in print and eBook, titled Microsoft SQL Server 2008 R2 Administration Cookbook, is released and out now. It shares my experience and task-based, real-world best practices in a cookbook style. My thanks to the technical...(read more)

    Read the article

  • Today I talk about you

    - by BuckWoody
    Some time back I posted a blog entry (mirrored here and here) asking you how you design databases. Out of those responses, my own experience, studies I read, and interviews I conducted, I collected a wealth of data. Thanks for your responses. So what am I going to do with that information? Well, all along I had planned for it to be used today. I am giving a presentation called “How Your Customers Design Databases” at an event called “TechReady”. This is a Microsoft-internal event, where technical professionals like myself, salespeople, and the product team get together to talk about what has been working, what doesn’t, what is coming and hopefully (fingers crossed here) what the product team can do to help us help the SQL Server community. I’ve mentioned before that I teach database design as part of a course I run at the University of Washington. I’m also planning to give a mini-lecture from that series at TechEd 2010, so if you’re coming, stop by. I’d love to meet you. So today I talk about you – thanks for the input. I hope you and I can make a difference in the product. It might take a while, but it’s nice to know your voice is being heard.

    Read the article

  • Use Those Schemas, People!

    - by BuckWoody
    Database Schemas are just containers – they aren’t users or anything else – think of a sub-directory on the hard drive. In early versions of SQL Server we “hid” schemas, placing all objects under “dbo”, which gave the erroneous perception that Schemas are users. In SQL Server 2005, we “un-hid” or re-introduced schemas within the database. Users can have a default schema (a place where their new objects go), you can add new schemas and transfer objects between them, and they have many other benefits. But I still see a lot of applications, developed by shops I know as well as vendors, that don’t make use of a Schema. Everything is piled under dbo. I completely understand this – since permissions can be granted to a schema, they feel a lot like a user, so it’s just easier not to worry about both users and schemas when you create a database. But if you’ll use them properly you can make your application more understandable and portable. You should at least take a few minutes and read more about them – you owe it to your users: http://msdn.microsoft.com/en-us/library/ms190387.aspx
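
    As a quick illustration of the points above (the schema, table, and user names are invented), creating a schema, moving an object into it, and setting a default schema look like this:

      -- Create a schema owned by dbo
      CREATE SCHEMA Sales AUTHORIZATION dbo;
      GO
      -- Move an existing object out of dbo into the new schema
      ALTER SCHEMA Sales TRANSFER dbo.Orders;
      GO
      -- New objects created by this user will default to the Sales schema
      ALTER USER AppUser WITH DEFAULT_SCHEMA = Sales;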

    Read the article

  • Backup those keys, citizen

    - by BuckWoody
    Periodically I back up the keys within my servers and databases, and when I do, I blog a reminder here. This should be part of your standard backup rotation – the keys should be backed up often enough to have them at hand, and again whenever they change. The first key you need to back up is the Service Master Key, which each Instance already has built-in. You do that with the BACKUP SERVICE MASTER KEY command, which you can read more about here. The second set of keys is the Database Master Keys, stored per database, if you’ve created one. You can back those up with the BACKUP MASTER KEY command, which you can read more about here. Finally, you can use the keys to create certificates and other keys – those should also be backed up. Read more about those here. Anyway, the important part here is the backup. Make sure you keep those keys safe!
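
    A minimal sketch of the three backups mentioned (the file paths, passwords, database name, and certificate name are placeholders):

      BACKUP SERVICE MASTER KEY TO FILE = 'D:\KeyBackups\ServiceMasterKey.smk'
          ENCRYPTION BY PASSWORD = '<StrongPassword1>';

      USE MyDatabase;  -- a database in which a Database Master Key has been created
      BACKUP MASTER KEY TO FILE = 'D:\KeyBackups\MyDatabase_DMK.dmk'
          ENCRYPTION BY PASSWORD = '<StrongPassword2>';

      BACKUP CERTIFICATE MyCert TO FILE = 'D:\KeyBackups\MyCert.cer'
          WITH PRIVATE KEY (FILE = 'D:\KeyBackups\MyCert.pvk',
                            ENCRYPTION BY PASSWORD = '<StrongPassword3>');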

    Read the article

  • Coping with infrastructure upgrades

    - by Fatherjack
    A common topic for questions on SQL Server forums is how to plan and implement upgrades to SQL Server – moving from old to new hardware, or moving from one version of SQL Server to another. There are other circumstances where upgrades of other systems affect SQL Server DBAs. For example, where I work at the moment there is a Microsoft Exchange (email) server upgrade in progress. It is being handled by a different team so I’m not wholly sure of the details, but we are in a situation where there are currently two Exchange email servers – the old one and the new one. Users’ mailboxes are being transferred in a planned process, but as we approach the old server being turned off we also have to make sure that our SQL Servers get updated to use the new SMTP server for all of the SQL Agent notifications, SSIS packages etc. My servers have a number of profiles so that various jobs can send emails on behalf of various departments and different systems. This means there are lots of places where the old server name needs to be replaced by the new one. Anyone who has set up DBMail and enjoyed the click-tastic odyssey of screens to create Profiles and Accounts and so on and so forth ought to seek some professional help, in my opinion. It’s a nightmare of back-and-forth settings changes and it stinks. I wasn’t looking forward to heading into this mess of a UI and changing the old Exchange server name for the new one on all my SQL Instances for all of the accounts I have set up. So I did what any Englishman with a shed would do: I decided to take it apart and see if I could fix it another way. I took a guess that we were going to be working in MSDB, and Books OnLine was remarkably helpful and, amongst a lot of information, told me about a couple of procedures that can be used to interrogate DBMail settings.

      USE [msdb] -- It's where all the good stuff is kept
      GO
      EXEC dbo.sysmail_help_profile_sp;
      EXEC dbo.sysmail_help_account_sp;

    Both of these procedures take optional parameters with the same names – ID and Name. If you provide an ID or a name then the results you get back are for that specific Profile or Account. Otherwise you get details of all Profiles and Accounts on the server you are connected to. As you can see, the Account has the SMTP server information in the servername column. We want to change that value to NewSMTP.Contoso.com. Now, it appears that the procedure we are looking at gets its data from the sysmail_account and sysmail_server tables; you can get the results the stored procedure provides if you run the code below.

      SELECT [account_id], [name], [description], [email_address], [display_name],
             [replyto_address], [last_mod_datetime], [last_mod_user]
      FROM   dbo.sysmail_account AS sa;

      SELECT [account_id], [servertype], [servername], [port], [username], [credential_id],
             [use_default_credentials], [enable_ssl], [flags], [last_mod_datetime],
             [last_mod_user], [timeout]
      FROM   dbo.sysmail_server AS sms;

    Now, we have no real idea how these tables are linked and whether making an update directly to one or other of them is going to do what we want, or whether it will entirely cripple our ability to send email from SQL Server, so we won't touch those tables with any UPDATE TSQL. So, back to Books OnLine then, and we find sysmail_update_account_sp. It's exactly what we need. The examples in BOL take the form of having every parameter explicitly defined.
    Not wanting to totally obliterate the existing values by not passing values in all of the parameters, I set to writing some code to gather the existing data from the tables, re-write the SMTP server name and then execute the resulting TSQL.

      IF OBJECT_ID('tempdb..#sysmailprofiles') IS NOT NULL
          DROP TABLE #sysmailprofiles
      GO
      CREATE TABLE #sysmailprofiles
          (
            account_id INT,
            [name] VARCHAR(50),
            [description] VARCHAR(500),
            email_address VARCHAR(500),
            display_name VARCHAR(500),
            replyto_address VARCHAR(500),
            servertype VARCHAR(10),
            servername VARCHAR(100),
            port INT,
            username VARCHAR(100),
            use_default_credentials VARCHAR(1),
            ENABLE_ssl VARCHAR(1)
          )

      INSERT [#sysmailprofiles]
             ( [account_id], [name], [description], [email_address], [display_name],
               [replyto_address], [servertype], [servername], [port], [username],
               [use_default_credentials], [ENABLE_ssl] )
      EXEC [dbo].[sysmail_help_account_sp]

      DECLARE @TSQL NVARCHAR(1000)

      SELECT TOP 1
             @TSQL = 'EXEC [dbo].[sysmail_update_account_sp] @account_id = '
             + CAST([s].[account_id] AS VARCHAR(20))
             + ', @account_name = ''' + [s].[name] + ''''
             + ', @email_address = N''' + [s].[email_address] + ''''
             + ', @display_name = N''' + [s].[display_name] + ''''
             + ', @replyto_address = N''' + s.replyto_address + ''''
             + ', @description = N''' + [s].[description] + ''''
             + ', @mailserver_name = ''NEWSMTP.contoso.com'''
             + ', @mailserver_type = ' + [s].[servertype]
             + ', @port = ' + CAST([s].[port] AS VARCHAR(20))
             + ', @username = ' + COALESCE([s].[username], '''''')
             + ', @use_default_credentials =' + CAST(s.[use_default_credentials] AS VARCHAR(1))
             + ', @enable_ssl =' + [s].[ENABLE_ssl]
      FROM   [#sysmailprofiles] AS s
      WHERE  [s].[servername] = 'SMTP.Contoso.com'

      SELECT @tsql

      EXEC [sys].[sp_executesql] @tsql

    This worked well for me, and testing the email function with EXEC dbo.sp_send_dbmail afterwards showed that the settings were indeed using our new Exchange server. It was only later, in writing this blog, that I tried running the sysmail_update_account_sp procedure with only the SMTP server name parameter value specified. Despite what Books OnLine might intimate, you can do this, and only the values for the parameters specified get changed. If a parameter is not specified in the execution of the procedure, then its value remains unchanged. This renders most of the above script unnecessary, as I could have simply specified the account_id that I want to amend and the new value for the parameter I want to update.

      EXEC sysmail_update_account_sp @account_id = 1, @mailserver_name = 'NEWSMTP.Contoso.com'

    This wasn't going to be the main reason for this post – it was meant to describe how to capture values from a stored procedure and use them in dynamic TSQL – but instead we are here and (re)learning the fact that Books Online is a little flawed in places. It is a fantastic resource for anyone working with SQL Server, but the reader must adopt an enquiring frame of mind and use a little curiosity to try simple variations on examples to fully understand the code they are working with. I think the author(s) of this part of Books OnLine missed an opportunity to include a third example that had fewer than all parameters specified, to give a lead to this method existing.

    Read the article

  • Adding Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a nullable column, or a column with a default value. But are these two operations similar internally, and which method is optimal? Let us start with an example. I created a database and a table using the following script:

      USE master
      GO
      --Drop database if it exists
      IF EXISTS (SELECT 1 FROM sys.databases WHERE name = 'AddColumn')
          DROP DATABASE AddColumn
      --Create the database
      CREATE DATABASE AddColumn
      GO
      USE AddColumn
      GO
      --Drop the table if it exists
      IF EXISTS (SELECT 1 FROM sys.tables WHERE name = 'ExistingTable')
          DROP TABLE ExistingTable
      GO
      --Create the table
      CREATE TABLE ExistingTable
          (ID BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
           DateTime1 DATETIME DEFAULT GETDATE(),
           DateTime2 DATETIME DEFAULT GETDATE(),
           DateTime3 DATETIME DEFAULT GETDATE(),
           DateTime4 DATETIME DEFAULT GETDATE(),
           Gendar CHAR(1) DEFAULT 'M',
           STATUS1 CHAR(1) DEFAULT 'Y')
      GO
      -- Insert 100,000 records with default values
      INSERT INTO ExistingTable DEFAULT VALUES
      GO 100000

    Before adding a column
    Before adding a column, let us look at some of the details of the database.

      DBCC IND (AddColumn, ExistingTable, 1)

    By running the above command, you will see 637 pages for the created table.

    Adding a column
    You can add a column to the table with the following statement:

      ALTER TABLE ExistingTable ADD NewColumn INT NULL

    The above will add a column with a null value for the existing records. Alternatively, you could add a column with a default value:

      ALTER TABLE ExistingTable ADD NewColumn INT NOT NULL DEFAULT 1

    The above statement will add a column with a value of 1 for the existing records. In the table below I measured the performance difference between the two statements.

      Parameter   Nullable Column   Default Value
      CPU         31                702
      Duration    129 ms            6,653 ms
      Reads       38                116,397
      Writes      6                 1,329
      Row Count   0                 100,000

    If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is the reason why it has taken more duration and CPU to add the column with a default value. We can verify this by several methods.

    Number of pages
    The number of data pages can be obtained by using the DBCC IND command. Though this is an undocumented DBCC command, many experts are OK with using it in production. However, since there is no official word from Microsoft, use it “at your own risk”.

      DBCC IND (AddColumn, ExistingTable, 1)

      Before adding the columns            637
      Adding a column with NULL            637
      Adding a column with DEFAULT value   1,270

    This clearly shows that pages are physically modified. Please note that the high value indicated for “Adding a column with DEFAULT value” is also a result of page splits. Continues…
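
    The post doesn't say how the CPU, duration, read, and write figures were captured; one simple way to reproduce similar numbers yourself (an assumption, not necessarily the author's method, and the new column names are hypothetical) is to turn on the session statistics before running each ALTER:

      SET STATISTICS IO ON;
      SET STATISTICS TIME ON;
      -- Nullable column: effectively a metadata-only change
      ALTER TABLE ExistingTable ADD NewColumnA INT NULL;
      -- NOT NULL with a default (pre-2012 behaviour): every existing row is touched
      ALTER TABLE ExistingTable ADD NewColumnB INT NOT NULL DEFAULT 1;
      SET STATISTICS IO OFF;
      SET STATISTICS TIME OFF;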

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control. But there is a current use of the word “Cloud” that does not necessarily mean that the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that can better be termed “Utility Computing”. This has to do with the provisioning of a computing resource. That means the setup, configuration, management, balancing and so on that is needed so that a user – which might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility. The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud”, but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”. A good example here is your E-Mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider. Lots of companies help you do this in your own data centers, from VMWare to IBM and many others. Microsoft's offering in this is based around System Center – they have a “cloud in a box” provisioning system that’s actually pretty slick. The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases such as top-secret or private systems, you probably should. But there are times where you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you know how to do already. So congrats on your new role as a “Cloud Provider”. If you have an E-mail system or a database platform, you can just put that right on your resume.

    Read the article

  • I'm blogging again, and about time too

    - by fatherjack
    No, seriously, this one is about time. I recently had an issue in a work database where a query was giving random results, sometimes the query would return a row and other times it wouldn't. There was quite a bit of work distilling the query down to find the reason for this and I'll try to explain by demonstrating what was happening by using some sample data in a table with rather a contrived use case. Let's assume we have a table that is designed to have a start and end date for something, maybe...(read more)

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone that has an interesting situation. He has 15,000 customers, and he asks if he should have a database per customer for their data. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind. Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information) and – very important – what are the security requirements? From the answers to these types of questions, you now have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.

    Read the article

  • Database IDs

    - by fatherjack
    Just a quick post, mainly to test out the new blog format but related to a question on the #sqlhelp hashtag. The question came from Justin Dearing (@zippy1981) as: “So I take it database_id isn’t an ever incrementing value. #sqlhelp” When a new database is created it is given the lowest available ID. This either is in a gap in IDs where a database has been dropped, or the database ID is incremented by one from the highest current ID if there are no gaps to fill. To see this in action, connect to your sandbox server and try this:

      USE MASTER
      GO
      CREATE DATABASE cherry
      GO
      USE cherry
      GO
      SELECT DB_ID()
      GO
      CREATE DATABASE grape
      GO
      USE grape
      GO
      SELECT DB_ID()
      GO
      CREATE DATABASE melon
      GO
      USE melon
      GO
      SELECT DB_ID()
      GO
      USE MASTER
      GO
      DROP DATABASE grape
      GO
      CREATE DATABASE kiwi
      GO
      USE kiwi
      GO
      SELECT DB_ID()
      GO
      USE MASTER
      GO
      DROP DATABASE cherry
      DROP DATABASE melon
      DROP DATABASE kiwi

    You should get an incrementing series of database IDs as the databases are created, until the last one, where the new database gets allocated the ID that is missing because one was dropped.

    Read the article

  • Starting this week: Dublin, Maidenhead, and London

    - by KKline
    This might be the most overcommitted four-week period of my life. I’m tired just thinking about it! Not only am I traveling internationally and speaking over the next few weeks, I’m also helping on two book projects, learning some new applications from Quest Software, and helping on a small Transact-SQL refactoring project. Swag on hand? I’ve got a special printing of 500 video training DVDs for this trip: SQL Server Training on DMVs, and Performance Monitor and Wait Events. Plus, I’ll have...(read more)

    Read the article

  • More than one way to skin an Audit

    - by BuckWoody
    I get asked quite a bit about auditing in SQL Server. By "audit", people mean everything from tracking logins to finding out exactly who ran a particular SELECT statement. In the really early versions of SQL Server, we didn't have a great story for very granular audits, so lots of workarounds were suggested. As time progressed, more and more audit capabilities were added to the product, and in typical database platform fashion, as we added a feature we didn't often take the others away. So now, instead of not having an option to audit actions by users, you might face the opposite problem - too many ways to audit! You can read more about the options you have for tracking users here: http://msdn.microsoft.com/en-us/library/cc280526(v=SQL.100).aspx  In SQL Server 2008, we introduced SQL Server Audit, which uses Extended Events to really get a simple way to implement high-level or granular auditing. You can read more about that here: http://msdn.microsoft.com/en-us/library/dd392015.aspx  As with any feature, you should understand what your needs are first. Auditing isn't "free" in the performance sense, so you need to make sure you're only auditing what you need to.
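
    As a hedged example of the SQL Server Audit feature mentioned above (the audit names and file path are placeholders), a server-level audit of login activity can be set up like this:

      USE master;
      CREATE SERVER AUDIT LoginAudit
          TO FILE (FILEPATH = 'D:\Audits\');
      CREATE SERVER AUDIT SPECIFICATION LoginAuditSpec
          FOR SERVER AUDIT LoginAudit
          ADD (FAILED_LOGIN_GROUP),
          ADD (SUCCESSFUL_LOGIN_GROUP)
          WITH (STATE = ON);
      ALTER SERVER AUDIT LoginAudit WITH (STATE = ON);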

    Read the article

  • Microsoft SQL Server 2008 R2 Administration Cookbook - Book and eBook expected June 2011. Pre-order now!

    - by ssqa.net
    Over 85 practical recipes for administering a high-performance SQL Server 2008 R2 system. Book and eBook expected June 2011. Pre-order now! Multi-format orders get free access on PacktLib. This practical cookbook will show you the advanced administration techniques for managing and administering a scalable and high-performance SQL Server 2008 R2 system. It contains over 85 practical, task-based, and immediately usable recipes covering a wide range of advanced administration techniques for administering...(read more)

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections - the first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider:

      Server=tcp:[serverName].database.windows.net;Database=myDataBase;
      User ID=LoginName;Password=myPassword;
      Trusted_Connection=False;Encrypt=True;

    This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change:

      Server=tcp:[serverName].database.windows.net;Database=myDataBase;
      User ID=[LoginName]@[serverName];Password=myPassword;
      Trusted_Connection=False;Encrypt=True;

    Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change so I don’t list them here) the server name parameter isn’t sent in the way the load balancer understands, so you need to include the server name right in the login, so the system can parse it correctly. Keep in mind, the string limit for that is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx Upshot? Include the @servername on your connection string just to be safe. And plan for that extra space…

    Read the article

  • Quick Tip - Speed a Slow Restore from the Transaction Log

    - by KKline
    Here's a quick tip for you: During some restore operations on Microsoft SQL Server, the transaction log redo step might be taking an unusually long time. Depending somewhat on the version and edition of SQL Server you've installed, you may be able to increase performance by tinkering with the readahead performance for the redo operations. To do this, you should use the MAXTRANSFERSIZE parameter of the RESTORE statement. For example, if you set MAXTRANSFERSIZE=1048576, it'll use 1MB buffers. If you...(read more)
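
    A sketch of what that looks like in practice (the database name and file paths are placeholders):

      RESTORE DATABASE MyDatabase
          FROM DISK = N'D:\Backups\MyDatabase_full.bak'
          WITH NORECOVERY, MAXTRANSFERSIZE = 1048576;  -- 1MB buffers

      RESTORE LOG MyDatabase
          FROM DISK = N'D:\Backups\MyDatabase_log.trn'
          WITH RECOVERY, MAXTRANSFERSIZE = 1048576;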

    Read the article

  • How Can I Effectively Interview an Oracle Candidate?

    - by Tim Medora
    First, I browsed through SO for matching questions and didn't find one, but please point me in the right direction if this exact question has already been asked. I work with and around programmers of various skill levels on various platforms. I would consider my skills to be strong in terms of relational database design, query development, and basic performance tuning and administration. I'm mid-level when it comes to database theory. My team is looking to me to ensure that we have the best talent on staff, in this case, an engineer experienced in Oracle administration. To me, a well-rounded database administrator, regardless of platform, should also be competent in developing against the database so that is also a requirement. However my database skills are centralized around SQL Server 200x with experience in a few other products like SAP MaxDB, Access, and FoxPro. How can I thoroughly assess the skills of an Oracle engineer? I can ask high-level database theory questions and talk about routine tasks that are common across platforms, but I want to dig deep enough that I can be confident in the people I hire. Normally, I would alternate very specific questions that have a right/wrong answer with architectural questions that might have several valid answers. Does anyone have an interview template, specific questions, or any other knowledge that they can share? Even knowing the meaningful Oracle-related certifications would be a help. Thank you. EDIT: All the answers have been very helpful so far and I have given upvotes to everyone. I'm surprised that there are already 3 close votes on this question as "off topic". To be clear, I am specifically asking how a MS SQL Server engineer (like myself) can effectively interview a person with different but symbiotic skills. The question has already received specific, technical answers which have improved my own database design and programming skills. If this is more appropriate as a community wiki, please convert it.

    Read the article

  • Alter Index All Tables

    - by Derek Dieter
    This script comes in handy when needing to alter all indexes in a database and rebuild them. This will only work on SQL Server 2005+. It utilizes the ALL keyword in the Alter index statement to rebuild all the indexes for a particular table. This script retrieves all base tables and stores [...]
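
    The full script isn't included in this excerpt; a minimal sketch of the same idea (not necessarily the author's exact code) builds the ALTER INDEX ... ALL statements from sys.tables and runs them:

      DECLARE @sql NVARCHAR(MAX);
      SET @sql = N'';

      -- Build one ALTER INDEX ALL ... REBUILD statement per base table
      SELECT @sql = @sql + N'ALTER INDEX ALL ON '
                 + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
                 + N' REBUILD;' + CHAR(13)
      FROM sys.tables AS t
      JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

      EXEC sys.sp_executesql @sql;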

    Read the article

  • Wait Statistics in Microsoft SQL Server

    - by KKline
    When it comes to troubleshooting in relational databases, there's no better place to start than wait statistics. In a nutshell, a wait statistic is an internal counter that tells you how long the database spent waiting for a particular resource, activity, or process. Since wait statistics are categorized by type, one look will quickly tell you the variety of problem that needs your attention, assuming you know the meaning of Microsoft's lingo for each wait type....(read more)
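
    A common starting query against the wait statistics DMV (the exclusion list below is illustrative, not exhaustive) ranks waits by total wait time:

      SELECT TOP (10)
             wait_type,
             wait_time_ms,
             signal_wait_time_ms,
             waiting_tasks_count
      FROM sys.dm_os_wait_stats
      WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'WAITFOR',
                              N'SQLTRACE_BUFFER_FLUSH', N'BROKER_TASK_STOP')
      ORDER BY wait_time_ms DESC;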

    Read the article

  • Jobs - are your SQL Agent jobs talking to you enough?

    - by fatherjack
    Most DBAs will have at least a couple of servers that have SQL Agent jobs that are scheduled to do various things on a regular basis. There is a whole host of supporting configuration settings for these jobs but some of the most important are notifications. Notification settings are there to keep you up to date on how your job executions went. You have options on types of notification - email, pager, net send, or an entry in the SQL Server Event Log and you get options on when each of these channels...(read more)
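
    Notification settings can also be changed without the UI; a small sketch (the job and operator names are invented, and notify level 2 means "on failure"):

      USE msdb;
      EXEC dbo.sp_update_job
           @job_name = N'NightlyBackup',
           @notify_level_email = 2,                   -- 2 = notify when the job fails
           @notify_email_operator_name = N'DBA Team';

    This assumes Database Mail is configured and the named operator already exists.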

    Read the article

  • Shrinking a Linux OEL 6 virtual Box image (vdi) hosted on Windows 7

    - by AndyBaker
    Recently, for a customer demonstration, there was a requirement to build a VirtualBox image with Oracle Enterprise Manager Cloud Control 12c. This meant installing OEL Linux 6 as well as creating an 11gR2 database and Oracle Enterprise Manager Cloud Control 12c on a single virtual box. Storage was sized at 300Gb using dynamically allocated storage for the virtual box, and about 10Gb was used for Linux and the initial build. After copying over all the binaries and performing all the installations, the virtual box grew to a used size of around 80Gb on the host operating system, although internally it only really needed around 20Gb. This meant 60Gb had been used when copying over all the binaries and, although now free, was not returned to the host operating system due to the growth of the virtual box storage '.vdi' file. Once the 'vdi' storage has grown, it is not shrunk automatically afterwards. Space is always tight on the laptop, so it was desirable to shrink the virtual box back to a minimal size, and here is the process that was followed.

    Install the 'zerofree' Linux package into the OEL6 virtual box
    The RPM was downloaded and installed from a site similar to the one below; a simple internet search for 'zerofree Linux rpm' will find the required rpm.
      http://rpm.pbone.net/index.php3/stat/4/idpl/12548724/com/zerofree-1.0.1-5.el5.i386.rpm.html

    Execute the 'zerofree' package on the desired Linux file system
    To execute this package, the desired file system needs to be mounted read-only. The following steps outline this process.
      As root: # umount /u01
      As root: # mount -o ro -t ext4 /u01
    NOTE: -o passes the mount options and -t is the file system type found in /etc/fstab.
    Next, run zerofree against the required storage. The device associated with the mount can be located with a simple 'df -h' command.
      As root: # zerofree -v /dev/sda11
    NOTE: This takes a while to run, but the '-v' option gives feedback on the process.

    What does zerofree do?
    Zerofree's purpose is to go through the file system and zero out any unused sectors on the volume so that the later stages can shrink the virtual box storage, obtaining the free space back. When zerofree has completed, the virtual box can be shut down, as the last stage is performed on the physical host where the virtual box vdi files are located.

    Compact the virtual box '.vdi' files
    The final stage is to get VirtualBox to shrink back the storage that has been correctly flagged as free space after executing zerofree. On the physical host, in this case a Windows 7 laptop, a DOS window was opened. At the prompt, the first step is to put the virtual box binaries onto the PATH.
      C:\> echo %PATH%
    The above shows the current value of the PATH environment variable.
      C:\> set PATH=%PATH%;c:\program files\Oracle\Virtual Box;
    The above adds the virtual box binary location onto the existing path.
      C:\> cd c:\Users\xxxx\OEL6.1
    The above changes directory to where the VDI files are located for the required virtual box machine.
      C:\Users\xxxx\OEL6.1> VBoxManage.exe modifyhd zzzzzz.vdi compact
    NOTE: The zzzzzz.vdi is the name of the required vdi file to shrink. Finally, the above command is executed to perform the compact operation on the '.vdi' file(s). This also takes a long time to complete, but it shrinks the VDI file back to a minimum size.
    In the case of the demonstration virtual box with OEM12c, this reduced the virtual box from 80Gb to 20Gb, which was a great outcome to achieve.

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >