Search Results

Search found 6818 results on 273 pages for 'ulrike schwinn (dba community)'.

Page 25/273 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • Third Party Applications and Other Acts of Violence Against Your SQL Server

    - by KKline
    I just got finished reading a great blog post from my buddy, Thomas LaRock ( t | b ), in which he describes a useful personal policy he used to track changes made to his SQL Servers when installing third-party products. Note that I'm talking about line-of-business applications here - your inventory management systems and help desk ticketing apps. I'm not talking about monitoring and tuning applications since they, by their very nature, need a different sort of access to your back-end server resources....(read more)

    Read the article

  • Back up a single table in SQL Server

    - by BuckWoody
    SQL Server doesn’t have an easy way to take a table backup, so I often use the bcp (Bulk Copy Program) to accomplish the same goal. I’ve mentioned this before, and someone told me when they tried it they couldn’t restore the table – ah, the dangers of telling people half the information! I should have mentioned that you need to have a “format file” ready if the table does not exist at the destination. In my case I already had the table; in this person’s case they did not. The format file can be used to rebuild that table structure before the data is bcp’d in, and you can read more about it here: http://msdn.microsoft.com/en-us/library/ms191516.aspx

    There’s another way to back up a table, and that’s to create a Filegroup and place the table there. Then you can take a Filegroup backup to back up a single table. Of course, there are other methods of moving a single table’s data in and out, including SQL Server Integration Services and even the older Data Transformation Services, or simply by using the SQLCMD or PowerShell utilities to run a query and just save the output to a file. In fact, these days I’m using a PowerShell script to build INSERT statements from that query. That could also easily be modified to create the table structure (or modify one if needed) quite easily.
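    As a minimal sketch of that Filegroup approach (the database, filegroup, table and path names here are all hypothetical, not from the post):

        -- Add a dedicated filegroup and file, then place the table on it
        ALTER DATABASE SalesDb ADD FILEGROUP SingleTableFG;
        ALTER DATABASE SalesDb ADD FILE
            (NAME = 'SingleTableFG_Data', FILENAME = 'C:\SQLData\SingleTableFG.ndf')
            TO FILEGROUP SingleTableFG;

        -- Create (or rebuild) the table on that filegroup
        CREATE TABLE dbo.ImportantTable
        (   ID int NOT NULL PRIMARY KEY,
            Payload varchar(100) NULL
        ) ON SingleTableFG;

        -- Back up just that filegroup
        BACKUP DATABASE SalesDb
            FILEGROUP = 'SingleTableFG'
            TO DISK = 'C:\SQLBackups\SalesDb_SingleTableFG.bak';

    Keep in mind that restoring an individual read-write filegroup has its own requirements around recovery model and log backups, so treat this as a sketch rather than a complete backup strategy.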

    Read the article

  • Check if Database Exists

    - by Derek Dieter
    In creating a database you also need to check whether or not the database already exists. In order to do so, simply use the ‘if exists’ method and select the name of the database from sysdatabases.

        IF NOT EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE name = N'SQLServerPlanet')
            CREATE DATABASE [SQLServerPlanet]

    The code below will drop an [...]

    Read the article

  • Monitor SQL Server Replication Jobs

    - by Yaniv Etrogi
    The Replication infrastructure in SQL Server is implemented using SQL Server Agent to execute the various components involved in the form of a job (e.g. LogReader agent job, Distribution agent job, Merge agent job). SQL Server jobs execute a binary executable file which is basically C++ code. You can download all the scripts for this article here.

    SQL Server Job Schedules

    By default each job has only one schedule, which is set to start automatically when SQL Server Agent starts. This schedule ensures that whenever the SQL Server Agent service is started all the replication components are also put into action. This is OK and makes sense, but there is one problem with this default configuration that needs improvement: if for any reason one of the components fails it remains down, in a stopped state. Unless you monitor the status of each component you will typically get to know about such a failure from a customer complaint, as a result of missing data or data that is not up to date at the subscriber level. Furthermore, having any of these components in a stopped state can lead to more severe problems if not corrected within a short time.

    The action required to improve on this default setting is in fact very simple. Adding a second schedule, set as a daily recurring schedule that runs every 1 minute, does the trick (a rough sketch of such a schedule appears at the end of this post). SQL Server Agent’s scheduler module knows how to handle overlapping schedules, so if the job is already being executed by another schedule it will not get executed again at the same time. So, in the event of a failure the failed job remains down for at most 60 seconds. Many DBAs are not aware of this capability and so search for more complex solutions, such as having an additional dedicated job running external code in VBS or another scripting language that detects replication jobs in a stopped state and starts them, but there is no need to seek such external solutions when what is needed can be accomplished by T-SQL code.

    SQL Server Jobs Status

    In addition to the 1 minute schedule we also want to ensure that key components in the replication are enabled, so I search for those components by their Category and set their status to enabled in case they are disabled, by executing the stored procedure MonitorEnableReplicationAgents. The jobs that I typically handle are listed below, but you may want to extend this, so here is the query to return all jobs along with their category:

        SELECT category_id, name FROM msdb.dbo.syscategories ORDER BY category_id;

    - Distribution Cleanup
    - LogReader Agent
    - Distribution Agent

    Snapshot Agent Jobs

    By default, when a publication is created, a snapshot agent job also gets created with a daily schedule. I see more organizations where the snapshot agent job does not need to be executed automatically by the SQL Server Agent scheduler than organizations that need a new snapshot generated automatically. To assure this setting is in place I created the stored procedure MonitorSnapshotAgentsSchedules, which disables snapshot agent jobs and also deletes the job schedule. It is worth mentioning that when the publication property immediate_sync is turned off, the snapshot files are not created when the Snapshot agent is executed by the job. You control this property when the publication is created with a parameter called @immediate_sync passed to sp_addpublication, and for an existing publication you can use sp_changepublication.

    Implementation

    The scripts assume the existence of a database named PerfDB.

    Steps:
    1. Run the scripts to create the stored procedures in the PerfDB database.
    2. Create a job that executes the stored procedures every hour.

        -- Verify that the 1_Minute schedule exists.
        EXEC PerfDB.dbo.MonitorReplicationAgentsSchedules @CategoryId = 10; /* Distribution */
        EXEC PerfDB.dbo.MonitorReplicationAgentsSchedules @CategoryId = 13; /* LogReader */
        -- Verify all replication agents are enabled.
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 10; /* Distribution */
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 13; /* LogReader */
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 11; /* Distribution clean up */
        -- Verify that Snapshot agents are disabled and have no schedule.
        EXEC PerfDB.dbo.MonitorSnapshotAgentsSchedules;

    Want to read more about replication? Check out the replication posts on my blog.
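    The stored procedures above do this monitoring for you; purely as an illustration of the 1-minute schedule mentioned earlier (the agent job name below is hypothetical), adding it by hand with msdb.dbo.sp_add_jobschedule could look something like this:

        -- Sketch: attach a recurring 1-minute daily schedule to a replication agent job
        EXEC msdb.dbo.sp_add_jobschedule
            @job_name = N'MYSERVER-PublisherDb-MyPublication-1',  -- hypothetical agent job name
            @name = N'1_Minute',
            @enabled = 1,
            @freq_type = 4,               -- daily
            @freq_interval = 1,           -- every day
            @freq_subday_type = 4,        -- unit: minutes
            @freq_subday_interval = 1,    -- every 1 minute
            @active_start_time = 000000;  -- from midnight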

    Read the article

  • Manage and Monitor Identity Ranges in SQL Server Transactional Replication

    - by Yaniv Etrogi
    Problem

    When using transactional replication to replicate data in a one-way topology from a publisher to a read-only subscriber(s) there is no need to manage identity ranges. However, when using transactional replication to replicate data in a two-way replication topology – between two or more servers – there is a need to manage identity ranges in order to prevent a situation where an INSERT command fails on a PRIMARY KEY violation error, due to the replicated row being inserted having a value for the identity column which already exists at the destination database.

    Solution

    There are two ways to address this situation:

    1. Assign a range of identity values to each server.
    2. Work with parallel identity values.

    The first method requires some maintenance while the second method does not, and so the scripts provided with this article are very useful for anyone using the first method. I will explore this in more detail later in the article.

    In the first solution, set server1 to work in the range of 1 to 1,000,000,000 and server2 to work in the range of 1,000,000,001 to 2,000,000,000. The ranges are set and defined using the DBCC CHECKIDENT command (a sketch appears at the end of this excerpt), and when the ranges in this example are well maintained you meet the goal of preventing INSERT commands from failing due to a PRIMARY KEY violation. The first insert at server1 will get the identity value of 1, the second insert will get the value of 2 and so on, while on server2 the first insert will get the identity value of 1,000,000,001, the second insert 1,000,000,002 and so on, thus avoiding a conflict. Be aware that when a row is inserted the identity value (seed) is generated as part of the insert command at each server and the inserted row is replicated. The replicated row includes the identity column’s value, so the data remains consistent across all servers, but you will be able to tell on which server the original insert took place due to the range that the identity value belongs to.

    In the second solution you do not manage ranges but enforce a situation in which identity values can never overlap, by setting the first identity value (seed) and the increment property one time only, during the CREATE TABLE command of each table. So a table on server1 looks like this:

        CREATE TABLE T1 (
         c1 int NOT NULL IDENTITY(1, 5) PRIMARY KEY CLUSTERED
        ,c2 int NOT NULL
        );

    And a table on server2 looks like this:

        CREATE TABLE T1 (
         c1 int NOT NULL IDENTITY(2, 5) PRIMARY KEY CLUSTERED
        ,c2 int NOT NULL
        );

    When rows are inserted into these two tables the identity values look like this:

    Server1: 1, 6, 11, 16, 21, 26…
    Server2: 2, 7, 12, 17, 22, 27…

    This assures no identity value conflicts while leaving room for 3 additional servers to participate in this same environment. You can go up to 9 servers using this method by setting an increment value of 9 instead of 5 as I used in this example. Continues…
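    As an aside, the reseeding in the first method is done with DBCC CHECKIDENT; the table name and boundary values below are purely illustrative, not part of the article's scripts:

        -- On server1: keep the current identity value at the bottom of its range
        DBCC CHECKIDENT ('dbo.T1', RESEED, 0);

        -- On server2: set the current identity value so subsequent inserts continue above 1,000,000,000
        DBCC CHECKIDENT ('dbo.T1', RESEED, 1000000000);

        -- Report the current identity value and the maximum value in the column at any time
        DBCC CHECKIDENT ('dbo.T1', NORESEED);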

    Read the article

  • Altering a Column Which has a Default Constraint

    - by Dinesh Asanka
    Setting up a default column is a common task for developers. But are we naming those default constraints explicitly? In the below table creation, the column Sys_DateTime is allocated the default value getdate():

        CREATE TABLE SampleTable
        (ID int identity(1,1),
        Sys_DateTime Datetime DEFAULT getdate()
        )

    We can check the relevant information from the system catalogs with the following query:

        SELECT sc.name ColumnName,
            dc.name DefaultName,
            dc.definition,
            OBJECT_NAME(dc.parent_object_id) TableName,
            dc.is_system_named
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
            ON dc.parent_object_id = sc.object_id AND dc.parent_column_id = sc.column_id

    Most of the returned columns are self-explanatory. The last column, is_system_named, identifies whether the default name was given by the system. As you know, in the above case, since we didn’t provide any constraint name, the system generated a default name for us. But the problem with these names is that they can differ from environment to environment. For example, if I create this table in a different database the default name could be DF__SampleTab__Sys_D__7E6CC920.

    Now let us create another default and explicitly name it:

        CREATE TABLE SampleTable2
        (ID int identity(1,1),
        Sys_DateTime Datetime
        )

        ALTER TABLE SampleTable2
        ADD CONSTRAINT DF_sys_DateTime_Getdate DEFAULT( Getdate()) FOR Sys_DateTime

    If we run the previous query again, you can see that the last created default has 0 for is_system_named.

    Now let us say I want to change the data type of the Sys_DateTime column to something else:

        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date

    This will generate the below error:

        Msg 5074, Level 16, State 1, Line 1
        The object ‘DF_sys_DateTime_Getdate’ is dependent on column ‘Sys_DateTime’.
        Msg 4922, Level 16, State 9, Line 1
        ALTER TABLE ALTER COLUMN Sys_DateTime failed because one or more objects access this column.

    This means you need to drop the default constraint before altering the column:

        ALTER TABLE [dbo].[SampleTable2] DROP CONSTRAINT [DF_sys_DateTime_Getdate]
        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date
        ALTER TABLE [dbo].[SampleTable2] ADD CONSTRAINT [DF_sys_DateTime_Getdate] DEFAULT (getdate()) FOR [Sys_DateTime]

    If you have a system-named default constraint, whose name can differ from environment to environment, you cannot drop it by name as before; instead you can use the below code template:

        DECLARE @defaultname VARCHAR(255)
        DECLARE @executesql VARCHAR(1000)

        SELECT @defaultname = dc.name
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
            ON dc.parent_object_id = sc.object_id AND dc.parent_column_id = sc.column_id
        WHERE OBJECT_NAME (parent_object_id) = 'SampleTable'
            AND sc.name ='Sys_DateTime'

        SET @executesql = 'ALTER TABLE SampleTable DROP CONSTRAINT ' + @defaultname
        EXEC( @executesql)

        ALTER TABLE SampleTable ALTER COLUMN Sys_DateTime Date
        ALTER TABLE [dbo].[SampleTable] ADD DEFAULT (Getdate()) FOR [Sys_DateTime]

    Read the article

  • Recap - SQL Saturday 151 in Orlando

    - by KKline
    It's always a feel-good experience for me to return to SQL Saturday in Orlando, the place where SQL Saturdays were started by Andy Warren ( Twitter | Blog ). On this trip, I delivered a full-day, pre-conference seminar on Troubleshooting and Performance Tuning SQL Server. I also delivered a session on SQL Server Internals and Architecture to a totally packed house. For those of you who emailed me directly, here's the link for the special SQL Sentry offer . I got to attend the extended events session...(read more)

    Read the article

  • Delete Job by Name

    - by Derek D.
    When scripting out jobs using SSMS (SQL Server Management Studio) the default script for a drop statement is to drop the job according to its job_id. This is not beneficial, however, when pushing code to different environments. Job_ids are specific to the environment in which they are created. To get around [...]
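    A minimal sketch of the workaround (the job name is hypothetical): look the job up by name and pass @job_name to sp_delete_job instead of @job_id.

        -- Drop a job by name rather than by its environment-specific job_id
        IF EXISTS (SELECT 1 FROM msdb.dbo.sysjobs WHERE name = N'Nightly ETL Load')
            EXEC msdb.dbo.sp_delete_job @job_name = N'Nightly ETL Load';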

    Read the article

  • using sp_addrolemember

    - by Derek Dieter
    To add a user or group to a role, you need to use sp_addrolemember. This procedure is easy to use as it only accepts two parameters: the role name and the username (or group). Roles are utilized to provide a layer of abstraction so that permissions are not applied directly to users. While [...]
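    A minimal sketch, using the built-in db_datareader role and a hypothetical database user:

        -- Add an existing database user to a database role
        EXEC sp_addrolemember @rolename = N'db_datareader', @membername = N'ReportingUser';

    On SQL Server 2012 and later the same thing can be done with ALTER ROLE db_datareader ADD MEMBER ReportingUser, but the two-parameter procedure behaves as described in the post.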

    Read the article

  • Microsoft SQL Server 2008 R2 Administration Cookbook

    - by ssqa.net
    It's one year since my first book was released. Setting aside the financial gains from this book, I'm more happy to have achieved one of the important goals of my career. This is something big in my life to announce: it gives me immense pleasure and happiness to share the release of my first book (hard copy) and eBook, titled Microsoft SQL Server 2008 R2 Administration Cookbook, which is out now. It shares my experience and task-based, real-world best practices in a cookbook style. My thanks to the technical...(read more)

    Read the article

  • Use Those Schemas, People!

    - by BuckWoody
    Database Schemas are just containers – they aren’t users or anything else – think of a sub-directory on the hard drive. In early versions of SQL Server we “hid” schemas, placing all objects under “dbo”, which gave the erroneous perception that Schemas are users. In SQL Server 2005, we “un-hid” or re-introduced schemas within the database. Users can have a default schema (a place where their new objects go), you can add new schemas and transfer objects between them, and they have many other benefits.

    But I still see a lot of applications, developed by shops I know as well as vendors, that don’t make use of a Schema. Everything is piled under dbo. I completely understand this – since permissions can be granted to a schema, they feel a lot like a user, so it’s just easier not to worry about both users and schemas when you create a database. But if you’ll use them properly you can make your application more understandable and portable. You should at least take a few minutes and read more about them – you owe it to your users: http://msdn.microsoft.com/en-us/library/ms190387.aspx
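    A small illustrative sketch (the schema, user and table names are invented) of creating a schema, giving a user a default schema, and moving an existing object out of dbo:

        -- Create a schema
        CREATE SCHEMA Sales AUTHORIZATION dbo;
        GO
        -- Give a (hypothetical) user a default schema for new objects
        ALTER USER AppUser WITH DEFAULT_SCHEMA = Sales;
        GO
        -- Create new objects directly in the schema
        CREATE TABLE Sales.Orders (OrderID int NOT NULL PRIMARY KEY, OrderDate datetime NOT NULL);
        GO
        -- Transfer an existing (hypothetical) object out of dbo
        ALTER SCHEMA Sales TRANSFER dbo.LegacyOrders;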

    Read the article

  • Backup those keys, citizen

    - by BuckWoody
    Periodically I back up the keys within my servers and databases, and when I do, I blog a reminder here. This should be part of your standard backup rotation – the keys should be backed up often enough to have them at hand, and again whenever they change. The first key you need to back up is the Service Master Key, which each Instance already has built-in. You do that with the BACKUP SERVICE MASTER KEY command, which you can read more about here. The second set of keys are the Database Master Keys, stored per database, if you’ve created one. You can back those up with the BACKUP MASTER KEY command, which you can read more about here. Finally, you can use the keys to create certificates and other keys – those should also be backed up. Read more about those here. Anyway, the important part here is the backup. Make sure you keep those keys safe!
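    A rough sketch of the two commands (the file paths, passwords and database name are placeholders):

        -- Back up the instance-level Service Master Key
        BACKUP SERVICE MASTER KEY
            TO FILE = 'C:\KeyBackups\ServiceMasterKey.smk'
            ENCRYPTION BY PASSWORD = '<StrongPasswordHere>';

        -- Back up a Database Master Key (run in the database that owns it)
        USE MyDatabase;
        BACKUP MASTER KEY
            TO FILE = 'C:\KeyBackups\MyDatabase_MasterKey.dmk'
            ENCRYPTION BY PASSWORD = '<AnotherStrongPassword>';

    Store the backup files and the passwords somewhere well away from the server itself.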

    Read the article

  • Coping with infrastructure upgrades

    - by Fatherjack
    A common topic for questions on SQL Server forums is how to plan and implement upgrades to SQL Server – moving from old to new hardware, or moving from one version of SQL Server to another. There are other circumstances where upgrades of other systems affect SQL Server DBAs. For example, where I work at the moment there is a Microsoft Exchange (email) server upgrade in progress. It is being handled by a different team so I’m not wholly sure of the details, but we are in a situation where there are currently two Exchange email servers – the old one and the new one. Users’ mailboxes are being transferred in a planned process, but as we approach the old server being turned off we also have to make sure that our SQL Servers get updated to use the new SMTP server for all of the SQL Agent notifications, SSIS packages etc.

    My servers have a number of profiles so that various jobs can send emails on behalf of various departments and different systems. This means there are lots of places where the old server name needs to be replaced by the new one. Anyone who has set up DBMail and enjoyed the click-tastic odyssey of screens to create Profiles and Accounts and so on and so forth ought to seek some professional help, in my opinion. It’s a nightmare of back-and-forth settings changes and it stinks. I wasn’t looking forward to heading into this mess of a UI and changing the old Exchange server name for the new one on all my SQL Instances for all of the accounts I have set up. So I did what any Englishman with a shed would do: I decided to take it apart and see if I can fix it another way.

    I took a guess that we are going to be working in msdb, and Books OnLine was remarkably helpful and amongst a lot of information told me about a couple of procedures that can be used to interrogate DBMail settings.

        USE [msdb] -- It's where all the good stuff is kept
        GO
        EXEC dbo.sysmail_help_profile_sp;
        EXEC dbo.sysmail_help_account_sp;

    Both of these procedures take optional parameters with the same names – ID and Name. If you provide an ID or a name then the results you get back are for that specific Profile or Account. Otherwise you get details of all Profiles and Accounts on the server you are connected to. As you can see from the output, the Account has the SMTP server information in the servername column. We want to change that value to NewSMTP.Contoso.com.

    Now it appears that the procedure we are looking at gets its data from the sysmail_account and sysmail_server tables; you can get the results the stored procedure provides if you run the code below.

        SELECT [account_id]
            , [name]
            , [description]
            , [email_address]
            , [display_name]
            , [replyto_address]
            , [last_mod_datetime]
            , [last_mod_user]
        FROM dbo.sysmail_account AS sa;

        SELECT [account_id]
            , [servertype]
            , [servername]
            , [port]
            , [username]
            , [credential_id]
            , [use_default_credentials]
            , [enable_ssl]
            , [flags]
            , [last_mod_datetime]
            , [last_mod_user]
            , [timeout]
        FROM dbo.sysmail_server AS sms

    Now, we have no real idea how these tables are linked and whether making an update directly to one or other of them is going to do what we want, or whether it will entirely cripple our ability to send email from SQL Server, so we won’t touch those tables with any UPDATE TSQL. So, back to Books OnLine then, and we find sysmail_update_account_sp. It’s exactly what we need. The examples in BOL show every parameter explicitly defined.

    Not wanting to totally obliterate the existing values by not passing values in all of the parameters, I set to writing some code to gather the existing data from the tables, re-write the SMTP server name, and then execute the resulting TSQL.

        IF OBJECT_ID('tempdb..#sysmailprofiles') IS NOT NULL
            DROP TABLE #sysmailprofiles
        GO
        CREATE TABLE #sysmailprofiles
            (
              account_id INT
            , [name] VARCHAR(50)
            , [description] VARCHAR(500)
            , email_address VARCHAR(500)
            , display_name VARCHAR(500)
            , replyto_address VARCHAR(500)
            , servertype VARCHAR(10)
            , servername VARCHAR(100)
            , port INT
            , username VARCHAR(100)
            , use_default_credentials VARCHAR(1)
            , ENABLE_ssl VARCHAR(1)
            )
        INSERT [#sysmailprofiles]
            ( [account_id]
            , [name]
            , [description]
            , [email_address]
            , [display_name]
            , [replyto_address]
            , [servertype]
            , [servername]
            , [port]
            , [username]
            , [use_default_credentials]
            , [ENABLE_ssl]
            )
            EXEC [dbo].[sysmail_help_account_sp]

        DECLARE @TSQL NVARCHAR(1000)
        SELECT TOP 1 @TSQL = 'EXEC [dbo].[sysmail_update_account_sp] @account_id = '
            + CAST([s].[account_id] AS VARCHAR(20))
            + ', @account_name = ''' + [s].[name] + ''''
            + ', @email_address = N''' + [s].[email_address] + ''''
            + ', @display_name = N''' + [s].[display_name] + ''''
            + ', @replyto_address = N''' + s.replyto_address + ''''
            + ', @description = N''' + [s].[description] + ''''
            + ', @mailserver_name = ''NEWSMTP.contoso.com'''
            + ', @mailserver_type = ' + [s].[servertype]
            + ', @port = ' + CAST([s].[port] AS VARCHAR(20))
            + ', @username = ' + COALESCE([s].[username], '''''')
            + ', @use_default_credentials =' + CAST(s.[use_default_credentials] AS VARCHAR(1))
            + ', @enable_ssl =' + [s].[ENABLE_ssl]
        FROM [#sysmailprofiles] AS s
        WHERE [s].[servername] = 'SMTP.Contoso.com'

        SELECT @tsql
        EXEC [sys].[sp_executesql] @tsql

    This worked well for me, and testing the email function with EXEC dbo.sp_send_dbmail afterwards showed that the settings were indeed using our new Exchange server.

    It was only later, in writing this blog, that I tried running the sysmail_update_account_sp procedure with only the SMTP server name parameter value specified. Despite what Books OnLine might intimate, you can do this and only the values for the parameters specified get changed. If a parameter is not specified in the execution of the procedure then its value remains unchanged. This renders most of the above script unnecessary, as I could have simply specified the account_id that I want to amend and the new value for the parameter I want to update.

        EXEC sysmail_update_account_sp @account_id = 1, @mailserver_name = 'NEWSMTP.Contoso.com'

    This wasn’t going to be the main reason for this post – it was meant to describe how to capture values from a stored procedure and use them in dynamic TSQL – but instead we are here and (re)learning the fact that Books Online is a little flawed in places. It is a fantastic resource for anyone working with SQL Server, but the reader must adopt an enquiring frame of mind and use a little curiosity to try simple variations on examples to fully understand the code they are working with. I think the author(s) of this part of Books OnLine missed an opportunity to include a third example that had fewer than all parameters specified, to give a lead to this method existing.

    Read the article

  • Adding Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a column as a nullable column or as a column with a default value. But are these two operations similar internally, and which method is optimal? Let us start with an example. I created a database and a table using the following script:

        USE master
        Go
        --Drop Database if exists
        IF EXISTS (SELECT 1 FROM SYS.databases WHERE name = 'AddColumn')
            DROP DATABASE AddColumn
        --Create the database
        CREATE DATABASE AddColumn
        GO
        USE AddColumn
        GO
        --Drop the table if exists
        IF EXISTS ( SELECT 1 FROM sys.tables WHERE Name = 'ExistingTable')
            DROP TABLE ExistingTable
        GO
        --Create the table
        CREATE TABLE ExistingTable
        (ID BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
        DateTime1 DATETIME DEFAULT GETDATE(),
        DateTime2 DATETIME DEFAULT GETDATE(),
        DateTime3 DATETIME DEFAULT GETDATE(),
        DateTime4 DATETIME DEFAULT GETDATE(),
        Gendar CHAR(1) DEFAULT 'M',
        STATUS1 CHAR(1) DEFAULT 'Y' )
        GO
        -- Insert 100,000 records with default values
        INSERT INTO ExistingTable DEFAULT VALUES
        GO 100000

    Before adding a Column

    Before adding a column let us look at some of the details of the database.

        DBCC IND (AddColumn,ExistingTable,1)

    By running the above query, you will see 637 pages for the created table.

    Adding a Column

    You can add a column to the table with the following statement.

        ALTER TABLE ExistingTable Add NewColumn INT NULL

    The above will add a column with a null value for the existing records. Alternatively you could add a column with a default value.

        ALTER TABLE ExistingTable Add NewColumn INT NOT NULL DEFAULT 1

    The above statement will add a column with a value of 1 for the existing records. In the below table I measured the performance difference between the above two statements.

        Parameter    Nullable Column    Default Value
        CPU          31                 702
        Duration     129 ms             6653 ms
        Reads        38                 116,397
        Writes       6                  1329
        Row Count    0                  100000

    If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is the reason why it has taken more duration and CPU to add the column with a default value. We can verify this by several methods.

    Number of Pages

    The number of data pages can be obtained by using the DBCC IND command. Though this is an undocumented DBCC command, many experts are OK with using it in production. However, since there is no official word from Microsoft, use this “at your own risk”.

        DBCC IND (AddColumn,ExistingTable,1)

        Before adding the column               637
        Adding a column with NULL              637
        Adding a column with DEFAULT value     1270

    This clearly shows that pages are physically modified. Please note, the high value in the “Adding a column with DEFAULT value” row is also a result of page splits. Continues…

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control.

    But there is a current use of the word “Cloud” that does not necessarily mean that the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that can better be termed “Utility Computing”. This has to do with the provisioning of a computing resource. That means the setup, configuration, management, balancing and so on that is needed so that a user – which might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility.

    The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud” – but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”. A good example here is your e-mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider.

    Lots of companies help you do this in your own data centers, from VMware to IBM and many others. Microsoft's offering in this is based around System Center – they have a “cloud in a box” provisioning system that’s actually pretty slick. The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with, called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases such as top-secret or private systems, you probably should. But there are times where you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you know how to do already.

    So congrats on your new role as a “Cloud Provider”. If you have an e-mail system or a database platform, you can just put that right on your resume.

    Read the article

  • I'm blogging again, and about time too

    - by fatherjack
    No, seriously, this one is about time. I recently had an issue in a work database where a query was giving random results, sometimes the query would return a row and other times it wouldn't. There was quite a bit of work distilling the query down to find the reason for this and I'll try to explain by demonstrating what was happening by using some sample data in a table with rather a contrived use case. Let's assume we have a table that is designed to have a start and end date for something, maybe...(read more)

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone who has an interesting situation. He has 15,000 customers, and he asks if he should have a database per customer for their data. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind.

    Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information) and – very important – what are the security requirements? From the answers to these types of questions, you now have the choice of making multiple tables in a single database, or using multiple databases.

    A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well.

    My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
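    As a tiny illustration of the schema-per-customer idea (all names are invented, and this is only a sketch of the layout, not a recommendation):

        -- One database, one schema per customer
        CREATE SCHEMA Customer0001;
        GO
        CREATE SCHEMA Customer0002;
        GO
        CREATE TABLE Customer0001.Orders (OrderID int NOT NULL PRIMARY KEY, Amount money NOT NULL);
        CREATE TABLE Customer0002.Orders (OrderID int NOT NULL PRIMARY KEY, Amount money NOT NULL);
        GO
        -- Querying across customers then means referencing each schema (or wrapping them in a view)
        SELECT 'Customer0001' AS Customer, COUNT(*) AS OrderCount FROM Customer0001.Orders
        UNION ALL
        SELECT 'Customer0002', COUNT(*) FROM Customer0002.Orders;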

    Read the article

  • What is the "official" place for community support for the Mere Mortals .NET framework?

    - by Ryan Hayes
    My team is using the Mere Mortals .NET framework from Oak Leaf. Being used to working primarily with open source software, I found it excruciatingly painful to find ANY community support for MM.NET. When I asked if there was any, the only place I was given to look for support was Universal Thread, which is a site that requires a membership for search and archived questions. It seems like a third-party, pay-for site should not be the primary source of support for anything like this, especially MM.NET, which costs $700 per developer. It doesn't seem likely to me that an entire community around MM.NET would all choose to pay, on top of the license, just to use a forum. If not Universal Thread, then what is the "official" place to find support for the Mere Mortals .NET framework?

    Read the article

  • Database IDs

    - by fatherjack
    Just a quick post, mainly to test out the new blog format but related to a question on the #sqlhelp hashtag. The question came from Justin Dearing (@zippy1981) as: So I take it database_id isn’t an ever incrementing value. #sqlhelp

    When a new database is created it is given the lowest available ID. This either is in a gap in IDs where a database has been dropped, or the database ID is incremented by one from the highest current ID if there are no gaps to fill. To see this in action, connect to your sandbox server and try this:

        USE MASTER
        GO
        CREATE DATABASE cherry
        GO
        USE cherry
        GO
        SELECT DB_ID()
        GO
        CREATE DATABASE grape
        GO
        USE grape
        GO
        SELECT DB_ID()
        GO
        CREATE DATABASE melon
        GO
        USE melon
        GO
        SELECT DB_ID()
        GO
        USE MASTER
        GO
        DROP DATABASE grape
        GO
        CREATE DATABASE kiwi
        GO
        USE kiwi
        GO
        SELECT DB_ID()
        GO
        USE MASTER
        GO
        DROP DATABASE cherry
        DROP DATABASE melon
        DROP DATABASE kiwi

    You should get an incrementing series of database IDs as the databases are created, until the last one, where the new database gets allocated the ID that is missing because one was dropped.
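    To watch the gaps appear and get reused, you can also list the IDs directly (this query is my own addition, not from the original post):

        -- List current database IDs; a dropped database leaves a gap that the next CREATE DATABASE fills
        SELECT database_id, name
        FROM sys.databases
        ORDER BY database_id;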

    Read the article

  • Starting this week: Dublin, Maidenhead, and London

    - by KKline
    This might be the most overcommitted four-week period of time ever in my life. I’m tired just thinking about it! Not only am I traveling internationally and speaking over the next few weeks, I’m also helping on two book projects, learning some new applications from Quest Software, and helping on a small Transact-SQL refactoring project. Swag on hand? I’ve got a special printing of 500 video training DVDs for this trip:

    - SQL Server Training on DMVs
    - Performance Monitor and Wait Events

    Plus, I’ll have...(read more)

    Read the article

  • More than one way to skin an Audit

    - by BuckWoody
    I get asked quite a bit about auditing in SQL Server. By "audit", people mean everything from tracking logins to finding out exactly who ran a particular SELECT statement. In the really early versions of SQL Server, we didn't have a great story for very granular audits, so lots of workarounds were suggested. As time progressed, more and more audit capabilities were added to the product, and in typical database platform fashion, as we added a feature we didn't often take the others away. So now, instead of not having an option to audit actions by users, you might face the opposite problem - too many ways to audit! You can read more about the options you have for tracking users here: http://msdn.microsoft.com/en-us/library/cc280526(v=SQL.100).aspx  In SQL Server 2008, we introduced SQL Server Audit, which uses Extended Events to really get a simple way to implement high-level or granular auditing. You can read more about that here: http://msdn.microsoft.com/en-us/library/dd392015.aspx  As with any feature, you should understand what your needs are first. Auditing isn't "free" in the performance sense, so you need to make sure you're only auditing what you need to.
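    As a rough illustration of the SQL Server Audit feature mentioned above (the audit name, file path and action groups are examples of mine, not from the post):

        -- Create a server audit that writes to a file target
        CREATE SERVER AUDIT LoginAudit
            TO FILE (FILEPATH = 'C:\SQLAudit\');
        GO
        -- Capture successful and failed logins at the server level
        CREATE SERVER AUDIT SPECIFICATION LoginAuditSpec
            FOR SERVER AUDIT LoginAudit
            ADD (SUCCESSFUL_LOGIN_GROUP),
            ADD (FAILED_LOGIN_GROUP)
            WITH (STATE = ON);
        GO
        -- Audits are created disabled; enable when ready
        ALTER SERVER AUDIT LoginAudit WITH (STATE = ON);

    Remember the point above: only audit what you actually need, since every captured event has some cost.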

    Read the article

  • Microsoft SQL Server 2008 R2 Administration Cookbook - Book and eBook expected June 2011. Pre-order now!

    - by ssqa.net
    Over 85 practical recipes for administering a high-performance SQL Server 2008 R2 system. Book and eBook expected June 2011. Pre-order now! Multi-format orders get free access on PacktLib. This practical cookbook will show you the advanced administration techniques for managing and administering a scalable and high-performance SQL Server 2008 R2 system. It contains over 85 practical, task-based, and immediately usable recipes covering a wide range of advanced administration techniques for administering...(read more)

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation

    - by Stuart Brierley
    Having previously detailed the installation of the Community ODBC Adapter for BizTalk 2009, the next thing I will be looking at is the generation of schemas using this ODBC adapter.

    Within your BizTalk 2009 project, right click the project and select Add Generated Items. In the resultant window choose Add Adapter Metadata and click Add to open the Add Adapter Wizard. Check that the BizTalk Server and Database names are correct, select the ODBC adapter and click Next.

    You must now set the connection string. To start with choose Set, then New DSN (data source name). You now need to define the Data Source you will be connecting to. On the User DSN tab select Add, then the driver you want to use. In this case I am going to use the MySQL ODBC Driver. A User DSN will only be visible on the current machine with you as a user.

    * Although I initially set up a User DSN and this was fine for creating schemas with, I later realised that you actually need a System DSN, as the BizTalk host service needs this to be able to connect to the database on a receive or send port.

    You will then be asked to set up the MySQL ODBC Data Source. In my case this is a local database making use of named pipes, so I had to make sure that I ticked the "Force use of named pipes" check box and removed the "# The Pipe the MySQL Server will use socket=mysql" line from the mysql.ini; with this line in place the connection would fail, as there is no apparent way to specify the pipe name in the ODBC driver configuration.

    This will then update the User DSN tab with the new Data Source. Make sure that you select it and press OK. Select it again in the Choose Data Source window and press OK. On the ODBC transport window select Next.

    You will now be presented with the Schema Information window, where you must supply the namespace, type and root element names for your schema. Next choose the type of statement that you will be using to create your schema - in this case I am using a stored procedure.

    * I later discovered that this option is fine for MySQL stored procedures without input parameters, but failed for MySQL stored procedures with input parameters. (I will be posting on the way to handle input parameters soon.)

    Next you will need to specify the name of the stored procedure. In this case I have a simple stored procedure to return all the data held by my TestTable in MySQL (a sketch of such a procedure appears at the end of this post):

        Select * from TestTable;

    The table itself has three columns: Name, Sex and Married.

    Selecting Finish should now hopefully create your schemas based on the input and output from your stored procedure. In my case I have:

    - An empty schema for the request; after all, I have no parameters for the stored procedure.
    - A response schema comprised of a Table Record with Name, Sex and Married children.

    Next I will be looking at the use of the ODBC adapter with:

    - Receive ports
    - Send ports
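    For reference, a parameterless MySQL stored procedure along the lines of the one described above might look like this (the procedure name is my own invention; only the SELECT and the table come from the post):

        -- Hypothetical wrapper for the "Select * from TestTable" call used above
        DELIMITER //
        CREATE PROCEDURE GetTestTable()
        BEGIN
            SELECT Name, Sex, Married FROM TestTable;
        END //
        DELIMITER ;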

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections - the first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider:

        Server=tcp:[serverName].database.windows.net;Database=myDataBase;
        User ID=LoginName;Password=myPassword;
        Trusted_Connection=False;Encrypt=True;

    This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change:

        Server=tcp:[serverName].database.windows.net;Database=myDataBase;
        User ID=[LoginName]@[serverName];Password=myPassword;
        Trusted_Connection=False;Encrypt=True;

    Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change so I don’t list them here) the server name parameter isn’t sent in the way the load balancer understands, so you need to include the server name right in the login, so the system can parse it correctly. Keep in mind, the string limit for that is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx

    Upshot? Include the @servername on your connection string just to be safe. And plan for that extra space…

    Read the article

  • Quick Tip - Speed a Slow Restore from the Transaction Log

    - by KKline
    Here's a quick tip for you: During some restore operations on Microsoft SQL Server, the transaction log redo step might be taking an unusually long time. Depending somewhat on the version and edition of SQL Server you've installed, you may be able to increase performance by tinkering with the readahead performance for the redo operations. To do this, you should use the MAXTRANSFERSIZE parameter of the RESTORE statement. For example, if you set MAXTRANSFERSIZE=1048576, it'll use 1MB buffers. If you...(read more)
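    A rough example of that syntax (the database name and backup path are assumptions on my part; the MAXTRANSFERSIZE value is the one from the tip):

        -- Restore using 1 MB transfer buffers to help the redo phase
        RESTORE DATABASE SalesDb
            FROM DISK = 'C:\SQLBackups\SalesDb_Full.bak'
            WITH RECOVERY,
                 MAXTRANSFERSIZE = 1048576;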

    Read the article
