Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.

Page 77/1156 | < Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >

  • Determining threshold for lock escalation

    - by Davin
    I have a table with around 2.5 million records and will be updating around 700k of them, and I want to update these while still allowing other users to see the data. My update statement looks something like this:

        UPDATE A WITH (UPDLOCK, ROWLOCK)
        SET A.field = B.field
        FROM Table_1 A
        INNER JOIN Table2 B ON A.id = B.id
        WHERE A.field IS NULL AND B.field IS NOT NULL

    I was wondering if there is any way to work out at what point SQL Server will escalate a lock placed on an update statement (as I don't want the whole table to be locked). I don't have permissions to run a server trace to see how the locks are being applied, so is there any other way of knowing at what point the lock will be escalated to cover the whole table? Thanks!
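
    For context: SQL Server generally attempts lock escalation once a single statement holds roughly 5,000 locks on one object, so the usual workaround is to batch the update below that threshold. A minimal sketch, reusing the table and column names from the question (watching sys.dm_tran_locks from another session is one trace-free way to see the lock counts, though it needs VIEW SERVER STATE):

        -- Repeat the update in batches small enough to stay under the
        -- ~5,000-lock threshold at which escalation is attempted.
        DECLARE @rows int;
        SET @rows = 1;
        WHILE @rows > 0
        BEGIN
            UPDATE TOP (4000) A WITH (ROWLOCK)
            SET A.field = B.field
            FROM Table_1 A
            INNER JOIN Table2 B ON A.id = B.id
            WHERE A.field IS NULL AND B.field IS NOT NULL;

            SET @rows = @@ROWCOUNT;  -- 0 once nothing is left to update
        END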

    Read the article

  • Is there a way to rewrite the SQL query efficiently

    - by user320587
    Hi, I have two tables with the following definitions: TableA(ID1, ID2, ID3, Value1, Value) with a clustered primary key on ID1, ID2 & ID3, and TableB(ID1, Value1) with primary key ID1. Example rows:

        TableA          TableB
        C1 P1 S1        S1
        C1 P1 S2        S2
        C1 P1 S3        S3
        C1 P1 S5        S4
                        S5

    I need to create a table that has the records missing from TableA based on TableB. For the data above, the select query I am trying to create should give the following output: C1 P1 S4. To do this, I have the following SQL query:

        SELECT DISTINCT a.ID1, a.ID2, b.ID1
        FROM TableA a, TableB b
        WHERE b.ID1 NOT IN (
            SELECT DISTINCT ID3 FROM TableA aa
            WHERE a.ID1 = aa.ID1 AND a.ID2 = aa.ID2
        )

    Though this query works, it performs poorly, and my final TableA may have up to 1M records. Is there a way to rewrite this more efficiently? Thanks for any help, Javid
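
    One commonly suggested rewrite, sketched under the assumption that every distinct (ID1, ID2) pair should be checked against all of TableB, replaces the correlated NOT IN with NOT EXISTS so the optimizer can seek on the clustered key:

        SELECT d.ID1, d.ID2, b.ID1 AS ID3
        FROM (SELECT DISTINCT ID1, ID2 FROM TableA) d
        CROSS JOIN TableB b
        WHERE NOT EXISTS (
            -- the (ID1, ID2, ID3) clustered key covers this probe
            SELECT 1 FROM TableA aa
            WHERE aa.ID1 = d.ID1
              AND aa.ID2 = d.ID2
              AND aa.ID3 = b.ID1
        );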

    Read the article

  • Attaching catalog with SQL Authentication credentials attaches it as Read-Only

    - by Nissim
    Hello, as part of our product's installation process, a database is attached to the server. We use EXEC sp_attach_db in order to attach it to MSSQL. The problem occurs when we try to attach it with a "SQL Authentication" connection string - the database is attached to the server as read-only, thus preventing any write access from being performed. This is driving us nuts... it works just fine with Windows Authentication, and the only difference is the connection string... I tried googling for it but found no mention of such a scenario. Any ideas, anyone? It's important to mention that the MDF/LDF physical files are not set with the "ReadOnly" attribute, so this is not the problem.
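
    One detail worth checking: when a sysadmin attaches over Windows Authentication, SQL Server can impersonate that Windows account to open the files, whereas over SQL Authentication the files are opened under the SQL Server service account - and if that account has only read permission on the MDF/LDF, the database can come up read-only. A minimal attach-and-verify sketch (database name and paths are placeholders; CREATE DATABASE ... FOR ATTACH is the non-deprecated form of sp_attach_db):

        -- Attach the files (placeholders for the real product paths)
        CREATE DATABASE MyProduct
        ON (FILENAME = N'C:\Data\MyProduct.mdf'),
           (FILENAME = N'C:\Data\MyProduct_log.ldf')
        FOR ATTACH;

        -- Check whether it actually attached read-only
        SELECT name, is_read_only
        FROM sys.databases
        WHERE name = N'MyProduct';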

    Read the article

  • SQL Drop Index on different Database

    - by Jim
    While trying to optimize SQL scripts, I was advised to add indexes. What is the easiest way to specify which database an index should be dropped from?

        IF EXISTS (SELECT * FROM sysindexes WHERE NAME = 'idx_TableA')
            DROP INDEX TableA.idx_TableA
        IF EXISTS (SELECT * FROM sysindexes WHERE NAME = 'idx_TableB')
            DROP INDEX TableB.idx_TableB

    In the code above, TableA is in DB-A and TableB is in DB-B. I get the following error when I change DROP INDEX TableA.idx_TableA to DROP INDEX DB-A.dbo.TableA.idx_TableA: Msg 166, Level 15, State 1, Line 2 - 'DROP INDEX' does not allow specifying the database name as a prefix to the object name. Any thoughts are appreciated.
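
    Since DROP INDEX refuses a database prefix, one workaround is to switch context inside dynamic SQL; a three-part name still works for the existence check. A sketch using the names from the question (brackets are needed around DB-A because of the hyphen):

        -- USE inside EXEC() only changes context for that dynamic batch
        IF EXISTS (SELECT * FROM [DB-A].dbo.sysindexes WHERE name = 'idx_TableA')
            EXEC ('USE [DB-A]; DROP INDEX TableA.idx_TableA;');

        IF EXISTS (SELECT * FROM [DB-B].dbo.sysindexes WHERE name = 'idx_TableB')
            EXEC ('USE [DB-B]; DROP INDEX TableB.idx_TableB;');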

    Read the article

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz.) Functions play an important part in programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn’t designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore ‘return data’, but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function:

        SELECT GETDATE(), SomeDatetimeColumn FROM dbo.SomeTable;

    There’s no logical difference between the field that is being returned by the function and the field that’s being returned by the table column. Both are datetime fields – if you didn’t have inside knowledge, you wouldn’t necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on. But it’s rubbish. Ok, it’s not all rubbish, but it mostly is. And this isn’t even considering the SARGability impact. It’s far more significant than that. (When I say the SARGability aspect, I mean “because you’re unlikely to have an index on the result of some function that’s applied to a column, so try to invert the function and query the column in an unchanged manner”.) I’m going to consider the three main types of user-defined functions in SQL Server:

      • Scalar
      • Inline Table-Valued
      • Multi-statement Table-Valued

    I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don’t tend to get around to doing CLR functions, and I’m going to focus on the T-SQL-based user-defined functions. Most people split these types of function up into two groups. So do I. Except that most people pick them based on ‘scalar or table-valued’. I’d rather go with ‘inline or not’. If it’s not inline, it’s rubbish. It really is. Let’s start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database.
        CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int)
        RETURNS TABLE
        AS RETURN (
            SELECT e.LoginID AS EmployeeLogin, o.OrderDate, o.SalesOrderID
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year, @orderyear-2000, '20000101')
            AND o.OrderDate <  DATEADD(year, @orderyear-2000+1, '20000101')
        );
        GO

        CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int)
        RETURNS @results TABLE (
            EmployeeLogin nvarchar(512),
            OrderDate datetime,
            SalesOrderID int
        )
        AS
        BEGIN
            INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)
            SELECT e.LoginID, o.OrderDate, o.SalesOrderID
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year, @orderyear-2000, '20000101')
            AND o.OrderDate <  DATEADD(year, @orderyear-2000+1, '20000101');
            RETURN;
        END;
        GO

    You’ll notice that I’m being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter. Regular readers will be hoping I’ll show what’s going on in the execution plans here. Here I’ve run two SELECT * queries with the “Show Actual Execution Plan” option turned on. Notice that the ‘Query cost’ of the multi-statement version is just 2% of the ‘Batch cost’. But also notice there’s trickery going on. And it’s nothing to do with that extra index that I have on the OrderDate column. Trickery.

    Look at it – clearly, the first plan is showing us what’s going on inside the function, but the second one isn’t. The second one is blindly running the function, and then scanning the results. There’s a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing... To see what’s actually going on, let’s look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here’s where we see the inner workings of it. You’ll probably recognise the right-hand side of the TVF’s plan as looking very similar to the first plan – but it’s now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch!

    But it gets worse. Let’s consider what happens if we don’t need all the columns. We’ll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn’t need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed. A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It’s essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn’t really a function at all. It’s a stored procedure.
    It’s wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, yet this is part of the pain associated with this procedural function situation. The biggest clue that a multi-statement function is more like a stored procedure than a function is the “BEGIN” and “END” statements that surround the code. If you try to create a multi-statement function without these statements, you’ll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it “The Dangers of BEGIN and END”, and yes, I’ve written about this type of thing before in a similarly-named post over at my old blog. Now how about scalar functions... Suppose we wanted a scalar function to return the count of these.

        CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int)
        RETURNS int
        AS
        BEGIN
            RETURN (
                SELECT COUNT(*)
                FROM Sales.SalesOrderHeader AS o
                LEFT JOIN HumanResources.Employee AS e
                    ON e.EmployeeID = o.SalesPersonID
                WHERE o.SalesPersonID = @salespersonid
                AND o.OrderDate >= DATEADD(year, @orderyear-2000, '20000101')
                AND o.OrderDate <  DATEADD(year, @orderyear-2000+1, '20000101')
            );
        END;
        GO

    Notice the evil words? They’re required. Try to remove them and you just get an error. That’s right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It’s as ugly as anything. Hopefully this will change in future versions. Let’s have a look at how this is reflected in an execution plan. Here’s a query, its Actual plan, and its Estimated plan:

        SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = p.SalesPersonID;

    We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn’t need the Employee table, but that’s a bit of a red herring here. There’s actually something way more significant going on. If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.0337999. If I just run the query SELECT dbo.FetchSales_scalar(281,2003); we see that the UDF cost is still unchanged. You see, this 0.0337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could’ve been a lot more, if we’d had more salespeople or more years. And so we come to the biggest problem. This procedure (I don’t want to call it a function) is getting called 68 times – each one roughly twice as expensive as the outer query. And because each call runs in a separate context, there is even more overhead that I haven’t considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could’ve used that kind of costing method, but that’s another story that I’m not going to go into here. Let’s look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query.
    It could’ve run something like this:

        SELECT e.LoginID, y.year,
            (SELECT COUNT(*)
             FROM Sales.SalesOrderHeader AS o
             LEFT JOIN HumanResources.Employee AS e
                 ON e.EmployeeID = o.SalesPersonID
             WHERE o.SalesPersonID = p.SalesPersonID
             AND o.OrderDate >= DATEADD(year, y.year-2000, '20000101')
             AND o.OrderDate <  DATEADD(year, y.year-2000+1, '20000101')
            ) AS NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = p.SalesPersonID;

    Don’t worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don’t push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what’s being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly. To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that’s ok.

        CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int)
        RETURNS table
        AS RETURN (
            SELECT COUNT(*) AS NumSales
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year, @orderyear-2000, '20000101')
            AND o.OrderDate <  DATEADD(year, @orderyear-2000+1, '20000101')
        );
        GO

    But we can’t use this as a scalar. Instead, we need to use it with the APPLY operator.

        SELECT e.LoginID, y.year, n.NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = p.SalesPersonID
        OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n;

    And now, we get the plan that we want for this query. All we’ve done is tell the function that it’s returning a table instead of a single value, and removed the BEGIN and END statements. We’ve had to name the column being returned, but what we’ve gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it’s a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions). So please. If you see BEGIN and END in a function, remember it’s not really a function, it’s a procedure. And then fix it. @rob_farley

    Read the article

  • The Visual Studio development environment crashes when you open Visual Studio 2005, Visual Studio 2008, or Visual Studio 2010

    983279 ... The Visual Studio development environment crashes when you open Visual Studio 2005, Visual Studio 2008, or Visual Studio 2010. (Microsoft Knowledge Base alert via the kbAlertz.com RSS feed.)

    Read the article

  • An access violation occurs when you run a Visual C++ application in Visual Studio 2005 SP1

    980422 ... An access violation occurs when you run a Visual C++ application in Visual Studio 2005 SP1. (Microsoft Knowledge Base alert via the kbAlertz.com RSS feed.)

    Read the article

  • Grant Showplan : MS SQL Server 2005

    SQL version applied to: 2005. The SHOWPLAN permission only governs who can run the various SET SHOWPLAN statements; it has no impact on performance. And with some of the SHOWPLAN settings in effect, statements are not executed at all - they go through the compilation phase only. By Sachin Diwaker.
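
    A minimal sketch of granting and exercising the permission (the principal name is a placeholder):

        GRANT SHOWPLAN TO [ReportingUser];

        -- With the permission in place, the grantee can compile without executing:
        SET SHOWPLAN_XML ON;
        GO
        SELECT COUNT(*) FROM sys.objects;  -- compiled only, not run
        GO
        SET SHOWPLAN_XML OFF;
        GO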

    Read the article

  • ANN: free DPack for VS.NET 2003/2005 v2.5.1 (3 replies)

    Hi all, I'm pleased to announce the new v2.5.1 availability of our free VS.NET 2003 and VS 2005 add-on collection called DPack. DPack includes various browser tools that allow the developer to quickly find a solution's code members, types, or files. Additional tools such as numbered bookmarks, solution backup, and statistics, to name a few, are included as well. You can check the change log, see the ...

    Read the article

  • Oracle Magazine, July/August 2005

    Oracle Magazine July/August 2005 features articles on the IT challenges and solutions of small and midsize businesses, Linux clusters as data warehousing solutions, EJB 3.0, developing PHP applications against Oracle XML DB, Oracle LogMiner, Oracle JDeveloper and Oracle ADF, and much more.

    Read the article

  • Oracle Magazine, May/June 2005

    Oracle Magazine May/June 2005 features articles on the architecture of service-oriented applications, RFID and Oracle, UML 2.0, Oracle XML DB Repository, Oracle Flashback, Oracle Expression Filter, Oracle JDeveloper and Oracle ADF, and much more.

    Read the article

  • Oracle Magazine, September/October 2005

    Oracle Magazine September/October 2005 features articles on the release of Oracle Database 10g Release 2, Oracle Fusion Middleware, PeopleSoft Enterprise CRM, Transparent Data Encryption (TDE), native XQuery support in Oracle Database 10g Release 2, Oracle Data Provider for .NET, Oracle JDeveloper, Oracle ADF, and much more.

    Read the article

  • Oracle Magazine, March/April 2005

    Oracle Magazine March/April 2005 features articles on managing unstructured content, coordinating business processes, Oracle's Austin Data Center, starting with Oracle ADF, Oracle XML Data Synthesis, SQL analytics, using materialized views, and much more.

    Read the article

  • SSRS 2005 - Usability analysis - Is SSRS a good option for this scenario?

    - by Sach
    How practical is it to consider SSRS 2005 or SSRS 2008 as a reporting solution for a report that has to show millions of records (anywhere from 3 to 10 million)? Is there any threshold on report size in SSRS? For a huge report, how do I know whether SSRS will consume the whole memory and start paging operations to disk, or simply fail with an out-of-memory error? Even if I keep increasing the memory, how can I be sure that a certain amount will be sufficient for such huge reports on the report server? All the above questions are haunting me because I have a dedicated report server with a decent hardware and OS configuration (8 processors, 8GB RAM, 64-bit OS and 64-bit SQL Server 2005). Still, my report with around 2 million records takes more than 6 minutes, and going from one page to another takes 3 minutes! My datasource is on a separate server, and when I execute only the stored proc there, it returns the results in less than 2 minutes.

    Read the article

  • Hourly SQL Server 2005 Slowness (Possibly caused by SYSTEM)

    - by Zorlack
    We're trying to diagnose the cause of slowness on our database server. We're running the latest rev of SQL Server 2005 on Windows 2008 x64. The behavior that we're seeing is this: we see the SYSTEM process spike one of the CPUs for about 2 minutes, and during this time SQL Server slows down by a factor of 10. The slowness lasts until SYSTEM is done, then in an hour everything starts again. During these slowdowns disk writes don't spike and paging doesn't spike; the only noticeable precursor we see is that SYSTEM maxes out one of the sixteen (HT) CPUs. Note that this doesn't happen at the top of the hour; it just happens once an hour, and it shifts a bit depending on the length of the incident. At the moment this is causing intermittent slowdowns, but when the server is really busy it can cause worker thread starvation. The server is a dual quad-core Dell R710 with 96GB of RAM and RAID10 data/log disks. Has anyone experienced this kind of problem? Does anyone know where we should look?
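
    One way to narrow this down from inside SQL Server while the slowdown is happening: sample what the active requests are waiting on, which often points at the external cause. A hedged diagnostic sketch (requires VIEW SERVER STATE; both DMVs exist in SQL Server 2005):

        SELECT r.session_id, r.status, r.wait_type, r.wait_time,
               r.cpu_time, t.text
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.session_id > 50;  -- skip system sessions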

    Read the article

  • How do I get source file information with dumpbin /symbols when compiling with VS 2005?

    - by Thomas Dartsch
    I have a tool which uses the output of dumpbin /symbols to do some dependency analysis with our C/C++ libraries. When we compiled the libs with VS 6.0, the dumpbin COFF SYMBOL TABLE contained entries like

        000 00000008 DEBUG notype Filename | .file x:\mydir\mysource.c

    allowing me to get the relationship between sources and defined/used symbols, which is essential for my tool. When we compile with VS 2005, these entries are missing. When I look at the libs with a hex editor, it seems that there is no filename information at all included in the binary files, so it appears not to be a dumpbin problem but something compilation-related. So I'm looking for a way to get the Filename entries back into my libraries when compiling with VS 2005.

    Read the article

  • "select * from table" vs "select colA,colB,etc from table" interesting behaviour in SqlServer2005

    - by kristof
    Apologies for a lengthy post, but I needed to include some code to illustrate the problem. Inspired by the question "What is the reason not to use select *?" posted a few minutes ago, I decided to point out some observations of the select * behaviour that I noticed some time ago. So let the code speak for itself:

        IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[starTest]') AND type in (N'U'))
            DROP TABLE [dbo].[starTest]
        CREATE TABLE [dbo].[starTest](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [A] [varchar](50) NULL,
            [B] [varchar](50) NULL,
            [C] [varchar](50) NULL
        ) ON [PRIMARY]
        GO
        insert into dbo.starTest(a,b,c)
        select 'a1','b1','c1' union all
        select 'a2','b2','c2' union all
        select 'a3','b3','c3'
        go
        IF EXISTS (SELECT * FROM sys.views WHERE object_id = OBJECT_ID(N'[dbo].[vStartest]'))
            DROP VIEW [dbo].[vStartest]
        go
        create view dbo.vStartest as
        select * from dbo.starTest
        go
        IF EXISTS (SELECT * FROM sys.views WHERE object_id = OBJECT_ID(N'[dbo].[vExplicittest]'))
            DROP VIEW [dbo].[vExplicittest]
        go
        create view dbo.[vExplicittest] as
        select a,b,c from dbo.starTest
        go
        select a,b,c from dbo.vStartest
        select a,b,c from dbo.vExplicitTest
        IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[starTest]') AND type in (N'U'))
            DROP TABLE [dbo].[starTest]
        CREATE TABLE [dbo].[starTest](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [A] [varchar](50) NULL,
            [B] [varchar](50) NULL,
            [D] [varchar](50) NULL,
            [C] [varchar](50) NULL
        ) ON [PRIMARY]
        GO
        insert into dbo.starTest(a,b,d,c)
        select 'a1','b1','d1','c1' union all
        select 'a2','b2','d2','c2' union all
        select 'a3','b3','d3','c3'

        select a,b,c from dbo.vExplicittest
        select a,b,c from dbo.vStartest

    If you execute the script and look at the results of the last two select statements, you will see the following:

        select a,b,c from dbo.vExplicittest
        a1 b1 c1
        a2 b2 c2
        a3 b3 c3

        select a,b,c from dbo.vStartest
        a1 b1 d1
        a2 b2 d2
        a3 b3 d3

    As you can see in the results of select a,b,c from dbo.vStartest, the data of column c has been replaced with the data from column d. I believe this is related to the way the views are compiled; my understanding is that the columns are mapped by column index (1,2,3,4) as opposed to names. I thought I would post it as a warning for people using select * in their SQL and experiencing unexpected behaviour. Note: if you rebuild the view that uses select * each time after you modify the table, it will work as expected.
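
    One footnote on that last point: rather than dropping and recreating the view after a table change, refreshing its metadata is usually enough. A minimal sketch:

        -- Rebinds the view's column mapping to the table's current definition
        EXEC sp_refreshview N'dbo.vStartest';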

    Read the article

  • Is it possible to control the destination folder when checking out a project from VSS 2005?

    - by swolff1978
    We are currently using VSS 2005 for source control - and please let me start by saying I've read a lot of posts on Stack Overflow and I realize VSS is the devil. That being said... it's what we have to work with for now, and I have a question about the checkout process. We have the code organized in a certain hierarchy on the VSS server, but when we do a checkout we don't need that same hierarchy on our machines. Is there a way to control how Visual Studio 2008 and VSS 2005 create the checkout destination folder so that I don't end up with the code being 9 folders deep on MY machine?

    Read the article

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different webhosts, and I'm looking for a way to periodically update the one server using the other. Here's what I'm looking for:

      • As automated as possible - ideally, without any involvement on my part once it's set up.
      • Pushes a number of databases, in their entirety (including any schema changes), from one server to the other.
      • Freely allows changes on the source server without breaking my process. For this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source, and then recreate the publication and subscription.
      • One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did.

    Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks them up and restores them. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody. UPDATE: I ended up designing a custom solution to get this done using BAT files, 7-Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
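
    On the "can these backups be done synchronously" question: a BACKUP DATABASE statement blocks until the backup file is complete, so a job step that runs the backup and then kicks off the transfer never ships a half-written file. A minimal sketch (database name and path are placeholders):

        BACKUP DATABASE [SourceDb]
        TO DISK = N'D:\Backups\SourceDb.bak'
        WITH INIT, STATS = 10;
        -- Control only returns once the .bak is fully written,
        -- so the FTP step can safely start after this statement.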

    Read the article
