Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.


  • MS SQL server: Single or multiple instances?

    - by Hugo Riley
    How costly (CPU- or memory-wise) is it to have multiple instances of SQL Server 2005 instead of only one instance with prefixed databases? A company has three application providers. Each will install one application, and each requires two or three databases. Should they all use the same instance, or should every provider use its own named instance? Is there any strong reason for one setup or the other?

    Read the article

  • SQL Monitoring Overview

    - by user45237
    Hi, I currently look after 20-odd databases in SQL Server 2005 and need a tool for monitoring performance that keeps me informed if a database is running slow. Is there anything I can run within Management Studio, or any other good third-party tool (preferably free), that can do the job? Thanks

    Read the article

  • SQL Server 2008 Restore hangs on 100%

    - by CL4NCY
    Hi, I have backed up a large database from SQL Server 2005 and am trying to restore it to a SQL Server 2008 instance. It seems to work OK until it gets to 100%, at which point it hangs indefinitely. I've managed to restore smaller databases to this server OK. Any ideas?

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 2)

    - by Sadequl Hussain
    Continuing from Part 1, our Migration Checklist continues:

    Step 5: Update statistics. It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database: sp_updatestats. The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database. However, a word of caution: running sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistic of every table in the database. You may therefore want to change the compatibility mode before you run the command. Another thing you should remember to do is to ensure the new database has its AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties set to ON. You can do so using the ALTER DATABASE command or from SSMS.

    Step 6: Set database options. You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before backup, the restored copy will also retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or Management Studio, since you can change the properties. However, this is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after restore. You may want to check the database's status programmatically in such cases. Another important option you may want to set for the newly restored / attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (default for SQL Server 2005 and later databases), TORN_PAGE_DETECTION (default when restoring a pre-SQL Server 2005 database) or NONE. Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates the checksum for a page's contents and writes it to the page header before storing the page on disk. When the page is read from disk, the checksum is computed again and compared with the checksum stored in the header. Torn page detection works in much the same way, in that it stores a bit in the page header for every 512-byte sector. When data is read from the page, the torn page bits stored in the header are compared with the respective sector contents. When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header. This is not something you would want to set unless there is a very specific reason. Microsoft suggests using the CHECKSUM page verify option as this offers more protection.

    Step 7: Map database users to logins. A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic. You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters.
    Depending on what you want to do, you may use fewer than four, though. The first parameter, @Action, can take three values. When you specify @Action = 'Report', the system will provide you with a list of database users which are not mapped to any login. If you want to map a database user to an existing SQL Server login, the value for @Action will be 'Update_One'. In this case, you will only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called "bob" and there is already a SQL Server login with the same name and you want to map the user to the login, you will execute a query like the following: sp_change_users_login @Action = 'Update_One', @UserNamePattern = 'bob', @LoginName = 'bob'. If the login does not exist, you can instruct SQL Server to create the login with the same name. In this case you will need to provide a password for the login and the value of the @Action parameter will be 'Auto_Fix'. If the login already exists, it will be automatically mapped to the user account. Unfortunately, the sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server. You will need to follow a manual process to re-map those database user accounts. Continues…
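    As a rough illustration of Steps 5 to 7, the commands below show one way to apply these settings after a restore. This is only a sketch; the database name MyMigratedDb is a placeholder, and the options should be adjusted to your own requirements.

        -- Step 5: refresh statistics and make sure the auto statistics options are on
        USE MyMigratedDb;
        EXEC sp_updatestats;
        ALTER DATABASE MyMigratedDb SET AUTO_CREATE_STATISTICS ON;
        ALTER DATABASE MyMigratedDb SET AUTO_UPDATE_STATISTICS ON;

        -- Step 6: clear a carried-over read-only state and enable checksum page verification
        ALTER DATABASE MyMigratedDb SET READ_WRITE;
        ALTER DATABASE MyMigratedDb SET PAGE_VERIFY CHECKSUM;

        -- Step 7: list database users that are not mapped to any login
        EXEC sp_change_users_login @Action = 'Report';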

    Read the article

  • Trash Destination Adapter

    The Trash Destination and this article came from early experiences of using SSIS and community feedback at the time. When developing a package it is very useful to have a destination adapter that does nothing but consume rows, with no setup required. You often want to run a package part way through development, or just add a path so you can set a Data Viewer. There are stock tasks that can be used, but with the Trash Destination all columns are treated as selected automatically (usage type of read-only), so the pipeline knows they are required. It is also obvious that this is for development or diagnostic purposes, and is clearly not a part of the functional design of the package. It is also ideal for just playing around and exploring concepts in SSIS, and is often used in conjunction with the Data Generator Source. Using these two components it is easy to set up a test of an expression in the Derived Column Transformation, for example. The Data Generator Source provides some dummy data, and the Trash Destination allows you to anchor the output path and set a Data Viewer to examine the results. It can also be used when performance tuning packages. It is a consistent and known quantity that has no external influences, so it is ideal as a destination when breaking the data flow into sections to isolate a bottleneck. The adapter is really simple to use and requires no setup. Simply drop it onto the pipeline designer and use it to terminate your data flow path.

    Installation: The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, for 2005/2008, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trash Destination transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations.

    Downloads: The Trash Destination is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed. Trash Destination for SQL Server 2005; Trash Destination for SQL Server 2008; Trash Destination for SQL Server 2012.

    Version History:
    SQL Server 2012 - Version 3.0.0.34 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008 - Version 2.0.0.33 - SQL Server 2008 release. Includes support for upgrade of 2005 packages. RTM compatible, previously February 2008 CTP. (4 Mar 2008)
    Version 2.0.0.31 - SQL Server 2008 November 2007 CTP. (14 Feb 2008)
    SQL Server 2005 - Version 1.0.2.18 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006)
    Version 1.0.1.1 - SQL Server 2005 IDW 15 June CTP. Minor enhancements over v1.0.1.0. (11 Jun 2005)
    Version 1.0.1.0 - SQL Server 2005 IDW 14 April CTP. First Public Release. (30 May 2005)

    Troubleshooting: Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component along the lines of "The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer", this usually indicates that the internal cache of SSIS components needs to be updated. This is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. The full error message is shown below for reference:

        TITLE: Microsoft Visual Studio
        ------------------------------
        The component could not be added to the Data Flow task.
        Please verify that this component is properly installed.
        ------------------------------
        ADDITIONAL INFORMATION:
        The data flow object "Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=1.0.1.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc" is not installed correctly on this computer. (Microsoft.DataTransformationServices.Design)

    For 2005/2008, once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component? This is not necessary for SQL Server 2012, as the new SSIS toolbox automatically detects components. If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.

    Read the article

  • How to archive data from a table to a local or remote database in SQL 2005 and SQL 2008

    - by simonsabin
    Often you have the need to archive data from a table. This leads to a number of challenges: 1. How can you do it without impacting users? 2. How can you make it transactionally consistent, i.e. the data put in the archive is the data removed from the main table? 3. How can you get it to perform well? Point 1 is very much tied to point 3. If it doesn't perform well then the delete of data is going to cause lots of locks and thus potential blocking. For points 1 and 3 refer to my previous posts DELETE-TOP-x-rows-avoiding-a-table-scan and UPDATE-and-DELETE-TOP-and-ORDER-BY---Part2. In essence you need to be removing small chunks of data from your table, and you want to do that while avoiding a table scan. So that deals with the delete approach, but archiving is about inserting that data somewhere else. Well, in SQL 2008 a new feature was introduced: INSERT over DML (Data Manipulation Language, i.e. SQL statements that change data), or composable DML. This is the ability to nest DML statements within themselves, so you can pass the results of an insert to an update or a merge. I've mentioned this before here: SQL-Server-2008---MERGE-and-optimistic-concurrency. This feature is currently limited to being able to consume the results of a DML statement in an INSERT statement. There are many restrictions, which you can find here: http://msdn.microsoft.com/en-us/library/ms177564.aspx (look for the section "Inserting Data Returned From an OUTPUT Clause Into a Table"). Even with the restrictions, what we can do is consume the OUTPUT from a DELETE and INSERT the results into a table in another database. Note that BOL refers to not being able to use a remote table; remote means a table on another SQL instance.

    To show this working, use this SQL to set up two databases, foo and fooArchive:

        create database foo
        go
        --create the source table fred in database foo
        select * into foo..fred from sys.objects
        go
        create database fooArchive
        go
        if object_id('fredarchive',DB_ID('fooArchive')) is null
        begin
            select getdate() ArchiveDate,* into fooArchive..FredArchive from sys.objects where 1=2
        end
        go

    And then we can use this simple statement to archive the data:

        insert into fooArchive..FredArchive
        select getdate(),d.*
        from (delete top (1)
              from foo..Fred
              output deleted.*) d
        go

    In this statement the delete can be any delete statement you wish, so if you are deleting by ids or a range of values then you can do that. Refer to the DELETE-TOP-x-rows-avoiding-a-table-scan post to ensure that your delete is going to perform. The last thing you want is to perform 100 deletes of 5000 records each, with each of those deletes doing a table scan. For a solution that works for SQL 2005, or if you want to archive to a different server, you can use linked servers or SSIS. This example shows how to do it with linked servers. [ONARC-LAP03] is the source server.

        begin transaction
        insert into fooArchive..FredArchive
        select getdate(),d.*
        from openquery ([ONARC-LAP03],'delete top (1)
                        from foo..Fred
                        output deleted.*') d
        commit transaction

    And to prove the transactions work, try the following; you should get the same number of records before and after.
        select (select count(1) from foo..Fred) fred
              ,(select COUNT(1) from fooArchive..FredArchive) fredarchive

        begin transaction
        insert into fooArchive..FredArchive
        select getdate(),d.*
        from openquery ([ONARC-LAP03],'delete top (1)
                        from foo..Fred
                        output deleted.*') d
        rollback transaction

        select (select count(1) from foo..Fred) fred
              ,(select COUNT(1) from fooArchive..FredArchive) fredarchive

    The transactions are very important with this solution. Look what happens when you don't have transactions and an error occurs:

        select (select count(1) from foo..Fred) fred
              ,(select COUNT(1) from fooArchive..FredArchive) fredarchive

        insert into fooArchive..FredArchive
        select getdate(),d.*
        from openquery ([ONARC-LAP03],'delete top (1)
                        from foo..Fred
                        output deleted.*
                        raiserror (''Oh doo doo'',15,15)') d

        select (select count(1) from foo..Fred) fred
              ,(select COUNT(1) from fooArchive..FredArchive) fredarchive

    Before running this, think about what the result would be. I got it wrong. What seems to happen is that the remote query is executed as a transaction, and the error causes that to roll back. However the results have already been sent to the client and so get inserted into the
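    As a rough sketch of the batching advice above (object names follow the post's example; the batch size of 5000 is arbitrary), the 2008 pattern can be wrapped in a loop so each delete stays small:

        while 1 = 1
        begin
            insert into fooArchive..FredArchive
            select getdate(), d.*
            from (delete top (5000)          -- small batches limit locking and blocking
                  from foo..Fred
                  output deleted.*) d;

            if @@ROWCOUNT = 0 break;         -- nothing left to archive
        end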

    Read the article

  • SQL Server - Schema/Code Analysis Rules - What would your rules include?

    - by Randy Minder
    We're using Visual Studio Database Edition (DBPro) to manage our schema. This is a great tool that, among the many things it can do, can analyse our schema and T-SQL code based on rules (much like what FxCop does with C# code), and flag certain things as warnings and errors. Some example rules might be that every table must have a primary key, no underscores in column names, every stored procedure must have comments, etc. The number of rules built into DBPro is fairly small, and a bit odd. Fortunately DBPro has an API that allows the developer to create their own. I'm curious as to the types of rules you and your DB team would create (both schema rules and T-SQL rules). Looking at some of your rules might help us decide what we should consider. Thanks - Randy

    Read the article

  • Error 18456. State 6 "Attempting to use an NT account name with SQL Server Authentication."

    - by Aragorn
    2010-05-06 17:21:22.30 Logon Error: 18456, Severity: 14, State: 6. 2010-05-06 17:21:22.30 Logon Login failed for user . Reason: Attempting to use an NT account name with SQL Server Authentication. [CLIENT: ] The authentication mode is "Mixed", and it's MS SQL Server 2008. What might be the issue? Do you think the user name was not configured properly? Is there any link available for granting the right privileges and configuring the user account, so that I can check the rights and privileges for the account I am using? Thanks

    Read the article

  • How do you implement caching in Linq to SQL?

    - by Glenn Slaven
    We've just started using LINQ to SQL at work for our DAL, and we haven't really come up with a standard for our caching model. Previously we had been using a base 'DAL' class that implemented a cache manager property that all our DAL classes inherited from, but now we don't have that. I'm wondering if anyone has come up with a 'standard' approach to caching LINQ to SQL results? We're working in a web environment (IIS) if that makes a difference. I know this may well end up being a subjective question, but I still think the info would be valuable. EDIT: To clarify, I'm not talking about caching an individual result; I'm after more of an architecture solution, as in how do you set up caching so that all your LINQ methods use the same caching architecture.

    Read the article

  • How do I improve the efficiency of the queries executed by this generic Linq-to-SQL data access class

    - by Lee D
    Hi all, I have a class which provides generic access to LINQ to SQL entities, for example: class LinqProvider<T> //where T is a L2S entity class { DataContext context; public virtual IEnumerable<T> GetAll() { return context.GetTable<T>(); } public virtual T Single(Func<T, bool> condition) { return context.GetTable<T>().SingleOrDefault(condition); } } From the front end, both of these methods appear to work as you would expect. However, when I run a trace in SQL profiler, the Single method is executing what amounts to a SELECT * FROM [Table], and then returning the single entity that meets the given condition. Obviously this is inefficient, and is being caused by GetTable() returning all rows. My question is, how do I get the query executed by the Single() method to take the form SELECT * FROM [Table] WHERE [condition], rather than selecting all rows then filtering out all but one? Is it possible in this context? Any help appreciated, Lee

    Read the article

  • SQL Server 2008: Table Valued Parameters

    In SQL Server 2005 and earlier, it is not possible to pass a table variable as a parameter to a stored procedure. When multiple rows of data needed to be sent to SQL Server, developers either had to send one row at a time or come up with other workarounds to meet requirements. While a VB.Net developer recently informed me that there is a SqlBulkCopy object available in .Net to send multiple rows of data to SQL Server at once, the data still cannot be passed to a stored proc. Possibly the most anticipated T-SQL feature of SQL Server 2008 is the new Table-Valued Parameters: the ability to easily pass a table to a stored procedure from T-SQL code or from an application as a parameter.
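    As a brief illustration of the feature described above (the type, procedure, table and column names here are invented for the example), a table-valued parameter is declared as a user-defined table type and passed READONLY:

        -- define a reusable table type
        CREATE TYPE dbo.IdList AS TABLE (Id INT NOT NULL PRIMARY KEY);
        GO
        -- a procedure that accepts the table type as a parameter
        CREATE PROCEDURE dbo.GetCustomersByIds
            @Ids dbo.IdList READONLY          -- TVPs must be declared READONLY
        AS
        BEGIN
            SELECT c.*
            FROM dbo.Customers AS c
            JOIN @Ids AS i ON i.Id = c.CustomerId;
        END
        GO
        -- calling it from T-SQL
        DECLARE @Ids dbo.IdList;
        INSERT INTO @Ids (Id) VALUES (1), (2), (3);
        EXEC dbo.GetCustomersByIds @Ids;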

    Read the article

  • How to issue SQL Server lock hints in WCF Entity Framework?

    - by TMN
    I'm just learning the WCF entity framework (and .net in general), and I'm running into a problem specifying lock hints in an embedded SQL query. I'm trying to specify a query that has lock hints in it (e.g., "SELECT * FROM xyz WITH(XLOCK, ROWLOCK)") and I keep getting errors from the runtime that the query syntax is not valid. The query works if I enter it directly into SQL Server. Is there some special syntax for lock hints when creating IQueryable result sets, or do I have to somehow mess with the transaction isolation level (probably not possible, as I need to specify different lock hints on a subquery within the main query).

    Read the article

  • SQL Server Migration Assistant 2008 (SSMA)

    One of my client's requirements is to migrate and consolidate his company departments' databases to SQL Server 2008. As I know the environment, they are using MySQL, MS Access and SQL Server with different applications. Now the company has decided to have a single dedicated SQL Server 2008 database server to host all the applications. So there are a few things to do to upgrade and migrate from MySQL and MS Access to SQL Server 2008. For the migration task, I found the SQL Server Migration Assistant 2008 (SSMA 2008) very useful, as it reduces the effort and risk of migration. So in this tip, I will give an overview of SSMA 2008.

    Read the article

  • Uninstalling a SQL Server Clustered Instance

    I have installed and uninstalled several instances of SQL Server in the past. Today, I need to uninstall a SQL Server 2008 R2 clustered instance. I have never uninstalled a clustered instance of SQL Server before. Can you provide a how-to guide to uninstall a clustered instance of SQL Server 2008 R2?

    Read the article

  • Is SQL Server Compact *ever* OK to use on a web server?

    - by Troy
    I'd like to host a personal database for about 500 people (1 database per person) on my web server. A person could possibly share their database with a friend or two. On my web server, I'd have some web services, perhaps for doing synchronization, and maybe even an ASP.NET Web Forms front end to the database. If I restart my IIS web server, would this likely cause database corruptions? Would this be expected to perform ok? If not Compact, should I instead use SQL Express and create a database in SQL Server for each of my 500 people?

    Read the article

  • Which Namespaces Must Be Used to Connect to SQL Server with ADO.NET?

    - by every_answer_gets_a_point
    I am using this example to connect C# to SQL Server. Can you please tell me what I have to include in order to be able to use SqlConnection? It must be something like: using Sqlconnection; ??? string connectionString = @"Data Source=.\SQLEXPRESS;AttachDbFilename=""C:\SQL Server 2000 Sample Databases\NORTHWND.MDF"";Integrated Security=True;Connect Timeout=30;User Instance=True"; SqlConnection sqlCon = new SqlConnection(connectionString); sqlCon.Open(); string commandString = "SELECT * FROM Customers"; SqlCommand sqlCmd = new SqlCommand(commandString, sqlCon); SqlDataReader dataReader = sqlCmd.ExecuteReader(); while (dataReader.Read()) { Console.WriteLine(String.Format("{0} {1}", dataReader["CompanyName"], dataReader["ContactName"])); } dataReader.Close(); sqlCon.Close();

    Read the article

  • How do I convert an arbitrary list of strings to a SQL rowset?

    - by Jim Kiley
    I have a simple list of strings, which may be of arbitrary length. I'd like to be able to use this list of strings as I would use a rowset. The app in question is running against SQL Server. To be clearer, if I do SELECT 'foo', 'bar', 'baz' I get 'foo', 'bar', and 'baz' as fields in a single row. I'd like to see each one of them as a separate row. Is there a SQL (or SQL Server-specific) function or technique that I'm missing, or am I going to have to resort to writing a function in an external scripting language?
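    For illustration, two common ways of turning literal strings into rows (a sketch only, since the right choice depends on how the list reaches the server): a table value constructor on SQL Server 2008, or UNION ALL on earlier versions.

        -- SQL Server 2008 and later: row constructor
        SELECT v.val
        FROM (VALUES ('foo'), ('bar'), ('baz')) AS v(val);

        -- SQL Server 2005 and earlier: UNION ALL gives the same shape
        SELECT 'foo' AS val
        UNION ALL SELECT 'bar'
        UNION ALL SELECT 'baz';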

    Read the article

  • SQL Server (TSQL) - Is it possible to EXEC statements in parallel?

    - by Investor5555
    SQL Server 2008 R2. Here is a simplified example: EXECUTE sp_executesql N'PRINT ''1st '' + convert(varchar, getdate(), 126) WAITFOR DELAY ''000:00:10''' EXECUTE sp_executesql N'PRINT ''2nd '' + convert(varchar, getdate(), 126)' The first statement will print the date and delay 10 seconds before proceeding. The second statement should print immediately. The way T-SQL works, the 2nd statement won't be evaluated until the first completes. If I copy and paste it into a new query window, it will execute immediately. The issue is that I have other, more complex things going on, with variables that need to be passed to both procedures. What I am trying to do is: get a record, lock it for a period of time, and, while it is locked, execute some other statements against this record and the table itself. Perhaps there is a way to dynamically create a couple of jobs? Anyway, I am looking for a simple way to do this without having to manually PRINT statements and copy/paste to another session. Is there a way to EXEC without wait / in parallel?
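    One rough sketch along the lines the question suggests (a dynamically created SQL Server Agent job runs its command on its own connection, so it overlaps with the calling batch); the job and database names are placeholders, and the job would normally be dropped again afterwards:

        EXEC msdb.dbo.sp_add_job       @job_name = N'ParallelWork';
        EXEC msdb.dbo.sp_add_jobstep   @job_name = N'ParallelWork',
                                       @step_name = N'run',
                                       @subsystem = N'TSQL',
                                       @database_name = N'MyDatabase',
                                       @command = N'PRINT ''1st '' + convert(varchar, getdate(), 126); WAITFOR DELAY ''000:00:10'';';
        EXEC msdb.dbo.sp_add_jobserver @job_name = N'ParallelWork', @server_name = N'(local)';

        EXEC msdb.dbo.sp_start_job     @job_name = N'ParallelWork';   -- returns immediately
        EXECUTE sp_executesql N'PRINT ''2nd '' + convert(varchar, getdate(), 126)';  -- runs while the job is still waiting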

    Read the article

  • SQL Server 2008: The columns in table do not match an existing primary key or unique constraint

    - by 109221793
    Hi guys, I need to make some changes to a SQL Server 2008 database. This requires the creation of a new table, and inserting a foreign key in the new table that references the primary key of an already existing table. So I want to set up a relationship between my new tblTwo, which references the primary key of tblOne. However, when I tried to do this (through SQL Server Management Studio) I got the following error: "The columns in table 'tblOne' do not match an existing primary key or UNIQUE constraint". I'm not really sure what this means, and I was wondering if there was any way around it?
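    For reference, this error usually appears when the referenced column(s) in tblOne are not exactly the column(s) covered by its primary key or a unique constraint (wrong column, different data type, or only part of a composite key). A minimal sketch, with invented column names, of a foreign key that does line up:

        CREATE TABLE dbo.tblOne (
            OneId INT NOT NULL CONSTRAINT PK_tblOne PRIMARY KEY,
            Name  NVARCHAR(50) NOT NULL
        );

        CREATE TABLE dbo.tblTwo (
            TwoId INT NOT NULL CONSTRAINT PK_tblTwo PRIMARY KEY,
            OneId INT NOT NULL,   -- must match the data type of tblOne.OneId
            CONSTRAINT FK_tblTwo_tblOne
                FOREIGN KEY (OneId) REFERENCES dbo.tblOne (OneId)
        );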

    Read the article

  • Is it possible to deny access to SQL Server from specific programs?

    - by Paul McLoughlin
    Currently one of our databases (SQL Server 2008) is accessed via a number of different mechanisms: .Net SqlClient Data Provider; SQL Server Management Studio; various .Net applications and 2007 Microsoft Office system (basically Excel). I see that in the sys.dm_exec_sessions DMV it is possible to see the program name of the program accessing the database for the current sessions. My question is: is it possible for one to deny access from a particular named program? First prize would be if this could be done for any named program, but we would gain a great deal of benefit from being able to deny access to this specific database from all Microsoft Office applications (especially Excel).
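    One approach sometimes used for this (a sketch, not a complete answer: it blocks the connection to the whole instance rather than a single database, and the reported application name is supplied by the client, so it can be spoofed) is a server-level logon trigger keyed on APP_NAME():

        CREATE TRIGGER trg_BlockOfficeClients
        ON ALL SERVER
        FOR LOGON
        AS
        BEGIN
            -- the pattern matches the program name Office reports in sys.dm_exec_sessions
            IF APP_NAME() LIKE '%Microsoft Office%'
                ROLLBACK;   -- refuses the login
        END;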

    Read the article

  • SQL Server, fetching data from multiple joined tables. Why is slow?

    - by user562192
    I have a problem with performance when retrieving data from SQL Server. My SQL query looks something like this: SELECT table_1.id, table_1.value, table_2.id, table_2.value,..., table_20.id, table_20.value FROM table_1 INNER JOIN table_2 ON table_1.id = table_2.table_1_id INNER JOIN table_3 ON table_2.id = table_3.table_2_id... WHERE table_1.row_number BETWEEN 1 AND 20 So, I am fetching 20 results. This query takes about 5 seconds to execute. When I select only table_1.id, it returns results instantly. Because of that, I guess the problem is not in the JOINs; it is in retrieving data from multiple tables. Any suggestions on how I could speed up this query?
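    As a tentative illustration of one common cause (using the question's placeholder table and column names): if the joined tables have no index on their join keys the joins have to scan, and if the indexes don't cover the selected value columns every row costs an extra lookup. Supporting indexes such as the following are often the first thing to try:

        CREATE NONCLUSTERED INDEX IX_table_2_table_1_id
            ON table_2 (table_1_id) INCLUDE (value);

        CREATE NONCLUSTERED INDEX IX_table_3_table_2_id
            ON table_3 (table_2_id) INCLUDE (value);

        -- and an index supporting the paging predicate on the driving table
        CREATE NONCLUSTERED INDEX IX_table_1_row_number
            ON table_1 (row_number) INCLUDE (value);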

    Read the article

  • SQL JOIN with two or more tables as output - most efficient way?

    - by littlegreen
    I have an SQL query that executes a LEFT JOIN on another table, then outputs all results that could be coupled into a designated table. I then have a second SQL query that executes the LEFT JOIN again, then outputs the results that could not be coupled to a designated table. In code, this is something like: INSERT INTO coupledrecords SELECT b.col1, b.col2... s.col1, s.col2... FROM bigtable AS b LEFT JOIN smallertable AS s ON criterium WHERE s.col1 IS NOT NULL INSERT INTO notcoupledrecords SELECT b.col1, b.col2... FROM bigtable AS b LEFT JOIN smallertable AS s ON criterium WHERE s.col1 IS NULL My question: I now have to execute the JOIN two times, in order to achieve what I want. I have a feeling that this is twice as slow as it could be. Is this true, and if yes, is there a way to do it more efficiently?
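    One possible way to avoid repeating the join (sketched with the question's placeholder names; 'criterium' stands in for the real join condition) is to materialize the joined set once into a temporary table and then split it:

        SELECT b.col1, b.col2, s.col1 AS s_col1, s.col2 AS s_col2
        INTO #joined
        FROM bigtable AS b
        LEFT JOIN smallertable AS s ON criterium;

        INSERT INTO coupledrecords
        SELECT col1, col2, s_col1, s_col2 FROM #joined WHERE s_col1 IS NOT NULL;

        INSERT INTO notcoupledrecords
        SELECT col1, col2 FROM #joined WHERE s_col1 IS NULL;

        DROP TABLE #joined;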

    Read the article
