Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.


  • SQL Server Management Studio not scripting all objects

    - by Ian Boyd
    I've been attempting to script a database using SQL Server 2005 Management Studio, but I cannot get it to script some objects. It scripts others, but skips some. I can provide detailed screenshots of:
      - the options being selected, including all tables
      - the folder where the script files will go
      - the folder being empty before scripting
      - the scripting process saying "Success" when scripting a table
      - the destination folder no longer empty, with a hundred or so script files
      - the scripts of some tables not being in the folder
    Earlier, SSMS would also not script some views. Is it a known issue that the Generate Scripts task does not generate scripts?
    Update: this is a known issue on Microsoft Connect, but Microsoft couldn't repro the steps, so they closed the ticket. It fails on SQL Server 2005 and also fails on SQL Server 2008.
    Update Two - some basic questions:
      1. What version of SQL Server?
         Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
         Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86)
         Microsoft SQL Server 2008 - 10.0.2531.0 (Intel X86)
         Microsoft SQL Server 2005 Management Studio: 9.00.4035.00
         Microsoft SQL Server 2008 Management Studio: 10.0.1600.22
      2. What O/S are you running on?
         Windows Server 2000, Windows Server 2003, Windows Server 2008
      3. How are you logging in to SQL Server?
         sa/password and trusted authentication
      4. Have you verified your account has full access to all objects?
         Yes, I have access to all objects.
      5. Can you use the objects that fail to script? (e.g. select top(10) * from nonScriptingTable)
         Yes, all objects work fine. SQL Server Enterprise Manager can script the objects fine.
    Update Three: they fail no matter what version of SQL Server you script against. It wasn't a problem in Enterprise Manager:
      Client Tools   SQL Server 2000   SQL Server 2005   SQL Server 2008
      ============   ===============   ===============   ===============
      2000           Yes               n/a               n/a
      2005           No                No                No
      2008           No                No                No
    Update Four: no errors were found in the database using:
      DBCC CHECKDB
      go
      DBCC CHECKCONSTRAINTS
      go
      DBCC CHECKFILEGROUP
      go
      DBCC CHECKIDENT
      go
      DBCC CHECKCATALOG
      go
      EXECUTE sp_msforeachtable 'DBCC CHECKTABLE (''?'')'
    Honk if you hate SSMS.

    Read the article

  • Alter Stored Procedure in SQL Replication

    - by Refracted Paladin
    How do I properly ALTER a stored procedure in a SQL 2005 merge replication? I just need to add a column. I have already added it successfully to the table, and now I need to add it to a stored procedure. I did so, but now it will not synchronize, and fails with the following error: Insert Error: Column name or number of supplied values does not match table definition. (Source: MSSQLServer, Error number: 213)
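
    Error 213 usually means the procedure's INSERT has no explicit column list, so the VALUES count no longer matches the table once the new column exists. Below is a hedged sketch of the shape the altered procedure generally needs (all object names are hypothetical); if the procedure is itself a published article and the publication was created with @replicate_ddl = 1, the ALTER should propagate to subscribers, otherwise it has to be applied there as well.

      ALTER PROCEDURE dbo.usp_InsertWidget
          @Name      varchar(50),
          @Price     money,
          @NewColumn int          -- the column that was just added to the table
      AS
      BEGIN
          SET NOCOUNT ON;
          -- Naming the columns explicitly keeps the statement valid when the
          -- table gains columns, instead of relying on column order.
          INSERT INTO dbo.Widget (Name, Price, NewColumn)
          VALUES (@Name, @Price, @NewColumn);
      END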

    Read the article

  • Delete data from a SQL Server database on a full partition

    - by aleroot
    I have a SQL Server 2005 database on a dedicated partition. Over time the database has grown, and it now occupies all the space on the partition. The problem is that the only operation I can still perform on the database is detach, but I want to remove old data from some tables to save space. How can I remove old data from the database if the SQL Server interface won't let me run queries against it?
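
    If the instance still accepts connections from sqlcmd or from SSMS on another machine, a batched delete keeps each transaction's log usage small, and a shrink afterwards hands the freed pages back to the operating system. A hedged sketch; the table, column, and logical file names are hypothetical:

      DECLARE @rows int;
      SET @rows = 1;

      WHILE @rows > 0
      BEGIN
          -- Small batches keep each transaction's log footprint low.
          DELETE TOP (5000) FROM dbo.AuditLog
          WHERE  LoggedAt < DATEADD(month, -12, GETDATE());

          SET @rows = @@ROWCOUNT;
      END

      -- Deleting rows frees pages inside the file but does not return disk
      -- space to the OS; shrink the data file afterwards (target size in MB).
      DBCC SHRINKFILE (MyDatabase_Data, 1024);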

    Read the article

  • Transaction Log filling up on SQL database set to Simple

    - by Will
    We have a database on a SQL 2005 server that is set to the Simple recovery model. The log is set to 1 MB and grows by 10% when it needs to. We keep running into an issue where the transaction log fills up and we need to shrink it. What could cause the transaction log to fill up when the database is set to Simple and unrestricted growth is allowed?
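
    Under the Simple recovery model the log is only truncated at checkpoints, and anything that keeps log records active (a long-running open transaction, replication, mirroring, an in-flight backup) will keep the log growing regardless of recovery model. A quick hedged check; the database name is hypothetical:

      -- What is preventing log truncation? (ACTIVE_TRANSACTION, REPLICATION, etc.)
      SELECT name, recovery_model_desc, log_reuse_wait_desc
      FROM   sys.databases
      WHERE  name = 'MyDatabase';

      DBCC SQLPERF (LOGSPACE);      -- log size and percent used, per database

      DBCC OPENTRAN ('MyDatabase'); -- oldest active transaction, if any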

    Read the article

  • How can I get Virtual Server 2005 R2 running on Windows Server 2008 R2?

    - by Bret Fisher
    For various reasons (old VT-less hardware, and .vhd support) we need to still run Virtual Server 2005 R2. It's just for lab/demo work but we'd like to run the host on the newest Windows OS possible. It's documented and at least partially supported to run the old Virtual Server 2005 R2 SP1 on Windows Server 2008 (non-R2). I've done that before. I'm wondering if anyone has gotten the scenario in the title above to work. This post says it's possible but has anyone here actually done it before I go through that process: http://blogs.infosupport.com/blogs/ericd/archive/2009/08/31/running-virtual-server-2005-r2-sp1-on-windows-server-2008-r2.aspx

    Read the article

  • to_date in MS SQL Server 2005

    - by Chin
    Does anyone know how I would have to change the following to work with MS SQL?
      WHERE registrationDate between to_date ('2003/01/01', 'yyyy/mm/dd') AND to_date ('2003/12/31', 'yyyy/mm/dd');
    What I have read implies I would have to construct it using DATEPART(), which could become very long-winded, especially when the goal is to compare against dates that I receive in the format "2003-12-30 10:07:42". It would be nice to pass them off to the database as-is. Any pointers appreciated.
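
    For what it's worth, a few hedged equivalents (the dbo.Registrations table is hypothetical): unseparated 'yyyymmdd' literals are read the same way under every language setting, and CONVERT with an explicit style number is the closest analogue of Oracle's TO_DATE.

      SELECT COUNT(*) AS NumRecs
      FROM   dbo.Registrations
      WHERE  registrationDate BETWEEN '20030101' AND '20031231';

      -- style 111 = yyyy/mm/dd, style 120 = yyyy-mm-dd hh:mi:ss (ODBC canonical)
      SELECT COUNT(*) AS NumRecs
      FROM   dbo.Registrations
      WHERE  registrationDate BETWEEN CONVERT(datetime, '2003/01/01', 111)
                                  AND CONVERT(datetime, '2003/12/31', 111);

      SELECT CONVERT(datetime, '2003-12-30 10:07:42', 120);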

    Read the article

  • SQL Server union selects built dynamically from list of words

    - by Adam Tuttle
    I need to count occurrence of a list of words across all records in a given table. If I only had 1 word, I could do this: select count(id) as NumRecs where essay like '%word%' But my list could be hundreds or thousands of words, and I don't want to create hundreds or thousands of sql requests serially; that seems silly. I had a thought that I might be able to create a stored procedure that would accept a comma-delimited list of words, and for each word, it would run the above query, and then union them all together, and return one huge dataset. (Sounds reasonable, right? But I'm not sure where to start with that approach...) Short of some weird thing with union, I might try to do something with a temp table -- inserting a row for each word and record count, and then returning select * from that temp table. If it's possible with a union, how? And does one approach have advantages (performance or otherwise) over the other?
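
    A hedged sketch of the temp-table idea: once the comma-delimited list has been split into a temp table (a numbers-table or XML split can handle that part), a single LEFT JOIN with GROUP BY returns one row per word, no UNION needed. The dbo.Essays table name is hypothetical, since the original query doesn't name the table.

      CREATE TABLE #Words (Word varchar(100) NOT NULL);
      INSERT INTO #Words (Word) VALUES ('apple');
      INSERT INTO #Words (Word) VALUES ('banana');
      INSERT INTO #Words (Word) VALUES ('cherry');

      -- One pass per word still happens under the covers, but it is a single
      -- statement returning a single result set instead of hundreds of UNIONs.
      SELECT w.Word,
             COUNT(e.id) AS NumRecs
      FROM   #Words w
             LEFT JOIN dbo.Essays e             -- hypothetical table name
               ON e.essay LIKE '%' + w.Word + '%'
      GROUP BY w.Word;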

    Read the article

  • Storing SQL queries in a table in SQL Server

    - by Rohit
    We have multiple jobs in our system, and these jobs are listed in a grid. We have three different user types (UserTypeID 1, 2, 3). The listing is different for each user type, and a user can filter the listing by selecting a view from a dropdown; ViewName in the table below is the view that needs to be displayed. To achieve this functionality, a fellow developer created the following table structure and stored SQL fragments in the SQLExpression column. In my opinion the query should not be stored in the database. What are the pros and cons of this approach, and what alternatives are available?
      JobListingViewID   ViewName     SQLExpression                UserTypeID
      3                  All Jobs     1 = 1                        3
      4                  Error Jobs   JobStatusID IN ( 2 )         1
      5                  Error Jobs   JobStatusID IN ( 2 )         2
      6                  Error Jobs   JobStatusID IN ( 2 )         3
      7                  Speech       JobStatusID IN ( 1, 3, 8 )   1
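
    For context, a hedged sketch (object names hypothetical) of how such a fragment typically gets consumed, which is also the main argument against the design: the text ends up spliced into dynamic SQL, so anything that can write to this table can run arbitrary SQL under the caller's permissions. A common alternative is to store data (for example the JobStatusID values in a child table) and build the WHERE clause from that data rather than from raw text.

      DECLARE @filter nvarchar(500),
              @sql    nvarchar(max);

      SELECT @filter = SQLExpression
      FROM   dbo.JobListingView                 -- hypothetical table name
      WHERE  JobListingViewID = 4;

      SET @sql = N'SELECT * FROM dbo.Jobs WHERE ' + @filter;
      EXEC sp_executesql @sql;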

    Read the article

  • SSRS 2005 concat values in dropdown parameter

    - by Mac
    Hi All, I have a dataset which I want to use as a parameter for my chart in SSRS. The puzzle that I am trying to solve is as below. My DataSet has 4 columns and in the Dropdown parameter I can only specify label and value. I am not able to specify two columns as Label and two columns as value. Does anyone know how to concat the values in the dropdown? I don't want to concat these values in my sql query generating the dataset.

    Read the article

  • Disable Primary Key and Re-Enable After SQL Bulk Insert

    - by Jon
    I am about to run a massive data insert into my DB. I have managed to work out how to disable and rebuild non-clustered indexes on my tables, but I also want to disable/enable primary keys. You can't disable the clustered index for the primary key, as the table is inaccessible when that is done, and my attempt to use ALTER TABLE for constraints did not work, as I think that is only for foreign keys. Do you know of a way to disable the primary key and re-enable it after a SQL bulk insert? NOTE: This is over numerous tables, so I don't know the exact primary key specifications (e.g. name).
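
    A hedged sketch of one route, with hypothetical names: rather than trying to switch the PK index off, drop the constraint and re-add it after the load (any foreign keys referencing it would have to be dropped first). The catalog query shows where to find the constraint names and key columns, since the note above says they aren't known up front.

      -- Where are the primary keys, and which columns do they cover?
      SELECT kc.name AS pk_name,
             t.name  AS table_name,
             c.name  AS column_name,
             ic.key_ordinal
      FROM   sys.key_constraints kc
             JOIN sys.tables t         ON t.object_id  = kc.parent_object_id
             JOIN sys.index_columns ic ON ic.object_id = kc.parent_object_id
                                      AND ic.index_id  = kc.unique_index_id
             JOIN sys.columns c        ON c.object_id  = ic.object_id
                                      AND c.column_id  = ic.column_id
      WHERE  kc.type = 'PK'
      ORDER BY t.name, ic.key_ordinal;

      -- For one table: drop the constraint, load, then re-add it.
      ALTER TABLE dbo.MyBigTable DROP CONSTRAINT PK_MyBigTable;

      -- ... bulk insert here ...

      ALTER TABLE dbo.MyBigTable
          ADD CONSTRAINT PK_MyBigTable PRIMARY KEY NONCLUSTERED (MyBigTableID);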

    Read the article

  • Linq to SQL Stored Procedure Problem (it can't figure out the return type)

    - by chobo2
    Hi, I have this SP:

      USE [Test]
      GO
      SET ANSI_NULLS ON
      GO
      SET QUOTED_IDENTIFIER ON
      GO
      CREATE PROCEDURE [dbo].[UsersInsert](@UpdatedProdData XML)
      AS
      INSERT INTO dbo.UserTable(UserId, UserName, LicenseId, Password, PasswordSalt, Email, IsApproved, IsLockedOut,
                                CreateDate, LastLoginDate, LastLockOutDate, FailedPasswordAttempts, RoleId)
      SELECT @UpdatedProdData.value('(/ArrayOfUsers/Users/UserId)[1]', 'uniqueidentifier'),
             @UpdatedProdData.value('(/ArrayOfUsers/Users/UserName)[1]', 'varchar(20)'),
             @UpdatedProdData.value('(/ArrayOfUsers/Users/LicenseId)[1]', 'varchar(50)'),
             @UpdatedProdData.value('(/ArrayOfUsers/Users/Password)[1]', 'varchar(128)'),
             @UpdatedProdData.value('(/ArrayOfUsers/Users/PasswordSalt)[1]', 'varchar(128)'),
             @UpdatedProdData.value('(/ArrayOfUsers/Users/Email)[1]', 'varchar(50)'),
             @UpdatedProdData.value('(/ArrayOfUsers/Users/IsApproved)[1]', 'bit'),
             @UpdatedProdData.value('(/ArrayOfUsers/Users/IsLockedOut)[1]', 'bit'),
             @UpdatedProdData.value('(/ArrayOfUsers/Users/CreateDate)[1]', 'datetime'),
             @UpdatedProdData.value('(/ArrayOfUsers/Users/LastLoginDate)[1]', 'datetime'),
             @UpdatedProdData.value('(/ArrayOfUsers/Users/LastLockOutDate)[1]', 'datetime'),
             @UpdatedProdData.value('(/ArrayOfUsers/Users/FailedPasswordAttempts)[1]', 'int'),
             @UpdatedProdData.value('(/ArrayOfUsers/Users/RoleId)[1]', 'int')

    The SP creates just fine. The problem is when I go to VS2010 and try to drag this SP into the method pane of my LINQ to SQL file in design view: it tells me that it can't figure out the return type. I go to the properties, but "none" is not offered as a choice and I can't type it in. It should be "none", so how do I set it to "none"?

    Read the article

  • SQL Server Express 2005 Merge Replication using RMO causes Null Reference exception

    - by Craig Shearer
    I'm trying to use RMO to programmatically perform merge synchronization. I've basically copied the SQL Server example code, as follows:

      // Create a connection to the Subscriber.
      ServerConnection conn = new ServerConnection(subscriberName);
      MergePullSubscription subscription;

      try
      {
          // Connect to the Subscriber.
          conn.Connect();

          // Define the pull subscription.
          subscription = new MergePullSubscription(subscriptionDbName, publisherName,
              publicationDbName, publicationName, conn, false);

          // If the pull subscription exists, then start the synchronization.
          if (subscription.LoadProperties())
          {
              // Check that we have enough metadata to start the agent.
              if (subscription.PublisherSecurity != null || subscription.DistributorSecurity != null)
              {
                  subscription.SynchronizationAgent.Synchronize();
              }
              else
              {
                  throw new ApplicationException("There is insufficent metadata to " +
                      "synchronize the subscription. Recreate the subscription with " +
                      "the agent job or supply the required agent properties at run time.");
              }
          }
          else
          {
              // Do something here if the pull subscription does not exist.
              throw new ApplicationException(String.Format(
                  "A subscription to '{0}' does not exist on {1}",
                  publicationName, subscriberName));
          }
      }
      catch (Exception ex)
      {
          // Implement appropriate error handling here.
          throw new ApplicationException("The subscription could not be " +
              "synchronized. Verify that the subscription has " +
              "been defined correctly.", ex);
      }
      finally
      {
          conn.Disconnect();
      }

    I've got the server merge publication defined correctly, but when I run the above code, I get a null reference exception on the call to:
      subscription.SynchronizationAgent.Synchronize();
    The stack trace is as follows:
      at Microsoft.SqlServer.Replication.MergeSynchronizationAgent.StatusEventSinkMethod(String message, Int32 percent, Int32* returnValue)
      at Test.ConsoleTest.Program.SynchronizePullSubscription() in F:\Visual Studio Projects\Test\source\Test.ConsoleTest\Program.cs:line 124
    It seems, from the stack trace, like something to do with the Status event, but I don't have a handler defined, and defining one makes no difference.

    Read the article

  • Select the latest record for each category available on an object

    - by Simpleton
    I have a tblMachineReports table with the columns Status (varchar), LogDate (datetime), Category (varchar), and MachineID (int). I want to retrieve the latest status update from each category for every machine, in effect getting a snapshot of the latest statuses of all the machines, unique to their MachineID. The table data would look like:
      Category - Status  - MachineID - LogDate
      cata     - status1 - 001       - date1
      cata     - status2 - 002       - date2
      catb     - status3 - 001       - date2
      catc     - status2 - 002       - date4
      cata     - status3 - 001       - date5
      catc     - status1 - 001       - date6
      catb     - status2 - 001       - date7
      cata     - status2 - 002       - date8
      catb     - status2 - 002       - date9
      catc     - status2 - 001       - date10
    Restated, I have multiple machines reporting on multiple statuses in this tblMachineReports. All the rows are created through inserts, so there will obviously be duplicate entries for machines as new statuses come in. None of the column values can be predicted, so I can't do any = 'some hard coded string' comparisons in any part of the select statement. For the sample table I provided, the desired results would look like:
      Category - Status  - MachineID - LogDate
      catc     - status2 - 002       - date4
      cata     - status3 - 001       - date5
      catb     - status2 - 001       - date7
      cata     - status2 - 002       - date8
      catb     - status2 - 002       - date9
      catc     - status2 - 001       - date10
    What would the select statement look like to achieve this, getting the latest status for each category on each machine, using MS SQL Server 2008? I have tried different combinations of subqueries combined with aggregate MAX(LogDate), along with joins, group bys, distincts, and what-not, but have yet to find a working solution.
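
    A common pattern for this on SQL Server 2008 is ROW_NUMBER() partitioned by the grouping columns, keeping only the newest row of each group. A sketch against the table described above:

      WITH Ranked AS
      (
          SELECT Category, Status, MachineID, LogDate,
                 ROW_NUMBER() OVER (PARTITION BY Category, MachineID
                                    ORDER BY LogDate DESC) AS rn
          FROM   tblMachineReports
      )
      SELECT Category, Status, MachineID, LogDate
      FROM   Ranked
      WHERE  rn = 1
      ORDER BY LogDate;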

    Read the article

  • Quick Outline: Navigating Your PL/SQL Packages in Oracle SQL Developer

    - by thatjeffsmith
    If you’re browsing your packages using the Connections panel, you have a nice tree navigator to click around your packages and your variable, procedure, and functions. Click, click, click all day long, click, click, click while I sing this song… But What if you drill into your PL/SQL source from the worksheet and don’t have the Tree expanded? Let’s say you’re working on your script, something like - Hmm, what goes next again? So I need to reacquaint myself with just what my beer package requires, so I’m going to drill into it by doing a DESCRIBE (via SHIFT+F4), and now I have the package open. The package is open but the tree hasn’t auto-expanded. Please don’t tell me I have to do the click-click-click thing in the tree!?! Just Open the Quick Outline Panel Do you see it? Just right click in the procedure editor – select the ‘Quick Outline’ in the context menu, and voila! The navigational power of the tree, without needing to drill down the tree itself. If I want to drill into my procedure declaration, just click on said procedure name in the Quick Outline panel. This works for both package specs and bodies. Technically you can use this for stand alone procedures and functions, but the real power is demonstrated for packages.

    Read the article

  • Searching Your PL/SQL Source with Oracle SQL Developer

    - by thatjeffsmith
    Version 3.2.1 included a few tweaks along with several hundred bug fixes. One of those tweaks was the addition of ‘ALL_SOURCE’ as a selection for the Type drop down in the Find Database Object panel. Scroll ALL the way down to the bottom Searching the database for your code or objects can be expensive. The ALL_SOURCE view comes in pretty handy when I want to demo how to cancel long running queries or the Task Progress panel – did you know you can manage all of your long running queries here? Yeah, don’t run this I pretty much hosed our demo pod at Open World b/c I ran that same query but added an ORDER BY b.TEXT DESC to the query and blew up the TEMP space and filled the primary partition on the image. Fun stuff. Anyways, where was I going with this? Oh yeah, searching ALL_SOURCE can be expensive. So we took it out of the product for awhile. And now it’s back in. If you select the ‘ALL’ field, it doesn’t actually search EVERYTHING, because that would probably be less than helpful. So if you want to search your PL/SQL objects for a scrap or bit of code, use the ‘ALL_SOURCE’ option in v3.2.1 Double-Click on the search results to go to the code you’re looking for. Be careful what you search for. Just like any query, it could take awhile.
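
    For comparison, a rough sketch of the kind of query an ALL_SOURCE search boils down to (not the tool's actual SQL; the search string is hypothetical):

      SELECT owner, name, type, line, text
      FROM   all_source
      WHERE  UPPER(text) LIKE '%CALCULATE_DISCOUNT%'
      ORDER BY owner, name, type, line;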

    Read the article

  • SQL Server - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that CONTAINS and FREETEXT search for words (and, at least in the case of CONTAINS, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants are capable of searching substrings. I have used LIKE rather extensively (select * from A where A.B like '%substr%').
    Sample table A:
      ID | Col1     | Col2     | Col3     |
      -------------------------------------
      1  | oklahoma | colorado | Utah     |
      2  | arkansas | colorado | oklahoma |
      3  | florida  | michigan | florida  |
      -------------------------------------
    The following code will give us row 1 and row 2:
      select * from A
      where Col1 like '%klah%'
         or Col2 like '%klah%'
         or Col3 like '%klah%'
    This is rather ugly, probably slow, and I just don't like it very much, probably because the implementations that I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but performance-wise we're still in the same ballpark:
      select * from A
      where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'
    I have thought about simply adding insert, update, and delete triggers that add the concatenated version of the above columns into a separate table that shadows this table.
    Sample Shadow_Table:
      ID | searchtext                  |
      ---------------------------------
      1  | oklahoma colorado Utah      |
      2  | arkansas colorado oklahoma  |
      3  | florida michigan florida    |
      ---------------------------------
    This would allow us to perform the following query to search for '%klah%':
      select * from Shadow_Table
      where searchtext like '%klah%'
    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when I am performing multi-column substring matching, but it probably yields pretty quick reads at the expense of write time and storage space. My gut feeling tells me there is an existing solution built into SQL Server 2008. However, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.
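
    One hedged variation on the shadow-table idea that avoids the triggers: a persisted computed column keeps the concatenation in the same table, so there is nothing extra to remember, though a leading-wildcard LIKE still scans either way.

      ALTER TABLE dbo.A
          ADD SearchText AS (ISNULL(Col1, '') + ' ' + ISNULL(Col2, '') + ' ' + ISNULL(Col3, '')) PERSISTED;

      SELECT *
      FROM   dbo.A
      WHERE  SearchText LIKE '%klah%';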

    Read the article

  • CONTAINSTABLE with wildcard works differently in SQL Server 2005 and SQL Server 2008?

    - by musuk
    I have two identical databases, one on SQL Server 2005 and one on SQL Server 2008. Both have the same SQL_Latin1_General_CP1_CI_AS collation, and the full-text search catalogs have the same settings. The two databases contain a table with the same data, an NTEXT string: "...kræve en forklaring fra miljøminister Connie Hedegaard..". My problem is: CONTAINSTABLE on SQL Server 2008 finds nothing if the query is
      select * from ContainsTable(SearchIndex_7, Content, '"miljø*"') ct
    but SQL Server 2005 works perfectly and finds the necessary record. SQL Server 2008 finds the necessary record if the query is
      select * from ContainsTable(SearchIndex_7, Content, '"milj*"') ct
    or
      select * from ContainsTable(SearchIndex_7, Content, '"miljøminister"')
    What could be the reason for such strange behavior?
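
    SQL Server 2008 ships newer word breakers than 2005, so the same Danish text can be tokenized differently by the two versions. On the 2008 instance, sys.dm_fts_parser (new in 2008) shows what the prefix term is actually broken into; a hedged sketch, assuming the indexed column's language is Danish (LCID 1030), no stoplist, accent-sensitive:

      SELECT *
      FROM   sys.dm_fts_parser(N'"miljø*"', 1030, NULL, 0);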

    Read the article

  • Moving from a non-clustered PK to a clustered PK in SQL 2005

    - by adaptr
    Hi all, I recently asked this question in another thread and thought I would reproduce it here with my solution: what if I have an auto-increment INT as my non-clustered primary key, and there are about 15 foreign keys defined against it? (snide comment about the original designer being braindead in the original :) ) This is a 15M-row table on a live database, SQL Standard, so dropping indexes is out of the question. Even temporarily dropping the foreign key constraints will be difficult. I'm curious if anybody has a solution that causes minimal downtime. I tested this in our testing environment and found that the downtime wasn't as severe as I had originally feared. I ended up writing a script that drops all FK constraints, then drops the non-clustered key, re-creates the PK as a clustered index, and finally re-creates all FKs WITH NOCHECK to avoid trawling through all FKs to check constraint compliance. Then I just enable the CHECK constraints to enable constraint checking from that point onwards, and all is dandy :) The most important thing to realize is that during the time the FKs are absent, there MUST NOT be any INSERTs or DELETEs on the parent table, as this may break the constraints and cause issues in the future. The total time taken for clustering a 15M row, 800MB index was ~4 minutes :)
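
    For anyone following the same route, a hedged sketch of the described sequence for a single child table (names are hypothetical; the real script loops over all fifteen FKs):

      ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;

      ALTER TABLE dbo.Parent DROP CONSTRAINT PK_Parent;

      ALTER TABLE dbo.Parent
          ADD CONSTRAINT PK_Parent PRIMARY KEY CLUSTERED (ParentID);

      -- Recreate the FK without checking existing rows (fast), then switch
      -- checking back on for future DML (the constraint stays untrusted).
      ALTER TABLE dbo.Child WITH NOCHECK
          ADD CONSTRAINT FK_Child_Parent FOREIGN KEY (ParentID)
              REFERENCES dbo.Parent (ParentID);

      ALTER TABLE dbo.Child CHECK CONSTRAINT FK_Child_Parent;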

    Read the article

  • Optimizing T-SQL where an array would be nice

    - by Polatrite
    Alright, first you'll need to grab a barf bag. I've been tasked with optimizing several old stored procedures in our database. This SP does the following:
      1) cursor loops through a series of "buildings"
      2) cursor loops through a week, Sunday-Saturday
      3) has a huge set of IF blocks that are responsible for counting how many Objects of what Types are present in a given building
    Essentially what you'll see in this code block is that, if there are 5 objects of type #2, it will increment @Type_2_Objects_5 by 1.
      IF @Number_Type_1_Objects = 0 BEGIN SET @Type_1_Objects_0 = @Type_1_Objects_0 + 1 END
      IF @Number_Type_1_Objects = 1 BEGIN SET @Type_1_Objects_1 = @Type_1_Objects_1 + 1 END
      IF @Number_Type_1_Objects = 2 BEGIN SET @Type_1_Objects_2 = @Type_1_Objects_2 + 1 END
      IF @Number_Type_1_Objects = 3 BEGIN SET @Type_1_Objects_3 = @Type_1_Objects_3 + 1 END
      [... Objects_4 through Objects_20 for Type_1]
      IF @Number_Type_2_Objects = 0 BEGIN SET @Type_2_Objects_0 = @Type_2_Objects_0 + 1 END
      IF @Number_Type_2_Objects = 1 BEGIN SET @Type_2_Objects_1 = @Type_2_Objects_1 + 1 END
      IF @Number_Type_2_Objects = 2 BEGIN SET @Type_2_Objects_2 = @Type_2_Objects_2 + 1 END
      IF @Number_Type_2_Objects = 3 BEGIN SET @Type_2_Objects_3 = @Type_2_Objects_3 + 1 END
      [... Objects_4 through Objects_20 for Type_2]
    In addition to being extremely hacky (and limited to a quantity of 20 objects), it seems like a terrible way of handling this. In a traditional language, this could easily be solved with a 2-dimensional array... objects[type][quantity] += 1;
    I'm a T-SQL novice, but since writing stored procedures often uses a lot of temporary tables (which could essentially be a 2-dimensional array), I was wondering if someone could illuminate a better way of handling a situation like this with two dynamic pieces of data to store.
    Requested in comments: The columns are simply Number_Type_1_Objects, Number_Type_2_Objects, Number_Type_3_Objects, Number_Type_4_Objects, Number_Type_5_Objects, and CurrentDateTime. Each row in the table represents 5 minutes. The expected output is to figure out what percentage of time a given quantity of objects is present throughout each day.
      Sunday - Object Type 1
      0 objects - 69 rows, 5:45, 34.85%
      1 object  - 85 rows, 7:05, 42.93%
      2 objects - 44 rows, 3:40, 22.22%
    On Sunday, there were 0 objects of type 1 for 34.85% of the day. There was 1 object for 42.93% of the day, and 2 objects for 22.22% of the day. Repeat for each object type.
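
    As a hedged sketch of the set-based direction (the table name is hypothetical, the per-building split is left out since that column isn't listed above, and the five count columns are assumed to share one integer type): UNPIVOT turns the Number_Type_N_Objects columns into (ObjectType, Quantity) rows, and a GROUP BY does the counting that the IF blocks and @Type_x_Objects_y variables do today.

      SELECT DayName, ObjectType, Quantity,
             COUNT(*) AS NumRows,
             COUNT(*) * 100.0 / SUM(COUNT(*)) OVER (PARTITION BY DayName, ObjectType) AS PctOfDay
      FROM (
             SELECT DATENAME(weekday, CurrentDateTime) AS DayName,
                    ObjectType, Quantity
             FROM   dbo.BuildingReadings
             UNPIVOT (Quantity FOR ObjectType IN
                      (Number_Type_1_Objects, Number_Type_2_Objects, Number_Type_3_Objects,
                       Number_Type_4_Objects, Number_Type_5_Objects)) AS u
           ) AS r
      GROUP BY DayName, ObjectType, Quantity
      ORDER BY DayName, ObjectType, Quantity;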

    Read the article

  • Sql Server 2005 multiple insert with c#

    - by bottlenecked
    Hello. I have a class named Entry declared like this:

      class Entry {
          string Id { get; set; }
          string Name { get; set; }
      }

    and then a method that will accept multiple such Entry objects for insertion into the database using ADO.NET:

      static void InsertEntries(IEnumerable<Entry> entries) {
          // build a SqlCommand object
          using (SqlCommand cmd = new SqlCommand()) {
              ...
              const string refcmdText = "INSERT INTO Entries (id, name) VALUES (@id{0},@name{0});";
              int count = 0;
              string query = string.Empty;
              // build a large query
              foreach (var entry in entries) {
                  query += string.Format(refcmdText, count);
                  cmd.Parameters.AddWithValue(string.Format("@id{0}", count), entry.Id);
                  cmd.Parameters.AddWithValue(string.Format("@name{0}", count), entry.Name);
                  count++;
              }
              cmd.CommandText = query;
              // and then execute the command
              ...
          }
      }

    And my question is this: should I keep using the above way of sending multiple insert statements (build a giant string of insert statements and their parameters and send it over the network), or should I keep an open connection and send a single insert statement for each Entry, like this:

      using (SqlCommand cmd = new SqlCommand()) {
          using (SqlConnection conn = new SqlConnection()) {
              // assign connection string and open connection
              ...
              cmd.Connection = conn;
              foreach (var entry in entries) {
                  cmd.CommandText = "INSERT INTO Entries (id, name) VALUES (@id,@name);";
                  cmd.Parameters.AddWithValue("@id", entry.Id);
                  cmd.Parameters.AddWithValue("@name", entry.Name);
                  cmd.ExecuteNonQuery();
              }
          }
      }

    What do you think? Will there be a performance difference in SQL Server between the two? Are there any other consequences I should be aware of? Thank you for your time!

    Read the article
