Search Results

Search found 5101 results on 205 pages for 'ssms 2005'.


  • How do I check if a SQL Server 2005 TEXT column is not null or empty using LINQ To Entities?

    - by emzero
    Hi there guys, I'm new to LINQ and I'm trying to check whether a TEXT column is null or empty (as String.IsNullOrEmpty does):

        from c in ...
        ...
        select new { c.Id, HasBio = !String.IsNullOrEmpty(c.bio) }

    The query above throws a SqlException: "Argument data type text is invalid for argument 1 of len function." The generated SQL is similar to the following:

        CASE WHEN ( NOT (([Extent2].[bio] IS NULL) OR ((CAST(LEN([Extent2].[bio]) AS int)) = 0))) THEN cast(1 as bit)
             WHEN (([Extent2].[bio] IS NULL) OR ((CAST(LEN([Extent2].[bio]) AS int)) = 0)) THEN cast(0 as bit)
        END AS [C1]

    LEN is not applicable to TEXT columns; I know DATALENGTH should be used for them. How can I force LINQ to produce that, or is there any other workaround to test whether a TEXT column is null or empty? Thanks!
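
    One server-side angle on this (a sketch, not a LINQ-generated query): DATALENGTH accepts text/ntext where LEN does not, so the predicate the query needs looks like the T-SQL below. The table name Profiles is an assumption standing in for whatever [Extent2] maps to.

        -- Minimal sketch: a null-or-empty test that is valid for a TEXT column.
        -- DATALENGTH works on text/ntext/image; LEN does not.
        SELECT Id,
               CASE WHEN bio IS NULL OR DATALENGTH(bio) = 0
                    THEN CAST(0 AS bit)
                    ELSE CAST(1 AS bit)
               END AS HasBio
        FROM   Profiles;   -- hypothetical table name

    If the generated SQL can't be steered this way, wrapping the test in a view and mapping the entity to that view is one way to keep the LINQ side unchanged.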


  • Disable auto-indent for certain items in Visual Studio 2005?

    - by Jakobud
    I don't mind most of the way VS2005 auto-indents (or auto-formats) my C++ code, but there are certain items I don't want it to indent automatically. #define statements, for example: it shoves them all the way to the left side of the screen, no matter how deep into my scope I type them, which is really annoying. Is there some way to alter this behavior, short of completely disabling auto-indent/format?


  • How do I search an NTEXT column for XML attributes and update the values? MS SQL 2005

    - by Alan
    Duplicate: this exact question was asked by the same author in http://stackoverflow.com/questions/1221583/how-do-i-update-a-xml-string-in-an-ntext-column-in-sql-server. Please close this one and answer in the original question. I have a SQL table with two columns, ID (int) and Value (ntext). The Value rows contain all sorts of XML strings:

        ID   Value
        --   ---------------------------------------------------
        1    <ROOT><Type current="TypeA"/></ROOT>
        2    <XML><Name current="MyName"/><XML>
        3    <TYPE><Colour current="Yellow"/><TYPE>
        4    <TYPE><Colour current="Yellow" Size="Large"/><TYPE>
        5    <TYPE><Colour current="Blue" Size="Large"/><TYPE>
        6    <XML><Name current="Yellow"/><XML>

    How do I:

    A. List the rows where <TYPE><Colour current="Yellow", bearing in mind that there is also an entry <XML><Name current="Yellow"/><XML>
    B. Modify the rows that contain <TYPE><Colour current="Yellow" to be <TYPE><Colour current="Purple"

    Thanks for your help!
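
    A minimal T-SQL sketch of both parts, assuming the table is dbo.MyTable (the question doesn't name it). LIKE works directly against ntext, but REPLACE does not, so the update casts to nvarchar(max) first (available in SQL Server 2005):

        -- A. List rows whose TYPE/Colour is currently Yellow; the pattern keys on the
        --    <TYPE><Colour prefix, so the <XML><Name current="Yellow"/> row is skipped.
        SELECT ID, Value
        FROM   dbo.MyTable
        WHERE  Value LIKE '%<TYPE><Colour current="Yellow"%';

        -- B. Rewrite Yellow to Purple on those same rows.
        UPDATE dbo.MyTable
        SET    Value = REPLACE(CAST(Value AS nvarchar(max)),
                               '<Colour current="Yellow"',
                               '<Colour current="Purple"')
        WHERE  Value LIKE '%<TYPE><Colour current="Yellow"%';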


  • Can I encrypt a value in C# and use that with SQL Server 2005 symmetric encryption?

    - by Robert Byrne
    To be more specific: if I create a symmetric key with a specific KEY_SOURCE and ALGORITHM (as described here), is there any way I can set up the same key and algorithm in C#, so that I can encrypt data in code but have that data decrypted by the symmetric key in SQL Server? From the research I've done so far, it seems that the IDENTITY_VALUE for the key is also baked into the ciphertext, making things even more complex. I'm thinking about just trying all the various approaches I can think of, i.e. hashing the KEY_SOURCE with different hash algorithms to get a key and trying different ways of encrypting the plain text until I get something that works. Or is that just futile? Has anyone else done this? Any pointers? UPDATE: Just to clarify, I want to use NHibernate on the client side, but there's a bunch of stored procedures on the database side that still perform decryption.
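
    For reference, a sketch of the server-side round trip (SQL Server 2005 syntax; the key name, phrases and password are placeholders). Because the derivation of the key bytes from KEY_SOURCE is not documented, many setups keep both EncryptByKey and DecryptByKey in T-SQL and call them from the client rather than re-implementing the key in C#:

        CREATE SYMMETRIC KEY SK_Demo
            WITH ALGORITHM      = AES_256,
                 KEY_SOURCE     = 'key source phrase',      -- placeholder
                 IDENTITY_VALUE = 'identity phrase'         -- placeholder
            ENCRYPTION BY PASSWORD = 'StrongP@ssw0rd!';

        OPEN SYMMETRIC KEY SK_Demo DECRYPTION BY PASSWORD = 'StrongP@ssw0rd!';

        DECLARE @cipher varbinary(8000);
        SET @cipher = EncryptByKey(Key_GUID('SK_Demo'), N'secret value');

        SELECT CONVERT(nvarchar(4000), DecryptByKey(@cipher)) AS plain_text;

        CLOSE SYMMETRIC KEY SK_Demo;

    Because KEY_SOURCE and IDENTITY_VALUE pin the key material down, the same CREATE statement recreates an identical key on another server, which at least keeps the T-SQL side portable.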


  • How to make an awkward pivot of a SQL table in SQL Server 2005?

    - by Oliver
    I have to rotate a given table from a SQL Server, but a normal pivot just doesn't work (as far as I have tried). Does anybody have an idea how to rotate the table into the desired format? Just to make the problem more complicated, the list of given labels can vary, and a new label name can appear at any time.

    Given data:

        ID | Label           | Numerator | Denominator | Ratio
        ---+-----------------+-----------+-------------+-------
         1 | LabelNameOne    |        41 |          10 | 4,1
         1 | LabelNameTwo    |         0 |           0 | 0
         1 | LabelNameThree  |        21 |          10 | 2,1
         1 | LabelNameFour   |        15 |          10 | 1,5
         2 | LabelNameOne    |        19 |          19 | 1
         2 | LabelNameTwo    |         0 |           0 | 0
         2 | LabelNameThree  |        15 |          16 | 0,9375
         2 | LabelNameFive   |        19 |          19 | 1
         2 | LabelNameSix    |        17 |          17 | 1
         3 | LabelNameOne    |        12 |          12 | 1
         3 | LabelNameTwo    |         0 |           0 | 0
         3 | LabelNameThree  |        11 |          12 | 0,9167
         3 | LabelNameFour   |        12 |          12 | 1
         3 | LabelNameSix    |         0 |           1 | 0

    Wanted result:

        ID | ValueType   | LabelNameOne | LabelNameTwo | LabelNameThree | LabelNameFour | LabelNameFive | LabelNameSix
        ---+-------------+--------------+--------------+----------------+---------------+---------------+-------------
         1 | Numerator   |           41 |            0 |             21 |            15 |               |
         1 | Denominator |           10 |            0 |             10 |            10 |               |
         1 | Ratio       |          4,1 |            0 |            2,1 |           1,5 |               |
         2 | Numerator   |           19 |            0 |             15 |               |            19 |           17
         2 | Denominator |           19 |            0 |             16 |               |            19 |           17
         2 | Ratio       |            1 |            0 |         0,9375 |               |             1 |            1
         3 | Numerator   |           12 |            0 |             11 |            12 |               |            0
         3 | Denominator |           12 |            0 |             12 |            12 |               |            1
         3 | Ratio       |            1 |            0 |         0,9167 |             1 |               |            0
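
    A minimal sketch of the rotation in T-SQL, assuming the source table is dbo.GivenData (a placeholder name). UNPIVOT insists on a single common data type, so the three measure columns are cast to decimal first; for a varying label list, build both IN (...) lists as strings from SELECT DISTINCT Label and run the statement with sp_executesql:

        SELECT ID, ValueType,
               LabelNameOne, LabelNameTwo, LabelNameThree,
               LabelNameFour, LabelNameFive, LabelNameSix
        FROM (
                -- fold Numerator/Denominator/Ratio into (ValueType, Value) pairs
                SELECT ID, Label, ValueType, Value
                FROM (
                        SELECT ID, Label,
                               CAST(Numerator   AS decimal(18,4)) AS Numerator,
                               CAST(Denominator AS decimal(18,4)) AS Denominator,
                               CAST(Ratio       AS decimal(18,4)) AS Ratio
                        FROM   dbo.GivenData
                     ) AS t
                UNPIVOT (Value FOR ValueType IN (Numerator, Denominator, Ratio)) AS u
             ) AS src
        -- then spread the labels back out into columns
        PIVOT (MAX(Value) FOR Label IN (LabelNameOne, LabelNameTwo, LabelNameThree,
                                        LabelNameFour, LabelNameFive, LabelNameSix)) AS p
        ORDER BY ID, ValueType;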


  • Is there a way to back up a SQL 2005 database structure fully, but only the data in a certain set of schemas?

    - by TheSoftwareJedi
    I have several schemas in my database, and the largest one ("large" meaning disk space consumed) is my "web" schema, which is a denormalized copy of data in the operational schemas. This denormalized data can be reconstructed at any time and is merely there for extremely fast reads. Since the data is redundant and VERY large, I'd like to exclude it from the backups. I already have stored procedures that can regenerate all of the data in that schema in a couple of hours, for use in the event of a failure. I assume I can split the tables in this schema out to another data file or such (ideally even on another drive for faster reads), but is there a way to never back up that data file, yet still be able to restore its structure (and other DDL such as procs, views, etc.) in the event of a failure? Somewhat related: can I also have these tables skip transaction logging if I go to the "Full" recovery mode for the rest of the database?
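
    One route that fits this (a sketch under assumptions: the database is called MyDb, paths are placeholders) is to put the web schema's tables on their own filegroup and back up only the other filegroups; table definitions, procs and views live in the database metadata, so the structure still comes back on restore. Per-table opt-out of transaction logging isn't possible, though; the common workaround is to move the rebuildable data into a separate database that uses the SIMPLE recovery model.

        -- Create a dedicated filegroup and file for the denormalized data.
        ALTER DATABASE MyDb ADD FILEGROUP WebData;
        ALTER DATABASE MyDb ADD FILE
            (NAME = 'MyDb_WebData', FILENAME = 'D:\Data\MyDb_WebData.ndf', SIZE = 100MB)
            TO FILEGROUP WebData;

        -- Create (or rebuild) the web schema tables on that filegroup, e.g.
        -- CREATE TABLE web.BigDenormalizedTable (...) ON WebData;

        -- Back up only the filegroup(s) you care about.
        BACKUP DATABASE MyDb
            FILEGROUP = 'PRIMARY'
            TO DISK = 'D:\Backups\MyDb_primary.bak';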


  • What really happens when we do a BEGIN TRAN in SQL Server 2005?

    - by Misnomer
    Hi all, I came across this issue (or maybe just something I hadn't realized): I did a BEGIN TRAN, ran some code inside it, and never issued a COMMIT or ROLLBACK because I forgot about it. That caused many of the database queries, even a simple SELECT TOP 1000, to just sit there loading. It has probably put some locks on the tables, I guess, since it did not let me query them, but I just wanted to know what exactly happened and what practices should be followed here.
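
    What happened is that the open transaction kept its locks until a COMMIT or ROLLBACK, so other sessions blocked behind it. A few quick checks (a sketch; run the first in the offending session and the rest from any connection):

        SELECT @@TRANCOUNT;        -- > 0 means this session still has an open transaction

        DBCC OPENTRAN;             -- oldest active transaction in the current database

        SELECT request_session_id, resource_type, request_mode, request_status
        FROM   sys.dm_tran_locks
        WHERE  resource_database_id = DB_ID();   -- who holds or waits on which locks

        -- and, from the session that issued BEGIN TRAN:
        -- COMMIT TRAN;    -- or ROLLBACK TRAN;

    The usual practice is to keep transactions short, pair every BEGIN TRAN with an explicit COMMIT or ROLLBACK (ideally inside TRY/CATCH), and check @@TRANCOUNT before closing a query window.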


  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a Data Flow task, I can slip a Row Count into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the row count was 0. This works well for me, but I have no corresponding strategy for Execute SQL tasks that are expected to return a single row. In that case, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables. I can branch on that component failing, but there's a side effect: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the Agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be. The only strategy I can think of is to run the same query with a COUNT(*) aggregate, test whether that returns a number > 0, and if so run the query again without the COUNT. That's ugly because I have the same query in two places that I need to keep in sync. Is there a better way?
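
    One common workaround (a sketch; table and column names are placeholders) is to shape the query so it always returns exactly one row, with the "was anything found" signal carried in a column instead of in task failure. Aggregates over an empty set still produce a row, so the variable assignment never fails and a precedence-constraint expression can branch on RowsFound:

        SELECT ISNULL(MAX(SomeValue), 0) AS SomeValue,
               COUNT(*)                  AS RowsFound
        FROM   dbo.SomeLookup
        WHERE  SomeKey = ?;     -- OLE DB parameter placeholder for the SSIS variable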


  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that holds live-updating stock prices. One process, 'LiveFeed', reads prices streaming from the wire, queues up inserts, and uses a "bulk upload to temp table, then insert/update into the MDC table" stored proc (BulkUpsert). Another process then reads this data, computes other data, and saves the results back into the same table using a similar BulkUpsert stored proc. Thirdly, a multitude of users run a C# GUI polling the MDC table and reading updates from it.

    During the day, when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing. Here's all the relevant info.

    Table definition:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
            (
                [MDID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    (Screenshot: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg; Stack Overflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question.)

    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like (screenshot: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg).

    Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while process 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction

        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
          and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source
        from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')

        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction

        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT *
        FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2

        commit

    To clarify, those temp tables are created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold. I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
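
    One adjustment commonly suggested for this read-then-insert pattern (a sketch, not the poster's eventual fix; column names are taken from the original proc): take key-range locks up front with UPDLOCK, HOLDLOCK so two concurrent upserts can't interleave between their UPDATE and INSERT phases, and replace the NOT IN / NOLOCK existence check with NOT EXISTS. Applied to the LiveFeed proc body it would look roughly like this:

        BEGIN TRANSACTION;

        UPDATE c WITH (UPDLOCK, HOLDLOCK)
        SET    LastUpdate = t.LastUpdate,
               Value      = t.Value,
               Source     = t.Source
        FROM   dbo.MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.MDID;

        INSERT INTO dbo.MarketDataCurrent (MDID, LastUpdate, Value, Source)
        SELECT t.MDID, t.LastUpdate, t.Value, t.Source
        FROM   #TEMPTABLE2 t
        WHERE  NOT EXISTS (SELECT 1
                           FROM   dbo.MarketDataCurrent c WITH (UPDLOCK, HOLDLOCK)
                           WHERE  c.MDID = t.MDID);

        DELETE #TEMPTABLE2;

        COMMIT;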


  • SQL 2005 - any way to restore/copy a diagram?

    - by NealWalters
    I used the Red Gate packager (ran the MSI) to reset all the data in my database (i.e. I deleted everything and let it build the new database). Unfortunately, I discovered that it didn't retain my diagrams, which had a nice arrangement and several annotations. Is there any way to copy/migrate/script a diagram from one database to another (the databases have identical structures)? Thanks, Neal Walters
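
    Diagrams in SQL Server 2005 are stored as rows in dbo.sysdiagrams, so if a copy of the old database still exists anywhere they can be moved with a plain INSERT...SELECT. A sketch, assuming both databases sit on the same instance and the target has had diagram support set up (expand the Database Diagrams node once in SSMS so dbo.sysdiagrams exists):

        INSERT INTO TargetDb.dbo.sysdiagrams (name, principal_id, version, definition)
        SELECT name, principal_id, version, definition
        FROM   SourceDb.dbo.sysdiagrams
        WHERE  name = 'MyDiagram';    -- or drop the WHERE to copy every diagram

    The catch is that the diagrams have to still exist somewhere to copy from; going forward, scripting sysdiagrams out along with the rest of the database is the usual insurance.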


  • How to send reminder notifications to subscribers for renewal from SQL Server 2005?

    - by codemonkie
    I have a table in a SQL database, dbo.subscribers, with the following columns: SubscriberID and JoinDateTime. The business logic says a subscription lasts for 2 weeks and a reminder should be sent 7 days after the JoinDateTime. The way the system was designed, reminders are sent via a URL call, e.g. http://xxx.xxx.xxx.xxx/renew_userid=SubscriberID/, and that can only be called from our web server, which is the only whitelisted IP address. Currently a Windows service queries the DB once a day at midnight to grab all expiring subscribers and send them reminders, but this batch approach only sends reminders to the nearest day. I could set the interval down from 1 day to 1 hour so that the service sends notifications closer to the exact JoinDateTime + 7 days requirement. I have heard a stored procedure can be written to perform a task like this in a nearly real-time manner; if so, please give me some hints on how to do it. Another question: is SSRS a bit of an overkill for something like this? Please advise. TIA
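
    A sketch of the selection the service (or a more frequent SQL Agent job) would run each time it wakes up. ReminderSentAt is a hypothetical column added so each subscriber is only picked up once; the HTTP call itself is best left in the whitelisted web server's code, since calling URLs from T-SQL (sp_OA* or SQLCLR) is possible but generally discouraged:

        SELECT SubscriberID
        FROM   dbo.subscribers
        WHERE  DATEADD(day, 7, JoinDateTime) <= GETDATE()
          AND  ReminderSentAt IS NULL;    -- hypothetical "already reminded" marker

    Tightening the job schedule to every 5-10 minutes usually gets close enough to the exact JoinDateTime + 7 days mark; truly event-driven alternatives (such as Service Broker with a conversation timer per subscriber) exist but add a fair number of moving parts.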


  • Should I create a unique clustered index, or a non-unique clustered index, on this SQL 2005 table?

    - by Bremer
    I have a table storing millions of rows. It looks something like this:

        Table_Docs
            ID, bigint (identity column)
            OutputFileID, int
            Sequence, int
            ... (many other fields)

    We find ourselves in a situation where the developer who designed it made OutputFileID the clustered index. It is not unique; there can be thousands of records with this ID. It has no benefit to any process using this table, so we plan to remove it. The question is what to change it to. I have two candidates.

    The ID identity column is a natural choice. However, we have a process that runs a lot of UPDATE commands on this table, and it uses the Sequence to do so. The Sequence is non-unique: most records only have one, but about 20% can have two or more records with the same Sequence.

    The INSERT app is a VB6 piece of crud throwing thousands of insert commands at the table. The inserted values are never in any particular order, so the Sequence of one insert may be 12345 and the next could be 12245. I know this could cause SQL to move a lot of data to keep a clustered index in order. However, the Sequences of the inserts are generally close to being in order, and all inserts would take place at the end of the clustered table. E.g., I have 5 million records with Sequence spanning 1 to 5 million, and the INSERT app will be inserting Sequences at the end of that range at any given time. Reordering of the data should be minimal (tens of thousands of records at most).

    Now, the UPDATE app is our .NET star. It does all UPDATEs on the Sequence column ("UPDATE Table_Docs SET Field1=This, Field2=That... WHERE Sequence = 12345"), hundreds of thousands of these a day. The UPDATEs are completely and totally random, touching all points of the table. All other processes simply do SELECTs on this table (web pages), and regular indexes cover those.

    So my question is: what's better, a unique clustered index on the ID column, benefiting the INSERT app, or a non-unique clustered index on the Sequence, benefiting the UPDATE app?
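
    For reference, the two candidates in DDL form (a sketch; only one clustered index can exist, so the second is commented out), plus the middle ground of clustering on ID and giving the updater its own nonclustered index on Sequence:

        -- Option 1: unique clustered index on the ever-increasing identity column
        -- (narrow key, inserts always land at the end).
        CREATE UNIQUE CLUSTERED INDEX CIX_Table_Docs_ID
            ON dbo.Table_Docs (ID);

        -- Option 2: non-unique clustered index on Sequence (groups the rows the
        -- UPDATE app touches, at the cost of a hidden uniquifier on duplicates).
        -- CREATE CLUSTERED INDEX CIX_Table_Docs_Sequence
        --     ON dbo.Table_Docs (Sequence);

        -- Middle ground often paired with Option 1: a nonclustered index so the
        -- WHERE Sequence = ... updates can still seek.
        CREATE NONCLUSTERED INDEX IX_Table_Docs_Sequence
            ON dbo.Table_Docs (Sequence);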


  • How can I efficiently manipulate 500k records in SQL Server 2005?

    - by cdeszaq
    I am getting a large text file of updated information from a customer that contains updates for 500,000 users. However, as I am processing this file, I often run into SQL Server timeout errors. Here's the process I follow in my VB application that processes the data (in general):

    1. Delete all records from the temporary table, to remove last month's data (e.g. DELETE FROM tempTable)
    2. Rip the text file into the temp table
    3. Fill in extra information in the temp table, such as organization_id, user_id, group_code, etc.
    4. Update the data in the real tables based on the data computed in the temp table

    The problem is that I often run commands like UPDATE tempTable SET user_id = (SELECT user_id FROM myUsers WHERE external_id = tempTable.external_id), and these commands frequently time out. I have tried bumping the timeouts up as far as 10 minutes, but they still fail. Now, I realize that 500k rows is no small number of rows to manipulate, but I would think that a database purported to handle millions and millions of rows should be able to cope with 500k pretty easily. Am I doing something wrong with how I am going about processing this data? Please help. Any and all suggestions welcome.
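
    Two things usually fix this kind of timeout (a sketch, using the names from the question and assuming user_id is NULL until filled in): index the join key on both sides, and rewrite the correlated-subquery UPDATE as a join processed in batches, so no single statement runs long enough to trip the client timeout. SQL Server 2005 syntax:

        -- One-time helpers: make the lookup a seek instead of a scan per row.
        CREATE INDEX IX_tempTable_external_id ON tempTable (external_id);
        CREATE INDEX IX_myUsers_external_id   ON myUsers   (external_id);

        -- Batched, set-based version of the user_id fill-in.
        DECLARE @rows int;
        SET @rows = 1;

        WHILE @rows > 0
        BEGIN
            UPDATE TOP (10000) t
            SET    t.user_id = u.user_id
            FROM   tempTable t
            INNER JOIN myUsers u ON u.external_id = t.external_id
            WHERE  t.user_id IS NULL;     -- only rows not yet filled in

            SET @rows = @@ROWCOUNT;
        END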


  • How to switch the column name in a SQL 2005 SELECT CASE WHEN (T-SQL)?

    - by soe
        select (Case Remark01 When 'l1' then type1
                              when 'l2' then type2 end) AS [?]   -- Remark: I want to switch the name here
        from mytable

    Example of what I want:

        select (Case level When 'l1' then type1   -- 'l1' means check the constant string
                           when 'l2' then type2   -- 'l2' means check the constant string
                end) AS (Case when 'l1' then [type01] Else [type02])
        from mytable

        select level, type1, type2 from mytable

    I am using this table from two programs: one program wants to show only type1 in its menu, and the other wants to show only type2. I want to use one view for both programs.
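
    A column alias can't be chosen by a CASE expression at runtime; in a single static query or view the output column name is fixed. The usual options are to expose a fixed alias (or both columns) and let each program pick what it shows, or to build the statement dynamically. A sketch of the dynamic route, where @which is a hypothetical parameter saying which program is asking:

        DECLARE @which varchar(2), @sql nvarchar(max);
        SET @which = 'l1';

        SET @sql = N'SELECT (CASE [level] WHEN ''l1'' THEN type1
                                          WHEN ''l2'' THEN type2 END) AS '
                 + QUOTENAME(CASE WHEN @which = 'l1' THEN 'type01' ELSE 'type02' END)
                 + N' FROM mytable;';

        EXEC sp_executesql @sql;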


  • MS SQL 2005: Why would a delete from a temp table hang forever?

    - by Dr. Zim
        DELETE FROM #tItem_ID
        WHERE #tItem_ID.Item_ID NOT IN (
            SELECT DISTINCT Item_ID
            FROM Item_Keyword
            JOIN Keyword ON Item_Keyword.Keyword_ID = Keyword.Record_ID
            WHERE Keyword LIKE @tmpKW)

    The keyword is %macaroni%. Both temp tables are 3300 items long. The inner select executes in under a second. All strings are nvarchar(249); all IDs are int. Any ideas? I executed it (it's in a stored proc) for over 12 minutes without it finishing.
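
    A sketch of the usual rewrite: NOT EXISTS avoids the NULL-handling quirks of NOT IN and often gets a better plan, since the correlation is pushed into the subquery. (If it still hangs, it may simply be blocked; sys.dm_exec_requests will show the wait type and a blocking_session_id.)

        DELETE t
        FROM   #tItem_ID AS t
        WHERE  NOT EXISTS (SELECT 1
                           FROM   Item_Keyword ik
                           INNER JOIN Keyword k ON ik.Keyword_ID = k.Record_ID
                           WHERE  k.Keyword LIKE @tmpKW
                             AND  ik.Item_ID = t.Item_ID);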


  • Best way to move a bunch of SQL Server 2005 tables to another Server?

    - by Mikecancook
    I've been looking for a way to move a bunch of tables, more than 40, to another server, with all the data in them. I've looked around for scripts to generate inserts, but so far I'd have to run them once for every table, then copy all the scripts over, and then run them on the server. It seems like there should be a better way.

    --Update-- My strategy for doing this may have been for naught. The end script, using the MS SQL Server Publishing Wizard and Red Gate's SQL Data Compare (excellent tool, btw), results in a file over 1 GB. That makes my system plead for mercy, and I'm not willing to risk crashing a client's server just by opening the file to run it. I may have to rethink this whole thing and break it down to individual per-table scripts. I'm not looking forward to that.
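
    One alternative to a giant insert script (a sketch; SOURCESRV and SourceDb are assumed names for a linked server pointing back at the original box): run this on the destination server, where SELECT ... INTO creates each table and copies its data in one pass. Indexes, constraints and defaults still need to be scripted separately, which the Generate Scripts wizard or the Import/Export Wizard (which builds an SSIS package under the hood) can handle.

        -- Copy one table: creates dbo.Customers locally and pulls its rows across.
        SELECT *
        INTO   dbo.Customers
        FROM   SOURCESRV.SourceDb.dbo.Customers;

        -- Generate the same statement for every table on the source side.
        SELECT 'SELECT * INTO dbo.' + QUOTENAME(name)
             + ' FROM SOURCESRV.SourceDb.dbo.' + QUOTENAME(name) + ';'
        FROM   SOURCESRV.SourceDb.sys.tables
        ORDER BY name;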

