Search Results

Search found 159 results on 7 pages for 'uniqueidentifier'.


  • If I drop my clustered PK and add a new one, what order will my rows be in?

    - by stack
    In SQL Server, I'm looking at TableA, which currently has a uniqueidentifier clustered primary key. The GUID has no meaning in any context. (I'll give you a second to clean up your keyboard and monitor and set down the soda.) I'd like to drop that primary key and add a new unique integer primary key to the table. My question is this: when I drop the index, modify the column from uniqueidentifier to int, and add the new clustered unique primary key to the modified column, will the new PK values be in the order of insertion into the table, or will they be in some other order? Is this the right way to go here? Will this work? (I'm kind of a noobkin with regard to table creation/modification.)
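    One point in the question deserves a flag: uniqueidentifier cannot be converted to int, so the column cannot simply be ALTERed in place; the usual route is a new IDENTITY column. A minimal sketch (table and constraint names are hypothetical), with the ordering caveat in the comments:

        -- Sketch: swap a GUID clustered PK for an int one.
        ALTER TABLE dbo.TableA DROP CONSTRAINT PK_TableA;             -- drop the old clustered PK
        ALTER TABLE dbo.TableA ADD NewId int IDENTITY(1,1) NOT NULL;  -- new surrogate key
        ALTER TABLE dbo.TableA ADD CONSTRAINT PK_TableA_NewId
            PRIMARY KEY CLUSTERED (NewId);                            -- new clustered PK
        -- Caveat: the IDENTITY values are assigned in no guaranteed order;
        -- in practice roughly the table's current order (the old GUID order),
        -- NOT the original insertion order, which the table no longer records.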

    Read the article

  • Restoring multiple database backups in a transaction

    - by Raghu Dodda
    I wrote a stored procedure that restores a set of database backups. It takes two parameters – a source directory and a restore directory. The procedure looks for all .bak files in the source directory (recursively) and restores all the databases. The stored procedure works as expected, but it has one issue – if I uncomment the try-catch statements, the procedure terminates with the following error:

        error_number = 3013
        error_severity = 16
        error_state = 1
        error_message = DATABASE is terminating abnormally.

    The weird part is that sometimes (it is not consistent) the restore is done even if the error occurs. The procedure:

        create proc usp_restore_databases
        (
            @source_directory varchar(1000),
            @restore_directory varchar(1000)
        )
        as
        begin
            declare @number_of_backup_files int
            -- begin transaction
            -- begin try

            -- step 0: Initial validation
            if(right(@source_directory, 1) <> '\') set @source_directory = @source_directory + '\'
            if(right(@restore_directory, 1) <> '\') set @restore_directory = @restore_directory + '\'

            -- step 1: Put all the backup files in the specified directory in a table --
            declare @backup_files table ( file_path varchar(1000))
            declare @dos_command varchar(1000)
            set @dos_command = 'dir ' + '"' + @source_directory + '*.bak" /s/b'
            /* DEBUG */ print @dos_command
            insert into @backup_files(file_path) exec xp_cmdshell @dos_command
            delete from @backup_files where file_path IS NULL
            select @number_of_backup_files = count(1) from @backup_files
            /* DEBUG */ select * from @backup_files
            /* DEBUG */ print @number_of_backup_files

            -- step 2: restore each backup file --
            declare backup_file_cursor cursor for select file_path from @backup_files
            open backup_file_cursor

            declare @index int; set @index = 0
            while(@index < @number_of_backup_files)
            begin
                declare @backup_file_path varchar(1000)
                fetch next from backup_file_cursor into @backup_file_path
                /* DEBUG */ print @backup_file_path

                -- step 2a: parse the full backup file name to get the DB file name.
                declare @db_name varchar(100)
                set @db_name = right(@backup_file_path, charindex('\', reverse(@backup_file_path)) - 1) -- still has the .bak extension
                /* DEBUG */ print @db_name
                set @db_name = left(@db_name, charindex('.', @db_name) - 1)
                /* DEBUG */ print @db_name
                set @db_name = lower(@db_name)
                /* DEBUG */ print @db_name

                -- step 2b: find out the logical names of the mdf and ldf files
                declare @mdf_logical_name varchar(100), @ldf_logical_name varchar(100)
                declare @backup_file_contents table
                (
                    LogicalName nvarchar(128), PhysicalName nvarchar(260), [Type] char(1),
                    FileGroupName nvarchar(128), [Size] numeric(20,0), [MaxSize] numeric(20,0),
                    FileID bigint, CreateLSN numeric(25,0), DropLSN numeric(25,0) NULL,
                    UniqueID uniqueidentifier, ReadOnlyLSN numeric(25,0) NULL,
                    ReadWriteLSN numeric(25,0) NULL, BackupSizeInBytes bigint,
                    SourceBlockSize int, FileGroupID int, LogGroupGUID uniqueidentifier NULL,
                    DifferentialBaseLSN numeric(25,0) NULL, DifferentialBaseGUID uniqueidentifier,
                    IsReadOnly bit, IsPresent bit
                )
                insert into @backup_file_contents
                exec ('restore filelistonly from disk=' + '''' + @backup_file_path + '''')

                select @mdf_logical_name = LogicalName from @backup_file_contents where [Type] = 'D'
                select @ldf_logical_name = LogicalName from @backup_file_contents where [Type] = 'L'
                /* DEBUG */ print @mdf_logical_name + ', ' + @ldf_logical_name

                -- step 2c: restore
                declare @mdf_file_name varchar(1000), @ldf_file_name varchar(1000)
                set @mdf_file_name = @restore_directory + @db_name + '.mdf'
                set @ldf_file_name = @restore_directory + @db_name + '.ldf'
                /* DEBUG */ print 'mdf_logical_name = ' + @mdf_logical_name + '|' +
                                  'ldf_logical_name = ' + @ldf_logical_name + '|' +
                                  'db_name = ' + @db_name + '|' +
                                  'backup_file_path = ' + @backup_file_path + '|' +
                                  'restore_directory = ' + @restore_directory + '|' +
                                  'mdf_file_name = ' + @mdf_file_name + '|' +
                                  'ldf_file_name = ' + @ldf_file_name

                restore database @db_name from disk = @backup_file_path
                with move @mdf_logical_name to @mdf_file_name,
                     move @ldf_logical_name to @ldf_file_name

                -- step 2d: iterate
                set @index = @index + 1
            end

            close backup_file_cursor
            deallocate backup_file_cursor

            -- end try
            -- begin catch
            --     print error_message()
            --     rollback transaction
            --     return
            -- end catch
            -- commit transaction
        end

    Does anybody have any ideas why this might be happening? Another question: is the transaction code useful? I.e., if there are two databases to be restored, will SQL Server undo the restore of one database if the second restore fails?
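    One fact is worth knowing here: SQL Server does not allow BACKUP or RESTORE inside an explicit transaction, which is consistent with the batch dying once BEGIN TRANSACTION is uncommented – and it also means a completed restore cannot be rolled back, so a failed second restore will not undo the first. Error handling therefore has to be done per database, without an enclosing transaction. A hedged sketch of that shape, reusing the variables from the procedure above:

        -- Sketch: wrap each RESTORE individually; no surrounding transaction,
        -- since RESTORE cannot run inside one and cannot be rolled back.
        begin try
            restore database @db_name from disk = @backup_file_path
            with move @mdf_logical_name to @mdf_file_name,
                 move @ldf_logical_name to @ldf_file_name
        end try
        begin catch
            print 'restore of ' + @db_name + ' failed: ' + error_message()
            -- continue with the next backup file instead of aborting the whole run
        end catch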

    Read the article

  • Many-To-Many Query with Linq-To-NHibernate

    - by rjygraham
    Ok guys (and gals), this one has been driving me nuts all night and I'm turning to your collective wisdom for help. I'm using Fluent NHibernate and Linq-To-NHibernate as my data access story and I have the following simplified DB structure:

        CREATE TABLE [dbo].[Classes](
            [Id] [bigint] IDENTITY(1,1) NOT NULL,
            [Name] [nvarchar](100) NOT NULL,
            [StartDate] [datetime2](7) NOT NULL,
            [EndDate] [datetime2](7) NOT NULL,
            CONSTRAINT [PK_Classes] PRIMARY KEY CLUSTERED ( [Id] ASC )

        CREATE TABLE [dbo].[Sections](
            [Id] [bigint] IDENTITY(1,1) NOT NULL,
            [ClassId] [bigint] NOT NULL,
            [InternalCode] [varchar](10) NOT NULL,
            CONSTRAINT [PK_Sections] PRIMARY KEY CLUSTERED ( [Id] ASC )

        CREATE TABLE [dbo].[SectionStudents](
            [SectionId] [bigint] NOT NULL,
            [UserId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_SectionStudents] PRIMARY KEY CLUSTERED ( [SectionId] ASC, [UserId] ASC )

        CREATE TABLE [dbo].[aspnet_Users](
            [ApplicationId] [uniqueidentifier] NOT NULL,
            [UserId] [uniqueidentifier] NOT NULL,
            [UserName] [nvarchar](256) NOT NULL,
            [LoweredUserName] [nvarchar](256) NOT NULL,
            [MobileAlias] [nvarchar](16) NULL,
            [IsAnonymous] [bit] NOT NULL,
            [LastActivityDate] [datetime] NOT NULL,
            PRIMARY KEY NONCLUSTERED ( [UserId] ASC )

    I omitted the foreign keys for brevity, but essentially this boils down to:

    - A Class can have many Sections.
    - A Section can belong to only 1 Class but can have many Students.
    - A Student (aspnet_Users) can belong to many Sections.

    I've set up the corresponding Model classes and Fluent NHibernate Mapping classes; all that is working fine. Here's where I'm getting stuck. I need to write a query which will return the sections a student is enrolled in, based on the student's UserId and the dates of the class. Here's what I've tried so far:

        1. var sections = (from s in this.Session.Linq<Sections>()
                           where s.Class.StartDate <= DateTime.UtcNow
                              && s.Class.EndDate > DateTime.UtcNow
                              && s.Students.First(f => f.UserId == userId) != null
                           select s);

        2. var sections = (from s in this.Session.Linq<Sections>()
                           where s.Class.StartDate <= DateTime.UtcNow
                              && s.Class.EndDate > DateTime.UtcNow
                              && s.Students.Where(w => w.UserId == userId).FirstOrDefault().Id == userId
                           select s);

    Obviously, 2 above will fail miserably if there are no students matching userId for classes with the current date between their start and end dates... but I just wanted to try. The filters for the Class StartDate and EndDate work fine, but the many-to-many relation with Students is proving to be difficult. Every time I try running the query I get an ArgumentNullException with the message:

        Value cannot be null. Parameter name: session

    I've considered going down the path of making the SectionStudents relation a Model class with a reference to Section and a reference to Student instead of a many-to-many. I'd like to avoid that if I can, and I'm not even sure it would work that way. Thanks in advance to anyone who can help. Ryan
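    For reference, the result the query is chasing corresponds to straightforward SQL over the posted schema – a sketch (with @userId standing in for the student's id), useful as a sanity check against whatever the LINQ provider generates:

        -- Sketch: sections a given student is enrolled in, for classes running now.
        DECLARE @userId uniqueidentifier;  -- set to the student's UserId
        SELECT s.*
        FROM [dbo].[Sections] AS s
        JOIN [dbo].[Classes] AS c ON c.Id = s.ClassId
        JOIN [dbo].[SectionStudents] AS ss ON ss.SectionId = s.Id
        WHERE c.StartDate <= GETUTCDATE()
          AND c.EndDate   >  GETUTCDATE()
          AND ss.UserId = @userId;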

    Read the article

  • Why Could Linq to Sql Submit Changes Fail for Updates Despite Data in Change Set

    - by KevDog
    I'm updating a set of objects, but the update fails on a SqlException that says "Incorrect Syntax near 'Where'". So I crack open SqlProfiler, and here is the generated SQL: exec sp_executesql N'UPDATE [dbo].[Addresses] SET WHERE ([AddressID] = @p0) AND ([StreetAddress] = @p1) AND ([StreetAddress2] = @p2) AND ([City] = @p3) AND ([State] = @p4) AND ([ZipCode] = @p5) AND ([CoordinateID] = @p6) AND ([CoordinateSourceID] IS NULL) AND ([CreatedDate] = @p7) AND ([Country] = @p8) AND (NOT ([IsDeleted] = 1)) AND (NOT ([IsNonSACOGZip] = 1))',N'@p0 uniqueidentifier,@p1 varchar(15),@p2 varchar(8000),@p3 varchar(10),@p4 varchar(2),@p5 varchar(5),@p6 uniqueidentifier,@p7 datetime,@p8 varchar(2)',@p0='92550F32-D921-4B71-9622-6F1EC6123FB1',@p1='125 Main Street',@p2='',@p3='Sacramento',@p4='CA',@p5='95864',@p6='725E7939-AEE3-4EF9-A033-7507579B69DF',@p7='2010-06-15 14:07:51.0100000',@p8='US' Sure enough, no set statement. I also called context.GetChangeSet() and the proper values are in the updates section. Also, I checked the .dbml file and all of the properties Update Check values are 'Always'. I am completely baffled on this one, any help out there?

    Read the article

  • Reference-counted object is used after it is released

    - by EndyVelvet
    While running code analysis on the project I get the message "Reference-counted object is used after it is released" on the line [defaults setObject:deviceUuid forKey:@"deviceUuid"]; I looked at the topic "Obj-C, Reference-counted object is used after it is released?" but did not find a solution there. ARC is disabled.

        // Get the user's Device Model, Display Name, Unique ID, Token & Version Number
        UIDevice *dev = [UIDevice currentDevice];
        NSString *deviceUuid;
        if ([dev respondsToSelector:@selector(uniqueIdentifier)])
            deviceUuid = dev.uniqueIdentifier;
        else {
            NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
            id uuid = [defaults objectForKey:@"deviceUuid"];
            if (uuid)
                deviceUuid = (NSString *)uuid;
            else {
                CFStringRef cfUuid = CFUUIDCreateString(NULL, CFUUIDCreate(NULL));
                deviceUuid = (NSString *)cfUuid;
                CFRelease(cfUuid);
                [defaults setObject:deviceUuid forKey:@"deviceUuid"];
            }
        }

    Please help find the cause.

    Read the article

  • BNF – how to read syntax?

    - by Piotr Rodak
    A few days ago I read a post by Jen McCown (blog) about her idea of blogging about random articles from Books Online. I think this is a great idea, even if Jen says that it's not exciting or sexy. I noticed that many of the questions that appear on forums and other media arise from the pure fact that the people asking didn't bother to read and understand the manual – Books Online. Jen came up with a brilliant, concise acronym that describes very well the category of posts about Books Online – RTFM365. I take the liberty of tagging this post with the same acronym.

    I often come across questions of the type – 'Hey, I am trying to create a table, but I am getting an error'. The error often says that the syntax is invalid.

        1 CREATE TABLE dbo.Employees
        2 (guid uniqueidentifier CONSTRAINT DEFAULT Guid_Default NEWSEQUENTIALID() ROWGUIDCOL,
        3 Employee_Name varchar(60)
        4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );
        5

    The answer is usually(1), 'Ok, let me check it out.. Ah yes – you have to put the name of the DEFAULT constraint before the type of the constraint:

        1 CREATE TABLE dbo.Employees
        2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
        3 Employee_Name varchar(60)
        4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    Why do many people stumble on syntax errors? Is the syntax poorly documented? No, the issue is that the correct syntax of the CREATE TABLE statement is documented very well in Books Online and is.. intimidating. Many people can be taken aback by the rather complex block of code that describes all the intricacies of the statement. However, I don't know a better way of defining the syntax of a statement or command.

    The notation that is used to describe syntax in Books Online is a form of Backus–Naur notation, sometimes called BNF for short. This is a notation that was invented around 50 years ago, and some say even earlier, around 400 BC – would you believe? Originally it was used to define the syntax of the, rather ancient now, ALGOL programming language (in the 1950s, not in ancient India). If you look closer at the definition of BNF, it turns out that the principles of this syntax are pretty simple. Here are a few bullet points:

    - italic_text is a placeholder for your identifier
    - <italic_text_in_angle_brackets> is a definition which is described further.
    - [everything in square brackets] is optional
    - {everything in curly brackets} is obligatory
    - everything | separated | by | operator is an alternative
    - ::= "assigns" a definition to an identifier

    Yes, it looks like these six simple points give you the key to understanding even the most complicated syntax definitions in Books Online. Books Online contains an article about syntax conventions – have you ever read it? Let's have a look at a fragment of the CREATE TABLE statement:

        1 CREATE TABLE
        2     [ database_name . [ schema_name ] . | schema_name . ] table_name
        3     ( { <column_definition> | <computed_column_definition>
        4         | <column_set_definition> }
        5         [ <table_constraint> ] [ ,...n ] )
        6     [ ON { partition_scheme_name ( partition_column_name ) | filegroup
        7         | "default" } ]
        8     [ { TEXTIMAGE_ON { filegroup | "default" } ]
        9     [ FILESTREAM_ON { partition_scheme_name | filegroup
        10        | "default" } ]
        11    [ WITH ( <table_option> [ ,...n ] ) ]
        12    [ ; ]

    Let's look at line 2 of the above snippet. This line uses rules 3 and 5 from the list. So you know that you can create a table whose name is specified in one of the following ways:

    - just the name – the table will be created in the default user schema
    - schema name and table name – the table will be created in the specified schema
    - database name, schema name and table name – the table will be created in the specified database, in the specified schema
    - database name, .., table name – the table will be created in the specified database, in the default schema of the user.

    Note that this single line of the notation describes each of the naming schemes in a deterministic way. The 'optionality' of the schema_name element is nested within the database_name.. section. You can use either database_name and an optional schema name, or just a schema name – this is specified by the pipe character '|'.

    The error that the user gets with execution of the first script fragment in this post is as follows:

        Msg 156, Level 15, State 1, Line 2
        Incorrect syntax near the keyword 'DEFAULT'.

    Ok, let's have a look at how to find out the correct syntax. Line number 3 of the BNF fragment above contains a reference to <column_definition>. Since column_definition is in angle brackets, we know that this is a reference to a notion described further in the code. And indeed, the very next fragment of BNF contains the syntax of the column definition.

        1 <column_definition> ::=
        2 column_name <data_type>
        3     [ FILESTREAM ]
        4     [ COLLATE collation_name ]
        5     [ NULL | NOT NULL ]
        6     [
        7         [ CONSTRAINT constraint_name ] DEFAULT constant_expression ]
        8         | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ]
        9     ]
        10    [ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ]
        11    [ SPARSE ]

    Look at line 7 in the above fragment. It says that the column can have a DEFAULT constraint which, if you want to name it, has to be prepended with the [CONSTRAINT constraint_name] sequence. The name of the constraint is optional, but I strongly recommend making the effort of coming up with some meaningful name yourself. So the correct syntax of the CREATE TABLE statement from the beginning of the article is like this:

        1 CREATE TABLE dbo.Employees
        2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
        3 Employee_Name varchar(60)
        4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    That is practically everything you should know about BNF. I encourage you to study the syntax definitions for various statements and commands in Books Online; you can find really interesting things hidden there.

    Technorati Tags: SQL Server, t-sql, BNF, syntax

    (1) No, my answer usually is a question – 'What error message? What does it say?'. You'd be surprised to know how many people think I can go through time and space and look at their screen at the moment they received the error.

    Read the article

  • ... i just avoid GUID

    - by Tomaz.tsql
    Our partner was explaining to me that they are using GUIDs as the primary key on all the tables. My immediate reaction was – why? A couple of basic doubts were:

    - since I can read a uniqueidentifier, it does not tell me absolutely anything
    - if I use my relational table, I sure will use other columns to get the information out
    - SQL is terrible when setting up a clustered index on GUID columns (and hence performance problems)
    - why not use INT? It will save you space on disk, the optimizer will be able...(read more)
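    Two standard mitigations come up in this discussion – an ever-increasing 4-byte INT IDENTITY key, or, when a GUID key is genuinely required (e.g. for replication), NEWSEQUENTIALID() as the column default, which generates sequential values and avoids the page splits that random NEWID() values cause in a clustered index. A hedged sketch (table and constraint names are invented):

        -- Option A: 4-byte, ever-increasing surrogate key
        CREATE TABLE dbo.OrdersInt (
            OrderId int IDENTITY(1,1) NOT NULL
                CONSTRAINT PK_OrdersInt PRIMARY KEY CLUSTERED
        );

        -- Option B: GUID key, but sequential, so inserts append instead of splitting pages
        CREATE TABLE dbo.OrdersGuid (
            OrderId uniqueidentifier NOT NULL
                CONSTRAINT DF_OrdersGuid_OrderId DEFAULT NEWSEQUENTIALID()
                CONSTRAINT PK_OrdersGuid PRIMARY KEY CLUSTERED
        );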

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #034

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.

    2007

    UDF – User Defined Function to Strip HTML – Parse HTML – No Regular Expression
    The UDF used in the blog does a fantastic task – it scans the entire HTML text and removes all the HTML tags. It keeps only valid text data without HTML tags. This is one of the quite commonly requested tasks many developers have to face every day.

    De-fragmentation of Database at Operating System to Improve Performance
    The operating system skips the MDF file while defragging the entire filesystem of the operating system. It is absolutely fine and there is no impact of the same on performance. Read the entire blog post for my conversation with our network engineers.

    Delay Function – WAITFOR clause – Delay Execution of Commands
    How do you delay execution of commands in SQL Server – of course, by using the WAITFOR keyword. In this blog post, I explain the same with the help of a T-SQL script.

    Find Length of Text Field
    To measure the length of TEXT fields the function is DATALENGTH(textfield). LEN will not work for text fields (see the short sketch after this list). As of SQL Server 2005, developers should migrate all text fields to VARCHAR(MAX) as that is the way forward.

    Retrieve Current Date Time in SQL Server CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}
    There are three ways to retrieve the current datetime in SQL SERVER: CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}.

    Explanation and Comparison of NULLIF and ISNULL
    An interesting observation is that NULLIF returns null if its comparison is successful, whereas ISNULL returns not null if its comparison is successful. In one way they are opposite to each other. Here is my question to you – how do you create an infinite loop using NULLIF and ISNULL, if this is even possible?

    2008

    Introduction to SERVERPROPERTY and example
    SERVERPROPERTY is a very interesting system function. It returns many of the system values. I use it very frequently to get different server values like Server Collation, Server Name etc.

    SQL Server Start Time
    We can use a DMV to find out what the start time of SQL Server is in 2008 and later versions. In this blog you can see how you can do the same.

    Find Current Identity of Table
    Many times we need to know what the current identity of a column is. I have found one of my developers using the aggregate function MAX() to find the current identity. However, I prefer the following DBCC command to figure out the current identity.

    Create Check Constraint on Column
    Sometimes we just need to create a simple constraint on a table, but I have noticed that developers do many different things to make a table column follow rules other than just creating a constraint. I suggest that a constraint is a very useful concept and every SQL developer should pay good attention to this subject.

    2009

    List Schema Name and Table Name for Database
    This is one of those blog posts where I straightforwardly display a script. The kind of blog post which I still love to read and write.

    Clustered Index on Separate Drive From Table Location
    A table devoid of a primary key index is called a heap, and here data is not arranged in a particular order, which gives rise to issues that adversely affect performance. Data must be stored in some kind of order. If we put a clustered index on it, then the order will be forced by that index and the data will be stored in that particular order.

    Understanding Table Hints with Examples
    Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query.

    2010

    Data Pages in Buffer Pool – Data Stored in Memory Cache
    One of my earlier articles, which I still read many times and point developers to read again. It is clear from the resultset that when more than one index is used, data pages related to both or all of the indexes are stored in the memory cache separately.

    TRANSACTION, DML and Schema Locks
    Can you create a situation where you can see a schema lock? Well, this is a very simple question; however, during interviews I noticed over 50 candidates failed to come up with the scenario. In this blog post, I have demonstrated the situation where we can see the schema lock in a database.

    2011

    Solution – Puzzle – Statistics are not updated but are Created Once
    In this example I have created the following situation: create a table; insert 1,000 records; check the statistics; now insert 10 times more records – 10,000 rows; check the statistics – they will NOT be updated. Auto Update Statistics and Auto Create Statistics for the database are TRUE. Now I have requested two things in the example: 1) why is this happening? and 2) how to fix this issue?

    Selecting Domain from Email Address
    This is a straight-to-script blog post where I explain how to select only the domain name from an entire email address.

    Solution – Generating Zero Without using Any Numbers in T-SQL
    How do you get the zero digit without using any digits? This is indeed a very interesting question and the answer is even more interesting. Try to come up with the answer in the next 10 minutes, and if you can't, read this post for the solution.

    2012

    Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function
    In simple words – SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The DIFFERENCE function returns an integer value. The integer returned is the number of characters in the SOUNDEX values that are the same.

    Read Only Files and SQL Server Management Studio (SSMS)
    I have come across a very interesting feature in SSMS related to "Read Only" files. I believe it is a little-known feature as well, so I decided to write a blog about the same.

    Identifying Column Data Type of uniqueidentifier without Querying System Tables
    How do I know if a table has a uniqueidentifier column and what its value is without using any DMVs or system catalogues? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Read the blog post to find the answer.

    Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue
    Interesting question – "When I try to connect to SQL Server, it lets me connect just fine, as well as open and explore the database. I noticed that I do not see any user created objects, but when my colleague attempts to connect to the server, he is able to explore the database as well as see all the user created tables and other objects. Can you help me fix it?"

    Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video
    Here is an interesting small 60-second video on how to import a CSV file into a database.

    ColumnStore Index – Batch Mode vs Row Mode
    Here is the logic behind when a Columnstore Index uses Batch Mode and when it uses Row Mode. A batch typically represents about 1000 rows of data. Batch mode processing also uses algorithms that are optimized for multicore CPUs and increased memory throughput.

    Follow up – Usage of $rowguid and $IDENTITY
    This is an excellent follow-up to my earlier blog post where I explain where to use $rowguid and $identity. If you do not know the difference between them, this is a blog with a script example.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
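    As a quick illustration of the 2007 item on text-field length (a minimal sketch; the temp table is only for demonstration):

        -- DATALENGTH works on text; LEN does not accept the text data type.
        create table #t (c text);
        insert into #t values (replicate('x', 100));
        select datalength(c) as bytes from #t;   -- 100 (bytes = characters for text)
        -- select len(c) from #t;                -- error: LEN does not support text
        drop table #t;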

    Read the article

  • Restoring databases to a set drive and directory

    - by okeofs
    Restoring databases to a set drive and directory

    Introduction

    Often people say that necessity is the mother of invention. In this case I was faced with the dilemma of having to restore several databases, with multiple 'ndf' files, and having to restore them with different physical file names, drives and directories on servers other than the servers from which they originated. As most of us would do, I went to Google to see if I could find some code to achieve this task and found some interesting snippets on Pinal Dave's website. Naturally, I had to take it further than the code snippet, HOWEVER it was a great place to start.

    Creating a temp table to hold database file details

    First off, I created a temp table which would hold the details of the individual data files within the database. Although there are a plethora of fields (within the temp table below), I utilize LogicalName only within this example. The temporary table structure may be seen below:

        create table #tmp (
            LogicalName nvarchar(128),
            PhysicalName nvarchar(260),
            Type char(1),
            FileGroupName nvarchar(128),
            Size numeric(20,0),
            MaxSize numeric(20,0),
            Fileid tinyint,
            CreateLSN numeric(25,0),
            DropLSN numeric(25,0),
            UniqueID uniqueidentifier,
            ReadOnlyLSN numeric(25,0),
            ReadWriteLSN numeric(25,0),
            BackupSizeInBytes bigint,
            SourceBlockSize int,
            FileGroupId int,
            LogGroupGUID uniqueidentifier,
            DifferentialBaseLSN numeric(25,0),
            DifferentialBaseGUID uniqueidentifier,
            IsReadOnly bit,
            IsPresent bit,
            TDEThumbPrint varchar(50)
        )

    We now declare and populate a variable (@path), setting the variable to the path of our SOURCE database backup.

        declare @path varchar(50)
        set @path = 'P:\DATA\MYDATABASE.bak'

    From this point, we insert the file details of our database into the temp table. Note that we do so by utilizing a restore statement, HOWEVER doing so in 'filelistonly' mode.

        insert #tmp
        EXEC ('restore filelistonly from disk = ''' + @path + '''')

    At this point, I depart from what I gleaned from Pinal Dave. I now instantiate a few more local variables. The use of each variable will be evident within the cursor (which follows):

        Declare @RestoreString as Varchar(max)
        Declare @NRestoreString as NVarchar(max)
        Declare @LogicalName as varchar(75)
        Declare @counter as int
        Declare @rows as int
        set @counter = 1
        select @rows = COUNT(*) from #tmp -- Count the number of records in the temp table

    Declaring and populating the cursor

    At this point I do realize that many people are cringing about the use of a cursor. Being an Oracle professional as well, I have learnt that there is a time and place for cursors. I would remind the reader that the data that will be read into the cursor is from a local temp table and, as such, any locking of the records (within the temp table) is not really an issue.

        DECLARE MY_CURSOR Cursor FOR
        Select LogicalName From #tmp

    Parsing the logical names from within the cursor

    A small caveat that works in our favour is that the first logical name (of our database) is the logical name of the primary data file (.mdf). Other files, except for the very last logical name, belong to secondary data files. The last logical name is that of our database log file. I now open my cursor and populate the variable @RestoreString:

        Open My_Cursor
        set @RestoreString = 'RESTORE DATABASE [MYDATABASE] FROM DISK = N''P:\DATA\MYDATABASE.bak''' + ' with '

    We now fetch the first record from the temp table.

        Fetch NEXT FROM MY_Cursor INTO @LogicalName

    While there are STILL records left within the cursor, we dynamically build our restore string. Note that we are using concatenation to create 'one big restore executable string'. Note also that the target physical file name is hardwired, as is the target directory.

        While (@@FETCH_STATUS <> -1)
        BEGIN
            IF (@@FETCH_STATUS <> -2) -- As long as there are no rows missing
            select @RestoreString = case
                -- This is the mdf file
                when @counter = 1 then
                    @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.mdf' + '''' + ', '
                -- OK, if it passes through here we are dealing with an .ndf file.
                -- Note that Counter must be greater than 1 and less than the number of rows.
                when @counter > 1 and @counter < @rows then
                    @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ndf' + '''' + ', '
                -- OK, if it passes through here we are dealing with the log file
                When @LogicalName like '%log%' then
                    @RestoreString + 'move N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ldf' + ''''
            end
            -- Increment the counter
            set @counter = @counter + 1
            FETCH NEXT FROM MY_CURSOR INTO @LogicalName
        END

    At this point we have populated the varchar(max) variable @RestoreString with a concatenation of all the necessary file names. What we now need to do is run the sp_executesql stored procedure, to effect the restore. First, we must place our 'concatenated string' into an nvarchar based variable. Obviously this will only work as long as the length of @RestoreString is less than varchar(max) / 2.

        set @NRestoreString = @RestoreString
        EXEC sp_executesql @NRestoreString

    Upon completion of this step, the database should be restored to the server. I now close and deallocate the cursor, and to be clean, I would also drop my temp table.

        CLOSE MY_CURSOR
        DEALLOCATE MY_CURSOR
        GO

    Conclusion

    Restoration of databases on different servers with different physical names and on different drives is a fact of life. Through the use of a few variables and a simple cursor, we may achieve an efficient and effective way to achieve this task.

    Read the article

  • How do you handle objects that need custom behavior, and need to exist as an entity in the database?

    - by Scott Whitlock
    For a simple example, assume your application sends out notifications to users when various events happen. So in the database I might have the following tables:

        TABLE Event
            EventId uniqueidentifier
            EventName varchar

        TABLE User
            UserId uniqueidentifier
            Name varchar

        TABLE EventSubscription
            EventUserId
            EventId
            UserId

    The events themselves are generated by the program. So there are hard-coded points in the application where an event instance is generated, and it needs to notify all the subscribed users. So, the application itself doesn't edit the Event table, except during initial installation, and during an update where a new Event might be created. At some point, when an event is generated, the application needs to look up the Event and get a list of Users. What's the best way to link the event in the source code to the event in the database?

    Option 1: Store the EventName in the program as a fixed constant, and look it up by name.
    Option 2: Store the EventId in the program as a static Guid, and look it up by ID.

    Extra Credit

    In other similar circumstances I may want to include custom behavior with the event type. That is, I'll want subclasses of my Event entity class with different behaviors, and when I look up an event, I want it to return an instance of my subclass. For instance:

        class Event
        {
            public Guid Id { get; }
            public string EventName { get; }
            public ReadOnlyCollection<EventSubscription> EventSubscriptions { get; }

            public void NotifySubscribers()
            {
                foreach(var eventSubscription in EventSubscriptions)
                {
                    eventSubscription.Notify();
                }
                this.OnSubscribersNotified();
            }

            public virtual void OnSubscribersNotified() {}
        }

        class WakingEvent : Event
        {
            private readonly IWaker waker;

            public WakingEvent(IWaker waker)
            {
                if(waker == null) throw new ArgumentNullException("waker");
                this.waker = waker;
            }

            public override void OnSubscribersNotified()
            {
                this.waker.Wake();
                base.OnSubscribersNotified();
            }
        }

    So, that means I need to map WakingEvent to whatever key I'm using to look it up in the database. Let's say that's the EventId. Where do I store this relationship? Does it go in the event repository class? Should the WakingEvent declare its own ID in a static member or method? ...and then, is this all backwards? If all events have a subclass, then instead of retrieving events by ID, should I be asking my repository for the WakingEvent like this:

        public T GetEvent<T>() where T : Event
        {
            ... // what goes here? ...
        }

    I can't be the first one to tackle this. What's the best practice?
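    For what it's worth, option 2 typically pairs with an idempotent install/upgrade script that pins the well-known IDs, so the constant in code and the row in the database cannot drift apart. A hedged sketch in T-SQL (the GUID value is a made-up placeholder):

        -- Sketch: seed a well-known event at install/upgrade time (id is illustrative).
        IF NOT EXISTS (SELECT 1 FROM dbo.Event
                       WHERE EventId = '5F0C35A1-9E4B-4B9D-8C2A-1B7D2E6F0A3C')
            INSERT INTO dbo.Event (EventId, EventName)
            VALUES ('5F0C35A1-9E4B-4B9D-8C2A-1B7D2E6F0A3C', 'WakingEvent');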

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month's T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don't write about LobsterPot client horror stories, so I'm writing about a situation that a fellow MVP friend asked me about recently instead.

    The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what's changed. But of course, this isn't always possible. And it's much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you've just deleted the wrong data source. Unfortunately, Reporting Services lets you delete data sources without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend's colleague. So, suddenly there's a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn't the hard part (that's just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again.

    I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don't – they're catalogue items just like the reports. Let's have a look at the structure of these two tables (although if you're reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn't seem to be a Books Online page for this information, sorry about that. I'm also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there don't seem to have been any changes here for quite a while.

    dbo.Catalog

    The Primary Key is ItemID. It's a uniqueidentifier. I'm not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what's what. But foreign keys are for that too…

    Path, Name and ParentID tell you where in the folder structure the item lives. Path isn't actually required – you could've done recursive queries to get there. But as that would be quite painful, I'm more than happy for the Path column to be there. Path contains the Name as well, incidentally.

    Type tells you what kind of item it is. Some examples are 1 for a folder and 2 a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it).

    Content is an image field, remembering that image doesn't necessarily store images – these days we'd rather use varbinary(max), but even in SQL Server 2012, this field is still image. It stores the actual item definition in binary form, whether it's actually an image, a report, whatever.

    LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID.

    Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn't be a separate table, but I guess that's just the way it goes. This field gets changed when the default parameters get changed in Report Manager.

    There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they're not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I'm not sure. They're in dbo.DataSource instead.

    dbo.DataSource

    The Primary key is DSID. Yes, it's a uniqueidentifier...

    ItemID is a foreign key reference back to dbo.Catalog.

    Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question.

    Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You'd think this should be enforced by foreign key, but it's not. It does allow NULLs though.

    Flags – this is an int, and I'll come back to this.

    When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you'd be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don't need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot.

    Except that it doesn't quite do that. If it deleted everything on a cascading delete, we'd've lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don't want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure:

        UPDATE [DataSource]
           SET [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
               [Link] = NULL
        FROM [Catalog] AS C
           INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there's no semi-colon on the end (but I'd rather they fix the ntext and image types first), and don't get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what's going on with the Flags field.

    What I'd LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that's in the shared data source, the one that's about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I'd say that's probably still slightly better than LOSING THE INFORMATION COMPLETELY. Sorry, rant over. I should log a Connect item – I'll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they're broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the '2' bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we've just used. I'd also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source being broken in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you'd be forgetting something.

    So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you're changing, just in case you need to revert something later after testing (doing it all in a transaction won't help, because you'll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /* Insert your own GUID */
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there's no reason why 400-odd broken reports needs to be quite the nightmare that it could be. Really, it should be less than five minutes.

    @rob_farley

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
    We're all familiar with the ability to restore a database to a point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or are modifying each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn't the same point in time for all databases. This common relative point in time might be now in DB1, now minus 1 hour in DB2 and yesterday in DB3. And we don't know the exact times.

    Let me introduce you to Transaction Marks. When we run a marked transaction using the WITH MARK option, a flag is set in the transaction log and a row is added to the msdb..logmarkhistory table. When restoring a transaction log backup we can restore to either before or after that marked transaction. The best thing is that we don't even need to have one database modifying another database. All we have to do is use a marked transaction with the same name in different databases. Let's see how this works with an example. The code comments say what's going on.

        USE master
        GO
        CREATE DATABASE TestTxMark1
        GO
        USE TestTxMark1
        GO
        CREATE TABLE TestTable1 (ID INT, VALUE UNIQUEIDENTIFIER)

        -- insert some data into the table so we can have a starting point
        INSERT INTO TestTable1
        SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULL
        FROM master..spt_values
        ORDER BY RN

        SELECT * FROM TestTable1
        GO

        -- TAKE A FULL BACKUP of the database
        BACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'
        GO

        USE master
        GO
        CREATE DATABASE TestTxMark2
        GO
        USE TestTxMark2
        GO
        CREATE TABLE TestTable2 (ID INT, VALUE UNIQUEIDENTIFIER)

        -- insert some data into the table so we can have a starting point
        INSERT INTO TestTable2
        SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()
        FROM master..spt_values
        ORDER BY RN

        SELECT * FROM TestTable2
        GO

        -- TAKE A FULL BACKUP of our database
        BACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'
        GO

        -- start a marked transaction that modifies both databases
        BEGIN TRAN TxDb WITH MARK
            -- update values from NULL to random value
            UPDATE TestTable1 SET VALUE = NEWID();
            -- update first 100 values from random value
            -- to NULL in different DB
            UPDATE TestTxMark2.dbo.TestTable2 SET VALUE = NULL WHERE ID <= 100;
        COMMIT
        GO

        -- some time goes by here
        -- with various database activity...

        -- We see two entries for marks in each database.
        -- This is just informational and has no bearing on the restore itself.
        SELECT * FROM msdb..logmarkhistory

        USE master
        GO
        -- create a log backup to restore to mark point
        BACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'
        GO
        -- drop the database so we can restore it back
        DROP DATABASE TestTxMark1
        GO

        USE master
        GO
        -- create a log backup to restore to mark point
        BACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'
        GO
        -- drop the database so we can restore it back
        DROP DATABASE TestTxMark2
        GO

        -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION
        -- restore the full backup
        RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;
        -- restore the log backup to the transaction mark
        RESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn'
        WITH RECOVERY,
             -- recover to state before the transaction
             STOPBEFOREMARK = 'TxDb';
             -- recover to state after the transaction
             -- STOPATMARK = 'TxDb';
        GO

        -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION
        -- restore the full backup
        RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;
        -- restore the log backup to the transaction mark
        RESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn'
        WITH RECOVERY,
             -- recover to state before the transaction
             STOPBEFOREMARK = 'TxDb';
             -- recover to state after the transaction
             -- STOPATMARK = 'TxDb';
        GO

        USE TestTxMark1
        -- we restored to time before the transaction,
        -- so we have NULL values in our table
        SELECT * FROM TestTable1

        USE TestTxMark2
        -- we restored to time before the transaction,
        -- so we DON'T have NULL values in our table
        SELECT * FROM TestTable2

    Transaction marks can be used like a crude sync mechanism for cross-database operations. With them we can mark our databases with a common "restore to" point so we know we have a valid state between all databases to restore to.

    Read the article

  • Setting up your project

    - by ssoolsma
    Before any coding we first make sure that the project is set up correctly. (Please note that this blog is all about how I do it, and in case I forget, I can return here and read how I used to do it. Maybe you'll come up with some ideas for yourself too.) In this series we will create a minigolf scoring card. Please note that we eventually create a fully functional application which you cannot use unless you pay me a lot of money! (And I mean a lot!)

    1. Download and install the appropriate tools.

    Download the following:
    - TestDriven.Net (free version on the bottom of the download page)
    - nUnit

    TestDriven is a Visual Studio plugin for many unit-test frameworks, which allows you to run / test code very easily with a right click –> run test. nUnit is the test framework of choice; it works seamlessly with TestDriven.

    2. Create your project

    Fire up Visual Studio and create your DataAccess project: MidgetWidget.DataAccess is its name. (I chose MidgetWidget as the name for the solution.) Also, make sure that the MidgetWidget.DataAccess project is a C# Class Library. Hit OK to create the solution. (In the above example the checkbox Create directory for solution is checked, because I'm pointing the location to the root of c:\development where I want MidgetWidget to be created.)

    3. Setup the database.

    You should have thought about a database when you reach this point. Let's assume that you've created a database as follows:

        Table name: LoginKey
        Fields: Id (PK), KeyName (uniqueidentifier), StartDate (datetime), EndDate (datetime)

        Table name: Party
        Fields: Id (PK), Key (uniqueidentifier), Created (datetime)

        Table name: Person
        Fields: Id (PK), PartyId (int), Name (varchar)

        Table name: Score
        Fields: Id (PK), TrackId (int), PersonId (int), Strokes (int)

        Table name: Track
        Fields: Id (PK), Name (varchar)

    A few things to take note of about the database setup. I've singularized all table names (not "Persons" but "Person"). This is because in a few minutes, when this is in our code, we refer to the database objects as single rows. We retrieve a single Person, not a single "Persons", from the database.

    4. Create the entity framework

    In your solution tree create a new folder and call it "DataModel". Inside this folder: Add new item –> choose ADO.NET Entity Data Model. Name it "Entities.edmx" and hit "Add". Once the edmx is added, open it (double click), right click the white area and choose "Update model from database…". Now, point it to your database (I include sensitive data in the connection string) and select all the tables. After that hit "Finish" and let the Entity Framework do its code generation. Et voilà, after a few seconds you have set up your entity model. Next post we will start building the data access! I'm off to the beach.
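    As a companion to step 3, here is the described schema as T-SQL DDL – a sketch transcribed from the field lists above (constraint names, identity settings, and varchar sizes are my assumptions):

        CREATE TABLE dbo.LoginKey (
            Id        int IDENTITY(1,1) NOT NULL CONSTRAINT PK_LoginKey PRIMARY KEY,
            KeyName   uniqueidentifier NOT NULL,
            StartDate datetime NOT NULL,
            EndDate   datetime NOT NULL
        );
        CREATE TABLE dbo.Party (
            Id      int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Party PRIMARY KEY,
            [Key]   uniqueidentifier NOT NULL,   -- bracketed: KEY is a reserved word
            Created datetime NOT NULL
        );
        CREATE TABLE dbo.Person (
            Id      int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Person PRIMARY KEY,
            PartyId int NOT NULL,
            Name    varchar(100) NOT NULL
        );
        CREATE TABLE dbo.Score (
            Id       int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Score PRIMARY KEY,
            TrackId  int NOT NULL,
            PersonId int NOT NULL,
            Strokes  int NOT NULL
        );
        CREATE TABLE dbo.Track (
            Id   int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Track PRIMARY KEY,
            Name varchar(100) NOT NULL
        );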

    Read the article

  • Is this a bug? : I get "The type ... is not a complex type or an entity type" in my WCF data service

    - by veertien
    When invoking a query on the data service I get this error message inside the XML feed:

        <m:error>
            <m:code></m:code>
            <m:message xml:lang="nl-NL">Internal Server Error. The type 'MyType' is not a complex type or an entity type.</m:message>
        </m:error>

    When I use the example described here in the article "How to: Create a Data Service Using the Reflection Provider (WCF Data Services)" http://msdn.microsoft.com/en-us/library/dd728281(v=VS.100).aspx it works as expected. I have created the service in a .NET 4.0 web project. My data context class returns a query object that is derived from the LINQExtender (http://linqextender.codeplex.com/). When I execute the query object in a unit test, it works as expected. My entity type is defined as:

        [DataServiceKey("Id")]
        public class Accommodation
        {
            [UniqueIdentifier]
            [OriginalFieldName("EntityId")]
            public string Id { get; set; }

            [OriginalFieldName("AccoName")]
            public string Name { get; set; }
        }

    (the UniqueIdentifier and OriginalFieldName attributes are used by LINQExtender) Does anybody know if this is a bug in WCF Data Services or am I doing something wrong?

    Read the article

  • Establishing Upper / Lower Bound in T-SQL Procedure

    - by Code Sherpa
    Hi. I am trying to establish an upper / lower bound in my stored procedure below and am having some problems at the end (I am getting no results, whereas without the temp table inner join I get the expected results). I need some help where I am trying to join the columns in my temp table #PageIndexForUsers to the rest of my join statement, and I am mucking something up with this statement:

        INNER JOIN #PageIndexForUsers ON (
            dbo.aspnet_Users.UserId = #PageIndexForUsers.UserId
            AND #PageIndexForUsers.IndexId >= @PageLowerBound
            AND #PageIndexForUsers.IndexId <= @PageUpperBound )

    I could use feedback at this point – and any advice on how to improve my procedure's logic (if you see anything else that needs improvement) is also appreciated. Thanks in advance...

        ALTER PROCEDURE dbo.wb_Membership_GetAllUsers
            @ApplicationName nvarchar(256),
            @sortOrderId smallint = 0,
            @PageIndex int,
            @PageSize int
        AS
        BEGIN
            DECLARE @ApplicationId uniqueidentifier
            SELECT @ApplicationId = NULL
            SELECT @ApplicationId = ApplicationId
            FROM dbo.aspnet_Applications
            WHERE LOWER(@ApplicationName) = LoweredApplicationName

            IF (@ApplicationId IS NULL) RETURN 0

            -- Set the page bounds
            DECLARE @PageLowerBound int
            DECLARE @PageUpperBound int
            DECLARE @TotalRecords int
            SET @PageLowerBound = @PageSize * @PageIndex
            SET @PageUpperBound = @PageSize - 1 + @PageLowerBound

            BEGIN TRY
                -- Create a temp table to store the select results
                CREATE TABLE #PageIndexForUsers
                (
                    IndexId int IDENTITY (0, 1) NOT NULL,
                    UserId uniqueidentifier
                )

                -- Insert into our temp table
                INSERT INTO #PageIndexForUsers (UserId)
                SELECT u.UserId
                FROM dbo.aspnet_Membership m, dbo.aspnet_Users u
                WHERE u.ApplicationId = @ApplicationId AND u.UserId = m.UserId
                ORDER BY u.UserName

                SELECT @TotalRecords = @@ROWCOUNT

                SELECT dbo.wb_Profiles.profileid, dbo.wb_ProfileData.firstname, dbo.wb_ProfileData.lastname,
                       dbo.wb_Email.emailaddress, dbo.wb_Email.isconfirmed, dbo.wb_Email.emaildomain,
                       dbo.wb_Address.streetname, dbo.wb_Address.cityorprovince, dbo.wb_Address.state,
                       dbo.wb_Address.postalorzip, dbo.wb_Address.country, dbo.wb_ProfileAddress.addresstype,
                       dbo.wb_ProfileData.birthday, dbo.wb_ProfileData.gender, dbo.wb_Session.sessionid,
                       dbo.wb_Session.lastactivitydate, dbo.aspnet_Membership.userid, dbo.aspnet_Membership.password,
                       dbo.aspnet_Membership.passwordquestion, dbo.aspnet_Membership.passwordanswer,
                       dbo.aspnet_Membership.createdate
                FROM dbo.wb_Profiles
                INNER JOIN dbo.wb_ProfileAddress ON (
                    dbo.wb_Profiles.profileid = dbo.wb_ProfileAddress.profileid
                    AND dbo.wb_ProfileAddress.addresstype = 'home' )
                INNER JOIN dbo.wb_Address ON dbo.wb_ProfileAddress.addressid = dbo.wb_Address.addressid
                INNER JOIN dbo.wb_ProfileData ON dbo.wb_Profiles.profileid = dbo.wb_ProfileData.profileid
                INNER JOIN dbo.wb_Email ON (
                    dbo.wb_Profiles.profileid = dbo.wb_Email.profileid
                    AND dbo.wb_Email.isprimary = 1 )
                INNER JOIN dbo.wb_Session ON dbo.wb_Profiles.profileid = dbo.wb_Session.profileid
                INNER JOIN dbo.aspnet_Membership ON dbo.wb_Profiles.userid = dbo.aspnet_Membership.userid
                INNER JOIN dbo.aspnet_Users ON dbo.aspnet_Membership.UserId = dbo.aspnet_Users.UserId
                INNER JOIN dbo.aspnet_Applications ON dbo.aspnet_Users.ApplicationId = dbo.aspnet_Applications.ApplicationId
                INNER JOIN #PageIndexForUsers ON (
                    dbo.aspnet_Users.UserId = #PageIndexForUsers.UserId
                    AND #PageIndexForUsers.IndexId >= @PageLowerBound
                    AND #PageIndexForUsers.IndexId <= @PageUpperBound )
                ORDER BY CASE @sortOrderId
                    WHEN 1 THEN dbo.wb_ProfileData.lastname
                    WHEN 2 THEN dbo.wb_Profiles.username
                    WHEN 3 THEN dbo.wb_Address.postalorzip
                    WHEN 4 THEN dbo.wb_Address.state
                END
            END TRY
            BEGIN CATCH
                IF @@TRANCOUNT > 0 ROLLBACK TRAN
                EXEC wb_ErrorHandler
                RETURN 55555
            END CATCH

            RETURN @TotalRecords
        END
        GO
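    One alternative worth considering (a hedged sketch, not a drop-in replacement): compute the page window with ROW_NUMBER() in a CTE, which removes the temp table entirely and makes the bounds easy to verify in isolation:

        -- Sketch: number the users once, then keep only the requested page.
        ;WITH NumberedUsers AS (
            SELECT u.UserId,
                   ROW_NUMBER() OVER (ORDER BY u.UserName) - 1 AS IndexId
            FROM dbo.aspnet_Users u
            INNER JOIN dbo.aspnet_Membership m ON m.UserId = u.UserId
            WHERE u.ApplicationId = @ApplicationId
        )
        SELECT nu.UserId, nu.IndexId  -- join the rest of the profile tables as above
        FROM NumberedUsers nu
        WHERE nu.IndexId BETWEEN @PageLowerBound AND @PageUpperBound;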

    Read the article

  • Is there a more memory efficient way to search through a Core Data database?

    - by Kristian K
    I need to see if an object that I have obtained from a CSV file with a unique identifier exists in my Core Data database, and this is the code I deemed suitable for this task:

        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity;
        entity = [NSEntityDescription entityForName:@"ICD9" inManagedObjectContext:passedContext];
        [fetchRequest setEntity:entity];
        NSPredicate *pred = [NSPredicate predicateWithFormat:@"uniqueID like %@", uniqueIdentifier];
        [fetchRequest setPredicate:pred];
        NSError *err;
        NSArray* icd9s = [passedContext executeFetchRequest:fetchRequest error:&err];
        [fetchRequest release];
        if ([icd9s count] > 0) {
            for (int i = 0; i < [icd9s count]; i++) {
                NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
                NSString *name = [[icd9s objectAtIndex:i] valueForKey:@"uniqueID"];
                if ([name caseInsensitiveCompare:uniqueIdentifier] == NSOrderedSame && name != nil) {
                    [pool release];
                    return [icd9s objectAtIndex:i];
                }
                [pool release];
            }
        }
        return nil;

    After more thorough testing it appears that this code is responsible for a huge amount of leaking in the app I'm writing (it crashes on a 3GS before making it 20 percent through the 1459 items). I feel like this isn't the most efficient way to do this; any suggestions for a more memory-efficient way? Thanks in advance!

    Read the article

  • Code Keeps Timing Out

    - by DForck42
    So, we've got this set of code that, for some reason, keeps timing out. It's not the stored procedure that it's running, because that runs fine. Also, if we remove the parameter from the C# code, the code runs. The parameter keeps breaking it (causing it to time out) and we can't figure out why.

    C#:

        public static PTWViewList GetList(int studynumber)
        {
            PTWViewList tempList = new PTWViewList();
            using (SqlConnection myConnection = new SqlConnection(AppConfiguration.cnARDB))
            {
                string spName = "ardb.PTWViewSelect";
                SqlCommand myCommand = new SqlCommand(spName, myConnection);
                myCommand.CommandType = CommandType.StoredProcedure;
                myCommand.Parameters.AddWithValue("@study", studynumber);
                myConnection.Open();
                using (NullableDataReader myReader = new NullableDataReader(myCommand.ExecuteReader())) /* this is where the code times out */
                {
                    tempList = new PTWViewList();
                    while (myReader.Read())
                    {
                        tempList.Add(FillDataRecord(myReader));
                    }
                    myReader.Close();
                }
            }
            tempList.ListCount = tempList.Count;
            return tempList;
        }

    Stored procedure:

        CREATE PROCEDURE [ardb].[PTWViewSelect]
            @studynumber int = NULL,
            @quoteid uniqueidentifier = NULL,
            @lineitemid uniqueidentifier = NULL
        AS
        BEGIN
            SET NOCOUNT ON;

            SELECT [Study], [LineItemID], [QuoteID], [Total], [COOP], [VendorCost], [CustCost],
                   [LineItemNumber], [StudyTypeCode], [GroupLeader], [PTWDate], [PONumber],
                   [POStatus], [StudyDirector], [SL_DESC_L], [SL_Code], ProjectDescription,
                   CreatedBy, chARProcess, CODate
            FROM [ARDB].[dbo].[PTWView]
            WHERE (@studynumber is null or StudyNumber = @studynumber)
                AND (@quoteid is null or QuoteID = @quoteid)
                AND (@lineitemid is null or LineItemID = @lineitemid)
        END
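    One pattern worth knowing when a query "runs fine on its own but times out with the parameter": catch-all predicates of the form (@x IS NULL OR col = @x) force the optimizer to pick a single plan for very different filter shapes, and a cached plan built for one shape can time out on another (parameter sniffing). A hedged sketch of a common mitigation – asking for a fresh plan per execution:

        -- Sketch: OPTION (RECOMPILE) lets each call get a plan for its actual parameters.
        SELECT [Study], [LineItemID], [QuoteID]  -- ... remaining columns as above
        FROM [ARDB].[dbo].[PTWView]
        WHERE (@studynumber IS NULL OR StudyNumber = @studynumber)
            AND (@quoteid    IS NULL OR QuoteID    = @quoteid)
            AND (@lineitemid IS NULL OR LineItemID = @lineitemid)
        OPTION (RECOMPILE);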

    Read the article

  • INSTEAD OF triggers do not support direct recursion

    - by senzacionale
        ALTER TRIGGER [dbo].[TRG_DeleteUser]
        ON [dbo].[Users]
        INSTEAD OF DELETE
        AS
        BEGIN
            SET NOCOUNT ON
            DECLARE @AspNetUserGuid UniqueIdentifier
            DECLARE @UserId NVARCHAR(36)
            BEGIN
                SET @AspNetUserGuid = (SELECT AspNetUserGuid FROM deleted)
                SET @UserId = (SELECT UserId FROM dbo.Users WHERE AspNetUserGuid = @AspNetUserGuid)
                IF @AspNetUserGuid IS NOT NULL AND @UserId IS NOT NULL
                BEGIN
                    EXECUTE [dbo].UsersDelete @AspNetUserGuid, @UserId
                END
            END
            SET NOCOUNT OFF
        END

    The problem is here: EXECUTE [dbo].UsersDelete @AspNetUserGuid, @UserId. I need to call the trigger before the row is actually deleted.
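
    Two things stand out here. First, the trigger assumes deleted contains exactly one row; a multi-row DELETE breaks the scalar SET assignments. Second, because an INSTEAD OF trigger replaces the DELETE, the trigger itself has to remove the rows; statements a trigger runs against its own table are processed as if the trigger were absent, which should sidestep the recursion error. A minimal sketch combining both fixes, under the assumption that [dbo].UsersDelete is reduced to the related cleanup and the trigger performs the actual delete:

        ALTER TRIGGER [dbo].[TRG_DeleteUser]
        ON [dbo].[Users]
        INSTEAD OF DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @AspNetUserGuid uniqueidentifier, @UserId nvarchar(36);

            -- Walk every deleted row, not just the first one.
            DECLARE del_cursor CURSOR LOCAL FAST_FORWARD FOR
                SELECT d.AspNetUserGuid, u.UserId
                FROM deleted d
                INNER JOIN dbo.Users u ON u.AspNetUserGuid = d.AspNetUserGuid;

            OPEN del_cursor;
            FETCH NEXT FROM del_cursor INTO @AspNetUserGuid, @UserId;
            WHILE @@FETCH_STATUS = 0
            BEGIN
                -- Per-user cleanup while the row still exists.
                EXECUTE [dbo].UsersDelete @AspNetUserGuid, @UserId;
                FETCH NEXT FROM del_cursor INTO @AspNetUserGuid, @UserId;
            END
            CLOSE del_cursor;
            DEALLOCATE del_cursor;

            -- Perform the delete that this trigger intercepted.
            DELETE u
            FROM dbo.Users u
            INNER JOIN deleted d ON d.AspNetUserGuid = u.AspNetUserGuid;
        END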

    Read the article

  • StoreGeneratedPattern T4 EntityFramework concern

    - by LoganWolfer
    Hi everyone, here's the situation: I use SQL Server 2008 R2, SQL Replication, Visual Studio 2010, Entity Framework 4, and C# 4. The course of action from our DBA is to use a rowguid column for SQL Replication to work with our setup, and every one of these columns needs its StoreGeneratedPattern property set to Computed. The problem: every time our EDMX (ADO.NET Entity Data Model) file is regenerated (for example, when we update it from our database), I have to go into the EDMX XML manually and add this property to every one of them. Each property has to go from this:

        <Property Name="rowguid" Type="uniqueidentifier" Nullable="false" />

    to this:

        <Property Name="rowguid" Type="uniqueidentifier" Nullable="false" StoreGeneratedPattern="Computed"/>

    The solution: I'm trying to find a way to customize the ADO.NET EntityObject Generator T4 file to emit StoreGeneratedPattern="Computed" for every rowguid column I have. I'm fairly new to T4; so far I've only customized the AddView and AddController templates for ASP.NET MVC 2 (List.tt, for example). I've looked through the EF T4 file, and I can't seem to find where in this monster I could do that (or how). My best guess is somewhere in this part of the file, lines 544 to 618 of the original ADO.NET EntityObject Generator T4 file:

        ////////
        //////// Write PrimitiveType Properties.
        ////////
        private void WritePrimitiveTypeProperty(EdmProperty primitiveProperty, CodeGenerationTools code)
        {
            MetadataTools ef = new MetadataTools(this);
        #>
        /// <summary>
        /// <#=SummaryComment(primitiveProperty)#>
        /// </summary><#=LongDescriptionCommentElement(primitiveProperty, 1)#>
        [EdmScalarPropertyAttribute(EntityKeyProperty=<#=code.CreateLiteral(ef.IsKey(primitiveProperty))#>, IsNullable=<#=code.CreateLiteral(ef.IsNullable(primitiveProperty))#>)]
        [DataMemberAttribute()]
        <#=code.SpaceAfter(NewModifier(primitiveProperty))#><#=Accessibility.ForProperty(primitiveProperty)#> <#=code.Escape(primitiveProperty.TypeUsage)#> <#=code.Escape(primitiveProperty)#>
        {
            <#=code.SpaceAfter(Accessibility.ForGetter(primitiveProperty))#>get
            {
        <#+ if (ef.ClrType(primitiveProperty.TypeUsage) == typeof(byte[])) { #>
                return StructuralObject.GetValidValue(<#=code.FieldName(primitiveProperty)#>);
        <#+ } else { #>
                return <#=code.FieldName(primitiveProperty)#>;
        <#+ } #>
            }
            <#=code.SpaceAfter(Accessibility.ForSetter((primitiveProperty)))#>set
            {
        <#+ if (ef.IsKey(primitiveProperty)) {
                if (ef.ClrType(primitiveProperty.TypeUsage) == typeof(byte[])) { #>
                if (!StructuralObject.BinaryEquals(<#=code.FieldName(primitiveProperty)#>, value))
        <#+     } else { #>
                if (<#=code.FieldName(primitiveProperty)#> != value)
        <#+     } #>
                {
        <#+     PushIndent(CodeRegion.GetIndent(1));
            } #>
                <#=ChangingMethodName(primitiveProperty)#>(value);
                ReportPropertyChanging("<#=primitiveProperty.Name#>");
                <#=code.FieldName(primitiveProperty)#> = StructuralObject.SetValidValue(value<#=OptionalNullableParameterForSetValidValue(primitiveProperty, code)#>);
                ReportPropertyChanged("<#=primitiveProperty.Name#>");
                <#=ChangedMethodName(primitiveProperty)#>();
        <#+ if (ef.IsKey(primitiveProperty)) {
                PopIndent(); #>
                }
        <#+ } #>
            }
        }
        private <#=code.Escape(primitiveProperty.TypeUsage)#> <#=code.FieldName(primitiveProperty)#><#=code.StringBefore(" = ", code.CreateLiteral(primitiveProperty.DefaultValue))#>;
        partial void <#=ChangingMethodName(primitiveProperty)#>(<#=code.Escape(primitiveProperty.TypeUsage)#> value);
        partial void <#=ChangedMethodName(primitiveProperty)#>();
        <#+ }

    Any help would be appreciated. Thanks in advance.
    EDIT: I haven't found an answer to this problem yet; if anyone has ideas on how to automate this, it would really be appreciated.

    Read the article

  • How to escape forward slash?

    - by AndrewB
    I have the following SQL command built in code. Because the parameter value contains a forward slash, when I evaluate the row after the update, the column is just empty.

        // Note: only @Id is a real parameter; the column value is embedded
        // directly into the SQL text via String.Format.
        sqlCommand.CommandText = String.Format("update {0} set {1}='{2}' where id = @Id",
            tableName, ColumnName, forwardSlashText);
        sqlCommand.Parameters.Add("@Id", SqlDbType.UniqueIdentifier).Value = rowId;
        numRowsAffected = sqlCommand.ExecuteNonQuery();

    Adding a log.debug to this command, I get the following output:

        update my_table_name set mime_type='application/pdf' where id = @Id

    So I would assume that the command is correct, but then looking at the row, the mime_type column is empty.
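
    For what it's worth, a forward slash is an ordinary character inside a T-SQL string literal and needs no escaping, so the statement shown in the log should store the value as-is. A quick sanity check that can be run directly; this is a hedged sketch against a throwaway temp table, not the real schema:

        -- Demonstrates that 'application/pdf' round-trips unmodified.
        CREATE TABLE #slash_test (id uniqueidentifier NOT NULL, mime_type varchar(100) NULL);

        DECLARE @Id uniqueidentifier;
        SET @Id = NEWID();
        INSERT INTO #slash_test (id) VALUES (@Id);

        UPDATE #slash_test SET mime_type = 'application/pdf' WHERE id = @Id;

        SELECT mime_type FROM #slash_test WHERE id = @Id; -- returns 'application/pdf'

        DROP TABLE #slash_test;

    If that round-trips, the slash is not the problem, and the more likely suspects are a trigger, a second update, or the wrong row being inspected. Passing the value as a parameter instead of formatting it into the SQL text would also rule out quoting issues, and it closes the SQL injection hole that String.Format leaves open.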

    Read the article

  • Why does a newly created EF-entity throw an ID is null exception when trying to save?

    - by Richard
    I'm trying out the entity framework included in VS2010 but I've hit a problem with my database/model generated from the graphical interface. When I do:

        user = dataset.UserSet.CreateObject();
        user.Id = Guid.NewGuid();
        dataset.UserSet.AddObject(user);
        dataset.SaveChanges();

    I get:

        {"Cannot insert the value NULL into column 'Id', table 'BarSoc2.dbo.UserSet'; column does not allow nulls. INSERT fails.\r\nThe statement has been terminated."}

    The table I'm inserting into looks like so:

        -- Creating table 'UserSet'
        CREATE TABLE [dbo].[UserSet] (
            [Id] uniqueidentifier NOT NULL,
            [Name] nvarchar(max) NOT NULL,
            [Username] nvarchar(max) NOT NULL,
            [Password] nvarchar(max) NOT NULL
        );
        GO

        -- Creating primary key on [Id] in table 'UserSet'
        ALTER TABLE [dbo].[UserSet]
        ADD CONSTRAINT [PK_UserSet] PRIMARY KEY CLUSTERED ([Id] ASC);
        GO

    Am I creating the object in the wrong way or doing something else basic wrong?
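
    A common cause of this symptom is the model marking Id as store-generated (StoreGeneratedPattern="Identity" or "Computed" in the storage model), in which case EF discards the Guid assigned in code and sends nothing for the column, while the generated DDL above supplies no default. If the intent is for the database to generate the key instead, a server-side default is one option; a hedged sketch, where the constraint name DF_UserSet_Id is my own invention:

        -- Let SQL Server supply the key so the INSERT no longer has to.
        ALTER TABLE [dbo].[UserSet]
        ADD CONSTRAINT [DF_UserSet_Id] DEFAULT NEWSEQUENTIALID() FOR [Id];

    Otherwise, open the EDMX as XML and remove the StoreGeneratedPattern attribute from the Id property in the storage model, so the Guid.NewGuid() value set in code is actually included in the INSERT.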

    Read the article

  • updateBinaryStream on ResultSet with FileStream

    - by Lieven Cardoen
    If I try to update a FILESTREAM column, I get the following error:

        com.microsoft.sqlserver.jdbc.SQLServerException: The result set is not updatable.

    Code:

        System.out.print("Now, let's update the filestream data.");
        FileInputStream iStream = new FileInputStream("c:\\testFile.mp3");
        rs.updateBinaryStream(2, iStream, -1);
        rs.updateRow();
        iStream.close();

    Why is this? The table in SQL Server 2008:

        CREATE TABLE [BinaryAssets].[BinaryAssetFiles](
            [BinaryAssetFileId] [int] IDENTITY(1,1) NOT NULL,
            [FileStreamId] [uniqueidentifier] ROWGUIDCOL NOT NULL,
            [Blob] [varbinary](max) FILESTREAM NULL,
            [HashCode] [nvarchar](100) NOT NULL,
            [Size] [int] NOT NULL,
            [BinaryAssetExtensionId] [int] NOT NULL,

    Query used in Java:

        String strCmd = "select BinaryAssetFileId, Blob from BinaryAssets.BinaryAssetFiles where BinaryAssetFileId = 1";
        stmt = con.createStatement();
        rs = stmt.executeQuery(strCmd);
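
    The plain con.createStatement() call yields a read-only, forward-only result set, which is what the driver is complaining about; requesting an updatable one via createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE) is the first thing to try. If the result set still cannot be made updatable, the FILESTREAM column can be replaced in plain T-SQL instead; a hedged sketch using OPENROWSET(BULK ...), which requires the path to be readable by the SQL Server service account:

        -- Replace the blob for row 1 by bulk-reading the file on the server side.
        UPDATE BinaryAssets.BinaryAssetFiles
        SET [Blob] = (SELECT BulkColumn
                      FROM OPENROWSET(BULK N'c:\testFile.mp3', SINGLE_BLOB) AS f)
        WHERE BinaryAssetFileId = 1;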

    Read the article

  • How do I get a single value from a stored proc using Nettiers

    - by Micah
    I have a really simple stored procedure that looks like this:

        CREATE PROCEDURE _Visitor_GetVisitorIDByVisitorGUID
        (
            @VisitorGUID AS UNIQUEIDENTIFIER
        )
        AS
        DECLARE @VisitorID AS bigint

        SELECT @VisitorID = VisitorID
        FROM dbo.Visitor
        WHERE VisitorGUID = @VisitorGUID

        -- Here's what I've tried:
        RETURN @VisitorID   -- returns an IDataReader
        SELECT @VisitorID   -- returns an IDataReader
        -- I've also set it up with a single output parameter, but that means
        -- I need to pass the long in by ref, and that's hideous to me.

    I'm trying to get NetTiers to generate a method with this signature:

        public long VisitorService.GetVisitorIDByVisitorGUID(GUID visitorGUID);

    Basically I want NetTiers to call ExecuteScalar instead of ExecuteReader. What am I doing wrong?
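
    For reference, RETURN can only carry an int status code, so pushing a bigint through it is a dead end regardless of the generator. The two conventional T-SQL shapes for handing back a single value are sketched below; the _Out and _Scalar name suffixes are mine, just to keep the variants apart, and whether NetTiers maps the second form to ExecuteScalar depends on its generator settings, which I can't confirm here:

        -- Option 1: OUTPUT parameter.
        CREATE PROCEDURE _Visitor_GetVisitorIDByVisitorGUID_Out
        (
            @VisitorGUID uniqueidentifier,
            @VisitorID bigint OUTPUT
        )
        AS
        SELECT @VisitorID = VisitorID
        FROM dbo.Visitor
        WHERE VisitorGUID = @VisitorGUID
        GO

        -- Option 2: a one-row, one-column result set, which is exactly what
        -- ExecuteScalar reads (the first column of the first row).
        CREATE PROCEDURE _Visitor_GetVisitorIDByVisitorGUID_Scalar
        (
            @VisitorGUID uniqueidentifier
        )
        AS
        SELECT TOP (1) VisitorID
        FROM dbo.Visitor
        WHERE VisitorGUID = @VisitorGUID
        GO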

    Read the article

< Previous Page | 1 2 3 4 5 6 7  | Next Page >