Search Results

Search found 34476 results on 1380 pages for 'sql blog'.


  • Performance-Driven Development

    - by BuckWoody
    I was reading a blog yesterday about the evils of SELECT *. The author pointed out that it's almost always a bad idea to use SELECT * for a query, but in the case of SQL Azure (or any cloud database, for that matter) it's especially bad, since you're paying for each transmission that comes down the line. A very good point indeed.

    This got me to thinking - shouldn't we treat ALL programming that way? In other words, wouldn't it make sense to pretend that we are paying for every chunk of data - a little less for a bit, a lot more for a BLOB or VARCHAR(MAX), that sort of thing? In effect, we really are paying for that.

    Which led me to the thought of Performance-Driven Development: the act of programming with the goal of having the fastest code from the very outset. This isn't an original title, since a quick Bing search shows me a couple of offerings from Forrester and a professional in Israel who have already used it, but the general idea I'm thinking of is assigning a "cost" to each code round-trip - network, storage, trip time and other variables - and then rewarding the developers who come up with the fastest code. I wonder what kind of throughput and round-trip times you could get if your developers were paid on a scale of how fast the application performed...
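    The cost difference is easy to sketch in SQL (the table and column names below are mine for illustration, not from the post):

        -- Pulls every column over the wire, including any wide VARCHAR(MAX) blobs
        SELECT * FROM dbo.CustomerNotes;

        -- Transmits only the bytes the caller actually needs
        SELECT CustomerID, NoteDate FROM dbo.CustomerNotes;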

  • Should we be able to deploy a single package to the SSIS Catalog?

    - by jamiet
    My buddy Sutha Thiru sent me an email recently asking my opinion on a particular nuance of the project deployment model in SQL Server Integration Services (SSIS) 2012, and I'd like to share my response as I think it warrants a wider discussion. Sutha asked:

    "Jamie, what is your take on this? http://www.mattmasson.com/index.php/2012/07/can-i-deploy-a-single-ssis-package-from-my-project-to-the-ssis-catalog/

    Overnight I was talking to Matt, who confirmed that they have no plans to change the deployment model. For example, if we have the following scenario, how do we deploy?

    Sprint 1: Pkg1, 2 & 3 have been developed and deployed to UAT. Once signed off, they have been deployed to Live.

    Sprint 2: Pkg 4 & 5 have been developed. During this time users raised a bug against Pkg2. We want to make the change to Pkg2 and deploy that to UAT, and eventually to Live, without releasing Pkg 4 & 5. How do we do it? Matt pointed me to his blog entry, which I have seen before: http://www.mattmasson.com/index.php/2012/02/thoughts-on-branching-strategies-for-ssis-projects/

    Thanks, Sutha"

    My response: Personally, even though I've experienced the exact problem you just outlined, I agree with the current approach. I steadfastly believe that there should not be a way for an unscrupulous developer to slide in a new version of a package under the covers. Deploying .ispac files brings a degree of rigour to your operational processes. Yes, that means that we as SSIS developers are going to have to get better at using source control and branching properly, but that is no bad thing in my opinion. Claiming to be proper "developers" is a bit of a cheap claim if we don't even do the fundamentals correctly.

    I would be interested in the thoughts of others who have used the project deployment model. Do you agree with my point of view?

    @Jamiet
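    For context, deployment in this model is all-or-nothing at the project level; a minimal T-SQL sketch of pushing an .ispac into the catalog (the folder, project and file names here are my assumptions) looks like this:

        -- Deploy the whole .ispac as a unit; there is no per-package variant of this call
        DECLARE @project_binary varbinary(max);
        DECLARE @operation_id bigint;

        SELECT @project_binary = BulkColumn
        FROM OPENROWSET(BULK N'C:\builds\MyProject.ispac', SINGLE_BLOB) AS ispac;

        EXEC [SSISDB].[catalog].[deploy_project]
            @folder_name    = N'UAT',
            @project_name   = N'MyProject',
            @project_stream = @project_binary,
            @operation_id   = @operation_id OUTPUT;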

  • Prevent SQL Injection in Dynamic column names

    - by Mr Shoubs
    I can't get away without writing some dynamic SQL conditions in a part of my system (using Postgres). My question is how best to avoid SQL injection with the method I am currently using.

    EDIT (Reasoning): There are many columns in a number of tables (a number which only grows, and which is maintained elsewhere). I need a method of allowing the user to decide which (predefined) column they want to query (and, if necessary, apply string functions to). The query itself is far too complex for the user to write themselves, nor do they have access to the db. There are thousands of users with varying requirements and I need to remain as flexible as possible - I shouldn't have to revisit the code unless the main query needs to change. Also, there is no way of knowing what conditions the user will need to use.

    I have objects (received via web service) that generate a condition (the generation method is below - it isn't perfect yet) for some large SQL queries. The _FieldName is user-editable (the parameter name was too, but it didn't need to be) and I am worried it could be an attack vector. I put double quotes (see quoted identifiers) around the field name in an attempt to sanitize the string; this way it can never be a keyword. I could also look up the field name against a list of fields, but that would be difficult to maintain on a timely basis. Unfortunately the user must enter the condition criteria. I am sure there must be more I can add to the sanitize method? And does quoting the column name make it safe? (My limited testing seems to think so.)

    An example built condition would be: "AND upper(brandloaded.make) like 'O%' and upper(brandloaded.make) not like 'OTHERBRAND'"

    Any help or suggestions are appreciated.

        Public Function GetCondition() As String
            Dim sb As New Text.StringBuilder
            'Put double quotes around the field name in an attempt to prevent some SQL injection
            'http://www.postgresql.org/docs/8.2/static/sql-syntax-lexical.html
            sb.AppendFormat(" {0} ""{1}"" ", _LogicOperator.ToString, _FieldName)
            Select Case _ConditionOperator
                Case ConditionOperatorOptions.Equals
                    sb.Append(" = ")
                ...
            End Select
            sb.AppendFormat(" {0} ", Me.UniqueParameterName) 'for the parameter
            Return Me.Sanitize(sb)
        End Function

        Private Function Sanitize(ByVal sb As Text.StringBuilder) As String
            'Compare against a similar blacklist mentioned here: http://forums.asp.net/t/1254125.aspx
            sb.Replace(";", "")
            sb.Replace("'", "")
            sb.Replace("\", "")
            sb.Replace(Chr(8), "")
            Return sb.ToString
        End Function

        Public ReadOnly Property UniqueParameterName() As String
            Get
                Return String.Concat(":", _UniqueIdentifier)
            End Get
        End Property
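    Since the question targets Postgres, two server-side hardening options come to mind; these are sketches of my own suggestion, not part of the original question:

        -- 1. Let Postgres quote the identifier itself; embedded quotes are doubled,
        --    so the value cannot break out of the quoted identifier
        SELECT quote_ident('evil"name');   -- returns: "evil""name"

        -- 2. Validate a user-supplied name against the catalogue instead of a hand-maintained list
        SELECT 1
        FROM information_schema.columns
        WHERE table_name = 'brandloaded'
          AND column_name = 'make';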

  • T-SQL generated from LINQ to SQL is missing a where clause

    - by Jimmy W
    I have extended some functionality on a DataContext object (called "CodeLookupAccessDataContext") such that the object exposes some methods to return the results of LINQ to SQL queries. Here are the methods I have defined:

        public List<CompositeSIDMap> lookupCompositeSIDMap(int regionId, int marketId)
        {
            var sidGroupId = CompositeSIDGroupMaps
                .Where(x => x.RegionID.Equals(regionId) && x.MarketID.Equals(marketId))
                .Select(x => x.CompositeSIDGroup);

            IEnumerator<int> sidGroupIdEnum = sidGroupId.GetEnumerator();
            if (sidGroupIdEnum.MoveNext())
                return lookupCodeInfo<CompositeSIDMap, CompositeSIDMap>(
                    x => x.CompositeSIDGroup.Equals(sidGroupIdEnum.Current), x => x);
            else
                return null;
        }

        private List<TResult> lookupCodeInfo<T, TResult>(Func<T, bool> compLambda, Func<T, TResult> selectLambda) where T : class
        {
            System.Data.Linq.Table<T> dataTable = this.GetTable<T>();

            var codeQueryResult = dataTable
                .Where(compLambda)
                .Select(selectLambda);

            List<TResult> codeList = new List<TResult>();
            foreach (TResult row in codeQueryResult)
                codeList.Add(row);
            return codeList;
        }

    CompositeSIDGroupMap and CompositeSIDMap are both tables in our database that are represented as objects in my DataContext object. I wrote the following code to call these methods and display the T-SQL generated:

        using (CodeLookupAccessDataContext codeLookup = new CodeLookupAccessDataContext())
        {
            codeLookup.Log = Console.Out;
            List<CompositeSIDMap> compList = codeLookup.lookupCompositeSIDMap(5, 3);
        }

    I got the following results in my log after invoking this code:

        SELECT [t0].[CompositeSIDGroup]
        FROM [dbo].[CompositeSIDGroupMap] AS [t0]
        WHERE ([t0].[RegionID] = @p0) AND ([t0].[MarketID] = @p1)
        -- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [5]
        -- @p1: Input Int (Size = 0; Prec = 0; Scale = 0) [3]
        -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.1

        SELECT [t0].[PK_CSM], [t0].[CompositeSIDGroup], [t0].[InputSID], [t0].[TargetSID], [t0].[StartOffset], [t0].[EndOffset], [t0].[Scale]
        FROM [dbo].[CompositeSIDMap] AS [t0]
        -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.1

    The first T-SQL statement contains a where clause as specified and returns one column, as expected. However, the second statement is missing a where clause and returns all columns, even though I did specify which rows I wanted to view and which columns were of interest. Why is the second T-SQL statement generated the way it is, and what should I do to ensure that the data is filtered according to specifications via the T-SQL? Also note that I would prefer to keep lookupCodeInfo(), and especially am interested in keeping it able to accept lambda functions for specifying which rows/columns to return.

  • BNF – how to read syntax?

    - by Piotr Rodak
    A few days ago I read a post by Jen McCown (blog) about her idea of blogging about random articles from Books Online. I think this is a great idea, even if Jen says that it's not exciting or sexy. I have noticed that many of the questions that appear on forums and other media arise from the pure fact that the people asking them didn't bother to read and understand the manual - Books Online. Jen came up with a brilliant, concise acronym that describes this category of posts about Books Online very well - RTFM365. I take the liberty of tagging this post with the same acronym.

    I often come across questions of the type - 'Hey, I am trying to create a table, but I am getting an error'. The error often says that the syntax is invalid.

        1 CREATE TABLE dbo.Employees
        2 (guid uniqueidentifier CONSTRAINT DEFAULT Guid_Default NEWSEQUENTIALID() ROWGUIDCOL,
        3 Employee_Name varchar(60)
        4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    The answer is usually(1), 'Ok, let me check it out... Ah yes - you have to put the name of the DEFAULT constraint before the type of constraint:

        1 CREATE TABLE dbo.Employees
        2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
        3 Employee_Name varchar(60)
        4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    Why do many people stumble on syntax errors? Is the syntax poorly documented? No. The issue is that the correct syntax of the CREATE TABLE statement is documented very well in Books Online and is... intimidating. Many people can be taken aback by the rather complex block of code that describes all the intricacies of the statement. However, I don't know of a better way of defining the syntax of a statement or command.

    The notation that is used to describe syntax in Books Online is a form of Backus-Naur notation, sometimes called BNF for short. This is a notation that was invented around 50 years ago - and some say even earlier, around 400 BC, would you believe? Originally it was used to define the syntax of the (now rather ancient) ALGOL programming language (in the 1950s, not in ancient India). If you look closer at the definition of BNF, it turns out that the principles of this notation are pretty simple. Here are a few bullet points:

    - italic_text is a placeholder for your identifier
    - <italic_text_in_angle_brackets> is a definition which is described further
    - [everything in square brackets] is optional
    - {everything in curly brackets} is obligatory
    - everything | separated | by | the pipe operator is an alternative
    - ::= "assigns" a definition to an identifier

    Yes, it looks like these six simple points give you the key to understanding even the most complicated syntax definitions in Books Online. Books Online contains an article about syntax conventions - have you ever read it? Let's have a look at a fragment of the CREATE TABLE statement:

        1 CREATE TABLE
        2     [ database_name . [ schema_name ] . | schema_name . ] table_name
        3     ( { <column_definition> | <computed_column_definition>
        4       | <column_set_definition> }
        5       [ <table_constraint> ] [ ,...n ] )
        6     [ ON { partition_scheme_name ( partition_column_name ) | filegroup
        7       | "default" } ]
        8     [ { TEXTIMAGE_ON { filegroup | "default" } ]
        9     [ FILESTREAM_ON { partition_scheme_name | filegroup
        10      | "default" } ]
        11    [ WITH ( <table_option> [ ,...n ] ) ]
        12    [ ; ]

    Let's look at line 2 of the above snippet. This line uses rules 3 and 5 from the list, so you know that you can specify the table's name in one of the following ways:

    - just the table name - the table will be created in the default user schema
    - schema name and table name - the table will be created in the specified schema
    - database name, schema name and table name - the table will be created in the specified database, in the specified schema
    - database name, .., table name - the table will be created in the specified database, in the default schema of the user
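    To make the four schemes in the list above concrete, here they are written out as statements (a quick sketch of my own - HR stands in for a hypothetical database name, and the column list is kept trivial):

        CREATE TABLE Employees (id int);          -- just the table name: default schema of the current user
        CREATE TABLE dbo.Employees (id int);      -- schema_name . table_name
        CREATE TABLE HR.dbo.Employees (id int);   -- database_name . schema_name . table_name
        CREATE TABLE HR..Employees (id int);      -- database_name .. table_name: default schema in database HR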
    Note that this single line of the notation describes each of the naming schemes in a deterministic way. The 'optionality' of the schema_name element is nested within the database_name section. You can use either database_name with an optional schema name, or just a schema name - this is specified by the pipe character '|'.

    The error that the user gets when executing the first script fragment in this post is as follows:

        Msg 156, Level 15, State 1, Line 2
        Incorrect syntax near the keyword 'DEFAULT'.

    Ok, let's have a look at how to find out the correct syntax. Line 3 of the BNF fragment above contains a reference to <column_definition>. Since column_definition is in angle brackets, we know that this is a reference to a notion described further in the definition. And indeed, the very next fragment of BNF contains the syntax of the column definition:

        1 <column_definition> ::=
        2 column_name <data_type>
        3     [ FILESTREAM ]
        4     [ COLLATE collation_name ]
        5     [ NULL | NOT NULL ]
        6     [
        7       [ CONSTRAINT constraint_name ] DEFAULT constant_expression ]
        8       | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ]
        9     ]
        10    [ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ]
        11    [ SPARSE ]

    Look at line 7 in the above fragment. It says that the column can have a DEFAULT constraint which, if you want to name it, has to be prepended with the CONSTRAINT constraint_name sequence. The name of the constraint is optional, but I strongly recommend that you make the effort of coming up with some meaningful name yourself. So the correct syntax of the CREATE TABLE statement from the beginning of the article is like this:

        1 CREATE TABLE dbo.Employees
        2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
        3 Employee_Name varchar(60)
        4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    That is practically everything you should know about BNF. I encourage you to study the syntax definitions for various statements and commands in Books Online - you can find really interesting things hidden there.

    (1) No, my answer usually is a question - 'What error message? What does it say?'. You'd be surprised to know how many people think I can go through time and space and look at their screen at the moment they received the error.

  • Using SQL Execution Plans to discover the Swedish alphabet

    - by Rob Farley
    SQL Server is quite remarkable in a bunch of ways. In this post, I'm using the way that the Query Optimizer handles LIKE to keep it SARGable, the Execution Plans that result, Collations, and PowerShell, to come up with the Swedish alphabet.

    SARGability is the ability to seek for items in an index according to a particular set of criteria. If you don't have SARGability in play, you need to scan the whole index (or table if you don't have an index). For example, I can find myself in the phonebook easily, because it's sorted by LastName and I can find Farley in there by moving to the Fs, and so on. I can't find everyone in my suburb easily, because the phonebook isn't sorted that way. I can't even find people who have six letters in their last name, because the book is sorted by LastName, not by LEN(LastName). This is all stuff I've looked at before, including in the talk I gave at SQLBits in October 2010.

    If I try to find everyone whose names start with F, I can do that using a query a bit like:

        SELECT LastName FROM dbo.PhoneBook WHERE LEFT(LastName,1) = 'F';

    Unfortunately, the Query Optimizer doesn't realise that all the entries that satisfy LEFT(LastName,1) = 'F' will be together, and it has to scan the whole table to find them. But if I write:

        SELECT LastName FROM dbo.PhoneBook WHERE LastName LIKE 'F%';

    then SQL is smart enough to understand this, and performs an Index Seek instead. To see why, I look further into the plan - in particular, the properties of the Index Seek operator. The ToolTip shows me what I'm after: you'll see that it does a Seek to find any entries that are at least F, but not yet G. There's an extra Predicate in there (a Residual Predicate if you like), which checks that each LastName is really LIKE F% - I suppose it doesn't consider that the Seek Predicate is quite enough - but most of the benefit is seen by its working out the Seek Predicate, filtering to just the "at least F but not yet G" section of the data.

    This got me curious though, particularly about where the G comes from, and whether I could leverage it to create the Swedish alphabet. I know that in the Swedish language there are three extra letters that appear at the end of the alphabet. One of them is ä, which appears in the word Västerås. It turns out that Västerås is quite hard to find in an index when you're looking it up in a Swedish map. I talked about this briefly in my five-minute talk on Collation from SQLPASS (the one which was slightly less than serious).

    So by looking at the plan, I can work out what the next letter is in the alphabet of the collation used by the column. In other words, if my alphabet were Swedish, I'd be able to tell what the next letter after F is - just in case it's not G. It turns out it is... Yes, the Swedish letter after F is G. But I worked this out by using a copy of my PhoneBook table that used the Finnish_Swedish_CI_AI collation. I couldn't find how the Query Optimizer calculates the G, and my friend Paul White (@SQL_Kiwi) tells me that it's frustratingly internal to the QO. He's particularly smart, even if he is from New Zealand.

    To investigate further, I decided to do some PowerShell, leveraging the Get-SqlPlan function that I blogged about recently (make sure you also have the SqlServerCmdletSnapin100 snap-in added). I started by indicating that I was going to use Finnish_Swedish_CI_AI as my collation of choice, and that I'd start with whichever letter came straight after the number 9.
    I figure that this is a cheat's way of guessing the first letter of the alphabet (but it doesn't actually work in Unicode - luckily I'm using varchar, not nvarchar. Actually, there are a few aspects of this code that only work using ASCII, so apologies if you were wanting to apply it to Greek, Japanese, etc.). I also initialised my $alphabet variable.

        $collation = 'Finnish_Swedish_CI_AI';
        $firstletter = '9';
        $alphabet = '';

    Now I created the table for my test. A single field would do, and putting a Clustered Index on it would suffice for the Seeks.

        Invoke-Sqlcmd -server . -data tempdb -query "create table dbo.collation_test (col varchar(10) collate $collation primary key);"

    Now I get into the looping.

        $c = $firstletter;
        $stillgoing = $true;
        while ($stillgoing)
        {

    I construct the query I want, seeking for entries which start with whatever $c has reached, and get the plan for it:

            $query = "select col from dbo.collation_test where col like '$($c)%';";
            [xml] $pl = get-sqlplan $query "." "tempdb";

    At this point, my $pl variable is a scary piece of XML, representing the execution plan. A bit of hunting through it showed me that the EndRange element contained what I was after, and that if it contained NULL, then I was done.

            $stillgoing = ($pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange -ne $null);

    Now I could grab the value out of it (which came with apostrophes that needed stripping), and append that to my $alphabet variable.

            if ($stillgoing)
            {
                $c = $pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange.RangeExpressions.ScalarOperator.ScalarString.Replace("'","");
                $alphabet += $c;
            }

    Finally, finishing the loop, dropping the table, and showing my alphabet!

        }
        Invoke-Sqlcmd -server . -data tempdb -query "drop table dbo.collation_test;";
        $alphabet;

    When I run all this, I see that the Swedish alphabet is ABCDEFGHIJKLMNOPQRSTUVXYZÅÄÖ, which matches what I see at Wikipedia. Interesting to see that the letters on the end are still there, even with Case Insensitivity. Turns out they're not just "letters with accents", they're letters in their own right.

    I'm sure you gave up reading long ago, and really aren't that fazed about the idea of doing this using PowerShell. I chose PowerShell because I'd already come up with an easy way of grabbing the estimated plan for a query, and PowerShell does allow for easy navigation of XML. I find the most interesting aspect of this to be the fact that the Query Optimizer uses the next letter of the alphabet to maintain the SARGability of LIKE. I'm hoping they do something similar for a whole bunch of operations. Oh, and the fact that you now know how to find stuff in the IKEA catalogue.

    Footnote: If you are interested in whether this works in other languages, you might want to consider the following screenshot, which shows that in principle it should work with Japanese. It might be a bit harder to run this in PowerShell though, as I'm not sure how it translates. In Hiragana, the Japanese alphabet starts あ, い, う, え, お, ...

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic.

    My Approach to Data Mining Projects

    It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer's data in advance. I don't know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest starting with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer's site - either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question and the meaning of the data at hand, while the IT expert should be familiar with the structure of the data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert, the most time-consuming work - namely data preparation and overview - can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project.

    If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach, as it offers the establishment of a common background for all people involved, an understanding of how the algorithms work, an understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more.

    Once the data has been extracted, the customer's SME (i.e. the analyst) and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of the input data.

    I favor the customer's decision to assign additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly to the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems.

    Simultaneously with the data preparation, the data overview is performed. The logic behind this task is the same - again I show the teams involved the expected results, how to achieve them, and what they mean. This is also done in multiple cycles, as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance - it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks.
    The presence of the customer's SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of project without the customer's SME.

    After the data preparation, and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models, and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer's personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube.

    After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities for deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers.

    Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time on a single 5 to 10 day project:

    - The customer learns the basic patterns of frauds and fraud detection
    - The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems
    - The customer's analysts learn how to perform much more in-depth analyses than they ever thought possible
    - The customer's IT experts learn how to perform data extraction and preparation much more efficiently than they did before
    - All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
    - The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution
    - It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise
    - Typically, the project results in several important "side effects":
      - Improved data quality
      - Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization
      - Because eventually more minds get to be involved in the enterprise, the company should expect more and better fraud detection patterns

    After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge onto the customer's employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.
    I usually expect to perform the following tasks:

    - Establish the final infrastructure to measure the efficiency of the deployed models
    - Deploy the models in additional scenarios:
      - Through reports
      - By including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings
      - Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project
      - Create smart ETL applications that divert suspicious data for immediate or later inspection
    - I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary whereas a central system is available as well
    - Of course, for the actual project, I would repeat the data and model preparation as needed

    It is virtually impossible to tell in advance how much time the deployment would take, before we decide together with the customer what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take an additional 5 to 10 days. The approximate timeline for the POC project is as follows:

    - 1-2 days of training
    - 2-3 days for data preparation and data overview
    - 2 days for creating and evaluating the models
    - 1 day for initial preparation of the continuous learning infrastructure
    - 1 day for presentation of the results and discussion of further actions

    Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one hour more on data preparation, or created just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, according to my experience, is to restrict the time spent on the project in advance, after an agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improving the models and predictions over time, without the need for constant engagement with me.

  • That’s a wrap! Almost: there’s still one last chance to attend a SQL in the City event in 2012

    - by Red and the Community
    The communities team are back from the SQL in the City multi-city US tour, and we are delighted to have met so many happy SQL Server professionals and Red Gate customers. We set out to run a series of back-to-back events in order to meet, talk to and delight as many SQL Server and Red Gate enthusiasts as possible in 5 different cities in 11 days. We did it! The attendees had a good time too, and 99% of them would attend another SQL in the City event in 2013 - so it seems we left an impression.

    There were a range of topics on the event agenda, including 'The Whys & Hows of Continuous Integration', 'Database Maintenance Essentials', 'Red Gate Tools - The Complete Lifecycle', 'Automated Deployment: Application and Database Releases Without the Headache', 'The Ten Commandments of SQL Server Monitoring' and many more. Videos and slides from the events will be posted to the event website in November, after our last event of 2012.

    SQL in the City Seattle - November 5

    Join us for free and hear from some of the very best names in the SQL Server world. SQL Server MVPs such as Steve Jones, Grant Fritchey, Brent Ozar, Gail Shaw and more will be presenting at the Bell Harbor conference center for one day only. We're even taking on board some of the recent attendee suggestions for how we can improve the events (feedback from the 65% of attendees who came to our US tour events). First off, we're extending the drinks celebration in the evening! Rather than just a 30-minute drink and run, attendees will have up to 2 hours to enjoy free drinks, relax and network in a fantastic environment amongst some really smart, like-minded professionals.

    If you're interested in expanding your SQL Server knowledge, or would like to learn more about Red Gate tools, get yourself registered for the last SQL in the City event of 2012. It's free, fun and we're very friendly! I look forward to seeing you in Seattle on Monday November 5.

    Cheers,
    Annabel.

  • "Executing SQL directly; no cursor" error when using SCOPE_IDENTITY/IDENT_CURRENT

    - by Chris
    There wasn't much on Google about this error, so I'm asking here. I'm switching a PHP web application from using MySQL to SQL Server 2008 (using ODBC, not php_mssql). Running queries or anything else isn't a problem, but when I try to get SCOPE_IDENTITY (or use any similar function), I get the error "Executing SQL directly; no cursor". I'm doing this immediately after an insert, so it should still be in scope. Running the same insert statement and then querying for the insert ID works fine in SQL Server Management Studio. Here's my code right now (everything else in the database wrapper class works fine for other queries, so I'll assume it isn't relevant right now):

        function insert_id(){
            return $this->query_first("SELECT SCOPE_IDENTITY() as insert_id");
        }

    query_first being a function that returns the first result from the first field of a query (basically the equivalent of execute_scalar() on .NET). The full error message:

        Warning: odbc_exec() [function.odbc-exec]: SQL error: [Microsoft][SQL Server Native Client 10.0][SQL Server]Executing SQL directly; no cursor., SQL state 01000 in SQLExecDirect in C:[...]\Database_MSSQL.php on line 110
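    For reference, this is the pattern the question reports working in Management Studio - the insert and the identity read issued in the same batch and scope (dbo.Orders and its column are hypothetical names of mine):

        -- Both statements run in one batch, so SCOPE_IDENTITY() still sees the insert's scope
        INSERT INTO dbo.Orders (CustomerName) VALUES (N'test');
        SELECT SCOPE_IDENTITY() AS insert_id;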

  • How to call SQL Function with multiple parameters from C# web page

    - by Marshall
    I have an MS SQL function that is called with the following syntax:

        SELECT Field1, COUNT(*) AS RecordCount
        FROM GetDecileTable('WHERE ClientID = 7 AND LocationName = ''Default'' ', 10)

    The first parameter passes a specific WHERE clause that is used by the function for one of the internal queries. When I call this function in the front-end C# page, I need to send parameter values for the individual fields inside of the WHERE clause (in this example, both the ClientID & LocationName fields). The current C# code looks like this:

        String SQLText = "SELECT Field1, COUNT(*) AS RecordCount FROM GetDecileTable('WHERE ClientID = @ClientID AND LocationName = @LocationName ', 10)";
        SqlCommand Cmd = new SqlCommand(SQLText, SqlConnection);
        Cmd.Parameters.Add("@ClientID", SqlDbType.Int).Value = 7; // Insert real ClientID
        Cmd.Parameters.Add("@LocationName", SqlDbType.NVarChar, 20).Value = "Default"; // Real code uses Location Name from user input
        SqlDataReader reader = Cmd.ExecuteReader();

    When I do this, I get the following code from SQL Profiler:

        exec sp_executesql N'SELECT Field1, COUNT(*) as RecordCount FROM GetDecileTable (''WHERE ClientID = @ClientID AND LocationName = @LocationName '',10)',
        N'@ClientID int,@LocationID nvarchar(20)', @ClientID=7,@LocationName=N'Default'

    When this executes, SQL throws an error that it cannot parse past the first mention of @ClientID, stating that the scalar variable @ClientID must be defined. If I modify the code to declare the variables first (see below), then I receive an error at the second mention of @ClientID that the variable already exists:

        exec sp_executesql N'DECLARE @ClientID int; DECLARE @LocationName nvarchar(20);
        SELECT Field1, COUNT(*) as RecordCount FROM GetDecileTable (''WHERE ClientID = @ClientID AND LocationName = @LocationName '',10)',
        N'@ClientID int,@LocationName nvarchar(20)', @ClientID=7,@LocationName=N'Default'

    I know that this method of adding parameters and calling SQL code from C# works well when I am selecting data from tables, but I am not sure how to embed parameters inside of the quote marks for the embedded WHERE clause being passed to the function. Any ideas?
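    The failure mode can be sketched in plain T-SQL: a parameter marker that appears inside a string literal is never substituted - it is just text until that text is later executed, by which point the parameter is out of scope. A minimal illustration of my own:

        DECLARE @ClientID int;
        SET @ClientID = 7;
        SELECT 'WHERE ClientID = @ClientID' AS embedded_text;  -- returns the literal text; @ClientID is not expanded
        SELECT @ClientID AS real_parameter;                    -- substitution happens only outside the quotes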

  • WordPress.com to self-hosted WordPress blog

    - by sAc
    Hello, I have been writing articles on a wordpress.com blog; now I am looking to move to a self-hosted WordPress blog, but I wonder:

    1) Should I move all my articles to the new blog, or just put a post on my old blog saying that more articles will be posted on the new one?

    2) If I move all my articles to the new blog, I am not sure how Google will react, because there are articles with a good number of visitors. Won't this be SEO-unfriendly? I am not sure whether Google will re-create the page reputation and so on, or whether those articles will keep the same popularity even if I move them elsewhere.

    3) What are the implications and side-effects of moving from a wordpress.com blog to a self-hosted WordPress blog?

    Thanks

  • Remote SQL Server connection failure

    - by Sevki
    I am trying to connect to my MS SQL Server 2008 web instance and I'm failing horribly... I get error 26, and before you jump on me, I have already done the following:

    - Checked the spelling of the SQL Server instance name that is specified in the connection string.
    - Used the SQL Server Surface Area Configuration tool to enable SQL Server to accept remote connections over the TCP or named pipes protocols. (For more information about the SQL Server Surface Area Configuration tool, see Surface Area Configuration for Services and Connections.)
    - Made sure that I have configured the firewall on the server instance of SQL Server to open ports for SQL Server and the SQL Server Browser port (UDP 1434).
    - Made sure that the SQL Server Browser service is started on the server.

    In addition to these, I have disabled the firewall completely and tried other ports. Nothing works. The same credentials work on the server but not on the client. This is the exact error message:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (.Net SqlClient Data Provider)

    Can anybody help?

  • Problem building the DotNetNuke blog module

    - by GeminiDNK
    I have DNN 5.2 running on my machine with the blog module installed, and the blog module is also reflected in my development environment. But I want to find out how to actually develop, or build, the blog module from source, so that I can output the .dll file instead of just running it. If I import the Blog project file from the folder, it gives me a host of errors, like ".resx file not found" and so on. Does any DotNetNuke expert out there know about this, or have you encountered this problem before? If not, how do you actually make the module work so you can make changes to it? I'm talking about core logic changes, like adding functions, not just skin or UI changes.

  • FotoJeff.com | My Photography Blog

    - by Jeff Julian
    I have recently been doing more photography, and Flickr was only allowing me to do so much with the images and displaying them: no customization of the skin, no page grouping, no post-like pages. So I decided to host a WordPress blog for my images. I really wanted to try WordPress to see what features single-hosted blog products offer that our multiple-hosted blog system could take advantage of. So far the product is very cool; I can see how such a large developer network helps produce such cool "apps" for WP. The product makes it very easy to make changes to your hosted environment that would be a little scary for a multiple-blog host. I still need to compare features with their hosted solution.

    Anywho, FotoJeff.com is my new photography blog home. I have been working with the Kansas City Rescue Mission a lot lately, so most of my shots are for them.

    FotoJeff.com - Photography Blog of Jeff Julian

    My hope is to make this blog my technology blog again, Staff of Geeks our Geekswithblogs.net announcement blog, and FotoJeff.com my photography blog. I need to start dogfooding my thoughts on blogging and keep the noise down (by making more noise with this post :D).

  • SQL Server installation error 0x84B40000

    - by Kurtevich
    I have a problem installing SQL Server 2008 R2. A long time ago I had it installed, and then uninstalled it. It was left in "Add/remove programs", but I didn't pay attention to that. I had 2005 installed. Now there is a need to install 2008, so I removed 2005 and started installing 2008, but the setup said that the space on C: was not enough. That's when I found out that "Add/remove programs" showed it occupying more than 4 gigabytes, though I had uninstalled it. So I clicked "Remove"; it showed all those many screens and validations and reported that removal completed, but the size of the Program Files folder was still more than 4 GB. I removed (from "Add/remove programs") everything that had "SQL Server" in its name, but the main "SQL Server 2008" item was still there, still 4 GB, and uninstalling it did nothing.

    Because the installation of SQL Server did not show existing instances, and I didn't see any running services related to SQL Server (well, almost any - more details at the end), I thought that this folder contained just some leftover stuff and data, and deleted it manually. Then I agreed to the removal of the item in "Add/remove programs", and everything looked clean. Now every time I try to install SQL Server (even in the minimum configuration), I receive the following error:

        SQL Server Setup has encountered the following error: The specified credentials that were provided for the SQL Server service are not valid. To continue, provide a valid account and password for the SQL Server service. Error code 0x84B40000.

    What is the service mentioned here? This error looks like I'm trying to add features to an existing server and it can't log in. But the setup didn't ask me for any credentials, except one username that couldn't be changed. Here are the services shown that could be related, both disabled and pointing to non-existing executables:

    - SQL Active Directory Helper Service
    - SQL Full-text Filter Daemon Launcher (MSSQLSERVER)

    I understand that this must be because of my manual deletion, but is there a way to clean it up now?

  • SQL Server 2008 - Editing Tables: Bit columns require 'True' or 'False'

    - by CJM
    Not so much a question as an observation... I'm just upgrading to SQL Server 2008 on my development machine in anticipation of upgrading my live applications. I didn't anticipate any problems, since [I think] I generally use standard T-SQL, probably not too far from ANSI-standard SQL. So far so good, but I was really thrown by a very simple change: I was creating a simple, small look-up table to store a list of codes, including a bit column to indicate the current default code. But when I used the new/modified 'Edit Top 200 Rows' option and entered my 0s and 1s in the bit column, I got an error: 'Invalid value for cell - String was not recognised as a valid boolean'.

    After a bit of head-scratching, I tried True and False - and they worked. So it seems this new Edit feature requires 4 or 5 characters to be typed rather than the previous 1. Checking further, we can still use "...where bitval = 1", but can now also use "...where bitval = 'true'". Any results returned still render these bit columns as 0 or 1, though.

    It all sounds like half a step backwards. Not the end of the world, but an unnecessary annoyance. Does anybody have any insight on this issue? Are there any other new gotchas with SQL Server 2008?
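    Both comparison forms mentioned above are valid T-SQL against a bit column; a minimal sketch (dbo.Codes and IsDefault are hypothetical names of mine):

        SELECT * FROM dbo.Codes WHERE IsDefault = 1;        -- numeric comparison, as before
        SELECT * FROM dbo.Codes WHERE IsDefault = 'true';   -- the string 'true' is implicitly converted to bit 1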

  • SQL Developer: Describe versus Ctrl+Click to Open Database Objects

    - by thatjeffsmith
    In yesterday's post I talked about how you could use SQL Developer's Describe (SHIFT+F4) to open a PL/SQL package at your cursor. You might get an error if you try to describe this...

    If you actually try to describe the package as you see it in the above screenshot, you'll get an error: Doh! I neglected to say in yesterday's post that I was highlighting the package name before I hit SHIFT+F4. This works just fine, but it will work even better in our next release, as we've fixed this issue. Until then, you can also try Ctrl+Hover with your mouse. For PL/SQL calls you can open the source immediately, based on what you're hovering over with your mouse cursor. You could try this with "dbms_output.put_line(" too.

    Ctrl+Click - it's not just for PL/SQL

    If you don't like the floating describe windows you get when you do a SHIFT+F4 on a database object, Ctrl+Click will work too. Instead of opening a normal 'hover' panel, you'll be taken directly to the object editor for that table, view, etc. Go ahead and try it right now. Paste this into your worksheet, then Ctrl+Click with your mouse over the table name:

        select * from scott.emp

    And now you know, the rest of the story.

  • LINQ-to-SQL IN/Contains() for Nullable<T>

    - by Craig Walker
    I want to generate this SQL statement in LINQ:

        select * from Foo where Value in ( 1, 2, 3 )

    The tricky bit seems to be that Value is a column that allows nulls. The equivalent LINQ code would seem to be:

        IEnumerable<Foo> foos = MyDataContext.Foos;
        IEnumerable<int> values = GetMyValues();
        var myFoos = from foo in foos
                     where values.Contains(foo.Value)
                     select foo;

    This, of course, doesn't compile, since foo.Value is an int? and values is typed to int. I've tried this:

        IEnumerable<Foo> foos = MyDataContext.Foos;
        IEnumerable<int> values = GetMyValues();
        IEnumerable<int?> nullables = values.Select(value => new Nullable<int>(value));
        var myFoos = from foo in foos
                     where nullables.Contains(foo.Value)
                     select foo;

    ...and this:

        IEnumerable<Foo> foos = MyDataContext.Foos;
        IEnumerable<int> values = GetMyValues();
        var myFoos = from foo in foos
                     where values.Contains(foo.Value.Value)
                     select foo;

    Both of these versions give me the results I expect, but they do not generate the SQL I want. It appears that they're generating full-table results and then doing the Contains() filtering in memory (i.e. in plain LINQ, not LINQ to SQL); there's no IN clause in the DataContext log. Is there a way to generate a SQL IN for Nullable types?

  • Referencing SQL Server 2008 R2 SMO from Visual Studio 2010

    - by user69508
    Hello. We have read a number of things about referencing SQL Server SMO from Visual Studio, but still don't have the definite answers we need. So, here it goes...

    A number of years ago we created a C# application using Visual Studio 2005 and SQL Server 2005. In that application, we added .NET references to a number of SQL Server SMO assemblies, and everything worked fine. Those references were:

    - Microsoft.SqlServer.ConnectionInfo (GAC, 9.0.242.0)
    - Microsoft.SqlServer.Smo (GAC, 9.0.242.0)
    - Microsoft.SqlServer.SqlEnum (GAC, 9.0.242.0)

    We have now migrated to Visual Studio 2010 and SQL Server 2008 R2. However, when we try to reference those same SMO objects for SQL Server 2008 R2, they don't appear in the .NET references tab. We want to reference the SQL Server 2008 R2 version of those same SMO assemblies for our upgraded C# application. On our development machines, we have SQL Server 2008 R2 Developer installed with all options, including the SDK, such that the assemblies are found in C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies. So, my first questions are:

    1. Are we supposed to add file references to the SMO assemblies instead of .NET references in Visual Studio 2010 with SQL Server 2008 R2?
    2. Or is there some problem with our development machines such that the SMO assemblies are not appearing in the .NET references tab?

    Next, our production machines will have SQL Server 2008 R2 Workgroup installed with the client tools option selected, thus providing those same SMO assemblies in C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies. So, the next questions are:

    3. When we release to production, are we supposed to redistribute the SMO assemblies with our application?
    4. Or will our application work on the production servers without redistributing the SMO assemblies (since the client tools/SMO assemblies have been installed)?
    5. What else?????

    Thanks for the help!

  • SQL 2008 SQLDMO alternative

    - by alexdelpiero
    Hi! I was previously using SQL-DMO to automatically generate scripts from the database. Now I have upgraded to SQL Server 2008, and I don't want to use this feature anymore, since Microsoft will be dropping it. Is there any other alternative I can use to connect to a server and generate scripts automatically from a database? Any answer is welcome. Thanks in advance. This is the procedure I was previously using:

        CREATE PROC GenerateSP (
            @server varchar(30) = null,
            @uname varchar(30) = null,
            @pwd varchar(30) = null,
            @dbname varchar(30) = null,
            @filename varchar(200) = 'c:\script.sql'
        )
        AS
        DECLARE @object int
        DECLARE @hr int
        DECLARE @return varchar(200)
        DECLARE @exec_str varchar(2000)
        DECLARE @spname sysname

        SET NOCOUNT ON

        -- Sets the server to the local server
        IF @server is NULL SELECT @server = @@servername
        -- Sets the database to the current database
        IF @dbname is NULL SELECT @dbname = db_name()
        -- Sets the username to the current user name
        IF @uname is NULL SELECT @uname = SYSTEM_USER

        -- Create an object that points to the SQL Server
        EXEC @hr = sp_OACreate 'SQLDMO.SQLServer', @object OUT
        IF @hr <> 0
        BEGIN
            PRINT 'error create SQLOLE.SQLServer'
            RETURN
        END

        -- Connect to the SQL Server
        IF @pwd is NULL
        BEGIN
            EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname
            IF @hr <> 0
            BEGIN
                PRINT 'error Connect'
                RETURN
            END
        END
        ELSE
        BEGIN
            EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname, @pwd
            IF @hr <> 0
            BEGIN
                PRINT 'error Connect'
                RETURN
            END
        END

        -- Verify the connection
        EXEC @hr = sp_OAMethod @object, 'VerifyConnection', @return OUT
        IF @hr <> 0
        BEGIN
            PRINT 'error VerifyConnection'
            RETURN
        END

        SET @exec_str = 'DECLARE script_cursor CURSOR FOR SELECT name FROM ' + @dbname + '..sysobjects WHERE type = ''P'' ORDER BY Name'
        EXEC (@exec_str)

        OPEN script_cursor
        FETCH NEXT FROM script_cursor INTO @spname
        WHILE (@@fetch_status <> -1)
        BEGIN
            SET @exec_str = 'Databases("' + @dbname + '").StoredProcedures("' + RTRIM(UPPER(@spname)) + '").Script(74077,"' + @filename + '")'
            EXEC @hr = sp_OAMethod @object, @exec_str, @return OUT
            IF @hr <> 0
            BEGIN
                PRINT 'error Script'
                RETURN
            END
            FETCH NEXT FROM script_cursor INTO @spname
        END
        CLOSE script_cursor
        DEALLOCATE script_cursor

        -- Destroy the object
        EXEC @hr = sp_OADestroy @object
        IF @hr <> 0
        BEGIN
            PRINT 'error destroy object'
            RETURN
        END
        GO

  • Linq to SQL with INSTEAD OF Trigger and an Identity Column

    - by Bob Horn
    I need to use the clock on my SQL Server to write a time to one of my tables, so I thought I'd just use GETDATE(). The problem is that I'm getting an error because of my INSTEAD OF trigger. Is there a way to set one column to GETDATE() when another column is an identity column? This is the LINQ to SQL:

        internal void LogProcessPoint(WorkflowCreated workflowCreated, int processCode)
        {
            ProcessLoggingRecord processLoggingRecord = new ProcessLoggingRecord()
            {
                ProcessCode = processCode,
                SubId = workflowCreated.SubId,
                EventTime = DateTime.Now // I don't care what this is. SQL Server will use GETDATE() instead.
            };

            this.Database.Add<ProcessLoggingRecord>(processLoggingRecord);
        }

    This is the table. EventTime is what I want to have as GETDATE(). I don't want the column to be null. And here is the trigger:

        ALTER TRIGGER [Master].[ProcessLoggingEventTimeTrigger]
        ON [Master].[ProcessLogging]
        INSTEAD OF INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            SET IDENTITY_INSERT [Master].[ProcessLogging] ON;

            INSERT INTO ProcessLogging (ProcessLoggingId, ProcessCode, SubId, EventTime, LastModifiedUser)
            SELECT ProcessLoggingId, ProcessCode, SubId, GETDATE(), LastModifiedUser
            FROM inserted

            SET IDENTITY_INSERT [Master].[ProcessLogging] OFF;
        END

    Without getting into all of the variations I've tried, this last attempt produces this error:

        InvalidOperationException: Member AutoSync failure. For members to be AutoSynced after insert, the type must either have an auto-generated identity, or a key that is not modified by the database after insert.

    I could remove EventTime from my entity, but I don't want to do that. If it were gone, though, then it would be NULL during the INSERT and GETDATE() would be used. Is there a way that I can simply use GETDATE() on the EventTime column for INSERTs?

    Note: I do not want to use C#'s DateTime.Now, for two reasons: 1. One of these inserts is generated by SQL Server itself (from another stored procedure). 2. Times can be different on different machines, and I'd like to know exactly how fast my processes are happening.
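    One commonly used alternative - my suggestion, not something from the thread - is to give the column a default bound to the server clock, so that neither the client nor a trigger has to supply EventTime (LINQ to SQL would then need the column configured so that it is not sent on insert):

        ALTER TABLE [Master].[ProcessLogging]
            ADD CONSTRAINT DF_ProcessLogging_EventTime DEFAULT GETDATE() FOR EventTime;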

  • MS SQL datetime precision problem

    - by Nailuj
    I have a situation where two persons might work on the same order (stored in an MS SQL database) from two different computers. To prevent data loss in the case where one would save his copy of the order first, and then a little later the second would save his copy and overwrite the first, I've added a check against the LastSaved field (datetime) before saving. The code looks roughly like this:

        private bool orderIsChangedByOtherUser(Order localOrderCopy)
        {
            // Look up a fresh version of the order from the DB
            Order databaseOrder = orderService.GetByOrderId(localOrderCopy.Id);

            if (databaseOrder != null
                && databaseOrder.LastSaved > localOrderCopy.LastSaved)
            {
                return true;
            }
            else
            {
                return false;
            }
        }

    This works most of the time, but I have found one small bug. If orderIsChangedByOtherUser returns false, the local copy will have its LastSaved updated to the current time and then be persisted to the database. The value of LastSaved in the local copy and the DB should now be the same. However, if orderIsChangedByOtherUser is run again, it sometimes returns true even though no other user has made changes to the DB. When debugging in Visual Studio, databaseOrder.LastSaved and localOrderCopy.LastSaved appear to have the same value, but when looking closer they sometimes differ by a few milliseconds.

    I found this article with a short notice on the millisecond precision for datetime in SQL: "Another problem is that SQL Server stores DATETIME with a precision of 3.33 milliseconds (0.00333 seconds)."

    The solution I could think of for this problem is to compare the two datetimes and consider them equal if they differ by less than, say, 10 milliseconds. My question to you is then: are there any better/safer ways to compare two datetime values in MS SQL to see if they are exactly the same?
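    The tolerance comparison proposed above can be expressed directly in T-SQL; a sketch (the sample values are mine, the 10 ms threshold is the question's own):

        DECLARE @a datetime, @b datetime;
        SET @a = '2010-01-01T12:00:00.000';
        SET @b = '2010-01-01T12:00:00.003';

        -- Treat the two datetimes as equal when they differ by less than 10 ms
        SELECT CASE WHEN ABS(DATEDIFF(MILLISECOND, @a, @b)) < 10
                    THEN 'unchanged' ELSE 'changed by another user' END AS verdict;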

  • INSTEAD OF trigger in SQL Server - loses SCOPE_IDENTITY?

    - by kastermester
    Hey StackOverflow, I am (once again) having some issues with some SQL. I have a table on which I have created an INSTEAD OF trigger to enforce some business rules (the rules are not really important). This works as intended. My issue is that now, when inserting data into this table, SCOPE_IDENTITY() returns a NULL value rather than the actual inserted identity. My guess is that this is because it is now out of scope - but then how do I get it in scope? I am using SQL Server 2008. Per request, here's the SQL:

    Insert + scope code:

        INSERT INTO [dbo].[Payment]([DateFrom], [DateTo], [CustomerId], [AdminId])
        VALUES ('2009-01-20', '2009-01-31', 6, 1)

        SELECT SCOPE_IDENTITY()

    Trigger:

        CREATE TRIGGER [dbo].[TR_Payments_Insert]
        ON [dbo].[Payment]
        INSTEAD OF INSERT
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            IF NOT EXISTS(SELECT 1 FROM dbo.Payment p
                          INNER JOIN Inserted i ON p.CustomerId = i.CustomerId
                          WHERE (i.DateFrom >= p.DateFrom AND i.DateFrom <= p.DateTo)
                             OR (i.DateTo >= p.DateFrom AND i.DateTo <= p.DateTo))
            AND NOT EXISTS (SELECT 1 FROM Inserted p
                            INNER JOIN Inserted i ON p.CustomerId = i.CustomerId
                            WHERE (i.DateFrom <> p.DateFrom AND i.DateTo <> p.DateTo)
                              AND ((i.DateFrom >= p.DateFrom AND i.DateFrom <= p.DateTo)
                                OR (i.DateTo >= p.DateFrom AND i.DateTo <= p.DateTo)))
            BEGIN
                INSERT INTO dbo.Payment (DateFrom, DateTo, CustomerId, AdminId)
                SELECT DateFrom, DateTo, CustomerId, AdminId
                FROM Inserted
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION
            END
        END

    The code worked before the creation of this trigger. Also, I am using LINQ to SQL in C#, and as far as I can see, I have no way of changing SCOPE_IDENTITY to @@IDENTITY - is there really no way out of this one?

  • How to set minimum SQL Server resource allocation for a database?

    - by Jeff Widmer
    Over the past Christmas holiday week, when the website I work on was experiencing very low traffic, we saw several "Request timed out" exceptions (one on each of 12/26, 12/28, 12/29, and 12/30) on several pages that require user authentication. We rarely saw "Request timed out" exceptions prior to this very low-traffic week. We believe the timeouts were due to the database the site uses being "spun down" on the SQL Server and taking longer to spin up when a request came in.

    There are two databases on the SQL Server (SQL Server 2005): one specifically for this application, and the other for the public-facing website and for authentication. So in the case where users were not logged into the application (which definitely could have been the case for several hours at a time over Christmas week), the application database probably received no requests. We think that at this point SQL Server reallocated resources to the other database, and then when a request came in, extra time was needed to spin up the application database, and the timeout occurred.

    Is there a way to tell SQL Server to give a minimum amount of resources to a database at all times?
