Search Results

Search found 31328 results on 1254 pages for 'sql join'.

  • T-SQL Dynamic SQL and Temp Tables

    - by George
    It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQL in the same stored procedure. However, I can reference a temp table created by a dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed.

    A simple 2-table scenario: I have 2 tables, Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key identifying the parent Order. An Order can have 1 to n Items.

    I want to provide a very flexible "query builder" type interface that allows the user to select which Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Order table. If an Item meets the filter condition, including any condition on the parent Order if one exists, the Item should be returned in the query as well as the parent Order.

    Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree. The first reason is that I need to query all of the columns in the parent Order table, and if I did a single query joining Orders to Items, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use them to populate custom Order and child Items client objects. (I don't know enough about LINQ or Entity Framework yet; I build my objects by hand.) The second reason I would like to return two tables instead of one is that I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I can reuse the client code that populates my custom Order and Items objects from the 2 DataTables returned.

    What I was hoping to do was this: construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table, as specified by the custom filter created in the WinForms fat-client app. The SQL built on the client would have looked something like this:

        TempSQL = "
            INSERT INTO #ItemsToQuery (OrderId, ItemId)
            SELECT Orders.OrderId, Items.ItemId
            FROM Orders, Items
            WHERE Orders.OrderId = Items.OrderId
            AND /* some unpredictable Order filters go here */
            AND /* some unpredictable Items filters go here */ "

    Then, I would call a stored procedure:

        CREATE PROCEDURE GetItemsAndOrders(@tempSql AS text)
        AS
            EXECUTE (@tempSql)  -- to create and populate the #ItemsToQuery table
            SELECT * FROM Items WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)
            SELECT * FROM Orders WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)

    The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQL statements, and if I change the static SQL to dynamic, no results are passed back to the fat client.
    Three workarounds come to mind, but I'm looking for a better one:

    1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using static SQL which, because it was not created by dynamic SQL, could then be queried without issue. (I could also investigate passing the new table type parameter instead of XML.) However, I would like to avoid passing potentially large lists up to a stored procedure.

    2) I could perform all the queries from the client. The first would be something like this:

        SELECT Items.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
        SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)

    This still gives me the ability to reuse my client-side object-population code, because the Orders and Items continue to be returned in two different tables.

    I have a feeling, too, that I might have some options using a table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon-feeding on that one. If you have even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how best to accomplish this.
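    One hedged sketch of a possible fix (not from the question itself): temp table scoping works the other way around, so a #table created with static SQL inside the procedure is visible to dynamic SQL executed afterwards, though not vice versa. Assuming the column layout implied above, and using nvarchar(max) rather than text for the parameter (an assumption), the procedure could create #ItemsToQuery itself and let the dynamic string only populate it:

        CREATE PROCEDURE GetItemsAndOrders (@tempSql nvarchar(max))
        AS
        BEGIN
            -- Created by static SQL, so it is visible both to the EXECUTE below
            -- and to the static SELECTs that follow.
            CREATE TABLE #ItemsToQuery (OrderId int NOT NULL, ItemId int NOT NULL);

            -- @tempSql is expected to contain only the INSERT INTO #ItemsToQuery ... SELECT ...
            EXECUTE (@tempSql);

            SELECT * FROM Items  WHERE ItemId  IN (SELECT ItemId FROM #ItemsToQuery);
            SELECT * FROM Orders WHERE OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery);
        END;
        GO

    The usual caveat applies: since the dynamic string is built on the client, it should be assembled from validated fragments or parameterised to limit SQL injection.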

  • How do I secure Sql Server 2008 R2

    - by Mark Tait
    I have both a dedicated and a VPS (from Fasthosts) virtual server - the web sites/applications I run on these access Sql Server stored on the same web server. Until now, I have logged onto Sql Server on both the dedicated and VPS servers from Sql Server Management Studio - until I noticed in my server application logs multiple attempts to log on to Sql Server using the 'sa' username, but with a failed password. So someone/some bot is trying hard (repeatedly, every couple of hours, for approx 20 attempts during each instance) to log on... so obviously I have to lock down remote access to Sql Server. What I have done is gone into Configuration Manager, and in Sql Server Network Configuration - Protocols for Sql2008, and also in Sql Native Client 10.0 Configuration - Client Protocols, I have disabled Named Pipes, TCP/IP (and VIA, by default). I have left Shared Memory enabled. I also disabled the Sql Server Browser in Sql Server Services. Now the only way I can manage the databases on these servers is by logging on to them via Remote Desktop. Can anyone confirm if this is the correct way of stopping anyone maliciously logging on to Sql Server? (I'm not a DBA or security expert - and there are hundreds of articles advising all different ways - but I was hoping for the experts here to confirm, or otherwise, whether what I've done is correct.) Thank you, Mark
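    A hedged companion step, not from the post itself: since the failed logins target the 'sa' account specifically, that login can also be disabled (or renamed) with T-SQL so the brute-force attempts have nothing to hit, assuming administration is done through a separate Windows or SQL admin login:

        -- Assumes you connect as a different administrative login, not as 'sa'.
        ALTER LOGIN sa DISABLE;
        -- Optionally also rename it; the new name here is only an example.
        ALTER LOGIN sa WITH NAME = [sa_retired];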

  • Problem with importing an mdf created with SQL Server Express 2008 into SQL Server 2005

    - by user252160
    The question is probably extremely easy to resolve, but I need to resolve it so I can carry on with my project. I am using SQL Server Express 2008 at home, and I've been working on an ASP.NET MVC app that stores my DB in an mdf file in the project's folder. The problem is that the SQL Server in the Uni labs is SQL Server 2005, and when I try to open the mdf file with the VS Server Explorer, it says that the version of the mdf file is higher than the server can accept. The only option that comes to my mind is exporting the DB as a SQL script, just like I've done a thousand times with phpMyAdmin. The thing is that SQL Server Management Studio Express is not the most usable tool in the world, and for some strange reason all the articles I could find on Google were irrelevant. Please help.

  • Cumulative Updates for SQL Server 2005 SP3 and 2008 R2 are available

    - by AaronBertrand
    After releasing SQL Server 2005 SP4 on Friday (you can read more about it here), the release services team did not hang up their hats for the Christmas holiday, but continued working on other releases.
    KB #2438344: Cumulative Update package 13 for SQL Server 2005 Service Pack 3 - brings your SQL Server 2005 SP3 build number to 9.00.4315.
    KB #2438347: Cumulative Update package 5 for SQL Server 2008 R2 - brings your SQL Server 2008 R2 build number to 10.50.1753.
    NOTES: You will not be able to...(read more)

  • Recap - SQL Saturday 151 in Orlando

    - by KKline
    It's always a feel-good experience for me to return to SQL Saturday in Orlando, the place where SQL Saturdays were started by Andy Warren ( Twitter | Blog ). On this trip, I delivered a full-day, pre-conference seminar on Troubleshooting and Performance Tuning SQL Server. I also delivered a session on SQL Server Internals and Architecture to a totally packed house. For those of you who emailed me directly, here's the link for the special SQL Sentry offer . I got to attend the extended events session...(read more)

  • SQL Azure Federations and Semantic Search by the SQL product team in London tonight (Monday)

    - by simonsabin
    Don’t forget that tonight we have Michael Rys from the SQL Server Product Team presenting on the Federation support coming to SQL Azure and the semantic search coming in SQL Server Denali. This is a must-attend evening for anyone that is serious about scaling SQL or doing search in SQL Server. Michael also wears a few other hats, including being Microsoft’s representative on the W3C XML Query Working Group. To register go to http://sqlsocial20110613.eventbrite.com/   PS: Beer and pizza will be laid on...(read more)

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while; good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton).

    Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible.

    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.

    SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and bw-trees (which avoid locking through the use of pointers), and by handling each update as a delete-and-insert.

    This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley
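    As a concrete illustration of the table creation described above (an editorial sketch, not from the post: the names and bucket count are invented, and the database needs a MEMORY_OPTIMIZED_DATA filegroup before this will run), a SQL Server 2014 memory-optimized table is declared with its indexes inline, a hash index standing in for the usual B-tree:

        -- Hypothetical table; assumes the database already has a memory-optimized filegroup.
        CREATE TABLE dbo.OrdersInMemory
        (
            OrderId     int       NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
            CustomerId  int       NOT NULL,
            OrderDate   datetime2 NOT NULL,
            INDEX ix_OrderDate NONCLUSTERED (OrderDate)
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    DURABILITY = SCHEMA_AND_DATA gives the "small write at commit" behaviour described in the post; SCHEMA_ONLY drops even that.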

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz.)

    Functions serve an important part of programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn’t designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore ‘return data’, but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function:

        SELECT GETDATE(), SomeDatetimeColumn FROM dbo.SomeTable;

    There’s no logical difference between the field that is being returned by the function and the field that’s being returned by the table column. Both are datetime fields – if you didn’t have inside knowledge, you wouldn’t necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on. But it’s rubbish. Ok, it’s not all rubbish, but it mostly is. And this isn’t even considering the SARGability impact. It’s far more significant than that. (When I say the SARGability aspect, I mean “because you’re unlikely to have an index on the result of some function that’s applied to a column, so try to invert the function and query the column in an unchanged manner”.)

    I’m going to consider the three main types of user-defined functions in SQL Server: scalar, inline table-valued, and multi-statement table-valued. I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don’t tend to get around to doing CLR functions, and I’m going to focus on the T-SQL-based user-defined functions. Most people split these types of function up into two types. So do I. Except that most people pick them based on ‘scalar or table-valued’. I’d rather go with ‘inline or not’. If it’s not inline, it’s rubbish. It really is.

    Let’s start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database.
        CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int)
        RETURNS TABLE AS
        RETURN (
            SELECT e.LoginID as EmployeeLogin, o.OrderDate, o.SalesOrderID
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
            AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
        );
        GO

        CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int)
        RETURNS @results TABLE (
            EmployeeLogin nvarchar(512),
            OrderDate datetime,
            SalesOrderID int
        )
        AS
        BEGIN
            INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)
            SELECT e.LoginID, o.OrderDate, o.SalesOrderID
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
            AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
            ;
            RETURN
        END;
        GO

    You’ll notice that I’m being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter. Regular readers will be hoping I’ll show what’s going on in the execution plans here. Here I’ve run two SELECT * queries with the “Show Actual Execution Plan” option turned on. Notice that the ‘Query cost’ of the multi-statement version is just 2% of the ‘Batch cost’. But also notice there’s trickery going on. And it’s nothing to do with that extra index that I have on the OrderDate column. Trickery. Look at it – clearly, the first plan is showing us what’s going on inside the function, but the second one isn’t. The second one is blindly running the function, and then scanning the results. There’s a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing...

    To see what’s actually going on, let’s look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here’s where we see the inner workings of it. You’ll probably recognise the right-hand side of the TVF’s plan as looking very similar to the first plan – but it’s now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch!

    But it gets worse. Let’s consider what happens if we don’t need all the columns. We’ll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn’t need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed.

    A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It’s essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn’t really a function at all. It’s a stored procedure.
    It’s wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, and yet this is part of the pain associated with this procedural function situation. The biggest clue that a multi-statement function is more like a stored procedure than a function is the “BEGIN” and “END” statements that surround the code. If you try to create a multi-statement function without these statements, you’ll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it “The Dangers of BEGIN and END”, and yes, I’ve written about this type of thing before in a similarly-named post over at my old blog.

    Now how about scalar functions... Suppose we wanted a scalar function to return the count of these.

        CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int)
        RETURNS int
        AS
        BEGIN
            RETURN (
                SELECT COUNT(*)
                FROM Sales.SalesOrderHeader AS o
                LEFT JOIN HumanResources.Employee AS e
                    ON e.EmployeeID = o.SalesPersonID
                WHERE o.SalesPersonID = @salespersonid
                AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
                AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
            );
        END;
        GO

    Notice the evil words? They’re required. Try to remove them, you just get an error. That’s right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It’s as ugly as anything. Hopefully this will change in future versions.

    Let’s have a look at how this is reflected in an execution plan. Here’s a query, its Actual plan, and its Estimated plan:

        SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = p.SalesPersonID;

    We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn’t need the Employee table, but that’s a bit of a red herring here. There’s actually something way more significant going on. If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.337999. If I just run the query SELECT dbo.FetchSales_scalar(281,2003); we see that the UDF cost is still unchanged. You see, this 0.0337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could’ve been a lot more, if we’d had more salespeople or more years.

    And so we come to the biggest problem. This procedure (I don’t want to call it a function) is getting called 68 times – each one about twice as expensive as the outer query. And because it’s being called in a separate context, there is even more overhead that I haven’t considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could’ve used that kind of costing method, but that’s another story that I’m not going to go into here.

    Let’s look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query.
    It could’ve run something like this:

        SELECT e.LoginID, y.year,
            (SELECT COUNT(*)
             FROM Sales.SalesOrderHeader AS o
             LEFT JOIN HumanResources.Employee AS e
                 ON e.EmployeeID = o.SalesPersonID
             WHERE o.SalesPersonID = p.SalesPersonID
             AND o.OrderDate >= DATEADD(year,y.year-2000,'20000101')
             AND o.OrderDate < DATEADD(year,y.year-2000+1,'20000101')
            ) AS NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = p.SalesPersonID;

    Don’t worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don’t push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what’s being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly.

    To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that’s ok.

        CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int)
        RETURNS table
        AS
        RETURN (
            SELECT COUNT(*) as NumSales
            FROM Sales.SalesOrderHeader AS o
            LEFT JOIN HumanResources.Employee AS e
                ON e.EmployeeID = o.SalesPersonID
            WHERE o.SalesPersonID = @salespersonid
            AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')
            AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')
        );
        GO

    But we can’t use this as a scalar. Instead, we need to use it with the APPLY operator.

        SELECT e.LoginID, y.year, n.NumSales
        FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
        CROSS JOIN Sales.SalesPerson AS p
        LEFT JOIN HumanResources.Employee AS e
            ON e.EmployeeID = p.SalesPersonID
        OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n;

    And now, we get the plan that we want for this query. All we’ve done is tell the function that it’s returning a table instead of a single value, and removed the BEGIN and END statements. We’ve had to name the column being returned, but what we’ve gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it’s a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions).

    So please. If you see BEGIN and END in a function, remember it’s not really a function, it’s a procedure. And then fix it. @rob_farley

  • View all ntext column text in SQL Server Management Studio for SQL CE database

    - by Dave
    I often want to do a "quick check" of the value of a large text column in SQL Server Management Studio (SSMS). The maximum number of characters that SSMS will let you view, in grid results mode, is 65535. (It is even less in text results mode.) Sometimes I need to see something beyond that range. Using SQL Server 2005 databases, I often used the trick of converting it to XML, because SSMS lets you view much larger amounts of text that way:

        SELECT CONVERT(xml, MyCol) FROM MyTable WHERE ...

    But now I am using SQL CE, and there is no xml data type. There is still a "Maximum Characters Retrieved XML" value under Options; I suppose this is useful when connecting to other data sources. I know I can just get the full value by running a little console app or something, but is there a way within SSMS to see the entire ntext column value?

    [Edit] OK, this didn't get much attention the first time around (18 views?!). It's not a huge concern, but maybe I'm just obsessed with it. There has to be some good way around this, doesn't there? So a modest bounty is active. What I am willing to accept as answers, in order from best to worst:

    1. A solution that works just as easily as the XML trick in SQL CE. That is, a single function (convert, cast, etc.) that does the job.
    2. A not-too-invasive way to hack SSMS to get it to display more text in the results.
    3. An equivalent SQL query (perhaps something that creatively uses SUBSTRING and generates multiple ad-hoc columns?) to see the results.

    The solution should work with nvarchar and ntext columns of any length in SQL CE from SSMS. Any ideas?

  • Performing Inner Join for Multiple Columns in the Same Table

    - by frankiefrank
    I have a scenario which I'm a bit stuck on. Let's say I have a survey about colors, and I have one table for the color data, and another for people's answers.

        tbColors
        color_code , color_name
        1 , 'blue'
        2 , 'green'
        3 , 'yellow'
        4 , 'red'

        tbAnswers
        answer_id , favorite_color , least_favorite_color , color_im_allergic_to
        1 , 1 , 2 , 3
        2 , 3 , 1 , 4
        3 , 1 , 1 , 2
        4 , 2 , 3 , 4

    For display, I want to write a SELECT that presents the answers table but using the color_name column from tbColors. I understand the "most stupid" way to do it: naming tbColors three times in the FROM section, using a different alias for each column to replace. What would a non-stupid way look like?
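    For reference, here is a sketch of the three-alias join the question describes, written against the sample tables above; despite the "most stupid" label, this aliased join is in practice the standard approach:

        -- One join per color column, each against its own alias of tbColors.
        SELECT a.answer_id,
               fav.color_name AS favorite_color,
               lst.color_name AS least_favorite_color,
               alg.color_name AS color_im_allergic_to
        FROM tbAnswers AS a
        JOIN tbColors AS fav ON fav.color_code = a.favorite_color
        JOIN tbColors AS lst ON lst.color_code = a.least_favorite_color
        JOIN tbColors AS alg ON alg.color_code = a.color_im_allergic_to;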

  • Is SQL Azure a newbies springboard?

    - by jamiet
    Earlier today I was considering the various SQL Server platforms that are available today and I wondered aloud, "wonder how long until the majority of #sqlserver newcomers use @sqlazure instead of installing locally". Let me explain. My first experience of development was way back in the early 90s when I would crank open VBA in Access or Excel and start hammering out some code, usually by recording macros and looking at the code that they produced (sound familiar?). The reason was simple: Office was becoming ubiquitous, so the barrier to entry was incredibly low and, save for a short hiatus at university, I’ve been developing on the Microsoft platform ever since. These days I spend most of my time using SQL Server. When I take a look at SQL Azure today I see a lot of similarities with those early experiences: the barrier to entry is low and getting lower. I don’t have to download some software or actually install anything other than a web browser in order to get myself a fully functioning SQL Server database against which I can ostensibly start hammering out some code, and I believe that to be incredibly empowering. Having said that, there are still a few pretty high barriers, namely:
    - I need to get out my credit card
    - It's pretty useless without some development tools such as SQL Server Management Studio, which I do have to install
    The second of those barriers will disappear pretty soon when Project Houston delivers a web-based admin and presentation tool for SQL Azure, so that just leaves the matter of my having to use a credit card. If Microsoft have any sense at all then they will realise the huge potential of opening up a free, throttled version of SQL Azure for newbies to party on; they get to developers early (just like they did with me all those years ago) and it gives potential customers an opportunity to try-before-they-buy. Perhaps in 20 years’ time people will be talking about SQL Azure as being their first foray into the world of coding! @Jamiet

  • Cross join problem query

    - by user66121
    I have the following table structure:

        HUB_DETAILS (master): Branch_ID, Branch_Name
        VTRCheckList (master): CLid, CLName
        VTRCheckListDetails (detail): CLid, Branch_ID, VTRValue, vtrRespDate

    When I run the following query, it does come back with all the checklist names along with all branch names, but it shows the value against every branch, when in fact only 1 branch has data in the given date range. It should show 0 if there is no data for the checklist in the respective branch.

        SELECT VTRCheckList.CLName, Hub_Details.BranchName,
               SUM(CAST(VTRCheckListDetails.VtrValue AS int)) AS 'Total'
        FROM VTRCheckListDetails
        INNER JOIN VTRCheckList ON VTRCheckListDetails.CLid = VTRCheckList.CLid
        CROSS JOIN Hub_Details
        WHERE CONVERT(date, VTRCheckListDetails.vtrRespDate, 105) >= CONVERT(date, '01-01-2011', 105)
          AND CONVERT(date, VTRCheckListDetails.vtrRespDate, 105) <= CONVERT(date, '30-01-2011', 105)
        GROUP BY VTRCheckList.CLName, Hub_Details.BranchName
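    A hedged sketch of one likely fix (assuming the schema above, that VTRCheckListDetails.Branch_ID ties a detail row to its branch, and using the BranchName column name from the asker's query): cross join the two master tables to get every checklist/branch pair, then LEFT JOIN the details with both the branch match and the date filter in the ON clause, so missing combinations aggregate to 0.

        SELECT cl.CLName,
               hb.BranchName,
               ISNULL(SUM(CAST(d.VTRValue AS int)), 0) AS Total
        FROM VTRCheckList AS cl
        CROSS JOIN Hub_Details AS hb
        LEFT JOIN VTRCheckListDetails AS d
               ON d.CLid = cl.CLid
              AND d.Branch_ID = hb.Branch_ID
              AND CONVERT(date, d.vtrRespDate, 105) >= CONVERT(date, '01-01-2011', 105)
              AND CONVERT(date, d.vtrRespDate, 105) <= CONVERT(date, '30-01-2011', 105)
        GROUP BY cl.CLName, hb.BranchName;

    The key design point is that the date filter lives in the ON clause rather than the WHERE clause; putting it in WHERE would silently turn the LEFT JOIN back into an inner join and drop the zero rows.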

  • SQL Server 2008 - Keyword search using table Join

    - by Aaron Wagner
    Ok, I created a Stored Procedure that, among other things, is searching 5 columns for a particular keyword. To accomplish this, I have the keywords parameter being split out by a function and returned as a table. Then I do a LEFT JOIN on that table, using a LIKE constraint. So, I had this working beautifully, and then all of a sudden it stopped working. Now it is returning every row, instead of just the rows it needs. The other caveat is that if the keyword parameter is empty, it should be ignored. Given what's below, is there A) a glaring mistake, or B) a more efficient way to approach this? Here is what I have currently:

        ALTER PROCEDURE [dbo].[usp_getOppsPaged]
            @startRowIndex int,
            @maximumRows int,
            @city varchar(100) = NULL,
            @state char(2) = NULL,
            @zip varchar(10) = NULL,
            @classification varchar(15) = NULL,
            @startDateMin date = NULL,
            @startDateMax date = NULL,
            @endDateMin date = NULL,
            @endDateMax date = NULL,
            @keywords varchar(400) = NULL
        AS
        BEGIN
            SET NOCOUNT ON;
            ;WITH Results_CTE AS
            (
                SELECT opportunities.*, organizations.*,
                    departments.dept_name, departments.dept_address, departments.dept_building_name,
                    departments.dept_suite_num, departments.dept_city, departments.dept_state,
                    departments.dept_zip, departments.dept_international_address, departments.dept_phone,
                    departments.dept_website, departments.dept_gen_list,
                    ROW_NUMBER() OVER (ORDER BY opp_id) AS RowNum
                FROM opportunities
                JOIN departments ON opportunities.dept_id = departments.dept_id
                JOIN organizations ON departments.org_id = organizations.org_id
                LEFT JOIN Split(',', @keywords) AS kw
                    ON (title LIKE '%'+kw.s+'%'
                     OR [description] LIKE '%'+kw.s+'%'
                     OR tasks LIKE '%'+kw.s+'%'
                     OR requirements LIKE '%'+kw.s+'%'
                     OR comments LIKE '%'+kw.s+'%')
                WHERE (
                    (@city IS NOT NULL AND (city LIKE '%'+@city+'%' OR dept_city LIKE '%'+@city+'%' OR org_city LIKE '%'+@city+'%'))
                    OR (@state IS NOT NULL AND ([state] = @state OR dept_state = @state OR org_state = @state))
                    OR (@zip IS NOT NULL AND (zip = @zip OR dept_zip = @zip OR org_zip = @zip))
                    OR (@classification IS NOT NULL AND (classification LIKE '%'+@classification+'%'))
                    OR ((@startDateMin IS NOT NULL AND @startDateMax IS NOT NULL) AND ([start_date] BETWEEN @startDateMin AND @startDateMax))
                    OR ((@endDateMin IS NOT NULL AND @endDateMax IS NOT NULL) AND ([end_date] BETWEEN @endDateMin AND @endDateMax))
                    OR (
                        (@city IS NULL AND @state IS NULL AND @zip IS NULL AND @classification IS NULL
                         AND @startDateMin IS NULL AND @startDateMax IS NULL AND @endDateMin IS NULL AND @endDateMin IS NULL)
                    )
                )
            )
            SELECT *
            FROM Results_CTE
            WHERE RowNum >= @startRowIndex
              AND RowNum < @startRowIndex + @maximumRows;
        END
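    A hedged sketch of the keyword part in isolation (Split() and the column names are taken from the procedure above; the rest is assumed and not part of the original): with a plain LEFT JOIN, rows that match no keyword are still kept, so everything comes back. One common shape is to keep the LEFT JOIN but only accept non-matching rows when @keywords is empty:

        -- Sketch only: demonstrates the keyword filter, not the full paging procedure.
        DECLARE @keywords varchar(400) = 'alpha,beta';  -- stands in for the proc parameter

        SELECT DISTINCT opportunities.opp_id
        FROM opportunities
        LEFT JOIN Split(',', @keywords) AS kw
            ON (title LIKE '%' + kw.s + '%'
             OR [description] LIKE '%' + kw.s + '%'
             OR tasks LIKE '%' + kw.s + '%'
             OR requirements LIKE '%' + kw.s + '%'
             OR comments LIKE '%' + kw.s + '%')
        WHERE ISNULL(@keywords, '') = ''   -- no keywords supplied: keep every row
           OR kw.s IS NOT NULL;            -- otherwise require at least one keyword hit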

  • What's the importance of restoring SQL Server system databases (model, master, etc.)?

    - by Zero Subnet
    I had to restore some production databases to a different drive on the same Microsoft SQL Server 2005 machine. That worked fine and the application using the databases is back online. However, I have not restored the system (or default?) databases that SQL Server creates on its own (model, master, etc.). My question is: what is the role of these databases, and how important is it that I restore them?

  • can't login to new install of SQL 2008 x64 via SSMS

    - by tpcolson
    I have performed a fresh install of SQL 2008 x64 on a fresh install of Server 2008 R2 x64 in an AD environment. Upon install completion, I cannot log in to the SQL instance via SSMS, with the following error: Login failed for user domain\user. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: ].

    Background: the server is correctly joined to the AD domain, the install was performed with defaults, Windows authentication only (per organizational rules), the SQL install completes with no errors, domain\user was added as SQL Admin during setup account provisioning, I am logged into the console as domain\user when this error occurs, Windows Firewall is OFF, and UAC is ON (and will never be turned off, in accordance with organizational policy).

    To troubleshoot this error I have tried:
    - Run SSMS as administrator: fail
    - Start SQL in single-user mode, run SSMS: fail
    - Start SQL in single-user mode, run SSMS as administrator: success
    - Start SQL in single-user mode, run SSMS as administrator, remove domain\user from the sysadmin group, re-add, run SSMS: fail
    - Any combination and permutation of log off and log on, reboot, and chanting Gregorian prayers: fail
    - Reimage the server with 2008 x64 and slipstream SP2 into the SQL 2008 install: all of the above troubleshooting steps are repeatable exactly, so I've narrowed this down to not being an SP issue (this is NOT SQL 2008 R2)

    Any suggestion on how to grant management access to this fresh install of SQL 2008 via SSMS? Our organizational policy is no console access to servers; management will be done via management tools installed on client workstations. domain\user is a group of 8 users who will have SSMS installed on their workstations. However, we can't even access SQL via SSMS from the console! We cannot deploy this in an environment where these 8 users will have to sneak into the server closet on the weekends, take console access to SQL, and run SSMS as administrator.

    EDIT: domain\group is a replacement for the actual object; the queries indicate that domain\group does indeed have the right privileges...!?
        EXEC xp_logininfo 'domain\group'
        GO

        account name     type    privilege   mapped login name   permission path
        'domain\group'   group   admin       'domain\group'      NULL

    xp_logininfo seems to show 'domain\group' in the sql admin group;

        SELECT A.name AS 'Role', B.name AS 'Login'
        FROM sys.server_role_members C
        INNER JOIN sys.server_principals A ON A.principal_id = C.role_principal_id
        INNER JOIN sys.server_principals B ON B.principal_id = C.member_principal_id
        GO

        Role       Login
        sysadmin   sa
        sysadmin   NT AUTHORITY\SYSTEM
        sysadmin   NT SERVICE\MSSQLSERVER
        sysadmin   NT SERVICE\SQLSERVERAGENT
        sysadmin   domain\group

        SELECT PRINCIPAL_ID AS [Principal ID],
               NAME AS [User],
               TYPE_DESC AS [Type Description],
               IS_DISABLED AS [Status]
        FROM sys.server_principals
        GO

        Principal ID   User                                      Type Description           Status
        1              sa                                        SQL_LOGIN                  1
        2              public                                    SERVER_ROLE                0
        3              sysadmin                                  SERVER_ROLE                0
        4              securityadmin                             SERVER_ROLE                0
        5              serveradmin                               SERVER_ROLE                0
        6              setupadmin                                SERVER_ROLE                0
        7              processadmin                              SERVER_ROLE                0
        8              diskadmin                                 SERVER_ROLE                0
        9              dbcreator                                 SERVER_ROLE                0
        10             bulkadmin                                 SERVER_ROLE                0
        101            ##MS_SQLResourceSigningCertificate##      CERTIFICATE_MAPPED_LOGIN   0
        102            ##MS_SQLReplicationSigningCertificate##   CERTIFICATE_MAPPED_LOGIN   0
        103            ##MS_SQLAuthenticatorCertificate##        CERTIFICATE_MAPPED_LOGIN   0
        105            ##MS_PolicySigningCertificate##           CERTIFICATE_MAPPED_LOGIN   0
        257            ##MS_PolicyTsqlExecutionLogin##           SQL_LOGIN                  1
        259            NT AUTHORITY\SYSTEM                       WINDOWS_LOGIN              0
        260            NT SERVICE\MSSQLSERVER                    WINDOWS_GROUP              0
        262            NT SERVICE\SQLSERVERAGENT                 WINDOWS_GROUP              0
        263            ##MS_PolicyEventProcessingLogin##         SQL_LOGIN                  1
        264            ##MS_AgentSigningCertificate##            CERTIFICATE_MAPPED_LOGIN   0
        265            domain\group                              WINDOWS_GROUP              0

        (21 rows affected)

  • Restore Database from SQL Server 2008 to SQL Server 2005

    - by Nirmal
    I have created a set of tables (around 20) in SQL Server 2008 and entered around 1000 records into the appropriate tables. But the issue is that I want those same tables, with all the entered data, in SQL Server 2005 (SQLEXPRESS). Obviously taking a backup and restoring it into SQL Server 2005 won't work, as restores are not backward compatible. Any suggestion would be appreciated....

  • SQL Server 2008 Restore from Backup fails with error 3241 'cannot process this media family'

    - by pearcewg
    I am attempting to back up a database from a SQL Server instance on one machine and restore it to another, and I am encountering the frequently discovered 'SQL Server cannot process this media family' error. Both of my instances are SQL Server 2008, but with different patch levels: Restore: 10.0.2531.0; Backup: 10.0.1600.22 ((SQL_PreRelease).080709-1414). The restore instance is Express; I'm not sure about the backup instance's edition. The backup instance is on a virtual private server; the restore is on my development box. When I restore to a different database on the source (backup) server, it restores fine. There is lots of material on Google about this issue, and some on Stack Overflow, but nothing covering this exact situation. Any thoughts? It should be straightforward to do a backup and restore from one machine to another (having done this thousands of times with SQL 6.5, 7, 2000, and 2005).

  • SQL Compact import DB from SQL Server Express with Server Management Studio

    - by Sasha
    Hi! I am trying to import a SQL script, generated with SQL Server Management Studio, into SQL Compact 3.5 and I get a lot of errors. What am I doing wrong? I generated the script with the "Tasks/Generate Scripts" context menu. Part of my script:

        CREATE TABLE [LogMagazines](
            [IdUser] [int] NOT NULL,
            [Text] [nvarchar](500) NULL,
            [TypeLog] [int] NOT NULL,
            [DateAndTime] [datetime] NOT NULL,
            [DetailMessage] [nvarchar](max) NULL,
            [Id] [int] IDENTITY(1,1) NOT NULL,
            CONSTRAINT [PK_LogMagazines] PRIMARY KEY
            (
                [Id] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    Knowledge Base:
    http://stackoverflow.com/questions/1525063/how-to-import-data-in-sql-compact-edition
    http://stackoverflow.com/questions/1515969/exporting-data-in-sql-server-as-insert-into/1515975
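    One likely cause, offered as an assumption rather than a diagnosis: the generated DDL uses full SQL Server features (the ON [PRIMARY] filegroup clause, the index WITH (...) options, nvarchar(max)) that SQL Compact does not accept, so each statement would need trimming to CE-compatible syntax, roughly like this hand-written sketch:

        -- Hand-trimmed sketch of the same table for SQL Compact 3.5 (assumed, not generated):
        -- no filegroups, no index options, and ntext in place of nvarchar(max).
        CREATE TABLE [LogMagazines](
            [IdUser] [int] NOT NULL,
            [Text] [nvarchar](500) NULL,
            [TypeLog] [int] NOT NULL,
            [DateAndTime] [datetime] NOT NULL,
            [DetailMessage] [ntext] NULL,
            [Id] [int] IDENTITY(1,1) NOT NULL,
            CONSTRAINT [PK_LogMagazines] PRIMARY KEY ([Id])
        );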

  • Keep local MS SQL 2008 DB table and remote SQL Azure DB table in sync

    - by Boomerangertanger
    Hi there, I have a dedicated server which hosts a Windows Service that does a lot of very heavy-load work and populates a number of SQL Server database tables. However, of all the database tables it populates and works with, I want only one to be synchronised with a remote SQL Azure DB table. This is because this table holds what I call Resolved data, which is the end result of the Windows Service's work. I would like to keep a SQL Azure database table in sync with this database table. As far as I understand, my options are:
    - Move everything onto Azure (but that involves a massive development overhead and risk)
    - Have another Windows Service on the dedicated server which essentially looks at records changed since the last update and then manually updates the SQL Azure table

  • How to install SQL Server 2005 Configuration Manager without installing SQL Server Management Studio

    - by Arnold Zokas
    Hi, I need to configure SQL Server aliases on a public-facing production server. To do that, I need to install SQL Server Configuration Manager. I was not able to find a standalone installer for it, so I am having to install the SQL Server 2005 Client Components. This approach is not ideal, as we don't want to have SSMS on a public-facing production server. Is there a way to install SQL Server 2005 Configuration Manager without installing SQL Server Management Studio? Thanks, Arnold
