Search Results

Search found 29663 results on 1187 pages for 'northwest arkansas sql se'.

  • SQL Server INSERT ... SELECT Statement won't parse

    - by Jim Barnett
    I am getting the following error message with SQL Server 2005 Msg 120, Level 15, State 1, Procedure usp_AttributeActivitiesForDateRange, Line 18 The select list for the INSERT statement contains fewer items than the insert list. The number of SELECT values must match the number of INSERT columns. I have copy and pasted the select list and insert list into excel and verified there are the same number of items in each list. Both tables an additional primary key field with is not listed in either the insert statement or select list. I am not sure if that is relevant, but suspicious it may be. Here is the source for my stored procedure: CREATE PROCEDURE [dbo].[usp_AttributeActivitiesForDateRange] ( @dtmFrom DATETIME, @dtmTo DATETIME ) AS BEGIN SET NOCOUNT ON; DECLARE @dtmToWithTime DATETIME SET @dtmToWithTime = DATEADD(hh, 23, DATEADD(mi, 59, DATEADD(s, 59, @dtmTo))); -- Get uncontested DC activities INSERT INTO AttributedDoubleClickActivities ([Time], [User-ID], [IP], [Advertiser-ID], [Buy-ID], [Ad-ID], [Ad-Jumpto], [Creative-ID], [Creative-Version], [Creative-Size-ID], [Site-ID], [Page-ID], [Country-ID], [State Province], [Areacode], [OS-ID], [Domain-ID], [Keyword], [Local-User-ID], [Activity-Type], [Activity-Sub-Type], [Quantity], [Revenue], [Transaction-ID], [Other-Data], Ordinal, [Click-Time], [Event-ID]) SELECT [Time], [User-ID], [IP], [Advertiser-ID], [Buy-ID], [Ad-ID], [Ad-Jumpto], [Creative-ID], [Creative-Version], [Creative-Size-ID], [Site-ID], [Page-ID], [Country-ID], [State Province], [Areacode], [OS-ID], [Domain-ID], [Keyword], [Local-User-ID] [Activity-Type], [Activity-Sub-Type], [Quantity], [Revenue], [Transaction-ID], [Other-Data], REPLACE(Ordinal, '?', '') AS Ordinal, [Click-Time], [Event-ID] FROM Activity_Reports WHERE [Time] BETWEEN @dtmFrom AND @dtmTo AND REPLACE(Ordinal, '?', '') IN (SELECT REPLACE(Ordinal, '?', '') FROM Activity_Reports WHERE [Time] BETWEEN @dtmFrom AND @dtmTo EXCEPT SELECT CONVERT(VARCHAR, TripID) FROM VisualSciencesActivities WHERE [Time] BETWEEN @dtmFrom AND @dtmTo); END GO
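
    The SELECT list pasted above appears to be missing a comma between [Local-User-ID] and [Activity-Type], so [Activity-Type] is parsed as a column alias for [Local-User-ID] and the SELECT comes up one item short of the INSERT column list. A minimal sketch of the same effect, using hypothetical temp tables rather than the schema above:

        CREATE TABLE #Src (A INT, B INT, C INT);
        CREATE TABLE #Dst (A INT, B INT, C INT);

        -- Fails with Msg 120: the missing comma makes "B [C]" one aliased column,
        -- so the SELECT list supplies 2 items for 3 INSERT columns.
        INSERT INTO #Dst (A, B, C)
        SELECT A, B [C]
        FROM #Src;

        -- Parses: comma restored, 3 items for 3 columns.
        INSERT INTO #Dst (A, B, C)
        SELECT A, B, C
        FROM #Src;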

  • Optimizing T-SQL where an array would be nice

    - by Polatrite
    Alright, first you'll need to grab a barf bag. I've been tasked with optimizing several old stored procedures in our database. This SP does the following: 1) cursor loops through a series of "buildings" 2) cursor loops through a week, Sunday-Saturday 3) has a huge set of IF blocks that are responsible for counting how many Objects of what Types are present in a given building Essentially what you'll see in this code block is that, if there are 5 objects of type #2, it will increment @Type_2_Objects_5 by 1. IF @Number_Type_1_Objects = 0 BEGIN SET @Type_1_Objects_0 = @Type_1_Objects_0 + 1 END IF @Number_Type_1_Objects = 1 BEGIN SET @Type_1_Objects_1 = @Type_1_Objects_1 + 1 END IF @Number_Type_1_Objects = 2 BEGIN SET @Type_1_Objects_2 = @Type_1_Objects_2 + 1 END IF @Number_Type_1_Objects = 3 BEGIN SET @Type_1_Objects_3 = @Type_1_Objects_3 + 1 END [... Objects_4 through Objects_20 for Type_1] IF @Number_Type_2_Objects = 0 BEGIN SET @Type_2_Objects_0 = @Type_2_Objects_0 + 1 END IF @Number_Type_2_Objects = 1 BEGIN SET @Type_2_Objects_1 = @Type_2_Objects_1 + 1 END IF @Number_Type_2_Objects = 2 BEGIN SET @Type_2_Objects_2 = @Type_2_Objects_2 + 1 END IF @Number_Type_2_Objects = 3 BEGIN SET @Type_2_Objects_3 = @Type_2_Objects_3 + 1 END [... Objects_4 through Objects_20 for Type_2] In addition to being extremely hacky (and limited to a quantity of 20 objects), it seems like a terrible way of handling this. In a traditional language, this could easily be solved with a 2-dimensional array... objects[type][quantity] += 1; I'm a T-SQL novice, but since writing stored procedures often uses a lot of temporary tables (which could essentially be a 2-dimensional array) I was wondering if someone could illuminate a better way of handling a situation like this with two dynamic pieces of data to store. Requested in comments: The columns are simply Number_Type_1_Objects, Number_Type_2_Objects, Number_Type_3_Objects, Number_Type_4_Objects, Number_Type_5_Objects, and CurrentDateTime. Each row in the table represents 5 minutes. The expected output is to figure out what percentage of time a given quantity of objects is present throughout each day. Sunday - Object Type 1 0 objects - 69 rows, 5:45, 34.85% 1 object - 85 rows, 7:05, 42.93% 2 objects - 44 rows, 3:40, 22.22% On Sunday, there were 0 objects of type 1 for 34.85% of the day. There was 1 object for 42.93% of the day, and 2 objects for 22.22% of the day. Repeat for each object type.
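
    One set-based alternative to the IF ladder is to unpivot the five Number_Type_N_Objects columns into rows, so that (object type, quantity) becomes something a plain GROUP BY can count - effectively the two-dimensional array. A sketch, assuming SQL Server 2005 or later and a source table named dbo.ObjectCounts (the question does not name it) with the columns described above:

        SELECT
            DATENAME(weekday, u.CurrentDateTime) AS DayOfWeek,
            u.ObjectType,       -- comes back as the source column name (Number_Type_1_Objects, ...)
            u.Quantity,
            COUNT(*) AS Rows5Min,
            -- each row represents 5 minutes, so the share of rows is the share of the day
            100.0 * COUNT(*)
                  / SUM(COUNT(*)) OVER (PARTITION BY DATENAME(weekday, u.CurrentDateTime),
                                                     u.ObjectType) AS PercentOfDay
        FROM dbo.ObjectCounts
        UNPIVOT (Quantity FOR ObjectType IN
                 (Number_Type_1_Objects, Number_Type_2_Objects, Number_Type_3_Objects,
                  Number_Type_4_Objects, Number_Type_5_Objects)) AS u
        GROUP BY DATENAME(weekday, u.CurrentDateTime), u.ObjectType, u.Quantity
        ORDER BY DayOfWeek, u.ObjectType, u.Quantity;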

  • Summing Row in SQL query for time range

    - by user3703334
    I'm trying to group a large amount of data into smaller bundles. Currently my query is as follows SELECT [DateTime] ,[KW] FROM [POWER] WHERE datetime >= '2014-04-14 06:00:00' and datetime < '2014-04-21 06:00:00' ORDER BY datetime which gives me DateTime KW 4/14/2014 6:00:02.0 1947 4/14/2014 6:00:15.0 1946 4/14/2014 6:00:23.0 1947 4/14/2014 6:00:32.0 1011 4/14/2014 6:00:43.0 601 4/14/2014 6:00:52.0 585 4/14/2014 6:01:02.0 582 4/14/2014 6:01:12.0 580 4/14/2014 6:01:21.0 579 4/14/2014 6:01:32.0 579 4/14/2014 6:01:44.0 578 4/14/2014 6:01:53.0 578 4/14/2014 6:02:01.0 577 4/14/2014 6:02:12.0 577 4/14/2014 6:02:22.0 577 4/14/2014 6:02:32.0 576 4/14/2014 6:02:42.0 578 4/14/2014 6:02:52.0 577 4/14/2014 6:03:02.0 577 4/14/2014 6:03:12.0 577 4/14/2014 6:03:22.0 578 . . . . 4/21/2014 5:59:55.0 11 There is a reading every 10 seconds from a substation, and I want to group this data into hourly readings. Thus 00:00-01:00 = SUM([KW]) for datetime >= '^date^ 00:00:00' and datetime < '^date^ 01:00:00' I've tried using CONVERT to split the datetime into separate date and time fields and then add all the time fields together, with no success. Can someone please assist me? I'm not sure what the right way of doing this is. Thanks ADDED: OK, so the split of the datetime is working nicely, but if I add a SUM([KW]) function SQL gives an error, and if I include any of the grouping functions it also complains. Below is what works; I still need to sum the KW per hourly grouping. I've tried using Group By Hour and Group by DATEPART(Hour,[DateTime]); neither worked. SELECT DATEPART(Hour,[DateTime]) Hour ,DATEPART(Day,[DateTime]) Day ,DATEPART(Month,[DateTime]) Month ,([KVAReal]) ,([KVAr]) ,([KW]) FROM [POWER].[dbo].[IT10t_PAC3200] WHERE datetime >= '2014-04-14 06:00:00' and datetime < '2014-04-21 06:00:00' order by datetime
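
    A sketch of the hourly rollup against the same table: the usual catch is that every non-aggregated expression in the SELECT list must also appear in the GROUP BY, and the remaining value columns have to be wrapped in an aggregate such as SUM once you group.

        SELECT
            DATEPART(month, [DateTime]) AS [Month],
            DATEPART(day,   [DateTime]) AS [Day],
            DATEPART(hour,  [DateTime]) AS [Hour],
            SUM([KW])      AS TotalKW,
            SUM([KVAReal]) AS TotalKVAReal,
            SUM([KVAr])    AS TotalKVAr
        FROM [POWER].[dbo].[IT10t_PAC3200]
        WHERE [DateTime] >= '2014-04-14 06:00:00'
          AND [DateTime] <  '2014-04-21 06:00:00'
        GROUP BY DATEPART(month, [DateTime]),
                 DATEPART(day,   [DateTime]),
                 DATEPART(hour,  [DateTime])
        ORDER BY [Month], [Day], [Hour];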

  • xml parameter in sql server stored procedure

    - by npalle
    I want to write a stored procedure that accepts an XML parameter, parses its elements, and inserts them into a SQL table. This is my XML: <Lines> <Line> <roomlist> <room> <namehotel>MeSa</namehotel> <typeroom>506671</typeroom> <typeroomname>Dbl Standard - Tip</typeroomname> <roomnumber>0</roomnumber> <priceroom>444.60</priceroom> <costroom>400.00</costroom> <boardtype/> <paxes> <pax> <name>EU</name> <lastname>CADO</lastname> <typepax>Adult</typepax> </pax> <pax> <name>LIN</name> <lastname>BAC</lastname> <typepax>Adult</typepax> </pax> </paxes> </room> </roomlist> </Line> </Lines> How can I do that?
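
    One common approach is to declare the parameter as the xml data type and shred it with the nodes() and value() methods. A sketch, where dbo.RoomPax is a hypothetical target table (one row per <pax>, carrying values from its parent <room>) and the column names and types are assumptions:

        CREATE PROCEDURE dbo.usp_InsertRooms
            @RoomsXml XML
        AS
        BEGIN
            SET NOCOUNT ON;

            INSERT INTO dbo.RoomPax (NameHotel, TypeRoom, PriceRoom, PaxName, PaxLastName, PaxType)
            SELECT
                r.value('(namehotel/text())[1]', 'NVARCHAR(100)'),
                r.value('(typeroom/text())[1]',  'INT'),
                r.value('(priceroom/text())[1]', 'DECIMAL(10,2)'),
                p.value('(name/text())[1]',      'NVARCHAR(50)'),
                p.value('(lastname/text())[1]',  'NVARCHAR(50)'),
                p.value('(typepax/text())[1]',   'NVARCHAR(20)')
            FROM @RoomsXml.nodes('/Lines/Line/roomlist/room') AS rooms(r)
            CROSS APPLY r.nodes('paxes/pax') AS paxes(p);   -- one output row per pax
        END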

  • SQL to insert latest version of a group of items

    - by Garett
    I’m trying to determine a good way to handle the scenario below. I have the following two database tables, along with sample data. Table1 contains distributions that are grouped per project. A project can have one or more distributions. A distribution can have one of more accounts. An account has a percentage allocated to it. The distributions can be modified by adding or removing account, as well as changing percentages. Table2 tracks distributions, assigning a version number to each distribution. I need to be able to copy new distributions from Table1 to Table2, but only under two conditions: 1. the entire distribution does not already exist 2. the distribution has been modified (accounts added/removed or percentages changed). Note: When copying a distribution from Table1 to Table2 I need to compare all accounts and percentages within the distribution to determine if it already exists. When inserting the new distribution then I need to increment the VersionID (max(VersionID) + 1). So, in the example provided the distribution (12345, 1) has been modified, adding account number 7, as well as changing percentages allocated. The entire distribution should be copied to the second table, incrementing the VersionID to 3 in the process. The database in question is SQL Server 2005. Table1 ------ ProjectID AccountDistributionID AccountID Percent 12345 1 1 25.0 12345 1 2 25.0 12345 1 7 50.0 56789 2 3 25.0 56789 2 4 25.0 56789 2 5 25.0 56789 2 6 25.0 Table2 ------ ID VersionID Project ID AccountDistributionID AccountID Percent 1 1 12345 1 1 50.0 2 1 12345 1 2 50.0 3 2 56789 2 3 25.0 4 2 56789 2 4 25.0 5 2 56789 2 5 25.0 6 2 56789 2 6 25.0
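
    A sketch of one way to do the copy for a single distribution at a time, assuming SQL Server 2005, that Table2.ID is an identity column, and a hypothetical @DistID parameter; column names are taken from the sample layout above. EXCEPT in both directions catches added accounts, removed accounts and changed percentages, and the copy takes MAX(VersionID) + 1:

        DECLARE @DistID INT;
        SET @DistID = 1;   -- the distribution being considered

        IF EXISTS (
               -- rows in the incoming distribution that are not in its latest stored version
               SELECT AccountID, [Percent] FROM Table1
               WHERE AccountDistributionID = @DistID
               EXCEPT
               SELECT AccountID, [Percent] FROM Table2
               WHERE AccountDistributionID = @DistID
                 AND VersionID = (SELECT MAX(VersionID) FROM Table2
                                  WHERE AccountDistributionID = @DistID)
           )
           OR EXISTS (
               -- rows in the latest stored version that are no longer in the incoming distribution
               SELECT AccountID, [Percent] FROM Table2
               WHERE AccountDistributionID = @DistID
                 AND VersionID = (SELECT MAX(VersionID) FROM Table2
                                  WHERE AccountDistributionID = @DistID)
               EXCEPT
               SELECT AccountID, [Percent] FROM Table1
               WHERE AccountDistributionID = @DistID
           )
        BEGIN
            INSERT INTO Table2 (VersionID, ProjectID, AccountDistributionID, AccountID, [Percent])
            SELECT (SELECT ISNULL(MAX(VersionID), 0) + 1 FROM Table2),
                   ProjectID, AccountDistributionID, AccountID, [Percent]
            FROM Table1
            WHERE AccountDistributionID = @DistID;
        END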

  • What exactly is saved in SQL Server statistics? When do they get updated? Does SQL Server take care of them itself?

    - by Pritesh
    I have been working with SQL Server as a developer for a while. One thing I learnt is that SQL Server maintains statistics which help the engine create an optimized execution plan. What I could not figure out is what exactly is stored in statistics (I read that it saves a vector, but what vector?). When, or in which scenarios, does SQL Server update statistics? How and why do they sometimes go out of sync (stale statistics)? When statistics are stale, is manual DBA/developer intervention required, or will SQL Server get them updated on its own? And as a DBA/developer, how do I find out whether statistics are old, and what should we do about it?
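
    A few queries that help with the last question (the object and index names below are just examples). Each statistics object stores a header, a density vector and a histogram for its leading column(s), and its age can be read with STATS_DATE:

        -- When each statistics object on a table was last updated
        SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS LastUpdated, s.auto_created
        FROM sys.stats AS s
        WHERE s.object_id = OBJECT_ID('dbo.SomeTable');

        -- Header, density vector and histogram for one statistics object
        DBCC SHOW_STATISTICS ('dbo.SomeTable', 'IX_SomeTable_SomeColumn');

        -- Manual refresh when the automatic update has not kicked in yet
        UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;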

  • Evaluation of CTEs in SQL Server 2005

    - by Jammer
    I have a question about how MS SQL evaluates functions inside CTEs. A couple of searches didn't turn up any results related to this issue, but I apologize if this is common knowledge and I'm just behind the curve. It wouldn't be the first time :-) This query is a simplified (and obviously less dynamic) version of what I'm actually doing, but it does exhibit the problem I'm experiencing. It looks like this: CREATE TABLE #EmployeePool(EmployeeID int, EmployeeRank int); INSERT INTO #EmployeePool(EmployeeID, EmployeeRank) SELECT 42, 1 UNION ALL SELECT 43, 2; DECLARE @NumEmployees int; SELECT @NumEmployees = COUNT(*) FROM #EmployeePool; WITH RandomizedCustomers AS ( SELECT CAST(c.Criteria AS int) AS CustomerID, dbo.fnUtil_Random(@NumEmployees) AS RandomRank FROM dbo.fnUtil_ParseCriteria(@CustomerIDs, 'int') c) SELECT rc.CustomerID, ep.EmployeeID FROM RandomizedCustomers rc JOIN #EmployeePool ep ON ep.EmployeeRank = rc.RandomRank; DROP TABLE #EmployeePool; The following can be assumed about all executions of the above: The result of dbo.fnUtil_Random() is always an int value greater than zero and less than or equal to the argument passed in. Since it's being called above with @NumEmployees which has the value 2, this function always evaluates to 1 or 2. The result of dbo.fnUtil_ParseCriteria(@CustomerIDs, 'int') produces a one-column, one-row table that contains a sql_variant with a base type of 'int' that has the value 219935. Given the above assumptions, it makes sense (to me, anyway) that the result of the expression above should always produce a two-column table containing one record - CustomerID and an EmployeeID. The CustomerID should always be the int value 219935, and the EmployeeID should be either 42 or 43. However, this is not always the case. Sometimes I get the expected single record. Other times I get two records (one for each EmployeeID), and still others I get no records. However, if I replace the RandomizedCustomers CTE with a true temp table, the problem vanishes completely. Every time I think I have an explanation for this behavior, it turns out to not make sense or be impossible, so I literally cannot explain why this would happen. Since the problem does not happen when I replace the CTE with a temp table, I can only assume it has something to do with the functions inside CTEs are evaluated during joins to that CTE. Do any of you have any theories?
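
    For reference, a sketch of the temp-table variant described above, using the same names as the script: materialising the randomised rows once means dbo.fnUtil_Random() cannot be re-evaluated when the optimiser expands the CTE definition at the point of the join.

        -- Replaces the RandomizedCustomers CTE: evaluate the non-deterministic
        -- function exactly once per customer and persist the result.
        SELECT CAST(c.Criteria AS int)          AS CustomerID,
               dbo.fnUtil_Random(@NumEmployees) AS RandomRank
        INTO   #RandomizedCustomers
        FROM   dbo.fnUtil_ParseCriteria(@CustomerIDs, 'int') c;

        SELECT rc.CustomerID, ep.EmployeeID
        FROM   #RandomizedCustomers rc
        JOIN   #EmployeePool ep ON ep.EmployeeRank = rc.RandomRank;

        DROP TABLE #RandomizedCustomers;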

  • Migrate data from SQL Compact to SQL Server 2008

    - by Martin
    I need to do a one-time migration of data from SQL Server Compact Edition to SQL Server 2008 Express Edition. I'm looking for a tool to do this kind of migration. I've tried using Import and Export Data in SQL Server, but it doesn't let me import from SQL Server Compact Edition. Does anyone know of an easy way to do it?

  • Problem attaching mdf file in sql server 2008

    - by Fraz Sundal
    I have an .mdf file from a SQL Server 2005 database, and now I want to attach it in SQL Server 2008 R2. But when I try to attach it, it gives me an error saying: Unable to open the physical file "D:\Fraz\Freelance\Database\DBmdf13aug\mbh_pk.mdf". Operating system error 5: "5(Access is denied.)". (Microsoft SQL Server, Error: 5120) What can the problem be, and how do I fix it? Is this a folder permission error, or is SQL Server 2008 missing something?

  • SQL Server - VMWare install - Utilize more RAM

    - by alex
    We have a SQL Server machine - it’s a VMware image (running on ESXi hardware). It has Windows Server 2008 x64 Standard, and the SQL install is SQL Server 2008 Standard. The virtual machine has 12 GB of RAM and 4 virtual CPUs. The box is suffering from near 100% CPU a lot of the time. I enabled AWE, but SQL Server only seems to use 3-4 GB of RAM. Is there a way of making it use the available RAM more effectively - to cache results, for example?
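
    AWE is a 32-bit mechanism, so on an x64 instance it is not what controls memory use; the 'max server memory (MB)' setting is. A sketch of checking and raising it (the 10240 value is only an example that leaves headroom for the OS):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)';           -- show the current cap
        EXEC sp_configure 'max server memory (MB)', 10240;    -- e.g. 10 GB of the 12 GB
        RECONFIGURE;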

  • SQL server deadlock between INSERT and SELECT statement

    - by dtroy
    Hi! I've got a problem with multiple deadlocks on SQL server 2005. This one is between an INSERT and a SELECT statement. There are two tables. Table 1 and Table2. Table2 has Table1's PK (table1_id) as foreign key. Index on table1_id is clustered. The INSERT inserts a single row into table2 at a time. The SELCET joins the 2 tables. (it's a long query which might take up to 12 secs to run) According to my understanding (and experiments) the INSERT should acquire an IS lock on table1 to check referential integrity (which should not cause a deadlock). But, in this case it acquired an IX page lock The deadlock report: <deadlock-list> <deadlock victim="process968898"> <process-list> <process id="process8db1f8" taskpriority="0" logused="2424" waitresource="OBJECT: 5:789577851:0 " waittime="12390" ownerId="61831512" transactionname="user_transaction" lasttranstarted="2010-04-16T07:10:13.347" XDES="0x222a8250" lockMode="IX" schedulerid="1" kpid="3764" status="suspended" spid="52" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-04-16T07:10:13.350" lastbatchcompleted="2010-04-16T07:10:13.347" clientapp=".Net SqlClient Data Provider" hostname="VIDEV01-B-ME" hostpid="3040" loginname="DatabaseName" isolationlevel="read uncommitted (1)" xactid="61831512" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056"> <executionStack> <frame procname="DatabaseName.dbo.prcTable2_Insert" line="18" stmtstart="576" stmtend="1148" sqlhandle="0x0300050079e62d06e9307f000b9d00000100000000000000"> INSERT INTO dbo.Table2 ( f1, table1_id, f2 ) VALUES ( @p1, @p_DocumentVersionID, @p1 ) </frame> </executionStack> <inputbuf> Proc [Database Id = 5 Object Id = 103671417] </inputbuf> </process> <process id="process968898" taskpriority="0" logused="0" waitresource="PAGE: 5:1:46510" waittime="7625" ownerId="61831406" transactionname="INSERT" lasttranstarted="2010-04-16T07:10:12.717" XDES="0x418ec00" lockMode="S" schedulerid="2" kpid="1724" status="suspended" spid="53" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-04-16T07:10:12.713" lastbatchcompleted="2010-04-16T07:10:12.713" clientapp=".Net SqlClient Data Provider" hostname="VIDEV01-B-ME" hostpid="3040" loginname="DatabaseName" isolationlevel="read committed (2)" xactid="61831406" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056"> <executionStack> <frame procname="DatabaseName.dbo.prcGetList" line="64" stmtstart="3548" stmtend="11570" sqlhandle="0x03000500dbcec17e8d267f000b9d00000100000000000000"> <!-- XXXXXXXXXXXXXX...SELECT STATEMENT WITH Multiple joins including both Table2 table 1 and .... 
XXXXXXXXXXXXXXX --> </frame> </executionStack> <inputbuf> Proc [Database Id = 5 Object Id = 2126630619] </inputbuf> </process> </process-list> <resource-list> <pagelock fileid="1" pageid="46510" dbid="5" objectname="DatabaseName.dbo.table1" id="lock6236bc0" mode="IX" associatedObjectId="72057594042908672"> <owner-list> <owner id="process8db1f8" mode="IX"/> </owner-list> <waiter-list> <waiter id="process968898" mode="S" requestType="wait"/> </waiter-list> </pagelock> <objectlock lockPartition="0" objid="789577851" subresource="FULL" dbid="5" objectname="DatabaseName.dbo.Table2" id="lock970a240" mode="S" associatedObjectId="789577851"> <owner-list> <owner id="process968898" mode="S"/> </owner-list> <waiter-list> <waiter id="process8db1f8" mode="IX" requestType="wait"/> </waiter-list> </objectlock> </resource-list> </deadlock> </deadlock-list> Can anyone explain why the INSERT gets the IX page lock ? Am I not reading the deadlock report properly? BTW, I have not managed to reproduce this issue. Thanks!

  • Different ways to query this search in SQL?

    - by Bart Terrell
    I am teaching myself MS-SQL and I am trying to find different ways to find the Count of Paid and Unpaid Claims for 2012 grouped by Region from these 3 tables. If there is a returned date, the claim is unpaid if the returned date is null then the claim is paid. I will attach the code I have ran, but I am not sure if there are better ways to do it. Thanks. Here is the code: SET dateformat ymd; CREATE TABLE Claims ( ClaimID INT, SubID INT, [Claim Date] DATETIME ); CREATE TABLE Phoneship ( ClaimID INT, [Shipping Number] INT, [Claim Date] DATETIME, [Ship Date] DATETIME, [Returned Date] DATETIME ); CREATE TABLE Enrollment ( SubID INT, Enrollment_Date DATETIME, Channel NVARCHAR(255), Region NVARCHAR(255), Status FLOAT, Drop_Date DATETIME ); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (102, 201, '2011-10-13 00:00:00', '2011-10-14 00:00:00', NULL); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (103, 202, '2011-11-02 00:00:00', '2011-11-03 00:00:00', '2011-11-20 00:00:00'); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (103, 203, '2011-11-02 00:00:00', '2011-11-22 00:00:00', NULL); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (105, 204, '2012-01-16 00:00:00', '2012-01-17 00:00:00', NULL); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (106, 205, '2012-02-15 00:00:00', '2012-02-16 00:00:00', '2012-02-26 00:00:00'); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (106, 206, '2012-02-15 00:00:00', '2012-02-27 00:00:00', '2012-03-06 00:00:00'); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (107, 207, '2012-03-12 00:00:00', '2012-03-13 00:00:00', NULL); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (108, 208, '2012-05-11 00:00:00', '2012-05-12 00:00:00', NULL); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (109, 209, '2012-05-13 00:00:00', '2012-05-14 00:00:00', '2012-05-28 00:00:00'); INSERT INTO [Phoneship] ([ClaimID], [Shipping Number], [Claim Date], [Ship Date], [Returned Date]) VALUES (109, 210, '2012-05-13 00:00:00', '2012-05-30 00:00:00', NULL); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (101, 12345678, '2011-03-06 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (102, 12347190, '2011-10-13 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (103, 12348723, '2011-11-02 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (104, 12349745, '2011-11-09 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (105, 12347190, '2012-01-16 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (106, 12349234, '2012-02-15 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (107, 12350767, '2012-03-12 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (108, 12350256, '2012-05-11 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (109, 12347701, '2012-05-13 00:00:00'); INSERT INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (110, 12350256, '2012-05-15 00:00:00'); INSERT 
INTO [Claims] ([ClaimID], [SubID], [Claim Date]) VALUES (111, 12350767, '2012-06-30 00:00:00'); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12345678, '2011-01-05 00:00:00', 'Retail', 'Southeast', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12346178, '2011-03-13 00:00:00', 'Indirect Dealers', 'West', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12346679, '2011-05-19 00:00:00', 'Indirect Dealers', 'Southeast', 0, '2012-03-15 00:00:00'); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12347190, '2011-07-25 00:00:00', 'Retail', 'Northeast', 0, '2012-05-21 00:00:00'); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12347701, '2011-08-14 00:00:00', 'Indirect Dealers', 'West', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12348212, '2011-09-30 00:00:00', 'Retail', 'West', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12348723, '2011-10-20 00:00:00', 'Retail', 'Southeast', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12349234, '2012-01-06 00:00:00', 'Indirect Dealers', 'West', 0, '2012-02-14 00:00:00'); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12349745, '2012-01-26 00:00:00', 'Retail', 'Northeast', 0, '2012-04-15 00:00:00'); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12350256, '2012-02-11 00:00:00', 'Retail', 'Southeast', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12350767, '2012-03-02 00:00:00', 'Indirect Dealers', 'West', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12351278, '2012-04-18 00:00:00', 'Retail', 'Midwest', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12351789, '2012-05-08 00:00:00', 'Indirect Dealers', 'West', 0, '2012-07-04 00:00:00'); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12352300, '2012-06-24 00:00:00', 'Retail', 'Midwest', 1, NULL); INSERT INTO [Enrollment] ([SubID], [Enrollment_Date], [Channel], [Region], [Status], [Drop_Date]) VALUES (12352811, '2012-06-25 00:00:00', 'Retail', 'Southeast', 1, NULL); And Query1 SELECT Count(ClaimID) AS 'Paid Claim', (SELECT Count(ClaimID) FROM dbo.phoneship WHERE [returned date] IS NOT NULL) AS 'Unpaid Claim' FROM dbo.Phoneship WHERE [Returned Date] IS NULL GROUP BY claimid Query2 SELECT Count(*) AS 'Paid Claims', (SELECT Count(*) FROM dbo.Phoneship WHERE [Returned Date] IS NOT NULL) AS 'Unpaid Claims' FROM dbo.Phoneship WHERE [Returned Date] IS NULL; Query3 Select Distinct(C.[Shipping Number]), Count(C.ClaimID) AS 'COUNT ClaimID', A.Region, A.SubID From dbo.HSEnrollment A Inner Join dbo.Claims B On A.SubId = B.SubId Inner Join dbo.Phoneship C On B.ClaimID = C.ClaimID Where C.[Returned Date] IS NULL Group By A.Region, A.Subid, C.ClaimID, C.[Shipping Number] Order By A.Region
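
    Another variation, as a sketch against the tables created above: join to Enrollment for the region, restrict to 2012 claims, and use conditional aggregation so paid and unpaid counts come out of a single pass (counts here are per shipment row, as in the queries above).

        SELECT e.Region,
               SUM(CASE WHEN p.[Returned Date] IS NULL     THEN 1 ELSE 0 END) AS PaidClaims,
               SUM(CASE WHEN p.[Returned Date] IS NOT NULL THEN 1 ELSE 0 END) AS UnpaidClaims
        FROM Claims c
        JOIN Phoneship  p ON p.ClaimID = c.ClaimID
        JOIN Enrollment e ON e.SubID   = c.SubID
        WHERE c.[Claim Date] >= '2012-01-01' AND c.[Claim Date] < '2013-01-01'
        GROUP BY e.Region
        ORDER BY e.Region;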

  • A Batch of LinuxFest Northwest 2010 Videos at Montana Linux

    Montana Linux: "I recorded them with a Samsung SC-MX20 which is a very inexpensive / budget rig. The sound quality is fair to good considering the camera does not have the ability to use an external mic. The video quality is fair to good considering that most of the rooms had the lights turned off for viewing projected presentation slides."

  • Oracle SQL Developer / Data Modeler (Japanese-language post; most of the text was lost to mis-encoding)

    - by Yusuke.Yamamoto

    Posted 2010/05/18. The recoverable content points to Oracle SQL Developer Data Modeler seminar material: http://www.oracle.com/technology/global/jp/ondemand/otn-seminar/pdf/100518_sqldeveloper_evening.pdf

  • How views are changing in future versions of SQL

    - by Rob Farley
    April is here, and this weekend, SQL v11.0 (previous known as Denali, now known as SQL Server 2012) reaches general availability. And so I thought I’d share some news about what’s coming next. I didn’t hear this at the MVP Summit earlier this year (where there was lots of NDA information given, but I didn’t go), so I think I’m free to share it. I’ve written before about CTEs being query-scoped views. Well, the actual story goes a bit further, and will continue to develop in future versions. A CTE is a like a “temporary temporary view”, scoped to a single query. Due to globally-scoped temporary objects using a two-hashes naming style, and session-scoped (or ‘local’) temporary objects a one-hash naming style, this query-scoped temporary object uses a cunning zero-hash naming style. We see this implied in Books Online in the CREATE TABLE page, but as we know, temporary views are not yet supported in the SQL Server. However, in a breakaway from ANSI-SQL, Microsoft is moving towards consistency with their naming. We know that a CTE is a “common table expression” – this is proving to be a more strategic than you may have appreciated. Within the Microsoft product group, the term “Table Expression” is far more widely used than just CTEs. Anything that can be used in a FROM clause is referred to as a Table Expression, so long as it doesn’t actually store data (which would make it a Table, rather than a Table Expression). You can see this is not just restricted to the product group by doing an internet search for how the term is used without ‘common’. In the past, Books Online has referred to a view as a “virtual table” (but notice that there is no SQL 2012 version of this page). However, it was generally decided that “virtual table” was a poor name because it wasn’t completely accurate, and it’s typically accepted that virtualisation and SQL is frowned upon. That page I linked to says “or stored query”, which is slightly better, but when the SQL 2012 version of that page is actually published, the line will be changed to read: “A view is a stored table expression (STE)”. This change will be the first of many. During the SQL 2012 R2 release, the keyword VIEW will become deprecated (this will be SQL v11 SP1.5). Three versions later, in SQL 14.5, you will need to be in compatibility mode 140 to allow “CREATE VIEW” to work. Also consistent with Microsoft’s deprecation policy, the execution of any query that refers to an object created as a view (rather than the new “CREATE STE”), will cause a Deprecation Event to fire. This will all be in preparation for the introduction of Single-Column Table Expressions (to be introduced in SQL 17.3 SP6) which will finally shut up those people waiting for a decent implementation of Inline Scalar Functions. And of course, CTEs are “Common” because the Table Expression definition needs to be repeated over and over throughout a stored procedure. ...or so I think I heard at some point. Oh, and congratulations to all the new MVPs on this April 1st. @rob_farley

  • How can I uninstall SQL Server Express in Windows Server 2008

    - by Stallman
    I installed Windows Server 2008 as the OS, but I don't like the SQL Server Express that it provides by default at all, so I changed to SQL Server 2008 Enterprise. Here comes the problem: I don't know how to remove the SQL Server Express edition. In Programs and Features under Control Panel, I can't find the installation of SQL Server Express that the OS provides by default; all I can see is the SQL Server 2008 Enterprise edition. Any suggestions?

  • SQL Server 2008 R2 Enterprise database has unexpected 4GB database size limit

    - by Jesse
    I have SQL Server 2008 R2 Enterprise installed on a local Windows 7 x64 workstation. When I create a database on the server, it unexpectedly has a 4GB size limit (Database properties in SQL Server Management Studio say size = 3934.38 MB, space available = 47.13 MB). Unfortunately the database needs more than 4GB, and Enterprise is not supposed to have a practical maximum size. I confirmed the database is on the Enterprise server; SELECT @@VERSION returns: Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) The database file is not set to restrict growth in SQL Server Management Studio, and there is plenty of hard drive space. The database was copied from SQL Express (which has a 4GB limit), but the same occurs with a fresh database creation. I've spent a couple of hours trying to figure this out and Google-searching, to no avail. Any ideas?
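
    One thing worth ruling out with a query rather than the SSMS dialog is a file-level MAXSIZE cap, which a copied database keeps and a new database can inherit from the model database. A sketch (the database and logical file names are examples):

        -- Per-file size and cap for the current database (max_size is in 8 KB pages; -1 = unlimited)
        SELECT name, physical_name, size / 128 AS SizeMB,
               CASE max_size WHEN -1 THEN NULL ELSE max_size / 128 END AS MaxSizeMB
        FROM sys.database_files;

        -- Remove the cap if one is set
        ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_Data', MAXSIZE = UNLIMITED);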

  • SQL 2008 publisher -> SQL 2000 subscriber: Is a pull subscription possible for merge replication?

    - by Brian Dunzweiler
    I am trying to synchronize a SQL 2000 SP4 subscriber to a SQL 2008 publisher via a merge pull subscription. When the subscriber tries to run the merge agent, it fails with the following error: The process could not connect to Distributor 'OH05DBS002\SAM_SSG_2008'. SQL Server does not exist or access denied. Has anyone had success with this setup? I was able to create and synchronize a push subscription, so I know that communication works between the two, at least from 2008 to 2000. The lack of communication from 2000 to 2008 also affects the ability to create a linked server on the SQL 2000 subscriber. One other tidbit - I did install the SQL 2008 native client on the 2000 box, but it didn't help either. Before anyone asks, I can't upgrade the subscriber as it still needs to support replication with MS Access 2003. Yeah, I know. :) TIA, Brian

  • SQL Server Installer Closes Silently Without Errors

    - by ashes999
    When I run the SQL Server 2008 R2 Express installer (32-bit or 64-bit), nothing happens. It will get to the screen where it asks me to accept the terms of license and send anonymous feedback; I check off both boxes and click Next, and it automatically starts installing the Support Files. And then the window disappears. I tried looking through the log files, but didn't see any errors. I've tried: x64 SQL Server R2 express x86 SQL Server R2 express X64 SQL Server R2 (full) X64 SQL Server Management Studio Of these, only management studio installed correctly; the two express editions failed silently, and the full version gave me different errors. I also tried running in Administrator mode (despite being logged in as an administrator user), but again, no difference.

  • Is Query Performance different for different versions of SQL Server?

    - by Ronak Mathia
    I have three UPDATE queries in my stored procedure, against three different tables. Each table contains almost 200,000 records and all records have to be updated. I am using indexing to speed up performance. It works quite well with SQL Server 2008: the stored procedure takes only 12 to 15 minutes to execute (updating almost 1000 rows per second in all three tables). But when I run the same scenario with SQL Server 2008 R2, the stored procedure takes much longer to complete - about 55 to 60 minutes (updating almost 100 rows per second in all three tables). I couldn't find any reason or solution for that. I have also tested the same scenario with SQL Server 2012, but the result is the same as above. Please give suggestions.
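
    The usual first checks after moving a database to a newer version are the compatibility level and the freshness of statistics, since either can push the optimiser onto a very different plan. A sketch (the database name is an example):

        SELECT name, compatibility_level FROM sys.databases WHERE name = 'MyDb';

        ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 100;   -- 100 = SQL Server 2008 / 2008 R2

        USE MyDb;
        EXEC sp_updatestats;   -- refresh statistics after the move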

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft’s offices in London Victoria. As usual it was both a fun and informative evening and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now. Scene setting A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS lookup component (when using FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow’s execution which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there’s an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible. General tips Here is a simple tick list you can follow in order to tune your lookups: Use a SQL statement to charge your cache, don’t just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?) Only pick the columns that you need, ignore everything else Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length. Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR so if you can use DT_STR, consider doing so. Same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8. Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause! Thinking outside the box It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little and here are a few ideas to get your creative juices flowing: There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow. Know your data. 
If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset and the remaining data that doesn’t find a match could be passed to a second lookup that perhaps uses a NoCache lookup thus negating the need to pull all of that least-used lookup data into memory. Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you’ll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop. Taking the previous data partitioning idea further … a dataflow can contain more than one data path so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition. Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all once you have the key column(s) from your lookup set then you can use that key to get the rest of attributes further downstream, perhaps even in another dataflow. Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond. Above all, experiment and be creative with different combinations. You may be surprised at what works. Final  thoughts If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that’s a lot of work that wouldn’t have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!
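
    To make the "narrow, filtered source" tips concrete, a sketch of the kind of statement the post argues for when charging a Lookup cache (table, column and region names are examples only): just the columns the lookup needs, cast as narrow as the data allows, and restricted to the partition currently being processed.

        SELECT CustomerKey,                                         -- the join key the Lookup matches on
               CAST(CustomerCode AS VARCHAR(10)) AS CustomerCode    -- narrow DT_STR instead of DT_WSTR
        FROM dbo.DimCustomer
        WHERE Region = 'North';   -- per-partition value, built dynamically per loop iteration as described above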

  • Howto use Windows Authentication with SQL Server 2008 Express on a workgroup network?

    - by mbadawi23
    I have two computers running SQL Server 2008 Express: c01 and c02, I setup both for remote connection using windows authentication. Worked fine for c02 but not for c01. This is the error message I'm getting: TITLE: Connect to Server Cannot connect to ACAMP001\SQLEXPRESS. ADDITIONAL INFORMATION: Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=18452&LinkId=20476 BUTTONS: OK I don't know if I'm missing something, here is what I did: Enabled TCP/IP protocol for client from Sql Server Configuration Manager. Modified Windows firewall exceptions for respective ports. Started the Sql Browser service as a local service Added Windows user to this group: "SQLServerMSSQLUser$c01$SQLEXPRESS" From Management Studio, I added "SQLServerMSSQLUser$c01$SQLEXPRESS" to SQLEXPRESS instance's logins under security folder, and I granted sysadmin permissions to it. Restarted c01\SQLEXPRESS Restarted Sql Browser service. There is no domain here. It's only a workgroup. Please any help is appreciated, Thank you.
