Search Results

Search found 544 results on 22 pages for 'clustered'.

Page 15/22 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • List of all index & index columns in SQL Server DB

    - by Anton Gogolev
    How do I get a list of all indexes and index columns in SQL Server 2005+? The closest I could get is:

        select s.name, t.name, i.name, c.name
        from sys.tables t
        inner join sys.schemas s on t.schema_id = s.schema_id
        inner join sys.indexes i on i.object_id = t.object_id
        inner join sys.index_columns ic on ic.object_id = t.object_id
        inner join sys.columns c on c.object_id = t.object_id and ic.column_id = c.column_id
        where i.index_id > 0
          and i.type in (1, 2)            -- clustered & nonclustered only
          and i.is_primary_key = 0        -- do not include PK indexes
          and i.is_unique_constraint = 0  -- do not include UQ
          and i.is_disabled = 0
          and i.is_hypothetical = 0
          and ic.key_ordinal > 0
        order by ic.key_ordinal

    which is not exactly what I want. What I want is to list all user-defined indexes (that is, no indexes that back unique constraints or primary keys) with all their columns (ordered by how they appear in the index definition), plus as much metadata as possible.
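
    A minimal T-SQL sketch of one way to extend the query above: it also matches on index_id (the join above omits it, so rows multiply across a table's indexes) and pulls a little more metadata from sys.indexes and sys.index_columns.

        select s.name as schema_name, t.name as table_name, i.name as index_name,
               i.type_desc, i.is_unique, ic.key_ordinal, ic.is_included_column,
               ic.is_descending_key, c.name as column_name
        from sys.tables t
        inner join sys.schemas s        on s.schema_id = t.schema_id
        inner join sys.indexes i        on i.object_id = t.object_id
        inner join sys.index_columns ic on ic.object_id = i.object_id
                                       and ic.index_id  = i.index_id
        inner join sys.columns c        on c.object_id  = ic.object_id
                                       and c.column_id  = ic.column_id
        where i.type in (1, 2)
          and i.is_primary_key = 0
          and i.is_unique_constraint = 0
          and i.is_hypothetical = 0
        order by s.name, t.name, i.name, ic.key_ordinal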

    Read the article

  • EJB3.1 Remote invocation - is it distributed automatically? is it expensive?

    - by Hank
    I'm building a JEE6 application with performance and scalability at the forefront of my mind. Business logic and the JPA2 facade are held in stateless session beans (EJB3.1). As of right now, the SLSBs implement only @Remote interfaces. When a bean needs to access another bean, it does so via RMI. My reasoning behind this is the assumption that, once the application runs on a bunch of clustered application servers, the RMI part allows the execution to be distributed across the whole cluster automagically. Is that a correct assumption? I'm fine with dealing with the downsides of that (objects lose the entityManager session, pass-by-value), at least I think so. But I am wondering if constant remote invocation isn't adding more load than necessary.

    Read the article

  • SQL Server indexed view matching of views with joins not working

    - by usr
    Does anyone have experience of when SQL Server 2008 R2 is able to automatically match indexed views (also known as materialized views) that contain joins to a query? For example, the view select dbo.Orders.Date, dbo.OrderDetails.ProductID from dbo.OrderDetails join dbo.Orders on dbo.OrderDetails.OrderID = dbo.Orders.ID cannot be automatically matched to the exact same query. When I select directly from this view with (noexpand), I actually get a much faster query plan that does a scan on the clustered index of the indexed view. Can I get SQL Server to do this matching automatically? I have quite a few queries and views... I am on the Enterprise edition of SQL Server 2008 R2.
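
    A hedged T-SQL sketch of the workaround mentioned in the question: reference the indexed view directly with the NOEXPAND hint so the materialized rows are used even when automatic matching does not kick in. The view name OrdersByProduct and the filter value are made up; the columns come from the view definition above.

        select v.Date, v.ProductID
        from dbo.OrdersByProduct v with (noexpand)   -- hypothetical view name
        where v.ProductID = 42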

    Read the article

  • Is there a way to rewrite the SQL query efficiently

    - by user320587
    Hi, I have two tables with the following definition (as posted):

        TableA                           TableB
        ID1  ID2  ID3  Value1  Value     ID1  Value1
        C1   P1   S1                     S1
        C1   P1   S2                     S2
        C1   P1   S3                     S3
        C1   P1   S5                     S4
                                         S5

    The values are just examples. TableA has a clustered primary key on ID1, ID2 & ID3, and TableB has a primary key on ID1. I need to create a table that has the records missing from TableA, based on TableB. The select query I am trying to create should give the following output: C1 P1 S4. To do this, I have the following SQL query:

        SELECT DISTINCT TableA.ID1, TableA.ID2, TableB.ID1
        FROM TableA a, TableB b
        WHERE TableB.ID1 NOT IN (
            SELECT DISTINCT [ID3]
            FROM TableA aa
            WHERE a.ID1 = aa.ID1 AND a.ID2 = aa.ID2
        )

    Though this query works, it performs poorly, and my final TableA may have up to 1M records. Is there a way to rewrite this more efficiently? Thanks for any help, Javid
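
    A hedged T-SQL sketch of one common rewrite, using the table and column names from the question (the MissingID3 alias is made up): build the distinct (ID1, ID2) pairs once, cross join them to TableB, and keep only combinations with no matching ID3 in TableA. NOT EXISTS usually lets the optimizer seek the clustered key (ID1, ID2, ID3) directly.

        select p.ID1, p.ID2, b.ID1 as MissingID3
        from (select distinct ID1, ID2 from TableA) p
        cross join TableB b
        where not exists (
                select 1
                from TableA a
                where a.ID1 = p.ID1
                  and a.ID2 = p.ID2
                  and a.ID3 = b.ID1
              );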

    Read the article

  • Why is doing a top(1) on an indexed column in SQL Server slow?

    - by reinier
    I'm puzzled by the following. I have a DB with around 10 million rows, and (among other indices) there is an index on one column (campaignid_int). Now I have 700k rows where the campaignid is indeed 3835. For all these rows, the connectionid is the same; I just want to find out this connectionid. use messaging_db; SELECT TOP (1) connectionid FROM outgoing_messages WITH (NOLOCK) WHERE (campaignid_int = 3835) Now this query takes approx 30 seconds to perform! I (with my small DB knowledge) would expect that it would take any of the rows and return me that connectionid. If I test this same query for a campaign which only has 1 entry, it goes really fast, so the index works. How would I tackle this, and why does this not work? Edit: estimated execution plan: select (0%) - top (0%) - clustered index scan (100%)
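
    A hedged sketch (assuming SQL Server 2005 or later; the index name is made up) of a covering index that lets the engine seek on campaignid_int and read connectionid from the index leaf, instead of scanning the clustered index:

        create nonclustered index IX_outgoing_messages_campaignid
            on dbo.outgoing_messages (campaignid_int)
            include (connectionid);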

    Read the article

  • Random number within a range based on a normal distribution

    - by ConfusedAgain
    I'm math challenged today :( I want to generate random numbers within a range (n to m, e.g. 100 to 150), but instead of purely random I want the results to follow a normal distribution. By this I mean that in general I want the numbers "clustered" around 125. I've found this random number package that seems to have a lot of what I need: http://beta.codeproject.com/KB/recipes/Random.aspx It supports a variety of random generators (including the Mersenne Twister) and can apply the generator to a distribution. But I'm confused... if I use a normal distribution generator the random numbers are from roughly -6 to +8 (apparently the true range is float.min to float.max). How do I scale that to my required range?
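
    A small worked example of one common approach (an assumption, since the question leaves the spread open: here ±3 standard deviations are mapped onto the 100-150 range, and the rare values outside it are clamped or redrawn):

        z ~ N(0, 1)                     (the generator's standard-normal output)
        x = 125 + (25 / 3) * z          (mean 125, sigma ≈ 8.33, so 3*sigma ≈ 25)
        x = min(max(x, 100), 150)       (clamp, or redraw, the ~0.3% of outliers)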

    Read the article

  • Reverse massive text file in Java

    - by DanJanson
    What would be the best approach to reverse a large text file that is uploaded asynchronously to a servlet, in a scalable and efficient way? The text file can be massive (gigabytes long). A multiple-server/clustered environment can be assumed, so this can be done in a distributed manner, and open source libraries are welcome. I was thinking of using Java NIO to treat the file as an array on disk (so that I don't have to hold it as a string buffer in memory). Also, I am thinking of using MapReduce to break up the file and process it on separate machines. Any input is appreciated. Thanks. Daniel

    Read the article

  • Using "CASE" in Where clause to choose various column harm the performance

    - by zivgabo
    I have a query which needs to be dynamic on some of the columns, meaning I get a parameter and, according to its value, decide which column to filter on in my WHERE clause. I've implemented this using a "CASE" expression:

        (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) >= DATEADD(mi, -@TZOffsetInMins, @sTime)
        AND (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) < DATEADD(mi, -@TZOffsetInMins, @fTime)

    If @isArrivalTime = 1, choose the ArrivalTime column, else choose the PickedupTime column. I have a clustered index on ArrivalTime and a nonclustered index on PickedupTime. I've noticed that when I'm using this query (with @isArrivalTime = 1), performance is a lot worse compared to filtering on ArrivalTime only. Maybe the query optimizer can't use or choose the index properly this way? I compared the execution plans and noticed that with the CASE, 32% of the time goes to an index scan, but without the CASE (just using ArrivalTime) only 3% goes to that index scan. Does anyone know the reason for this?
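
    A hedged T-SQL sketch of one common workaround (the table name Trips and the select list are placeholders; only the predicate shape is the point): split the query into two branches, so each predicate stays sargable and can seek its own index.

        if @isArrivalTime = 1
        begin
            select t.*
            from dbo.Trips t    -- hypothetical table name
            where t.ArrivalTime >= dateadd(mi, -@TZOffsetInMins, @sTime)
              and t.ArrivalTime <  dateadd(mi, -@TZOffsetInMins, @fTime);
        end
        else
        begin
            select t.*
            from dbo.Trips t
            where t.PickedupTime >= dateadd(mi, -@TZOffsetInMins, @sTime)
              and t.PickedupTime <  dateadd(mi, -@TZOffsetInMins, @fTime);
        end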

    Read the article

  • Error creating a table : "There is already an object named ... in the database", but not object with

    - by DavRob60
    Hi, I'm trying to create a table on Microsoft SQL Server 2005 (Express). When I run this query

        USE [QSWeb]
        GO
        /****** Object: Table [dbo].[QSW_RFQ_Log] Script Date: 03/26/2010 08:30:29 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        SET ANSI_PADDING ON
        GO
        CREATE TABLE [dbo].[QSW_RFQ_Log](
            [RFQ_ID] [int] NOT NULL,
            [Action_Time] [datetime] NOT NULL,
            [Quote_ID] [int] NULL,
            [UserName] [nvarchar](256) NOT NULL,
            [Action] [int] NOT NULL,
            [Parameter] [int] NULL,
            [Note] [varchar](255) NULL,
            CONSTRAINT [QSW_RFQ_Log] PRIMARY KEY CLUSTERED
            (
                [RFQ_ID] ASC,
                [Action_Time] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

    I get this error message:

        Msg 2714, Level 16, State 4, Line 2
        There is already an object named 'QSW_RFQ_Log' in the database.
        Msg 1750, Level 16, State 0, Line 2
        Could not create constraint. See previous errors.

    But if I try to find the object in question using this query:

        SELECT * FROM QSWEB.sys.all_objects WHERE upper(name) like upper('QSW_RFQ_%')

    I get (0 row(s) affected). What is going on?
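
    One detail visible in the script above: the primary key constraint is named [QSW_RFQ_Log], the same as the table it belongs to, and since constraints share the object namespace with tables this by itself can raise Msg 2714. A hedged T-SQL sketch for confirming what, if anything, already carries that name in the target database:

        use QSWeb;
        select name, type_desc, schema_name(schema_id) as schema_name
        from sys.objects
        where name = 'QSW_RFQ_Log';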

    Read the article

  • Do I need to enable DRS to use Dynacache in Websphere Application Server Cluster

    - by rabs
    We are running a WebSphere Commerce application with several WebSphere Application Server instances configured in a cluster. We are using DynaCache, so each server in the cluster has its own cached objects in its own JVM. We are using CACHEIVL with database triggers for all cache invalidations. I was reading http://www.ibm.com/developerworks/websphere/library/techarticles/0603_crick/0603_crick.html and found an interesting sentence: "Furthermore, cache replication is necessary to ensure that invalidation messages are shared between the servers in a cluster." After thinking about this, it would make sense that for the invalidation to work it would need to be triggered on all the servers in the cluster, but I couldn't find confirmation of this in the mountains of IBM documentation. Does anyone know if you can use trigger-based cache invalidation (through CACHEIVL) when you have several clustered application servers, each with its own cache, without DRS turned on? Or do I need DRS for this to work?

    Read the article

  • Int PK inner join Vs Guid PK inner Join on SQL Server. Execution plan.

    - by bigb
    I just did some testing of an int PK join vs. a GUID PK join. The table structures and record counts are shown in the attached image. Performance of CRUD operations using EF4 is pretty similar in both cases. As we know, an int PK performs better than a string-based one. The SQL Server execution plans with INNER JOINs, however, are quite different; here is an execution plan (also attached). As I understand the attached execution plan, the int join performs better because its clustered index scan takes fewer resources and the plan goes in two ways. Am I right? Maybe someone can explain this execution plan in more detail?

    Read the article

  • MS SQL Server 2000 tables

    - by klork
    We currently have an MS SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into a one-table-per-user model. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647 (considering just positive values). My questions: 1) Does anyone have any experience with an MS SQL Server (2000/2005) installation with millions of tables? 2) What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager, etc.? 3) What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

    Read the article

  • Is the time cost constant when bulk inserting data into an indexed table?

    - by SiLent SoNG
    I have created an archive table which will store data for selecting only. Daily, a program will transfer a batch of records into the archive table. Several columns are indexed, while others are not. I am concerned with the time cost per batch insertion:

        - 1st batch insertion: N1
        - 2nd batch insertion: N2
        - 3rd batch insertion: N3

    The question is: will N1, N2, and N3 be roughly the same, or will N3 > N2 > N1? That is, will the time cost be constant or incremental, given the existence of several indexes? All indexes are non-clustered. The archive table structure is this:

        create table document (
            doc_id   int unsigned primary key,
            owner_id int,            -- indexed
            title    smalltext,
            country  char(2),
            year     year(4),
            time     datetime,
            key ix_owner(owner_id)
        )

    Read the article

  • Sql Server performance

    - by Jose
    I know that I can't get a specific answer to my question, but I would like to know if there are tools that can get me to the answer. We have a SQL Server 2008 database that, for the last 4 days, has had moments where it becomes unresponsive for specific queries for 5-20 minutes. For example, the following queries, run simultaneously in different query windows, have these results:

        SELECT * FROM Assignment   -- hangs indefinitely
        SELECT * FROM Invoice      -- works fine

    Many of the tables have non-clustered indexes to help speed up SELECTs. Here's what I know: 1) The same query will either hang indefinitely or run normally. 2) In Activity Monitor, on the processes tab, there are normally around 80-100 processes running. I think that what's happening is: 1) a user updates a table, 2) this causes one or more indexes to get updated, 3) another user issues a select while the index is updating. Is there a way I can figure out why, at a specific moment in time, SQL Server is unresponsive for a specific query?
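
    A hedged T-SQL sketch of one way to look at that specific moment (SQL Server 2008 dynamic management views; the 5-second wait threshold is an arbitrary assumption): run it while a query is hanging and see what each session is blocked by and waiting on.

        select r.session_id,
               r.blocking_session_id,
               r.wait_type,
               r.wait_time,          -- milliseconds
               r.command,
               t.text as running_sql
        from sys.dm_exec_requests r
        cross apply sys.dm_exec_sql_text(r.sql_handle) t
        where r.blocking_session_id <> 0
           or r.wait_time > 5000;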

    Read the article

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

        ObjectID  bigint
        Tag       nvarchar(50)
        Weight    float
        Type      tinyint

    I want to search for all objects that have the tag 'big' or 'large', and I want the objectid values ordered by the sum of the weights (so objects having both tags will be on top):

        select objectid, row_number() over (order by sum(weight) desc) as rowid
        from tags
        where tag in ('big', 'large') and type = 0
        group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow; it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?
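
    A hedged sketch of an index shaped for this query (the index name is made up; assuming SQL Server 2005+ for INCLUDE): keying on tag and type matches the WHERE clause, and carrying objectid and weight in the leaf level keeps the aggregate from touching the base table.

        create nonclustered index IX_tags_tag_type
            on dbo.tags (tag, type)
            include (objectid, weight);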

    Read the article

  • Error: Too Many Arguments Specified when Inserting Values from ASP.NET to SQL Server

    - by SidC
    Good Afternoon All, I have a wizard control that contains 20 textboxes for part numbers and another 20 for quantities. I want the part numbers and quantities loaded into the following table:

        USE [Diel_inventory]
        GO
        /****** Object: Table [dbo].[QUOTEDETAILPARTS] Script Date: 05/09/2010 16:26:54 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[QUOTEDETAILPARTS](
            [QuoteDetailPartID] [int] IDENTITY(1,1) NOT NULL,
            [QuoteDetailID] [int] NOT NULL,
            [PartNumber] [float] NULL,
            [Quantity] [int] NULL,
            CONSTRAINT [pkQuoteDetailPartID] PRIMARY KEY CLUSTERED
            (
                [QuoteDetailPartID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[QUOTEDETAILPARTS] WITH CHECK ADD CONSTRAINT [fkQuoteDetailID] FOREIGN KEY([QuoteDetailID])
        REFERENCES [dbo].[QUOTEDETAIL] ([ID])
        ON UPDATE CASCADE
        ON DELETE CASCADE
        GO

    Here's the snippet from my sproc for this insert:

        set @ID=scope_identity()

        Insert into dbo.QuoteDetailParts (QuoteDetailPartID, QuoteDetailID, PartNumber, Quantity)
        values (@ID, @QuoteDetailPartID, @PartNumber, @Quantity)

    When I run the ASPX page, I receive an error that there are too many arguments specified for my stored procedure. I understand why I'm getting the error, given the above table layout. However, I need help in structuring my insert syntax to look for values in all 20 PartNumber and Quantity field pairs. Thanks, Sid

    Read the article

  • Will creating index help in this case

    - by The King
    I'm still a learning user of SQL Server 2005. Here is my table structure:

        CREATE TABLE [dbo].[Trn_PostingGroups](
            [ControlGroup] [char](5) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
            [PracticeCode] [char](5) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
            [ScanDate] [smalldatetime] NULL,
            [DepositDate] [smalldatetime] NULL,
            [NameOfFile] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
            [DepositValue] [decimal](11, 2) NULL,
            [RecordStatus] [char](1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
            CONSTRAINT [PK_Trn_PostingGroups_1] PRIMARY KEY CLUSTERED
            (
                [ControlGroup] ASC,
                [PracticeCode] ASC
            ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
        ) ON [PRIMARY]

    Scenario 1: Suppose I have a query like this...

        Select * from Trn_PostingGroups where PracticeCode = 'ABC'

    Will indexing PracticeCode separately help make my query faster?

    Scenario 2:

        Select * from Trn_PostingGroups where ControlGroup = 12701 and PracticeCode = 'ABC' and NameOfFile = 'FileName1'

    Will indexing NameOfFile separately help make my query faster?
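
    A hedged sketch of the indexes in question (the names are made up): because the clustered key leads with ControlGroup, a filter on PracticeCode alone cannot seek it, so a separate index helps scenario 1; for scenario 2 the clustered key already matches ControlGroup and PracticeCode, and adding NameOfFile is only worth it if that extra filter is very selective.

        create nonclustered index IX_Trn_PostingGroups_PracticeCode
            on dbo.Trn_PostingGroups (PracticeCode);               -- scenario 1

        create nonclustered index IX_Trn_PostingGroups_Practice_File
            on dbo.Trn_PostingGroups (PracticeCode, NameOfFile);   -- scenario 2, if needed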

    Read the article

  • FreeText Query is slow - includes TOP and Order By

    - by Eric P
    The Product table has 700K records in it. The query

        SELECT TOP 1 ID, Name
        FROM Product
        WHERE contains(Name, '"White Dress"')
        ORDER BY DateMadeNew desc

    takes about 1 minute to run. There is a non-clustered index on DateMadeNew and a full-text index on Name. If I remove TOP 1 or the ORDER BY, it takes less than 1 second to run. Here is the link to the execution plan: http://screencast.com/t/ZDczMzg5N. It looks like FullTextMatch has over 400K executions. Why is this happening? How can it be made faster?
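
    A hedged T-SQL sketch of one common rewrite (assuming ID is the table's full-text key column): resolve the full-text predicate once through CONTAINSTABLE, then let the sort run only over the matching rows.

        select top (1) p.ID, p.Name
        from containstable(Product, Name, '"White Dress"') ft
        join Product p on p.ID = ft.[KEY]
        order by p.DateMadeNew desc;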

    Read the article

  • Auto Generated - CREATE Table Script in SQL 2008 throws error

    - by jack
    I scripted the tables in my dev database using the SQL 2008 Generate Scripts option (right-click the database - Tasks - Generate Scripts) and ran the script on the staging database, but it throws the error below for each table:

        Msg 102, Level 15, State 1, Line 1
        Incorrect syntax near '('.
        Msg 319, Level 15, State 1, Line 15
        Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.

    Below is the script for one of my tables:

        ALTER TABLE [dbo].[Customer](
            [CustomerID] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [nvarchar](500) NULL,
            [LastName] [nvarchar](500) NULL,
            [DateOfBirth] [datetime] NULL,
            [EmailID] [nvarchar](200) NULL,
            [ContactForOffers] [bit] NULL,
            CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED
            (
                [CustomerID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

    Any help will be much appreciated.

    Read the article

  • Oracle: TABLE ACCESS FULL with Primary key?

    - by tim
    There is a table:

        CREATE TABLE temp
        (
            IDR decimal(9) NOT NULL,
            IDS decimal(9) NOT NULL,
            DT date NOT NULL,
            VAL decimal(10) NOT NULL,
            AFFID decimal(9),
            CONSTRAINT PKtemp PRIMARY KEY (IDR, IDS, DT)
        );

        SQL> explain plan for select * from temp;

        Explained.

        SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));

        PLAN_TABLE_OUTPUT
        ---------------------------------------------------------------
        | Id | Operation         | Name | Rows | Bytes | Cost (%CPU) |
        ---------------------------------------------------------------
        |  0 | SELECT STATEMENT  |      |    1 |    61 |       2 (0) |
        |  1 | TABLE ACCESS FULL | TEMP |    1 |    61 |       2 (0) |
        ---------------------------------------------------------------

        Note
        -----
        - 'PLAN_TABLE' is old version

        11 rows selected.

    SQL Server 2008 shows a clustered index scan in the same situation. What is the reason?

    Read the article

  • Where's the rest of the space used in this table?

    - by Eric H.
    I'm using SQL Server 2005. I have a table whose row size should be 124 bytes. It's all ints or floats, with no NULL columns (so everything is fixed width). There is only one index, clustered. The fill factor is 0. After inserting a ton of data, sp_spaceused returns the following:

        name          rows        reserved      data          index_size  unused
        OHLC_Bar_Trl  117076054   29807664 KB   29711624 KB   92344 KB    3696 KB

    which shows a row size of approximately (29807664 * 1024) / 117076054 = 260 bytes/row. Where's the rest of the space? Is there some DBCC command I need to run to tighten up this table? (I could not insert the rows in correct index order, so maybe it's just internal fragmentation.)
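
    A hedged sketch of one way to check that theory and compact the table (SQL Server 2005; assuming the table lives in the dbo schema): a low avg_page_space_used_in_percent would point to internal fragmentation, and rebuilding the clustered index rewrites the pages.

        select index_id,
               avg_fragmentation_in_percent,
               avg_page_space_used_in_percent
        from sys.dm_db_index_physical_stats(db_id(), object_id('dbo.OHLC_Bar_Trl'), null, null, 'SAMPLED');

        alter index all on dbo.OHLC_Bar_Trl rebuild;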

    Read the article

  • Query Execution Plan - When is the Where clause executed?

    - by Alex
    I have a query like this (created by LINQ):

        SELECT [t0].[Id], [t0].[CreationDate], [t0].[CreatorId]
        FROM [dbo].[DataFTS]('test', 100) AS [t0]
        WHERE [t0].[CreatorId] = 1
        ORDER BY [t0].[RANK]

    DataFTS is a full-text search table-valued function. The query execution plan looks like this:

        SELECT (0%) - Sort (23%) - Nested Loops (Inner Join) (1%) - Sort (Top N Sort) (25%) - Stream Aggregate (0%) - Stream Aggregate (0%) - Compute Scalar (0%) - Table Valued Function (FullTextMatch) (13%) | | - Clustered Index Seek (38%)

    Does this mean that the WHERE clause ([CreatorId] = 1) is executed prior to the TVF (the full-text search) or after the full-text search? Thank you.

    Read the article

  • Association in Entity Framework 4

    - by Marsharks
    I have two tables, a problem table and a problem history table. As you would expect, a problem can have many history rows associated with it.

        CREATE TABLE [dbo].[Problem](
            [Last_Update] [datetime] NULL,
            [Problem_Id] [int] NOT NULL,
            [Incident_Count] [int] NULL
        )

        ALTER TABLE [dbo].[Problem] ADD CONSTRAINT [PK_Problem] PRIMARY KEY CLUSTERED
        (
            [Problem_Id] ASC
        )

        CREATE TABLE [dbo].[Problem_History](
            [Last_Update] [datetime] NULL,
            [Problem_Id] [int] NOT NULL,
            [Severity_Chg_Flag] [char](1) NULL
        )

        ALTER TABLE [dbo].[Problem_History] ADD [Create_DateTime] [datetime] NOT NULL

        ALTER TABLE [dbo].[Problem_History] WITH CHECK ADD CONSTRAINT [FK_Problem_History_Problem] FOREIGN KEY([Problem_Id])
        REFERENCES [dbo].[Problem] ([Problem_Id])

    The problem is that when I drag this into an Entity Model, the association is not included. Any ideas? I would like to point out that the problem history table has no separate key of its own; it shares the problem id.

    Read the article

  • How do I stream rows from an MSSQL table into a .NET app, in 10000-row chunks?

    - by Gravitas
    I have a table with 300 million rows in Microsoft SQL Server 2008 R2. There is a clustered index on the date column [DataDate], which means the entire table is ordered by that column. How do I stream the data from this table into my .NET application in 10,000-row chunks? Environment: C#. I have to be able to pause the data stream at any point, to allow the client to process the rows. Unfortunately, I cannot use a plain select * from the table, as this would select the entire table (it's 50 GB; it won't fit into memory).
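
    A hedged T-SQL sketch of the keyset (seek) pattern on the SQL side (the table name BigData and the Id tie-breaker column are made up; a tie-breaker is only needed if DataDate is not unique): each round trip seeks the clustered [DataDate] index for the next 10,000 rows after the last key the client remembered, so the client can pause between chunks. On the .NET side, an ordinary SqlDataReader also streams rows as they are read rather than buffering the whole result.

        select top (10000) *
        from dbo.BigData
        where DataDate > @lastDataDate
           or (DataDate = @lastDataDate and Id > @lastId)
        order by DataDate, Id;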

    Read the article

  • T-SQL Self Join in combination with aggregate function

    - by Nick
    Hi, I have the following table:

        CREATE TABLE [dbo].[Tree](
            [AutoID] [int] IDENTITY(1,1) NOT NULL,
            [Category] [varchar](10) NULL,
            [Condition] [varchar](10) NULL,
            [Description] [varchar](50) NULL,
            CONSTRAINT [PK_Tree] PRIMARY KEY CLUSTERED
            (
                [AutoID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

    The data looks like this:

        INSERT INTO [Test].[dbo].[Tree] ([Category], [Condition], [Description]) VALUES ('1','Alpha','Type 1')
        INSERT INTO [Test].[dbo].[Tree] ([Category], [Condition], [Description]) VALUES ('1','Alpha','Type 1')
        INSERT INTO [Test].[dbo].[Tree] ([Category], [Condition], [Description]) VALUES ('2','Alpha','Type 2')
        INSERT INTO [Test].[dbo].[Tree] ([Category], [Condition], [Description]) VALUES ('2','Alpha','Type 2')
        GO

    I am now trying to do the following:

        SELECT Category, COUNT(*) as CategoryCount
        FROM Tree
        where Condition = 'Alpha'
        group by Category

    but I also wish to get the Description for each element. I have tried several subqueries, self joins, etc., and I always hit the problem that the subquery cannot return more than one record. The problem is caused by a poor database design which I cannot change, and I have run out of ideas for getting this done in a single query ;-(
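
    A hedged T-SQL sketch of one approach (assuming SQL Server 2005 or later, which the options in the script above suggest): keep every row's Description and attach the per-category count with a window aggregate instead of GROUP BY.

        select AutoID, Category, Condition, Description,
               count(*) over (partition by Category) as CategoryCount
        from dbo.Tree
        where Condition = 'Alpha';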

    Read the article
