Search Results

Search found 84007 results on 3361 pages for 'sql system table'.


  • SQL to select random mix of rows fairly [migrated]

    - by Matt Sieker
    Here's my problem: I have a set of tables in a database populated with data from a client that contains product information. In addition to the basic product information, there is also information about the manufacturer, the categories each product belongs to (a product can be in one or more categories, referred to as "Product Categories"), and which stores the products are available at. These tables are updated once a week from a feed from the customer. Since some of the product categories are the same, or closely related for our purposes, there is another level of categories called "General Categories"; a general category can have one or more product categories. For the scope of these tables, here are some rough numbers:

    Data tables:
        Products: 475,000
        Manufacturers: 1,300
        Stores: 150
        General Categories: 245
        Product Categories: 500

    Mapping tables:
        Product Category -> Product: 655,000
        Stores -> Products: 50,000,000

    Now, for the actual problem: as part of our software, we need to select n random products, given a store and a general category. However, we also need to ensure a good mix of manufacturers, because in some categories a single manufacturer dominates and selecting rows purely at random strongly favors that manufacturer. The solution currently in place, which works for most cases, selects all of the rows that match the store and category criteria, partitions them by manufacturer, numbers the rows within each partition, selects from that where the row number for that manufacturer is less than n, and uses ROWCOUNT to clamp the total rows returned to n. The query looks something like this:

        SET ROWCOUNT 6

        select p.Id, GeneralCategory_Id, Product_Id,
               ISNULL(m.DisplayName, m.Name) AS Vendor,
               MSRP, MemberPrice, FamilyImageName
        from (select p.Id, gc.Id GeneralCategory_Id, p.Id Product_Id,
                     ctp.Store_id, Manufacturer_id,
                     ROW_NUMBER() OVER (PARTITION BY Manufacturer_id
                                        ORDER BY NEWID()) AS 'VendorOrder',
                     MSRP, MemberPrice, FamilyImageName
              from GeneralCategory gc
              inner join GeneralCategoriesToProductCategories gctpc
                      ON gc.Id = gctpc.GeneralCategory_Id
              inner join ProductCategoryToProduct pctp
                      on gctpc.ProductCategory_Id = pctp.ProductCategory_Id
              inner join Product p on p.Id = pctp.Product_Id
              inner join StoreToProduct ctp on p.Id = ctp.Product_id
              where gc.Id = @GeneralCategory
                and ctp.Store_id = @StoreId
                and p.Active = 1
                and p.MemberPrice > 0) p
        inner join Manufacturer m on m.Id = p.Manufacturer_id
        where VendorOrder <= 6
        order by NEWID()

        SET ROWCOUNT 0

    Running this query with an execution plan shows that for the majority of these tables it's doing a Clustered Index Seek. Two operations take up roughly 90% of the time:

    Index Seek (Nonclustered) on StoreToProduct: 17%. This table just contains the key of the store and the key of the product. It seems that NHibernate decided not to make a composite key when creating this table, but I'm not concerned about that at this point, compared to the other seek...

    Clustered Index Seek on Product: 69%. I really have no clue how I could make this one more performant.

    On categories without a lot of products, performance is acceptable (<50 ms), but larger categories can take a few hundred ms, with the largest category (about 170k products) taking 3 s.

    It seems I have two ways to go from this point:

    1. Somehow optimize the existing query and table indices to lower the query time. As almost every expensive operation is already a clustered index seek, I don't know what could be done there. The inner query could be tuned to not return all of the possible rows for the category, but I am unsure how to do that while maintaining the requirements (random products, with a good mix of manufacturers).

    2. Denormalize this data for the purposes of this query during the once-a-week import (see the sketch below). Again, though, I am unsure how to do this while maintaining the requirements.

    Does anyone have any input on either of these options?
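    One possible shape for the denormalization option, as a sketch only: during the weekly import, precompute one row per (store, general category, manufacturer, random rank), so the runtime query becomes a narrow seek on a covering key. The ProductPick table and its columns are invented for illustration; they are not part of the poster's schema.

        -- Rebuilt once a week, after the feed import (all names hypothetical):
        CREATE TABLE ProductPick (
            Store_id           int NOT NULL,
            GeneralCategory_Id int NOT NULL,
            Manufacturer_id    int NOT NULL,
            VendorOrder        int NOT NULL,  -- random rank within the manufacturer
            Product_Id         int NOT NULL,
            PRIMARY KEY (Store_id, GeneralCategory_Id, Manufacturer_id, VendorOrder)
        );

        INSERT INTO ProductPick
        SELECT ctp.Store_id, gc.Id, p.Manufacturer_id,
               ROW_NUMBER() OVER (PARTITION BY ctp.Store_id, gc.Id, p.Manufacturer_id
                                  ORDER BY NEWID()),
               p.Id
        FROM GeneralCategory gc
        INNER JOIN GeneralCategoriesToProductCategories gctpc ON gc.Id = gctpc.GeneralCategory_Id
        INNER JOIN ProductCategoryToProduct pctp ON gctpc.ProductCategory_Id = pctp.ProductCategory_Id
        INNER JOIN Product p ON p.Id = pctp.Product_Id
        INNER JOIN StoreToProduct ctp ON p.Id = ctp.Product_id
        WHERE p.Active = 1 AND p.MemberPrice > 0;

        -- At runtime only the pre-ranked slice is touched; join back to
        -- Product and Manufacturer for the display columns:
        SET ROWCOUNT 6
        SELECT pp.Product_Id
        FROM ProductPick pp
        WHERE pp.Store_id = @StoreId
          AND pp.GeneralCategory_Id = @GeneralCategory
          AND pp.VendorOrder <= 6
        ORDER BY NEWID()
        SET ROWCOUNT 0

    The trade-off is that the per-manufacturer ranking is frozen between imports; keeping a deeper rank (say VendorOrder <= 20) and shuffling at query time with the ORDER BY NEWID() softens the repetition.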

    Read the article

  • An XEvent a Day (13 of 31) – The system_health Session

    - by Jonathan Kehayias
    Today’s post was originally planned for this coming weekend, but it seems I’ve caught whatever bug my kids had over the weekend, so I am swapping in a post that is shorter and easier to cover.  If you’ve been running some of the queries from the posts in this series, you have no doubt come across an Event Session running on your server with the name of system_health.  In today’s post I’ll go over this session and provide links to references related to it. When Extended Events...(read more)
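    As a quick orientation (a sketch, not from the post itself), you can confirm the session is running and list the Events it collects with the Extended Events DMVs and catalog views:

        -- Is system_health running on this instance?
        SELECT name, create_time
        FROM sys.dm_xe_sessions
        WHERE name = N'system_health';

        -- Which Events (and predicates) does its definition include?
        SELECT sese.package, sese.name AS event_name, sese.predicate
        FROM sys.server_event_sessions AS ses
        JOIN sys.server_event_session_events AS sese
            ON ses.event_session_id = sese.event_session_id
        WHERE ses.name = N'system_health';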

    Read the article

  • An XEvent a Day (8 of 31) – Targets Week – synchronous_event_counter

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week - Bucketizers, looked at the bucketizer Targets in Extended Events and how they can be used to simplify analysis and perform more targeted analysis based on their output.  Today’s post will be fairly short by comparison to the previous posts, while we look at the synchronous_event_counter target, which can be used to test the impact of an Event Session without actually incurring the cost of Event collection. What is the synchronous_event_counter? The synchronous_event_count...(read more)
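    For reference, a minimal sketch of a session using this target; the session name and Event choice are illustrative:

        CREATE EVENT SESSION [CountStatements] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.synchronous_event_counter;

        ALTER EVENT SESSION [CountStatements] ON SERVER STATE = START;

        -- The per-Event counts surface in the target's target_data XML:
        SELECT CAST(t.target_data AS xml) AS counter_data
        FROM sys.dm_xe_sessions AS s
        JOIN sys.dm_xe_session_targets AS t
            ON s.address = t.event_session_address
        WHERE s.name = N'CountStatements';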

    Read the article

  • An XEvent a Day (6 of 31) – Targets Week – asynchronous_file_target

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week - ring_buffer, looked at the ring_buffer Target in Extended Events and how it outputs the raw Event data in an XML document.  Today I’m going to go over the details of the other Target in Extended Events that captures raw Event data, the asynchronous_file_target. What is the asynchronous_file_target? The asynchronous_file_target holds the raw format Event data in a proprietary binary file format that persists beyond server restarts and can be provided to another...(read more)
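    A minimal sketch of defining the target and reading the file back; the file paths are illustrative and the directory must already exist:

        CREATE EVENT SESSION [FileTargetDemo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.asynchronous_file_target
            (SET filename     = N'C:\XE\FileTargetDemo.xel',
                 metadatafile = N'C:\XE\FileTargetDemo.xem');

        -- Each row returned carries one Event as an XML document:
        SELECT CAST(event_data AS xml) AS event_data
        FROM sys.fn_xe_file_target_read_file(
                 N'C:\XE\FileTargetDemo*.xel',
                 N'C:\XE\FileTargetDemo*.xem', NULL, NULL);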

    Read the article

  • An XEvent a Day (7 of 31) – Targets Week – bucketizers

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week - asynchronous_file_target, looked at the asynchronous_file_target Target in Extended Events and how it persists the raw Event data in a proprietary binary file format. Continuing with Targets week today, we’ll look at the bucketizer targets in Extended Events, which can be used to group Events based on the Event data that is being returned. What is the bucketizer? The bucketizer performs grouping of Events as they are processed by the target into buckets based on the Event data and...(read more)
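    A minimal sketch of a bucketizer in use, grouping waits by wait_type; the session name and slot count are illustrative:

        CREATE EVENT SESSION [WaitsByType] ON SERVER
        ADD EVENT sqlos.wait_info
        ADD TARGET package0.asynchronous_bucketizer
            (SET slots = 32,                        -- number of buckets to keep
                 filtering_event_name = N'sqlos.wait_info',
                 source_type = 0,                   -- 0 = bucket on Event data, 1 = on an Action
                 source = N'wait_type');            -- the column to group on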

    Read the article

  • An XEvent a Day (9 of 31) – Targets Week – pair_matching

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week – synchronous_event_counter, looked at the counter Target in Extended Events and how it can be used to determine the number of Events an Event Session will generate without actually incurring the cost to collect and store the Events.  Today’s post is coming late, I know, but sometimes that’s just how the ball rolls.  My originally planned demos for today’s post turned out to work only by a fluke, though they were very consistent at working as expected,...(read more)
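    A minimal sketch of the target's shape, pairing statement start and end Events on session_id; unmatched Events left in the target are statements that began but never completed. The session name and matching choices are illustrative:

        CREATE EVENT SESSION [OrphanedStatements] ON SERVER
        ADD EVENT sqlserver.sql_statement_starting
            (ACTION (sqlserver.session_id)),
        ADD EVENT sqlserver.sql_statement_completed
            (ACTION (sqlserver.session_id))
        ADD TARGET package0.pair_matching
            (SET begin_event = N'sqlserver.sql_statement_starting',
                 begin_matching_actions = N'sqlserver.session_id',
                 end_event = N'sqlserver.sql_statement_completed',
                 end_matching_actions = N'sqlserver.session_id',
                 respond_to_memory_pressure = 0);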

    Read the article

  • SQL Server Certification - a database platform primer for your career path

    - by ssqa.net
    When you need to upgrade your knowledge, training is required; at the same time, certifications will help you keep up with what you have learned! There is a big debate on the web about whether certifications are important in your career or not. The bottom line is that if you do not know the stuff, or are unable to answer a few basic technical questions, it doesn't matter how many certifications you have: you will not get the job. Well, I'm not starting the same discussion here. But in the recent...(read more)

    Read the article

  • SQLUniversity Professional Development Week: Learning To Fly

    - by andyleonard
    Introduction Clem and Jim Bob were out hunting the other day in the woods south of Farmville. As they crossed a ridge, they came upon a big ol' Momma Bear and her cub. The larger bear immediately started towards them. Jim Bob took off running as fast as he could. He stopped when he realized Clem wasn't with him. And when he saw Clem reaching into his pack, Jim Bob was incredulous: "Hurry Clem! That bar's comin' fast! You need to out run 'er!" Clem kicked off his boots and pulled running shoes out...(read more)

    Read the article

  • An XEvent a Day (31 of 31) – Event Session DDL Events

    - by Jonathan Kehayias
    To close out this month’s series on Extended Events, we’ll look at the DDL Events for the Event Session DDL operations, and how they can be used to track changes to Event Sessions and determine all of the possible outputs that could exist from an Extended Event Session.  One of my least favorite quirks of Extended Events is that there is no way to determine the Events and Actions that may exist inside a Target, except to parse all of the captured data.  Information about the Event...(read more)
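    Those DDL Events can also be audited directly with a server-level DDL trigger; a sketch, with the trigger name and the PRINT placeholder both illustrative:

        CREATE TRIGGER [AuditEventSessionDDL]
        ON ALL SERVER
        FOR CREATE_EVENT_SESSION, ALTER_EVENT_SESSION, DROP_EVENT_SESSION
        AS
        BEGIN
            -- EVENTDATA() carries the login, the time, and the full T-SQL issued
            DECLARE @data xml = EVENTDATA();
            PRINT CONVERT(nvarchar(max), @data);  -- in practice, INSERT into an audit table
        END;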

    Read the article

  • Parsing the sqlserver.sql_text Action in Extended Events by Offsets

    - by Jonathan Kehayias
    A couple of weeks back I received an email from a member of the community who was reading the XEvent a Day blog series and had a couple of interesting questions about Extended Events.  This person had created an Event Session that captured the sqlserver.sql_statement_completed and sqlserver.sql_statement_starting Events and wanted to know how to do a correlation between the related Events so that the offset information from the starting Event could be used to find the statement of the completed...(read more)
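    The offset math involved mirrors the familiar sys.dm_exec_sql_text pattern: the offsets are byte positions into Unicode text, and -1 means "to the end of the batch". A sketch, with hypothetical variables standing in for the captured Event data:

        -- @sql_text stands in for the batch text from the sql_text Action;
        -- @offset / @offset_end stand in for the Event's offset columns.
        DECLARE @sql_text nvarchar(max) = N'SELECT 1; SELECT 2;';
        DECLARE @offset int = 0, @offset_end int = -1;

        SELECT SUBSTRING(@sql_text,
                         (@offset / 2) + 1,
                         (CASE @offset_end WHEN -1 THEN DATALENGTH(@sql_text)
                                           ELSE @offset_end END - @offset) / 2 + 1)
               AS statement_text;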

    Read the article

  • An XEvent a Day (30 of 31) – Tracking Session and Statement Level Waits

    - by Jonathan Kehayias
    While attending the PASS Summit this year, I got the opportunity to hang out with Brent Ozar ( Blog | Twitter ) one afternoon while he did some work for Yanni Robel ( Blog | Twitter ).  After looking at the wait stats information, Brent pointed out some potential problem points, and based on that information I pulled up the code for my PASS session the next day on Wait Statistics and Extended Events and made some changes to one of the demos so that the Event Session focused only on those potentially...(read more)
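    The general shape of such a focused session, as a sketch (the session_id value is illustrative):

        CREATE EVENT SESSION [SessionWaits] ON SERVER
        ADD EVENT sqlos.wait_info
            (ACTION (sqlserver.session_id, sqlserver.sql_text)
             WHERE sqlserver.session_id = 53)  -- the session under investigation
        ADD TARGET package0.ring_buffer;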

    Read the article

  • An XEvent a Day (24 of 31) – What is the package0.callstack Action?

    - by Jonathan Kehayias
    One of the Actions inside of Extended Events is package0.callstack, and the only description provided by sys.dm_xe_objects for the object is "16-frame call stack". If you look back at The system_health Session blog post, you’ll notice that the package0.callstack Action has been added to a number of the Events that the PSS team thought were of significance to include in the Event Session. We can trigger an event that will be logged by our system_health Event Session by raising an error of severity...(read more)
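    A sketch of the demo's mechanics (the message text is illustrative; a severity 20 error requires sysadmin rights and WITH LOG, and it terminates the connection that raises it):

        RAISERROR (N'callstack demo', 20, 1) WITH LOG;

        -- Then pull the captured Events, callstack Action included, from the
        -- session's ring_buffer target:
        SELECT CAST(t.target_data AS xml) AS session_data
        FROM sys.dm_xe_sessions AS s
        JOIN sys.dm_xe_session_targets AS t
            ON s.address = t.event_session_address
        WHERE s.name = N'system_health';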

    Read the article

  • Back up a single table in SQL Server

    - by BuckWoody
    SQL Server doesn’t have an easy way to take a table backup, so I often use the bcp (Bulk Copy Program) to accomplish the same goal. I’ve mentioned this before, and someone told me that when they tried it they couldn’t restore the table – ah, the dangers of telling people half the information! I should have mentioned that you need to have a “format file” ready if the table does not exist at the destination. In my case I already had the table; in this person’s case they did not. The format file can be used to rebuild that table structure before the data is bcp’d in, and you can read more about it here: http://msdn.microsoft.com/en-us/library/ms191516.aspx There’s another way to back up a table, and that’s to create a Filegroup and place the table there. Then you can take a Filegroup backup to back up a single table. Of course, there are other methods of moving a single table’s data in and out, including SQL Server Integration Services and even the older Data Transformation Services, or simply using the SQLCMD or PowerShell utilities to run a query and save the output to a file. In fact, these days I’m using a PowerShell script to build INSERT statements from that query. That could also easily be modified to create the table structure (or modify one if needed) quite easily.
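    For reference, a minimal sketch of the bcp round trip; the server, database, and file names are hypothetical, and -n (native format) with -T (trusted connection) is one reasonable combination:

        rem Describe the table's layout, then export it in native format
        bcp MyDatabase.dbo.MyTable format nul -n -f MyTable.fmt -S MyServer -T
        bcp MyDatabase.dbo.MyTable out MyTable.dat -n -S MyServer -T

        rem Import at the destination; the table must already exist there,
        rem recreated from the format file's description if necessary
        bcp MyDatabase.dbo.MyTable in MyTable.dat -n -S OtherServer -T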

    Read the article

  • An XEvent a Day (26 of 31) – Configuring Session Options

    - by Jonathan Kehayias
    There are 7 Session-level options that can be configured in Extended Events that affect the way an Event Session operates.  These options can impact performance and should be considered when configuring an Event Session.  I have made use of a few of these periodically throughout this month’s blog posts, and in today’s post I’ll cover each of the options separately and provide further information about their usage.  Mike Wachal from the Extended Events team at Microsoft talked...(read more)
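    For reference, a sketch showing all seven options spelled out on one session, each set to its documented default; the session body is illustrative:

        CREATE EVENT SESSION [OptionsDemo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.ring_buffer
        WITH (MAX_MEMORY = 4 MB,                              -- event buffer size
              EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, -- may drop events under load
              MAX_DISPATCH_LATENCY = 30 SECONDS,              -- buffer flush interval
              MAX_EVENT_SIZE = 0 KB,                          -- 0 = bounded by MAX_MEMORY
              MEMORY_PARTITION_MODE = NONE,                   -- one buffer set per instance
              TRACK_CAUSALITY = OFF,                          -- no activity ID correlation
              STARTUP_STATE = OFF);                           -- do not start with the server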

    Read the article

  • Here Comes the FY11 Earmarks Database

    - by Mike C
    I'm really interested in politics (don't worry, I'm not going to start bashing politicians and hammering you with political rage). The point is when the U.S. FY11 Omnibus Spending Bill (the bill to fund the U.S. Government for another year) was announced it piqued my interest. I'm fascinated by " earmarks " (also affectionally known as " pork "). For those who aren't familiar with U.S. politics, "earmark" is a slang term for "Congressionally Directed Spending". It's basically the set of provisions...(read more)

    Read the article

  • SQL Server v.Next (Denali) : Why you should start testing early

    - by AaronBertrand
    Denali is coming, whether you like it or not. You may not be an early adopter and you may not have plans on your current calendar, but at some point you will need to move your apps and databases to this release - or one very much like it. There are a lot of great new features you will be able to take advantage of, but not everything is a double rainbow. There are some changes that will break your spirit if you let them. What does it mean? I go over several breaking changes in my presentation that...(read more)

    Read the article

  • SQL Server 2000 tables

    - by klork
    We currently have a SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field. The table has a clustered index on memberid. The table is now about 200 million rows. Indexing and maintenance are becoming issues. We are debating splitting the table into one table per user. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647, considering just positive values. My questions: Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

    Read the article

  • CONTAINSTABLE with wildcard works differently in SQL Server 2005 and SQL Server 2008?

    - by musuk
    I have two identical databases, one on SQL Server 2005 and one on SQL Server 2008. Both have the same SQL_Latin1_General_CP1_CI_AS collation, and the full-text search catalogs have the same settings. These two databases contain a table with the same data, an NTEXT string: "...kræve en forklaring fra miljøminister Connie Hedegaard.." My problem is: CONTAINSTABLE on SQL Server 2008 finds nothing if the query is:

        select * from ContainsTable(SearchIndex_7, Content, '"miljø*"') ct

    but SQL Server 2005 works perfectly and finds the necessary record. SQL Server 2008 finds the necessary record if the query is:

        select * from ContainsTable(SearchIndex_7, Content, '"milj*"') ct

    or

        select * from ContainsTable(SearchIndex_7, Content, '"miljøminister"')

    What could be the reason for such strange behavior?
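    One way to investigate, as a sketch: SQL Server 2008 shipped new word breakers, which can tokenize "miljøminister" differently than 2005 did. sys.dm_fts_parser (new in 2008) shows exactly what the word breaker produces for a query term; 1030 is the Danish LCID, and the stoplist ID and accent-sensitivity flag here are illustrative:

        SELECT display_term, special_term
        FROM sys.dm_fts_parser(N'"miljø*"', 1030, 0, 0);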

    Read the article

  • SQL - Two foreign keys that have a dependency between them

    - by Brian
    The current structure is as follows:

        Table RowType:
            RowTypeID

        Table RowSubType:
            RowSubTypeID
            FK_RowTypeID

        Table ColumnDef:
            FK_RowTypeID
            FK_RowSubTypeID (nullable)

    In short, I'm mapping column definitions to rows. In some cases, those rows have subtypes, which will have column definitions specific to them. Alternatively, I could hang those column definitions that are specific to subtypes off their own table, or I could combine the data in RowType and RowSubType into one table and work with a single ID, but I'm not sure either is a better solution (if anything, I'd lean towards the latter, as we mostly end up pulling ColumnDefs for a given RowType/RowSubType). Is the current design SQL blasphemy? If I keep the current structure, how do I ensure that when RowSubTypeID is specified in ColumnDef, it corresponds to the RowType specified by RowTypeID? Should I try to enforce this with a trigger, or am I missing a simple redesign that would solve the problem?
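    One common redesign for exactly this constraint, sketched with hypothetical column types: make RowSubType's primary key composite so each subtype row carries its parent type, then reference that composite key from ColumnDef. Because RowSubTypeID is nullable, the foreign key is simply not enforced on rows that have no subtype:

        CREATE TABLE RowType (
            RowTypeID int NOT NULL PRIMARY KEY
        );

        CREATE TABLE RowSubType (
            RowSubTypeID int NOT NULL,
            RowTypeID    int NOT NULL REFERENCES RowType (RowTypeID),
            PRIMARY KEY (RowTypeID, RowSubTypeID)
        );

        CREATE TABLE ColumnDef (
            ColumnDefID  int NOT NULL PRIMARY KEY,
            RowTypeID    int NOT NULL REFERENCES RowType (RowTypeID),
            RowSubTypeID int NULL,
            -- When RowSubTypeID is non-NULL, this composite FK forces it to
            -- belong to the RowTypeID named on the same row:
            FOREIGN KEY (RowTypeID, RowSubTypeID)
                REFERENCES RowSubType (RowTypeID, RowSubTypeID)
        );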

    Read the article

  • T-SQL QUERY PROBLEM

    - by Sam
    Hi All, I have a table called Summary, and the data in the table looks like this:

        ID  Type  Name         Parent
        1   Act   Rent         Null
        2   Eng   E21-01-Rent  Rent
        3   Prj   P01-12-Rent  E21-Rent
        1   Act   Fin          Null
        2   Eng   E13-27-Fin   Fin
        3   Prj   P56-35-Fin   E13-Fin

    I am writing a SP that has to pull the parent based on type. The type Act always has ID 1, Eng has ID 2, and Prj has ID 3. The parent of type Act is always Null, the parent of type Eng is the Act row, and the parent of type Prj is the Eng row. Now I have a table called Detail, and I am writing a SP to insert the Detail table data into the Summary table, passing the ID as a parameter. I am having a problem with the parent. How do I get it? I can always say that when ID is 1 the parent is Null, but when ID is 2 the parent is the Name of ID 1, and similarly when ID is 3 the parent is the Name of ID 2. How do I get that? Can anyone help me with this?
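    A sketch of one approach, assuming a hypothetical GroupID column in Detail that separates the Rent rows from the Fin rows (the sample data implies some such grouping): join each row to the row with the next-lower ID in the same group, so ID 1 gets a NULL parent automatically:

        INSERT INTO Summary (ID, [Type], Name, Parent)
        SELECT d.ID, d.[Type], d.Name, p.Name AS Parent
        FROM Detail AS d
        LEFT JOIN Detail AS p
               ON p.GroupID = d.GroupID
              AND p.ID = d.ID - 1;  -- no match for ID 1, so Parent stays NULL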

    Read the article

  • Two Instances of Sql Server (2005 and 2008)

    - by Felipe
    Hi All, I installed Visual Studio 2008 Professional on my machine, and its setup installed a SQL Server Express 2005 instance, which I was using without problems. I also installed SQL Server Management Studio, and it worked great. Then this week I installed Visual Studio 2010 Pro, and its setup installed SQL Server Express 2008, overwriting my SQL Server Express 2005 instance. Now I'd like to know how I can have two instances of SQL Server Express on my machine, 2005 and 2008. I can only access 2008, not 2005, and my projects use 2005. Can somebody help me? Thanks. Bye

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted by the primary key. The % Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    1. Drop the primary key while I am doing the inserting and recreate it later?
    2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small?
    3. Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?

    Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
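    One pattern that often helps here, as a sketch rather than a tested fix: bulk load into an unindexed heap, then build the clustered primary key once at the end, so the sort happens in a single pass instead of the B-tree being maintained on every insert. The staging table name is invented:

        -- Load target: a heap with no PK and no indexes
        CREATE TABLE [BulkDataStaging](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL
        );

        -- ... point SqlBulkCopy at BulkDataStaging for the whole import ...

        -- Then one sorted pass builds the final clustered structure:
        ALTER TABLE [BulkDataStaging]
            ADD CONSTRAINT [PKBulkDataStaging] PRIMARY KEY CLUSTERED
            ([ContainerId] ASC, [BinId] ASC, [Sequence] ASC);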

    Read the article

  • Cannot open SQL 2005 database in SQL server Management Studio 2008 R2 on Windows 7

    - by Darryl Lawrence
    I have Windows 7 64-bit as my OS, with SQL Server 2008 R2 installed. I can connect to SQL Server 2008 databases, but cannot connect to a SQL Server 2005 database. I can, however, connect to the 2005 database from a PC that has Windows XP as the OS and also has SQL Server 2008 R2 installed. So it seems that it works fine on XP but not on Windows 7 (32- or 64-bit). Is this an operating system issue? Error message:

        Cannot connect to OMRSQLV016\PRODSQL002.
        ===================================
        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (.Net SqlClient Data Provider)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476
        Error Number: -1
        Severity: 20
        State: 0

    Read the article

  • Which SQL query is faster? Filter on Join criteria or Where clause?

    - by Jon Erickson
    Compare these two queries. Is it faster to put the filter in the join criteria or in the WHERE clause? I have always felt that it is faster in the join criteria because it reduces the result set at the soonest possible moment, but I don't know for sure. I'm going to build some tests to see, but I also wanted to get opinions on which is clearer to read as well.

    Query 1:

        SELECT *
        FROM TableA a
        INNER JOIN TableXRef x ON a.ID = x.TableAID
        INNER JOIN TableB b ON x.TableBID = b.ID
        WHERE a.ID = 1  /* <-- Filter here? */

    Query 2:

        SELECT *
        FROM TableA a
        INNER JOIN TableXRef x ON a.ID = x.TableAID
                              AND a.ID = 1  /* <-- Or filter here? */
        INNER JOIN TableB b ON x.TableBID = b.ID
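    An aside that often settles this question: for INNER JOINs the optimizer normally produces the same plan for both forms, so the choice is about readability. For OUTER JOINs the placement changes the meaning, as this sketch with a hypothetical Active column shows:

        -- In the ON clause the predicate filters x before joining;
        -- every TableA row still survives:
        SELECT *
        FROM TableA a
        LEFT JOIN TableXRef x ON a.ID = x.TableAID
                             AND x.Active = 1;

        -- In the WHERE clause it discards the NULL-extended rows,
        -- silently turning the LEFT JOIN back into an INNER JOIN:
        SELECT *
        FROM TableA a
        LEFT JOIN TableXRef x ON a.ID = x.TableAID
        WHERE x.Active = 1;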

    Read the article
