Search Results

Search found 53054 results on 2123 pages for 'sql sample database'.

Page 42/2123 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • SQL 2005 - Search stored procedures for text (Not all text is being searched)

    - by hamlin11
    The following bits of code do not seem to be searching the entire routine definition.

    Code block 1:

        select top 50 * from information_schema.routines
        where routine_definition like '%09/01/2008%'
        and specific_Name like '%NET'

    Code block 2:

        SELECT ROUTINE_NAME, ROUTINE_DEFINITION
        FROM INFORMATION_SCHEMA.ROUTINES
        WHERE ROUTINE_DEFINITION LIKE '%EffectiveDate%'
        AND ROUTINE_TYPE='PROCEDURE' and ROUTINE_NAME like '%NET'

    I know for a fact that these bits of SQL work under most circumstances. The problem is this: when I run this for "EffectiveDate", which is buried at around line 800 in a few stored procedures, those stored procedures never show up in the results. It's as if "like" only searches so deep. Any tips on fixing this? I want to search the ENTIRE stored procedure for the specified text. Thanks!
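    A likely explanation, and a sketch of a workaround: INFORMATION_SCHEMA.ROUTINES exposes ROUTINE_DEFINITION as nvarchar(4000), so only the first 4000 characters of each routine are searched, which is why text buried hundreds of lines in is never found. The catalog view sys.sql_modules (or OBJECT_DEFINITION()) holds the full body; the query below is a sketch of the same search against it.

        -- Search the complete (untruncated) definition of stored procedures
        SELECT o.name, m.definition
        FROM sys.sql_modules AS m
        JOIN sys.objects AS o ON o.object_id = m.object_id
        WHERE o.type = 'P'                         -- stored procedures only
          AND o.name LIKE '%NET'
          AND m.definition LIKE '%EffectiveDate%';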

    Read the article

  • How to join dynamic sql statement in variable with normal statement

    - by Oliver
    I have a quite complicated query which will be built up dynamically and is saved in a variable. As a second part I have another normal query, and I'd like to make an inner join between the two. To make it a little easier, here is a small example to illustrate my problem, using the AdventureWorks database.

    Some query built up dynamically (yes, I know nothing is dynamic here, because it's just an example):

        DECLARE @query AS varchar(max);
        set @query = '
            select HumanResources.Employee.EmployeeID
                  ,HumanResources.Employee.LoginID
                  ,HumanResources.Employee.Title
                  ,HumanResources.EmployeeAddress.AddressID
            from HumanResources.Employee
            inner join HumanResources.EmployeeAddress
                on HumanResources.Employee.EmployeeID = HumanResources.EmployeeAddress.EmployeeID;';
        EXEC (@query);

    The normal query I have:

        select Person.Address.AddressID
              ,Person.Address.City
        from Person.Address

    Maybe what I'd like to have, but doesn't work:

        select @query.*
              ,Addresses.City
        from @query as Employees
        inner join (
            select Person.Address.AddressID
                  ,Person.Address.City
            from Person.Address
        ) as Addresses
            on Employees.AddressID = Addresses.AddressID
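    One common workaround (a sketch, not the only option): materialize the dynamic result set into a temp table with INSERT ... EXEC, then join it like any other table. The column names come from the example above; the types are assumed to match AdventureWorks.

        CREATE TABLE #Employees
        (
            EmployeeID int,
            LoginID    nvarchar(256),
            Title      nvarchar(50),
            AddressID  int
        );

        -- Capture the output of the dynamic statement
        INSERT INTO #Employees (EmployeeID, LoginID, Title, AddressID)
        EXEC (@query);

        -- Now the result can participate in a normal join
        SELECT e.*, a.City
        FROM #Employees AS e
        INNER JOIN Person.Address AS a
            ON a.AddressID = e.AddressID;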

    Read the article

  • Best pattern for storing (product) attributes in SQL Server

    - by EdH
    We are starting a new project where we need to store products and many product attributes in a database. The technology stack is MS SQL 2008 and Entity Framework 4.0 / LINQ for data access. The products (and Products table) are pretty straightforward (a SKU, manufacturer, price, etc.). However, there are also many attributes to store with each product (think industrial widgets). These may range from color to certification(s) to pipe size. Every product may have different attributes, and some may have multiples of the same attribute (e.g. certifications). The current proposal is that we will basically have a name/value pair table with an FK back to the product ID in each row. An example of the attributes table may look like this:

        ProdID  AttributeName   AttributeValue
        123     Color           Blue
        123     FittingSize     1.25
        123     Certification   AS1111
        123     Certification   EE2212
        123     Certification   FM.3
        456     Pipe            11
        678     Color           Red
        999     Certification   AE1111
        ...

    Note: the attribute name would likely come from a lookup table or enum. So the main question here is: is this the best pattern for doing something like this? How will the performance be? Queries will be based on a JOIN of the product and attributes tables, and generally need many WHEREs to filter on specific attributes - the most common search will be to find a product based on a set of known/desired attributes. If anyone has any suggestions or a better pattern for this type of data, please let me know. Thanks! -Ed
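    For reference, the "find a product from a set of known attributes" search against such a name/value table is usually written as a relational-division style query. The sketch below assumes table names Products and ProductAttributes (the post does not name them) and requires one matching row per requested attribute:

        SELECT p.ProdID
        FROM Products AS p
        JOIN ProductAttributes AS a ON a.ProdID = p.ProdID
        WHERE (a.AttributeName = 'Color'         AND a.AttributeValue = 'Blue')
           OR (a.AttributeName = 'Certification' AND a.AttributeValue = 'AS1111')
        GROUP BY p.ProdID
        HAVING COUNT(DISTINCT a.AttributeName) = 2;   -- must match all requested attributes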

    Read the article

  • SQL Query for generating matrix like output querying related table in SQL Server

    - by Nagesh
    I have three tables:

        Product
        ProductID  ProductName
        1          Cycle
        2          Scooter
        3          Car

        Customer
        CustomerID  CustomerName
        101         Ronald
        102         Michelle
        103         Armstrong
        104         Schmidt
        105         Peterson

        Transactions
        TID    ProductID  CustomerID  TranDate   Amount
        10001  1          101         01-Jan-11  25000.00
        10002  2          101         02-Jan-11  98547.52
        10003  1          102         03-Feb-11  15000.00
        10004  3          102         07-Jan-11  36571.85
        10005  2          105         09-Feb-11  82658.23
        10006  2          104         10-Feb-11  54000.25
        10007  3          103         20-Feb-11  80115.50
        10008  3          104         22-Feb-11  45000.65

    I have written a query to group the transactions like this:

        SELECT P.ProductName AS Product, C.CustName AS Customer, SUM(T.Amount) AS Amount
        FROM Transactions AS T
        INNER JOIN Product AS P ON T.ProductID = P.ProductID
        INNER JOIN Customer AS C ON T.CustomerID = C.CustomerID
        WHERE T.TranDate BETWEEN '2011-01-01' AND '2011-03-31'
        GROUP BY P.ProductName, C.CustName
        ORDER BY P.ProductName

    which gives a result like this:

        Product  Customer   Amount
        Car      Armstrong  80115.50
        Car      Michelle   36571.85
        Car      Schmidt    45000.65
        Cycle    Michelle   15000.00
        Cycle    Ronald     25000.00
        Scooter  Peterson   82658.23
        Scooter  Ronald     98547.52
        Scooter  Schmidt    54000.25

    I need the result of the query in MATRIX form like this:

        Customer   |------------ Amounts ---------------
        Name       |Car        Cycle      Scooter    Totals
        Armstrong   80115.50   0.00       0.00       80115.50
        Michelle    36571.85   15000.00   0.00       51571.85
        Ronald      0.00       25000.00   98547.52   123547.52
        Peterson    0.00       0.00       82658.23   82658.23
        Schmidt     45000.65   0.00       54000.25   99000.90

    Please help me to achieve the above result in SQL Server 2005. Using multiple views or even temporary tables is fine for me.
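    One possible shape for this is a PIVOT, which is available in SQL Server 2005. This sketch assumes the customer column is CustomerName as in the table listing above (the query in the post uses CustName):

        SELECT CustomerName,
               ISNULL([Car], 0)     AS Car,
               ISNULL([Cycle], 0)   AS Cycle,
               ISNULL([Scooter], 0) AS Scooter,
               ISNULL([Car], 0) + ISNULL([Cycle], 0) + ISNULL([Scooter], 0) AS Totals
        FROM (
            SELECT C.CustomerName, P.ProductName, T.Amount
            FROM Transactions AS T
            INNER JOIN Product  AS P ON T.ProductID  = P.ProductID
            INNER JOIN Customer AS C ON T.CustomerID = C.CustomerID
            WHERE T.TranDate BETWEEN '2011-01-01' AND '2011-03-31'
        ) AS src
        PIVOT (SUM(Amount) FOR ProductName IN ([Car], [Cycle], [Scooter])) AS pvt
        ORDER BY CustomerName;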

    Read the article

  • Problem with duplicates in a SQL Join

    - by Chris Ballance
    I have the following result set from a join of three tables: an articles table, a products table, and an articles-to-products mapping table. I would like to have the results with duplicates removed, similar to a select distinct on content id.

    Current result set:

        [ContentId]  [Title]        [productId]
        1            article one    2
        1            article one    3
        1            article one    9
        4            article four   1
        4            article four   10
        4            article four   14
        5            article five   1
        6            article six    8
        6            article six    10
        6            article six    11
        6            article six    13
        7            article seven  14

    Desired result set:

        [ContentId]  [Title]        [productId]
        1            article one    *
        4            article four   *
        5            article five   *
        6            article six    *
        7            article seven  *

    Here is a condensed example of the relevant SQL:

        IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'tempdb.dbo.products') AND type = (N'U'))
            drop table tempdb.dbo.products
        go
        CREATE TABLE tempdb.dbo.products
        (
            productid int,
            productname varchar(255)
        )
        go
        IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'articles') AND type = (N'U'))
            drop table tempdb.dbo.articles
        go
        create table tempdb.dbo.articles
        (
            contentid int,
            title varchar(255)
        )
        go
        IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'articleproducts') AND type = (N'U'))
            drop table tempdb.dbo.articleproducts
        go
        create table tempdb.dbo.articleproducts
        (
            contentid int,
            productid int
        )

        insert into tempdb.dbo.products values
        (1,'product one'), (2,'product two'), (3,'product three'), (4,'product four'),
        (5,'product five'), (6,'product six'), (7,'product seven'), (8,'product eigth'),
        (9,'product nine'), (10,'product ten'), (11,'product eleven'), (12,'product twelve'),
        (13,'product thirteen'), (14,'product fourteen')

        insert into tempdb.dbo.articles VALUES
        (1,'article one'), (2, 'article two'), (3, 'article three'), (4, 'article four'),
        (5, 'article five'), (6, 'article six'), (7, 'article seven'), (8, 'article eight'),
        (9, 'article nine'), (10, 'article ten')

        INSERT INTO tempdb.dbo.articleproducts VALUES
        (1,2), (1,3), (1,9), (4,1), (4,10), (4,14), (5,1), (6,8), (6,10), (6,11), (6,13), (7,14)
        GO

        select DISTINCT(a.contentid), a.title, p.productid
        from articles a
        JOIN articleproducts ap ON a.contentid = ap.contentid
        JOIN products p ON a.contentid = ap.contentid AND p.productid = ap.productid
        ORDER BY a.contentid
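    If any single product id per article is acceptable (which is what the * in the desired output suggests), one sketch of a fix is to aggregate instead of relying on DISTINCT, since DISTINCT still treats each different productid as a separate row:

        select a.contentid, a.title, MIN(p.productid) AS productid
        from tempdb.dbo.articles a
        JOIN tempdb.dbo.articleproducts ap ON ap.contentid = a.contentid
        JOIN tempdb.dbo.products p ON p.productid = ap.productid
        GROUP BY a.contentid, a.title
        ORDER BY a.contentid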

    Read the article

  • SQL Saturday Birmingham #328 Database Design Precon In One Week

    - by drsql
    On September 22, I will be doing my "How to Design a Relational Database" pre-conference session in Birmingham, Alabama. You can see the abstract here if you are interested, and you can sign up there too, naturally. At just $100, which includes a free ebook copy of my database design book, it is a great bargain and I totally promise it will be a little over 7 hours of talking about and designing databases, which will certainly be better than what you do on a normal work day, even a Friday....(read more)

    Read the article

  • AdventureWorks 2014 Sample Databases Are Now Available

    - by aspiringgeek
    Where in the World is AdventureWorks?

    Recently, SQL Community feedback from twitter prompted me to look in vain for SQL Server 2014 versions of the AdventureWorks sample databases we’ve all grown to know & love. I searched Codeplex, then used the bing & even the google in an effort to locate them, yet all I could find were samples on different sites highlighting specific technologies, an incomplete collection inconsistent with the experience we users had learned to expect. I began pinging internally & learned that an update to AdventureWorks wasn’t even on the road map. Fortunately, SQL Marketing manager Luis Daniel Soto Maldonado (t) lent a sympathetic ear & got the update ball rolling; his direct report Darmodi Komo recently announced the release of the shiny new sample databases for OLTP, DW, Tabular, and Multidimensional models to supplement the extant In-Memory OLTP sample DB.

    What Success Looks Like

    In my correspondence with the team, here’s how I defined success:

    1. Sample AdventureWorks DBs hosted on Codeplex showcasing SQL Server 2014’s latest-&-greatest features, including:
       - In-Memory OLTP (aka Hekaton)
       - Clustered Columnstore
       - Online Operations
       - Resource Governor IO
    2. Where it makes sense to do so, consolidate the DBs (e.g., showcasing Columnstore likely involves a separate DW DB)
    3. Documentation to support experimenting with these features

    As Microsoft Senior SDE Bonnie Feinberg (b) stated, “I think it would be great to see an AdventureWorks for SQL 2014. It would be super helpful for third-party book authors and trainers. It also provides a common way to share examples in blog posts and forum discussions, for example.” Exactly. We’ve established a rich & robust tradition of sample databases on Codeplex. This is what our community & our customers expect. The prompt response achieves what we all aim to do, i.e., manifests the Service Design Engineering mantra of “delighting the customer”. Kudos to Luis’s team in SQL Server Marketing & Kevin Liu’s team in SQL Server Engineering for doing so.

    Download AdventureWorks 2014

    Download your copies of SQL Server 2014 AdventureWorks sample databases here.

    Read the article

  • Do you test your SQL/HQL/Criteria ?

    - by 0101
    Do you test your SQL, or the SQL generated by your database framework? There are frameworks like DbUnit that allow you to create a real in-memory database and execute real SQL. But it's very hard to use (not developer-friendly, so to speak), because you need to prepare test data first (and it should not be shared between tests). P.S. I don't mean mocking the database or the framework's database methods, but tests that make you 99% sure that your SQL is working even after some hardcore refactoring.

    Read the article

  • Idera SQL Doctor 3.0 and MS SQL Changes

    New features worth mentioning in SQL doctor 3.0 begin with a new server dashboard that not only gives a comprehensive overview of a SQL Server instance's current health, but also several key details to help database administrators. Some of the details include recommendations on how to optimize server configuration, how to fix certain security issues, and how to get rid of performance bottlenecks. The latest version of SQL doctor also supplies users with key server information. The status of system parameters known to affect SQL Server performance, such as processes, disk partitions, cache, m...

    Read the article

  • Validate a string in a table in SQL Server - CLR function or T-SQL

    - by Ashish Gupta
    I need to check if a column value (string) in a SQL Server table starts with a small letter and can only contain '_', '-', numbers and letters. I know I can use a SQL Server CLR function for that. However, I am trying to implement that validation using a scalar UDF and have made very little progress here... I can use 'NOT LIKE', but I am not sure how to write a pattern that validates the string irrespective of the order of characters. Am I better off using a SQL CLR function? Any help will be appreciated. Thanks in advance.

    Thank you everyone for your comments. This morning, I chose to go the CLR function way. For what I was trying to achieve, I created one CLR function which does the validation of an input string and had that called from a SQL UDF, and it works well.

    Just to measure the performance of a T-SQL UDF using a SQL CLR function vs a plain T-SQL UDF, I created a SQL CLR function which just checks whether the input string contains only small letters (returning true or false) and had that called from a UDF (IsLowerCaseCLR). After that I also created a regular T-SQL UDF (IsLowerCaseTSQL) which does the same thing using 'NOT LIKE'. Then I created a table (Person) with columns Name (varchar) and IsValid (bit) and populated it with names to test.

    Data: 1000 records with 'Ashish' as the value for the Name column, and 1000 records with 'ashish' as the value for the Name column.

    Then I ran the following:

        UPDATE Person Set IsValid=1 WHERE dbo.IsLowerCaseTSQL (Name)

    The above updated 1000 records (with IsValid=1) and took less than a second. I deleted all the data in the table and repopulated it with the same data. Then I updated the same table using the SQL CLR UDF (with IsValid=1) and this took 3 seconds! If the update happens for 5000 records, the regular UDF takes 0 seconds compared to the CLR UDF, which takes 16 seconds!

    I know very little about T-SQL pattern matching, or I could have tested my actual, more complex validation criteria. But I just wanted to know: even if I could have written that, would it have been faster than the SQL CLR function, considering the example above? Are we using SQL CLR because we can implement a lot richer logic that would be difficult to write in regular SQL?

    Sorry for this long post. I just want to know from the experts. Please feel free to ask if you could not understand anything here. Thank you again for your time.
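    For reference, a sketch of the pure T-SQL check (no CLR). Whether '[a-z]' means "lowercase only" depends on the column's collation, so a binary collation is forced here; the table and column names follow the Person test table above:

        SELECT Name
        FROM dbo.Person
        WHERE Name COLLATE Latin1_General_BIN LIKE '[a-z]%'                  -- starts with a lowercase letter
          AND Name COLLATE Latin1_General_BIN NOT LIKE '%[^a-zA-Z0-9_-]%';   -- only letters, digits, '_' and '-'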

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 3

    - by SQLOS Team
    In parts 1 and 2 of this series we looked at the basics of Hyper-V Dynamic Memory and SQL Server memory management. In this part Serdar looks at configuration guidelines for SQL Server memory management.

    Part 3: Configuration Guidelines for Hyper-V Dynamic Memory and SQL Server

    Now that we understand SQL Server Memory Management and Hyper-V Dynamic Memory basics, let’s take a look at general configuration guidelines in order to utilize the benefits of Hyper-V Dynamic Memory in your SQL Server VMs.

    Requirements

    Host Operating System Requirements: The Hyper-V Dynamic Memory feature is introduced with Windows Server 2008 R2 SP1. Therefore, in order to use Dynamic Memory for your virtual machines, you need to have Windows Server 2008 R2 SP1 or Microsoft Hyper-V Server 2008 R2 SP1 on your Hyper-V host.

    Guest Operating System Requirements: In addition to this, Dynamic Memory is only supported in the Standard, Web, Enterprise and Datacenter editions of Windows running inside VMs. Make sure that your VM is running one of these editions. For additional requirements on each operating system see “Dynamic Memory Configuration Guidelines” here.

    SQL Server Requirements: All versions of SQL Server support Hyper-V Dynamic Memory. However, only certain editions of SQL Server are aware of dynamically changing system memory. To have a truly dynamic environment for your SQL Server VMs, make sure that you are running one of the SQL Server editions listed below:
    · SQL Server 2005 Enterprise
    · SQL Server 2008 Enterprise / Datacenter Editions
    · SQL Server 2008 R2 Enterprise / Datacenter Editions
    Configuration guidelines for other versions of SQL Server are covered below in the FAQ section.

    Guidelines for configuring Dynamic Memory Parameters

    Here is how to configure Dynamic Memory for your SQL VMs in a nutshell:

        Hyper-V Dynamic Memory Parameter    Recommendation
        Startup RAM                         1 GB + SQL Min Server Memory
        Maximum RAM                         > SQL Max Server Memory
        Memory Buffer %                     5
        Memory Weight                       Based on performance needs

    Startup RAM: In order to ensure that your SQL Server VMs can start correctly, ensure that Startup RAM is higher than the configured SQL Min Server Memory for your VMs. Otherwise the SQL Server service will need to do paging in order to start, since it will not be able to see enough memory during startup. Also note that Startup Memory will always be reserved for your VMs. This will guarantee a certain level of performance for your SQL Servers; however, setting this too high will limit the consolidation benefits you’ll get out of your virtualization environment.

    Maximum RAM: This one is obvious. If you’ve configured SQL Max Server Memory for your SQL Server, make sure that the Dynamic Memory Maximum RAM configuration is higher than this value. Otherwise your SQL Server will not grow to memory values higher than the value configured for Dynamic Memory.

    Memory Buffer %: The memory buffer configuration is used to provision file cache to virtual machines in order to improve performance. Due to the fact that SQL Server is managing its own buffer pool, the Memory Buffer setting should be configured to the lowest value possible, 5%. Configuring a higher memory buffer will prevent low resource notifications from Windows Memory Manager and it will prevent reclaiming memory from SQL Server VMs.

    Memory Weight: The memory weight configuration defines the importance of memory to a VM. Configure higher values for the VMs that have higher performance requirements. VMs with higher memory weight will have more memory under high memory pressure conditions on your host.

    Questions and Answers

    Q1 – Which SQL Server memory model is best for Dynamic Memory? The best SQL Server memory model for Dynamic Memory is the “Locked Page Memory Model”. This memory model ensures that SQL Server memory is never paged out, and it’s also adaptive to dynamically changing memory in the system. This will be extremely useful when Dynamic Memory is attempting to remove memory from SQL Server VMs, ensuring no SQL Server memory is paged out. You can find instructions on configuring the “Locked Page Memory Model” for your SQL Servers here.

    Q2 – What about other SQL Server editions, how should I configure Dynamic Memory for them? Other editions of SQL Server do not adapt to dynamically changing environments. They determine how much memory they should allocate during startup and don’t change this value afterwards. Therefore, make sure that you configure a higher startup memory for your VM, because that will be all the memory that SQL Server will utilize. Tune Maximum Memory and Memory Buffer based on the other workloads running on the system. If there are no other workloads, consider using Static Memory for these editions.

    Q3 – What if I have multiple SQL Server instances in a VM? Having multiple SQL Server instances in a VM is not a general recommendation for predictable performance, manageability and isolation. In order to achieve predictable behavior, make sure that you configure SQL Min Server Memory and SQL Max Server Memory for each instance in the VM. And make sure that:
    · Dynamic Memory Startup Memory is greater than the sum of the SQL Min Server Memory values for the instances in the VM
    · Dynamic Memory Maximum Memory is greater than the sum of the SQL Max Server Memory values for the instances in the VM

    Q4 – I’m using the Large Page Memory Model for my SQL Server. Can I still use Dynamic Memory? The short answer is no. SQL Server does not dynamically change its memory size when configured with the Large Page Memory Model. In virtualized environments Hyper-V provides large page support by default. Most of the time, the Large Page Memory Model doesn’t bring any benefits to a SQL Server if it’s running in a virtualized environment.

    Q5 – How do I monitor SQL performance when I’m trying Dynamic Memory on my VMs? Use the performance counters below to monitor memory performance for SQL Server:
    · Process – Working Set: This counter is available in the VM via the process performance counters. It represents the actual amount of physical memory being used by the SQL Server process in the VM.
    · SQL Server – Buffer Cache Hit Ratio: This counter is available in the VM via the SQL Server counters. This represents the paging being done by SQL Server. A rate of 90% or higher is desirable.

    Conclusion

    These blog posts are a quick start to a story that will be developing more in the near future. We’re still continuing our testing and investigations to provide more detailed configuration guidelines, with example performance numbers, in a white paper in the upcoming months. Now it’s time to give SQL Server and Hyper-V Dynamic Memory a try. Use these guidelines to kick-start your environment. See what you think about it and let us know of your experiences.

    - Serdar Sutay

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
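    As a small companion sketch to the guidance above, the SQL-side min/max memory settings that the Startup RAM and Maximum RAM recommendations refer to can be set with sp_configure (the numbers here are purely illustrative, not recommendations):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'min server memory (MB)', 2048;
        EXEC sp_configure 'max server memory (MB)', 16384;
        RECONFIGURE;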

    Read the article

  • Data Modeling: Logical Modeling Exercise

    - by swisscheese
    In trying to learn the art of data storage I have been trying to take in as much solid information as possible. PerformanceDBA posted some really helpful tutorials/examples in the following posts among others: is my data normalized? and Relational table naming convention. I already asked a subset question of this model here. So to make sure I understood the concepts he presented and I have seen elsewhere I wanted to take things a step or two further and see if I am grasping the concepts. Hence the purpose of this post, which hopefully others can also learn from. Everything I present is conceptual to me and for learning rather than applying it in some production system. It would be cool to get some input from PerformanceDBA also since I used his models to get started, but I appreciate all input given from anyone. As I am new to databases and especially modeling I will be the first to admit that I may not always ask the right questions, explain my thoughts clearly, or use the right verbage due to lack of expertise on the subject. So please keep that in mind and feel free to steer me in the right direction if I head off track. If there is enough interest in this I would like to take this from the logical to physical phases to show the evolution of the process and share it here on Stack. I will keep this thread for the Logical Diagram though and start new one for the additional steps. For my understanding I will be building a MySQL DB in the end to run some tests and see if what I came up with actually works. Here is the list of things that I want to capture in this conceptual model. Edit for V1.2 The purpose of this is to list Bands, their members, and the Events that they will be appearing at, as well as offer music and other merchandise for sale Members will be able to match up with friends Members can write reviews on the Bands, their music, and their events. There can only be one review per member on a given item, although they can edit their reviews and history will be maintained. BandMembers will have the chance to write a single Comment on Reviews about the Band they are associated with. Collectively as a Band only one Comment is allowed per Review. Members can then rate all Reviews and Comments but only once per given instance Members can select their favorite Bands, music, Merchandise, and Events Bands, Songs, and Events will be categorized into the type of Genre that they are and then further subcategorized into a SubGenre if necessary. It is ok for a Band or Event to fall into more then one Genre/SubGenre combination. Event date, time, and location will be posted for a given band and members can show that they will be attending the Event. An Event can be comprised of more than one Band, and multiple Events can take place at a single location on the same day Every party will be tied to at least one address and address history shall be maintained. Each party could also be tied to more then one address at a time (i.e. billing, shipping, physical) There will be stored profiles for Bands, BandMembers, and general members. So there it is, maybe a bit involved but could be a great learning tool for many hopefully as the process evolves and input is given by the community. Any input? EDIT v1.1 In response to PerformanceDBA U.3) That means no merchandise other than Band merchandise in the database. Correct ? That was my original thought but you got me thinking. Maybe the site would want to sell its own merchandise or even other merchandise from the bands. 
Not sure a mod to make for that. Would it require an entire rework of the Catalog section or just the identifying relationship that exists with the Band? Attempted a mod to sell both complete albums or song. Either way they would both be in electronic format only available for download. That is why I listed an Album as being comprised of Songs rather then 2 separate entities. U.5) I understand what you bring up about the circular relation with Favorite. I would like to get to this “It is either one Entity with some form of differentiation (FavoriteType) which identifies its treatment” but how to is not clear to me. What am I missing here? u.6) “Business Rules This is probably the only area you are weak in.” Thanks for the honest response. I will readdress these but I hope to clear up some confusion in my head first with the responses I have posted back to you. Q.1) Yes I would like to have Accepted, Rejected, and Blocked. I am not sure what you are referring to as to how this would change the logical model? Q.2) A person does not have to be a User. They can exist only as a BandMember. Is that what you are asking? Minor Issue Zero, One, or More…Oops I admit I forgot to give this attention when building the model. I am submitting this version as is and will address in a future version. I need to read up more on Constraint Checking to make sure I am understanding things. M.4) Depends if you envision OrderPurchase in the future. Can you expand as to what you mean here? EDIT V1.2 In response to PerformanceDBA input... Lessons learned. I was mixing the concept of Identifying / Non-Identifying and Cardinality (i.e. Genre / SubGenre), and doing so inconsistently to make things worse. Associative Tables are not required in Logical Diagrams as their many-to-many relationships can be depicted and then expanded in the Physical Model. I was overlooking the Cardinality in a lot of the relationships The importance of reading through relationships using effective Verb Phrases to reassure I am modeling what I want to accomplish. U.2) In the concept of this model it is only required to track a Venue as a location for an Event. No further data needs to be collected. With that being said Events will take place on a given EventDate and will be hosted at a Venue. Venues will host multiple events and possibly multiple events on a given date. In my new model my thinking was that EventDate is already tied to Event . Therefore, Venue will not need a relationship with EventDate. The 5th and 6th bullets you have listed under U.2) leave me questioning my thinking though. Am I missing something here? U.3) Is it time to move the link between Item and Band up to Item and Party instead? With the current design I don't see a possibility to sell merchandise not tied to the band as you have brought up. U.5) I left as per your input rather than making it a discrete Supertype/Subtype Relationship as I don’t see a benefit of having that type of roll up. Additional Revisions AR.1) After going through the exercise for FavoriteItem, I feel that Item to Review requires a many-to-many relationship so that is indicated. Necessary? Ok here we go for v1.3 I took a few days on this version, going back and forth with my design. Once the logical process is complete, as I want to see if I am on the right track, I will go through in depth what I had learned and the troubles I faced as a beginner going through this process. The big point for this version was it took throwing in some Keys to help see what I was missing in the past. 
Going through the process of doing a matrix proved to be of great help also. Regardless of anything, if it wasn't for the input given by PerformanceDBA I would still be a lost soul wondering in the dark. Who knows my current design might reaffirm that I still am, but I have learned a lot so I am know I at least have a flashlight in my hand. At this point in time I admit that I am still confused about identifying and non-identifying relationships. In my model I had to use non-identifying relationships with non nulls just to join the relationships I wanted to model. In reading a lot on the subject there seems to be a lot of disagreement and indecisiveness on the subject so I did what I thought represented the right things in my model. When to force (identifying) and when to be free (non-identifying)? Anyone have inputs? EDIT V1.4 Ok took the V1.3 inputs and cleaned things up for this V1.4 Currently working on a V1.5 to include attributes.
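    As a tiny illustration of the kind of business rule being discussed (one Review per Member per Item), a constraint-based sketch follows. It is not the poster's model; the names are invented, and it is written in T-SQL for consistency with the rest of this page even though the poster mentions targeting MySQL:

        CREATE TABLE dbo.Review
        (
            MemberId   int           NOT NULL,
            ItemId     int           NOT NULL,
            ReviewText nvarchar(max) NOT NULL,
            CreatedAt  datetime      NOT NULL DEFAULT (GETDATE()),
            CONSTRAINT PK_Review PRIMARY KEY (MemberId, ItemId)   -- enforces one review per member per item
        );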

    Read the article

  • Database Snapshots of Mirrored databases affect performance of Principal database?

    - by yrushka
    I have 2 servers set up in high-safety mirroring; one is the principal and the other is the mirror. Currently I have 2 snapshots of a production database (100 GB in size) created on the principal server (so that massive select processes can read without locking) and 2 snapshots of the same database on the mirror server for reporting purposes. I know snapshots reduce the performance of their source databases, but I am not sure whether snapshots on the mirror server have any impact on the principal server's performance. Thanks.

    Read the article

  • DB2 upgrade database Fails due to descriptor corruption

    - by Jdcc
    Hi everyone, I've recently upgraded my DB2 9.5 to DB2 9.7, and I am unable to upgrade my database to v9.7. The error I have been receiving is this:

    "SQL0901N The SQL statement failed because of a non-severe system error. Subsequent SQL statements can be processed. (Reason "Packed descriptor corruption found. Please run RUNSTATS on this table".) SQLSTATE=58004"

    I have tried to connect with 9.5 clients that worked before the "upgrade" on other machines, but they complain about the DB not being migrated to the current version. So my DB is now somewhere in limbo between 9.5 and 9.7. Would anyone have any clever ideas on how to execute RUNSTATS on this DB without being able to connect to it? I'm not concerned so much about the data inside as I am about the settings (name, optimization stuff, etc.). Please let me know if there is any information I left out. Thanks, Jdcc

    Read the article

  • dependency analysis from C# code thru to database tables/columns

    - by fpdave
    I'm looking for a tool to do system-wide dependency analysis across C# code and SQL Server databases. It's looking like the only tool available that does this might be CAST (CAST Software), which is expensive and does lots more besides that I don't really need. C# code through to database column dependency would be hugely useful for many reasons, including:
    - determining the effects of database changes throughout the system
    - seeing hot spots in the database schema
    - finding dead stored procedures/tables/etc.
    - understanding the existing code base
    Does anyone know of any such tools?

    Read the article

  • SQL Server 2008 Express Edition Connection Problem

    - by waqasahmed
    I have VS 2008 Professional Edition. After the installation (which included SQL Server 2008), I decided to install SQL Server 2008 Express Edition with Advanced Tools (so I could get SQL Server Management Studio on it). So I uninstalled the SQL Express that came with VS 2008, and installed the standalone SQL Server Express 2008 version with advanced tools. However, when I try to log on to SQL Server Management Studio using .\SQLEXPRESS as the server name and Windows Authentication as the authentication, I get the following message:

        TITLE: Connect to Server
        ------------------------------
        Cannot connect to .\SQLEXPRESS.
        ------------------------------
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476
        ------------------------------
        BUTTONS: OK
        ------------------------------

    Any suggestions on how to get it to work? I have tried disabling Windows Firewall as well and still no luck. I am using Windows Vista, and the SQL Server 2008 Express SP1 patch has also been applied recently.

    Read the article

  • Designing a Database Application with OOP

    - by Tim C
    I often develop SQL database applications using Linq, and my methodology is to build model classes to represent each table, and each table that needs inserting or updating gets a Save() method (which either does an InsertOnSubmit() or SubmitChanges(), depending on the state of the object). Often, when I need to represent a collection of records, I'll create a class that inherits from a List-like object of the atomic class. ex. public class CustomerCollection : CoreCollection<Customer> { } Recently, I was working on an application where end-users were experiencing slowness, where each of the objects needed to be saved to the database if they met a certain criteria. My Save() method was slow, presumably because I was making all kinds of round-trips to the server, and calling DataContext.SubmitChanges() after each atomic save. So, the code might have looked something like this foreach(Customer c in customerCollection) { if(c.ShouldSave()) { c.Save(); } } I worked through multiple strategies to optimize, but ultimately settled on passing a big string of data to a SQL stored procedure, where the string has all the data that represents the records I was working with - it might look something like this: CustomerID:34567;CurrentAddress:23 3rd St;CustomerID:23456;CurrentAddress:123 4th St So, SQL server parses the string, performs the logic to determine appropriateness of save, and then Inserts, Updates, or Ignores. With C#/Linq doing this work, it saved 5-10 records / s. When SQL does it, I get 100 records / s, so there is no denying the Stored Proc is more efficient; however, I hate the solution because it doesn't seem nearly as clean or safe. My real concern is that I don't have any better solutions that hold a candle to the performance of the stored proc solution. Am I doing something obviously wrong in how I'm thinking about designing database applications? Are there better ways of designing database applications?
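    As an aside, a middle-ground sketch between the LINQ row-by-row approach and the delimited-string stored procedure is a SQL Server 2008 table-valued parameter, which lets the client send the whole batch as a typed rowset while keeping the set-based update on the server. The type, procedure, table and column names below are invented for illustration:

        CREATE TYPE dbo.CustomerAddressTVP AS TABLE
        (
            CustomerID     int          NOT NULL,
            CurrentAddress varchar(255) NOT NULL
        );
        GO
        CREATE PROCEDURE dbo.SaveCustomerAddresses
            @rows dbo.CustomerAddressTVP READONLY
        AS
        BEGIN
            -- One set-based statement instead of a round trip per customer
            UPDATE c
            SET    c.CurrentAddress = r.CurrentAddress
            FROM   dbo.Customers AS c
            JOIN   @rows AS r ON r.CustomerID = c.CustomerID;
        END;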

    Read the article

  • How In-Memory Database Objects Affect Database Design: The Conceptual Model

    - by drsql
    After a rather long break in the action to get through some heavy tech editing work (paid work before blogging, I always say!) it is time to start working on this presentation about In-Memory Databases. I have been trying to decide on the scope of the demo code in the back of my head, and I have added more and taken away bits and pieces over time trying to find the balance of "enough" complexity to show data integrity issues and joins, but not so much that we get lost in the process of trying to...(read more)

    Read the article

  • Database Schema Usage

    - by CrazyHorse
    I have a question regarding the appropriate use of SQL Server database schemas and was hoping that some database gurus might be able to offer some guidance around best practice.

    Just to give a bit of background, my team has recently shrunk to 2 people and we have just been merged with another 6-person team. My team had set up a SQL Server environment running off a desktop, backing up to another desktop (and nightly to the network), whilst the new team has a formal SQL Server environment, running on a dedicated server, with backups and maintenance all handled by a dedicated team. So far it's good news for my team.

    Now to the query. My team designed all our tables to belong to a 3-letter schema name (e.g. User = USR, General = GEN, Account = ACC) which, broadly speaking, relates to specific applications, although there is a lot of overlap. My new team has come from an Access background and has implemented their tables within dbo with a 3-letter prefix followed by "_tbl", so the examples above would be dbo.USR_tblTableName, dbo.GEN_tblTableName and dbo.ACC_tblTableName.

    Further to this, neither my old team nor my new team has gone live with their SQL Servers yet (we're both coincidentally migrating away from Access environments), and the new team have said they're willing to consider adopting our approach if we can explain how this would be beneficial. We are not anticipating handling table updates at schema level, as we will be using application-level logins. Also, with regards to the unwieldiness of the 7-character prefix, I'm not overly concerned myself, as we're using LINQ almost exclusively so the tables can simply be renamed in the DBML (although I know that presents some challenges when we update the DBML).

    So, given that both teams need to be aligned with one another, can anyone offer any convincing arguments either way?
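    For illustration, the schema-per-area layout under discussion looks like this (the schema and table names are just examples):

        CREATE SCHEMA USR AUTHORIZATION dbo;
        GO
        CREATE TABLE USR.UserProfile
        (
            UserProfileID int IDENTITY(1,1) PRIMARY KEY,
            LoginName     varchar(100) NOT NULL
        );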

    Read the article

  • sql 2008 metadata modified date

    - by Kumar
    Is there a way to identify the timestamp when an object (table/view/stored proc...) was modified? There's a refdate in sysobjects, but it's always the same as crdate, at least in my case, and I know that ALTER VIEW / ALTER TABLE / ALTER PROC commands have been run many times post creation.
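    For what it's worth, a sketch using the newer catalog view, which does track modification times in SQL Server 2005 and later:

        SELECT name, type_desc, create_date, modify_date
        FROM sys.objects
        WHERE type IN ('U', 'V', 'P')      -- tables, views, stored procedures
        ORDER BY modify_date DESC;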

    Read the article

  • Use SQL to filter the results of a stored procedure

    - by Ben McCormack
    I've looked at other questions on Stack Overflow related to this question, but none of them seemed to answer it clearly. We have a system stored procedure called sp_who2 which returns a result set of information for all running processes on the server. I want to filter the data returned by the stored procedure; conceptually, I might do it like so:

        SELECT * FROM sp_who2 WHERE login='bmccormack'

    That method, though, doesn't work. What are good practices for achieving the goal of querying the returned data of a stored procedure, preferably without having to look at the code of the original stored procedure and modify it?
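    The usual patterns are either INSERT ... EXEC into a temp table whose columns match the procedure's output, or, when the underlying data is available in system views, querying those views directly. For this particular case, much of what sp_who2 reports is also exposed by sys.dm_exec_sessions, which can be filtered normally (a sketch; requires VIEW SERVER STATE permission):

        SELECT session_id, login_name, host_name, program_name, status
        FROM sys.dm_exec_sessions
        WHERE login_name = 'bmccormack';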

    Read the article
