Search Results

Search found 21 results on 1 page for '3nf'.

  • Do these tables respect 3NF database normalization?

    - by penas
    AUTHOR table: Author_ID (PK), First_Name, Last_Name
    TITLES table: TITLE_ID (PK), NAME, Author_ID (FK)
    DOMAIN table: DOMAIN_ID (PK), NAME, TITLE_ID (FK)
    READERS table: READER_ID (PK), First_Name, Last_Name, ADDRESS, CITY_ID (FK), PHONE
    CITY table: CITY_ID (PK), NAME
    BORROWING table: BORROWING_ID (PK), READER_ID (FK), TITLE_ID (FK), DATE
    HISTORY table: READER_ID, TITLE_ID, DATE_OF_BORROWING, DATE_OF_RETURNING

    Do these tables respect 3NF database normalization? What if two authors work together on the same title? Should the ADDRESS column have its own table? When a reader borrows a book, I make an entry in the BORROWING table. After he returns the book, I delete that entry and make a new entry in the HISTORY table. Is this a good idea? Do I break any rule? Should I instead have a single BORROWING table with a DATE_OF_RETURNING column?
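
    A minimal SQL sketch of the single-table alternative asked about at the end (column names follow the question; types and the junction table for co-authors are assumptions, not part of the original design):

        -- one row per loan; the row is updated on return, not moved to a second table
        CREATE TABLE BORROWING (
            BORROWING_ID      INT IDENTITY PRIMARY KEY,
            READER_ID         INT NOT NULL REFERENCES READERS (READER_ID),
            TITLE_ID          INT NOT NULL REFERENCES TITLES (TITLE_ID),
            DATE_OF_BORROWING DATE NOT NULL,
            DATE_OF_RETURNING DATE NULL   -- NULL while the book is still out
        );

        -- if two authors can share a title, a junction table replaces TITLES.Author_ID
        CREATE TABLE TITLE_AUTHOR (
            TITLE_ID  INT NOT NULL REFERENCES TITLES (TITLE_ID),
            AUTHOR_ID INT NOT NULL REFERENCES AUTHOR (Author_ID),
            PRIMARY KEY (TITLE_ID, AUTHOR_ID)
        );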

    Read the article

  • Simple Database normalization question...

    - by user365531
    Hi all, I have a quick question regarding a database that I am designing, and making sure it is normalized. I have a customer table with a primary key of CustomerId. It has a StatusCode column holding a code that reflects the customer's account status, i.e. 1 = Open, 2 = Closed, 3 = Suspended, etc. Now I would like another field in the customer table that flags whether the account is allowed to be suspended: certain customers will be automatically suspended if they break their trading terms, others not. So the relevant table fields will be: Customers (CustomerId (PK), StatusCode, IsSuspensionAllowed). Now both fields are dependent on the primary key, as you cannot determine the status, or whether suspensions are allowed, without knowing the specific customer. Except, of course, that when IsSuspensionAllowed is set to NO, the customer should never have a StatusCode of 3 (Suspended). It seems from the above table design that this is possible unless a check constraint is added to my table. I can't see how another table could be added to the relational design to enforce this, though, as the two fields only depend on each other in the case where IsSuspensionAllowed is NO and StatusCode is 3. So after my long-winded explanation, my question is this: is this a normalization problem where I'm not seeing a relational design that would enforce it, or is it actually just a business rule that should be enforced with a check constraint, leaving the table still normalized? Cheers, Steve
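
    A minimal sketch of the check-constraint option (table and column names from the question; SQL Server syntax and a bit-valued flag are assumptions):

        ALTER TABLE Customers
        ADD CONSTRAINT CK_Customers_Suspension
            CHECK (NOT (IsSuspensionAllowed = 0 AND StatusCode = 3));
            -- a customer who may not be suspended can never carry status 3 (Suspended)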

    Read the article

  • Predicting advantages of database denormalization

    - by Janus Troelsen
    I was always taught to strive for the highest normal form of database normalization, and we were taught Bernstein's synthesis algorithm to achieve 3NF. This is all very well, and it feels nice to normalize your database, knowing that fields can be modified while retaining consistency. However, performance may suffer. That's why I am wondering whether there is any way to predict the speedup/slowdown of denormalizing. That way, you can build your list of FDs featuring 3NF and then denormalize as little as possible. I imagine that denormalizing too much would waste space and time, because e.g. giant blobs are duplicated, or because it becomes harder to maintain consistency when you have to update multiple fields in a transaction. Summary: given a 3NF FD set and a set of queries, how do I predict the speedup/slowdown of denormalization? Links to papers appreciated too.
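
    A tiny illustration of the trade-off being asked about (orders/product are hypothetical names, not from the question):

        -- 3NF: unit_price lives only on PRODUCT, so every read pays for a join
        SELECT o.order_id, p.unit_price
        FROM orders o
        JOIN product p ON p.product_id = o.product_id;

        -- denormalized: unit_price copied onto ORDERS, so the read is join-free,
        -- but every price change must now touch both tables in one transaction
        SELECT order_id, unit_price FROM orders;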

    Read the article

  • database normalization

    - by runeveryday
    Someone told me the following table doesn't satisfy the second normal form (2NF), but I don't know why. I am a newbie at database design; I have read some tutorials on 3NF, but I can't understand 2NF and 3NF well. I hope someone can explain them to me. Thank you.

    +------------+-----------+-------------------+
    | pk         | pk        | row               |
    +------------+-----------+-------------------+
    | A          | B         | C                 |
    | A          | D         | C                 |
    | A          | E         | C                 |
    +------------+-----------+-------------------+
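
    One likely reading of the objection (an interpretation, not part of the original question): the key is composite (the two pk columns), but the third column repeats whenever the first key column is A, i.e. it depends on only part of the key. A sketch of the usual 2NF decomposition, with hypothetical names:

        -- the partially dependent attribute moves out with the key part it depends on
        CREATE TABLE main_relation (
            col1 CHAR(1) NOT NULL,
            col2 CHAR(1) NOT NULL,
            PRIMARY KEY (col1, col2)
        );
        CREATE TABLE col1_detail (
            col1      CHAR(1) PRIMARY KEY,
            row_value CHAR(1)   -- the old third column, stored once per col1
        );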

    Read the article

  • My favorite BI and DW blogs, part 7: Oracle Data Warehousing

    - by Fekete Zoltán
    I recommend the following substantial blog to the gentle reader's attention: The Data Warehouse Insider: http://blogs.oracle.com/datawarehousing/ It covers everything from general data warehouse concepts to "best practice" lessons learned from implementations and design work. Topics: star schemas, partitioning, OLAP, 3NF, parallel processing, data loading, ETL-ELT, data models, events, Exadata, Database Machine, compression, data mining, customer stories, ...

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not simply to find the most popular posts right now, but to find posts that are popular compared to other posts on the same blog. For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets far more readership than the other two, so in raw numbers every post on the tech blog will always outnumber posts on the other two. Say the average tech blog post gets 500 Facebook likes and the other two average 50 likes per post. When a sports post has 200 fb likes and a gossip post has 300 while today's tech posts sit at their usual 500, I want to highlight the sports and gossip posts (more likes than their own blog's average, versus the tech blog's higher raw count that is merely average for that blog).

    The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. post 1234: after 15 min, 10 fb likes, 4 tweets, 6 g+; after 30 min, 15 fb likes, 15 tweets, 10 g+; ... after 48 hours, 200 fb likes, 25 tweets, 15 g+. By keeping a history like this for each blog post, I can know the average number of likes/shares/tweets at any given interval. So if, for example, the average number of fb likes for all blog posts 48 hrs after posting is 50, and a particular post has 200, I can mark it as a popular post and feature/highlight it. A consideration in the design is being able to easily query the values (likes/shares) for a specific time-frame, e.g. fb likes after 30 min or tweets after 24 hrs, in order to compute the averages to compare against (or should the averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question.

    My main question is: what should a database schema for storing this info look like? Assuming the above approach is taken, I am trying to figure out what a schema for storing the likes over time would look like. I am brand new to databases; from some basic reading I see that it is advisable to aim for 3NF. I have come up with the following possible schemas.

    Schema 1 (DB: Popular Posts)
    Table Post: post_id (PK), url, title
    Table SocialActivity: activity_id (PK), url (FK), type (i.e. facebook, twitter, g+), value, timestamp

    This was my initial instinct (based on my very limited db knowledge). As far as I understand, this schema would be 3NF? I searched for designs of similar database models and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario in that question is similar (recording users' weight/height over time). Taking the accepted answer there and applying it to my model results in something like:

    Schema 2 (same as above, but the social activity broken into 2 tables)
    Table Post: post_id (PK), url, title
    Table SocialMeasurement: measurement_id (PK), post_id (FK), timestamp
    Table SocialStat: stat_id (PK), measurement_id (FK), type (i.e. facebook, twitter, g+), value

    The advantage I see in Schema 2 is that I will likely want to read all the values for a given time: when taking a measurement 30 min after a post is published, I will simultaneously check the number of fb likes, fb shares, fb comments, tweets, g+, and LinkedIn shares. With this schema it may be easier to get all stats for the measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x.

    Another thought I had: since it doesn't make sense to compare the number of fb likes with the number of tweets or g+ shares, maybe each social measurement belongs in its own table?

    Schema 3 (DB: Popular Posts)
    Table Post: post_id (PK), url, title
    Table fb_likes: fb_like_id (PK), post_id (FK), timestamp, value
    Table fb_shares: fb_shares_id (PK), post_id (FK), timestamp, value
    Table tweets: tweets_id (PK), post_id (FK), timestamp, value
    Table google_plus: google_plus_id (PK), post_id (FK), timestamp, value

    As you can see, I am generally unsure which approach to take. I'm sure this is a typical database problem (storing measurements over time, e.g. temperature statistics) that must have a common solution. Is there a design pattern/model for this, and does it have a name? I tried searching for "database periodic data collection" and "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
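
    A minimal SQL sketch of Schema 2 as described above (names follow the question; types and constraints are assumptions):

        CREATE TABLE post (
            post_id INT PRIMARY KEY,
            url     VARCHAR(2048) NOT NULL,
            title   VARCHAR(512)
        );
        CREATE TABLE social_measurement (
            measurement_id INT PRIMARY KEY,
            post_id        INT NOT NULL REFERENCES post (post_id),
            measured_at    TIMESTAMP NOT NULL
        );
        CREATE TABLE social_stat (
            stat_id        INT PRIMARY KEY,
            measurement_id INT NOT NULL REFERENCES social_measurement (measurement_id),
            stat_type      VARCHAR(32) NOT NULL,  -- e.g. 'fb_like', 'tweet', 'g_plus'
            value          INT NOT NULL
        );
        -- "all stats for post 1234 at time x" is then a single join on measurement_id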

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
    Here is the list of curated articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorites and listed them here with additional notes below. Let me know which of the following is your favorite article from memory lane.

    2006

    SQL SERVER – Cursor to Kill All Process in Database
    I did indeed write this cursor, and when I look back I wonder how naive I was to write it. The reason for writing it was to free my database from any existing connections so I could perform a database operation. This worked fine, but there can be a potentially big issue if an important transaction is killed by this process. There is another way to achieve the same thing: we can use ALTER syntax to put the database into single-user mode (see the sketch after this post). Read more about that over here and here.

    2007

    Rules of Third Normal Form and Normalization Advantage – 3NF
    The rules of 3NF are mentioned here:
    - Make a separate table for each set of related attributes, and give each table a primary key.
    - If an attribute depends on only part of a multi-valued key, remove it to a separate table.
    - If attributes do not contribute to a description of the key, remove them to a separate table.

    Correct Syntax for Stored Procedure SP
    Sometimes a simple question is the most important question. I often see incorrectly written stored procedures in industry: a few people write code after the outermost BEGIN…END, and a few write code after the GO statement. In this brief blog post, I have attempted to explain the same.

    2008

    Switch Between Result Pane and Query Pane – SQL Shortcut
    Many times when I am writing a query I have to scroll through the results in the result set. Most developers use the mouse to switch between the query pane and the result pane. There are a few developers who are crazy about keyboard shortcuts: F6 is the key that switches between the query pane and the tabs of the result pane.

    Interesting Observation – Use of Index and Execution Plan
    Query optimization is a complex game and it has its own rules. From the example in the article we discovered that the Query Optimizer does not always use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the primary key. When all the rows and columns are selected, the primary key should be used to select data, as it provides optimal performance.

    2009

    Interesting Observation – TOP 100 PERCENT and ORDER BY
    If you pull up any application or system where more than 100 SQL Server views have been created, I am very confident that in one or two places you will notice a view where the ORDER BY clause is used with TOP 100 PERCENT. SQL Server 2008 does not throw an error for a VIEW with an ORDER BY clause; moreover, it does not acknowledge its presence either. In this article we take three perfect examples and demonstrate which clause to use when.

    Comma Separated Values (CSV) from Table Column
    A very common question: how do you create comma-separated values from a table in the database? The answer is also very common if we use XML. Check out this article for quick learning on the subject.

    Azure Start Guide – Step by Step Installation Guide
    Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in it are not outdated. They are still valid, and many of the functions still work as described. I believe this one article will put you on the track to using Azure!

    Size of Index Table for Each Index – Solution
    Earlier I posted a small question on this blog and asked readers to participate and provide a solution. The puzzle was to write a query that returns the size of each index on a particular table: the earlier listed query with an additional column containing the size of the index. This article presents two of the best solutions to the puzzle.

    2010

    This week in 2010 was the week of puzzles, as I posted three interesting ones. To this day I notice pretty good interest in the puzzles. They are tricky, but they certainly bring great value if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all of them? I am confident that reading the following three blog posts will enhance your experience with T-SQL:
    - SQL SERVER – Challenge – Puzzle – Usage of FAST Hint
    - SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal
    - SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exist

    2011

    DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012
    Have you ever faced a situation where something does not work, and when you try to fix it you enjoy the fix and start to appreciate the breaking change? Well, that is exactly how I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now that the disclaimer is over, I suggest you read the original story – you will love it!

    Get Directory Structure using Extended Stored Procedure xp_dirtree
    Here is a question for you: why would you do something in SQL Server when you can do the same task much more easily at the command prompt? Well, the answer is that sometimes there are real use cases where we have to do such things. This is one such example, where I have demonstrated how in SQL Server 2012 we can use an extended stored procedure to retrieve a directory structure.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
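
    The single-user-mode alternative mentioned under the 2006 entry, as a minimal T-SQL sketch (MyDatabase is a placeholder name):

        -- disconnect everyone else, rolling back their open transactions
        ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        -- ... perform the maintenance operation ...
        ALTER DATABASE MyDatabase SET MULTI_USER;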

    Read the article

  • Best practices when creating/modeling databases?

    - by Oscar Mederos
    I learned at university some steps to model a database:
    1. Model the problem using the Extended Entity-Relationship Model.
    2. Extract the functional dependencies.
    3. Apply some algorithms to normalize the database (3NF or Boyce-Codd).
    4. Create the database.
    I'm studying Computer Science, and since taking that course I've wondered whether I always need to follow those steps when creating a complex database for a specified problem. For example, do PHP / .NET / ... programmers always do that? Or are there tools to simplify the process, maybe using another way of representing the problem instead of the EERM?

    Read the article

  • Data Warehouse Best Practices

    - by jean-pierre.dijcks
    In our quest to share our endless wisdom (ahem…), one of the things we figured might be handy is recording some of the best practices for data warehousing. And so we did. And we did some more… We have now recreated our websites on Oracle Technology Network and have a separate page for best practices, parallelism, and other cool topics related to data warehousing. But the main topic of this post is the set of recorded best practices. Here is what is available (it is a series that ties together but can be read independently), applicable to almost any database version:
    - Partitioning
    - 3NF schema design for a data warehouse
    - Star schema design
    - Data Loading
    - Parallel Execution
    - Optimizer and Stats management
    The best practices page has a lot of other useful information, so have a look here.

    Read the article

  • Software development, basics of design, conventions and scalability

    - by goce ribeski
    I need to improve my programming skills in order to achieve better scalability for the software I'm working on. The purpose is to learn the rules for adding new modules and features, so that when it comes to maintaining existing ones there is an underlying concept. So I'm looking for a good book, tutorial, or website where I can continue reading about this. Currently, what I know and what I do is:
    - design a relational database (3NF), with a separate class for each table
    - put that in MVC
    - implement modular programming
    - ...write code and hope for the best...
    I presume the next things I need to learn more deeply are:
    - coding conventions (naming, commenting, ...)
    - organizing functions
    - building interfaces
    - organizing custom-made libraries and the APIs I'm using
    - documenting, team work...
    Lastly, my job, though it needn't affect your answer: I am a PHP CodeIgniter developer.

    Read the article

  • Repeating fields in similar database tables

    - by user1738833
    I have been tasked with working on a database that I have never seen before, and I'm looking at the DB structure. Some of the central, most heavily queried and joined tables look like virtual duplicates of each other. Here's a massively simplified representation of the situation, with business-sensitive information changed and hypothetical table names and fields:

    TopLevelGroup: PK_TLGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
    SubGroup: PK_SubGroupId, FK_ParentTopLevelGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
    SubSubGroup: PK_SubSubGroupId, FK_ParentSubGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK

    I haven't listed the field types, as I don't think they are particularly important to the situation. In addition, rather than the four repeated fields in the example above, I'm actually looking at 86 repeated fields. For the most part, those fields genuinely do represent "facts" about the primary table entity, so it's not automatically wrong for that reason. Also, the "groups" represented here have a property-inheritance relationship: if DisplaysXOnBill is NULL in the SubSubGroup, it takes the value of DisplaysXOnBill from its parent, the SubGroup, and so on up to the TopLevelGroup. Further, the requirements will never extend the model beyond three levels, so there is no need for flexibility in that area.

    Is there a design smell in several tables that describe very similar entities having almost identical fields? If so, what might be a better design than the example above? I'm using the phrase "design smell" to indicate a possible problem; of course, in any given situation a particular design might well be the best solution. I'm looking for a more general answer: what might be wrong with this design, and what might the better design be if so?

    Possibly related, but not primary, questions: Is this database schema in a reasonably normal form (e.g. 3NF), insofar as can be told from the information I've provided? I can't see a problem with the requirements of 2NF and 3NF, except in their inheriting the requirements of 1NF; is 1NF satisfied, though? Are repeating groups allowed in different tables? Is there a best-practice method for implementing the inheritance relationship in a database as I require? The method above feels clunky to me, because any query on the SubSubGroup necessarily needs to join onto the SubGroup and TopLevelGroup tables to collect inherited facts, which can make even trivial queries requiring facts from the SubSubGroup table rather long-winded.

    There are, of course, political considerations to making a relatively large change like this. For the purposes of this question, I'm happy to ignore that fact in the interests of keeping the answers ring-fenced to the technical problem.
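
    A minimal sketch of the inheritance walk described above, for one of the 86 columns (table and column names from the question; the SQL syntax is an assumption):

        SELECT ssg.PK_SubSubGroupId,
               COALESCE(ssg.DisplaysXOnBill,
                        sg.DisplaysXOnBill,
                        tlg.DisplaysXOnBill) AS EffectiveDisplaysXOnBill
        FROM SubSubGroup   ssg
        JOIN SubGroup      sg  ON sg.PK_SubGroupId = ssg.FK_ParentSubGroupId
        JOIN TopLevelGroup tlg ON tlg.PK_TLGroupId = sg.FK_ParentTopLevelGroupId;

    Multiplied across 86 columns, this is the clunkiness the question describes; one commonly suggested alternative is a single group table with a self-referencing parent key, resolved the same way.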

    Read the article

  • ORACLE PARTNER ARCHITECTS TRAINING

    - by Mike.Hallett(at)Oracle-BI&EPM
    Join the “Oracle Partner Architects Training”. It is aimed at providing Partner experts, architects and consultants with in-depth architectural knowledge about Oracle technology. Here is your chance to learn from the best: Oracle technology beyond the obvious. There are over 40 live and recorded online training sessions, covering many aspects of systems architecture, including these in the domain of BI, data warehousing and integration:
    - The Oracle Information Management Reference Architecture
    - 3NF Or Data Vault: Mixed Emotions Or A Clear Choice?
    - The Ins And Outs Of Data Integration
    - Introduction To BI-Applications
    - Customizations In BI-Applications
    - Endeca: Analyzing Unstructured Information
    Download the full schedule in your language (Dutch, English, French, German, Italian, Spanish).

    Read the article

  • Database design for a very large amount of data

    - by Hossein
    Hi, I am working on a project involving a large amount of data from the delicious website. The available data files contain "Date, UserId, Url, Tags" (one row per bookmark). I normalized my database to 3NF, and because of the nature of the queries we wanted to use in combination, I ended up with 6 tables. The design looked fine; however, now that a large amount of data is in the database, most queries need to join at least 2 tables to get the answer, sometimes 3 or 4. At first we didn't have any performance issues, because for testing purposes we hadn't added too much data. Now that we have a lot of data, simply joining extremely large tables takes a lot of time, and for our project, which has to be real-time, that is a disaster. I was wondering how big companies solve these issues. It looks like normalizing tables just adds complexity, but how do big companies handle large amounts of data in their databases? Don't they do normalization? Thanks
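
    One hedged illustration of a common answer (hypothetical names, not from the question): keep the 3NF tables as the source of truth and maintain a denormalized, query-ready copy of the hot join path.

        -- flat read table refreshed from the six normalized tables (e.g. on a schedule)
        CREATE TABLE bookmark_flat (
            bookmark_date DATE,
            user_id       INT,
            url           VARCHAR(2048),
            tag           VARCHAR(64)
        );
        -- INSERT INTO bookmark_flat
        -- SELECT ... FROM bookmarks JOIN users ON ... JOIN tags ON ...;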

    Read the article

  • Is this data set in third normal form?

    - by user2980802
    UNF: (Customer-name, Customer-id, Customer-address, (Unit-price, Order-id, Quantity, Product-id, Delivery-date), (Supplier-name, Supplier-id, Supplier-Address))

    3NF attempt:
    CUSTOMER (Customer-id, Order-id, Customer-name, Customer-address)
    ORDER (Order-id, Customer-id)
    ORDER/PRODUCT (Order-id, Quantity, Product-id)
    PRODUCT (Order-id, Product-id, Delivery-date, Supplier-id, Unit-price)
    SUPPLIER (Supplier-name, Supplier-id, Supplier-Address, Product-id)

    Basically, the UNF is the un-normalised form. The information should have EXACTLY five tables (a hint we were given), and the tables listed are the definite table names. We were told to make assumptions based on this information: a Customer Invoice is generated from customer orders (Order & Order/Product entities), and a Supplier Order is generated for products that are low in stock (Product entity).

    Assumptions:
    - A customer can place many orders, but an order is placed by only one customer.
    - An order can be for many products, and a product can be ordered many times.
    - A product is supplied by only one supplier; a supplier may supply many products.

    This is one of my modules at university and my lecturer is anything but useful; I'm really struggling, so any help is really appreciated.
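
    One possible reading of those assumptions as SQL (a sketch only; keys follow the stated cardinalities rather than the attempt above, and ORDER is renamed ORDERS because ORDER is a reserved word):

        CREATE TABLE SUPPLIER (
            supplier_id      INT PRIMARY KEY,
            supplier_name    VARCHAR(100),
            supplier_address VARCHAR(200)
        );
        CREATE TABLE PRODUCT (
            product_id  INT PRIMARY KEY,
            unit_price  DECIMAL(10,2),
            supplier_id INT REFERENCES SUPPLIER (supplier_id)  -- one supplier per product
        );
        CREATE TABLE CUSTOMER (
            customer_id      INT PRIMARY KEY,
            customer_name    VARCHAR(100),
            customer_address VARCHAR(200)
        );
        CREATE TABLE ORDERS (
            order_id    INT PRIMARY KEY,
            customer_id INT REFERENCES CUSTOMER (customer_id)  -- one customer per order
        );
        CREATE TABLE ORDER_PRODUCT (
            order_id      INT REFERENCES ORDERS (order_id),
            product_id    INT REFERENCES PRODUCT (product_id),
            quantity      INT,
            delivery_date DATE,
            PRIMARY KEY (order_id, product_id)                 -- many-to-many
        );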

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus), using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS. From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as is often the case with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?

    Read the article

  • Which Table Should be Master and Child in Database Design

    - by Jason
    I am quickly learning the ins and outs of database design (something that, as of a week ago, was new to me), but I am running across some questions that aren't immediately obvious, so I was hoping to get some clarification. The question I have right now is about foreign keys. As part of my design, I have a Company table. Originally I had included address information directly within that table but, hoping to achieve 3NF, I broke the address information out into its own table, Address. In order to maintain data integrity, I created a column in Company called addressId (an INT), and the Address table has a corresponding addressId as its primary key. What I'm a little bit confused about (or what I want to make sure I'm doing correctly) is determining which table should be the master (referenced) table and which should be the child (referencing) table. When I originally set this up, I made the Address table the master and Company the child. However, I now believe this is wrong, because there should be only one address per Company and, if a Company row is deleted, I would want the corresponding Address to be removed as well (CASCADE deletion). I may be approaching this completely wrong, so I would appreciate any good rules of thumb on how best to think about the relationship between tables when using foreign keys. Thanks!
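
    A minimal sketch of one arrangement that gives the cascade described (names follow the question; syntax is an assumption): make Address the referencing (child) table, pointing at Company.

        CREATE TABLE Company (
            companyId INT PRIMARY KEY
            -- other company fields ...
        );
        CREATE TABLE Address (
            addressId INT PRIMARY KEY,
            companyId INT NOT NULL UNIQUE          -- UNIQUE keeps it one address per company
                REFERENCES Company (companyId)
                ON DELETE CASCADE                  -- deleting a company removes its address
            -- street, city, ...
        );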

    Read the article

  • Java-Hibernate: How can I translate these tables to hibernate annotations?

    - by penas
    I need to create a simple application using these tables: http://stackoverflow.com/questions/2612848/are-these-tables-respect-the-3nf-database-normalization I have created the application using plain old JDBC, but I would like to see how it would look using Hibernate. I don't know how to map the SQL schema in Java. I have found LOTS of examples, but I'm pretty confused about using Hibernate and I don't know if I did a good job. For example, for the first three tables:

    AUTHOR table: Author_ID (PK), First_Name, Last_Name
    TITLES table: Title_ID (PK), Name, Author_ID (FK)
    DOMAIN table: Domain_ID (PK), Name, Title_ID (FK)

    The code in Java:

        // Table 1
        @Entity
        @Table(name = "AUTHORS", schema = "LIBRARY")
        public class Author {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "Author_ID")
            private int authorId;

            @Column(name = "First_Name", nullable = false, length = 50)
            private String firstName;

            @Column(name = "Last_Name", nullable = false, length = 40)
            private String lastName;

            // the FK Author_ID lives in TITLES, so the join column names that FK
            @OneToMany
            @JoinColumn(name = "Author_ID")
            private List<Title> titles;
        }

        // Table 2
        @Entity
        @Table(name = "TITLES")
        public class Title {
            @Id
            @Column(name = "Title_ID")
            private int titleID;

            @Column(name = "Name", nullable = false, length = 50)
            private String name;

            // the FK Title_ID lives in DOMAINS, so Title is the "one" side
            @OneToMany(mappedBy = "title")
            private List<Domain> domains;
        }

        // Table 3
        @Entity
        @Table(name = "DOMAINS")
        public class Domain {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "Domain_ID")
            private int domainId;

            @Column(name = "Name", nullable = false, length = 50)
            private String name;

            @ManyToOne
            @JoinColumn(name = "Title_ID")
            private Title title;
        }

    Any good? :)

    Read the article

  • database design - empty fields

    - by imanc
    Hey, I am currently debating an issue with a guy on my dev team. He believes that empty fields are bad news. For instance, say we have a customer details table that stores data for customers from different countries, and each country has a slightly different address configuration, plus one or two extra fields: French customer details may also store an entry code, floor/level, and title fields (madame, etc.); South Africa would have a security number; and so on. Given that we're talking about minor variances, my idea is to put all of the fields into the one table and use what is needed on each form. My colleague believes we should have a separate table with the extra data, e.g. customer_info_fr. But that seems to totally defeat the purpose of a combined table in the first place. His argument is that empty fields/columns are bad, but I'm struggling to find justification, in terms of database design principles, for or against this argument and the preferred solutions. Another option is a separate mini EAV table that stores the extra data with parent_id, key, and val fields; or to serialise the extra data into an extra_data column in the main customer_data table. I think I am confused because what I'm discussing is not covered by 3NF, which is what I would typically use as a reference for how to structure data. So my question, specifically: if you have slight variances in data between records (1-2 different fields, for instance), what is the best way to proceed?
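
    A minimal sketch of the mini-EAV option mentioned above (names follow the question, except that key/val are renamed attr_key/attr_val because KEY is a reserved word in some engines; types are assumptions):

        CREATE TABLE customer_extra (
            parent_id INT NOT NULL REFERENCES customer_data (id),
            attr_key  VARCHAR(64) NOT NULL,   -- e.g. 'entry_code', 'security_number'
            attr_val  VARCHAR(255),
            PRIMARY KEY (parent_id, attr_key)
        );
        -- one row per country-specific attribute; the main table keeps only shared fields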

    Read the article
