Search Results

Search found 952 results on 39 pages for 'relational'.

Page 12 of 39

  • What are the advantages of storing xml in a relational database?

    - by Chris
    I was poking around the AdventureWorks database today and I noticed that a number of tables (HumanResources.JobCandidate and Sales.Individual, for example) have a column which stores XML data. What I would like to know is: what is the advantage of storing basically a database table row's worth of data in another table's column? Doesn't this make it difficult to query this information? Or is the assumption that the data won't need to be queried and just needs to be stored?
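
    For context, on SQL Server (where AdventureWorks lives) an xml column is still queryable. Below is a minimal sketch using the xml type's value() and exist() methods; the Orders table and OrderInfo column are illustrative, not from AdventureWorks:

        SELECT OrderID,
               OrderInfo.value('(/Order/Customer/@id)[1]', 'int') AS CustomerID
        FROM   Orders
        WHERE  OrderInfo.exist('/Order/Items/Item[@qty > 5]') = 1;
        -- XML indexes (CREATE PRIMARY XML INDEX ...) can make such queries
        -- reasonably fast, though never as cheap as a plain relational column.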

    Read the article

  • What are some ways people deploy relational database changes using Node.js? [closed]

    - by JamesEggers
    I've been diving more and more into Node.js and hosting services like Heroku and Nodejitsu recently, and have been trying to figure out how best to deploy database changes for Postgres or MySQL. There are a few migration projects under npm that I can see; however, all seem to be really buggy or simply don't work. I maintain the Monarch migration project on npm, but it's currently buggy itself, and my experience developing such utilities is in other, more procedural, languages. So what do people use to deploy changes to their databases in these environments? What has worked for people? I'm looking for a better understanding of what the current situation/process looks like.
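
    For readers unfamiliar with how these tools generally work: most keep a tracking table in the target database so that each change script is applied exactly once per environment. A minimal sketch (not any particular tool's schema):

        CREATE TABLE schema_migrations (
            version    VARCHAR(255) PRIMARY KEY,  -- e.g. a timestamp or sequence number
            applied_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
        );
        -- On deploy, the tool compares the scripts on disk against this table
        -- and applies (and records) only the ones not yet listed.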

    Read the article

  • Designing a user-defined list to be stored in a relational database - Should I include user index?

    - by Zaemz
    By index I mean that, as the user creates the list, each item receives an integer index for its place in that particular list. Since there will be a table of ListItems, I'd prefer to avoid using the name "Index" for the field. Then I got thinking: should I even include the list index in the database? I figured I would, because then the list would be re-created in the same order every time. Or I could order the list for the user based on its actual primary key, since the list items are created in succession anyway... What should I do?
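
    A sketch of the shape being discussed, with illustrative names (a Position column avoids calling the field "Index", and the unique constraint keeps two items from claiming the same slot):

        CREATE TABLE ListItem (
            ListItemID INT PRIMARY KEY,
            ListID     INT NOT NULL REFERENCES List(ListID),  -- hypothetical parent table
            Position   INT NOT NULL,                          -- the user's ordering
            UNIQUE (ListID, Position)
        );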

    Read the article

  • Big Data – Evolution of Big Data – Day 3 of 21

    - by Pinal Dave
    In yesterday's blog post we answered the question "what is Big Data?". Today we will look at why and how Big Data evolved. Though the answer is very simple, I would like to tell it in the form of a history lesson.

    Data in Flat Files

    In the early days, data was stored in flat files and there was no structure around it. If any data had to be retrieved from a flat file, it was a project by itself. There was no way to retrieve data efficiently, and data integrity was just a term people discussed, without any modeling or structure behind it. A database residing in flat files had more issues than we would like to discuss in today's world; any data processing in an application was close to a nightmare. Though the applications developed at that time were not that advanced either, the need for data, and for proper data management, was always there.

    Edgar F. Codd and the 12 Rules

    Edgar Frank Codd was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He presented 12 rules for the relational database, and suddenly the chaotic world of databases began to see discipline. The relational database was the promised land for users of unstructured databases: it introduced relationships between data and improved the performance of data retrieval. The database world saw an immediate and major transformation, and vendors and users alike started to adopt the relational model.

    Relational Database Management Systems

    Once Codd proposed his 12 rules for the RDBMS, many different vendors started to build applications and tools to support relationships between data. This was a learning curve for the many developers who had never before worked with data modeling. As time passed, pretty much everybody accepted the relational view of data, and products evolved to perform at their best within the boundaries of RDBMS concepts. This was a great era for databases, and it gave the world deep expertise as well as some of the best products. The Entity–relationship model evolved at the same time: in software engineering, an Entity–relationship model (ER model) is a data model for describing a database in an abstract way.

    Enormous Data Growth

    Everything was going fine for the RDBMS in the database world. As there were no major challengers, the adoption of RDBMS applications and tools was pretty much universal, and at times there was a race to make the developer's life easier with RDBMS management tools. Thanks to this extreme popularity and ease of use, pretty much all data was stored in RDBMS systems. New kinds of applications were built, and social media took the world by storm. Every organization felt pressure to provide the best experience to its users based on the data it held, and all the while, data kept growing in pretty much every organization and application.

    Data Warehousing

    This enormous data growth presented a big challenge for organizations that wanted to build intelligent systems on their data and provide a near-real-time, superior user experience to their customers. Organizations quickly started building data warehousing solutions where data was stored and processed. Business intelligence became an everyday need: data was received from transaction systems and processed overnight to build intelligent reports from it. Though this was a great solution, it had its own set of challenges: the relational database model and data warehousing concepts were all built with traditional relational modeling in mind, and they still struggled whenever unstructured data was present.

    Interesting Challenge

    Every organization had expertise in managing structured data, but the world had already moved on to unstructured data. There was intelligence in videos, photos, SMS, text, social media messages and various other data sources. All of this now needed to be brought onto a single platform and into a uniform system that does what the business needs. The way we do business has also changed: there was a time when users only got the features the technology supported; now users ask for a feature and the technology is built to support it. Real-time intelligence from fast-paced data flows is becoming a necessity. Large amounts (Volume) of widely varied (Variety) high-speed (Velocity) data: these are the defining properties of this data, and traditional database systems have limits in resolving the challenges this new kind of data presents. Hence the need for Big Data science. We need innovation in how we handle and manage data, and we need creative ways to capture data and present it to users. Big Data is reality!

    Tomorrow

    In tomorrow's blog post we will discuss the basics of Big Data architecture.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Best practices for combining Lucene.NET and a relational database?

    - by FlySwat
    I'm working on a project where I will have a LOT of data, and it will be searchable in several forms that are very efficiently expressed as SQL queries, but it also needs to be searchable via natural language processing. My plan is to build an index using Lucene for this form of search. My question: if I do this and perform a search, Lucene will return the IDs of the matching documents in the index, and I then have to look those entities up in the relational database. This could be done in two ways (that I can think of so far):

    1. N queries (horrible).
    2. Pass all the IDs to a stored procedure at once (perhaps as a comma-delimited parameter). This has the downsides of being limited by the maximum parameter size, and of the slow performance of a UDF splitting the string into a temporary table.

    I'm almost tempted to mirror everything into Lucene's index, so that I can periodically regenerate the index from the backing store but only need to access Lucene for the frontend. Advice?
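
    On SQL Server 2008 and later, a table-valued parameter is a third option worth noting: it sidesteps both drawbacks of option 2 (the parameter size limit and the string-splitting UDF). A hedged sketch, with illustrative object names:

        CREATE TYPE IdList AS TABLE (Id INT PRIMARY KEY);
        GO
        CREATE PROCEDURE GetDocumentsByIds
            @Ids IdList READONLY
        AS
        BEGIN
            -- Join the incoming IDs (from Lucene) straight to the entity table.
            SELECT d.*
            FROM Documents AS d
            JOIN @Ids AS i ON i.Id = d.DocumentId;
        END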

    Read the article

  • Is it possible to listen to relational database updates?

    - by Morgan Cheng
    Is it possible to listen for relational database updates? For example, my web app wants to push data updates to the client through Comet technology. I could have the program poll the database periodically, but that would be neither performant nor scalable. If the app could hook into an "event handler" of the database, then it could get a notification every time a given database table's data is updated. That sounds more promising, but I haven't found any concrete example of it. This is the listener pattern. Do common relational databases support such a feature?
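
    One concrete mechanism, sketched below, is PostgreSQL's LISTEN/NOTIFY wired up through a trigger (SQL Server has Query Notifications and Oracle has Change Notification as rough analogues); the table and channel names here are illustrative:

        CREATE OR REPLACE FUNCTION notify_orders_changed() RETURNS trigger AS $$
        BEGIN
            -- Publish the changed row's id on the 'orders_changed' channel.
            PERFORM pg_notify('orders_changed', NEW.id::text);
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER orders_changed_trigger
        AFTER INSERT OR UPDATE ON orders
        FOR EACH ROW EXECUTE PROCEDURE notify_orders_changed();

        -- The web app holds a connection that has run: LISTEN orders_changed;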

    Read the article

  • mySQL & Relational databases: How to handle sharding/splitting on application level?

    - by Industrial
    Hi everybody, I have been thinking a bit about sharding tables, since partitioning cannot be done with foreign keys in a MySQL table. Maybe there's an option to switch to a different relational database that features both, but I don't see that as an option right now. So, the sharding idea seems like a pretty decent thing. But what's a good approach to doing this at the application level? I am guessing that a starting point would be to suffix each table name with the maximum value of the primary key in that table: something like products_4000000, products_8000000 and products_12000000. Then the application would check, with a simple if-statement, whether the requested id (PK) is smaller than four, eight or twelve million before making any actual database calls. So, is this a step in the right direction, or are we doing something really stupid?

    Read the article

  • What's the best way to store custom objects in a relational database?

    - by user342610
    I have objects with properties. The objects can change their structure: properties may be added/removed/changed, and objects can be dropped altogether. So an object's metadata (description, classes, call them what you like :) ) can change. The database should store the object schemas and the instances of those objects. What's the best way to organise a relational database structure to store the data described above? Currently I see only two ways:

    1. Store object schemas in a few tables: schema general data, schema properties, and the possible property types. Store instances in their own tables: instance general data, plus one table per type from the possible-property-types table to hold instance property data. And so on.
    2. Store object schemas as in 1), but store instances as XML: one table for general instance info and one table with the instance XML.

    Please don't ask why/for what I need this. I just need to store custom objects, and the DB should work fast :)
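
    A rough sketch of what option 1's tables could look like, with illustrative names (one value table per supported property type; only the integer one is shown):

        CREATE TABLE ObjectSchema     (SchemaID   INT PRIMARY KEY, Name VARCHAR(100));
        CREATE TABLE SchemaProperty   (PropertyID INT PRIMARY KEY,
                                       SchemaID   INT REFERENCES ObjectSchema(SchemaID),
                                       Name       VARCHAR(100),
                                       TypeID     INT);  -- points at a property-type lookup
        CREATE TABLE ObjectInstance   (InstanceID INT PRIMARY KEY,
                                       SchemaID   INT REFERENCES ObjectSchema(SchemaID));
        CREATE TABLE InstanceIntValue (InstanceID INT REFERENCES ObjectInstance(InstanceID),
                                       PropertyID INT REFERENCES SchemaProperty(PropertyID),
                                       Value      INT,
                                       PRIMARY KEY (InstanceID, PropertyID));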

    Read the article

  • Unstructured Data - The future of Data Administration

    Some have claimed that there is a problem with the way data is currently managed under the relational paradigm, due to the rise of unstructured data in modern business. PCMag.com defines unstructured data as data that does not reside in a fixed location, and further explains that unstructured data refers to data in free-text form that is not bound to any specific structure. With the rise of unstructured data in the form of emails, spreadsheets, images and documents, the critics have a right to argue that the relational paradigm is not as effective as the object-oriented data paradigm in managing this type of data. The relational paradigm relies heavily on structure and on relationships in and between items of data. This paradigm works best in a relational database management system like Microsoft SQL Server, MySQL, or Oracle, because data is forced to conform to a structure in the form of tables, and relations can be derived from the existence of one or more tables.

    These critics also claim that database administrators have not kept up with reality, because their primary focus in data administration is structured data and the relational paradigm. The relational paradigm was developed in the 1970s as a way to improve data management compared to standard flat files. Little has changed since then, and modern database administrators need to know more than just how to handle structured data. That is why critics claim that today's data professionals do not have the skills needed to store and maintain data for modern systems, compared to the skills of system designers, programmers, software engineers, and data designers, given the industry trend toward object-oriented design and development.

    I think that they are wrong. I do not disagree that the industry is moving toward an object-oriented approach to development, with the potential to take a more object-oriented approach to data. However, I think that it is business itself that is limiting database administrators from changing how data is stored, because of the potential costs and the impact that altering any part of stored data might have. Furthermore, database administrators, like all technology workers, are constantly trying to improve their technical skills in order to excel at their jobs, so I think that accusing data professionals is unjust when the root cause of the lack of innovation is controlled by business; and it is business that will suffer for its inability to keep up with technology.

    One way for database professionals to better prepare for the future of database management is to start working with data in the form of objects, so that the information stored within objects can be extracted and used alongside the data stored under the relational paradigm. I also think the use of pattern matching will increase with the increased use of unstructured data, because objects can be selected, filtered and altered based on the existence of a pattern found within an object.

    Read the article

  • What the Hekaton?

    - by Tony Davis
    Hekaton, the power behind SQL Server 2014’s In-Memory OLTP technology, is intended to make data operations run orders of magnitude faster on SQL Server. It works its magic partly by serving database workloads entirely from main memory, using memory-optimized table structures. It replaces the relational engine’s standard locking model with an optimistic concurrency model based on time-stamped row versions. Deeper down, the Hekaton engine uses new, ‘latch-free’ data structures.

    So far, so good, but performance improvements on this scale require a compromise, and the compromise is that these aren’t tables as we understand them. For the database developer, these differences are painful because they involve sacrificing some very important bits of the relational model. Most importantly, Hekaton tables don’t currently support FOREIGN KEY constraints or CHECK constraints, and you can’t put the checks in triggers because there aren’t any DML triggers either. Constraints allow a relational designer to enforce relational integrity and data integrity. Without them, of course, ‘bad data’ can get into our Hekaton tables, and there is no easy way of preventing it. For several classes of database and data, this is a show-stopper.

    One may regard all these restrictions regretfully, seeing limited opportunity to try out Hekaton with current databases, but perhaps there is also a sudden glow of recognition. Isn’t this how we all originally imagined table variables were going to be, back in SQL 2005? And they have much the same restrictions. Maybe, instead of pretending that a currently-designed database can be ‘Hekatonized’ with a few mouse clicks, we should redesign databases for SQL 2014 to replace table variables with Hekaton tables, exploiting this technology for fast intermediate processing and, for the most part, forgetting for now the idea of trying to convert our base relational tables into Hekaton tables. Few database developers would be averse to having their working tables run an order of magnitude faster, as long as it didn’t compromise the integrity of the data in the base tables.

    Read the article

  • NoSQL

    - by NoReasoning
    Last night (Tuesday, June 28), at the KC .NET User Group meeting, George Westwater gave a terrific presentation on NoSQL. The best way to define it (well, the best way is to see George explain it; he says he will record his presentation and make it available through his blog – link above) is: databases that do not use relational technology. And his point, and this is true – I have been around awhile – is that non-relational databases have been used in business for over 50 years. He points out that Wall Street firms have been using non-relational technology ever since they started using computers. IBM still fully supports IMS, now in version 11 (12 is in beta), because these firms are still using the product and will continue to do so for a long time. Of course, like a lot of business computing technology, there are a lot of new NoSQL products available these days, largely as a reaction to the problems of scaling relational databases for internet use. As a result, it almost looks as though NoSQL is something new. And there are a lot, I mean a LOT, I mean a L-O-T, of new products out there for this technology. The best resource covering all of these products is http://nosql-database.org/, which has a huge listing of what is available. My interest in the subject is primarily due to my interest in Windows Azure and the fact that Windows Azure storage is all non-relational, even the table storage. It is very fascinating and, most of all, far cheaper than using SQL Azure for storage in the “cloud”.

    Read the article

  • I am using relational division with EAV, but I need to find results in EAV that have some of the cat

    - by NewToDB
    I have two tables:

        CREATE TABLE EAV (
            subscriber_id   INT(1)   NOT NULL DEFAULT '0',
            attribute_id    CHAR(62) NOT NULL DEFAULT '',
            attribute_value CHAR(62) NOT NULL DEFAULT '',
            PRIMARY KEY (subscriber_id, attribute_id)
        );

        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (1,'color','red');
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (1,'size','xl');
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (1,'garment','shirt');
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (2,'color','red');
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (2,'size','xl');
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (2,'garment','pants');
        INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (3,'garment','pants');

        CREATE TABLE CRITERIA (
            attribute_id    CHAR(62) NOT NULL DEFAULT '',
            attribute_value CHAR(62) NOT NULL DEFAULT ''
        );

        INSERT INTO CRITERIA (attribute_id, attribute_value) VALUES ('color', 'red');
        INSERT INTO CRITERIA (attribute_id, attribute_value) VALUES ('size', 'xl');

    To find all subscribers in EAV that match my criteria, I use relational division:

        SELECT DISTINCT(subscriber_id)
        FROM EAV
        WHERE subscriber_id IN
            (SELECT E.subscriber_id
             FROM EAV AS E
             JOIN CRITERIA AS CR
               ON E.attribute_id = CR.attribute_id
              AND E.attribute_value = CR.attribute_value
             GROUP BY E.subscriber_id
             HAVING COUNT(*) = (SELECT COUNT(*) FROM CRITERIA))

    This gives me a unique list of subscribers who match all the criteria, so I get back subscribers 1 and 2: they are looking for the color red and size xl, and that's exactly my criteria. But what if I also want to get subscriber 3, since this subscriber didn't specifically say what color or size they want (i.e. there is no entry for the attribute 'color' or 'size' in the EAV table for subscriber 3)? Given my current design, is there a way I can extend my query to include subscribers that have zero or more of the attributes defined, where any attribute that is defined must match the criteria? Or is there a better way to design the table to aid in querying?
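
    For what it's worth, one way to express the relaxed condition ("every attribute the subscriber has defined must match the criteria, but missing attributes are fine") is to invert the test and exclude only subscribers with a conflicting value; a sketch against the tables above:

        SELECT DISTINCT subscriber_id
        FROM EAV AS E
        WHERE NOT EXISTS (
            SELECT 1
            FROM CRITERIA AS CR
            JOIN EAV AS E2
              ON  E2.attribute_id  = CR.attribute_id
              AND E2.subscriber_id = E.subscriber_id
            WHERE E2.attribute_value <> CR.attribute_value
        );
        -- Returns 1, 2 and 3 for the sample data: subscriber 3 defines neither
        -- 'color' nor 'size', so nothing conflicts.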

    Read the article

  • How does one model data from relational databases in Clojure?

    - by sandeep
    I have asked this question on Twitter as well as in the #clojure IRC channel, yet got no responses. There have been several articles about Clojure-for-Ruby-programmers and Clojure-for-Lisp-programmers, but the missing piece is Clojure for ActiveRecord programmers. There have been articles about interacting with MongoDB, Redis, etc., but these are key-value stores at the end of the day. Coming from a Rails background, however, we are used to thinking about databases in terms of associations: has_many, polymorphic, belongs_to, etc. The few articles about Clojure/Compojure + MySQL (ffclassic) delve right into SQL. Of course, it might be that an ORM induces impedance mismatch, but the fact remains that after thinking like ActiveRecord, it is very difficult to think any other way. I believe that relational DBs lend themselves very well to the object-oriented paradigm because they are, essentially, sets. Something like ActiveRecord is very well suited to modelling this data. For e.g. a blog, simply put:

        class Post < ActiveRecord::Base
          has_many :comments
        end

        class Comment < ActiveRecord::Base
          belongs_to :post
        end

    How does one model this in Clojure, which is so strictly anti-OO? Perhaps the question would have been better if it referred to all functional programming languages, but I am more interested in a Clojure standpoint (and Clojure examples).

    Read the article

  • Entity Attribute Value Database vs. strict Relational Model Ecommerce question

    - by Dr. Zim
    It is safe to say that the EAV/CR database model is bad. That said,

    Question: What database model, technique, or pattern should be used to deal with "classes" of attributes describing e-commerce products, which can be changed at run time?

    In a good e-commerce database, you will store classes of options (like TV resolution, then have a resolution for each TV; but the next product may not be a TV and not have "TV resolution"). How do you store them, search efficiently, and allow your users to set up product types with variable fields describing their products? If the search engine finds that customers typically search for TVs based on console depth, you could add console depth to your fields, then add a single depth for each TV product type at run time.

    There is a nice common feature among good e-commerce apps: they show a set of products, then have "drill down" side menus where you can see "TV Resolution" as a header along with the top five most common TV resolutions for the found set. You click one and it only shows TVs of that resolution, allowing you to drill down further by selecting other categories on the side menu. These options would be the dynamic product attributes added at run time; a sketch of this per-type structure follows below.

    Further discussion: Long story short, are there any links out on the Internet, or model descriptions, that could "academically" fix the following setup? I thank Noel Kennedy for suggesting a category table, but the need may be greater than that. I describe it a different way below, trying to highlight the significance. I may need a viewpoint correction to solve the problem, or I may need to go deeper into the EAV/CR.

    Love the positive response to the EAV/CR model. My fellow developers all say what Jeffrey Kemp touched on below: "new entities must be modeled and designed by a professional" (taken out of context; read his response below). The problem is:

    - entities add and remove attributes weekly (search keywords dictate future attributes)
    - new entities arrive weekly (products are assembled from parts)
    - old entities go away weekly (archived, less popular, seasonal)

    The customer wants to add attributes to the products for two reasons:

    - department / keyword search / comparison charts between like products
    - consumer product configuration before checkout

    The attributes must have significance, not just serve a keyword search. If they want to compare all cakes that have a "whipped cream frosting", they can click cakes, click birthday theme, click whipped cream frosting, then check all the cakes that are interesting, knowing they all have whipped cream frosting. This is not specific to cakes, just an example.
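
    To make the setup concrete, here is a sketch (illustrative names only) of the per-type attribute structure the question describes; note that this is itself a constrained flavour of EAV, which is exactly the tension being discussed:

        CREATE TABLE ProductType      (TypeID INT PRIMARY KEY, Name VARCHAR(100));
        CREATE TABLE TypeAttribute    (AttributeID INT PRIMARY KEY,
                                       TypeID      INT REFERENCES ProductType(TypeID),
                                       Name        VARCHAR(100));  -- e.g. 'TV Resolution'
        CREATE TABLE Product          (ProductID INT PRIMARY KEY,
                                       TypeID    INT REFERENCES ProductType(TypeID));
        CREATE TABLE ProductAttribute (ProductID   INT REFERENCES Product(ProductID),
                                       AttributeID INT REFERENCES TypeAttribute(AttributeID),
                                       Value       VARCHAR(255),
                                       PRIMARY KEY (ProductID, AttributeID));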

    Read the article

  • Does this schema sound better suited for a document-oriented data store or relational?

    - by Blaine LaFreniere
    Disclaimer: let me know if this question is better suited for serverfault.com

    I want to store information on music, specifically:

    - genres
    - artists
    - albums
    - songs

    This information will be used in a web application, and I want people to be able to see all of the songs associated with an album, the albums associated with an artist, and the artists associated with a genre. I'm currently using MySQL, but before I make a decision to switch I want to know:

    - How easy is scaling horizontally?
    - Is it easier to manage than an SQL-based solution?
    - Would the above data be too hard to model schema-free?

    When I think association, I immediately think RDBMSs; can data be stored in something like CouchDB but still have some kind of association as stated above?
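
    For comparison, the relational version of these associations is straightforward; a sketch (simplified to one genre per artist, which a real catalogue might relax with join tables):

        CREATE TABLE genres  (genre_id  INT PRIMARY KEY, name VARCHAR(100));
        CREATE TABLE artists (artist_id INT PRIMARY KEY, name VARCHAR(100),
                              genre_id  INT REFERENCES genres(genre_id));
        CREATE TABLE albums  (album_id  INT PRIMARY KEY, title VARCHAR(200),
                              artist_id INT REFERENCES artists(artist_id));
        CREATE TABLE songs   (song_id   INT PRIMARY KEY, title VARCHAR(200),
                              album_id  INT REFERENCES albums(album_id));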

    Read the article

  • How to Set Up a Customer Table with Multiple Phone Numbers? - Relational Database Design

    - by user311509
    CREATE TABLE Phone (
        phoneID - PK
        ...
    );

    CREATE TABLE PhoneDetail (
        phoneDetailID - PK
        phoneID - FK points to Phone
        phoneTypeID
        phoneNumber
        ...
    );

    CREATE TABLE Customer (
        customerID - PK
        firstName
        phoneID - unique FK points to Phone
        ...
    );

    A customer can have multiple phone numbers, e.g. Cell, Work, etc. phoneID in the Customer table is unique and points to phoneID in the Phone table. If a customer record is deleted, the corresponding phoneID record in the Phone table should also be deleted. Do you have any concerns about this design? Is it designed properly? My problem is that phoneID in the Customer table is a child, and when the child record is deleted I cannot delete the parent (Phone) record automatically.
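
    A common alternative, sketched below, is to reverse the relationship so that phone rows point at the customer; deleting a customer then cascades to its numbers automatically (generic SQL, illustrative names):

        CREATE TABLE Customer (
            customerID INT PRIMARY KEY,
            firstName  VARCHAR(100)
        );

        CREATE TABLE Phone (
            phoneID     INT PRIMARY KEY,
            customerID  INT NOT NULL REFERENCES Customer(customerID) ON DELETE CASCADE,
            phoneTypeID INT,          -- Cell, Work, ...
            phoneNumber VARCHAR(30)
        );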

    Read the article

  • What is the best way to archive data in a relational database?

    - by GenericTypeTea
    I have a bit of an issue with a particular aspect of a program I'm working on. I need the ability to archive (fix) a table so that a change anywhere in the system will not affect the results it returns. This is the basic structure of what I need to fix:

        Recipe --> Recipe (as sub recipe)
        Recipe --> Ingredients

    So, if I fix a recipe, I need to ensure all the sub recipes (including all the sub recipes' sub recipes, and so forth) are fixed, and all its ingredients are fixed. The problem is that the sub recipes and ingredients still need to be modifiable, as they are used by other recipes that are not fixed. I came up with a solution whereby I serialize (with protobuf-net) a master object that deals with the recipe and all its sub recipes and ingredients, and save the archive data to a table like the following:

        Archive {
            ReferenceId,     (i.e. RecipeId)
            ReferenceTypeId, (i.e. Recipe)
            ArchiveData      varbinary(max)
        }

    Now, this works great and is almost perfect... however I totally forgot (I'd love to blame the agile development mentality, however this was just short-sightedness) that this information needs to be reported on. As far as I'm aware, I can't think how I could inflate the serialized data back into my Recipe object and use it in a report. I'm using the standard SQL 2005 Reporting Services at the moment. Alternatively, I guess I could do the following:

    1. Duplicate every table and tag the word "Archive" onto the end of the table name. This would give me an area of specific archive data... but ignoring my simplified example, there'd actually be about 15 tables duplicated.
    2. Add a nullable, non-foreign-key property called "CopiedFromId" to every table that contains fixed data, and duplicate every record that the recipe (and all its sub recipes, and all their sub recipes) touches.
    3. Create some sort of denormalised structure that could be restored from at a later date to the original, unfixed recipe. Although I think this would be like option 1 and involve a lot of extra tables.

    Anyway, I'm at a total loss and do not particularly like any of these ideas. Can anyone please advise the best course of action?

    EDIT: Or 4) Create tables specific to what the report requires and populate them with the data when the user clicks the report button? This would mean about 4 extra tables for the report in question.

    Read the article

  • What resources will help me understand the fundamentals of Relational Database Systems?

    - by Rachel
    These are a few of the fundamental database questions that have always given me trouble. I have tried using Google and the wiki, but somehow I miss out on understanding the functionality rather than the terminology. If possible, I would really appreciate it if someone could share more insight on these questions using some visual or representative examples.

    - What is a key? A candidate key? A primary key? An alternate key? A foreign key?
    - What is an index, and how does it help your database?
    - What data types are available, and when should which ones be used?
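
    A small worked example may help anchor the terminology (a sketch, with made-up tables):

        CREATE TABLE Department (
            dept_id INT PRIMARY KEY,
            name    VARCHAR(50)
        );

        CREATE TABLE Employee (
            employee_id INT PRIMARY KEY,           -- primary key: the candidate key chosen to identify rows
            ssn         CHAR(11) NOT NULL UNIQUE,  -- alternate key: a candidate key not chosen as the PK
            dept_id     INT REFERENCES Department(dept_id),  -- foreign key: points at another table's key
            last_name   VARCHAR(50) NOT NULL
        );

        -- An index trades extra storage and slower writes for faster lookups:
        CREATE INDEX idx_employee_last_name ON Employee(last_name);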

    Read the article

  • Any tips on how to handle hierarchical trees in the relational model?

    - by George
    Hello all. I have a tree structure that can be n levels deep, without restriction. That means that each node can have another n nodes. What is the best way to retrieve a tree like that without issuing thousands of queries to the database? I have looked at a few other models, like the flat table model, the Preorder Tree Traversal algorithm, and so on. Do you guys have any tips or suggestions on how to implement an efficient tree model? My objective, in the end, is to have one or two queries that would spit out the whole tree for me. With enough processing I can display the tree in .NET, but that would be on the client machine, so not much of a big deal. Thanks for the attention.
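
    One widely used answer is an adjacency list read back with a recursive common table expression, which returns the whole tree in a single query on engines that support it (PostgreSQL, SQL Server, MySQL 8+; note that SQL Server omits the RECURSIVE keyword). A sketch with illustrative names:

        CREATE TABLE nodes (
            node_id   INT PRIMARY KEY,
            parent_id INT REFERENCES nodes(node_id),  -- NULL for the root
            name      VARCHAR(100)
        );

        WITH RECURSIVE tree AS (
            SELECT node_id, parent_id, name, 0 AS depth
            FROM nodes
            WHERE parent_id IS NULL
            UNION ALL
            SELECT n.node_id, n.parent_id, n.name, t.depth + 1
            FROM nodes AS n
            JOIN tree AS t ON n.parent_id = t.node_id
        )
        SELECT * FROM tree;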

    Read the article
