Search Results

Search found 30671 results on 1227 pages for 'database modeling'.


  • SQL Complicated Group / Join by Category

    - by Mike Silvis
    I currently have a database structure with two important tables: 1) Food Types (Fruit, Vegetables, Meat) 2) Specific Foods (Apple, Oranges, Carrots, Lettuce, Steak, Pork). I am trying to build a SQL statement that gives me the following: Fruit < Apple, Orange; Vegetables < Carrots, Lettuce; Meat < Steak, Pork. I have tried a statement like Select * From Food_Type join (Select * From Foods) as Foods on Food_Type.Type_ID = Foods.Type_ID but this returns every Specific Food, while I only want the first 2 per category. So I basically need my subquery to have a limit in it so that it finds only the first 2 per category. However, if I simply do Select * From Food_Type join (Select * From Foods LIMIT 2) as Foods on Food_Type.Type_ID = Foods.Type_ID my statement returns only 2 results in total.
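
    For reference, one common way to get "top 2 per category" in MySQL (a sketch, not taken from the thread) is a correlated subquery that counts how many foods in the same category precede the current row; the column names Food_ID, Food_Name and Type_Name are assumptions about the schema:

      SELECT ft.Type_Name, f.Food_Name
      FROM Food_Type ft
      JOIN Foods f ON f.Type_ID = ft.Type_ID
      WHERE (
          -- number of foods in the same category with a smaller ID;
          -- keeping rows where this is < 2 leaves at most two foods per category
          SELECT COUNT(*)
          FROM Foods f2
          WHERE f2.Type_ID = f.Type_ID
            AND f2.Food_ID < f.Food_ID
      ) < 2;

    The same idea can be expressed with ROW_NUMBER() OVER (PARTITION BY Type_ID) on databases that support window functions.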

    Read the article

  • How do I save user specific data in an asp.net site?

    - by Greg McNulty
    I just set up user profiles using asp.net 3.5 using wvd. For each user I would like to store data that they will be updating every day. For example, every time they go for a run they will update time and distance. I intend to allow them to also look up their history of distance and time from any past date. My question is, what does the database schema usually look like for such a setup? Currently asp.net set up a db for me when I made user profiles. Do I just add an extra table for every user? Should there be one big table with all users' data? How do I relate a user ID to their specific data? Etc. I have never done this before, so any ideas on how this is usually done would be very helpful. Thank you.
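
    The usual answer is a single table holding every user's entries, keyed back to the membership user, rather than one table per user. A rough T-SQL sketch (the RunLog table and its columns are invented for illustration; aspnet_Users.UserId is the GUID created by the ASP.NET membership provider):

      -- one row per logged run, for all users
      CREATE TABLE RunLog (
          RunId       INT IDENTITY(1,1) PRIMARY KEY,
          UserId      UNIQUEIDENTIFIER NOT NULL,   -- references aspnet_Users.UserId
          RunDate     DATETIME NOT NULL,
          DistanceKm  DECIMAL(6,2) NOT NULL,
          DurationSec INT NOT NULL
      );

      -- one user's history, most recent first
      SELECT RunDate, DistanceKm, DurationSec
      FROM RunLog
      WHERE UserId = @UserId
      ORDER BY RunDate DESC;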

    Read the article

  • Two Tables Serving as one Model in Rails

    - by matsko
    Is it possible in Rails to set up one model which depends on a join of two tables? This would mean that for the model record to be found/updated/destroyed there would need to be records in both database tables, linked together in a join. The model would just be all the columns of both tables wrapped together, which could then be used for forms and so on. This way, when the model gets created/updated, it is just one form variable hash that gets applied to the model. Is this possible in Rails 2 or 3?

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions. Below are highlights of the most informative:

    DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment and testing it out? You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    How does this benefit companies having their own data center? This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility and reduced risk. The benefit to the (internal) consumers of services is much faster provisioning and response to changes in business requirements.

    From a deployment perspective, is DBaaS solely the DBA's job? The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template. He decides how much AMM he needs. Why should I set it up one by one? That doesn't seem to be the purpose of self service. Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS? There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    How does Database as a Service impact or help with "Click to Compute" or deploying a "Database in cloud infrastructure"? DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    How exactly is DBaaS different from a traditional database, where storage/OS/DB all work together to 'transparently' provide service to applications? Will there be cross-database access by applications/users? Some key differences are: 1) The services run on a shared platform. 2) The services can be rapidly provisioned (< 15 minutes). 3) The services are dynamic and can be relocated, grown, or shrunk as needed to meet business needs rapidly and without disruption. 4) The user is able to provision the services directly from a standardized service catalog.

    With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How does pluggable database handle this, and the different needs/patching downtime of the apps the databases might be serving? You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patch/upgrade very efficient in that you can pre-provision a new version/patched multitenant db in a different ORACLE_HOME and then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.

    Read the article

  • Bulletproof way to DROP and CREATE a database under Continuous Integration.

    - by H. Abraham Chavez
    I am attempting to drop and recreate a database from my CI setup. But I'm finding it difficult to automate the dropping and creation of the database, which is to be expected given the complexities of the db being in use. Sometimes the process hangs, errors out with "db is currently in use" or just takes too long. I don't care if the db is in use, I want to kill it and create it again. Does someone have a straight-shot method to do this? Alternatively, does anyone have experience dropping all objects in the db instead of dropping the db itself?

      USE master
      --Create a database
      IF EXISTS(SELECT name FROM sys.databases WHERE name = 'mydb')
      BEGIN
          ALTER DATABASE mydb SET SINGLE_USER --or RESTRICTED_USER
          --WITH ROLLBACK IMMEDIATE
          DROP DATABASE uAbraham_MapSifterAuthority
      END
      CREATE DATABASE mydb;
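
    A pattern that is often used for this (a sketch, not taken from the thread, using a consistent database name throughout) is to combine single-user mode with WITH ROLLBACK IMMEDIATE, so existing connections are rolled back and disconnected instead of blocking the drop:

      USE master;
      IF EXISTS (SELECT name FROM sys.databases WHERE name = 'mydb')
      BEGIN
          -- disconnect current users and roll back their open transactions
          ALTER DATABASE mydb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
          DROP DATABASE mydb;
      END
      CREATE DATABASE mydb;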

    Read the article

  • I've created a database table using Visual Studio for my C# program. Now what?

    - by Kevin
    Hi! I'm very new to C#, so please forgive me if I've overlooked something here. I've created a database using Visual Studio (Add New Item, service-based database) called LoadForecast.mdf. I then created a table called ForecastsDB and added some fields. My main question is this: I've created a console application with the intention of writing some data to the newly created database. I've added LoadForecast.mdf as a data source for my program, but is there anything else I should do? I saw an example where the next step was adding a "data diagram", but this was for a visual application, not a console application. Do I still need to diagram the database for my console app? I just want to be able to write new records out to my database table and wasn't sure if there were any other things I needed to do for the VS environment to be "aware" of my database. Thanks for any advice!

    Read the article

  • Working with foreign keys - cannot insert

    - by Industrial
    Hi everyone! Doing my first tryouts with foreign keys in a MySQL database, and I'm trying to do an insert that fails for this reason: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails. Does this mean that foreign keys restrict INSERTs as well as DELETEs and/or UPDATEs on each table that is enforced with foreign key relations? Thanks! Updated description:

      Products
      ----------------------------
      id        | type
      ----------------------------
      0         | 0
      1         | 3

      ProductsToCategories
      ----------------------------
      productid | categoryid
      ----------------------------
      0         | 0
      1         | 1

    The Products table has the following structure:

      CREATE TABLE IF NOT EXISTS `alpha`.`products` (
          `id` MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
          `type` TINYINT(2) UNSIGNED NOT NULL DEFAULT 0,
          PRIMARY KEY (`id`),
          CONSTRAINT `prodsku`
              FOREIGN KEY (`id`) REFERENCES `alpha`.`productsToSku` (`product`)
              ON DELETE CASCADE
              ON UPDATE CASCADE
      ) ENGINE = InnoDB;
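
    For what it's worth, a foreign key does constrain INSERTs and UPDATEs as well as DELETEs: the value in the referencing column must already exist in the referenced table, so rows have to be inserted parent-first. Given the constraint above (products.id references productsToSku.product), a hedged illustration of the required ordering (the sku column is an assumption about the productsToSku table):

      -- the referenced row must exist first ...
      INSERT INTO productsToSku (product, sku) VALUES (2, 'SKU-0002');
      -- ... only then can the row that references it be inserted
      INSERT INTO products (id, type) VALUES (2, 0);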

    Read the article

  • How to deploy a Java Swing application with an embedded JavaDB database?

    - by Jonas
    I have implemented a Java Swing application that uses an embedded JavaDB database. The database needs to be stored somewhere, and the database tables need to be created on the first run. What is the preferred way to handle these procedures? Should I always create the database in the local directory, and first check whether the database file exists, and if it doesn't exist let the user create the tables (or at least show a message that the tables will be created)? Or should I let the user choose a path? But then I have to save the path somewhere. Should I save the path with Preferences.systemRoot(), and check if that variable is set on startup? If the user chooses a path and saves it in the Preferences, can I get any problems with user permissions, or should it be safe wherever the user stores the database? Or how do I handle this? Any other suggestions for this procedure?

    Read the article

  • How to optimize an SQL query with many thousands of WHERE clauses

    - by bugaboo
    I have a series of queries against a very large database, and I have hundreds of thousands of ORs in WHERE clauses. What is the best and easiest way to optimize such SQL queries? I found some articles about creating temporary tables and using joins, but I am unsure. I'm new to serious SQL, and have been cutting and pasting results from one query into the next.

      SELECT doc_id, language, author, title FROM doc_text WHERE language='fr' OR language='es'

      SELECT doc_id, ref_id FROM doc_ref WHERE doc_id=1234567 OR doc_id=1234570 OR doc_id=1234572 OR doc_id=1234596 OR ...

      SELECT ref_id, location_id FROM ref_master WHERE ref_id=098765 OR ref_id=987654 OR ref_id=876543 OR ...

      SELECT location_id, location_display_name FROM location

      SELECT doc_id, index_code FROM doc_index WHERE doc_id=1234567 OR doc_id=1234570 OR doc_id=1234572 OR doc_id=1234596 OR ... (x100,000)

    These unoptimized queries can take over 24 hours each. Cheers.
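
    The temporary-table-and-join idea mentioned in the question usually looks something like the sketch below (MySQL-flavoured syntax; the wanted_docs table name is invented). Loading the ID list once and joining against it replaces the long OR chain and also removes the need to cut and paste IDs from one query into the next:

      -- load the wanted ids once, into an indexed temporary table
      CREATE TEMPORARY TABLE wanted_docs (doc_id INT PRIMARY KEY);
      INSERT INTO wanted_docs (doc_id) VALUES (1234567), (1234570), (1234572), (1234596);  -- bulk-load the full list

      -- join instead of repeating OR doc_id=...
      SELECT d.doc_id, d.ref_id
      FROM doc_ref d
      JOIN wanted_docs w ON w.doc_id = d.doc_id;

    The same approach chains naturally: the output of one join can feed the next without ever materializing a hundred-thousand-term WHERE clause.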

    Read the article

  • SQL Server many-to-many design recommendation

    - by Jean-Philippe Brabant
    I have a SQL Server database with two tables: Users and Achievements. My users can have multiple achievements, so it is a many-to-many relationship. At school we learned to create an associative table for that sort of relationship. That means creating a table with a UserID and an AchievementID. But if I have 500 users and 50 achievements, that could lead to 25,000 rows. As an alternative, I could add a binary field to my Users table. For example, if that field contained 10010, that would mean that this user unlocked the first and the fourth achievements. Is there another way? And which one should I use?
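
    For scale, 25,000 narrow rows is a trivial volume for SQL Server, and the associative table keeps the data queryable in both directions, which a packed bit string cannot. A sketch of the junction table (column names are assumptions based on the question):

      CREATE TABLE UserAchievements (
          UserID        INT NOT NULL REFERENCES Users(UserID),
          AchievementID INT NOT NULL REFERENCES Achievements(AchievementID),
          UnlockedAt    DATETIME NOT NULL DEFAULT GETDATE(),  -- when it was earned
          PRIMARY KEY (UserID, AchievementID)                 -- one row per user/achievement pair
      );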

    Read the article

  • choose append to existing backup instead of overwrite

    - by aron
    Hello, I have a database and I made its first backup 2 days ago. Then yesterday I spent an entire day adding new records. This morning I ran a backup, but I selected "append to existing backup set". I just ran a restore and I found that it wiped out all my data from yesterday and restored it from the backup of 2 days ago, not the version from this morning's backup. I zipped this backup file to be safe. I changed some data in the DB, then I ran the backup again, but this time I selected "overwrite all existing backup sets". Now when I restore the db it seems to restore the data from the backup correctly. I think I learned a lesson here, correct me if I'm wrong. My questions are: Did I lose an entire day of work? I still have this morning's backup .bak file safe in a zip. Is there any way I can restore it with the right data?
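
    Most likely the data is not lost: when a backup is appended, the .bak file holds several backup sets, and a restore defaults to the first set in the file unless a position is given. A sketch of picking a specific set in T-SQL (the path and database name are placeholders):

      -- list the backup sets stored in the file; note the Position column
      RESTORE HEADERONLY FROM DISK = 'C:\backups\mydb.bak';

      -- restore a specific set, e.g. this morning's at position 2
      RESTORE DATABASE mydb
      FROM DISK = 'C:\backups\mydb.bak'
      WITH FILE = 2, REPLACE;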

    Read the article

  • How to represent a Many-To-Many relationship in XML or other simple file format?

    - by CSharperWithJava
    I have a list management application that stores its data in a many-to-many relationship database, i.e. a note can be in any number of lists, and a list can have any number of notes. I can also export this data to an XML file and import it in another instance of my app, for sharing lists between users. However, this is based on a legacy system where the list-to-note relationship was one-to-many (ideal for XML). Now a note that is in multiple lists is essentially split into two identical rows in the DB, and all relation between them is lost. Question: How can I represent this many-to-many relationship in a simple, standard file format? (Preferably XML, to maintain backwards compatibility.)
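
    One common shape (a sketch, with element names invented) mirrors the database: each note appears once with an id, and lists hold references to those ids, just like a junction table:

      <data>
        <notes>
          <note id="n1">Buy milk</note>
          <note id="n2">Call Bob</note>
        </notes>
        <lists>
          <list id="l1" name="Errands">
            <noteRef ref="n1"/>
            <noteRef ref="n2"/>
          </list>
          <list id="l2" name="Today">
            <noteRef ref="n1"/>
          </list>
        </lists>
      </data>

    For backwards compatibility, the exporter could still emit the legacy one-to-many layout by duplicating notes, while the new importer resolves noteRef ids back into a single row per note.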

    Read the article

  • Best data structure for this relationship...

    - by Travis
    I have a question about database 'style'. I need a method of storing user accounts. Some users "own" other user accounts (sub-accounts). However not all user accounts are owned, just some. Is it best to represent this using a table structure like so...

      TABLE accounts (
          ID
          ownerID -> ID
          name
      )

    ...even though there will be some NULL values in the ownerID column for accounts that do not have an owner. Or would it be stylistically preferable to have two tables, like so.

      TABLE accounts (
          ID
          name
      )

      TABLE ownedAccounts (
          accountID -> accounts(ID)
          ownerID -> accounts(ID)
      )

    Thanks for the advice.
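
    A minimal sketch of the first option (types invented for illustration): a nullable self-referencing foreign key, where NULL simply means "not owned":

      CREATE TABLE accounts (
          ID      INT PRIMARY KEY,
          ownerID INT NULL REFERENCES accounts(ID),  -- NULL for accounts with no owner
          name    VARCHAR(100) NOT NULL
      );

      -- all owned accounts together with their owners
      SELECT a.name AS account, o.name AS owner
      FROM accounts a
      JOIN accounts o ON o.ID = a.ownerID;

    The two-table variant avoids the NULLs at the cost of an extra join whenever owner information is needed.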

    Read the article

  • Unique constraint on more than 10 columns

    - by tk
    I have a time-series simulation model which has more than 10 input variables. The number of distinct simulation instances would be more than 1 million, and each simulation instance generates a few output rows every day. To save the simulation results in a relational database, I designed tables like this:

      Table SimulationModel {
          simul_id : integer (primary key),
          input0   : string or numeric,
          input1   : string or numeric,
          ...
      }

      Table SimulationOutput {
          dt       : DateTime (primary key),
          simul_id : integer (primary key),
          output0  : numeric,
          ...
      }

    My question is, is it fine to put a unique constraint on all of the input columns of the SimulationModel table? If it is not a good idea, then what kind of other options do I have to make sure each model is unique?
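
    One alternative that is sometimes used instead of a very wide unique constraint is to store a hash of all the inputs and put the unique constraint on that single column. A rough, MySQL-flavoured sketch (the hash column, delimiter scheme and values are invented for illustration):

      ALTER TABLE SimulationModel
          ADD input_hash CHAR(40),                               -- e.g. SHA1 of all inputs, populated on insert
          ADD UNIQUE KEY uq_simulation_inputs (input_hash);

      -- populate the hash when inserting a model
      INSERT INTO SimulationModel (simul_id, input0, input1, input_hash)
      VALUES (1, 'scenarioA', 42, SHA1(CONCAT_WS('|', 'scenarioA', 42)));

    A plain composite unique constraint over all the input columns also works, provided the combined key stays within the engine's index size limit and the columns are NOT NULL (most engines treat NULLs as distinct in unique indexes).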

    Read the article

  • database modeling for google app engine for multiple revison of entity.

    - by iamgopal
    Hi, in my application (a kind of wiki clone) an article changes frequently, and I need to track all changes that are made to that article (text only). One crude way I have done it is to add a datetime property and create a new entity every time something changes, which wastes a lot of database space (and also unnecessary index space) and also requires re-creating the parent-child and entity relationships. I also have a log which can show changes, but I want something easier, so that jumping from one version to another version is easier. Ideas? Thanks.

    Read the article

  • Should I Split Tables Relevant to X Module Into Different DB? Mysql

    - by Michael Robinson
    I've inherited a rather large and somewhat messy codebase, and have been tasked with making it faster, less noodly and generally better. Currently we use one big database to hold all data for all aspects of the site. As we need to plan for significant growth in the future, I'm considering splitting tables relevant to specific sections of the site into different databases, so if/when one gets too large for one server I can more easily migrate some user data to different mysql servers while retaining overall integrity. I would still need to use joins on some tables across the new databases. Is this a normal thing to do? Would I incur a performance hit because of this?

    Read the article

  • How should I build a simple database package for my python application?

    - by Carson Myers
    I'm building a database library for my application using sqlite3 as the base. I want to structure it like so:

      db/
          __init__.py
          users.py
          blah.py
          etc.py

    So I would do this in Python:

      import db
      db.users.create('username', 'password')

    I'm suffering analysis paralysis (oh no!) about how to handle the database connection. I don't really want to use classes in these modules; it doesn't really seem appropriate to be able to create a bunch of "users" objects that can all manipulate the same database in the same ways -- so inheriting a connection is a no-go. Should I have one global connection to the database that all the modules use, and then put this in each module:

      # users.py
      from db_stuff import connection

    Or should I create a new connection for each module and keep that alive? Or should I create a new connection for every transaction? How are these database connections supposed to be used? The same goes for cursor objects: Do I create a new cursor for each transaction? Create just one for each database connection?

    Read the article

  • How to find foreign-key dependencies pointing to one record in Oracle?

    - by daveslab
    Hi folks, I have a very large Oracle database, with many, many tables and millions of rows. I need to delete one of them, but want to make sure that deleting it will not break any other dependent rows that point to it as a foreign key record. Is there a way to get a list of all the other records, or at least table schemas, that point to this row? I know that I could just try to delete it myself and catch the exception, but I won't be running the script myself and need it to run cleanly the first time through. I have the tools SQL Developer from Oracle, and PL/SQL Developer from AllRoundAutomations at my disposal. Thanks in advance!
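
    A sketch against the Oracle data dictionary (MY_TABLE is a placeholder) that lists every table holding a foreign key pointing at the table in question; each of those referencing tables would then still need to be checked for rows matching the specific key value:

      -- child constraints (type 'R') whose referenced constraint lives on MY_TABLE
      SELECT c.owner, c.table_name, c.constraint_name
      FROM   all_constraints c
      JOIN   all_constraints p
             ON  p.owner = c.r_owner
             AND p.constraint_name = c.r_constraint_name
      WHERE  c.constraint_type = 'R'
      AND    p.table_name = 'MY_TABLE';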

    Read the article

  • Options for storing large text blobs in/with an SQL database?

    - by kdt
    Hi, I have some volumes of text (log files) which may be very large (up to gigabytes). They are associated with entities which I'm storing in a database, and I'm trying to figure out whether I should store them within the SQL database or in external files. It seems like in-database storage may be limited to 4GB for LONGTEXT fields in MySQL, and presumably other DBs have similar limits. Also, storing in the database presumably precludes any kind of seeking when viewing this data -- I'd have to load the full length of the data to render any part of it, right? So it seems like I'm leaning towards storing this data out of the DB: are my misgivings about storing large blobs in the database valid, and if I'm going to store them out of the database, are there any frameworks/libraries to help with that? (I'm working in Python but am interested in technologies in other languages too.)
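
    A minimal sketch of the out-of-database option the poster is leaning towards (table and column names invented): the database keeps only a pointer plus metadata, and the application reads or seeks into the file itself with ordinary file I/O:

      -- metadata row per log; the text itself lives on disk or in object storage
      CREATE TABLE log_files (
          id         INT PRIMARY KEY AUTO_INCREMENT,
          entity_id  INT NOT NULL,            -- the entity this log belongs to
          file_path  VARCHAR(500) NOT NULL,   -- filesystem path or object-store key
          byte_size  BIGINT NOT NULL,
          created_at DATETIME NOT NULL
      );

    This keeps the rows small and makes range reads (tailing, paging through a log) a plain seek rather than a multi-gigabyte fetch.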

    Read the article

  • Need a field / flag / status number for multiple use?

    - by Jules
    I want to create a field in my database which will be easy to query. I think if I give a bit of background this will make more sense. My table has listings shown on my website. I run a program which looks at the listings and decides whether to hide them from being shown on the site. I also hide listings manually for various reasons. I want to store these reasons in a field, so more than one reason can be recorded for hiding a listing. So I need some form of logic to determine which reasons have been used. Can anyone offer me any guidance on what will be future-proof (i.e. can accommodate new reasons) and what will be quick and easy to query on?
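
    One future-proof shape (a sketch with invented table names) is a reason lookup plus a junction table, so any number of reasons can apply to one listing and adding a new reason is just a new lookup row rather than a schema or code change:

      CREATE TABLE hide_reasons (
          reason_id   INT PRIMARY KEY,
          description VARCHAR(100) NOT NULL   -- e.g. 'failed automated check', 'hidden manually'
      );

      CREATE TABLE listing_hide_reasons (
          listing_id INT NOT NULL,
          reason_id  INT NOT NULL,
          PRIMARY KEY (listing_id, reason_id)  -- each reason recorded at most once per listing
      );

      -- all listings currently hidden for a given reason
      SELECT listing_id FROM listing_hide_reasons WHERE reason_id = 3;

    A single bit-flag column is more compact, but every new reason means touching code, and bitwise filters are harder to index and query than a plain join.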

    Read the article

  • how to design a db like Facebook where users can update their status and of the fb page as admin

    - by Harsha M V
    I am designing a database where users can post their own status messages, and they can also create pages/groups (like a Facebook fan page) and post status updates as the admin of the page rather than as a user.

      user(id, name, ...)
      group(id, name, ...)
      group_admin(group_id, user_id)

    This is my setup. Is this the way to do it? How do I post under the group as an admin? Will I need to check for every user whether he is the admin or not?
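
    A hedged sketch of one way the posts could hang off this schema (the post table and its columns are invented): a post always records the authoring user, and optionally the group it was posted as, with the admin check done against group_admin before accepting a group post:

      CREATE TABLE post (
          id         INT PRIMARY KEY AUTO_INCREMENT,
          author_id  INT NOT NULL,       -- the user who wrote it
          group_id   INT NULL,           -- NULL = personal status; otherwise posted as this group
          body       TEXT NOT NULL,
          created_at DATETIME NOT NULL
      );

      -- before inserting a post with group_id set, verify the author administers that group
      SELECT COUNT(*) FROM group_admin WHERE group_id = 42 AND user_id = 7;

    So the admin check is a single indexed lookup per group post, not a check over every user.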

    Read the article

  • Three customer addresses in one table or in separate tables?

    - by DR
    In my application I have a Customer class and an Address class. The Customer class has three instances of the Address class: customerAddress, deliveryAddress, invoiceAddress. What's the best way to reflect this structure in a database? The straightforward way would be a customer table and a separate address table. A more denormalized way would be just a customer table with columns for every address (example for "street": customer_street, delivery_street, invoice_street). What are your experiences with that? Are there any advantages and disadvantages of these approaches?
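
    A sketch of the separate-table variant that mirrors the class structure (column names invented): one address row per role, referenced three times from the customer row:

      CREATE TABLE address (
          id     INT PRIMARY KEY,
          street VARCHAR(200),
          city   VARCHAR(100),
          zip    VARCHAR(20)
      );

      CREATE TABLE customer (
          id                  INT PRIMARY KEY,
          name                VARCHAR(200) NOT NULL,
          customer_address_id INT REFERENCES address(id),
          delivery_address_id INT REFERENCES address(id),
          invoice_address_id  INT REFERENCES address(id)
      );

    The flattened single-table variant saves a join but triples the address columns and makes adding a fourth address type a schema change across the board.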

    Read the article

  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
    Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts.

    There are two common approaches to managing upgrade scripts. The first is to maintain a set of scripts as-you-go-along. Many SQL developers I've encountered will store these in a folder prefixed numerically to ensure they are ordered as they are intended to be run. Occasionally there is an accompanying document or a batch file that ensures that the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and to make a change directly to the development database, rather than to save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment.

    The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA is tasked to work out what changes have been made, and to hand-craft the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed.

    Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one? All the above are examples of situations where developer intent is required to supplement the script generation engine.

    SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless. SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the guise of migration scripts.

    In future posts I will describe the necessary command line syntax to leverage this feature as part of an automated build process such as continuous integration.

    Read the article

  • Best pattern for storing (product) attributes in SQL Server

    - by EdH
    We are starting a new project where we need to store product and many product attributes in a database. The technology stack is MS SQL 2008 and Entity Framework 4.0 / LINQ for data access. The products (and Products Table) are pretty straightforward (a SKU, manufacturer, price, etc..). However there are also many attributes to store with each product (think industrial widgets). These may range from color to certification(s) to pipe size. Every product may have different attributes, and some may have multiples of the same attribute (Ex: Certifications). The current proposal is that we will basically have a name/value pair table with a FK back to the product ID in each row. An example of the attributes Table may look like this:

      ProdID  AttributeName  AttributeValue
      123     Color          Blue
      123     FittingSize    1.25
      123     Certification  AS1111
      123     Certification  EE2212
      123     Certification  FM.3
      456     Pipe           11
      678     Color          Red
      999     Certification  AE1111
      ...

    Note: Attribute name would likely come from a lookup table or enum. So the main question here is: Is this the best pattern for doing something like this? How will the performance be? Queries will be based on a JOIN of the product and attributes table, and generally need many WHEREs to filter on specific attributes - the most common search will be to find a product based on a set of known/desired attributes. If anyone has any suggestions or a better pattern for this type of data, please let me know. Thanks! -Ed
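
    With this name/value design, the "find a product by a set of known attributes" query typically becomes one join per required attribute. A sketch (the Products/ProductAttributes table names and the SKU column are assumptions):

      -- products that are Blue AND carry certification AS1111
      SELECT p.ProdID, p.SKU
      FROM Products p
      JOIN ProductAttributes a1
          ON a1.ProdID = p.ProdID
         AND a1.AttributeName  = 'Color'
         AND a1.AttributeValue = 'Blue'
      JOIN ProductAttributes a2
          ON a2.ProdID = p.ProdID
         AND a2.AttributeName  = 'Certification'
         AND a2.AttributeValue = 'AS1111';

    Performance of this pattern hinges on a composite index such as (AttributeName, AttributeValue, ProdID) on the attributes table; without it every extra filter becomes another scan.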

    Read the article
