Search Results

Search found 30270 results on 1211 pages for 'database diagramming'.


  • Database Design Composite Keys

    - by guazz
    I am going to use a contrived example: one headquarter has one or many contacts, and a contact can only belong to one headquarter.

        TableName = Headquarter
          Column 0 = Id : Guid [PK]
          Column 1 = Name : nvarchar(100)
          Column 2 = IsAnotherAttribute : bool

        TableName = ContactInformation
          Column 0 = Id : Guid [PK]
          Column 1 = HeadquarterId : Guid [FK]
          Column 2 = AddressLine1
          Column 3 = AddressLine2
          Column 4 = AddressLine3

    I would like some help setting the table primary keys and foreign keys here. How does the above look? Should I use a composite key for ContactInformation on [Column 0 and Column 1]? Is it OK to use a surrogate key all of the time?
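
    A minimal sketch of one common layout (assuming SQL Server, since the question uses Guid/nvarchar types), keeping the surrogate primary key on ContactInformation and adding a plain foreign key; table and column names are taken from the question:

        CREATE TABLE Headquarter (
            Id                  UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
            Name                NVARCHAR(100)    NOT NULL,
            IsAnotherAttribute  BIT              NOT NULL
        );

        CREATE TABLE ContactInformation (
            Id             UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,   -- surrogate key
            HeadquarterId  UNIQUEIDENTIFIER NOT NULL
                REFERENCES Headquarter (Id),                        -- one headquarter, many contacts
            AddressLine1   NVARCHAR(200),
            AddressLine2   NVARCHAR(200),
            AddressLine3   NVARCHAR(200)
        );

    A composite key of (Id, HeadquarterId) buys little here, because Id alone is already unique; the foreign key plus an index on HeadquarterId covers the one-to-many lookup.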

    Read the article

  • Bus Timetable database design

    - by paddydub
    Hi, I'm trying to design a DB to store the timetable for 300 different bus routes. Each route has a different number of stops and different times for Monday-Friday, Saturday and Sunday. I've represented the bus departure times for each route as follows. I'm not sure if I should have null values in the table; does this look OK?

        route, Num, Day,     t1,   t2,    t3,    t4,    t5,    t6,    t7,    t8,    t9,    t10
        117,   1,   Monday,  9:00, 9:30,  10:50, 12:00, 14:00, 18:00, 19:00, null,  null,  null
        117,   2,   Monday,  9:03, 9:33,  10:53, 12:03, 14:03, 18:03, 19:03, null,  null,  null
        117,   3,   Monday,  9:06, 9:36,  10:56, 12:06, 14:06, 18:06, 19:06, null,  null,  null
        117,   4,   Monday,  9:09, 9:39,  10:59, 12:09, 14:09, 18:09, 19:09, null,  null,  null
        ...
        117,   20,  Monday,  9:39, 10:09, 11:39, 12:39, 14:39, 18:39, 19:39, null,  null,  null
        119,   1,   Monday,  9:00, 9:30,  10:50, 12:00, 14:00, 18:00, 19:00, 20:00, 21:00, 22:00
        119,   2,   Monday,  9:03, 9:33,  10:53, 12:03, 14:03, 18:03, 19:03, 20:03, 21:03, 22:03
        119,   3,   Monday,  9:06, 9:36,  10:56, 12:06, 14:06, 18:06, 19:06, 20:06, 21:06, 22:06
        119,   4,   Monday,  9:09, 9:39,  10:59, 12:09, 14:09, 18:09, 19:09, 20:09, 21:09, 22:09
        ...
        119,   37,  Monday,  9:49, 9:59,  11:59, 12:59, 14:59, 18:59, 19:59, 20:59, 21:59, 22:59
        139,   1,   Sunday,  9:00, 9:30,  20:00, 21:00, 22:00, null,  null,  null,  null,  null
        139,   2,   Sunday,  9:03, 9:33,  20:03, 21:03, 22:03, null,  null,  null,  null,  null
        139,   3,   Sunday,  9:06, 9:36,  20:06, 21:06, 22:06, null,  null,  null,  null,  null
        139,   4,   Sunday,  9:09, 9:39,  20:09, 21:09, 22:09, null,  null,  null,  null,  null
        ...
        139,   20,  Sunday,  9:49, 9:59,  20:59, 21:59, 22:59, null,  null,  null,  null,  null
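
    For comparison, a minimal sketch of a normalized layout that avoids the null columns by storing one row per departure (table and column names here are illustrative assumptions, not from the question):

        CREATE TABLE departure (
            route_no       INT         NOT NULL,   -- e.g. 117
            stop_num       INT         NOT NULL,   -- position of the stop on the route
            day_type       VARCHAR(10) NOT NULL,   -- 'Weekday', 'Saturday' or 'Sunday'
            departure_time TIME        NOT NULL,
            PRIMARY KEY (route_no, stop_num, day_type, departure_time)
        );

        -- Route 117, stop 1, weekday departures become plain rows:
        INSERT INTO departure (route_no, stop_num, day_type, departure_time)
        VALUES (117, 1, 'Weekday', '09:00'),
               (117, 1, 'Weekday', '09:30'),
               (117, 1, 'Weekday', '10:50');

    Routes with more or fewer departures then simply have more or fewer rows, with no trailing nulls.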

    Read the article

  • String Field Sizes for unicode database fields using different data access components

    - by Serg
    mjustin, in his question 1 and question 2, says that the TWideStringField.Size property for UTF8 fields in Delphi 2009 dbExpress is 4 times larger than the logical field size (the maximum number of characters in the field). I am inclined to consider this a dbExpress bug. This is what the Delphi 2009 Help says: "The interpretation of Size depends on the data type. The meaning of Size for data types that use it is given in the following table. For all other data types, Size is not used and its value is always 0. ftString - Size is the maximum number of characters in the string." I am using FIBPlus 6.9.9 and it follows the above documentation: the string field size is the maximum number of characters, not bytes. So the question also implies a further question: are dbExpress drivers in Delphi 2009 unusable for Unicode databases?

    Read the article

  • Close fails on database connections (managed connection cleanup fails) in WebSphere 7 but not in WebSphere 6.1

    - by mete
    I have a simple method (used in a web application through servlets) that gets a connection from a JNDI name and issues a select statement (get connection, issue select, return result, close the connection, etc., in finally). Due to other methods in the application, the connection is set to autocommit=false. This method works normally in WebSphere 6.1 as well as in GlassFish and WebLogic. However, in WebSphere 7 it receives a "cleanup failed" error when I close the connection because, it says, the connection is still in a transaction. Because I was not updating anything, I did not commit or rollback the connection in this method (which may be wrong). If I add a commit before closing the connection, it works. My question is: why does it work in WebSphere 6.1 (and other containers) and not in WebSphere 7? What can be the cause of this difference?

    Read the article

  • Does a version control database storage engine exist?

    - by Zak
    I was just wondering if a storage engine type exists that allows you to do version control on row-level contents. For instance, if I have a simple table with ID, name, value, and ID is the PK, I could see that row 354 started as (354, "zak", "test") v1, was then updated to be (354, "zak", "this is version 2 of the value") v2, and could see a change history on the row with something like select history(value) where ID = 354. It's kind of an esoteric thing, but it would beat having to keep writing these separate history tables and functions every time a change is made...
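
    For reference, a minimal sketch of the hand-rolled history-table approach the question wants to avoid (MySQL syntax; all table and column names are illustrative assumptions):

        CREATE TABLE item_history (
            id          INT          NOT NULL,
            name        VARCHAR(50),
            value       VARCHAR(255),
            version     INT          NOT NULL,
            changed_at  TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (id, version)
        );

        DELIMITER //
        CREATE TRIGGER item_audit BEFORE UPDATE ON item
        FOR EACH ROW
        BEGIN
            -- Copy the old row into the history table with the next version number.
            INSERT INTO item_history (id, name, value, version)
            SELECT OLD.id, OLD.name, OLD.value, COALESCE(MAX(version), 0) + 1
            FROM item_history
            WHERE id = OLD.id;
        END//
        DELIMITER ;

    Every versioned table needs its own copy of this boilerplate, which is exactly the repetition the question is complaining about.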

    Read the article

  • Why would you want a case sensitive database?

    - by Khorkrak
    What are some reasons for choosing a case-sensitive collation over a case-insensitive one? I can see perhaps a modest performance gain for the DB engine in doing string comparisons. Is that it? If your data is all lowercase or all uppercase then case sensitivity could be reasonable, but it's a disaster if you store mixed-case data and then try to query it. You then have to apply a lower() function to the column so that it matches the corresponding lower-case string literal, which prevents index usage in every DBMS. So I'm wondering why anyone would use such an option.
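
    A minimal sketch of the pattern being described, assuming a hypothetical users table whose name column uses a case-sensitive collation:

        -- With a case-sensitive collation, this misses 'Smith' and 'SMITH':
        SELECT * FROM users WHERE name = 'smith';

        -- So queries end up being written like this, which a plain index
        -- on name generally cannot be used for:
        SELECT * FROM users WHERE LOWER(name) = 'smith';

        -- One common workaround is an index on the expression itself
        -- (an expression / function-based index, supported in e.g. PostgreSQL and Oracle):
        CREATE INDEX users_name_lower_idx ON users (LOWER(name));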

    Read the article

  • What ports does Advantage Database Server need?

    - by asherber
    I have an application which uses ADS and I am attempting to deploy it in a Windows network environment with a rather restrictive firewall. I am having a problem configuring the firewall ports appropriately. ADS lives on \\server, and it's listening on port 1234. When \\client tries to connect to \\server\tables, I get error 6420 (Discovery process failed). When \\client tries to connect to \\server:1234\tables, I get error 6097, bad IP address specified in the connection path. \\server is pingable from \\client, and I can telnet to \\server:1234. If I try to connect from a client machine inside the firewall, either connection path works fine. It seems there must be something else I need to open in the firewall. Any ideas? Thanks, Aaron.

    Read the article

  • mysql partitioning

    - by Yang
    I just want to verify that database partitioning is implemented only at the database level: when we query a partitioned table, we still write our normal query, nothing special, and the optimization is performed automatically when the query is parsed. Is that correct? E.g. we have a table called 'address' with columns 'country_code' and 'city'. So if I want to get all the addresses in New York, US, normally I would do something like this:

        select * from address where country_code = 'US' and city = 'New York'

    If the table is now partitioned by 'country_code', I know that the query will only be executed on the partition which contains country_code = 'US'. My question is: do I need to explicitly specify the partition to query in my SQL statement, or do I still use the previous statement and the DB server will optimize it automatically? Thanks in advance!
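
    A minimal MySQL sketch of the idea, assuming the address table is partitioned by KEY on country_code (the id column and partition count are assumptions); the unchanged query is pruned to the matching partition automatically:

        CREATE TABLE address (
            id           INT          NOT NULL,
            country_code CHAR(2)      NOT NULL,
            city         VARCHAR(100) NOT NULL,
            PRIMARY KEY (id, country_code)   -- the partition column must appear in every unique key
        )
        PARTITION BY KEY (country_code)
        PARTITIONS 8;

        -- No change to the query; MySQL prunes it to the partition holding 'US':
        SELECT * FROM address WHERE country_code = 'US' AND city = 'New York';

        -- EXPLAIN PARTITIONS (MySQL 5.1+) shows which partitions the query actually touches:
        EXPLAIN PARTITIONS
        SELECT * FROM address WHERE country_code = 'US' AND city = 'New York';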

    Read the article

  • Recreation of MySQL DB using "mysql mydb < mydb.sql" is really slow when the table has tens of millions of records

    - by Jian Lin
    It seems that a MySQL database that has a table with tens of millions of records will get a big INSERT INTO statement when

        mysqldump some_db > some_db.sql

    is done to back up the database. (Is it one insert statement that handles all the records?) So when reconstructing the DB using

        mysql some_db < some_db.sql

    the CPU is hardly busy (about 1.8% usage by the mysql process... I don't see a mysqld either?) and the hard disk doesn't seem to be too busy either... Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't need to parse such long lines when restoring the DB?
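
    One commonly suggested sketch for speeding up the reload, run from the mysql client session that sources the dump (these are standard MySQL session variables; whether it is safe to relax the checks depends on the data):

        -- Skip per-statement commits and per-row constraint checks while the dump loads.
        SET autocommit = 0;
        SET unique_checks = 0;
        SET foreign_key_checks = 0;

        SOURCE some_db.sql

        COMMIT;
        SET unique_checks = 1;
        SET foreign_key_checks = 1;
        SET autocommit = 1;

    As for splitting the INSERTs: mysqldump's --skip-extended-insert option does write one INSERT per row, but that usually makes the reload slower rather than faster, because the server then has to parse many more statements.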

    Read the article

  • Tag, comment, rating, etc. database design

    - by Efe Kaptan
    I want to implement modules such as comments, ratings, tags, etc. for my entities. What I thought was:

        comments_table   - comment_id, comment_text
        entity1          - entity1_id, entity1_text
        entity2          - entity2_id, entity2_text
        entity1_comments - entity1_id, comment_id
        entity2_comments - entity2_id, comment_id
        ...

    Is this approach correct?
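
    A minimal DDL sketch of the layout described above for one entity type (column types are illustrative assumptions; names come from the question):

        CREATE TABLE comments_table (
            comment_id   INT PRIMARY KEY,
            comment_text TEXT NOT NULL
        );

        CREATE TABLE entity1 (
            entity1_id   INT PRIMARY KEY,
            entity1_text TEXT NOT NULL
        );

        -- Junction table: one row per (entity, comment) pair.
        CREATE TABLE entity1_comments (
            entity1_id INT NOT NULL REFERENCES entity1 (entity1_id),
            comment_id INT NOT NULL REFERENCES comments_table (comment_id),
            PRIMARY KEY (entity1_id, comment_id)
        );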

    Read the article

  • Facebook database design?

    - by Marin
    I have always wondered how Facebook designed the friend <-> user relation. I figure the user table is something like this:

        user_email  PK
        user_id     PK
        password

    I figure there is a table with the user's data (sex, age, etc.), connected via the user email, I would assume. How does it connect all the friends to this user? Something like this?

        user_id
        friend_id_1
        friend_id_2
        friend_id_3
        friend_id_N

    Probably not, because the number of users is unknown and will keep expanding.
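
    A minimal sketch of how a friend relation is usually modelled instead of the repeating friend_id_N columns above: a self-referencing junction table (names are illustrative assumptions, not Facebook's actual schema):

        CREATE TABLE users (
            user_id  INT PRIMARY KEY,
            email    VARCHAR(255) NOT NULL UNIQUE,
            password VARCHAR(255) NOT NULL
        );

        -- One row per friendship edge; N friends means N rows, not N columns.
        CREATE TABLE friendships (
            user_id   INT NOT NULL REFERENCES users (user_id),
            friend_id INT NOT NULL REFERENCES users (user_id),
            PRIMARY KEY (user_id, friend_id)
        );

        -- All friends of user 42:
        SELECT friend_id FROM friendships WHERE user_id = 42;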

    Read the article

  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, Code}. Thousands of salespersons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and salespersons are free to choose from those combinations to estimate the sales price. The problem is that daily sales require a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes). If I need to show daily, weekly, monthly, quarterly and yearly sales reports, there are various types of SQL queries I shall need. This is my initial design:

        Product {ID, Code, Name, IsActive}
        ProductXYZPriceHistory {ID, ProductID, Date, EffectDate, Price, IsCurrent}
        SalesPerson {ID, Code, Name, JoinDate, and so on..., IsActive}
        SalesPersonSalesAreaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent}
        Depot {ID, Code, Name, IsActive}
        Outlet {ID, Code, Name, AreaID, IsActive}
        AreaHierarchy {ID, Code, Name, ParentID, AreaLevel, IsActive}
        DailySales {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}

    Now, apart from indexing, how can I normalize my DailySales table to have a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of the above information. I don't need detailed design advice; I just need advice regarding the DailySales table. Is there any way to break this particular table up to achieve granularity?
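
    A minimal sketch of a DailySales fact table along the lines already described, keeping the question's column names where possible (the Quantity column and the data types are illustrative assumptions):

        CREATE TABLE DailySales (
            ID            BIGINT        NOT NULL PRIMARY KEY,
            SalesDate     DATE          NOT NULL,
            ProductID     INT           NOT NULL,   -- FK to Product
            SalesPersonID INT           NOT NULL,   -- FK to SalesPerson
            OutletID      INT           NOT NULL,   -- FK to Outlet; area/zone/market come via Outlet and AreaHierarchy
            PriceID       INT           NOT NULL,   -- FK to ProductXYZPriceHistory (which price type was applied)
            Quantity      INT           NOT NULL,
            SalesPrice    DECIMAL(12,2) NOT NULL,
            Discount      DECIMAL(12,2) NOT NULL DEFAULT 0
        );

    Keeping only keys and measures in this table, and leaving names and codes in the dimension tables, is what keeps each daily row narrow; weekly, monthly, quarterly and yearly reports then just group by ranges of SalesDate.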

    Read the article

  • Web interface for Firebird database

    - by Serg
    I currently have a Firebird server on Windows and Windows desktop client applications for Firebird databases. I want to make a web interface for the existing databases (using Apache on Windows). What free tools (server-side languages, libraries) are currently available for this purpose?

    Read the article

  • Database: storing data from user registration form

    - by teggy
    Let's say I have a user registration form. In this form, I have the option for the user to upload a photo. I have a User table and a Photo table. My User table has a "PathToPhoto" column. My question is: how do I fill in the "PathToPhoto" column if the photo is uploaded and inserted into the Photo table before the user is created? Another way to phrase my question: how do I get the newly uploaded photo associated with the user that may or may not be created next? I'm using Python and PostgreSQL.
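
    One possible PostgreSQL sketch: insert the photo first, capture its generated key, and copy the path into the user row once the user exists (table and column names beyond those in the question, and the literal values, are assumptions):

        BEGIN;

        -- Store the upload first and get its generated key back.
        INSERT INTO photo (path)
        VALUES ('/uploads/abc123.jpg')
        RETURNING id;                       -- the application keeps this id (say, 17)

        -- Once the registration form is submitted, create the user and copy the path across.
        INSERT INTO users (name, path_to_photo)
        SELECT 'alice', path
        FROM photo
        WHERE id = 17;

        COMMIT;

    If the upload and the registration happen in separate requests, a single transaction cannot span them; a common alternative is to keep the returned photo id in the session and attach it to the user with an UPDATE once the user row exists.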

    Read the article

  • Database Modeling - Either/Or in Many-to-Many

    - by EkoostikMartin
    I have an either/or type of situation in a many-to-many relationship I'm trying to model. So I have these tables:

        Message
        ----
        *MessageID
        MessageText

        Employee
        ----
        *EmployeeID
        EmployeeName

        Team
        ----
        *TeamID
        TeamName

        MessageTarget
        ----
        MessageID
        EmployeeID (nullable)
        TeamID (nullable)

    So a Message can have either a list of Employees or a list of Teams as a MessageTarget. Is the MessageTarget table I have above the best way to implement this relationship? What constraints can I effectively place on MessageTarget? How should I create a primary key on the MessageTarget table?
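
    A minimal sketch of one common constraint for this shape of table: a CHECK that exactly one of the two nullable columns is set (SQL Server-style syntax; the constraint name is an assumption):

        CREATE TABLE MessageTarget (
            MessageID  INT NOT NULL REFERENCES Message (MessageID),
            EmployeeID INT NULL     REFERENCES Employee (EmployeeID),
            TeamID     INT NULL     REFERENCES Team (TeamID),

            -- Exactly one of EmployeeID / TeamID must be non-null.
            CONSTRAINT CK_MessageTarget_OneTarget
                CHECK ((EmployeeID IS NULL AND TeamID IS NOT NULL)
                    OR (EmployeeID IS NOT NULL AND TeamID IS NULL))
        );

    A primary key cannot include the nullable columns, so the usual options are a surrogate MessageTargetID, or splitting the table into two plain junction tables (MessageEmployee and MessageTeam), which removes the need for the CHECK entirely.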

    Read the article

  • add Constraint on database with trigger

    - by Am1rr3zA
    Hi, I have 3 tables (Student, Course, student_course_choose, the last of which has a grade field). I defined a view over these 3 tables that gets me the average grade of each student. I want to have a constraint (with a trigger) on this view (or on the table that needs it) to limit each student's average to between 13 and 18. I read somewhere that I must use a FOR EACH STATEMENT trigger (instead of FOR EACH ROW), because when I decrease one grade of a particular student and his/her average drops below 13, I should not get an error (because I later increase the grade of another of his/her courses). How should I write this trigger? (I want to implement aprh for testing the trigger.) Note: I can write it in SQL Server, Oracle or MySQL; it makes no difference to me.
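
    A minimal SQL Server sketch of a statement-level check (T-SQL triggers fire once per statement by default; the student_id and grade column names are assumptions based on the question):

        CREATE TRIGGER trg_student_avg_check
        ON student_course_choose
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            -- Check only the students touched by this statement.
            IF EXISTS (
                SELECT scc.student_id
                FROM student_course_choose AS scc
                WHERE scc.student_id IN (SELECT student_id FROM inserted
                                         UNION
                                         SELECT student_id FROM deleted)
                GROUP BY scc.student_id
                HAVING AVG(CAST(scc.grade AS DECIMAL(6,2))) NOT BETWEEN 13 AND 18
            )
            BEGIN
                RAISERROR('Average grade per student must stay between 13 and 18.', 16, 1);
                ROLLBACK TRANSACTION;
            END
        END;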

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The Background: My group has 4 SQL Server databases: Production, UAT, Test and Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs), I make a request of my manager, who promotes to Test. After testing, she submits a request to an Admin who promotes to UAT. After successful user testing, the same Admin promotes to Production.

    The Problem: The entire process is awkward for a few reasons. Each person must manually track their changes: if I update, add or remove any objects, I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain and it's a waste of the testers' time anyway. Lots of changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type of a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.

    The Question: People have been doing this kind of work for decades, so I imagine there has got to be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structure differs, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.
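
    Not a replacement for a dedicated schema-compare tool, but a minimal sketch of the diff idea in plain SQL, assuming both databases sit on the same SQL Server instance (the database names are hypothetical):

        -- Columns that exist in Dev but not in Test, or whose data type differs.
        SELECT d.TABLE_NAME, d.COLUMN_NAME, d.DATA_TYPE AS dev_type, t.DATA_TYPE AS test_type
        FROM DevDB.INFORMATION_SCHEMA.COLUMNS AS d
        LEFT JOIN TestDB.INFORMATION_SCHEMA.COLUMNS AS t
               ON  t.TABLE_NAME  = d.TABLE_NAME
               AND t.COLUMN_NAME = d.COLUMN_NAME
        WHERE t.COLUMN_NAME IS NULL
           OR t.DATA_TYPE <> d.DATA_TYPE;

    A query like this only surfaces structural differences; generating the change script from them is what dedicated schema-compare tooling adds on top.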

    Read the article

  • I can't create a view in oracle database using sqlplus (insufficient privileges)

    - by Nubkadiya
    I'm running this SQL:

        CREATE VIEW showMembersInfo(
            MemberID, Fname, Lname, Address, DOB, Telephone, NIC, Email,
            WorkplaceID, WorkName, WorkAddress, WorkTelephone,
            StartingDate, ExpiryDate, Amount,
            WitnessID, WitName, WitAddress, WitNIC, WitEmail, WitTelephone)
        AS SELECT mem.MemberID, mem.FirstName, mem.LastName, mem.Address, mem.DOB, mem.Telephone, mem.NIC, mem.Email,
                  wrk.WorkPlaceID, wrk.Name, wrk.Address, wrk.Telephone,
                  anl.StartingDate, anl.ExpiryDate, anl.Amount,
                  wit.WitnessID, wit.Name, wit.Address, wit.NIC, wit.Email, wit.Telephone
           FROM Member mem, WorkPlace wrk, AnnualFees anl, Witness wit
           WHERE mem.MemberID = anl.MemberID
             AND mem.WorkPlaceID = wrk.WorkPlaceID
             AND mem.WitnessID = wit.WitnessID;

    When I try to create the view I get this error:

        ERROR at line 1:
        ORA-01031: insufficient privileges

    Why is that? I'm logged in to SQL*Plus as sysman.
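
    For reference, the system privilege involved is CREATE VIEW; a minimal Oracle sketch of granting it (run as a DBA; the user name is hypothetical):

        -- As a user with DBA rights:
        GRANT CREATE VIEW TO app_user;

        -- If the base tables live in another schema, the view owner also needs
        -- SELECT granted directly on them (grants received via a role do not
        -- count when creating a view):
        GRANT SELECT ON Member     TO app_user;
        GRANT SELECT ON WorkPlace  TO app_user;
        GRANT SELECT ON AnnualFees TO app_user;
        GRANT SELECT ON Witness    TO app_user;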

    Read the article

  • HTML5 Database Transactions

    - by jiewmeng
    I am wondering about the example in W3C Offline Web Apps. The example

        function renderNotes() {
          db.transaction(function(tx) {
            tx.executeSql('CREATE TABLE IF NOT EXISTS Notes(title TEXT, body TEXT)', []);
            tx.executeSql('SELECT * FROM Notes', [], function(tx, rs) {
              for (var i = 0; i < rs.rows.length; i++) {
                renderNote(rs.rows[i]);
              }
            });
          });
        }

    has the CREATE TABLE before the 'main' executeSql(). Would it be better if I did something like this?

        $(function() {
          // create the table first
          db.transaction(function(tx) {
            tx.executeSql('CREATE TABLE IF NOT EXISTS Notes(title TEXT, body TEXT)', []);
          });

          // when I select/modify data, I just do the actual action
          db.transaction(function(tx) {
            tx.executeSql('SELECT * FROM Notes', [], function(tx, rs) { ... });
          });
          db.transaction(function(tx) {
            tx.executeSql('INSERT ...', [], function(tx, rs) { ... });
          });
        });

    I was thinking I don't need to keep repeating the CREATE TABLE IF NOT EXISTS, right?

    Read the article

  • Oracle Database introduction and literature

    - by Marco Nätlitz
    Hi folks, I got a new assignment covering Oracle databases. My problem is that I am completely new to the Oracle system and have never worked with it before. I need to develop a concept covering the installation and configuration of the server. Afterwards I need to migrate the old server to the new one while ensuring data consistency. I just wanted to ask if you have some useful links for an introduction and, of course, good literature/books on this topic? Thanks and cheers from Cologne, Marco

    Read the article
