Search Results

Search found 5527 results on 222 pages for 'unique constraint'.

Page 73/222 | < Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >

  • noSQL/SQL/RoR: Trying to build scalable ratings table for the game

    - by alexeypro
    I am trying to solve a complex problem (as it looks to me). I have the following entities: PLAYER (a few of them, with names like "John", "Peter", etc.). Each has a unique ID; for simplicity, let's assume it's their name. GAME (a few of them, say named "Hide and Seek", "Jump and Run", etc.). Same here - each has a unique ID; for simplicity, let it be its name for now. SCORE (it's numeric). So, how it works: each PLAYER can play in multiple GAMES and gets some SCORE in every GAME. I need to build rating tables -- and not just one! Table #1: most played GAMES. Table #2: best PLAYERS across all games (say, by total SCORE over every GAME). Table #3: best PLAYERS per GAME (by SCORE in that particular GAME). I could build something straightforward right away, but that will not work. I will have more than 10,000 players and 15 games, and the number of games will certainly grow. A score can be as low as 0 and as high as 1,000,000 (not sure if higher is possible at this moment) for a player in a game. So I really need a design that scales. Any suggestions? I am planning to do it with SQL, but maybe just use it for key-value storage; anything -- any ideas are welcome. Thank you!
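
    A rough sketch of one straightforward starting point (table and column names below are illustrative, not taken from the question): record every play in a single table and read the three rating tables off it with aggregate queries, caching or denormalizing the results later if they become too slow at 10,000+ players.

        CREATE TABLE plays (
            player_id INT UNSIGNED NOT NULL,
            game_id   INT UNSIGNED NOT NULL,
            score     INT UNSIGNED NOT NULL,
            KEY idx_game_score  (game_id, score),
            KEY idx_player_game (player_id, game_id)
        );

        -- Table #1: most played games
        SELECT game_id, COUNT(*) AS times_played
        FROM plays GROUP BY game_id ORDER BY times_played DESC;

        -- Table #2: best players across all games (total score)
        SELECT player_id, SUM(score) AS total_score
        FROM plays GROUP BY player_id ORDER BY total_score DESC;

        -- Table #3: best players in one particular game
        SELECT player_id, SUM(score) AS total_score
        FROM plays WHERE game_id = 1
        GROUP BY player_id ORDER BY total_score DESC;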

    Read the article

  • MySQL stuck on "using filesort" when doing an "order by"

    - by noko
    I can't seem to get my query to stop using filesort. This is my query: SELECT s.`pilot`, p.`name`, s.`sector`, s.`hull` FROM `pilots` p LEFT JOIN `ships` s ON ( (s.`game` = p.`game`) AND (s.`pilot` = p.`id`) ) WHERE p.`game` = 1 AND p.`id` <> 2 AND s.`sector` = 43 AND s.`hull` > 0 ORDER BY p.`last_move` DESC Table structures: CREATE TABLE IF NOT EXISTS `pilots` ( `id` mediumint(5) unsigned NOT NULL AUTO_INCREMENT, `game` tinyint(3) unsigned NOT NULL DEFAULT '0', `last_move` int(10) NOT NULL DEFAULT '0', UNIQUE KEY `id` (`id`), KEY `last_move` (`last_move`), KEY `game_id_lastmove` (`game`,`id`,`last_move`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ; CREATE TABLE IF NOT EXISTS `ships` ( `id` mediumint(5) unsigned NOT NULL AUTO_INCREMENT, `game` tinyint(3) unsigned NOT NULL DEFAULT '0', `pilot` mediumint(5) unsigned NOT NULL DEFAULT '0', `sector` smallint(5) unsigned NOT NULL DEFAULT '0', `hull` smallint(4) unsigned NOT NULL DEFAULT '50', UNIQUE KEY `id` (`id`), KEY `game` (`game`), KEY `pilot` (`pilot`), KEY `sector` (`sector`), KEY `hull` (`hull`), KEY `game_2` (`game`,`pilot`,`sector`,`hull`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ; The explain: id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE p ref id,game_id_lastmove game_id_lastmove 1 const 7 Using where; Using filesort 1 SIMPLE s ref game,pilot,sector... game_2 6 const,fightclub_alpha.p.id,const 1 Using where; Using index edit: I cut some of the unnecessary pieces out of my queries/table structure. Anybody have any ideas?
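
    One commonly suggested fix, sketched below (it is an assumption that it helps here, since the outcome depends on the join order the optimizer picks), is to give pilots an index that matches both the constant filter and the sort column, so MySQL can read rows already in last_move order instead of sorting them afterwards:

        -- A sketch only: an index on (game, last_move) can satisfy
        -- WHERE game = 1 ... ORDER BY last_move DESC without a filesort.
        ALTER TABLE `pilots` ADD KEY `game_lastmove` (`game`, `last_move`);

        -- Re-check the plan afterwards:
        EXPLAIN SELECT s.`pilot`, p.`name`, s.`sector`, s.`hull`
        FROM `pilots` p
        LEFT JOIN `ships` s ON s.`game` = p.`game` AND s.`pilot` = p.`id`
        WHERE p.`game` = 1 AND p.`id` <> 2 AND s.`sector` = 43 AND s.`hull` > 0
        ORDER BY p.`last_move` DESC;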

    Read the article

  • SQLiteDataAdapter Fill exception C# ADO.NET

    - by Lirik
    I'm trying to use the OleDb CSV parser to load some data from a CSV file and insert it into a SQLite database, but I get an exception from the OleDbDataAdapter.Fill method and it's frustrating: An unhandled exception of type 'System.Data.ConstraintException' occurred in System.Data.dll Additional information: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints. Here is the source code: public void InsertData(String csvFileName, String tableName) { String dir = Path.GetDirectoryName(csvFileName); String name = Path.GetFileName(csvFileName); using (OleDbConnection conn = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dir + @";Extended Properties=""Text;HDR=No;FMT=Delimited""")) { conn.Open(); using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * FROM " + name, conn)) { QuoteDataSet ds = new QuoteDataSet(); adapter.Fill(ds, tableName); // <-- Exception here InsertData(ds, tableName); // <-- Inserts the data into my SQLite db } } } class Program { static void Main(string[] args) { SQLiteDatabase target = new SQLiteDatabase(); string csvFileName = "D:\\Innovations\\Finch\\dev\\DataFeed\\YahooTagsInfo.csv"; string tableName = "Tags"; target.InsertData(csvFileName, tableName); Console.ReadKey(); } } The "YahooTagsInfo.csv" file looks like this: tagId,tagName,description,colName,dataType,realTime 1,s,Symbol,symbol,VARCHAR,FALSE 2,c8,After Hours Change,afterhours,DOUBLE,TRUE 3,g3,Annualized Gain,annualizedGain,DOUBLE,FALSE 4,a,Ask,ask,DOUBLE,FALSE 5,a5,Ask Size,askSize,DOUBLE,FALSE 6,a2,Average Daily Volume,avgDailyVolume,DOUBLE,FALSE 7,b,Bid,bid,DOUBLE,FALSE 8,b6,Bid Size,bidSize,DOUBLE,FALSE 9,b4,Book Value,bookValue,DOUBLE,FALSE I've tried the following: removing the first line in the CSV file so it isn't mistaken for real data; changing the TRUE/FALSE realTime flag to 1/0; and doing 1 and 2 together (i.e. removing the first line and changing the flag). None of these things helped... One constraint is that the tagId is supposed to be unique. Here is what the table looks like in design view: Can anybody help me figure out what the problem is here?

    Read the article

  • What is this KEY called?

    - by Bharanikumar
    CREATE TABLE `ost_staff` ( `staff_id` int(11) unsigned NOT NULL auto_increment, `group_id` int(10) unsigned NOT NULL default '0', `dept_id` int(10) unsigned NOT NULL default '0', `username` varchar(32) collate latin1_german2_ci NOT NULL default '', `firstname` varchar(32) collate latin1_german2_ci default NULL, `lastname` varchar(32) collate latin1_german2_ci default NULL, `passwd` varchar(128) collate latin1_german2_ci default NULL, `email` varchar(128) collate latin1_german2_ci default NULL, `phone` varchar(24) collate latin1_german2_ci NOT NULL default '', `phone_ext` varchar(6) collate latin1_german2_ci default NULL, `mobile` varchar(24) collate latin1_german2_ci NOT NULL default '', `signature` varchar(255) collate latin1_german2_ci NOT NULL default '', `isactive` tinyint(1) NOT NULL default '1', `isadmin` tinyint(1) NOT NULL default '0', `isvisible` tinyint(1) unsigned NOT NULL default '1', `onvacation` tinyint(1) unsigned NOT NULL default '0', `daylight_saving` tinyint(1) unsigned NOT NULL default '0', `append_signature` tinyint(1) unsigned NOT NULL default '0', `change_passwd` tinyint(1) unsigned NOT NULL default '0', `timezone_offset` float(3,1) NOT NULL default '0.0', `max_page_size` int(11) NOT NULL default '0', `created` datetime NOT NULL default '0000-00-00 00:00:00', `lastlogin` datetime default NULL, `updated` datetime NOT NULL default '0000-00-00 00:00:00', PRIMARY KEY (`staff_id`), UNIQUE KEY `username` (`username`), KEY `dept_id` (`dept_id`), KEY `issuperuser` (`isadmin`), KEY `group_id` (`group_id`,`staff_id`) ) ENGINE=MyISAM AUTO_INCREMENT=35 DEFAULT CHARSET=latin1 COLLATE=latin1_german2_ci; Hi, the above query is from the osTicket open-source project. I know PRIMARY KEY, FOREIGN KEY, and UNIQUE, but I am not sure what this is: KEY `group_id` (`group_id`,`staff_id`). Please tell me this constraint's name.
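
    For reference, KEY in MySQL DDL is simply a synonym for INDEX: a plain, non-unique secondary index used to speed up lookups, not a constraint at all. The two statements below create the same thing (shown as interchangeable alternatives, not meant to be run together):

        ALTER TABLE ost_staff ADD KEY   group_id (group_id, staff_id);
        ALTER TABLE ost_staff ADD INDEX group_id (group_id, staff_id);

        -- (group_id, staff_id) is a composite index: it helps queries that
        -- filter on group_id alone, or on group_id and staff_id together.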

    Read the article

  • Database design for invoices, invoice lines & revisions

    - by FreshCode
    I'm designing the 2nd major iteration of a relational database for a franchise's CRM (with lots of refactoring) and I need help on the best database design practices for storing job invoices and invoice lines with a strong audit trail of any changes made to each invoice. Current schema Invoices Table InvoiceId (int) // Primary key JobId (int) StatusId (tinyint) // Pending, Paid or Deleted UserId (int) // auditing user Reference (nvarchar(256)) // unique natural string key with invoice number Date (datetime) Comments (nvarchar(MAX)) InvoiceLines Table LineId (int) // Primary key InvoiceId (int) // related to Invoices above Quantity (decimal(9,4)) Title (nvarchar(512)) Comment (nvarchar(512)) UnitPrice (smallmoney) Revision schema InvoiceRevisions Table RevisionId (int) // Primary key InvoiceId (int) JobId (int) StatusId (tinyint) // Pending, Paid or Deleted UserId (int) // auditing user Reference (nvarchar(256)) // unique natural string key with invoice number Date (datetime) Total (smallmoney) Schema design considerations 1. Is it sensible to store an invoice's Paid or Pending status? All payments received for an invoice are stored in a Payments table (eg. Cash, Credit Card, Cheque, Bank Deposit). Is it meaningful to store a "Paid" status in the Invoices table if all the income related to a given job's invoices can be inferred from the Payments table? 2. How to keep track of invoice line item revisions? I can track revisions to an invoice by storing status changes along with the invoice total and the auditing user in an invoice revision table (see InvoiceRevisions above), but keeping track of an invoice line revision table feels hard to maintain. Thoughts? 3. Tax How should I incorporate sales tax (or 14% VAT in SA) when storing invoice data?
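
    On question 2, one possible sketch (column names are illustrative, and it assumes line-level history is wanted at all): mirror InvoiceLines with a revision table tied to InvoiceRevisions, written by application code or a trigger whenever a line is added, changed or removed.

        CREATE TABLE InvoiceLineRevisions (
            LineRevisionId int IDENTITY(1,1) PRIMARY KEY,
            RevisionId     int NOT NULL,        -- FK to InvoiceRevisions
            LineId         int NOT NULL,        -- FK to InvoiceLines
            Quantity       decimal(9,4) NOT NULL,
            Title          nvarchar(512) NOT NULL,
            Comment        nvarchar(512) NULL,
            UnitPrice      smallmoney NOT NULL,
            ChangeType     tinyint NOT NULL     -- e.g. 1 = added, 2 = changed, 3 = removed
        );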

    Read the article

  • How to Create Deterministic Guids

    - by desigeek
    In our application we are creating Xml files with an attribute that has a Guid value. This value needed to be consistent between file upgrades. So even if everything else in the file changes, the Guid value for the attribute should remain the same. One obvious solution was to create a static dictionary with the filenames and the Guids to be used for them. Then whenever we generate the file, we look up the dictionary for the filename and use the corresponding Guid. But this is not feasible because we might scale to hundreds of files and didn't want to maintain a big list of Guids. So another approach was to make the Guid the same based on the path of the file. Since our file paths and application directory structure are unique, the Guid should be unique for that path. So each time we run an upgrade, the file gets the same Guid based on its path. I found one cool way to generate such 'Deterministic Guids' (Thanks Elton Stoneman). It basically does this: private Guid GetDeterministicGuid(string input) { //use MD5 hash to get a 16-byte hash of the string: MD5CryptoServiceProvider provider = new MD5CryptoServiceProvider(); byte[] inputBytes = Encoding.Default.GetBytes(input); byte[] hashBytes = provider.ComputeHash(inputBytes); //generate a guid from the hash: Guid hashGuid = new Guid(hashBytes); return hashGuid; } So given a string, the Guid will always be the same. Are there any other approaches or recommended ways of doing this? What are the pros and cons of that method?

    Read the article

  • Generic Dictionary and generating a hashcode for multi-part key

    - by Andrew
    I have an object that has a multi-part key and I am struggling to find a suitable way to override GetHashCode. An example of what the class looks like: public class wibble{ public int keypart1 {get; set;} public int keypart2 {get; set;} public int keypart3 {get; set;} public int keypart4 {get; set;} public int keypart5 {get; set;} public int keypart6 {get; set;} public int keypart7 {get; set;} public float value {get; set;} } Note that in just about every instance of the class, no more than 2 or 3 of the keyparts will have a value greater than 0. Any ideas on how best to generate a unique hashcode in this situation? I have also been playing around with creating a key that is not unique, but spreads the objects evenly between the dictionary's buckets, and then storing objects with matching hashes in a List<>, LinkedList<> or SortedList<>. Any thoughts on this?

    Read the article

  • How to select form input based on label inner HTML?

    - by Shane
    I have multiple forms that are dynamically created with different input names and ids. The only unique thing they will have is the inner HTML of the label. Is it possible to select the input via the label's inner HTML with jQuery? Here is an example of one of my patient date-of-birth blocks; there are many, and all are unique except for the innerHTML. <div class="iphorm-element-spacer iphorm-element-spacer-text iphorm_1_8-element-spacer"> <label for="iphorm_081a9e2e6b9c83d70496906bb4671904150cf4b43c0cb1_8"> Patient DOB <span class="iphorm-required">*</span> </label> <div class="iphorm-input-wrap iphorm-input-wrap-text iphorm_1_8-input-wrap"> <input id="iphorm_081a9e2e6b9c83d70496906bb4671904150cf4b43c0cb1_8" class="iphorm-element-text iphorm_1_8" type="text" value="" name="iphorm_1_8"> </div> <div class="iphorm-errors-wrap iphorm-hidden"> </div> This is in a WordPress plugin, and because we are building to allow employees to edit their sites (this is actually a WordPress network), we do not want to alter the plugin if possible. Note that the label "for" and the input "id" share the same dynamic key, so this might be a way to get the id, but I wanted to see if there is a shorter way of doing this.

    Read the article

  • Secure Password Storage and Transfer

    - by Andras Zoltan
    I'm developing a new user store for my organisation and am now tackling password storage. The concepts of salting, HMAC etc are all fine with me - and want to store the users' passwords either salted and hashed, HMAC hashed, or HMAC salted and hashed - not sure what the best way will be - but in theory it won't matter as it will be able to change over time if required. I want to have an XML & JSON service that can act as a Security Token Service for client-side apps. I've already developed one for another system, which requires that the client double-encrypts a clear-text password using SHA1 first and then HMACSHA1 using a 128 unique key (or nonce) supplied by the server for that session only. I'd like to repeat this technique for the new system - upgrading the algo to SHA256 (chosen since implementations are readily available for all aforementioned platforms - and it's much stronger than SHA1) - but there is a problem. If I'm storing the password as a salted hash in the user-store, the client will need to be sent that salt in order to construct the correct hash before being HMACd with the unique session key. This would completely go against the point of using a salt in the first place. Equally, if I don't use salt for password storage, but instead use HMAC, it's still the same problem. At the moment, the only solution I can see is to use naked SHA256 hashing for the password in the user store, so that I can then use this as a starting point on both the server and the client for a more secure salted/hmacd password transfer for the web service. This still leaves the user store vulnerable to a dictionary attack were it ever to be accessed; and however unlikely that might be - assuming it will never happen simply doesn't sit well with me. Greatly appreciate any input.

    Read the article

  • mysql database normalization question

    - by Chocho
    Here are my 3 tables: table 1 -- stores user information and has unique data; table 2 -- stores place categories such as toronto, ny, london, etc., so this is also unique; table 3 -- has duplicate information: it stores all the places a user has been. The 3 tables are linked or joined by these ids: table 1 has a "table1_id"; table 2 has a "table2_id" and "place_name"; table 3 has a "table3_id", "table1_id", "place_name". I have an interface where an admin sees all users. Beside each user is an "edit" button. Clicking on that edit button allows you to edit a specific user in a form which has a multiple-select drop-down box for "places". If an admin edits a user and adds 1 "place" for the user, I insert that information using PHP. If the admin decides to deselect that 1 "place", do I delete it or mark it as on/off? How about if the admin decides to select 2 "places" for the user: change the first "place" and add an additional "place"? Will my table just keep growing, and will I just have redundant information? Thanks.
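
    A sketch of one common approach, using hypothetical table and column names: treat table 3 as a plain join table and simply replace its rows for that user whenever the admin saves the form, rather than flagging individual rows on and off.

        -- On save: drop the user's old places, then insert the current selection.
        DELETE FROM user_places WHERE table1_id = 42;

        INSERT INTO user_places (table1_id, place_name)
        VALUES (42, 'toronto'), (42, 'london');

        -- A unique constraint keeps the table from accumulating duplicates:
        ALTER TABLE user_places ADD UNIQUE KEY uq_user_place (table1_id, place_name);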

    Read the article

  • HTML onmouseover doesn't work to show text

    - by REALFREE
    I'm trying to show some information on a picture when the onmouseover/onmouseout events occur. What I want to achieve is something like this website does on its top-selling section. My code is like this: <div class="container" onmouseover="$('#info').css('display','block');" onmouseout="$('#info').css('display','none');"> <img src="..."> <div id="info" style="display:none"> ... some text ... </div> </div> So the info div is initially hidden, but when the mouse is on a picture I want the information to appear on the corresponding picture (with a tinted background on the image so the text reads well). But somehow it doesn't work. I appreciate any suggestion on how to approach this problem. Edit: more precisely, the reason I chose to use inline handlers is that I need to show/hide text corresponding to the image (containing a unique div id) that the user puts their mouse on/out of. That means I have many div containers and each container has a unique div id.

    Read the article

  • MySQL & PHP - select/option lists and showing data to users that still allows me to generate queries

    - by Andrew Heath
    Sorry for the unclear title, an example will clear things up: TABLE: Scenario_victories ID scenid timestamp userid side playdate 1 RtBr001 2010-03-15 17:13:36 7 1 2010-03-10 2 RtBr001 2010-03-15 17:13:36 7 1 2010-03-10 3 RtBr001 2010-03-15 17:13:51 7 2 2010-03-10 ID and timestamp are auto-insertions by the database when the other 4 fields are added. The first thing to note is that a user can record multiple playings of the same scenario (scenid) on the same date (playdate) possibly with the same outcome (side = winner). Hence the need for the unique ID and timestamps for good measure. Now, on their user page, I'm displaying their recorded play history in a <select><option>... list form with 2 buttons at the end - Delete Record and Go to Scenario My script takes the scenid and after hitting a few other tables returns with something more user-friendly like: (playdate) (from scenid) (from side) ######################################################### # 2010-03-10 Road to Berlin #1 -- Germany, Hungary won # # 2010-03-10 Road to Berlin #1 -- Germany, Hungary won # # 2010-03-10 Road to Berlin #1 -- Soviet Union won # ######################################################### [Delete Record] [Go To Scenario] in HTML: <select name="history" size=3> <option>2010-03-10 Road to Berlin #1 -- Germany, Hungary won</option> <option>2010-03-10 Road to Berlin #1 -- Germany, Hungary won</option> <option>2010-03-10 Road to Berlin #1 -- Soviet Union won</option> </select> Now, if you were to highlight the first record and click Go to Scenario there is enough information there for me to parse it and produce the exact scenario you want to see. However, if you were to select Delete Record there is not - I have the playdate and I can parse the scenid and side from what's listed, but in this example all three records would have the same result. I appear to have painted myself into a corner. Does anyone have a suggestion as to how I can get some unique identifying data (ID and/or timestamp) to ride along on this form without showing it to the user? PHP-only please, I must be NoScript compliant!

    Read the article

  • SQL Joining Two or More from Table B with Common Data in Table A

    - by Matthew Frederick
    The real-world situation is a series of events that each have two or more participants (like sports teams, though there can be more than two in an event), only one of which is the host of the event. There is an Event db table for each unique event and a Participant db table with unique participants. They are joined together using a Matchup table. They look like this: Event EventID (PK) (other event data like the date, etc.) Participant ParticipantID (PK) Name Matchup EventID (FK to Event table) ParicipantID (FK to Participant) Host (1 or 0, only 1 host = 1 per EventID) What I'd like to get as a result is something like this: EventID ParticipantID where host = 1 Participant.Name where host = 1 ParticipantID where host = 0 Participant.Name where host = 0 ParticipantID where host = 0 Participant.Name where host = 0 ... Where one event has 2 participants and another has 3 participants, for example, the third participant column data would be null or otherwise noticeable, something like (PID = ParticipantID): EventID PID-1(host) Name-1 (host) PID-2 Name-2 PID-3 Name-3 ------- ----------- ------------- ----- ------ ----- ------ 1 7 Lions 8 Tigers 12 Bears 2 11 Dogs 9 Cats NULL NULL I suspect the answer is reasonably straightforward but for some reason I'm not wrapping my head around it. Alternately it's very difficult. :) I'm using MYSQL 5 if that affects the available SQL.
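
    A sketch of one possible approach (it assumes the Matchup foreign key column is spelled ParticipantID): join the host row once, then aggregate the non-host participants. Pivoting the guests into a fixed number of columns is awkward in plain SQL, so this returns them as delimited lists instead, which the application can split.

        SELECT e.EventID,
               hp.ParticipantID               AS host_id,
               hp.Name                        AS host_name,
               GROUP_CONCAT(gp.ParticipantID) AS guest_ids,
               GROUP_CONCAT(gp.Name)          AS guest_names
        FROM Event e
        JOIN Matchup hm          ON hm.EventID = e.EventID AND hm.Host = 1
        JOIN Participant hp      ON hp.ParticipantID = hm.ParticipantID
        LEFT JOIN Matchup gm     ON gm.EventID = e.EventID AND gm.Host = 0
        LEFT JOIN Participant gp ON gp.ParticipantID = gm.ParticipantID
        GROUP BY e.EventID, hp.ParticipantID, hp.Name;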

    Read the article

  • Efficient cron job utilizing Zend_Mail_Storage_Imap.

    - by fireeyedboy
    I'm new to the IMAP protocol and Zend_Mail_Storage, and I'm writing a small PHP script for a cron job that should regularly poll an IMAP account, check for new messages, and send an e-mail if new messages have arrived. As you can imagine, I want to poll the IMAP account only for relevant messages, and I only want to send a new e-mail if new messages have arrived since the last polled new message. So I thought of keeping track of the last message I polled with some unique identifier for a message. But I'm a bit uncertain about whether the methods I want to utilize for this do what I expect them to do. So my questions are: Does the iterator position of Zend_Mail_Storage_Imap actually correspond to some IMAP unique identifier for messages, or is it simply an internal position of Zend_Mail_Storage_Abstract? For instance, if I tell it to seek() to message 5 (which I stored from an earlier session), will it indeed seek to the appropriate message on the IMAP server, even if, for instance, messages have been deleted since the last session? Would keeping track of this latest polled message id in a file suffice for a cron job that, say, polls the account every 5 or 10 minutes? Or is this too naive, and should I be using a database, for instance? Or is there maybe a much easier way to keep track of such state with Zend_Mail_Storage_Abstract? Also, do I need to poll every IMAP folder? Or is everything accumulated when I poll INBOX? If you could shed some light on any of these matters, I'd appreciate it. Thanks in advance.

    Read the article

  • Django form linking 2 models by many to many field.

    - by Ed
    I have two models: class Actor(models.Model): name = models.CharField(max_length=30, unique = True) event = models.ManyToManyField(Event, blank=True, null=True) class Event(models.Model): name = models.CharField(max_length=30, unique = True) long_description = models.TextField(blank=True, null=True) I want to create a form that allows me to identify the link between the two models when I add a new entry. This works: class ActorForm(forms.ModelForm): class Meta: model = Actor The form includes both name and event, allowing me to create a new Actor and simultaneous link it to an existing Event. On the flipside, class EventForm(forms.ModelForm): class Meta: model = Event This form does not include an actor association. So I am only able to create a new Event. I can't simultaneously link it to an existing Actor. I tried to create an inline formset: EventFormSet = forms.models.inlineformset_factory(Event, Actor, can_delete = False, extra = 2, form = ActorForm) but I get an error <'class ctg.dtb.models.Actor'> has no ForeignKey to <'class ctg.dtb.models.Event'> This isn't too surprising. The inlineformset worked for another set of models I had, but this is a different example. I think I'm going about it entirely wrong. Overall question: How can I create a form that allows me to create a new Event and link it to an existing Actor?

    Read the article

  • Map large integer to a phrase

    - by Alexander Gladysh
    I have a large and "unique" integer (actually a SHA1 hash). I want (for no other reason than to have fun) to find an algorithm to convert that SHA1 hash to a (pseudo-)English phrase. The conversion should be reversible (i.e., knowing the algorithm, one must be able to convert the phrase back to SHA1 hash.) The possible usage of the generated phrase: the human readable version of Git commit ID, like a motto for a given program version (which is built from that commit). (As I said, this is "for fun". I don't claim that this is very practical — or be much more readable than the SHA1 itself.) A better algorithm would produce shorter, more natural-looking, more unique phrases. The phrase need not make sense. I would even settle for a whole paragraph of nonsense. (Though quality — englishness — of a paragraph should probably be better than for a mere phrase.) A variation: it is OK if I will be able to work only with a part of hash. Say, first six digits is OK. Possible approach: In the past I've attempted to build a probability table (of words), and generate phrases as Markov chains, seeding the generator (picking branches from probability tree), according to the bits I read from the SHA. This was not very successful, the resulting phrases were too long and ugly. I'm not sure if this was a bug, or the general flaw in the algorithm, since I had to abandon it early enough. Now I'm thinking about attempting to solve the problem once again. Any advice on how to approach this? Do you think Markov chain approach can work here? Something else?

    Read the article

  • Should I move big data blobs in JSON or in separate binary connection?

    - by Amagrammer
    QUESTION: Is it better to send large data blobs in JSON for simplicity, or send them as binary data over a separate connection? If the former, can you offer tips on how to optimize the JSON to minimize size? If the latter, is it worth it to logically connect the JSON data to the binary data using an identifier that appears in both, e.g., as "data" : "<unique identifier>" in the JSON and with the first bytes of the data blob being <unique identifier>? CONTEXT: My iPhone application needs to receive JSON data over the 3G network. This means that I need to think seriously about efficiency of data transfer, as well as the load on the CPU. Most of the data transfers will be relatively small packets of text data for which JSON is a natural format and for which there is no point in worrying much about efficiency. However, some of the most critical transfers will be big blobs of binary data -- definitely at least 100 kilobytes of data, and possibly closer to 1 megabyte as customers accumulate a longer history with the product. (Note: I will be caching what I can on the iPhone itself, but the data still has to be transferred at least once.) It is NOT streaming data. I will probably use a third-party JSON SDK -- the one I am using during development is here. Thanks

    Read the article

  • Hibernate : Opinions in Composite PK vs Surrogate PK

    - by Albert Kam
    As I understand it, whenever I use @Id and @GeneratedValue on a Long field inside a JPA/Hibernate entity, I'm actually using a surrogate key, and I think this is a very nice way to define a primary key, considering my not-so-good experiences with composite primary keys, where: there is more than one combination of business-value columns that becomes a unique PK; the composite PK values get duplicated across the detail tables; and you cannot change the business values inside that composite PK. I know Hibernate can support both types of PK, but I'm left wondering after previous chats with experienced colleagues, who said that a composite PK is easier to deal with when doing complex SQL queries and stored-procedure processing. They went on to say that using surrogate keys complicates joins and that there are several conditions where some things are impossible with surrogate keys. I'm sorry I can't explain the details here, since I wasn't clear enough when they explained it; maybe I'll put more details in next time. I'm currently trying to do a project and want to try out surrogate keys, since they don't get duplicated across tables, and we can change the business-column values. And when some combination of business values needs to be unique, I can use something like: @Table(name="MY_TABLE", uniqueConstraints={ @UniqueConstraint(columnNames={"FIRST_NAME", "LAST_NAME"}) // name + lastName combination must be unique But I'm still in doubt because of the previous discussion about composite keys. Could you share your experiences in this matter? Thank you!
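
    For reference, the JPA mapping above corresponds roughly to DDL like the sketch below (table and column types are illustrative): a surrogate primary key plus a UNIQUE constraint on the business columns, so the business-value combination is still enforced as unique without being the primary key.

        CREATE TABLE MY_TABLE (
            ID         BIGINT NOT NULL AUTO_INCREMENT,
            FIRST_NAME VARCHAR(64) NOT NULL,
            LAST_NAME  VARCHAR(64) NOT NULL,
            PRIMARY KEY (ID),
            UNIQUE KEY UQ_FIRST_LAST_NAME (FIRST_NAME, LAST_NAME)
        );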

    Read the article

  • Building a many-to-many db schema using only an unpredictable number of foreign keys

    - by user1449855
    Good afternoon (at least around here), I have a many-to-many relationship schema that I'm having trouble building. The main problem is that I'm only working with primary and foreign keys (no varchars or enums to simplify things) and the number of many-to-many relationships is not predictable and can increase at any time. I looked around at various questions and couldn't find something that directly addressed this issue. I split the problem in half, so I now have two one-to-many schemas. One is solved but the other is giving me fits. Let's assume table FOO is a standard, boring table that has a simple primary key. It's the one in the one-to-many relationship. Table BAR can relate to multiple keys of FOO. The number of related keys is not known beforehand. An example: From a query FOO returns ids 3, 4, 5. BAR needs a unique key that relates to 3, 4, 5 (though there could be any number of ids returned) The usual join table does not work: Table FOO_BAR primary_key | foo_id | bar_id | Since FOO returns 3 unique keys and here bar_id has a one-to-one relationship with foo_id. Having two join tables does not seem to work either, as it still can't map foo_ids 3, 4, 5 to a single bar_id. Table FOO_TO_BAR primary_key | foo_id | bar_to_foo_id | Table BAR_TO_FOO primary_key | foo_to_bar_id | bar_id | What am I doing wrong? Am I making things more complicated than they are? How should I approach the problem? Thanks a lot for the help.
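
    A sketch of the standard answer, with illustrative names: the usual join table does handle this case, because the same bar_id simply appears once per related foo_id; one row per pair is the many-to-many mapping, and the primary key is the pair rather than a separate surrogate column.

        CREATE TABLE foo_bar (
            foo_id INT NOT NULL,
            bar_id INT NOT NULL,
            PRIMARY KEY (foo_id, bar_id),
            FOREIGN KEY (foo_id) REFERENCES foo (id),
            FOREIGN KEY (bar_id) REFERENCES bar (id)
        );

        -- One BAR row (id 7) related to FOO ids 3, 4 and 5:
        INSERT INTO foo_bar (foo_id, bar_id) VALUES (3, 7), (4, 7), (5, 7);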

    Read the article

  • Top 10 collection completion - a monster in-query formula in MySQL?

    - by Andrew Heath
    I've got the following tables: User Basic Data (unique) [userid] [name] [etc] User Collection (one to one) [userid] [game] User Recorded Plays (many to many) [userid] [game] [scenario] [etc] Game Basic Data (unique) [game] [total_scenarios] I would like to output a table that shows the collection play completion percentage for the Top 10 users in descending order of %: Output Table [userid] [collection_completion] 3 95% 1 81% 24 68% etc etc In my mind, the calculation sequence for ONE USER is: grab user's total owned scenarios from User Collection joined with Game Basic Data and COUNT(gbd.total_scenarios) grab all recorded plays by COUNT(DISTINCT scenario) for that user Divide all recorded plays by total owned scenarios So that's 2 queries and a little PHP massage at the end. For a list of users sorted by completion percentage things get a little more complicated. I figure I could grab all users' collection totals in one query, and all users recorded plays in another, and then do the calcs and sort the final array in PHP, but it seems like overkill to potentially be doing all that for 1000+ users when I only ever want the Top 10. Is there a wicked monster query in MySQL that could do all that and LIMIT 10? Or is sticking with PHP handling the bulk of the work the way to go in this case?
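
    A sketch of one such query, with illustrative table names standing in for User Collection, User Recorded Plays and Game Basic Data (it assumes recorded plays only ever reference owned games): compute each user's owned-scenario total and distinct played-scenario count in two derived tables, then divide.

        SELECT owned.userid,
               COALESCE(plays.scenarios_played, 0) / owned.scenarios_owned * 100
                   AS completion_pct
        FROM (
            SELECT uc.userid, SUM(gbd.total_scenarios) AS scenarios_owned
            FROM user_collection uc
            JOIN game_basic_data gbd ON gbd.game = uc.game
            GROUP BY uc.userid
        ) AS owned
        LEFT JOIN (
            SELECT userid, COUNT(DISTINCT scenario) AS scenarios_played
            FROM user_recorded_plays
            GROUP BY userid
        ) AS plays ON plays.userid = owned.userid
        ORDER BY completion_pct DESC
        LIMIT 10;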

    Read the article

  • What does it mean that "Lisp can be written in itself?"

    - by Mason Wheeler
    Paul Graham wrote that "The unusual thing about Lisp-- in fact, the defining quality of Lisp-- is that it can be written in itself." But that doesn't seem the least bit unusual or definitive to me. ISTM that a programming language is defined by two things: Its compiler or interpreter, which defines the syntax and the semantics for the language by fiat, and its standard library, which defines to a large degree the idioms and techniques that skilled users will use when writing code in the language. With a few specific exceptions, (the non-C# members of the .NET family, for example,) most languages' standard libraries are written in that language for two very good reasons: because it will share the same set of syntactical definitions, function calling conventions, and the general "look and feel" of the language, and because the people who are likely to write a standard library for a programming language are its users, and particularly its designer(s). So there's nothing unique there; that's pretty standard. And again, there's nothing unique or unusual about a language's compiler being written in itself. C compilers are written in C. Pascal compilers are written in Pascal. Mono's C# compiler is written in C#. Heck, even some scripting languages have implementations "written in itself". So what does it mean that Lisp is unusual in being written in itself?

    Read the article

  • C#/Java: Proper Implementation of CompareTo when Equals tests reference identity

    - by Paul A Jungwirth
    I believe this question applies equally well to C# as to Java, because both require that {c,C}ompareTo be consistent with {e,E}quals: Suppose I want my equals() method to be the same as a reference check, i.e.: public bool equals(Object o) { return this == o; } In that case, how do I implement compareTo(Object o) (or its generic equivalent)? Part of it is easy, but I'm not sure about the other part: public int compareTo(Object o) { if (! (o instanceof MyClass)) return false; MyClass other = (MyClass)o; if (this == other) { return 0; } else { int c = foo.CompareTo(other.foo) if (c == 0) { // what here? } else { return c; } } } I can't just blindly return 1 or -1, because the solution should adhere to the normal requirements of compareTo. I can check all the instance fields, but if they are all equal, I'd still like compareTo to return a value other than 0. It should be true that a.compareTo(b) == -(b.compareTo(a)), and the ordering should stay consistent as long as the objects' state doesn't change. I don't care about ordering across invocations of the virtual machine, however. This makes me think that I could use something like memory address, if I could get at it. Then again, maybe that won't work, because the Garbage Collector could decide to move my objects around. hashCode is another idea, but I'd like something that will be always unique, not just mostly unique. Any ideas?

    Read the article

  • Match entities fulfilling filter (strict superset of search)

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. That is, given a set of Attributes A, I need to find all people that have a set of Attributes that are a superset of A. For example, my table structures look like this: Person: id | name 1 | John Doe 2 | Jane Roe 3 | John Smith Attribute: id | attr_name 1 | Sex 2 | Eye Color ValidValue: id | attr_id | value_name 1 | 1 | Male 2 | 1 | Female 3 | 2 | Blue 4 | 2 | Green 5 | 2 | Brown PersonAttributes id | person_id | attr_id | value_id 1 | 1 | 1 | 1 2 | 1 | 2 | 3 3 | 2 | 1 | 2 4 | 2 | 2 | 4 5 | 3 | 1 | 1 6 | 3 | 2 | 4 In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there? I've been trying to do something like the following, given that the ValidValue is unique in all cases: select distinct p from Person p join p.personAttributes a where a.value IN (:values) Then I've tried putting my set of required values in as "values", but that gives me errors no matter how I try to structure that. I also have to get a little more complicated, as follows, but at this point I'd be happy with solving the first problem cleanly. However, if it's possible, the Attribute table actually has a field for default value: id | attr_name | default_value 1 | Sex | 1 2 | Eye Color | 5 If the value you're searching on happens to be the default value, I want it to return any people that have no explicit value set for that attribute, because in the application logic, that means they inherit the default value. Again, I'm more concerned about the primary question, but if someone who can help with that also has some idea of how to do this one, I'd be extremely grateful.
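
    For the primary question, a sketch in plain SQL against the tables above (the JPQL version follows the same shape): this is relational division, i.e. keep only people who have all of the requested value_ids, by counting how many distinct requested values each person matches.

        -- Example: all people who are Male (value_id 1) AND have Green eyes (value_id 4)
        SELECT pa.person_id
        FROM PersonAttributes pa
        WHERE pa.value_id IN (1, 4)
        GROUP BY pa.person_id
        HAVING COUNT(DISTINCT pa.value_id) = 2;   -- = number of requested values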

    Read the article

  • Efficient algorithm to find first available name

    - by Avrahamshuk
    I have an array containing the names of items. I want to give the user the option to create items without specifying their name, so my program will have to supply a unique default name, like "Item 1". The challenge is that the name has to be unique, so I have to check the whole array for that default name, and if there is an item with the same name I have to change the name to "Item 2", and so on until I find an available name. The obvious solution would be something like this: String name; int i = 1; for (name = "Item " + i; !isAvailable(name); name = "Item " + (++i)); My problem with that algorithm is that it runs in O(N^2). I wonder if there is a known (or new) more efficient algorithm for this case. In other words, my question is this: is there any algorithm that finds the first greater-than-zero number that does NOT exist in a given array, and runs in less than O(N^2)? Thanks!

    Read the article

  • How to Set Customer Table with Multiple Phone Numbers? - Relational Database Design

    - by user311509
    CREATE TABLE Phone ( phoneID - PK . . . ); CREATE TABLE PhoneDetail ( phoneDetailID - PK phoneID - FK points to Phone phoneTypeID ... phoneNumber ... . . . ); CREATE TABLE Customer ( customerID - PK firstName phoneID - Unique FK points to Phone . . . ); A customer can have multiple phone numbers, e.g. Cell, Work, etc. phoneID in the Customer table is unique and points to phoneID in the Phone table. If a customer record is deleted, the phoneID in the Phone table should also be deleted. Do you have any concerns about my design? Is it designed properly? My problem is that phoneID in the Customer table is a child, and if the child record is deleted then I cannot delete the parent (Phone) record automatically.
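
    A sketch of the more usual arrangement, with illustrative column types: turn the relationship around so each phone row points at its customer, and let the database cascade the delete. Deleting a customer then removes that customer's phone rows automatically, and a customer can still have any number of numbers.

        CREATE TABLE Customer (
            customerID INT PRIMARY KEY,
            firstName  VARCHAR(64) NOT NULL
        );

        CREATE TABLE Phone (
            phoneID     INT PRIMARY KEY,
            customerID  INT NOT NULL,
            phoneTypeID INT NOT NULL,          -- Cell, Work, etc.
            phoneNumber VARCHAR(32) NOT NULL,
            FOREIGN KEY (customerID) REFERENCES Customer (customerID)
                ON DELETE CASCADE
        );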

    Read the article
