Search Results

Search found 5644 results on 226 pages for 'unique constraints'.

  • How to properly test for constraint violation in hibernate?

    - by Cesar
    I'm trying to test Hibernate mappings, specifically a unique constraint. My POJO is mapped as follows:

        <property name="name" type="string" unique="true" not-null="true" />

    What I want to do is to test that I can't persist two entities with the same name:

        @Test(expected = ConstraintViolationException.class)
        public void testPersistTwoExpertiseAreasWithTheSameNameIsNotAllowed() {
            ExpertiseArea ea = new ExpertiseArea("Design");
            ExpertiseArea otherEA = new ExpertiseArea("Design");
            ead.setSession(getSessionFactory().getCurrentSession());
            ead.getSession().beginTransaction();
            ead.makePersistent(ea);
            ead.makePersistent(otherEA);
            ead.getSession().getTransaction().commit();
        }

    On committing the current transaction, I can see in the logs that a ConstraintViolationException is thrown:

        16:08:47,571 DEBUG SQL:111 - insert into ExpertiseArea (VERSION, name, id) values (?, ?, ?)
        16:08:47,571 DEBUG SQL:111 - insert into ExpertiseArea (VERSION, name, id) values (?, ?, ?)
        16:08:47,572 WARN  JDBCExceptionReporter:100 - SQL Error: -104, SQLState: 23505
        16:08:47,572 ERROR JDBCExceptionReporter:101 - integrity constraint violation: unique constraint or index violation; SYS_CT_10036 table: EXPERTISEAREA
        16:08:47,573 ERROR AbstractFlushingEventListener:324 - Could not synchronize database state with session
        org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update

    So I would expect the test to pass, since the expected ConstraintViolationException is thrown. However, the test never completes (it neither passes nor fails) and I have to kill the test runner manually. What's the correct way to test this?
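
    A likely culprit, assuming the usual JUnit 4 / Hibernate setup from the question: the exception is raised inside commit(), the failed transaction is never rolled back, and the session keeps its connection open, which is what leaves the runner hanging. A minimal sketch of a test that flushes explicitly and always cleans up (ead, makePersistent and ExpertiseArea are taken from the question; the rest is assumption, not the one canonical fix):

        import org.hibernate.Session;
        import org.hibernate.Transaction;
        import org.hibernate.exception.ConstraintViolationException;
        import org.junit.Test;

        public class ExpertiseAreaTest {
            @Test(expected = ConstraintViolationException.class)
            public void duplicateNameIsRejected() {
                Session session = getSessionFactory().getCurrentSession();
                ead.setSession(session);
                Transaction tx = session.beginTransaction();
                try {
                    ead.makePersistent(new ExpertiseArea("Design"));
                    ead.makePersistent(new ExpertiseArea("Design"));
                    // Force the INSERTs now, so the wrapped JDBC failure surfaces
                    // here as a ConstraintViolationException instead of inside commit().
                    session.flush();
                    tx.commit();
                } finally {
                    if (tx.isActive()) {
                        tx.rollback(); // release the connection so the runner can finish
                    }
                }
            }
        }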

  • How to prevent duplicate records being inserted with SqlBulkCopy when there is no primary key

    - by kscott
    I receive a daily XML file that contains thousands of records, each being a business transaction that I need to store in an internal database for use in reporting and billing. I was under the impression that each day's file contained only unique records, but I have discovered that my definition of unique is not exactly the same as the provider's.

    The current application that imports this data is a C#.Net 3.5 console application; it uses SqlBulkCopy into a MS SQL Server 2008 database table where the columns exactly match the structure of the XML records. Each record has just over 100 fields, and there is no natural key in the data; rather, the fields I can come up with that make sense as a composite key end up also having to allow nulls. Currently the table has several indexes, but no primary key. Basically the entire row needs to be unique: if one field is different, it is valid enough to be inserted.

    I looked at creating an MD5 hash of the entire row, inserting that into the database and using a constraint to prevent SqlBulkCopy from inserting the row, but I don't see how to get the MD5 hash into the bulk copy operation, and I'm not sure whether the whole operation would fail and roll back if any one record failed, or whether it would continue.

    The file contains a very large number of records, so going row by row through the XML, querying the database for a record that matches all fields, and then deciding whether to insert is really the only way I can see of doing this. I was just hoping not to have to rewrite the application entirely, and the bulk copy operation is so much faster.

    Does anyone know of a way to use SqlBulkCopy while preventing duplicate rows, without a primary key? Or any suggestion for a different way to do this?

  • Paperclip and Amazon S3 Issue

    - by Jimmy
    Hey everyone, I have a rails app running on Heroku. I am using paperclip for some simple image uploads for user avatars and some other things. I have S3 set as my backend and everything seems to be working fine, except when trying to push to S3 I get the following error:

        The AWS Access Key Id you provided does not exist in our records.

    Thinking I mis-pasted my access key and secret key, I tried again; still no luck. Thinking maybe it was just a buggy key, I deactivated it and generated a new one. Still no luck. For both keys I have used the S3 browser app on OS X and have been able to connect with each and view my current buckets and add/delete buckets. Is there something I should be looking out for? I have my application's S3 and paperclip setup like so:

        development:
          bucket: (unique name)
          access_key_id: ENV['S3_KEY']
          secret_access_key: ENV['S3_SECRET']
        test:
          bucket: (unique name)
          access_key_id: ENV['S3_KEY']
          secret_access_key: ENV['S3_SECRET']
        production:
          bucket: (unique_name)
          access_key_id: ENV['S3_KEY']
          secret_access_key: ENV['S3_SECRET']

        has_attached_file :cover,
          :styles => { :thumb => "50x50" },
          :storage => :s3,
          :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
          :path => ":class/:id/:style/:filename"

    Note: I just added the (unique name) bits, those aren't actually there. I have also verified bucket names, but I don't even think this is getting that far. I also have my heroku environment vars set up correctly and have them set up on dev.

  • Search in Stack

    - by WPS
    Hi, I have a Java Stack with some custom objects added to it. These objects contain a unique id as one of their fields. I need to get the index of an object in the stack based on that unique id. Please see the example:

        class TestVO {
            private String name;
            private String uniqueId;
            // getters and setters
        }

        public class TestStack {
            public static void main(String[] args) {
                TestVO vo1 = new TestVO();
                TestVO vo2 = new TestVO();
                TestVO vo3 = new TestVO();

                vo1.setName("Test Name 1");
                vo1.setUniqueId("123");
                vo2.setName("Test name 2");
                vo2.setUniqueId("234");

                Stack<TestVO> stack = new Stack<TestVO>();
                stack.add(vo1);
                stack.add(vo2);

                // I need to get the index of a VO from the stack using its unique id
            }
        }

    Can someone please help me to implement this?
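
    A minimal sketch of the lookup, assuming the getter getUniqueId() implied by the question's "getters and setters" comment: java.util.Stack extends Vector, so you can scan it by position with get(int).

        import java.util.Stack;

        public final class StackSearch {
            // Returns the index of the first element whose uniqueId matches,
            // or -1 if there is no match. A plain O(n) scan.
            public static int indexOfUniqueId(Stack<TestVO> stack, String uniqueId) {
                for (int i = 0; i < stack.size(); i++) {
                    if (uniqueId.equals(stack.get(i).getUniqueId())) {
                        return i;
                    }
                }
                return -1;
            }
        }

    With the example data above, StackSearch.indexOfUniqueId(stack, "234") would return 1. Alternatively, overriding equals()/hashCode() on TestVO to compare uniqueId would let stack.indexOf(...) do the same job.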

  • Which itch does a gravatar scratch?

    - by WizardOfOdds
    This is a very serious question: I've seen lots of threads here about gravatars, but I couldn't find an answer to this question: what computer identification/authentication (?) problem, if any, are gravatars supposed to solve?

    Neither the Wikipedia entry nor the official website is very useful. The official website mentions a "globally unique" picture. Unique in what sense? As far as I can see, it's only the hash that is unique: two persons can have two pictures looking very similar, if not identical.

    Note that this question is not about which problems gravatars unarguably cause (like leaking 10% of the stackoverflow.com accounts' email addresses, as discussed here: "gravatars can leak email addresses"), but about which authentication (?) problems, if any, gravatars are supposed to solve. Is the goal just to have a cool/funny/cute icon and save bandwidth by having it stored on a remote website, or is there more to it, like serving a real authentication purpose which I'd be completely missing?

    Note that I've got nothing against them and find them rather cool, but I'm just having a hard time figuring out what their purpose is and whether I should care about them in the webapps I'm developing.

  • Using datetime float representation as primary key

    - by devanalyst
    From my experience I have learned that using a surrogate INT column as primary key, especially an IDENTITY column, offers better performance than using a GUID or char/varchar column as primary key. I try to use an IDENTITY key as primary key wherever possible. But recently I came across a schema where the tables were horizontally partitioned and were managed via a partitioned view. So the tables could not have an IDENTITY column, since that would make the partitioned view non-updatable.

    One workaround for this was to create a dummy 'keygenerator' table with an identity column to generate IDs for the primary key. But this would mean having a 'keygenerator' table for each of the partitioned views. My next thought was to use a float as primary key. The reason is the following key algorithm that I devised:

        DECLARE @KEY FLOAT
        SET @KEY = CONVERT(FLOAT, GETDATE()) / 100000.0
        SET @KEY = @EMP_ID + @KEY

    Here's how it works. CONVERT(FLOAT, GETDATE()) gives the float representation of the current datetime, since internally all datetimes are represented by SQL as float values. CONVERT(FLOAT, GETDATE()) / 100000.0 converts the float representation into a complete decimal value, i.e. all digits are pushed to the right side of the ".". @KEY = @EMP_ID + @KEY adds the Employee ID, which is an integer, to this decimal value.

    The logic is that the Employee ID is guaranteed to be unique across sessions, since an employee cannot connect to an application more than once at the same time. And for the same employee, each time a key is generated the current datetime will be unique. In all, a unique key across all employee sessions and across time. So for Emp IDs 11 and 12, I have key values like 12.40046693321566357 and 11.40046693542361111.

    But my concern is whether a float data type as primary key offers benefits compared to choosing GUID or char/varchar as primary keys. Also important: because of the partitioning, the float column is going to be part of a composite key.

  • jQuery HOW TO?? pass additional parameters to success callback for $.ajax call ?

    - by dotnetgeek
    Hello jQuery Ninjas! I am trying, in vain it seems, to pass additional parameters back to the success callback method that I have created for a successful ajax call. A little background: I have a page with a number of dynamically created textbox/selectbox pairs. Each pair has a dynamically assigned unique name such as name="unique-pair-1_txt-url" and name="unique-pair-1_selectBox"; the second pair has the same names but with a different prefix. In an effort to reuse code, I have crafted the callback to take the data and a reference to the selectbox. However, when the callback is fired, the reference to the selectbox comes back as 'undefined'. I read here that it should be doable. I have even tried taking advantage of the 'context' option, but still nothing. Here is the script block that I am trying to use:

        <script type="text/javascript" language="javascript">
            $j = jQuery.noConflict();

            function getImages(urlValue, selectBox) {
                $j.ajax({
                    type: "GET",
                    url: $j(urlValue).val(),
                    dataType: "jsonp",
                    context: selectBox,
                    success: function(data) {
                        loadImagesInSelect(data, $j(this));
                    },
                    error: function(xhr, ajaxOptions, thrownError) {
                        alert(xhr.status);
                        alert(thrownError);
                    }
                });
            }

            function loadImagesInSelect(data, selectBox) {
                //var select = $j('[name=single_input.<?cs var:op_unique_name ?>.selImageList]');
                var select = selectBox;
                select.empty();
                $j(data).each(function() {
                    var theValue = $j(this)[0]["@value"];
                    var theId = $j(this)[0]["@name"];
                    select.append("<option value='" + theId + "'>" + theValue + "</option>");
                });
                select.children(":first").attr("selected", true);
            }
        </script>

    From what I have read, I feel I am close, but I just can't put my finger on the missing link. Please help in your typical ninja stealthy ways. TIA

  • Elegantly handling constraint violations in EJB/JPA environment?

    - by hallidave
    I'm working with EJB and JPA on a Glassfish v3 app server. I have an Entity class where I'm forcing one of the fields to be unique with a @Column annotation:

        @Entity
        public class MyEntity implements Serializable {
            private String uniqueName;

            public MyEntity() { }

            @Column(unique = true, nullable = false)
            public String getUniqueName() {
                return uniqueName;
            }

            public void setUniqueName(String uniqueName) {
                this.uniqueName = uniqueName;
            }
        }

    When I try to persist an object with this field set to a non-unique value, I get an exception (as expected) when the transaction managed by the EJB container commits. I have two problems I'd like to solve:

    1) The exception I get is the unhelpful "javax.ejb.EJBException: Transaction aborted". If I recursively call getCause() enough times, I eventually reach the more useful "java.sql.SQLIntegrityConstraintViolationException", but this exception is part of the EclipseLink implementation and I'm not really comfortable relying on its existence. Is there a better way to get detailed error information with JPA?

    2) The EJB container insists on logging this error even though I catch it and handle it. Is there a better way to handle this error which will stop Glassfish from cluttering up my logs with useless exception information? Thanks.
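
    One implementation-neutral sketch for problem 1, assuming a JDBC SQLException is somewhere in the cause chain: the SQL standard reserves SQLState class "23" for integrity constraint violations, so you can test for that instead of relying on a provider-specific exception type.

        import java.sql.SQLException;

        public final class ConstraintViolations {
            // Walks the cause chain and reports whether any link is an
            // SQLException whose SQLState is in class 23 (integrity
            // constraint violation, per the SQL standard).
            public static boolean isConstraintViolation(Throwable t) {
                Throwable cur = t;
                while (cur != null) {
                    if (cur instanceof SQLException) {
                        String state = ((SQLException) cur).getSQLState();
                        if (state != null && state.startsWith("23")) {
                            return true;
                        }
                    }
                    cur = (cur.getCause() == cur) ? null : cur.getCause();
                }
                return false;
            }
        }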

  • Discover periodic patterns in a large data-set

    - by Miner
    I have a large sequence of tuples on disk in the form

        (t1, k1) (t2, k2) ... (tn, kn)

    ti is a monotonically increasing timestamp and ki is a key (assume a fixed-length string if needed). Neither ti nor ki is guaranteed to be unique. However, the number of unique tis and kis is huge (millions). n itself is very large (100 million+), and the size of k (approx 500 bytes) makes it impossible to store everything in memory.

    I would like to find periodic occurrences of keys in this sequence. For example, given the sequence

        (1, a) (2, b) (3, c) (4, b) (5, a) (6, b) (7, d) (8, b) (9, a) (10, b)

    the algorithm should emit (a, 4) and (b, 2). That is, a occurs with a period of 4 and b occurs with a period of 2.

    If I build a hash of all keys and store, for each key, the average of the differences between consecutive timestamps and the standard deviation of the same, I might be able to make a single pass and report only the keys that have an acceptable standard deviation (ideally, 0). However, this requires one bucket per unique key, whereas in practice I might have very few really periodic patterns. Any better ways?
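
    A minimal sketch of the single-pass idea described in the question (so the memory concern of one bucket per key stands; hashing each 500-byte key down to a small digest would at least shrink the buckets). Per key it keeps only the last timestamp plus running gap statistics, using Welford's online algorithm instead of storing the gaps:

        import java.util.HashMap;
        import java.util.Map;

        public final class PeriodScan {
            private static final class GapStats {
                long lastT;      // last timestamp seen for this key
                long gaps;       // number of gaps observed
                double mean, m2; // running mean / sum of squared deviations

                void addGap(double gap) {
                    gaps++;
                    double delta = gap - mean;
                    mean += delta / gaps;
                    m2 += delta * (gap - mean);
                }
            }

            private final Map<String, GapStats> stats = new HashMap<>();

            // Feed (t, k) pairs in timestamp order, streaming from disk.
            public void observe(long t, String k) {
                GapStats s = stats.get(k);
                if (s == null) {
                    s = new GapStats();
                    s.lastT = t;
                    stats.put(k, s);
                } else {
                    s.addGap(t - s.lastT);
                    s.lastT = t;
                }
            }

            // Emit (key, period) for keys whose gaps never varied (deviation 0).
            public void report() {
                stats.forEach((k, s) -> {
                    if (s.gaps >= 2 && s.m2 == 0.0) {
                        System.out.println("(" + k + ", " + (long) s.mean + ")");
                    }
                });
            }
        }

    Fed the example sequence, report() prints (a, 4) and (b, 2); c and d never accumulate two gaps and are skipped.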

  • DDD and MVC: Difference between 'Model' and 'Entity'

    - by Nathan Loding
    I'm seriously confused about the concept of the 'Model' in MVC. Most frameworks that exist today put the Model between the Controller and the database, and the Model almost acts like a database abstraction layer. The concept of 'Fat Model, Skinny Controller' is lost as the Controller starts doing more and more logic.

    In DDD, there is also the concept of a Domain Entity, which has a unique identity. As I understand it, a user is a good example of an Entity (unique userid, for instance). The Entity has a life-cycle: its values can change throughout the course of the action, and then it's saved or discarded.

    The Entity I describe above is what I thought the Model was supposed to be in MVC? How off-base am I?

    To clutter things more, you can throw in other patterns, such as the Repository pattern (maybe putting a Service in there). It's pretty clear how the Repository would interact with an Entity, but how does it interact with a Model? Controllers can have multiple Models, which makes it seem like a Model is less a "database table" than it is a unique Entity.

    So, in very rough terms, which is better? No "Model" really ...

        class MyController {
            public function index() {
                $repo = new PostRepository();
                $posts = $repo->findAllByDateRange('within 30 days');
                foreach ($posts as $post) {
                    echo $post->Author;
                }
            }
        }

    Or this, which has a Model as the DAO?

        class MyController {
            public function index() {
                $model = new PostModel(); // maybe this returns a PostRepository?
                $posts = $model->findAllByDateRange('within 30 days');
                while ($posts->getNext()) {
                    echo $posts->Post->Author;
                }
            }
        }

    Neither of those examples does quite what I was describing above. I'm clearly lost. Any input?

  • SQL Server: Clustering by timestamp; pros/cons

    - by Ian Boyd
    I have a table in SQL Server where I want inserts to be added to the end of the table (as opposed to a clustering key that would cause them to be inserted in the middle). This means I want the table clustered by some column that will constantly increase. This could be achieved by clustering on a datetime column:

        CREATE TABLE Things (
            ...
            CreatedDate datetime DEFAULT getdate(),
            [timestamp] timestamp,
            CONSTRAINT [IX_Things] UNIQUE CLUSTERED (CreatedDate)
        )

    But I can't guarantee that two Things won't have the same time, so my requirements can't really be achieved by a datetime column. I could add a dummy identity int column and cluster on that:

        CREATE TABLE Things (
            ...
            RowID int IDENTITY(1,1),
            [timestamp] timestamp,
            CONSTRAINT [IX_Things] UNIQUE CLUSTERED (RowID)
        )

    But you'll notice that my table already contains a timestamp column, a column which is guaranteed to be monotonically increasing. This is exactly the characteristic I want in a candidate cluster key. So I cluster the table on the rowversion (aka timestamp) column:

        CREATE TABLE Things (
            ...
            [timestamp] timestamp,
            CONSTRAINT [IX_Things] UNIQUE CLUSTERED (timestamp)
        )

    Rather than adding a dummy identity int column (RowID) to ensure an order, I use what I already have. What I'm looking for are thoughts on why this is a bad idea, and what other ideas are better. Note: Community wiki, since the answers are subjective.

  • Find three numbers appeared only once

    - by shilk
    In a sequence of length n, where n = 2k+3, there are k distinct numbers that each appear twice and three numbers that appear only once. The question is: how do you find the three numbers that appear only once? For example, in the sequence

        1 1 2 6 3 6 5 7 7

    the three unique numbers are 2 3 5.

    Note: 3 <= n < 1e6 and the numbers range from 1 to 2e9. The memory limit is 1000KB, which implies that we can't store the whole sequence.

    Method I have tried (memory limit exceeded): I initialize a tree, and when I read in a number I try to remove it from the tree; if the remove returns false (not found), I add it to the tree. Finally, the tree holds the three numbers. It works, but exceeds the memory limit.

    I know how to find one or two such numbers using bit manipulation, so I wonder if we can find three using the same (or a similar) method. The method for one or two numbers appearing only once: if there is one number that appears only once, we can apply XOR to the sequence to find it. If there are two, we can first apply XOR to the sequence, then separate the sequence into two parts by one bit of the result that is 1, and again apply XOR to the two parts, and we will find the answer.
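
    For reference, a sketch in Java of the one/two-number technique the question already describes, streamed so the sequence never has to be held in memory (two passes over the data). Extending it to three numbers is exactly the open problem: with survivors a, b, c the full XOR is a^b^c, and a set bit there no longer cleanly splits them one-against-one.

        public final class TwoUnique {
            // Every value appears exactly twice except two; find those two.
            // firstPass and secondPass re-read the same sequence from disk.
            public static long[] find(Iterable<Long> firstPass, Iterable<Long> secondPass) {
                long xorAll = 0;
                for (long v : firstPass) {
                    xorAll ^= v;             // duplicates cancel, leaving a ^ b
                }
                long bit = xorAll & -xorAll; // lowest bit where a and b differ

                long a = 0, b = 0;
                for (long v : secondPass) {
                    if ((v & bit) != 0) {    // partition on that bit, XOR each part
                        a ^= v;
                    } else {
                        b ^= v;
                    }
                }
                return new long[] { a, b };
            }
        }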

  • NSDictionary, NSArray, NSSet and efficiency

    - by ryyst
    Hi, I've got a text file with about 200,000 lines. Each line represents an object with multiple properties. I only search through one of the properties (the unique ID) of the objects. If the unique ID I'm looking for is the same as the current object's unique ID, I'm going to read the rest of the object's values.

    Right now, each time I search for an object I just read the whole text file line by line, create an object for each line and see if it's the object I'm looking for, which is basically the most inefficient way to do the search. I would like to read all those objects into memory so I can later search through them more efficiently.

    The question is, what's the most efficient way to perform such a search? Is a 200,000-entry NSArray a good way to do this (I doubt it)? How about an NSSet? With an NSSet, is it possible to only search for one property of the objects? Thanks for any help! - Ry

  • Server side form validation and POST data

    - by tomcritchlow
    Hi, I have a user input form here: http://www.7bks.com/create (Google login required). When you first create a list, you are asked to create a public username. Unfortunately, there is currently no constraint to make this unique. I'm working on the code to enforce unique usernames at the moment and would like to know the best way to do it.

    Tech details: appengine, python, webapp framework.

    What I'm planning is something like this:

    1. First, the /create form posts the data to /inputlist/ (this is the same as currently happens).
    2. /inputlist/ queries the datastore for the given username.
    3. If it already exists, redirect back to /create and display the /create page with all the info previously entered, plus an additional error message of "this username is already taken".

    My question is: Is this the best way of handling server-side validation? What's the best way of storing the list details while I verify and modify the username? As I see it, I have three options to store the list details, but I'm not sure which is "best":

    1. Store the list details in the session cookie (I am using GAEsessions for cookies).
    2. Define a separate POST class for /create and post the list data back from /inputlist/ to the /create page (currently /create only has a GET class).
    3. Store the list in the datastore, even though the username is non-unique.

    Thank you very much for your help :) I'm pretty new to python and coding in general, so if I've missed something obvious, my apologies.

    Tom

    PS - I'm sure I can eventually figure it out, but I can't find any documentation on POSTing data using the webapp appengine framework, which I'd need in order to do solution 2 above :s Maybe you could point me in the right direction for that too? Thanks!

    PPS - It's a little out of date now, but you can see roughly how the /create and /inputlist/ code works at the moment here: 7bks.com Gist

  • Finds in Rails 3 and ActiveRelation

    - by TheDelChop
    Guys, I'm trying to understand the new arel engine in Rails 3 and I've got a question. I've got two models, User and Task:

        class User < ActiveRecord::Base
          has_many :tasks
        end

        class Task < ActiveRecord::Base
          belongs_to :user
        end

    Here are my routes to imply the relation:

        resources :users do
          resources :tasks
        end

    And here is my Tasks controller:

        class TasksController < ApplicationController
          before_filter :load_user

          def new
            @task = @user.tasks.new
          end

          private

          def load_user
            @user = User.where(:id => params[:user_id])
          end
        end

    Problem is, I get the following error when I try to invoke the new action:

        NoMethodError: undefined method `tasks' for #<ActiveRecord::Relation:0x3dc2488>

    I am sure my problem is with the new arel engine; does anybody understand what I'm doing wrong? Sorry guys, here is my schema.db file:

        ActiveRecord::Schema.define(:version => 20100525021007) do
          create_table "tasks", :force => true do |t|
            t.string   "name"
            t.integer  "estimated_time"
            t.datetime "created_at"
            t.datetime "updated_at"
            t.integer  "user_id"
          end

          create_table "users", :force => true do |t|
            t.string   "email",                                 :default => "", :null => false
            t.string   "encrypted_password",     :limit => 128, :default => "", :null => false
            t.string   "password_salt",                         :default => "", :null => false
            t.string   "reset_password_token"
            t.string   "remember_token"
            t.datetime "remember_created_at"
            t.integer  "sign_in_count",                         :default => 0
            t.datetime "current_sign_in_at"
            t.datetime "last_sign_in_at"
            t.string   "current_sign_in_ip"
            t.string   "last_sign_in_ip"
            t.datetime "created_at"
            t.datetime "updated_at"
            t.string   "username"
          end

          add_index "users", ["email"], :name => "index_users_on_email", :unique => true
          add_index "users", ["reset_password_token"], :name => "index_users_on_reset_password_token", :unique => true
          add_index "users", ["username"], :name => "index_users_on_username", :unique => true
        end

    Thank you, Joe

  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055). The table currently has 478 million rows of data. The primary key column is an INT with IDENTITY. There is a unique constraint imposed on two other columns with a non-clustered index. This is a vendor application and we are only responsible for maintaining the DB.

    Now the vendor has recommended doing the following "to improve performance":

    1. Drop the PK and clustered index.
    2. Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT.
    3. Recreate the PK with a NON-CLUSTERED index.
    4. Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT.

    I am not convinced that this is the right thing to do, and I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data; then creating a CLUSTERED index on two columns would be a really mammoth task. Would creating another table with the same structure and the new indexing scheme, copying the data over, dropping the old table and renaming the new one be a better approach?

    I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled?

    I am simply not able to understand what kind of "performance improvement" this change will provide. I think it will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj

  • Change with jQuery a cell of a table created with JSF

    - by perissf
    From within a xhtml page created with JSF, I need to use JavaScript / jQuery to change the content of a cell of a table. I know how to assign a unique id to the div containing the table, and to the tbody. I can also assign unique class names to the div itself and to the target column. The target row is identified by the data-rk attribute.

        <div id="tabForm:centerTabView:personsTable" class="ui-datatable ui-widget personsTable">
            <table role="grid">
                <tbody id="tabForm:centerTabView:personsTable_data">
                    <tr data-rk="2">
                        <td ... />
                        <td class="lastNameCol" role="gridcell">
                            <div> To Be Edited </div>
                        </td>
                        <td ... />
                    </tr>
                    <tr ... />
                </tbody>
            </table>
        </div>

    I have tried many combinations of different jQuery selectors, but I am really lost. I need to search for my target row and my target column inside that particular div or table, because the xhtml page may contain other tables with different unique ids (and accidentally with the same row and column ids).

  • [Database] How to model this one-to-one relation?

    - by pbean
    I have several entities which represent different types of users who need to be able to log in to a particular system. Additionally, they have different types of information associated with them. For example: a "general user", which has an e-mail address, and an "admin user", which has a workstation number (note that this is a hypothetical case). Both entities also share common properties like first name, surname, address and telephone number. Finally, they naturally need to have a (unique) user name and a password to log in.

    In the application, the user just has to fill in his user name and password, and the functionality of the application changes slightly according to the type of the user. You can imagine that the username needs to be unique for this to work.

    How should I model this effectively? I can't just create two tables, because then I can't force a unique constraint on the user name. I also can't put them all in just one table, because they have different types of specific information associated with them. I think I might need three separate tables: one for "users" (with user name and password), one for the "general users" and another one for the "admin users". But how would the relations between these work? Or is there another solution?

    (By the way, the target DBMS is MySQL, so I don't think generalization is supported in the database system itself.)

  • Creating an appropriate index for a frequently used query in SQL Server

    - by Slauma
    In my application I have two queries which will be used quite frequently. The WHERE clauses of these queries are the following:

        WHERE FieldA = @P1 AND (FieldB = @P2 OR FieldC = @P2)

    and

        WHERE FieldA = @P1 AND FieldB = @P2

    P1 and P2 are parameters entered in the UI or coming from external datasources.

    - FieldA is an int and highly non-unique, meaning: only two, three, four different values in a table with, say, 20,000 rows
    - FieldB is a varchar(20) and is "almost" unique; there will be only very few rows where FieldB might have the same value
    - FieldC is a varchar(15) and also highly distinct, but not as much as FieldB
    - FieldA and FieldB together are unique (but do not form my primary key, which is a simple auto-incrementing identity column with a clustered index)

    I'm wondering now what's the best way to define an index to speed up specifically these two queries. Shall I define one index on (FieldB, FieldC, FieldA) (or should FieldC come before FieldB?), or better two indices: (FieldB, FieldA) and (FieldC, FieldA)? Or are there even other and better options? What's the best way and why? Thank you for suggestions in advance!

  • What category of combinatorial problems appear on the logic games section of the LSAT?

    - by Merjit
    There's a category of logic problem on the LSAT that goes like this:

        Seven consecutive time slots for a broadcast, numbered in chronological order 1 through 7, will be filled by six song tapes (G, H, L, O, P, S) and exactly one news tape. Each tape is to be assigned to a different time slot, and no tape is longer than any other tape. The broadcast is subject to the following restrictions: L must be played immediately before O. The news tape must be played at some time after L. There must be exactly two time slots between G and P, regardless of whether G comes before P or whether G comes after P.

    I'm interested in generating a list of permutations that satisfy the conditions, as a way of studying for the test and as a programming challenge. However, I'm not sure what class of permutation problem this is. I've generalized the type of problem as follows: Given an n-length array A:

    1. How many ways can a set of n unique items be arranged within A? E.g., how many ways are there to rearrange ABCDEFG?
    2. If the length of the set of unique items is less than the length of A, how many ways can the set be arranged within A if items in the set may occur more than once? E.g., ABCDEF -> AABCDEF, ABBCDEF, etc.
    3. How many ways can a set of unique items be arranged within A if the items of the set are subject to "blocking conditions"?

    My thought is to encode the restrictions and then use something like Python's itertools to generate the permutations. Thoughts and suggestions are welcome.
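
    Brute force is comfortably cheap here: 7! = 5040 orderings. A sketch in Java of the "encode the restrictions, generate, filter" plan from the question (the tape letters and the three predicates come from the puzzle; treating the news tape as an extra symbol N is an assumption of this sketch):

        import java.util.ArrayList;
        import java.util.List;

        public final class BroadcastSlots {
            // Six song tapes plus 'N' for the news tape; index = time slot.
            private static final char[] TAPES = { 'G', 'H', 'L', 'O', 'P', 'S', 'N' };

            public static void main(String[] args) {
                List<String> valid = new ArrayList<>();
                permute(TAPES, 0, valid);
                valid.forEach(System.out::println);
                System.out.println(valid.size() + " valid schedules");
            }

            private static void permute(char[] a, int k, List<String> out) {
                if (k == a.length) {
                    String s = new String(a);
                    if (satisfies(s)) {
                        out.add(s);
                    }
                    return;
                }
                for (int i = k; i < a.length; i++) {
                    swap(a, k, i);
                    permute(a, k + 1, out);
                    swap(a, k, i); // backtrack
                }
            }

            private static boolean satisfies(String s) {
                int l = s.indexOf('L'), o = s.indexOf('O');
                int n = s.indexOf('N'), g = s.indexOf('G'), p = s.indexOf('P');
                return o == l + 1            // L immediately before O
                    && n > l                 // the news tape comes after L
                    && Math.abs(g - p) == 3; // exactly two slots between G and P
            }

            private static void swap(char[] a, int i, int j) {
                char t = a[i];
                a[i] = a[j];
                a[j] = t;
            }
        }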

  • Replacing all GUIDs in a file with new GUIDs from the command line

    - by Josh Petrie
    I have a file containing a large number of occurrences of the string Guid="GUID HERE" (where GUID HERE is a unique GUID at each occurrence), and I want to replace every existing GUID with a new unique GUID.

    This is on a Windows development machine, so I can generate unique GUIDs with uuidgen.exe (which produces a GUID on stdout every time it is run). I have sed and such available (but no awk, oddly enough).

    I am basically trying to figure out if it is possible (and if so, how) to use the output of a command-line program as the replacement text in a sed substitution expression, so that I can make this replacement with a minimum of effort on my part. I don't need to use sed; if there's another way to do it, such as some crazy vim-fu or some other program, that would work as well. But I'd prefer solutions that use a minimal set of *nix programs, since I'm not really on *nix machines. To be clear, if I have a file like this:

        etc etc Guid="A"
        etc etc Guid="B"

    I would like it to become this:

        etc etc Guid="C"
        etc etc Guid="D"

    where A, B, C, D are actual GUIDs, of course.

    (For example, I have seen xargs used for things similar to this, but it's not available on the machines I need this to run on, either. I could install it if it's really the only way, although I'd rather not.)
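
    If sed turns out to be more trouble than it is worth, one dependency-light alternative on a Windows dev box is a single-file Java program (assuming a JDK is installed; java.util.UUID stands in for uuidgen.exe). A sketch:

        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.UUID;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public final class ReGuid {
            public static void main(String[] args) throws Exception {
                Path file = Path.of(args[0]);
                String text = Files.readString(file, StandardCharsets.UTF_8);

                // Replace the quoted value of every Guid="..." with a fresh GUID.
                Matcher m = Pattern.compile("Guid=\"[^\"]*\"").matcher(text);
                StringBuilder out = new StringBuilder();
                while (m.find()) {
                    m.appendReplacement(out, "Guid=\"" + UUID.randomUUID() + "\"");
                }
                m.appendTail(out);

                Files.writeString(file, out.toString(), StandardCharsets.UTF_8);
            }
        }

    Run as java ReGuid thefile.xml; each occurrence gets its own new GUID because randomUUID() is evaluated once per match.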

  • customer.name joining transactions.name vs. customer.id [serial] joining transactions.id [integer]

    - by Frank Computer
    INFORMIX-SQL 7.32, pawnshop application: a one-to-many relationship where each customer (master) can have many transactions (detail).

        customer(
            id serial,
            pk_name char(30), {PATERNAL-NAME MATERNAL-NAME, FIRST-NAME MIDDLE-NAME}
            [...]
        );
        unique index on id;
        unique cluster index on name;

        transaction(
            fk_name char(30),
            ticket_number serial,
            [...]
        );
        dups cluster index on fk_name;
        unique index on ticket_number;

    Several people have told me this is not the correct way to join master to detail. They said I should always join customer.id [serial] to transactions.id [integer].

    When a customer pawns merchandise, the clerk queries the master using wildcards on name. The query usually returns several customers; the clerk scrolls until locating the right name, enters a 'D' to change to the detail transactions table, all transactions are automatically queried, then the clerk enters an 'A' to add a new transaction.

    The problem with joining customer.id to transaction.id is that although the customer table is maintained in sorted name order, clustering the transaction table by fk_id groups the transactions by fk_id, but not in the same order as the customer names. So when the clerk is scrolling through customer names in the master, the system has to jump all over the place to locate the clustered transactions belonging to each customer. As each new customer is added, the next id is assigned to that customer, but new customers don't show up in alphabetical order. I experimented with id joins and confirmed the decrease in performance.

    How can I use id joins instead of name joins and still preserve the clustered transaction order by name, if transactions has no name column?

  • When is the reintegrate option really necessary?

    - by Tor Hovland
    If you always sync a feature branch before you merge it back, why do you really have to use the --reintegrate option? The Subversion book says:

        When merging your branch back to the trunk, however, the underlying mathematics is quite different. Your feature branch is now a mishmosh of both duplicated trunk changes and private branch changes, so there's no simple contiguous range of revisions to copy over. By specifying the --reintegrate option, you're asking Subversion to carefully replicate only those changes unique to your branch. (And in fact, it does this by comparing the latest trunk tree with the latest branch tree: the resulting difference is exactly your branch changes!)

    So the --reintegrate option only merges the changes that are unique to the feature branch. But if you always sync before merge (which is a recommended practice, in order to deal with any conflicts on the feature branch), then the only changes between the branches are the changes that are unique to the feature branch, right? And if Subversion tries to merge code that is already on the target branch, it will just do nothing, right?

    In this blog post (http://blogs.open.collab.net/svn/2008/07/subversion-merg.html), Mark Phippard writes:

        If we include those synched revisions, then we merge back changes that already exist in trunk. This yields unnecessary and confusing conflicts.

    Can somebody give me an example of when dropping --reintegrate gives me unnecessary conflicts?

  • How to map combinations of things to a relational database?

    - by Space_C0wb0y
    I have a table whose records represent certain objects. For the sake of simplicity I am going to assume that the table only has one column, and that is the unique ObjectId. Now I need a way to store combinations of objects from that table. The combinations have to be unique, but can be of arbitrary length. For example, if I have the ObjectIds

        1, 2, 3, 4

    I want to store the following combinations:

        {1,2}, {1,3,4}, {2,4}, {1,2,3,4}

    The ordering is not necessary. My current implementation is to have a table Combinations that maps ObjectIds to CombinationIds, so that every combination receives a unique id:

        ObjectId | CombinationId
        ------------------------
               1 | 1
               2 | 1
               1 | 2
               3 | 2
               4 | 2

    This is the mapping for the first two combinations of the example above. The problem is that the query for finding the CombinationId of a specific combination seems to be very complex. The two main usage scenarios for this table will be to iterate over all combinations, and to retrieve a specific combination. The table will be created once and never updated. I am using SQLite through JDBC. Is there any simpler way or a best practice to implement such a mapping?
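
    One common simplification, sketched here on the assumption that a combination is a set of ints and that adding a text column is acceptable: compute a canonical key on the Java side (sorted ids joined with commas), store it in a column with a UNIQUE constraint, and look up a specific combination with a single indexed equality instead of a self-join on the mapping table.

        import java.util.Set;
        import java.util.TreeSet;
        import java.util.stream.Collectors;

        public final class CombinationKey {
            // {2, 4, 1} and {1, 2, 4} both canonicalize to "1,2,4", so a UNIQUE
            // constraint on the stored key also enforces uniqueness of combinations.
            public static String of(Set<Integer> objectIds) {
                return new TreeSet<>(objectIds).stream()
                        .map(String::valueOf)
                        .collect(Collectors.joining(","));
            }
        }

    Retrieval of a specific combination then becomes one indexed equality lookup on the key column, while iteration over all combinations can still use the mapping table.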
