Search Results

Search found 2088 results on 84 pages for 'nhibernate cascade'.

Page 79/84 | < Previous Page | 75 76 77 78 79 80 81 82 83 84  | Next Page >

  • Alternatives to the Entity Framework for Serving/Consuming an OData Interface

    - by Egahn
    I'm researching how to set up an OData interface to our database. I would like to be able to pull/query data from our DB into Excel, as a start. Eventually I would like to have Excel run queries and pull data over HTTP from a remote client, including authentication, etc. I've set up a working (rickety) prototype so far, using the ADO.NET Entity Data Model wizard in Visual Studio, and VSTO to create a test Excel worksheet with a button to pull from that ADO.NET interface. This works OK so far, and I can query the DB using Linq through the entities/objects that are created by the ADO.NET EDM wizard. However, I have started to run into some problems with this approach. I've been finding the Entity Framework difficult to work with (and in fact, also difficult to research solutions to, as there's a lot of chaff out there regarding it and older versions of it). An example of this is my being unable to figure out how to set the SQL command timeout (as opposed to the HTTP request timeout) on the DataServiceContext object that the wizard generates for my schema, but that's not the point of my question. The real question I have is, if I want to use OData as my interface standard, am I stuck with the Entity Framework? Are there any other solutions out there (preferably open source) which can set up, serve and consume an OData interface, and are easier to work with and less bloated than the Entity Framework? I have seen mention of NHibernate as an alternative, but most of the comparison threads I've seen are a few years old. Are there any other alternatives out there now? Thanks very much!
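
    A minimal sketch of the server-side timeout tweak mentioned above, assuming a WCF Data Service fronting the ObjectContext that the ADO.NET EDM wizard generates: the SQL command timeout lives on the ObjectContext, not on the client-side DataServiceContext. The MyEntities and ProductsService names are placeholders, not from the original post.

        using System.Data.Services;
        using System.Data.Services.Common;

        // MyEntities is assumed to be the wizard-generated ObjectContext.
        public class ProductsService : DataService<MyEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }

            protected override MyEntities CreateDataSource()
            {
                var context = new MyEntities();
                // CommandTimeout (seconds) governs the underlying SQL commands,
                // as opposed to the HTTP timeout exposed on DataServiceContext.
                context.CommandTimeout = 120;
                return context;
            }
        }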

    Read the article

  • DataSets to POCOs - an inquiry regarding DAL architecture

    - by alexsome
    Hello all, I have to develop a fairly large ASP.NET MVC project very quickly and I would like to get some opinions on my DAL design to make sure nothing will come back to bite me since the BL is likely to get pretty complex. A bit of background: I am working with an Oracle backend so the built-in LINQ to SQL is out; I also need to use production-level libraries so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay) so NHibernate/Castle Project are out. I would prefer - if at all possible - to avoid dishing out money but I am more concerned about implementing the right solution. To summarize, these are my requirements: Oracle backend; rapid development; (L)GPL-free; free. I'm reasonably happy with DataSets but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like to have a generic solution so I don't have to write a custom query for each class. I'm tinkering with reflection right now, but in the meantime I have two questions: Are there any problems I have overlooked with this solution? Are there any other approaches you would recommend for converting DataSets to POCOs? Thanks in advance.
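
    A rough sketch of the reflection idea being tinkered with: a single generic method that copies DataTable rows into POCOs by matching column names to writable properties. The Product type in the usage line is just an illustration; production code would want to cache the PropertyInfo lookups and handle type conversion more carefully.

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Reflection;

        public static class DataTableMapper
        {
            // Maps each DataRow to a new T by matching column names to property names.
            public static List<T> ToList<T>(DataTable table) where T : new()
            {
                var result = new List<T>();
                PropertyInfo[] props = typeof(T).GetProperties();

                foreach (DataRow row in table.Rows)
                {
                    var item = new T();
                    foreach (PropertyInfo prop in props)
                    {
                        if (!prop.CanWrite || !table.Columns.Contains(prop.Name))
                            continue;

                        object value = row[prop.Name];
                        if (value != DBNull.Value)
                            prop.SetValue(item, value, null);
                    }
                    result.Add(item);
                }
                return result;
            }
        }

        // Usage: List<Product> products = DataTableMapper.ToList<Product>(ds.Tables[0]);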

    Read the article

  • How should rules for Aggregate Roots be enforced?

    - by MylesRip
    While searching the web, I came across a list of rules from Eric Evans' book that should be enforced for aggregates: The root Entity has global identity and is ultimately responsible for checking invariants. Root Entities have global identity. Entities inside the boundary have local identity, unique only within the Aggregate. Nothing outside the Aggregate boundary can hold a reference to anything inside, except to the root Entity. The root Entity can hand references to the internal Entities to other objects, but they can only use them transiently (within a single method or block). Only Aggregate Roots can be obtained directly with database queries. Everything else must be done through traversal. Objects within the Aggregate can hold references to other Aggregate roots. A delete operation must remove everything within the Aggregate boundary all at once. When a change to any object within the Aggregate boundary is committed, all invariants of the whole Aggregate must be satisfied. This all seems fine in theory, but I don't see how these rules would be enforced in the real world. Take rule 3 for example. Once the root entity has given an external object a reference to an internal entity, what's to keep that external object from holding on to the reference beyond the single method or block? (If the enforcement of this is platform-specific, I would be interested in knowing how this would be enforced within a C#/.NET/NHibernate environment.)
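
    Rule 3 has no language-level enforcement in C#; the usual compromise is for the root to hand out read-only views or copies rather than live internal references, so outside code cannot mutate the aggregate even if it keeps what it was given. A sketch under that assumption (the Order/OrderLine names are invented for illustration and are not from the question):

        using System;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Linq;

        public class Order                          // aggregate root
        {
            private readonly List<OrderLine> _lines = new List<OrderLine>();

            // Callers only ever see a read-only wrapper, never the live list.
            public IEnumerable<OrderLine> Lines
            {
                get { return new ReadOnlyCollection<OrderLine>(_lines); }
            }

            // All mutation goes through the root, so invariants are checked in one place.
            public void AddLine(string product, int quantity)
            {
                if (quantity <= 0) throw new ArgumentException("quantity");
                _lines.Add(new OrderLine(product, quantity));
            }

            public int TotalQuantity()
            {
                return _lines.Sum(l => l.Quantity);
            }
        }

        public class OrderLine                      // internal entity, local identity only
        {
            internal OrderLine(string product, int quantity)
            {
                Product = product;
                Quantity = quantity;
            }

            public string Product { get; private set; }
            public int Quantity { get; private set; }
        }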

    Read the article

  • Question about the benefit of using an ORM

    - by johnny
    I want to use an ORM for learning purposes and am trying NHibernate. I am using the tutorial and then I have a real project. I can go the "old way" or use an ORM. I'm not sure I totally understand the benefit. On the one hand, I can create my abstractions in code such that I can change my databases and be database independent. On the other hand, it seems that if I actually change the database columns I have to change all my code. Why wouldn't I have my application without the ORM, and change my database and my code, instead of changing my database, ORM, and code? Is it that the database structure doesn't change that much? I believe there are real benefits because ORMs are used by so many. I'm just not sure I get it yet. Thank you. EDIT: In the tutorial they have many files that are used to make the ORM work: http://www.hibernate.org/362.html In the event of an application change, it seems like a lot of extra work just to say that I have "proper" abstraction layers. Because I'm new at it, it doesn't look that easy to maintain and again seems like extra work, not less.
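
    To make the abstraction argument concrete: with an ORM the column name lives in the mapping, not in the queries, so a rename touches one file. A hedged sketch in Fluent NHibernate style (the class, table and column names are made up); queries written against Product.Name, e.g. via session.CreateCriteria<Product>(), keep working after the rename.

        using FluentNHibernate.Mapping;

        public class Product
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

        public class ProductMap : ClassMap<Product>
        {
            public ProductMap()
            {
                Table("PRODUCTS");
                Id(x => x.Id).Column("PRODUCT_ID");
                // If the column is renamed in the database, only this mapping
                // changes; code that queries Product.Name is untouched.
                Map(x => x.Name).Column("PRODUCT_NAME");
            }
        }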

    Read the article

  • EF4 Code-First CTP5 Many-to-one

    - by Kevin McKelvin
    I've been trying to get EF4 CTP5 to play nice with an existing database, but struggling with some basic mapping issues. I have two model classes so far: public class Job { [Key, Column(Order = 0)] public int JobNumber { get; set; } [Key, Column(Order = 1)] public int VersionNumber { get; set; } public virtual User OwnedBy { get; set; } } and [Table("Usernames")] public class User { [Key] public string Username { get; set; } public string EmailAddress { get; set; } public bool IsAdministrator { get; set; } } And I have my DbContext class exposing those as IDbSet properties. I can query my users, but as soon as I added the OwnedBy property to the Job class I began getting this error in all my tests for the Jobs: Invalid column name 'UserUsername'. I want this to behave like NHibernate's many-to-one, whereas I think EF4 is treating it as a complex type. How should this be done?
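
    For what it's worth, the released fluent API (EF 4.1) lets you name the foreign-key column of an independent association with MapKey, which is the usual cure for the convention-based 'UserUsername' guess; CTP5's configuration syntax differed slightly, so treat this as a sketch rather than a drop-in fix, and the "OwnerUsername" column name is assumed.

        using System.Data.Entity;

        public class JobContext : DbContext
        {
            public IDbSet<Job> Jobs { get; set; }
            public IDbSet<User> Users { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Map Job.OwnedBy to the existing FK column instead of the
                // convention-based "UserUsername" that caused the error.
                modelBuilder.Entity<Job>()
                            .HasOptional(j => j.OwnedBy)
                            .WithMany()
                            .Map(m => m.MapKey("OwnerUsername"));
            }
        }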

    Read the article

  • Why does the entity framework need an ICollection for lazy loading?

    - by Akk
    I want to write a rich domain class such as public class Product { public IEnumerable<Photo> Photos {get; private set;} public void AddPhoto(){...} public void RemovePhoto(){...} } But the Entity Framework (v4 code-first approach) requires an ICollection type for lazy loading! The above code no longer works as designed since clients can bypass the AddPhoto / RemovePhoto methods and directly call the add method on ICollection. This is not good. public class Product { public ICollection<Photo> Photos {get; private set;} //Bad public void AddPhoto(){...} public void RemovePhoto(){...} } It's getting really frustrating trying to implement DDD with EF4. Why did they choose ICollection for lazy loading? How can I overcome this? Does NHibernate offer me a better DDD experience?
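
    One commonly suggested workaround, sketched here without any guarantee that it maps cleanly in EF4: keep the virtual ICollection the proxies need out of the public surface and expose only IEnumerable plus the intent-revealing methods. Mapping a non-public property requires extra Code First configuration, which is hand-waved below.

        using System.Collections.Generic;
        using System.Linq;

        public class Product
        {
            // EF's lazy-loading proxies populate this; keeping it protected
            // preserves the public API, at the cost of extra mapping work.
            protected virtual ICollection<Photo> PhotosStorage { get; set; }

            public Product()
            {
                PhotosStorage = new List<Photo>();
            }

            public IEnumerable<Photo> Photos
            {
                get { return PhotosStorage.ToList(); }   // defensive copy
            }

            public void AddPhoto(Photo photo)
            {
                // domain rules go here
                PhotosStorage.Add(photo);
            }

            public void RemovePhoto(Photo photo)
            {
                PhotosStorage.Remove(photo);
            }
        }

        public class Photo
        {
            public int Id { get; set; }
            public string Path { get; set; }
        }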

    Read the article

  • ASP.NET MVC controller inheritance

    - by Ris90
    Hi, I'm studying ASP.NET MVC and in my test project I have some problems with inheritance. In my model I use inheritance in a few entities: public class Employee:Entity { /* few public properties */ } It is the base class. And the descendants: public class RecruitmentOfficeEmployee: Employee { public virtual RecruitmentOffice AssignedOnRecruitmentOffice { get; set; } } public class ResearchInstituteEmployee: Employee { public virtual ResearchInstitute AssignedOnResearchInstitute { get; set; } } I want to implement simple CRUD operations for every descendant. What is the best way to implement controllers and views for the descendants: one controller per descendant; controller inheritance; a generic controller; or generic methods in one controller? Or maybe there is another way? My ORM is NHibernate; I have a generic base repository and every repository is its descendant. Using a generic controller, I think, is the best way, but with it I will use only the generic base repository and the extensibility of the system will not be very good. Please help the newbie!
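
    A minimal sketch of the generic-controller option, assuming a generic repository interface along the lines of IRepository<T> (that interface and its methods are invented for illustration). Each descendant still gets a thin concrete controller, so descendant-specific actions and repositories can be added without touching the base.

        using System.Web.Mvc;

        public abstract class EmployeeControllerBase<TEmployee> : Controller
            where TEmployee : Employee
        {
            private readonly IRepository<TEmployee> _repository;

            protected EmployeeControllerBase(IRepository<TEmployee> repository)
            {
                _repository = repository;
            }

            public ActionResult Index()
            {
                return View(_repository.GetAll());
            }

            public ActionResult Details(int id)
            {
                return View(_repository.GetById(id));
            }

            // Edit/Create/Delete follow the same pattern...
        }

        // Concrete controllers stay tiny but remain a place for specific actions.
        public class RecruitmentOfficeEmployeeController
            : EmployeeControllerBase<RecruitmentOfficeEmployee>
        {
            public RecruitmentOfficeEmployeeController(
                IRepository<RecruitmentOfficeEmployee> repository)
                : base(repository) { }
        }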

    Read the article

  • Manual Linq to SQL entity framework mapping

    - by kprobst
    I've been playing with the O/R designer in VS and I was wondering if someone could shed come light on this. I'm used to OR mappers that are largely manual (homegrown and e.g., NHibernate). I don't mind encoding the entity classes myself, since they don't change all that often to begin with, and I have this irrational fear of designers and auto generated code as it is. I have noticed that the generated entity classes contain a lot of boilerplate extensibility methods, e.g. On[Property]Changed() and so on where [Property] is a mapped member of the class. These are placed in the setters of the property accessors. I assume it's OK if I don't include these when I do my hand coding, correct? They would be nice if I needed some sort of interception pattern but that's certainly not the case. I guess I just need to know if any of those methods are required by the entity framework to keep track of changes to the mapping types in order for things to work when updating the database. Thanks!
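
    For reference, a hand-coded LINQ to SQL entity only needs the mapping attributes; the generated On...Changed partial methods and the INotifyPropertyChanging/INotifyPropertyChanged plumbing are change-tracking optimizations, not requirements - without them LINQ to SQL falls back to snapshot comparison and still produces correct updates. A stripped-down sketch with invented table and column names:

        using System.Data.Linq;
        using System.Data.Linq.Mapping;

        [Table(Name = "dbo.Customers")]
        public class Customer
        {
            [Column(IsPrimaryKey = true, IsDbGenerated = true)]
            public int Id { get; set; }

            [Column]
            public string Name { get; set; }

            [Column(Name = "EmailAddress")]
            public string Email { get; set; }
        }

        // Usage sketch:
        // using (var db = new DataContext(connectionString))
        // {
        //     var customer = db.GetTable<Customer>().First(c => c.Id == 1);
        //     customer.Name = "Renamed";
        //     db.SubmitChanges();   // snapshot-based change tracking picks this up
        // }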

    Read the article

  • If I cast an IQueryable as an IEnumerable then call a Linq extension method, which implementation gets called?

    - by James Morcom
    Considering the following code: IQueryable<T> queryable; // something to instantiate queryable var enumerable = (IEnumerable<T>) queryable; var filtered = enumerable.Where(i => i > 3); In the final line, which extension method gets called? Is it IEnumerable<T>.Where(...)? Or will IQueryable<T>.Where(...) be called because the actual implementation is still obviously a queryable? Presumably the ideal would be for the IQueryable version to be called, in the same way that normal polymorphism will always use the more specific override. In Visual Studio though when I right-click on the Where method and "Go to Definition" I'm taken to the IEnumerable version, which kind of makes sense from a visual point-of-view. My main concern is that if somewhere in my app I use Linq to NHibernate to get a Queryable, but I pass it around using an interface that uses the more general IEnumerable signature, I'll lose the wonders of deferred database execution!
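
    A short illustration of the point: extension-method binding is decided by the compile-time type, so the IEnumerable-typed variable picks up Enumerable.Where (in-memory LINQ to Objects) even though the object is still an IQueryable underneath; AsQueryable() recovers the provider-backed path. Plain in-memory sketch, no particular LINQ provider assumed.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class WhereBindingDemo
        {
            static void Main()
            {
                IQueryable<int> queryable = new[] { 1, 2, 3, 4, 5 }.AsQueryable();

                // Static type is IEnumerable<int>, so this binds to Enumerable.Where:
                // the predicate runs in memory, and a real provider (e.g. LINQ to
                // NHibernate) would lose deferred translation at this point.
                IEnumerable<int> enumerable = queryable;
                IEnumerable<int> inMemory = enumerable.Where(i => i > 3);

                // Keeping (or recovering) the IQueryable type binds to Queryable.Where,
                // which composes an expression tree for the provider instead.
                IQueryable<int> composable = queryable.Where(i => i > 3);
                IQueryable<int> recovered = enumerable.AsQueryable().Where(i => i > 3);

                Console.WriteLine(string.Join(",", inMemory));    // 4,5
                Console.WriteLine(string.Join(",", composable));  // 4,5
                Console.WriteLine(string.Join(",", recovered));   // 4,5
            }
        }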

    Read the article

  • Mixing stored procedures and ORM

    - by Jason
    The company I work for develops a large application which is almost entirely based on stored procedures. We use classic ASP and SQL Server and the major part of the business logic is contained inside those stored procedures. For example, (I know, this is bad...) a single stored procedure can be used for different purposes (insert, update, delete, make some calculations, ...). Most of the time, a stored procedure is used for operations on related tables, but this is not always the case. We are planning to move to ASP.NET in a near future. I have read a lot of posts on StackOverflow recommending that I move the business logic outside the database. The thing is, I have tried to convince the people who takes the decisions at our company and there is nothing I can do to change their mind. Since I want to be able to use the advantages of object-oriented programming, I want to map the tables to actual classes. So far, my solution is to use an ORM (Entity Framework 4 or nHibernate) to avoid mapping the objects manually (mostly to retrieve the data) and use some kind of Data Access Layer to call the existing stored procedures (for saving). I want your advice on this. Do you think it is a good solution? Any ideas?
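
    A sketch of the hybrid approach being described, assuming EF4: let the ORM materialize objects for reads while the existing stored procedures keep owning the writes. MyEntities, Orders and the procedure name are placeholders, not from the original post.

        using System.Data.SqlClient;
        using System.Linq;

        public class OrderRepository
        {
            private readonly MyEntities _context;   // assumed wizard-generated ObjectContext

            public OrderRepository(MyEntities context)
            {
                _context = context;
            }

            // Reads go through the ORM and come back as mapped objects.
            public Order GetOrder(int id)
            {
                return _context.Orders.Single(o => o.Id == id);
            }

            // Writes keep calling the existing stored procedure.
            public void SaveOrder(Order order)
            {
                _context.ExecuteStoreCommand(
                    "EXEC dbo.usp_SaveOrder @Id, @Total",
                    new SqlParameter("@Id", order.Id),
                    new SqlParameter("@Total", order.Total));
            }
        }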

    Read the article

  • Database (and ORM) choice for an small-medium size .NET Application

    - by jim
    I have a requirement to develop a .NET-based application whose data requirements are likely to exceed the 4 GB limit of SQL Server 2005 Express Edition. There may be other customers of the same application (in the future) with a requirement to use a specific DB platform (such as Oracle or SQL Server) due to in-house DBA expertise. Questions: What RDBMS would you recommend? From the looks of it, the major choices are PostgreSQL, MySQL or Firebird; of these, I've only got experience with MySQL. Which ORM tool (if any) would you recommend using - ideally one that can be swapped out between DB platforms with minimal effort? I like the look of the Entity Framework but am unsure as to the degree to which platforms other than SQL Server are supported. If it helps, we'll be using version 3.5 of the Framework. I'm open to the idea of using a tool such as NHibernate. On the other hand, if it's going to be easier, I'm happy to write my own stored procedures / DAL code - there won't be that many tables (perhaps 30-35).

    Read the article

  • Java 7 upgrade and Hibernate annotation processor error

    - by Bill Turner
    I am getting the following warning, which seems to be triggering a subsequent warning and an error. I have been googling like mad, though have not found anything that makes it clear what it is I should do to resolve this. This issue occurs when I execute an Ant build. I am trying to migrate our project to Java 7. I have changed all the source='1.6' and target="1.6" to 1.7. I did find this related article: Forward compatible Java 6 annotation processor and SupportedSourceVersion It seems to indicate that I should build the Hibernate annotation processor jar myself, compiling it with with 1.7. It does not seem I should be required to do so. The latest version of the class in question (in hibernate-validator-annotation-processor-5.0.1.Final.jar) has been compiled with 1.6. Since the code in said class refers to SourceVersion.latestSupported(), and the 1.6 of that returns only RELEASE_6, there does not seem to be a generally available solution. Here is the warning: [javac] warning: Supported source version 'RELEASE_6' from annotation processor 'org.hibernate.validator.ap.ConstraintValidationProcessor' less than -source '1.7' And, here are the subsequent warnings/error. [javac] warning: No processor claimed any of these annotations: javax.persistence.PersistenceContext,javax.persistence.Column,org.codehaus.jackson.annotate.JsonIgnore,javax.persistence.Id,org.springframework.context.annotation.DependsOn,com.trgr.cobalt.infrastructure.datasource.Bucketed,org.codehaus.jackson.map.annotate.JsonDeserialize,javax.persistence.DiscriminatorColumn,com.trgr.cobalt.dataroom.authorization.secure.Secured,org.hibernate.annotations.GenericGenerator,javax.annotation.Resource,com.trgr.cobalt.infrastructure.spring.domain.DomainField,org.codehaus.jackson.annotate.JsonAutoDetect,javax.persistence.DiscriminatorValue,com.trgr.cobalt.dataroom.datasource.config.core.CoreTransactionMandatory,org.springframework.stereotype.Repository,javax.persistence.GeneratedValue,com.trgr.cobalt.dataroom.datasource.config.core.CoreTransactional,org.hibernate.annotations.Cascade,javax.persistence.Table,javax.persistence.Enumerated,org.hibernate.annotations.FilterDef,javax.persistence.OneToOne,com.trgr.cobalt.dataroom.datasource.config.core.CoreEntity,org.springframework.transaction.annotation.Transactional,com.trgr.cobalt.infrastructure.util.enums.EnumConversion,org.springframework.context.annotation.Configuration,com.trgr.cobalt.infrastructure.spring.domain.UpdatedFields,com.trgr.cobalt.infrastructure.spring.documentation.SampleValue,org.springframework.context.annotation.Bean,org.codehaus.jackson.annotate.JsonProperty,javax.persistence.Basic,org.codehaus.jackson.map.annotate.JsonSerialize,com.trgr.cobalt.infrastructure.spring.validation.Required,com.trgr.cobalt.dataroom.datasource.config.core.CoreTransactionNever,org.springframework.context.annotation.Profile,com.trgr.cobalt.infrastructure.spring.stereotype.Persistor,javax.persistence.Transient,com.trgr.cobalt.infrastructure.spring.validation.NotNull,javax.validation.constraints.Size,javax.persistence.Entity,javax.persistence.PrimaryKeyJoinColumn,org.hibernate.annotations.BatchSize,org.springframework.stereotype.Service,org.springframework.beans.factory.annotation.Value,javax.persistence.Inheritance [javac] error: warnings found and -Werror specified TIA!

    Read the article

  • Getting a problem while capturing an image through a webcam in Java OpenCV code

    - by Chetan
    Hi.. I am developing Face detector using OpenCV library in java... I have written sample code for this,but it is giving Error while capturing image. can any one help please. Here is the code import java.awt.; import java.awt.event.; import java.awt.image.MemoryImageSource; import hypermedia.video.OpenCV; @SuppressWarnings("serial") public class FaceDetection extends Frame implements Runnable { ///Main method. public static void main(String[] args) { //System.out.println( "\nOpenCV face detection sample\n" ); new FaceDetection(); } // program execution frame rate (millisecond) final int FRAME_RATE = 1000/30; OpenCV cv = null; // OpenCV Object Thread t = null; // the sample thread // the input video stream image Image frame = null; // list of all face detected area Rectangle[] squares = new Rectangle[0]; //** Setup Frame and Object(s). FaceDetection() { super( "Face Detection" ); // OpenCV setup cv = new OpenCV(); //cv.capture(1, 1, 100); cv.capture( 320, 240 ); cv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT ); // frame setup this.setBounds( 100, 100, cv.width, cv.height ); this.setBackground( Color.BLACK ); this.setVisible( true ); this.addKeyListener( new KeyAdapter() { public void keyReleased( KeyEvent e ) { if ( e.getKeyCode()==KeyEvent.VK_ESCAPE ) { // ESC : release OpenCV resources cv.dispose(); System.exit(0); } } } ); // start running program t = new Thread( this ); t.start(); } // Draw video frame and each detected faces area. public void paint( Graphics g ) { // draw image g.drawImage( frame, 0, 0, null ); // draw squares g.setColor( Color.RED ); for( Rectangle rect : squares ) g.drawRect( rect.x, rect.y, rect.width, rect.height ); } @SuppressWarnings("static-access") public void run() { while( t!=null && cv!=null ) { try { t.sleep( FRAME_RATE ); // grab image from video stream cv.read(); // create a new image from cv pixels data MemoryImageSource mis = new MemoryImageSource( cv.width, cv.height, cv.pixels(), 0, cv.width ); frame = createImage( mis ); // detect faces squares = cv.detect( 1.2f, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 20, 20 ); // of course, repaint repaint(); } catch( InterruptedException e ) {;} } } } Error while starting capture : device 0

    Read the article

  • ASP.NET MVC 2: Updating a Linq-To-Sql Entity with an EntitySet

    - by Simon
    I have a Linq to Sql Entity which has an EntitySet. In my View I display the Entity with it's properties plus an editable list for the child entites. The user can dynamically add and delete those child entities. The DefaultModelBinder works fine so far, it correctly binds the child entites. Now my problem is that I just can't get Linq To Sql to delete the deleted child entities, it will happily add new ones but not delete the deleted ones. I have enabled cascade deleting in the foreign key relationship, and the Linq To Sql designer added the "DeleteOnNull=true" attribute to the foreign key relationships. If I manually delete a child entity like this: myObject.Childs.Remove(child); context.SubmitChanges(); This will delete the child record from the DB. But I can't get it to work for a model binded object. I tried the following: // this does nothing public ActionResult Update(int id, MyObject obj) // obj now has 4 child entities { var obj2 = _repository.GetObj(id); // obj2 has 6 child entities if(TryUpdateModel(obj2)) //it sucessfully updates obj2 and its childs { _repository.SubmitChanges(); // nothing happens, records stay in DB } else ..... return RedirectToAction("List"); } and this throws an InvalidOperationException, I have a german OS so I'm not exactly sure what the error message is in english, but it says something along the lines of that the entity needs a Version (Timestamp row?) or no update check policies. I have set UpdateCheck="Never" to every column except the primary key column. public ActionResult Update(MyObject obj) { _repository.MyObjectTable.Attach(obj, true); _repository.SubmitChanges(); // never gets here, exception at attach } I've read alot about similar "problems" with Linq To Sql, but it seems most of those "problems" are actually by design. So am I right in my assumption that this doesn't work like I expect it to work? Do I really have to manually iterate through the child entities and delete, update and insert them manually? For such a simple object this may work, but I plan to create more complex objects with nested EntitySets and so on. This is just a test to see what works and what not. So far I'm disappointed with Linq To Sql (maybe I just don't get it). Would be the Entity Framework or NHibernate a better choice for this scenario? Or would I run into the same problem?
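
    This behaviour is by design: TryUpdateModel copies posted values onto the loaded entity but never deletes child rows that are missing from the posted graph, so the reconciliation has to be done by hand before SubmitChanges. A sketch against the action from the question (it sits inside the same controller and needs System.Linq), assuming the child type exposes an Id property - that detail is invented:

        public ActionResult Update(int id, MyObject posted)
        {
            var current = _repository.GetObj(id);

            if (!TryUpdateModel(current))
                return View(posted);

            // Remove children the user deleted in the form. With DeleteOnNull="true"
            // on the association, Remove() becomes a DELETE at SubmitChanges time.
            var postedIds = posted.Childs.Select(c => c.Id).ToList();
            foreach (var child in current.Childs.Where(c => !postedIds.Contains(c.Id)).ToList())
            {
                current.Childs.Remove(child);
            }

            _repository.SubmitChanges();
            return RedirectToAction("List");
        }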

    Read the article

  • JPA @OneToMany and composite PK

    - by Fleuri F
    Good Morning, I am working on project using JPA. I need to use a @OneToMany mapping on a class that has three primary keys. You can find the errors and the classes after this. If anyone has an idea! Thanks in advance! FF javax.persistence.PersistenceException: No Persistence provider for EntityManager named JTA_pacePersistence: Provider named oracle.toplink.essentials.PersistenceProvider threw unexpected exception at create EntityManagerFactory: javax.persistence.PersistenceException javax.persistence.PersistenceException: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException Exception Description: predeploy for PersistenceUnit [JTA_pacePersistence] failed. Internal Exception: Exception [TOPLINK-7220] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: The @JoinColumns on the annotated element [private java.util.Set isd.pacepersistence.common.Action.permissions] from the entity class [class isd.pacepersistence.common.Action] is incomplete. When the source entity class uses a composite primary key, a @JoinColumn must be specified for each join column using the @JoinColumns. Both the name and the referenceColumnName elements must be specified in each such @JoinColumn. at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:643) at oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider.createEntityManagerFactory(EntityManagerFactoryProvider.java:196) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:110) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83) at isd.pacepersistence.common.DataMapper.(Unknown Source) at isd.pacepersistence.server.MainServlet.getDebugCase(Unknown Source) at isd.pacepersistence.server.MainServlet.doGet(Unknown Source) at javax.servlet.http.HttpServlet.service(HttpServlet.java:718) at javax.servlet.http.HttpServlet.service(HttpServlet.java:831) at org.apache.catalina.core.ApplicationFilterChain.servletService(ApplicationFilterChain.java:411) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:290) at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:271) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:202) There is the source code of my classes : Action : @Entity @Table(name="action") public class Action { @Id @GeneratedValue(strategy=GenerationType.IDENTITY) private int num; @ManyToOne(cascade= { CascadeType.PERSIST, CascadeType.MERGE, CascadeType.REFRESH }) @JoinColumn(name="domain_num") private Domain domain; private String name; private String description; @OneToMany @JoinTable(name="permission", joinColumns= { @JoinColumn(name="action_num", referencedColumnName="action_num", nullable=false, updatable=false) }, inverseJoinColumns= { @JoinColumn(name="num") }) private Set<Permission> permissions; public Action() { } Permission : @SuppressWarnings("serial") @Entity @Table(name="permission") public class Permission implements Serializable { @EmbeddedId private PermissionPK primaryKey; @ManyToOne @JoinColumn(name="action_num", insertable=false, updatable=false) private Action action; @ManyToOne @JoinColumn(name="entity_num", insertable=false, updatable=false) private isd.pacepersistence.common.Entity entity; @ManyToOne @JoinColumn(name="class_num", 
insertable=false, updatable=false) private Clazz clazz; private String kondition; public Permission() { } PermissionPK : @SuppressWarnings("serial") @Entity @Table(name="permission") public class Permission implements Serializable { @EmbeddedId private PermissionPK primaryKey; @ManyToOne @JoinColumn(name="action_num", insertable=false, updatable=false) private Action action; @ManyToOne @JoinColumn(name="entity_num", insertable=false, updatable=false) private isd.pacepersistence.common.Entity entity; @ManyToOne @JoinColumn(name="class_num", insertable=false, updatable=false) private Clazz clazz; private String kondition; public Permission() { }

    Read the article

  • EclipseLink read complex object model in an ordered way

    - by Raven
    Hi, I need to read a complex model in an ordered way with eclipselink. The order is mandantory because it is a huge database and I want to have an output of a small portion of the database in a jface tableview. Trying to reorder it in the loading/quering thread takes too long and ordering it in the LabelProvider blocks the UI thread too much time, so I thought if Eclipselink could be used that way, that the database will order it, it might give me the performance I need. Unfortunately the object model can not be changed :-( The model is something like: @SuppressWarnings("serial") @Entity public class Thing implements Serializable { @Id @GeneratedValue(strategy = GenerationType.TABLE) private int id; private String name; @OneToMany(cascade=CascadeType.ALL) @PrivateOwned private List<Property> properties = new ArrayList<Property>(); ... // getter and setter following here } public class Property implements Serializable { @Id @GeneratedValue(strategy = GenerationType.TABLE) private int id; @OneToOne private Item item; private String value; ... // getter and setter following here } public class Item implements Serializable { @Id @GeneratedValue(strategy = GenerationType.TABLE) private int id; private String name; .... // getter and setter following here } // Code end In the table view the y-axis is more or less created with the query Query q = em.createQuery("SELECT m FROM Thing m ORDER BY m.name ASC"); using the "name" attribute from the Thing objects as label. In the table view the x-axis is more or less created with the query Query q = em.createQuery("SELECT m FROM Item m ORDER BY m.name ASC"); using the "name" attribute from the Item objects as label. Each cell has the value Things.getProperties().get[x].getValue() Unfortunately the list "properties" is not ordered, so the combination of cell value and x-axis column number (x) is not necessarily correct. Therefore I need to order the list "properties" in the same way as I ordered the labeling of the x-axis. And exactly this is the thing I dont know how it is done. So querying for the Thing objects should return the list "properties" "ORDER BY name ASC" but of the "Item"s objects. My ideas are something like having a query with two JOINs. Joing Things with Property and with Item but somehow I was unable to get it to work yet. Thank you for your help and your ideas to solve this riddle.

    Read the article

  • JPA @ManyToOne and composite PK

    - by Fleuri F
    Good Morning, I am working on project using JPA. I need to use a @ManyToOne mapping on a class that has three primary keys. You can find the errors and the classes after this. If anyone has an idea! Thanks in advance! FF javax.persistence.PersistenceException: No Persistence provider for EntityManager named JTA_pacePersistence: Provider named oracle.toplink.essentials.PersistenceProvider threw unexpected exception at create EntityManagerFactory: javax.persistence.PersistenceException javax.persistence.PersistenceException: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException Exception Description: predeploy for PersistenceUnit [JTA_pacePersistence] failed. Internal Exception: Exception [TOPLINK-7220] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: The @JoinColumns on the annotated element [private java.util.Set isd.pacepersistence.common.Action.permissions] from the entity class [class isd.pacepersistence.common.Action] is incomplete. When the source entity class uses a composite primary key, a @JoinColumn must be specified for each join column using the @JoinColumns. Both the name and the referenceColumnName elements must be specified in each such @JoinColumn. at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:643) at oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider.createEntityManagerFactory(EntityManagerFactoryProvider.java:196) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:110) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83) at isd.pacepersistence.common.DataMapper.(Unknown Source) at isd.pacepersistence.server.MainServlet.getDebugCase(Unknown Source) at isd.pacepersistence.server.MainServlet.doGet(Unknown Source) at javax.servlet.http.HttpServlet.service(HttpServlet.java:718) at javax.servlet.http.HttpServlet.service(HttpServlet.java:831) at org.apache.catalina.core.ApplicationFilterChain.servletService(ApplicationFilterChain.java:411) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:290) at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:271) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:202) There is the source code of my classes : Action : @Entity @Table(name="action") public class Action { @Id @GeneratedValue(strategy=GenerationType.IDENTITY) private int num; @ManyToOne(cascade= { CascadeType.PERSIST, CascadeType.MERGE, CascadeType.REFRESH }) @JoinColumn(name="domain_num") private Domain domain; private String name; private String description; @OneToMany @JoinTable(name="permission", joinColumns= { @JoinColumn(name="action_num", referencedColumnName="action_num", nullable=false, updatable=false) }, inverseJoinColumns= { @JoinColumn(name="num") }) private Set<Permission> permissions; public Action() { } Permission : @SuppressWarnings("serial") @Entity @Table(name="permission") public class Permission implements Serializable { @EmbeddedId private PermissionPK primaryKey; @ManyToOne @JoinColumn(name="action_num", insertable=false, updatable=false) private Action action; @ManyToOne @JoinColumn(name="entity_num", insertable=false, updatable=false) private isd.pacepersistence.common.Entity entity; @ManyToOne @JoinColumn(name="class_num", 
insertable=false, updatable=false) private Clazz clazz; private String kondition; public Permission() { } PermissionPK : @SuppressWarnings("serial") @Entity @Table(name="permission") public class Permission implements Serializable { @EmbeddedId private PermissionPK primaryKey; @ManyToOne @JoinColumn(name="action_num", insertable=false, updatable=false) private Action action; @ManyToOne @JoinColumn(name="entity_num", insertable=false, updatable=false) private isd.pacepersistence.common.Entity entity; @ManyToOne @JoinColumn(name="class_num", insertable=false, updatable=false) private Clazz clazz; private String kondition; public Permission() { }

    Read the article

  • EclipseLink read complex object model in an ordered way

    - by Raven
    Hi, I need to read a complex model in an ordered way with eclipselink. The order is mandantory because it is a huge database and I want to have an output of a small portion of the database in a jface tableview. Trying to reorder it in the loading/quering thread takes too long and ordering it in the LabelProvider blocks the UI thread too much time, so I thought if Eclipselink could be used that way, that the database will order it, it might give me the performance I need. Unfortunately the object model can not be changed :-( The model is something like: @SuppressWarnings("serial") @Entity public class Thing implements Serializable { @Id @GeneratedValue(strategy = GenerationType.TABLE) private int id; private String name; @OneToMany(cascade=CascadeType.ALL) @PrivateOwned private List<Property> properties = new ArrayList<Property>(); ... // getter and setter following here } public class Property implements Serializable { @Id @GeneratedValue(strategy = GenerationType.TABLE) private int id; @OneToOne private Item item; private String value; ... // getter and setter following here } public class Item implements Serializable { @Id @GeneratedValue(strategy = GenerationType.TABLE) private int id; private String name; .... // getter and setter following here } // Code end In the table view the y-axis is more or less created with the query Query q = em.createQuery("SELECT m FROM Thing m ORDER BY m.name ASC"); using the "name" attribute from the Thing objects as label. In the table view the x-axis is more or less created with the query Query q = em.createQuery("SELECT m FROM Item m ORDER BY m.name ASC"); using the "name" attribute from the Item objects as label. Each cell has the value Things.getProperties().get[x].getValue() Unfortunately the list "properties" is not ordered, so the combination of cell value and x-axis column number (x) is not necessarily correct. Therefore I need to order the list "properties" in the same way as I ordered the labeling of the x-axis. And exactly this is the thing I dont know how it is done. So querying for the Thing objects should return the list "properties" "ORDER BY name ASC" but of the "Item"s objects. My ideas are something like having a query with two JOINs. Joing Things with Property and with Item but somehow I was unable to get it to work yet. Thank you for your help and your ideas to solve this riddle.

    Read the article

  • ON DELETE RESTRICT Causing Error 150

    - by Levi Hackwith
    CREATE TABLE project ( id INTEGER NOT NULL AUTO_INCREMENT, created_at DATETIME NOT NULL, name VARCHAR(75) NOT NULL, description LONGTEXT NOT NULL, is_active TINYINT NOT NULL DEFAULT '1', PRIMARY KEY (id), INDEX(name, created_at) ) ENGINE = INNODB; CREATE TABLE role ( id INTEGER NOT NULL, name VARCHAR(50) NOT NULL, description LONGTEXT NOT NULL, PRIMARY KEY (id) ) ENGINE = INNODB; CREATE TABLE organization ( id INTEGER NOT NULL AUTO_INCREMENT, created_at DATETIME NOT NULL, name VARCHAR(100) NOT NULL, is_active TINYINT NOT NULL DEFAULT '1', PRIMARY KEY (id) ) ENGINE = INNODB; CREATE TABLE user ( id INTEGER NOT NULL AUTO_INCREMENT, created_at DATETIME NOT NULL, role_id INTEGER NOT NULL, organization_id INTEGER NOT NULL, last_login_at DATETIME NOT NULL, last_ip_address VARCHAR(25) NOT NULL, username VARCHAR(45) NOT NULL, password CHAR(32) NOT NULL, email_address VARCHAR(255) NOT NULL, first_name VARCHAR(45) NOT NULL, last_name VARCHAR(45) NOT NULL, address_1 VARCHAR(100) NOT NULL, address_2 VARCHAR(25) NULL, city VARCHAR(25) NOT NULL, state CHAR(2) NOT NULL, zip_code VARCHAR(10) NOT NULL, primary_phone_number VARCHAR(10) NOT NULL, secondary_phone_number VARCHAR(10) NOT NULL, is_primary_organization_contact TINYINT NOT NULL DEFAULT '0', is_active TINYINT NOT NULL DEFAULT '1', PRIMARY KEY (id), CONSTRAINT fk_user_role_id FOREIGN KEY (role_id) REFERENCES role (id) ON UPDATE RESTRICT ON DELETE RESTRICT, CONSTRAINT fk_user_organization_id FOREIGN KEY (organization_id) REFERENCES organization (id) ON UPDATE RESTRICT ON DELETE RESTRICT ) ENGINE = INNODB; CREATE TABLE project_user ( user_id INTEGER NOT NULL, project_id INTEGER NOT NULL, PRIMARY KEY (user_id, project_id), CONSTRAINT fk_project_user_user_id FOREIGN KEY (user_id) REFERENCES user (id) ON UPDATE RESTRICT ON DELETE CASCADE, CONSTRAINT fk_project_user_project_id FOREIGN KEY (project_id) REFERENCES project (id) ON UPDATE RESTRICT ON DELETE RESTRICT ) ENGINE = INNODB; CREATE TABLE ticket_category ( id INTEGER NOT NULL AUTO_INCREMENT, name VARCHAR(20) NOT NULL, description LONGTEXT NOT NULL, PRIMARY KEY (id) ) ENGINE = INNODB; CREATE TABLE ticket_type ( id INTEGER NOT NULL AUTO_INCREMENT, name VARCHAR(20) NOT NULL, description LONGTEXT NOT NULL, PRIMARY KEY (id) ) ENGINE = INNODB; CREATE TABLE ticket_status ( id INTEGER NOT NULL AUTO_INCREMENT, name VARCHAR(20) NOT NULL, description LONGTEXT NOT NULL, PRIMARY KEY (id) ) ENGINE = INNODB; CREATE TABLE ticket ( id INTEGER NOT NULL AUTO_INCREMENT, created_at DATETIME NOT NULL, project_id INTEGER NOT NULL, created_by INTEGER NOT NULL, submitted_by INTEGER NOT NULL, assigned_to INTEGER NULL, category_id INTEGER NOT NULL, type_id INTEGER NOT NULL, title VARCHAR(75) NOT NULL, description LONGTEXT NOT NULL, contact_type_id TINYINT NOT NULL, affects_all_clients TINYINT NOT NULL DEFAULT '0', is_billable TINYINT NOT NULL DEFAULT '1', esimated_hours DECIMAL(4, 1) NOT NULL DEFAULT '0', hours_worked DECIMAL (4, 1) NOT NULL DEFAULT '0', status_id TINYINT NOT NULL, PRIMARY KEY (id), CONSTRAINT fk_ticket_project_id FOREIGN KEY (project_id) REFERENCES project (id) ON UPDATE RESTRICT ON DELETE RESTRICT, CONSTRAINT fk_ticket_created_by FOREIGN KEY (created_by) REFERENCES user (id) ON UPDATE RESTRICT ON DELETE RESTRICT, CONSTRAINT fk_ticket_submitted_by FOREIGN KEY (submitted_by) REFERENCES user (id) ON UPDATE RESTRICT ON DELETE RESTRICT, CONSTRAINT fk_ticket_assigned_to FOREIGN KEY (assigned_to) REFERENCES user (id) ON UPDATE RESTRICT ON DELETE RESTRICT, CONSTRAINT fk_ticket_category_id FOREIGN KEY 
(category_id) REFERENCES ticket_category (id) ON UPDATE RESTRICT ON DELETE RESTRICT, CONSTRAINT fk_ticket_type_id FOREIGN KEY (type_id) REFERENCES ticket_type (id) ON UPDATE RESTRICT ON DELETE RESTRICT, CONSTRAINT fk_ticket_status_id FOREIGN KEY (status_id) REFERENCES ticket_status (id) ON UPDATE RESTRICT ON DELETE RESTRICT ) ENGINE = INNODB; CREATE TABLE ticket_time_entry ( id INTEGER NOT NULL AUTO_INCREMENT, user_id INTEGER NOT NULL, ticket_id INTEGER NOT NULL, started_at DATETIME NOT NULL, ended_at DATETIME NOT NULL, PRIMARY KEY (id), CONSTRAINT fk_ticket_time_entry_user_id FOREIGN KEY (user_id) REFERENCES user (id) ON UPDATE RESTRICT ON DELETE RESTRICT, CONSTRAINT fk_ticket_time_entry_ticket_id FOREIGN KEY (ticket_id) REFERENCES ticket (id) ON UPDATE RESTRICT ON DELETE RESTRICT ) ENGINE = INNODB; The ticket table's create statement causes an error 150. I have no clue why. When I remove the ON DELETE RESTRICT statements from the table declaration, it works. Why is that?

    Read the article

  • Getting a problem in Java OpenCV code

    - by Chetan
    I had successfully compile my java code in Eclipse with name FaceDetection.java... I am getting an Exception in thread "main" java.lang.NoSuchMethodError: main.... Please help me to remove this Exception.. Here is the code import java.awt.; import java.awt.event.; import java.awt.image.MemoryImageSource; import hypermedia.video.OpenCV; @SuppressWarnings("serial") public class FaceDetection extends Frame implements Runnable { /** * Main method. * @param String[] a list of user's arguments passed from the console to this program */ public static void main( String[] args ) { System.out.println( "\nOpenCV face detection sample\n" ); new FaceDetection(); } // program execution frame rate (millisecond) final int FRAME_RATE = 1000/30; OpenCV cv = null; // OpenCV Object Thread t = null; // the sample thread // the input video stream image Image frame = null; // list of all face detected area Rectangle[] squares = new Rectangle[0]; /** * Setup Frame and Object(s). */ FaceDetection() { super( "Face Detection Sample" ); // OpenCV setup cv = new OpenCV(); cv.capture( 320, 240 ); cv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT ); // frame setup this.setBounds( 100, 100, cv.width, cv.height ); this.setBackground( Color.BLACK ); this.setVisible( true ); this.addKeyListener( new KeyAdapter() { public void keyReleased( KeyEvent e ) { if ( e.getKeyCode()==KeyEvent.VK_ESCAPE ) { // ESC : release OpenCV resources cv.dispose(); System.exit(0); } } } ); // start running program t = new Thread( this ); t.start(); } /** * Draw video frame and each detected faces area. */ public void paint( Graphics g ) { // draw image g.drawImage( frame, 0, 0, null ); // draw squares g.setColor( Color.RED ); for( Rectangle rect : squares ) g.drawRect( rect.x, rect.y, rect.width, rect.height ); } /** * Execute this sample. */ @SuppressWarnings("static-access") public void run() { while( t!=null && cv!=null ) { try { t.sleep( FRAME_RATE ); // grab image from video stream cv.read(); // create a new image from cv pixels data MemoryImageSource mis = new MemoryImageSource( cv.width, cv.height, cv.pixels(), 0, cv.width ); frame = createImage( mis ); // detect faces squares = cv.detect( 1.2f, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 20, 20 ); // of course, repaint repaint(); } catch( InterruptedException e ) {;} } } }

    Read the article

  • OneToOne JPA / Hibernate eager loading causes N+1 selects

    - by Alexandre Lavoie
    I created a method to have multilingual text on different objects without creating field for each languages or tables for each objects types. Now the only problem I've got is N+1 select queries when doing a simple loading. Tables schema : CREATE TABLE `testentities` ( `keyTestEntity` int(11) NOT NULL, `keyMultilingualText` int(11) NOT NULL, PRIMARY KEY (`keyTestEntity`) ) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=utf8; CREATE TABLE `common_multilingualtexts` ( `keyMultilingualText` int(11) NOT NULL auto_increment, PRIMARY KEY (`keyMultilingualText`) ) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=utf8; CREATE TABLE `common_multilingualtexts_values` ( `languageCode` varchar(5) NOT NULL, `keyMultilingualText` int(11) NOT NULL, `value` text, PRIMARY KEY (`languageCode`,`keyMultilingualText`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; MultilingualText.java @Entity @Table(name = "common_multilingualtexts") public class MultilingualText implements Serializable { private Integer m_iKeyMultilingualText; private Map<String, String> m_lValues = new HashMap<String, String>(); public void setKeyMultilingualText(Integer p_iKeyMultilingualText) { m_iKeyMultilingualText = p_iKeyMultilingualText; } @Id @GeneratedValue @Column(name = "keyMultilingualText") public Integer getKeyMultilingualText() { return m_iKeyMultilingualText; } public void setValues(Map<String, String> p_lValues) { m_lValues = p_lValues; } @ElementCollection(fetch = FetchType.EAGER) @CollectionTable(name = "common_multilingualtexts_values", joinColumns = @JoinColumn(name = "keyMultilingualText")) @MapKeyColumn(name = "languageCode") @Column(name = "value") public Map<String, String> getValues() { return m_lValues; } public void put(String p_sLanguageCode, String p_sValue) { m_lValues.put(p_sLanguageCode,p_sValue); } public String get(String p_sLanguageCode) { if(m_lValues.containsKey(p_sLanguageCode)) { return m_lValues.get(p_sLanguageCode); } return null; } } And it is used like this on a object (having a foreign key to the multilingual text) : @Entity @Table(name = "testentities") public class TestEntity implements Serializable { private Integer m_iKeyEntity; private MultilingualText m_oText; public void setKeyEntity(Integer p_iKeyEntity) { m_iKeyEntity = p_iKeyEntity; } @Id @GeneratedValue @Column(name = "keyEntity") public Integer getKeyEntity() { return m_iKeyEntity; } public void setText(MultilingualText p_oText) { m_oText = p_oText; } @OneToOne(cascade = CascadeType.ALL) @JoinColumn(name = "keyText") public MultilingualText getText() { return m_oText; } } Now, when doing a simple HQL query : from TestEntity, I get a query selecting TestEntity's and one query for each MultilingualText that need to be loaded on each TestEntity. I've searched a lot and found absolutely no solutions. I have tested : @Fetch(FetchType.JOIN) optional = false @ManyToOne instead of @OneToOne Now I am out of idea!

    Read the article

  • JSF: how to update the list after deleting an item from that list

    - by Harry Pham
    It will take a moment for me to explain this, so please stay with me. I have table COMMENT that has OneToMany relationship with itself. @Entity public class Comment(){ ... @ManyToOne(optional=true, fetch=FetchType.LAZY) @JoinColumn(name="REPLYTO_ID") private Comment replyTo; @OneToMany(mappedBy="replyTo", cascade=CascadeType.ALL) private List<Comment> replies = new ArrayList<Comment>(); public void addReply(NewsFeed reply){ replies.add(reply); reply.setReplyTo(this); } public void removeReply(NewsFeed reply){ replies.remove(reply); } } So you can think like this. Each comment can have a List of replies which are also type Comment. Now it is very easy for me to delete the original comment and get the updated list back. All I need to do after delete is this. allComments = myEJB.getAllComments(); //This will query the db and return updated list But I am having problem when trying to delete replies and getting the updated list back. So here is how I delete the replies. Inside my managed bean I have //Before invoke this method, I have the value of originalFeed, and deletedFeed set. //These original comments are display inside a p:dataTable X, and the replies are //displayed inside p:dataTable Y which is inside X. So when I click the delete button //I know which comment I want to delete, and if it is the replies, I will know //which one is its original post public void deleteFeed(){ if(this.deletedFeed != null){ scholarEJB.deleteFeeds(this.deletedFeed); if(this.originalFeed != null){ //Since the originalFeed is not null, this is the `replies` //that I want to delete scholarEJB.removeReply(this.originalFeed, this.deletedFeed); } feeds = scholarEJB.findAllFeed(); } } Then inside my EJB scholarEJB, I have public void removeReply(NewsFeed comment, NewsFeed reply){ comment = em.merge(comment); comment.removeReply(reply); em.persist(comment); } public void deleteFeeds(NewsFeed e){ e = em.find(NewsFeed.class, e.getId()); em.remove(e); } When I get out, the entity (the reply) get correctly removed from the database, but inside the feeds List, reference of that reply still there. It only until I log out and log back in that the reply disappear. Please help

    Read the article

  • Need help with joins in SQLAlchemy

    - by Steve
    I'm new to Python, as well as SQL Alchemy, but not the underlying development and database concepts. I know what I want to do and how I'd do it manually, but I'm trying to learn how an ORM works. I have two tables, Images and Keywords. The Images table contains an id column that is its primary key, as well as some other metadata. The Keywords table contains only an id column (foreign key to Images) and a keyword column. I'm trying to properly declare this relationship using the declarative syntax, which I think I've done correctly. Base = declarative_base() class Keyword(Base): __tablename__ = 'Keywords' __table_args__ = {'mysql_engine' : 'InnoDB'} id = Column(Integer, ForeignKey('Images.id', ondelete='CASCADE'), primary_key=True) keyword = Column(String(32), primary_key=True) class Image(Base): __tablename__ = 'Images' __table_args__ = {'mysql_engine' : 'InnoDB'} id = Column(Integer, primary_key=True, autoincrement=True) name = Column(String(256), nullable=False) keywords = relationship(Keyword, backref='image') This represents a many-to-many relationship. One image can have many keywords, and one keyword can relate back to many images. I want to do a keyword search of my images. I've tried the following with no luck. Conceptually this would've been nice, but I understand why it doesn't work. image = session.query(Image).filter(Image.keywords.contains('boy')) I keep getting errors about no foreign key relationship, which seems clearly defined to me. I saw something about making sure I get the right 'join', and I'm using 'from sqlalchemy.orm import join', but still no luck. image = session.query(Image).select_from(join(Image, Keyword)).\ filter(Keyword.keyword == 'boy') I added the specific join clause to the query to help it along, though as I understand it, I shouldn't have to do this. image = session.query(Image).select_from(join(Image, Keyword, Image.id==Keyword.id)).filter(Keyword.keyword == 'boy') So finally I switched tactics and tried querying the keywords and then using the backreference. However, when I try to use the '.images' iterating over the result, I get an error that the 'image' property doesn't exist, even though I did declare it as a backref. result = session.query(Keyword).filter(Keyword.keyword == 'boy').all() I want to be able to query a unique set of image matches on a set of keywords. I just can't guess my way to the syntax, and I've spent days reading the SQL Alchemy documentation trying to piece this out myself. I would very much appreciate anyone who can point out what I'm missing.

    Read the article

  • Postgresql: Implicit lock acquisition from foreign-key constraint evaluation

    - by fennec
    So, I'm being confused about foreign key constraint handling in Postgresql. (version 8.4.4, for what it's worth). We've got a couple of tables, mildly anonymized below: device: (id, blah, blah, blah, blah, blah x 50)… primary key on id whooooole bunch of other junk device_foo: (id, device_id, left, right) Foreign key (device_id) references device(id) on delete cascade; primary key on id btree index on 'left' and 'right' So I set out with two database windows to run some queries. db1> begin; lock table device in exclusive mode; db2> begin; update device_foo set left = left + 1; The db2 connection blocks. It seems odd to me that an update of the 'left' column on device_stuff should be affected by activity on the device table. But it is. In fact, if I go back to db1: db1> select * from device_stuff for update; *** deadlock occurs *** The pgsql log has the following: blah blah blah deadlock blah. CONTEXT: SQL statement "SELECT 1 FROM ONLY "public"."device" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR SHARE OF X: update device_foo set left = left + 1; I suppose I've got two issues: the first is that I don't understand the precise mechanism by which this sort of locking occurs. I have got a couple of useful queries to query pg_locks to see what sort of locks a statement invokes, but I haven't been able to observe this particular sort of locking when I run the update device_foo command in isolation. (Perhaps I'm doing something wrong, though.) I also can't find any documentation on the lock acquisition behavior of foreign-key constraint checks. All I have is a log message. Am I to infer from this that any change to a row will acquire an update lock on all the tables which it's foreign-keyed against? The second issue is that I'd like to find some way to make it not happen like that. I'm ending up with occasional deadlocks in the actual application. I'd like to be able to run big update statements that impact all rows on device_foo without acquiring a big lock on the device table. (There's a lot of access going on in the device table, and it's kind of an expensive lock to get.)

    Read the article

  • Hibernate mapping to object that already exists

    - by teehoo
    I have two classes, ServiceType and ServiceRequest. Every ServiceRequest must specify what kind of ServiceType it is. All ServiceType's are predefined in the database, and ServiceRequest is created at runtime by the client. Here are my .hbm files: <hibernate-mapping> <class dynamic-insert="false" dynamic-update="false" mutable="true" name="xxx.model.entity.ServiceRequest" optimistic-lock="version" polymorphism="implicit" select-before-update="false"> <id column="USER_ID" name="id"> <generator class="native"/> </id> <property name="quantity"> <column name="quantity" not-null="true"/> </property> <many-to-one cascade="all" class="xxx.model.entity.ServiceType" column="service_type" name="serviceType" not-null="false" unique="false"/> </class> </hibernate-mapping> and <hibernate-mapping> <class dynamic-insert="false" dynamic-update="false" mutable="true" name="xxx.model.entity.ServiceType" optimistic-lock="version" polymorphism="implicit" select-before-update="false"> <id column="USER_ID" name="id"> <generator class="native"/> </id> <property name="description"> <column name="description" not-null="false"/> </property> <property name="cost"> <column name="cost" not-null="true"/> </property> <property name="enabled"> <column name="enabled" not-null="true"/> </property> </class> </hibernate-mapping> When I run this, I get com.mysql.jdbc.exceptions.MySQLIntegrityConstraintViolationException: Cannot add or update a child row: a foreign key constraint fails I think my problem is that when I create a new ServiceRequest object, ServiceType is one of its properties, and therefore when I'm saving ServiceRequest to the database, Hibernate attempts to insert the ServiceType object once again, and finds that it is already exists. If this is the case, how do I make it so that Hibernate points to the exists ServiceType instead of trying to insert it again?

    Read the article
