Search Results

Search found 5709 results on 229 pages for 'persistence unit'.

Page 168/229 | < Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >

  • Cascading persist and existing object

    - by user322061
    Hello, I am working with JPA and I would like to persist an object (Action) composed of an object (Domain). Here is the Action class code:

        @Entity(name="action")
        @Table(name="action")
        public class Action {

            @Id
            @GeneratedValue(strategy=GenerationType.IDENTITY)
            @Column(name="num")
            private int num;

            @OneToOne(cascade={ CascadeType.PERSIST, CascadeType.MERGE, CascadeType.REFRESH })
            @JoinColumn(name="domain_num")
            private Domain domain;

            @Column(name="name")
            private String name;

            @Column(name="description")
            private String description;

            public Action() {
            }

            public Action(Domain domain, String name, String description) {
                super();
                this.domain = domain;
                this.name = name;
                this.description = description;
            }

            public int getNum() { return num; }
            public Domain getDomain() { return domain; }
            public String getName() { return name; }
            public String getDescription() { return description; }
        }

    When I persist an Action with a new Domain, it works: both the Action and the Domain are persisted. But if I try to persist an Action with an existing Domain, I get this error:

        javax.persistence.EntityExistsException:
        Exception Description: Cannot persist detached object [isd.pacepersistence.common.Domain@1716286].
        Class> isd.pacepersistence.common.Domain
        Primary Key> [8]

    How can I persist my Action and automatically persist its Domain if it does not exist? And if the Domain already exists, how can I persist just the Action and link it to the existing Domain? Best Regards, FF
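
    One commonly suggested fix, sketched below rather than taken from the post, is to call merge() instead of persist(): the CascadeType.MERGE setting on the mapping reattaches an already-persisted Domain and still saves a brand-new one. The helper class and method names are invented for illustration.

        import javax.persistence.EntityManager;

        // Hypothetical helper, not part of the original post.
        public class ActionSaver {

            // merge() returns a managed copy of the Action; a detached Domain is
            // reattached (CascadeType.MERGE) and a new Domain is still inserted,
            // so both cases go through the same call.
            public Action save(EntityManager em, Action action) {
                return em.merge(action);
            }
        }

    Note that the caller should keep working with the returned managed instance, not the original detached one.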

    Read the article

  • JPA - Setting entity class property from calculated column?

    - by growse
    I'm just getting to grips with JPA in a simple Java web app running on Glassfish 3 (persistence provider is EclipseLink). So far, I'm really liking it (bugs in netbeans/glassfish interaction aside) but there's a thing that I want to be able to do that I'm not sure how to do. I've got an entity class (Article) that's mapped to a database table (article). I'm trying to do a query on the database that returns a calculated column, but I can't figure out how to set up a property of the Article class so that the property gets filled by the column value when I call the query. If I do a regular "select id,title,body from article" query, I get a list of Article objects fine, with the id, title and body properties filled. This works fine. However, if I do the below:

        Query q = em.createNativeQuery(
            "select id,title,shorttitle,datestamp,body,true as published, " +
            "ts_headline(body,q,'ShortWord=0') as headline, type " +
            "from articles,to_tsquery('english',?) as q " +
            "where idxfti @@ q order by ts_rank(idxfti,q) desc", Article.class);

    (this is a fulltext search using tsearch2 on Postgres - it's a db-specific function, so I'm using a NativeQuery) You can see I'm fetching a calculated column, called headline. How do I add a headline property to my Article class so that it gets populated by this query? So far, I've tried setting it to be @Transient, but that just ends up with it being null all the time.
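
    One workaround, sketched here with invented names rather than anything EclipseLink prescribes: run the native query without the entity mapping, read each row as an Object[], and copy the calculated column into the @Transient field yourself. It assumes Article exposes a setHeadline(String) setter for the transient property and has a numeric id, neither of which is shown in the post.

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.Query;

        // Hypothetical helper, for illustration only.
        public class ArticleSearch {

            @SuppressWarnings("unchecked")
            public List<Article> search(EntityManager em, String terms) {
                Query q = em.createNativeQuery(
                    "select id, title, body, ts_headline(body, q, 'ShortWord=0') as headline "
                    + "from articles, to_tsquery('english', ?1) as q where idxfti @@ q");
                q.setParameter(1, terms);

                List<Object[]> rows = q.getResultList();
                List<Article> articles = new ArrayList<Article>();
                for (Object[] row : rows) {
                    // load the managed entity by primary key, then attach the calculated value
                    Article article = em.find(Article.class, ((Number) row[0]).intValue());
                    article.setHeadline((String) row[3]);
                    articles.add(article);
                }
                return articles;
            }
        }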

    Read the article

  • Silverlight Data Access - how to keep the gruntwork on the server

    - by akaphenom
    What technologies are used / recommended for HTTP RPC calls from Silverlight? My server-side stack is JBoss (servlets / json_rpc [jabsorb]), and we have a ton of business logic (object creation, validation, persistence, server-side events) in place that I still want to take advantage of. This is our first attempt at bringing an applet-style RIA to our product, and ideally we keep both HTML and Silverlight versions. For better or worse the powers that be have pushed us down the Silverlight path, and while Flex / JavaFX / Silverlight is an interesting debate, that question is removed from the equation. We just have to find a way to get Silverlight to behave with our classes. Should I be defining .NET class representations of our JSON objects and the methodology to serialize / deserialize access to those objects? I.e. "blah.com/dispenseRpc?servlet=xxxx&p1=blah&p2=blahblah", creating functions that invoke the web request and convert the incoming response string to objects? Another way would be to reverse engineer the .NET WCF (or whatever) communications and implement a handler on the Java side that invokes the correct server-side code and returns what .NET expects back. But that sounds much trickier. T

    Read the article

  • ASP.NET MVC and NHibernate coupling

    - by Ben
    I have just started learning NHibernate. Over the past few months I have been using IoC / DI (StructureMap) and the repository pattern, and it has made my applications much more loosely coupled and easier to test. When switching my persistence layer to NHibernate I decided to stick with my repositories. Currently I am creating a new session on each method call, but of course this means that I cannot benefit from lazy loading. Therefore I wish to implement session-per-request, but in doing so this will make my web project dependent on NHibernate (perhaps this is not such a bad thing?). I was planning to inject ISession into my repositories and create and dispose sessions in BeginRequest/EndRequest events (see http://ayende.com/Blog/archive/2009/08/05/do-you-need-a-framework.aspx). Is this a good approach? Presumably I cannot use session-per-request without having a reference to NHibernate in my web project?

    Having the web project dependent on NHibernate prompts my next (few) questions: why even bother with the repository? Since my web app is calling services that talk to the repositories, why not ditch the repositories and just add my NHibernate persistence code inside the services? And finally, is there really any need to split out into so many projects? Is a web project and an infrastructure project sufficient?

    I realise that I have veered off a bit from my original question, but it seems that everyone has their own opinion on these topics. Some people use the repository pattern with NHibernate, some don't. Some people stick their mapping files with the related classes, others have a separate project for this. Many thanks, Ben

    Read the article

  • How can I run NUnit(Selenium Grid) tests in parallel?

    - by Benjamin Lee
    My current project uses NUnit for unit tests and to drive UATs written with Selenium. Developers normally run tests using ReSharper's test runner in VS.NET 2003, and our build box kicks them off via NAnt. We would like to run the UAT tests in parallel so that we can take advantage of Selenium Grid/RCs and so that they will run much faster. Does anyone have any thoughts on how this might be achieved, and/or best practices for running Selenium tests against multiple browser environments without writing duplicate tests? Thank you.

    Read the article

  • Anyone have an XSL to convert Boost.Test XML logs to a presentable format?

    - by Stuart Lange
    I have some C++ projects running through cruisecontrol.net. As a part of the build process, we compile and run Boost.Test unit test suites. I have these configured to dump XML log files. While the format is similar to JUnit/NUnit, it's not quite the same (and lacks some information), so cruisecontrol.net is unable to pick them up. I am wondering if anyone has created (or knows of) an existing XSL transform that will convert Boost.Test results to JUnit/NUnit format, or alternatively, directly to a presentable (html) format. Thanks!

    Read the article

  • C++ linking issue in Visual Studio 2008 when cross-linking different projects in the same solution

    - by Luís Guilherme
    I'm using the Google Test Framework to set up some unit tests. I have three projects in my solution:

    - FN (my project)
    - FN_test (my tests)
    - gtest (Google Test Framework)

    I set FN_test to have FN and gtest as references (dependencies), and then I think I'm ready to set up my tests (I've already set every project to /MTd; not doing this was leading me to linking errors before). In particular, I define a class called Embark in FN that I would like to test using FN_test. So far, so good. Thus I write a class called EmbarkTest using googletest, declare an Embark* member, and write inside the constructor:

        EmbarkTest() {
            e = new Embark(900, 2010);
        }

    Then, when F7 is pressed, I get the following:

        1>FN_test.obj : error LNK2019: unresolved external symbol "public: __thiscall Embark::Embark(int,int)" (??0Embark@@QAE@HH@Z) referenced in function "protected: __thiscall EmbarkTest::EmbarkTest(void)" (??0EmbarkTest@@IAE@XZ)
        1>D:\Users\lg\Product\code\FN\Debug\FN_test.exe : fatal error LNK1120: 1 unresolved externals

    Does anyone know what I have done wrong and/or what I can do to settle this?

    Read the article

  • Handling changes in an interface shared across multiple solutions?

    - by Anthony Mastrean
    Our "main" solution is the development code: shared libraries, services, UI projects, etc. The other solution is an integration and automated tests solution. It references several of the development projects. The reason it is separate is to avoid interference with the development solution's unit test VSMDI file. And to allow us to play with different execution methods (other test runners, like Gallio or StoryTeller) without interfering with the development solution. Recently, an interface changed in the development solution, one of our test mocks implemented that interface. But, it was not updated because there was no warning at compile time because it was in another solution. This broke our CI build. Does anyone have a similar setup? How do you handle these issues, do you follow a strict procedure or is there some kind of technical answer?

    Read the article

  • Linq to sql add/update in different methods with different datacontexts

    - by Kurresmack
    I have two methods, Add() and Update(), which both create a DataContext and return the object created/updated. In my unit test I first call Add(), do some stuff, and then call Update(). The problem is that Update() fails with the exception:

        System.Data.Linq.DuplicateKeyException: Cannot add an entity with a key that is already in use.

    I understand the issue but want to know what to do about it. I've read a bit about how to handle multiple DataContext objects, and from what I've heard this way is OK. I understand that the entity is still attached to the DataContext in Add(), but I need to find out how to solve this. Thanks in advance

    Read the article

  • UnitTest++ creates cmd windows, which can't be closed

    - by Simon
    Hello, I have a setup for using UnitTest++ like this in VS2008. Sometimes the cmd window, which shows the console output of the unit tests, just hangs. I can move the window, resize it and so on, but I'm unable to close it. I see the window in the Applications tab of the Task Manager, but not in the Processes tab, and "Switch to process" doesn't work either. Stopping debugging or closing VS is also no help; it seems VS has lost control over this window. If this cmd window is lost, I'm unable to shut down my computer, which is pretty annoying. Any hints?

    Read the article

  • using Autofac in a multi-layered architecture

    - by Kamyar
    I'm fairly new to the DI/IoC concept and would like to use Autofac in a 3-layered ASP.NET WebForms application:

    UI layer: an ASP.NET WebForms website.
    BLL: business logic layer which calls the repositories in the DAL.
    DAL: .EDMX file (entity model) and ObjectContext with repository classes which abstract the CRUD operations for each entity.
    Entities: the POCO entities, persistence ignorant, generated by Microsoft's ADO.NET POCO Entity Generator.

    I have asked a more general question here. Basically, I'd like to create an ObjectContext per HttpContext in my DAL, but I don't want to add a reference to the DAL in the UI or access HttpContext in the DAL directly. I guess this is where IoC tools come into play. The answer to my previous question is a very good example of using Castle Windsor. I'd like to use Autofac as my IoC tool and don't know how to achieve this. (How do I access the DAL in Application_Start to register the component when I don't want to reference it in my UI? What are the proper references to be able to use the DAL component in the BLL with Autofac? Should I register the BLL as a component with Autofac too?) Sorry, folks, for not providing an explicit question and for requesting a kind of working example, but I'm very unfamiliar with the whole IoC concept and I don't think I can get it working on my own in my current time-limited project.

    Read the article

  • autocommit and @Transactional and Cascading with spring, jpa and hibernate

    - by subes
    Hi, what I would like to accomplish is the following:

    - have autocommit enabled, so by default all queries get committed
    - if there is a @Transactional annotation on a method, it overrides the autocommit and encloses all queries in a single transaction
    - if a @Transactional method calls other @Transactional annotated methods, the outermost annotation should override the inner annotations and create one larger transaction; that is, annotations also override each other

    I am currently still learning about spring-orm, couldn't find documentation about this, and don't have a test project for it yet. So my questions are: What is the default behaviour of transactions in Spring? If the default differs from my requirement, is there a way to configure my desired behaviour? Or is there a totally different best practice for transactions?

    --EDIT--

    I have the following test setup:

        @javax.persistence.Entity
        public class Entity {

            @Id
            @GeneratedValue
            private Integer id;

            private String name;

            public Integer getId() { return id; }
            public void setId(Integer id) { this.id = id; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        @Repository
        public class Dao {

            @PersistenceContext
            private EntityManager em;

            public void insert(Entity ent) {
                em.persist(ent);
            }

            @SuppressWarnings("unchecked")
            public List<Entity> selectAll() {
                List<Entity> ents = em.createQuery(
                    "select e from " + Entity.class.getName() + " e").getResultList();
                return ents;
            }
        }

    If I have it like this, even with autocommit enabled in Hibernate, the insert method does nothing. I have to add @Transactional to insert or to the method calling insert for it to work... Is there a way to make @Transactional completely optional?
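
    For comparison, a minimal sketch (not from the post) of the usual Spring arrangement: the write path goes through a service method marked @Transactional, so the persist runs inside a transaction and is flushed at commit time; nested @Transactional methods simply join the outer transaction because the default propagation is REQUIRED. The class name here is invented.

        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Service;
        import org.springframework.transaction.annotation.Transactional;

        // Hypothetical service wrapper around the Dao from the post.
        @Service
        public class EntityService {

            private final Dao dao;

            @Autowired
            public EntityService(Dao dao) {
                this.dao = dao;
            }

            // The outermost @Transactional opens the transaction; any @Transactional
            // methods it calls participate in the same one by default.
            @Transactional
            public void insert(Entity ent) {
                dao.insert(ent);
            }
        }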

    Read the article

  • Viewing Code Coverage Results outside of Visual studio

    - by Wonchance
    I've got some unit tests and some code coverage data. Now I'd like to be able to view that code coverage data outside of Visual Studio, say in a web browser. But when I export the code coverage to an XML file, I can't do anything with it. Are there readers out there for this? Do I have to write an XML parser and then display it how I want (which seems like a waste, since Visual Studio already does this)? It seems kind of silly to have to take a screenshot of my code coverage results as my "report". Suggestions?

    Read the article

  • What does a Java web project architecture look like without EJB3?

    - by Hendrik
    A friend and I are building a fairly complex website based on Java. (PHP would have been more obvious, but we chose Java because the educational aspect of this project is important to us.) We have already decided to use JSF (with RichFaces) for the front end and JPA for the back end, and so far we have decided not to use EJB3 for the business layer. The reason we've decided not to use EJB3 is - and please correct me if I am wrong - that with EJB3 we can only run on a full-blown Java application server like JBoss, whereas without EJB3 we can still run on a lightweight server like Tomcat. We want to keep the speed and cost of our future web server in mind. So far I've worked on two JEE projects, and both used the full stack (web, business logic, factories/persistence, service entities) with every layer a separate module. Now here is my question: if you don't use EJB3 in the business logic layer, what does that layer look like? Please tell me what common practice is when developing Java web projects without EJB3. Do you think the business logic layer can be thrown out altogether, with the business logic living in the backing beans? If you keep the layer, do you make all business methods static? Or do you instantiate each business class as needed in the backing beans in every session?
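
    For what it's worth, here is a minimal sketch of one common EJB-free arrangement on Tomcat: a plain POJO service class that owns its own EntityManager and resource-local transaction, which a JSF backing bean can simply instantiate or have injected. The persistence unit name and class name are invented, and in a real application the EntityManagerFactory would be created once and shared.

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        // Hypothetical plain service layer, no EJB container required.
        public class PersistenceService {

            // Created here only to keep the sketch self-contained; share one
            // factory per application in practice.
            private final EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("myPersistenceUnit");

            public void save(Object entity) {
                EntityManager em = emf.createEntityManager();
                try {
                    em.getTransaction().begin();
                    em.persist(entity);
                    em.getTransaction().commit();
                } catch (RuntimeException e) {
                    if (em.getTransaction().isActive()) {
                        em.getTransaction().rollback();
                    }
                    throw e;
                } finally {
                    em.close();
                }
            }
        }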

    Read the article

  • How should moq's VerifySet be called in VB.net

    - by Bender
    I am trying to test that a property has been set, but when I write this as a unit test:

        moqFeed.VerifySet(Function(m) m.RowAdded = "Row Added")

    Moq complains that "Expression is not a property setter invocation". My complete code is:

        Imports Gallio.Framework
        Imports MbUnit.Framework
        Imports Moq

        <TestFixture()> Public Class GUI_FeedPresenter_Test
            Private moqFeed As Moq.Mock(Of IFeedView)

            <SetUp()> Sub Setup()
                moqFeed = New Mock(Of IFeedView)
            End Sub

            <Test()> Public Sub New_Presenter()
                Dim pres = New FeedPresenter(moqFeed.Object)
                moqFeed.VerifySet(Function(m) m.RowAdded = "Row Added")
            End Sub
        End Class

        Public Interface IFeedView
            Property RowAdded() As String
        End Interface

        Public Class FeedPresenter
            Private _FeedView As IFeedView

            Public Sub New(ByVal feedView As IFeedView)
                _FeedView = feedView
                _FeedView.RowAdded = "Row Added"
            End Sub
        End Class

    I can't find any examples of Moq in VB, so I would be grateful for any examples.

    Read the article

  • Eclipse refresh taking too long

    - by Nash0
    I am doing TDD on a large Java project in Eclipse and am finding it frustrating, because every time I run a test I have to wait 30+ seconds for Eclipse to compile and refresh. I estimate that 80%+ of that time is spent refreshing. Is there a way I can drastically reduce the amount of refreshing it is doing? I have looked at several other similar questions but could not see anything that helps. One way I reduced the compile/refresh time was to split the unit tests and code into separate projects. There are 4,700 classes in the src project and 300 in the tests. I am running Eclipse 3.5.1 on Java 1.6.0_17-b04 (eclipse.vm). My computer is running Windows XP with 3.1 GB of usable RAM. The only plugin I have installed is Subclipse.

    Read the article

  • T4MVC not generating an action

    - by Maslow
    I suspected there was some hidden magic somewhere that stopped what look like actual method calls all over the place in T4MVC from really being invoked. Then I had a view fail to compile, and the stack trace went into my actual method:

        [Authorize]
        public string Apply(string shortName)
        {
            if (shortName.IsNullOrEmpty())
                return "Failed alliance name was not transmitted";
            if (Request.IsAuthenticated == false || User == null || User.Identity == null)
                return "Apply authentication failed";
            Models.Persistence.AlliancePersistance.Apply(User.Identity.Name, shortName);
            return "Applied";
        }

    So this method isn't being generated in the template after all.

        <%=Ajax.ActionLink("Apply", "Apply",
            new RouteValueDictionary() { { "shortName", item.Shortname } },
            new AjaxOptions() { UpdateTargetId = "masterstatus" })%>

        <%=Html.ActionLink("Apply", MVC.Alliance.Apply(item.Shortname),
            new AjaxOptions() { UpdateTargetId = "masterstatus" }) %>

    The second method threw an exception at compile time. The Apply method in my controller has an [Authorize] attribute so that if someone who isn't logged on clicks this, they get redirected to login and then right back to this page, where they can click Apply again, this time being logged in. And yes, I realize one is Ajax.ActionLink while the other is Html.ActionLink; I did try them both with the T4MVC version.

    Read the article

  • Blocking on DBCP connection pool (open and close connection). Is database connection pooling in OpenEJB pluggable?

    - by topchef
    We use OpenEJB on Tomcat (we used to run on JBoss, WebLogic, etc.). While running load tests we experience significant performance problems with handling JMS messages (queues). The problem was localized to blocking on the database connection pool when getting a connection from, or returning a connection to, the pool. The blocking prevented concurrent MDB instances (threads) from running, so performance suffered 10-fold and worse. The same code used to run on application servers (with their respective connection pool implementations) with no blocking at all. Example of a blocked thread:

        Name: JMS Resource Adapter-worker-23
        State: BLOCKED on org.apache.commons.pool.impl.GenericObjectPool@1ea6b4a
               owned by: JMS Resource Adapter-worker-19
        Total blocked: 18,426  Total waited: 0

        Stack trace:
        org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java:916)
        org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:91)
        - locked org.apache.commons.dbcp.PoolableConnection@1bcba8
        org.apache.commons.dbcp.managed.ManagedConnection.close(ManagedConnection.java:147)
        com.xxxxx.persistence.DbHelper.closeConnection(DbHelper.java:290)
        ....

    A couple of questions. I am almost certain that some transactional attributes and properties contribute to this blocking, but the MDBs are defined as non-transactional (we use both annotations and ejb-jar.xml). Some EJBs do use container-managed transactions, though (and we can observe blocking there as well). Are there any DBCP configurations that may fix the blocking? Is the DBCP connection pool implementation replaceable in OpenEJB? How easy (or difficult) is it to replace it with another library? Just in case, this is how we define the data source in OpenEJB (openejb.xml):

        <Resource id="MyDataSource" type="DataSource">
            JdbcDriver oracle.jdbc.driver.OracleDriver
            JdbcUrl ${oracle.jdbc}
            UserName ${oracle.user}
            Password ${oracle.password}
            JtaManaged true
            InitialSize 5
            MaxActive 30
            ValidationQuery SELECT 1 FROM DUAL
            TestOnBorrow true
        </Resource>

    Read the article

  • How much of Grails GORM to test?

    - by Lloyd Meinholz
    Is there a "best practice" or defacto standard with how much of the GORM functionality one should test in the unit/functional tests? My take is that one should probably do most of the domain testing as functional tests so that you get the full grails environment. But what do you test? Inserts, updates, deletes? Do you test constraints even though they were probably more thoroughly tested by the grails release? Or do you just assume that GORM does what it is supposed to do and move to other parts of the application?

    Read the article

  • Some problem running NUnit

    - by prosseek
    I have NUnit installed in this directory:

        C:\Program Files\NUnit 2.5.5\bin\net-2.0

    When I try to run my unit test (mut.dll) from some random directory, I get the following error, and I have to copy mut.dll under the NUnit directory in order to run it:

        ProcessModel: Default    DomainUsage: Single
        Execution Runtime: net-2.0
        Could not load file or assembly 'nunit.framework, Version=2.5.5.10112, Culture=neutral, PublicKeyToken=96d09a1eb7f44a77' or one of its dependencies. The system cannot find the file specified.

    What's wrong? Is there anything that I have to configure to run NUnit from any directory?

    Read the article

  • Setting up gcov in Xcode 3.1

    - by Algorithmic
    I'm trying to setup my Xcode project to be instrumented with gcov so I can determine the code coverage of my unit tests. All of the documentation I find online talks about settings that I don't find in Xcode 3.1, though. An example: To work with Coverstory, first you need to set up your target to work with gcov. This requires turning on "Instrument Program Flow", "Generate Test Coverage Files" and linking with the gcov library. (Using Coverstory) The closest thing I can find to "Instrument Program Flow" and "Generate Test Coverage Files" in my build settings is "Generate Profiling Code", which doesn't appear to do what I want it to do. Am I looking in the wrong place for these settings or are all of the examples I'm finding online stale?

    Read the article

  • Simple object creation with DIY-DI?

    - by Runcible
    I recently ran across this great article by Chad Perry entitled "DIY-DI", or "Do-It-Yourself Dependency Injection". I'm in a position where I'm not yet ready to use an IoC framework, but I want to head in that direction, and it seems like DIY-DI is a good first step. However, after reading the article, I'm still a little confused about object creation. Here's a simple example: using manual constructor dependency injection (not DIY-DI), this is how one must construct a Hotel object:

        PowerGrid powerGrid;     // only one in the entire application
        WaterSupply waterSupply; // only one in the entire application
        Staff staff;
        Rooms rooms;
        Hotel hotel(staff, rooms, powerGrid, waterSupply);

    Creating all of these dependency objects makes it difficult to construct the Hotel object in isolation, which means that writing unit tests for Hotel will be difficult. Does using DIY-DI make it easier? What advantage does DIY-DI provide over manual constructor dependency injection?
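
    The gist of the DIY-DI approach, as a sketch only (written in Java here, with the question's class names reused and no-argument constructors assumed): move the graph wiring into a hand-written injector, so application code asks the injector for a Hotel while tests still call the Hotel constructor directly with fakes.

        // Hand-rolled "injector" in the DIY-DI style; illustration only.
        public final class HotelInjector {

            // Application-wide singletons live here, not at every call site.
            private static final PowerGrid POWER_GRID = new PowerGrid();
            private static final WaterSupply WATER_SUPPLY = new WaterSupply();

            private HotelInjector() {
            }

            // Call sites get a fully wired Hotel without assembling the graph themselves.
            public static Hotel injectHotel(Staff staff, Rooms rooms) {
                return new Hotel(staff, rooms, POWER_GRID, WATER_SUPPLY);
            }
        }

    The testability win is the same as with manual constructor injection; the injector just keeps the assembly code out of the callers.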

    Read the article

  • Passing an instantiated class to concrete class derived by Castle Windsor

    - by Tr1stan
    I have a system that I'm using to test some new architecture. I have the following setup (in ASP.NET MVC 2, C#):

        View < Controller < Service < Repository < DB

    I'm using Castle Windsor as my DI (IoC) container, and this is working just fine in both the Service and Repo layers. However, I'm now at a point where I would like to pass an Entity Framework context (DatabaseNameEntity) to the constructor of the Service, and then to the Repo, so that I have something similar to a unit-of-work pattern per request (this feels like what I'm trying to achieve, anyway) - and I'm having trouble working out how this can be done using Castle Windsor. Am I going off on a silly tangent? Any pointers appreciated.

    Read the article

  • Architectural conundrum

    - by Dejan
    The worst thing about working on a one-man project is the lack of the input that you usually get from coworkers, and because of that lack you tend to make obvious mistakes. After going down that road for some time, I need some help from the community. I started a little home-brew project that should turn into a portal of sorts, and the main thing that is bothering me is the persistence layer I have concocted. It should be completely separated from the presentation layer, for starters, and an OR mapper is in there somewhere as well. This is because I have multiple data stores that have to be used. So the basic idea was that the individual "repositories" each operate on their own database, the business layer then aggregates the business objects, and those are transformed in the presentation layer into view objects. The main problem I face is the following: multiple classes for the same concept - there is a DAL representation of a user, a BL representation of a user, and a view representation of a user. I can handle the transformation with a tool, but is this really the right way? I mean, they are all nicely separated, but the overhead is quite something. What do you think? Am I going too deep into the separation-of-concerns rabbit hole, or is this still normal?
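
    To make the overhead concrete, here is a tiny sketch with invented names of the three representations and the mapping code they force you to write (or generate):

        // Invented types, purely to illustrate the "multiple classes per concept" overhead.
        class UserEntity { int id; String name; }   // DAL representation
        class User       { int id; String name; }   // BL representation
        class UserView   { String displayName; }    // presentation representation

        class UserMapper {
            static User toBusiness(UserEntity e) {
                User u = new User();
                u.id = e.id;
                u.name = e.name;
                return u;
            }

            static UserView toView(User u) {
                UserView v = new UserView();
                v.displayName = u.name;
                return v;
            }
        }

    Whether that duplication is worth it is exactly the judgement call the question is asking about.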

    Read the article

  • CSharpCodeProvider: Why is the result of compilation out of context when debugging?

    - by epitka
    I have the following code snippet that I use to compile a class at run time:

        // now compile the runner
        var codeProvider = new CSharpCodeProvider(
            new Dictionary<string, string>() { { "CompilerVersion", "v3.5" } });

        string[] references = new string[] { "System.dll", "System.Core.dll", "System.Core.dll" };

        CompilerParameters parameters = new CompilerParameters();
        parameters.ReferencedAssemblies.AddRange(references);
        parameters.OutputAssembly = "CGRunner";
        parameters.GenerateInMemory = true;
        parameters.TreatWarningsAsErrors = true;

        CompilerResults result = codeProvider.CompileAssemblyFromSource(parameters, template);

    Whenever I step through the code to debug the unit test and try to see the value of "result", I get an error that the name "result" does not exist in the current context. Why?

    Read the article
