Search Results

Search found 6522 results on 261 pages for 'spring integration'.


  • Happy Day! VS2010 SP1, Project Server Integration, Load Test Feature Pack

    - by Aaron Kowall
    Microsoft released a pile of Visual Studio goodness today:

    Visual Studio 2010 SP1 (including TFS SP1): Finally done with remembering which GDR packs, KB patches, etc. need to be installed with a new VS/TFS 2010 deployment. Just grab SP1. It's available today for MSDN subscribers and March 10th for public download.

    TFS-Project Server Integration Feature Pack: MSDN subscribers got another little treat today with the TFS-Project Server integration feature pack. We can now get project rollups and portfolio-level management with Project Server yet still have the tight developer interaction with TFS. Finally we can make the PMO happy without duplicate entry or MS Project gymnastics.

    Visual Studio Load Test Feature Pack: This is a new benefit for Visual Studio 2010 Ultimate subscribers. Previously, Ultimate load testing was limited to 250 virtual users; if you needed more, you had to buy virtual user license packs. No more. Now your Visual Studio Ultimate license allows you to simulate as many virtual users as you need! This is huge in improving adoption of regular load testing for development projects.

    All the details are available from Soma's blog.

    Read the article

  • Releasing software/Using Continuous Integration - What do most companies seem to use?

    - by Sagar
    I've set up our continuous integration system, and it has been working for about a year now. We have finally reached a point where we want to do releases using it as well. Before our CI system, the processes that were used were:

        (Develop) -> Ready for release -> Create a branch -> (Build -> Fix bugs as QA finds them) Loop -> Final build -> Tag
        (Develop) -> Ready for release -> (Build -> Fix bugs) Loop -> Tag

    Our CI setup: 1 server for development (DEV), 1 server for qa/release (QA). The second one has integrated into CI perfectly. I create a branch when the software is ready for release, and the branch never changes thereafter, which means the build is reproducible without having to change the CI job. Any future development takes place on HEAD, and even maintenance releases get a completely new branch and a completely new job, which remains on the CI system forever, and then some. The first method is harder to adapt. If the branch changes, the build is not reproducible unless I use the tag to build (jobs on the CI server use the branch for QA/release builds, and HEAD for development builds). However, if I use the tag to build, I have to either create a new CI job to build from the tag (and lose the changelog on the server) or change the existing job (and lose the original job configuration). I know this sounds complicated, and if required, I will rewrite/edit to explain the situation better. However, my question: what process, if any, does your company use to release software using a continuous integration system? Is it even done using the CI system, or manually?

    Read the article

  • Gmail Now Supports Google Drive Integration; Share Files Up to 10GB

    - by Jason Fitzpatrick
    Gmail users can now easily send large files thanks to Google Drive's increased integration with Gmail: blow through the 25MB in-email attachment limit and share files up to 10GB. From the official Gmail announcement:

    Have you ever tried to attach a file to an email only to find out it's too large to send? Now with Drive, you can insert files up to 10GB, 400 times larger than what you can send as a traditional attachment. Also, because you're sending a file stored in the cloud, all your recipients will have access to the same, most up-to-date version. Like a smart assistant, Gmail will also double-check that your recipients all have access to any files you're sending. This works like Gmail's forgotten attachment detector: whenever you send a file from Drive that isn't shared with everyone, you'll be prompted with the option to change the file's sharing settings without leaving your email. It'll even work with Drive links pasted directly into emails.

    The new Gmail/Drive integration is rolling out in waves to users over the next few days and is accessible via the new Gmail compose window.

    Read the article

  • How does the process of updating code with Continuous Integration work?

    - by BleakCabalist
    I want to draw a model of the process of updating source code with the use of Continuous Integration. The main issue is that I don't really understand how it works when there are several programmers working on various aspects of the code at the same time; I can't visualize it in my mind. Here's what I know, but I might be wrong: New code is sent to the repository. The Continuous Integration server asks the Version Control System if there is new code in the repository. If there is, the CI server executes tests on the code. If the tests show there are problems, the CI server orders the VCS to revert back to the working version of the code and communicates this to the programmer. If the tests pass, it compiles the repository code and makes a new build of the game? A new build is made not after every single change, but at the end of the day, I believe? Are my assumptions above correct? If yes, does it also work when there are several programmers updating the repository at once? Is this enough to draw a model of the process, in your opinion, or did I miss something? Also, what software would I need for the above process? Can you give examples of CI server software and VCS software and whatever else I need? Does the CI server software perform the code tests, or do I need another tool for that and integrate it with the CI server? Is there repository software?

    Read the article

  • quartz throws deadlocks under load

    - by Khandelwal
    We are using Quartz with Spring and our configuration is throwing deadlocks when Quartz has more than 1 thread configured. I'm starting to believe that it's because we don't have Quartz configured correctly with Spring, but I can't find enough documentation on how to configure the two to play nicely. We are running on both Windows and Linux environments, pointing at MSSQL and Oracle DBs. With both OS, using either DB, we consistently throw the following deadlock errors. We run under heavy load, inserting hundreds of Quartz triggers in a matter of minutes.

        2010-03-17 18:52:31,737 [] [] ERROR [DFScheduler_Worker-42] core.ErrorLogger (QuartzScheduler.java:2185) - An error occured while marking executed job complete. job= 'BPM.6e41a6567f0000020100362a51dc7a50'
        org.quartz.JobPersistenceException: Couldn't remove trigger: Transaction (Process ID 87) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [See nested exception: com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 87) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.]
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.removeTrigger(JobStoreSupport.java:1469)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.triggeredJobComplete(JobStoreSupport.java:2978)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport$39.execute(JobStoreSupport.java:2962)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport$41.execute(JobStoreSupport.java:3713)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3747)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3709)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.triggeredJobComplete(JobStoreSupport.java:2958)
            at org.quartz.core.QuartzScheduler.notifyJobStoreJobComplete(QuartzScheduler.java:1727)
            at org.quartz.core.JobRunShell.run(JobRunShell.java:273)
            at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:534)
        Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Transaction (Process ID 87) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
            at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(Unknown Source)
            at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(Unknown Source)
            at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(Unknown Source)
            at ...
            at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.deleteSimpleTrigger(StdJDBCDelegate.java:1820)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.deleteTriggerAndChildren(JobStoreSupport.java:1345)
            at org.quartz.impl.jdbcjobstore.JobStoreSupport.removeTrigger(JobStoreSupport.java:1453)
            ... 9 more
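    Not part of the original question, but for illustration: one setting often suggested for JDBC-JobStore deadlocks is trigger-acquisition locking. Below is a minimal sketch of passing Quartz properties through Spring's SchedulerFactoryBean; the property values and the injected dataSource are assumptions, not the poster's actual configuration.

        import java.util.Properties;
        import javax.sql.DataSource;
        import org.springframework.scheduling.quartz.SchedulerFactoryBean;

        public class QuartzConfigSketch {
            public SchedulerFactoryBean schedulerFactory(DataSource dataSource) {
                SchedulerFactoryBean factory = new SchedulerFactoryBean();
                factory.setDataSource(dataSource);                       // Quartz tables live in this DB
                Properties props = new Properties();
                props.setProperty("org.quartz.threadPool.threadCount", "10");
                // Serialize trigger acquisition so worker threads don't race for the same row locks.
                props.setProperty("org.quartz.jobStore.acquireTriggersWithinLock", "true");
                props.setProperty("org.quartz.jobStore.driverDelegateClass",
                        "org.quartz.impl.jdbcjobstore.MSSQLDelegate");   // or the Oracle delegate, per environment
                factory.setQuartzProperties(props);
                return factory;
            }
        }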

    Read the article

  • NetBeans generates JpaController with errors

    - by Xorty
    Hi, I am using NetBeans 6.8 for building a Spring MVC application. Technologies: Spring MVC 2.5, Derby DB, Hibernate for ORM, GlassFish v3 server. I use "New JPA Controller Classes from Entity Classes" to add the ORM file. It is supposed to generate a class for managing queries with my POJO files. The problem is that NetBeans generates the following code, which won't compile:

        public int getBrandCount() {
            EntityManager em = getEntityManager();
            try {
                CriteriaQuery cq = em.getCriteriaBuilder().createQuery();
                Root<Brand> rt = cq.from(Brand.class);
                cq.select(em.getCriteriaBuilder().count(rt));
                Query q = em.createQuery(cq);
                return ((Long) q.getSingleResult()).intValue();
            } finally {
                em.close();
            }
        }

    NetBeans marks an error on this code (screenshot not included here). It looks like the getCriteriaBuilder method of the EntityManager interface is unimplemented, or there is some other reason why I can't use it in code. I don't know what other info I should provide, so please ask if anything comes to your mind. Thanks
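    The getCriteriaBuilder() call only exists in the JPA 2.0 EntityManager API, so here is a sketch of the same count query written against the typed JPA 2.0 Criteria API. It assumes a JPA 2.0 provider (for example EclipseLink 2.x, GlassFish v3's default) is actually on the compile classpath, which the question does not confirm.

        import javax.persistence.EntityManager;
        import javax.persistence.criteria.CriteriaBuilder;
        import javax.persistence.criteria.CriteriaQuery;
        import javax.persistence.criteria.Root;

        public int getBrandCount(EntityManager em) {
            try {
                CriteriaBuilder cb = em.getCriteriaBuilder();      // JPA 2.0 only
                CriteriaQuery<Long> cq = cb.createQuery(Long.class);
                Root<Brand> brand = cq.from(Brand.class);          // Brand is the poster's entity
                cq.select(cb.count(brand));
                return em.createQuery(cq).getSingleResult().intValue();
            } finally {
                em.close();
            }
        }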

    Read the article

  • Jetty startup delay

    - by Tauren
    I'm trying to figure out what would be causing a 1 minute delay in the startup of Jetty. Is it a configuration problem, my application, or something else? I have Jetty 7 (jetty-7.0.1.v20091125, 25 November 2009) installed on a server and I deploy a 45MB ROOT.war file into the webapps directory. This is the only webapp configured in Jetty. I then start Jetty with the command:

        java -DSTOP.PORT=8079 -DSTOP.KEY=mystopkey -Denv=stage -jar start.jar etc/jetty-logging.xml etc/jetty.xml &

    I get two lines of output right after doing this:

        2010-03-07 14:20:06.642:INFO::Logging to StdErrLog::DEBUG=false via org.eclipse.jetty.util.log.StdErrLog
        2010-03-07 14:20:06.710:INFO::Redirecting stderr/stdout to /home/zing/jetty-distribution-7.0.1.v20091125/logs/2010_03_07.stderrout.log

    When I press the enter key, I get my command prompt back. Looking at the log file (logs/2010_03_07.stderrout.log), I see the following at the beginning:

        2010-03-07 14:08:50.396:INFO::jetty-7.0.1.v20091125
        2010-03-07 14:08:50.495:INFO::Extract jar:file:/home/zing/jetty-distribution-7.0.1.v20091125/webapps/ROOT.war!/ to /tmp/Jetty_0_0_0_0_8080_ROOT.war___.8te0nm/webapp
        2010-03-07 14:08:52.599:INFO::NO JSP Support for , did not find org.apache.jasper.servlet.JspServlet
        2010-03-07 14:09:51.379:INFO::Set web app root system property: 'webapp.root' = [/tmp/Jetty_0_0_0_0_8080_ROOT.war___.8te0nm/webapp]
        2010-03-07 14:09:51.585:INFO::Initializing Spring root WebApplicationContext
        INFO - ContextLoader - Root WebApplicationContext: initialization started
        INFO - XmlWebApplicationContext - Refreshing Root WebApplicationContext: startup date [Sun Mar 07 14:09:51 PST 2010]; root of context hierarchy
        ...

    Notice the 1 minute long pause between the 3rd and 4th lines. What is Jetty doing at this point? What other things could be going on? It doesn't even look like it has started my Spring initialization yet. Note that I checked my /tmp directory to see if it was simply the time to unpack my war file, but the file had been completely unpacked even at the start of this 1 minute delay.

    Read the article

  • ActiveMQ broker configuration error when specifying persistenceAdapter: "One of '{WC[##other:"http://activemq.apache.org/schema/core"]}' is expected"

    - by Joe
    I am setting up a simple ActiveMQ embedded broker. It works fine until I try to configure a persistence adapter. I am basically just copying the configuration from http://activemq.apache.org/persistence.html#Persistence-ConfiguringKahaPersistence. When I add this configuration to my Spring configuration, like so:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:amq="http://activemq.apache.org/schema/core"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                   http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core-5.3.0.xsd">
            <amq:broker useJmx="true" persistent="true" brokerName="localhost">
                <amq:transportConnectors>
                    <amq:transportConnector name="vm" uri="vm://localhost"/>
                </amq:transportConnectors>
                <amq:persistenceAdapter>
                    <amq:kahaPersistenceAdapter directory="activemq-data" maxDataFileLength="33554432"/>
                </amq:persistenceAdapter>
            </amq:broker>
        </beans>

    I get the error:

        cvc-complex-type.2.4.a: Invalid content was found starting with element 'amq:persistenceAdapter'. One of '{WC[##other:"http://activemq.apache.org/schema/core"]}' is expected.

    When I take out the amq:persistenceAdapter element, it works fine. The same error happens no matter which persistence adapter I include in the body, e.g. jdbc, journal, etc. Any help would be greatly appreciated. Thanks.
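    Not from the original post, but as an illustration of the same broker built without the XML schema at all: the sketch below configures an equivalent embedded broker programmatically through ActiveMQ's BrokerService API, which sidesteps schema validation entirely. The class and setter names are from the ActiveMQ 5.x API as I recall them; treat the persistence-adapter details as assumptions to verify against the 5.3.0 javadoc.

        import java.io.File;
        import org.apache.activemq.broker.BrokerService;
        import org.apache.activemq.store.kahadaptor.KahaPersistenceAdapter;

        public class EmbeddedBrokerSketch {
            public static void main(String[] args) throws Exception {
                BrokerService broker = new BrokerService();
                broker.setBrokerName("localhost");
                broker.setUseJmx(true);
                broker.setPersistent(true);
                KahaPersistenceAdapter kaha = new KahaPersistenceAdapter();
                kaha.setDirectory(new File("activemq-data"));   // same directory as the XML example
                broker.setPersistenceAdapter(kaha);
                broker.start();
                // Clients in the same JVM can reach this broker via the vm:// transport
                // without declaring an explicit transport connector.
            }
        }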

    Read the article

  • How to solve an RPC Fault (java.lang.NullPointerException) when using Flex Builder 3 and BlazeDS?

    - by Teerasej
    Hi, everyone. Thank you for your interest in my question; I think you can help me out with this little problem. I am using Flex Builder 3, BlazeDS, and Java with the Spring and Hibernate frameworks. I am using a remote object to load a string from Spring's configuration files, but in testing I get a fault event like this:

        RPC Fault faultString="java.lang.NullPointerException" faultCode="Server.Processing" faultDetail="null"

    I have checked the configuration in remote-config.xml and services-config.xml, but it looks good. Some people have talked about this problem around the internet, and I think you can help me and them. I am using this environment: Flex Builder 3, BlazeDS 3.2.0, JBoss server. Full stack trace:

        [RPC Fault faultString="java.lang.NullPointerException" faultCode="Server.Processing" faultDetail="null"]
            at mx.rpc::AbstractInvoker/http://www.adobe.com/2006/flex/mx/internal::faultHandler()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\AbstractInvoker.as:220]
            at mx.rpc::Responder/fault()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\Responder.as:53]
            at mx.rpc::AsyncRequest/fault()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\AsyncRequest.as:103]
            at NetConnectionMessageResponder/statusHandler()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\messaging\channels\NetConnectionChannel.as:569]
            at mx.messaging::MessageResponder/status()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\messaging\MessageResponder.as:222]

    Read the article

  • Freemarker - lack of special character in email subject template causes email content to break

    - by freakman
    I'm fighting with a strange error. I'm using separate FreeMarker templates for the mail subject and body. Mail is sent using org.springframework.mail.javamail.JavaMailSender. Only templates that contain some special Swedish character work in my application (yes, you read that right... not the other way around). If I delete the character, my email content breaks. It then contains:

        MIME-Version: 1.0
        Content-Type: text/html;charset=UTF-8
        Content-Transfer-Encoding: 7bit
        .. html code here ..

    My freemarker.properties file:

        locale=sv_SE
        classic_compatible=false
        number_format=
        date_format=yyyy-MM-dd
        time_format=HH:mm
        datetime_format=yyyy-MM-dd HH:mm
        output_encoding=UTF-8
        url_escaping_charset=UTF-8
        auto_import=spring.ftl as spring
        auto_include=
        default_encoding=UTF-8
        localized_lookup=true
        strict_syntax=true
        whitespace_stripping=true
        template_update_delay=10

    I've tried converting the subject file with the dos2unix tool. Running 'file -bi subject.ftl' shows that the encoding is us-ascii; with the special character added, it is utf-8. This thing is surprisingly strange to me...
    // SOLUTION: use :set bomb and save the file in Vim.
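    Since the question revolves around template and output encoding, here is a minimal sketch of rendering a FreeMarker subject and body and attaching them to a MimeMessage with an explicit UTF-8 charset via Spring's MimeMessageHelper. The template names, model, and recipient are placeholders, not the poster's actual code.

        import java.util.Map;
        import javax.mail.internet.MimeMessage;
        import freemarker.template.Configuration;
        import org.springframework.mail.javamail.JavaMailSender;
        import org.springframework.mail.javamail.MimeMessageHelper;
        import org.springframework.ui.freemarker.FreeMarkerTemplateUtils;

        public class MailSketch {
            public void send(JavaMailSender mailSender, Configuration cfg, Map<String, Object> model) throws Exception {
                String subject = FreeMarkerTemplateUtils.processTemplateIntoString(
                        cfg.getTemplate("subject.ftl", "UTF-8"), model);
                String body = FreeMarkerTemplateUtils.processTemplateIntoString(
                        cfg.getTemplate("body.ftl", "UTF-8"), model);
                MimeMessage message = mailSender.createMimeMessage();
                // The third argument forces UTF-8 for both the subject header and the HTML body.
                MimeMessageHelper helper = new MimeMessageHelper(message, false, "UTF-8");
                helper.setTo("someone@example.com");
                helper.setSubject(subject.trim());
                helper.setText(body, true);   // true = HTML content
                mailSender.send(message);
            }
        }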

    Read the article

  • Why does Hibernate ignore the JPA2 standardized properties in my persistence.xml?

    - by Ophidian
    I have an extremely simple web application running in Tomcat using Spring 3.0.1, Hibernate 3.5.1, JPA 2, and Derby. I am defining all of my database connectivity in persistence.xml and merely using Spring for dependency injection. I am using embedded Derby as my database. Everything works correctly when I define the driver and url properties in persistence.xml in the classic Hibernate manner, like this:

        <property name="hibernate.connection.driver_class" value="org.apache.derby.jdbc.EmbeddedDriver"/>
        <property name="hibernate.connection.url" value="jdbc:derby:webdb;create=true"/>

    The problems occur when I switch my configuration to the JPA 2 standardized properties, like this:

        <property name="javax.persistence.jdbc.driver" value="org.apache.derby.jdbc.EmbeddedDriver"/>
        <property name="javax.persistence.jdbc.url" value="jdbc:derby:webdb;create=true"/>

    When using the JPA 2 property keys, the application bails hard with the following exception:

        java.lang.UnsupportedOperationException: The user must supply a JDBC connection

    Does anyone know why this is failing? NOTE: I have copied the javax... property strings straight from the Hibernate reference documentation, so a typo is extremely unlikely.
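    As a diagnostic sketch (not from the original post): bootstrapping the same persistence unit directly through the JPA API, passing the standard javax.persistence.jdbc.* keys as an overrides map, can show whether the provider itself honors them outside Spring's LocalContainerEntityManagerFactoryBean. The persistence unit name "webPU" is a placeholder.

        import java.util.HashMap;
        import java.util.Map;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class Jpa2PropsCheck {
            public static void main(String[] args) {
                Map<String, String> props = new HashMap<String, String>();
                props.put("javax.persistence.jdbc.driver", "org.apache.derby.jdbc.EmbeddedDriver");
                props.put("javax.persistence.jdbc.url", "jdbc:derby:webdb;create=true");
                // If this works but the Spring-managed factory does not, the problem lies in how
                // the properties reach the provider, not in the property names themselves.
                EntityManagerFactory emf = Persistence.createEntityManagerFactory("webPU", props);
                emf.createEntityManager().close();
                emf.close();
            }
        }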

    Read the article

  • Embedded Jetty resourceBase classpath URL

    - by drewzilla
    I'm embedding Jetty in a Spring based application. I configure my Jetty server in a Spring context file. The specific part of the configuration I'm having trouble with is this:

        <bean class="org.eclipse.jetty.webapp.WebAppContext">
            <property name="contextPath" value="/" />
            <property name="resourceBase" value="????????" />
            <property name="parentLoaderPriority" value="true" />
        </bean>

    If you see above, where I've put the ????????, I ideally want the resourceBase to reference a folder on my classpath. I'm deploying my application in a single executable JAR file and have a folder config/web/WEB-INF on my classpath. Jetty seems to be able to handle URLs defined in the resourceBase (e.g. jar:file:/myapp.jar!/config/web), but it doesn't seem to support classpath URLs. I get an IllegalArgumentException if I define something like classpath:config/web. This is a real pain for me. Does anyone know of any way to achieve this functionality? Thanks, Andrew
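    One common workaround, offered here as a hedged sketch rather than something from the original post, is to resolve the classpath folder to a URL at runtime and hand Jetty the resulting jar:file:...!/... string, since setResourceBase() accepts URL strings that Jetty's resource layer understands. The "config/web" folder name comes from the question; the wiring back into Spring is left out.

        import java.net.URL;
        import org.eclipse.jetty.webapp.WebAppContext;

        public class ClasspathResourceBaseSketch {
            public WebAppContext buildContext() {
                URL base = Thread.currentThread().getContextClassLoader().getResource("config/web");
                if (base == null) {
                    throw new IllegalStateException("config/web not found on the classpath");
                }
                WebAppContext context = new WebAppContext();
                context.setContextPath("/");
                // Inside an executable jar this becomes something like jar:file:/myapp.jar!/config/web
                context.setResourceBase(base.toExternalForm());
                context.setParentLoaderPriority(true);
                return context;
            }
        }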

    Read the article

  • Dependency Injection: How to maintain multiple configurations?

    - by Malax
    Hi StackOverflow, let's assume we've built a system with a DI framework which is working quite fine. This system currently uses JMS to "talk" with other systems not maintained by us. The majority of our customers like the JMS approach and use it according to our specification. The component which does all the messaging is injected with Spring into the rest of the application. Now we have the case that one customer cannot implement the JMS solution and wants to use another messaging technology. That's not a problem, because we can simply implement a messaging service using this technology and inject it into the rest of the application. But how are we supposed to handle the deployment and maintenance of the configuration? Since the application uses Spring, I could imagine checking in all the configurations I have for this application, and the system administrator could start the application, passing the name of the DI XML file to specify which configuration should be loaded. But... it just doesn't feel right. Are there any solutions for such cases available? What are the best practices you use? I could even imagine more complex scenarios which do not contain only one service substitution... Thanks a lot!
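    For illustration only (the file names and the system property are placeholders, not part of the question): the "pick a configuration at startup" idea can be reduced to a single variable by splitting the context into a fixed core file plus one interchangeable messaging file, selected from a property the administrator sets.

        import org.springframework.context.ApplicationContext;
        import org.springframework.context.support.ClassPathXmlApplicationContext;

        public class Bootstrap {
            public static void main(String[] args) {
                // e.g. -Dmessaging.impl=jms (default) or -Dmessaging.impl=othermq
                String impl = System.getProperty("messaging.impl", "jms");
                ApplicationContext ctx = new ClassPathXmlApplicationContext(
                        "classpath:applicationContext-core.xml",
                        "classpath:messaging-" + impl + ".xml");
                // Both messaging files define the same bean name, so the rest of the
                // application is wired identically regardless of the chosen technology.
            }
        }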

    Read the article

  • How do I pass parameters between request-scoped beans

    - by smayers81
    This is a question that has been bothering me for some time. My application uses ICEfaces for our UI framework and Spring 2.5 for dependency injection. In addition, Spring actually maintains all of our backing beans, not the ICEfaces framework, so our faces-config is basically empty. Navigation is not even really handled through navigation-rules; we perform manual redirects to new windows using window.open. All of our beans are defined in our appContext file as being request-scoped. I have page ABC, which is backed by BackingBeanABC. Inside that backing bean, I have a property, say: private Order order; I then have page XYZ, backed by BackingBeanXYZ. When I redirect from page ABC to page XYZ, I want to transfer the 'order' property from ABC to XYZ. The problem is that since everything is request-scoped and I'm performing a redirect, I am losing the value of 'order'. There has got to be an easier way to pass objects between beans in request scope during a redirect. Can anyone assist with this issue?
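    Since the two beans live in different requests, one common pattern is sketched below: put an identifier on the redirect URL and have the second request-scoped bean re-load the object. The orderId parameter name and the orderService lookup are hypothetical; they are not from the question.

        import javax.faces.context.FacesContext;

        public class BackingBeanXYZ {
            private Order order;
            private OrderService orderService;   // injected by Spring; hypothetical service

            // Page ABC opens e.g. window.open('/xyz.jsf?orderId=' + order.getId())
            public void init() {
                String id = FacesContext.getCurrentInstance()
                        .getExternalContext().getRequestParameterMap().get("orderId");
                if (id != null) {
                    this.order = orderService.findById(Long.valueOf(id));
                }
            }
        }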

    Read the article

  • Generating and unmarshalling Java classes when the unmarshalling input contains a DTD

    - by Hans Westerbeek
    Hi, for a Spring-based project, I have the following situation to solve: I have XML files coming in whose contents I will have to parse at runtime. Those XML files come with a DTD reference. I need to generate the classes that the unmarshaller churns out at build time, using the right XSD and the Maven2 plugin for the unmarshalling library of choice. This is not very hard to do once I have generated an XSD from the DTD. I want to use spring-oxm's Unmarshaller interface to do the unmarshalling at runtime; this I understand how to do. The XML files come in with a DTD reference, and all the unmarshalling libraries out there want to do unmarshalling based on an XSD. Now, as described in the Castor documentation, I can convert the DTD to an XSD and keep it on the classpath. However, when an actual XML file comes into the system it will still have that DTD reference at the top, and there's nothing I can really do about that (except for string replacing, which feels hacky in this case). Will this cause the unmarshaller, like Castor, to fail? Am I right in suspecting that this DTD reference will cause the unmarshalling to fail? Could I do pure DTD-based unmarshalling? Or can this somehow be prevented by providing detailed configuration to the unmarshaller? Until now, I have tried Castor, XMLBeans and XStream. Which would fit my purposes best? Has anyone else been in this situation? Did you also end up just doing manual DOM or SAX parsing?
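    One way to keep a DTD reference in the incoming documents from derailing parsing, shown purely as a sketch and not tied to any particular unmarshaller mentioned in the question, is to parse with a SAX reader whose EntityResolver swallows the DTD and hand the resulting SAXSource to the spring-oxm Unmarshaller.

        import java.io.FileReader;
        import java.io.StringReader;
        import javax.xml.transform.sax.SAXSource;
        import org.springframework.oxm.Unmarshaller;
        import org.xml.sax.EntityResolver;
        import org.xml.sax.InputSource;
        import org.xml.sax.XMLReader;
        import org.xml.sax.helpers.XMLReaderFactory;

        public class DtdIgnoringUnmarshal {
            public Object unmarshal(Unmarshaller unmarshaller, String path) throws Exception {
                XMLReader reader = XMLReaderFactory.createXMLReader();
                // Never try to fetch or validate against the referenced DTD.
                reader.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
                reader.setEntityResolver(new EntityResolver() {
                    public InputSource resolveEntity(String publicId, String systemId) {
                        return new InputSource(new StringReader(""));   // empty replacement for the DTD
                    }
                });
                SAXSource source = new SAXSource(reader, new InputSource(new FileReader(path)));
                return unmarshaller.unmarshal(source);
            }
        }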

    Read the article

  • How to make safe frequent DataSource switches for AbstractRoutingDataSource?

    - by serg555
    I implemented dynamic DataSource routing for Spring+Hibernate according to this article. I have several databases with the same structure and I need to select which DB will run each specific query. Everything works fine on localhost, but I am worried about how this will hold up in a real web site environment. They are using a static context holder to determine which datasource to use:

        public class CustomerContextHolder {
            private static final ThreadLocal<CustomerType> contextHolder = new ThreadLocal<CustomerType>();

            public static void setCustomerType(CustomerType customerType) {
                Assert.notNull(customerType, "customerType cannot be null");
                contextHolder.set(customerType);
            }

            public static CustomerType getCustomerType() {
                return (CustomerType) contextHolder.get();
            }

            public static void clearCustomerType() {
                contextHolder.remove();
            }
        }

    It is wrapped inside a ThreadLocal container, but what exactly does that mean? What will happen when two web requests call this piece of code in parallel?

        CustomerContextHolder.setCustomerType(CustomerType.GOLD);
        // <another user will switch customer type here to CustomerType.SILVER in another request>
        List<Item> goldItems = catalog.getItems();

    Is every web request wrapped into its own thread in Spring MVC? Will CustomerContextHolder.setCustomerType() changes be visible to other web users? My controllers have synchronizeOnSession=true. How do I make sure that nobody else will switch the datasource until I run the required query for the current user? Thanks.
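    For context, a minimal sketch of the other half of this pattern is shown below: the routing datasource itself, which reads the ThreadLocal key once per connection lookup, plus the usual set/try/finally usage so the key never leaks into a pooled thread's next request. This mirrors the pattern in the referenced article, but the class name here is illustrative.

        import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

        public class CustomerRoutingDataSource extends AbstractRoutingDataSource {
            @Override
            protected Object determineCurrentLookupKey() {
                // Called on the thread that asks for a connection, so each request
                // (one thread per request in a servlet container) sees only its own key.
                return CustomerContextHolder.getCustomerType();
            }
        }

        // Typical usage inside one request:
        //     CustomerContextHolder.setCustomerType(CustomerType.GOLD);
        //     try {
        //         List<Item> goldItems = catalog.getItems();
        //     } finally {
        //         CustomerContextHolder.clearCustomerType();
        //     }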

    Read the article

  • Career Choice in JEE, are EJBs standard?

    - by John Baker
    I have chosen to go the JEE route for a career path, but I have been having a hard time determining which core technologies I need to be most familiar with. I'm at the point where I can write web apps without a problem with JSPs and regular servlets using JDBC or some basic Hibernate stuff (I know HTML and CSS, and have used MVC extensively on a number of different platforms). What I'm trying to find out is whether there is some standard as far as JEE technologies go. When I look at most of the job listings, occasionally you will see someone mention Struts or Spring, but rarely do I see any mention of EJBs. So my question really is: are EJBs basically required by most JEE employers? Or are most of them working with POJOs? Is it a mix? I have a hard time figuring out if I should put the majority of my time into Struts, Spring/Hibernate, EJBs, etc. And if I do need to master EJBs, which version should I learn, 2.1 or 3.0? 3.0 has some obviously better features, but I figure a lot of companies probably chose to write their apps in 2.1 just because it was the standard of the time, and now migrating would be a big deal. Any advice on this is greatly appreciated.

    Read the article

  • Are application servers necessary? Advantages of using one? (And other JEE questions)

    - by Mike
    Apologies for the long question... there seem to be other similar questions on here, but none really clear up my confusion. I'd be really grateful if someone could confirm or correct my understanding: Java Enterprise Edition is a set of APIs for building enterprise applications, which take away the burden of developing parts of the system that aren't actually features of the application you are trying to build (i.e. messaging, transactions etc). To do this, you can use an application server, which implements these APIs. So you could use JBoss, Glassfish, WebSphere, WebLogic etc, which would provide your application with these enterprise services. However, there are many other implementations of these individual services available, such as ActiveMQ for messaging, Hibernate for persistence, OpenEJB etc. You can download these implementations as Java libraries, include them in your application, and use the services they provide in a similar way to using the services provided by an application server. So if my understanding is correct, my questions are:

        1. I've read in a lot of places that application servers are necessary for JEE features like EJB, but can't you just use an implementation such as OpenEJB and not need an application server at all?
        2. Are there any features that an application server provides which you cannot get from another source?
        3. Why would/wouldn't I choose an application server over a custom stack such as Tomcat, OpenEJB, ActiveMQ, and Hibernate?
        4. Is Spring a complete alternative to JEE? Does it ever require an application server, or always just a servlet container? Why would someone choose Spring over JEE?

    Any help would be much appreciated!

    Read the article

  • Ability to switch Persistence Unit dynamically within the application (JPA)

    - by MVK
    My application's data access layer is built using Spring and EclipseLink, and I am currently trying to implement the following feature: the ability to switch the current/active persistence unit dynamically for a user. I tried various options and finally ended up doing the following. In persistence.xml, declare multiple PUs. Create a class with as many EntityManagerFactory attributes as there are PUs defined. This will act as a factory and return the appropriate EntityManager based on my logic:

        public class MyEntityManagerFactory {
            @PersistenceUnit(unitName="PU_1")
            private EntityManagerFactory emf1;

            @PersistenceUnit(unitName="PU_2")
            private EntityManagerFactory emf2;

            public EntityManager getEntityManager(int releaseId) {
                // Logic goes here to return the appropriate EntityManager
            }
        }

    My spring-beans XML looks like this:

        <!-- First persistence unit -->
        <bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" id="emFactory1">
            <property name="persistenceUnitName" value="PU_1" />
        </bean>
        <bean class="org.springframework.orm.jpa.JpaTransactionManager" id="transactionManager1">
            <property name="entityManagerFactory" ref="emFactory1"/>
        </bean>
        <tx:annotation-driven transaction-manager="transactionManager1"/>

    The above section is repeated for the second PU (with names like emFactory2, transactionManager2 etc). I am a JPA newbie and I know that this is not the best solution. I would appreciate any assistance in implementing this requirement in a better/more elegant way! Thanks!
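    To make the factory idea above concrete, here is a hedged sketch of the selection method; the releaseId-to-unit rule is invented for illustration, since the question leaves that logic out.

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;

        public EntityManager getEntityManager(int releaseId) {
            // Hypothetical routing rule: release 1 lives in PU_1, everything else in PU_2.
            EntityManagerFactory emf = (releaseId == 1) ? emf1 : emf2;
            return emf.createEntityManager();
        }

    Note that an EntityManager created this way is application-managed, so the caller owns its lifecycle and has to close it, unlike the container-managed instances Spring injects.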

    Read the article

  • How to find all initializations of instance variables in a Java package?

    - by Hank Gay
    I'm in the midst of converting a legacy app to Spring. As part of the transition, we're converting our service classes from an "instantiate new ones whenever you need one" style to a Springleton style, so I need a way to make sure they don't have any state. I'm comfortable on the *nix command line, I have access to IntelliJ (this strikes me as a good fit for Structural Search and Replace, if I could figure out how to use it), and I could track down an Eclipse install if that would help. I just want to make absolutely sure I've found all the possible problems. UPDATE: Sorry for the confusion. I don't have a problem finding places where the old constructor was being called. What I'm looking for is a "bullet-proof" way to search all 100+ service classes for any sort of internal state. The most obvious one I could think of (and the only one I've really found so far) is cases where we use memoization in the classes, so they have instance variables that get initialized internally instead of via Spring. This means that when the same Springleton gets used for different requests, data can leak between them. Thanks.

    Read the article

  • What would be a better implementation of a shared variable among subclasses

    - by Churk
    So currently I have a spring unit testing application. And it requires me to get a session cookie from a foreign authentication source. Problem what that is, this authentication process is fairly expensive and time consuming, and I am trying to create a structure where I am authenticate once, by any subclass, and any subsequent subclass is created, it will reuse this session cookie without hitting the authentication process again. My problem right now is, the static cookie is null each time another subclass is created. And I been reading that using static as a global variable is a bad idea, but I couldn't think of another way to do this because of Spring framework setting things during run time and how I would set the cookie so that all other classes can use it. Another piece of information. The variable is being use, but is change able during run time. It is not a single user being signed in and used across the board. But more like a Sub1 would call login, and we have a cookie. Then multiple test will be using that login until SubX will come in and say, I am using different credential, so I need to login as something else. And repeats. Here is a outline of my code: public class Parent implements InitializingBean { protected static String BASE_URL; public static Cookie cookie; ... All default InitializingBean methods ... afterPropertiesSet() { cookie = // login process returns a cookie } } public class Sub1 extends Parent { @resource public String baseURL; @PostConstruct public void init() { // set parents with my baseURL; BASE_URL = baseURL; } public void doSomething() { // Do something with cookie, because it should have been set by parent class } } public class Sub2 extends Parent { @resource public String baseURL; @PostConstruct public void init() { // set parents with my baseURL; BASE_URL = baseURL; } public void doSomethingElse() { // Do something with cookie, because it should have been set by parent class } }

    Read the article

  • How to model and handle presentation DTOs to abstract from a complicated domain model?

    - by arrages
    Hi, I am developing an application that needs to work with a complex domain model using Hibernate. This application uses Spring MVC, and using the domain objects in the presentation layer is very messy, so I think I should use DTOs that go to and from my service layer so that they match what I need in my views. Now let's assume I have a CarLease entity whose properties are not simple Java primitives but are composed of other entities like Make, Model, etc.:

        public class CarLease {
            private Make make;
            private Model model;
            ...
        }

    Most properties are in this fashion, and they are selectable using drop-down selects on the JSP view; each will post back an ID to the controller. Now, considering some standard use cases (create, edit, display): how would you go about modeling the presentation DTOs to be used as form backing objects and for communication between the presentation and service layers? Would you create a different DTO for each case (create, edit, display)? Would you make DTOs for the complex attributes? If so, where would you translate the ID to an entity? How and where would you handle validation and DTO/domain assembly, and what would you return from service layer methods (create, edit, get)? As you can see, I know I will benefit by separating my view from the domain objects (very complex, with lots of stuff I don't need), but I am having a hard time finding any real-world examples and best practices for this. I need some architecture guidance from top to bottom. Please keep in mind I will use Spring MVC, in case that has a bearing on your answer. Thanks in advance.
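    Purely as an illustration of the "drop-downs post back IDs" part of the question (the names are invented, not a recommendation of a specific pattern): a form-backing DTO can carry only the selected IDs plus the simple fields, leaving the ID-to-entity translation to an assembler in the service layer that looks up Make and Model by ID and builds the CarLease entity.

        public class CarLeaseForm {
            private Long makeId;     // bound from the Make <select>
            private Long modelId;    // bound from the Model <select>

            public Long getMakeId() { return makeId; }
            public void setMakeId(Long makeId) { this.makeId = makeId; }
            public Long getModelId() { return modelId; }
            public void setModelId(Long modelId) { this.modelId = modelId; }
        }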

    Read the article

  • Job queueing and execution mechanism

    - by Calm Storm
    In my web service, all method calls submit jobs to a queue. Basically, these operations take a long time to execute, so all of them submit a job to a queue and return a status saying "Submitted". The client then keeps polling, using another service method, to check the status of the job. Presently, what I do is create my own Queue and Job classes that are Serializable and persist these jobs (i.e. their serialized byte-stream form) into the database. So an UpdateLogistics operation just queues up an "UpdateLogisticsJob" and returns. I have written my own JobExecutor which wakes up every N seconds, scans the database table for any existing jobs, and executes them. Note that the jobs have to be persisted because they have to survive app-server crashes. This was done a long time ago, and I used bespoke classes for my Queues, Jobs, Executors etc. But now I would like to know: has someone done something similar before? In particular, are there frameworks available for this? Something in Spring/Apache etc.? Any framework that is easy to adapt/debug and plays well with libraries like Spring would be great.
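    For illustration of the polling-executor shape described above (this is a hedged sketch, not a framework recommendation; JobDao and PersistedJob are invented names standing in for the poster's bespoke persistence classes):

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class JobPoller {
            private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

            public void start(final JobDao jobDao) {   // JobDao is hypothetical
                scheduler.scheduleWithFixedDelay(new Runnable() {
                    public void run() {
                        for (PersistedJob job : jobDao.findPending()) {
                            try {
                                job.execute();
                                jobDao.markComplete(job);
                            } catch (Exception e) {
                                jobDao.markFailed(job, e);
                            }
                        }
                    }
                }, 0, 30, TimeUnit.SECONDS);   // "wakes up every N seconds"
            }
        }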

    Read the article

  • What is the meaning of @ModelAttribute annotation at method argument level?

    - by beemaster
    The Spring 3 reference teaches us: "When you place it on a method parameter, @ModelAttribute maps a model attribute to the specific, annotated method parameter." I don't understand this magic spell, because I am sure that the model object's alias (the key value, if using ModelMap as the return type) is passed to the View only after the request handler method executes. Therefore, while the request handler method executes, the model object's name can't yet be mapped to the method parameter. To resolve this contradiction I went to Stack Overflow and found this detailed example. The author of the example said: "// The "personAttribute" model has been passed to the controller from the JSP". It seems he is charmed by the Spring reference... To dispel the charms, I deployed his sample app in my environment and cruelly cut the @ModelAttribute annotation from the method MainController.saveEdit. As a result, the application works without any changes! So I conclude: the @ModelAttribute annotation is not needed to pass the web form's field values to the argument's fields. Then I am stuck on the question: what is the meaning of the @ModelAttribute annotation? If its only purpose is to set an alias for the model object in the View, then why is this better than explicitly adding the object to the ModelMap?
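    For concreteness, here is a small sketch of the parameter-level annotation in a Spring MVC 3 controller; the handler and class names are illustrative, not taken from the referenced example.

        import org.springframework.stereotype.Controller;
        import org.springframework.ui.ModelMap;
        import org.springframework.web.bind.annotation.ModelAttribute;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RequestMethod;

        @Controller
        public class PersonController {

            @RequestMapping(value = "/edit", method = RequestMethod.POST)
            public String saveEdit(@ModelAttribute("personAttribute") Person person, ModelMap model) {
                // The form fields of the POST are bound onto 'person' whether or not the
                // annotation is present (which matches the poster's experiment); what
                // @ModelAttribute("personAttribute") adds is the name under which the bound
                // object is exposed to the view, instead of the default name derived from
                // the parameter type ("person").
                return "result";
            }
        }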

    Read the article

  • best way to avoid sql injection

    - by aauser
    I've got a domain model similar to this:

        1) User. Every user has many cities: @OneToMany(targetEntity=adv.domain.City.class...)
        2) City. Every city has many districts: @OneToMany(targetEntity=adv.domain.Distinct.class)
        3) Distinct

    My goal is to delete a Distinct when the user presses the delete button in the browser. The controller then gets the id of the Distinct and passes it to the business layer, where the method DistinctService.deleteDistinct(Long distinctId) should delegate the deletion to the DAO layer. So my question is where to put the security restrictions, and what is the best way to accomplish this. I want to be sure that I delete a Distinct belonging to the real user, i.e. that the user is the real owner of the City and the City is the real owner of the Distinct, so nobody except the owner can delete a Distinct using a simple URL like localhost/deleteDistinct/5. I can get the user from the httpSession in my controller and pass it to the business layer. After that I can get all the cities of this user and iterate over them to be sure that city.id == distinct.city_id, and then delete the Distinct. But that's rather ridiculous in my opinion. Also, I can write an SQL query like this:

        delete from t_distinct
        where t_distinct.city_id in (
            select t_city.id from t_city
            left join t_user on t_user.id = t_city.owner_id
            where t_user.id = ?)
        and t_distinct.id = ?

    So what is the best practice to add restrictions like this? I'm using Hibernate, Spring, and Spring MVC, by the way. Thank you.
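    As a hedged sketch of the same ownership check done in one statement with a parameterized HQL bulk delete (so the id values are bound as parameters rather than concatenated into the SQL): the entity and property names below (Distinct.city, City.owner) are guesses based on the mappings described, not the poster's actual class definitions.

        import org.hibernate.Session;

        public class DistinctDao {
            public boolean deleteOwnedDistinct(Session session, Long distinctId, Long userId) {
                int deleted = session.createQuery(
                        "delete from Distinct d " +
                        "where d.id = :distinctId " +
                        "and d.city.id in (select c.id from City c where c.owner.id = :userId)")
                    .setParameter("distinctId", distinctId)
                    .setParameter("userId", userId)
                    .executeUpdate();
                return deleted == 1;   // 0 means the row didn't exist or the user doesn't own it
            }
        }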

    Read the article
