Search Results

Search found 2359 results on 95 pages for 'transaction'.

Page 35/95 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • Problem using Hibernate-Search

    - by KCore
    Hi, I am using Hibernate Search for my application. It was well configured and running perfectly until some time back, when it suddenly stopped working. The reason, as far as I can tell, is the number of my model (bean) classes: I have some 90 classes which I add to my configuration while building my Hibernate Configuration. When I disable Hibernate Search (remove the search annotations and use Configuration instead of AnnotationConfiguration) and start my application, it works fine. But the same app, when I enable search, just hangs up. I tried debugging and found the exact place where it hangs: after adding all the classes to my AnnotationConfiguration object, when I call cfg.buildSessionFactory(), it never comes out of that statement. (I have waited for hours!!!) Also, when I decrease the number of my model classes (say to half, i.e. 50), it comes out of that statement and the application works fine. Can someone tell me why this is happening? My Hibernate versions are: hibernate-core-3.3.1.GA.jar hibernate-annotations-3.4.0.GA.jar hibernate-commons-annotations-3.1.0.GA.jar hibernate-search-3.1.0.GA.jar Also, if I need to avoid using AnnotationConfiguration, I read that I need to configure the search event listeners explicitly. Can anyone list all the necessary listeners and their respective classes? (I tried the standard ones given in Hibernate Search books, but they give me a ClassNotFoundException, and I have all the necessary libs on the classpath.) Here are the last few lines of the Hibernate trace I managed to pull: 16:09:32,814 INFO AnnotationConfiguration:369 - Hibernate Validator not found: ignoring 16:09:32,892 INFO ConnectionProviderFactory:95 - Initializing connection provider: org.hibernate.connection.C3P0ConnectionProvider 16:09:32,895 INFO C3P0ConnectionProvider:103 - C3P0 using driver: com.mysql.jdbc.Driver at URL: jdbc:mysql://localhost:3306/autolinkcrmcom_data 16:09:32,898 INFO C3P0ConnectionProvider:104 - Connection properties: {user=root, password=****} 16:09:32,900 INFO C3P0ConnectionProvider:107 - autocommit mode: false 16:09:33,694 INFO SettingsFactory:116 - RDBMS: MySQL, version: 5.1.37-1ubuntu5.1 16:09:33,696 INFO SettingsFactory:117 - 
JDBC driver: MySQL-AB JDBC Driver, version: mysql-connector-java-3.1.10 ( $Date: 2005/05/19 15:52:23 $, $Revision: 1.1.2.2 $ ) 16:09:33,701 INFO Dialect:175 - Using dialect: org.hibernate.dialect.MySQLDialect 16:09:33,707 INFO TransactionFactoryFactory:59 - Using default transaction strategy (direct JDBC transactions) 16:09:33,709 INFO TransactionManagerLookupFactory:80 - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) 16:09:33,711 INFO SettingsFactory:170 - Automatic flush during beforeCompletion(): disabled 16:09:33,714 INFO SettingsFactory:174 - Automatic session close at end of transaction: disabled 16:09:33,716 INFO SettingsFactory:181 - JDBC batch size: 15 16:09:33,719 INFO SettingsFactory:184 - JDBC batch updates for versioned data: disabled 16:09:33,721 INFO SettingsFactory:189 - Scrollable result sets: enabled 16:09:33,723 DEBUG SettingsFactory:193 - Wrap result sets: disabled 16:09:33,725 INFO SettingsFactory:197 - JDBC3 getGeneratedKeys(): enabled 16:09:33,727 INFO SettingsFactory:205 - Connection release mode: auto 16:09:33,730 INFO SettingsFactory:229 - Maximum outer join fetch depth: 2 16:09:33,732 INFO SettingsFactory:232 - Default batch fetch size: 1000 16:09:33,735 INFO SettingsFactory:236 - Generate SQL with comments: disabled 16:09:33,737 INFO SettingsFactory:240 - Order SQL updates by primary key: disabled 16:09:33,740 INFO SettingsFactory:244 - Order SQL inserts for batching: disabled 16:09:33,742 INFO SettingsFactory:420 - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory 16:09:33,744 INFO ASTQueryTranslatorFactory:47 - Using ASTQueryTranslatorFactory 16:09:33,747 INFO SettingsFactory:252 - Query language substitutions: {} 16:09:33,750 INFO SettingsFactory:257 - JPA-QL strict compliance: disabled 16:09:33,752 INFO SettingsFactory:262 - Second-level cache: enabled 16:09:33,754 INFO SettingsFactory:266 - Query cache: disabled 16:09:33,757 INFO SettingsFactory:405 - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge 16:09:33,759 INFO RegionFactoryCacheProviderBridge:61 - Cache provider: net.sf.ehcache.hibernate.EhCacheProvider 16:09:33,762 INFO SettingsFactory:276 - Optimize cache for minimal puts: disabled 16:09:33,764 INFO SettingsFactory:285 - Structured second-level cache entries: disabled 16:09:33,766 INFO SettingsFactory:314 - Statistics: disabled 16:09:33,769 INFO SettingsFactory:318 - Deleted entity synthetic identifier rollback: disabled 16:09:33,771 INFO SettingsFactory:333 - Default entity-mode: pojo 16:09:33,774 INFO SettingsFactory:337 - Named query checking : enabled 16:09:33,869 INFO Version:20 - Hibernate Search 3.1.0.GA 16:09:35,134 DEBUG DocumentBuilderIndexedEntity:157 - Field selection in projections is set to false for entity **com.xyz.abc**. recognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernateDocumentBuilderIndexedEntity Donno what the last line indicates ??? (hibernaterecognized....) After the last line it doesnt do anything (no trace too ) and just hangs....
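
    For reference, when AnnotationConfiguration is avoided, Hibernate Search's indexing listener can be wired onto a plain Configuration by hand. Below is a rough sketch in Java; it assumes Hibernate Search 3.1's org.hibernate.search.event.FullTextIndexEventListener and the standard event type names, which are worth double-checking against the exact versions listed above.

        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.Configuration;
        import org.hibernate.search.event.FullTextIndexEventListener;

        public class SearchBootstrap {
            public static SessionFactory build(Configuration cfg) {
                // One listener instance handles all index-relevant events.
                FullTextIndexEventListener listener = new FullTextIndexEventListener();
                cfg.setListener("post-insert", listener);
                cfg.setListener("post-update", listener);
                cfg.setListener("post-delete", listener);
                cfg.setListener("post-collection-recreate", listener);
                cfg.setListener("post-collection-remove", listener);
                cfg.setListener("post-collection-update", listener);
                return cfg.buildSessionFactory();
            }
        }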

    Read the article

  • Mercurial hook to disallow committing large binary files

    - by hekevintran
    I want to have a Mercurial hook that will run before committing a transaction that will abort the transaction if a binary file being committed is greater than 1 megabyte. I found the following code which works fine except for one problem. If my changeset involves removing a file, this hook will throw an exception. The hook (I'm using pretxncommit = python:checksize.newbinsize): from mercurial import context, util from mercurial.i18n import _ import mercurial.node as dpynode '''hooks to forbid adding binary file over a given size Ensure the PYTHONPATH is pointing where hg_checksize.py is and setup your repo .hg/hgrc like this: [hooks] pretxncommit = python:checksize.newbinsize pretxnchangegroup = python:checksize.newbinsize preoutgoing = python:checksize.nopull [limits] maxnewbinsize = 10240 ''' def newbinsize(ui, repo, node=None, **kwargs): '''forbid to add binary files over a given size''' forbid = False # default limit is 10 MB limit = int(ui.config('limits', 'maxnewbinsize', 10000000)) tip = context.changectx(repo, 'tip').rev() ctx = context.changectx(repo, node) for rev in range(ctx.rev(), tip+1): ctx = context.changectx(repo, rev) print ctx.files() for f in ctx.files(): fctx = ctx.filectx(f) filecontent = fctx.data() # check only for new files if not fctx.parents(): if len(filecontent) > limit and util.binary(filecontent): msg = 'new binary file %s of %s is too large: %ld > %ld\n' hname = dpynode.short(ctx.node()) ui.write(_(msg) % (f, hname, len(filecontent), limit)) forbid = True return forbid The exception: $ hg commit -m 'commit message' error: pretxncommit hook raised an exception: apps/helpers/templatetags/include_extends.py@bced6272d8f4: not found in manifest transaction abort! rollback completed abort: apps/helpers/templatetags/include_extends.py@bced6272d8f4: not found in manifest! I'm not familiar with writing Mercurial hooks, so I'm pretty confused about what's going on. Why does the hook care that a file was removed if hg already knows about it? Is there a way to fix this hook so that it works all the time? Update (solved): I modified the hook to filter out files that were removed in the changeset. def newbinsize(ui, repo, node=None, **kwargs): '''forbid to add binary files over a given size''' forbid = False # default limit is 10 MB limit = int(ui.config('limits', 'maxnewbinsize', 10000000)) ctx = repo[node] for rev in xrange(ctx.rev(), len(repo)): ctx = context.changectx(repo, rev) # do not check the size of files that have been removed # files that have been removed do not have filecontexts # to test for whether a file was removed, test for the existence of a filecontext filecontexts = list(ctx) def file_was_removed(f): """Returns True if the file was removed""" if f not in filecontexts: return True else: return False for f in itertools.ifilterfalse(file_was_removed, ctx.files()): fctx = ctx.filectx(f) filecontent = fctx.data() # check only for new files if not fctx.parents(): if len(filecontent) > limit and util.binary(filecontent): msg = 'new binary file %s of %s is too large: %ld > %ld\n' hname = dpynode.short(ctx.node()) ui.write(_(msg) % (f, hname, len(filecontent), limit)) forbid = True return forbid

    Read the article

  • What's the best Communication Pattern for EJB3-based applications?

    - by Hank
    I'm starting a JEE project that needs to be strongly scalable. So far, the concept was: several Message Driven Beans, responsible for different parts of the architecture each MDB has a Session Bean injected, handling the business logic a couple of Entity Beans, providing access to the persistence layer communication between the different parts of the architecture via Request/Reply concept via JMS messages: MDB receives msg containing activity request uses its session bean to execute necessary business logic returns response object in msg to original requester The idea was that by de-coupling parts of the architecture from each other via the message bus, there is no limit to the scalability. Simply start more components - as long as they are connected to the same bus, we can grow and grow. Unfortunately, we're having massive problems with the request-reply concept. Transaction management seems to keep getting in our way. It seems that session beans are not supposed to consume messages?! Reading http://blogs.sun.com/fkieviet/entry/request_reply_from_an_ejb and http://forums.sun.com/message.jspa?messageID=10338789, I get the feeling that people actually recommend against the request/reply concept for EJBs. If that is the case, how do you communicate between your EJBs? (Remember, scalability is what I'm after.) Details of my current setup: MDB 1 'TestController', uses (local) SLSB 1 'TestService' for business logic TestController.onMessage() makes TestService send a message to queue XYZ and requests a reply TestService uses Bean Managed Transactions TestService establishes a connection & session to the JMS broker via a joint connection factory upon initialization (@PostConstruct) TestService commits the transaction after sending, then begins another transaction and waits 10 sec for the response Message gets to MDB 2 'LocationController', which uses (local) SLSB 2 'LocationService' for business logic LocationController.onMessage() makes LocationService send a message back to the requested JMSReplyTo queue Same BMT concept, same @PostConstruct concept all use the same connection factory to access the broker Problem: The first message gets sent (by SLSB 1) and received (by MDB 2) OK. The sending of the returning message (by SLSB 2) is fine as well. However, SLSB 1 never receives anything - it just times out. I tried without the messageSelector, no change, still no message received. Is it not OK to consume messages in a session bean? 
SLSB 1 - TestService.java @Resource(name = "jms/mvs.MVSControllerFactory") private javax.jms.ConnectionFactory connectionFactory; @PostConstruct public void initialize() { try { jmsConnection = connectionFactory.createConnection(); session = jmsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE); System.out.println("Connection to JMS Provider established"); } catch (Exception e) { } } public Serializable sendMessageWithResponse(Destination reqDest, Destination respDest, Serializable request) { Serializable response = null; try { utx.begin(); Random rand = new Random(); String correlationId = rand.nextLong() + "-" + (new Date()).getTime(); // prepare the sending message object ObjectMessage reqMsg = session.createObjectMessage(); reqMsg.setObject(request); reqMsg.setJMSReplyTo(respDest); reqMsg.setJMSCorrelationID(correlationId); // prepare the publishers and subscribers MessageProducer producer = session.createProducer(reqDest); // send the message producer.send(reqMsg); System.out.println("Request Message has been sent!"); utx.commit(); // need to start second transaction, otherwise the first msg never gets sent utx.begin(); MessageConsumer consumer = session.createConsumer(respDest, "JMSCorrelationID = '" + correlationId + "'"); jmsConnection.start(); ObjectMessage respMsg = (ObjectMessage) consumer.receive(10000L); utx.commit(); if (respMsg != null) { response = respMsg.getObject(); System.out.println("Response Message has been received!"); } else { // timeout waiting for response System.out.println("Timeout waiting for response!"); } } catch (Exception e) { } return response; } SLSB 2 - LocationService.Java (only the reply method, rest is same as above) public boolean reply(Message origMsg, Serializable o) { boolean rc = false; try { // check if we have necessary correlationID and replyTo destination if (!origMsg.getJMSCorrelationID().equals("") && (origMsg.getJMSReplyTo() != null)) { // prepare the payload utx.begin(); ObjectMessage msg = session.createObjectMessage(); msg.setObject(o); // make it a response msg.setJMSCorrelationID(origMsg.getJMSCorrelationID()); Destination dest = origMsg.getJMSReplyTo(); // send it MessageProducer producer = session.createProducer(dest); producer.send(msg); producer.close(); System.out.println("Reply Message has been sent"); utx.commit(); rc = true; } } catch (Exception e) {} return rc; } sun-resources.xml <admin-object-resource enabled="true" jndi-name="jms/mvs.LocationControllerRequest" res-type="javax.jms.Queue" res-adapter="jmsra"> <property name="Name" value="mvs.LocationControllerRequestQueue"/> </admin-object-resource> <admin-object-resource enabled="true" jndi-name="jms/mvs.LocationControllerResponse" res-type="javax.jms.Queue" res-adapter="jmsra"> <property name="Name" value="mvs.LocationControllerResponseQueue"/> </admin-object-resource> <connector-connection-pool name="jms/mvs.MVSControllerFactoryPool" connection-definition-name="javax.jms.QueueConnectionFactory" resource-adapter-name="jmsra"/> <connector-resource enabled="true" jndi-name="jms/mvs.MVSControllerFactory" pool-name="jms/mvs.MVSControllerFactoryPool" />

    Read the article

  • NHibernate mapping error SQL Server 2008 Express

    - by developer
    Hi All, I tried an example from NHibernate in Action book and when I try to run the app, it throws an exception saying "Could not compile the mapping document: HelloNHibernate.Employee.hbm.xml" Below is my code, Employee.hbm.xml <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" auto-import="true"> <class name="HelloNHibernate.Employee, HelloNHibernate" lazy="false" table="Employee"> <id name="id" access="field"> <generator class="native"/> </id> <property name="name" access="field" column="name"/> <many-to-one access="field" name="manager" column="manager" cascade="all"/> </class> Program.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using NHibernate; using System.Reflection; using NHibernate.Cfg; namespace HelloNHibernate { class Program { static void Main(string[] args) { CreateEmployeeAndSaveToDatabase(); UpdateTobinAndAssignPierreHenriAsManager(); LoadEmployeesFromDatabase(); Console.WriteLine("Press any key to exit..."); Console.ReadKey(); } static void CreateEmployeeAndSaveToDatabase() { Employee tobin = new Employee(); tobin.name = "Tobin Harris"; using (ISession session = OpenSession()) { using (ITransaction transaction = session.BeginTransaction()) { session.Save(tobin); transaction.Commit(); } Console.WriteLine("Saved Tobin to the database"); } } static ISession OpenSession() { if (factory == null) { Configuration c = new Configuration(); c.AddAssembly(Assembly.GetCallingAssembly()); factory = c.BuildSessionFactory(); } return factory.OpenSession(); } static void LoadEmployeesFromDatabase() { using (ISession session = OpenSession()) { IQuery query = session.CreateQuery("from Employee as emp order by emp.name asc"); IList<Employee> foundEmployees = query.List<Employee>(); Console.WriteLine("\n{0} employees found:", foundEmployees.Count); foreach (Employee employee in foundEmployees) Console.WriteLine(employee.SayHello()); } } static void UpdateTobinAndAssignPierreHenriAsManager() { using (ISession session = OpenSession()) { using (ITransaction transaction = session.BeginTransaction()) { IQuery q = session.CreateQuery("from Employee where name='Tobin Harris'"); Employee tobin = q.List<Employee>()[0]; tobin.name = "Tobin David Harris"; Employee pierreHenri = new Employee(); pierreHenri.name = "Pierre Henri Kuate"; tobin.manager = pierreHenri; transaction.Commit(); Console.WriteLine("Updated Tobin and added Pierre Henri"); } } } static ISessionFactory factory; } } Employee.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace HelloNHibernate { class Employee { public int id; public string name; public Employee manager; public string SayHello() { return string.Format("'Hello World!', said {0}.", name); } } } App.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler,NHibernate"/> </configSections> <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property> <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property> <property name="connection.connection_string"> Data Source=SQLEXPRESS2008;Integrated Security=True; User ID=SQL2008;Password=;initial catalog=HelloNHibernate </property> <property name="dialect">NHibernate.Dialect.MsSql2008Dialect</property> <property name="show_sql">false</property> 
</session-factory> </hibernate-configuration> </configuration>

    Read the article

  • Cheetah pre-compiled template usage question

    - by leo
    For performance reasons, as suggested here, I am studying how to use the pre-compiled template. I edited hello.tmpl in the template directory as #attr title = "This is my Template" \${title} Hello \${who}! then issued cheetah-compile.exe .\hello.tmpl and got hello.py. In another Python file, runner.py, I have #!/usr/bin/env python from Cheetah.Template import Template from template import hello def myMethod(): tmpl = hello.hello(searchList=[{'who' : 'world'}]) results = tmpl.respond() print tmpl if __name__ == '__main__': myMethod() But the outcome is ${title} Hello ${who}! Debugging for a while, I found that inside hello.py def respond(self, trans=None): ## CHEETAH: main method generated for this template if (not trans and not self._CHEETAH__isBuffering and not callable(self.transaction)): trans = self.transaction # is None unless self.awake() was called if not trans: trans = DummyTransaction() it looks like trans is None, so it goes to DummyTransaction. What did I miss here? Any suggestions on how to fix it?

    Read the article

  • SQL Queries for Creating a rollback point and to rollback to that specific point

    - by Santhosha
    Hi, as per my project requirements I want to perform two operations: Password Change and Unlock Account (only unlocking the account, no password change!). I want to return success only if both transactions succeed. Say the password change succeeds and the unlock fails; then I cannot send success or failure. So I want to create a rollback point before the password change; if both queries execute successfully, I will commit the transaction. If one of the queries fails, I will discard the changes by rolling back to the rollback point. I am doing this in C++ using ADO. Are there any SQL queries with which I can create the rollback point, revert to the rollback point, and commit the transaction? I am using the commands below. For password change: ALTER LOGIN [username] WITH PASSWORD = N'password' For unlock account: ALTER LOGIN [%s] WITH CHECK_POLICY = OFF ALTER LOGIN [%s] WITH CHECK_POLICY = ON Thanks in advance!! Santhosh
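
    The rollback-point idea maps onto T-SQL's SAVE TRANSACTION / ROLLBACK TRANSACTION, or onto the driver's own transaction API. Below is a sketch of the shape in Java/JDBC rather than C++/ADO; the connection URL and login name are placeholders, and it assumes ALTER LOGIN is permitted inside an explicit transaction on the target server.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Savepoint;
        import java.sql.Statement;

        public class UnlockAndChangePassword {
            public static void run(String url, String login, String newPassword) throws Exception {
                try (Connection conn = DriverManager.getConnection(url)) {
                    conn.setAutoCommit(false);                             // open an explicit transaction
                    Savepoint before = conn.setSavepoint("beforeChanges"); // the "rollback point"
                    try (Statement st = conn.createStatement()) {
                        // Note: login/newPassword must be validated by the caller;
                        // these statements cannot be parameterized like ordinary DML.
                        st.executeUpdate("ALTER LOGIN [" + login + "] WITH PASSWORD = N'" + newPassword + "'");
                        st.executeUpdate("ALTER LOGIN [" + login + "] WITH CHECK_POLICY = OFF");
                        st.executeUpdate("ALTER LOGIN [" + login + "] WITH CHECK_POLICY = ON");
                        conn.commit();                                     // both operations succeeded
                    } catch (Exception e) {
                        conn.rollback(before);                             // revert to the rollback point
                        conn.rollback();                                   // end the transaction
                        throw e;
                    }
                }
            }
        }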

    Read the article

  • EJB3 with Spring

    - by fish
    I have understood that if I use EJB in a Spring context, I get all the same benefits as if I were using it in a "pure" EJB3 environment. Is this true? I have googled but can't find a definitive, clear answer. For example, let's say I have a session bean that updates some tables in the database and it throws a system exception. In a "pure" EJB3 environment the transaction is rolled back. What if I, for example, @Autowire this bean using Spring? Does Spring take care of the transaction handling the same way the EJB3 container does? Or what? Does it maybe require some specific configuration, or is it fully "automatic"?
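
    For comparison, Spring's rollback-on-exception behaviour is not automatic for an @Autowired bean: it only applies when the bean is Spring-managed and declarative transactions are switched on (a transaction-manager bean plus <tx:annotation-driven/> or its Java-config equivalent). A minimal sketch of the Spring-side equivalent, assuming @Transactional is used instead of the EJB container's CMT:

        import org.springframework.stereotype.Service;
        import org.springframework.transaction.annotation.Transactional;

        @Service
        public class AccountService {

            // Like an EJB "system exception", any unchecked exception thrown here marks
            // the transaction for rollback; checked exceptions only trigger a rollback
            // because of the explicit rollbackFor setting.
            @Transactional(rollbackFor = Exception.class)
            public void updateTables() {
                // ... JDBC / JPA / Hibernate work against the database ...
            }
        }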

    Read the article

  • FakeTable with tSQLt remove triggers

    - by user1454695
    I have just started to use tSQLt and am about to test a trigger. I call the FakeTable procedure and run my test, but the trigger is not executed. If I don't use FakeTable, the trigger is executed. That seems really bad, and I can't find any info about a method to re-add them. Then I figured that since the triggers are removed by FakeTable, I could recreate them after the call, and put the following code in my test: DECLARE @createTrigger NVARCHAR(MAX); SELECT @createTrigger = OBJECT_DEFINITION(OBJECT_ID('MoveDataFromAToB')) EXEC tSQLt.FakeTable 'dbo.A'; EXEC(@createTrigger); I got the following error: "There is already an object named 'MoveDataFromAToB' in the database.{MoveDataFromAToB,14} (There was also a ROLLBACK ERROR -- The current transaction cannot be committed and cannot be rolled back to a savepoint. Roll back the entire transaction.{Private_RunTest,60})" Does anyone have any experience with tSQLt and know any workaround for this problem?

    Read the article

  • Linked Server related

    - by rmdussa
    I have two instances of SQL Server: Server1 (SQL Server 2008) Server2 (SQL Server 2005) I am executing a stored procedure from Server1 which references tables on Server2. It is working fine in my test environment: Server1 runs Vista SP2, SQL Server 2008; Server2 runs Windows XP SP2, SQL Server 2005. However, it is not working in the production environment: Server1 runs Vista SP1, SQL Server 2008; Server2 runs Windows XP SP2, SQL Server 2005. The error message I receive is: OLE DB provider "SQLNCLI10" for linked server "Server2" returned message "No transaction is active.". Msg 7391, Level 16, State 2, Line 21 The operation could not be performed because OLE DB provider "SQLNCLI10" for linked server "Server2" was unable to begin a distributed transaction.

    Read the article

  • DROP TABLE fails for temp table

    - by StarBright
    I have a client application that creates a temp table, then performs a bulk insert into the temp table, then executes some SQL using the table before deleting it. Pseudo-code: open connection begin transaction CREATE TABLE #Temp ([Id] AS int) bulk insert 500 rows into #Temp UPDATE [OtherTable] SET [Status]=0 WHERE [Id] IN (SELECT [Id] FROM #Temp) AND [Group]=1 DELETE FROM #Temp WHERE [Id] IN (SELECT [Id] FROM [OtherTable] WHERE [Group]=1) INSERT INTO [OtherTable] ([Group], [Id]) SELECT 1 as [Group], [DocIden] FROM #Temp DROP TABLE #Temp COMMIT TRANSACTION CLOSE CONNECTION This is failing with an error on the DROP statement: Cannot drop the table '#Temp', because it does not exist or you do not have permission. I can't imagine how this failure could occur without something else going on first, but I don't see any other failures occurring before this. Is there anything that I'm missing that could be causing this to happen?

    Read the article

  • Possible uncommitted transactions causing "System.Data.SqlClient.SqlException: Timeout expired" error

    - by Michael
    My application requires a user to log in and allows them to edit a list of things. However, it seems that if the same user always logs in and out and edits the list, this user will run into a "System.Data.SqlClient.SqlException: Timeout expired." error. I've read comments about increasing the timeout period but I've also read a comment about it possibly caused by uncommitted transactions. And I do have one going in the application. I'll provide the code I'm working with and there is an IF statement in there that I was a little iffy about but it seemed like a reasonable thing to do. I'll just go over what's going on here, there is a list of objects to update or add into the database. New objects created in the application are given an ID of 0 while existing objects have their own ID's generated from the DB. If the user chooses to delete some objects, their IDs are stored in a separate list of Integers. Once the user is ready to save their changes, the two lists are passed into this method. By use of the IF statement, objects with ID of 0 are added (using the Add stored procedure) and those objects with non-zero IDs are updated (using the Update stored procedure). After all this, a FOR loop goes through all the integers in the "removal" list and uses the Delete stored procedure to remove them. A transaction is used for all this. Public Shared Sub UpdateSomethings(ByVal SomethingList As List(Of Something), ByVal RemovalList As List(Of Integer)) Using DBConnection As New SqlConnection(conn) DBConnection.Open() Dim MyTransaction As SqlTransaction MyTransaction = DBConnection.BeginTransaction() Try For Each SomethingItem As Something In SomethingList Using MyCommand As New SqlCommand() MyCommand.Connection = DBConnection If SomethingItem.ID > 0 Then MyCommand.CommandText = "UpdateSomething" Else MyCommand.CommandText = "AddSomething" End If MyCommand.Transaction = MyTransaction MyCommand.CommandType = CommandType.StoredProcedure With MyCommand.Parameters If MyCommand.CommandText = "UpdateSomething" Then .Add("@id", SqlDbType.Int).Value = SomethingItem.ID End If .Add("@stuff", SqlDbType.Varchar).Value = SomethingItem.Stuff End With MyCommand.ExecuteNonQuery() End Using Next For Each ID As Integer In RemovalList Using MyCommand As New SqlCommand("DeleteSomething", DBConnection) MyCommand.Transaction = MyTransaction MyCommand.CommandType = CommandType.StoredProcedure With MyCommand.Parameters .Add("@id", SqlDbType.Int).Value = ID End With MyCommand.ExecuteNonQuery() End Using Next MyTransaction.Commit() Catch ex As Exception MyTransaction.Rollback() 'Exception handling goes here End Try End Using End Sub There are three stored procedures used here as well as some looping so I can see how something can be holding everything up if the list is large enough. Other users can log in to the system at the same time just fine though. I'm using Visual Studio 2008 to debug and am using SQL Server 2000 for the DB.

    Read the article

  • Deadlock error in INSERT statement

    - by Gnanam
    We've got a web-based application. There is a time-bound database operation (INSERTs and UPDATEs) in the application which takes a long time to complete, so this particular flow has been moved into a Java thread so it will not wait (block) until the whole database operation is completed. My problem is that if more than one user comes across this particular flow, I'm facing the following error thrown by PostgreSQL: org.postgresql.util.PSQLException: ERROR: deadlock detected Detail: Process 13560 waits for ShareLock on transaction 3147316424; blocked by process 13566. Process 13566 waits for ShareLock on transaction 3147316408; blocked by process 13560. The above error is consistently thrown in INSERT statements. Additional information: 1) I have a PRIMARY KEY defined on this table. 2) There are FOREIGN KEY references in this table. 3) A separate database connection is passed to each Java thread. Technologies: Web Server: Tomcat v6.0.10 Java v1.6.0 Servlet Database: PostgreSQL v8.2.3 Connection Management: pgpool II
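
    Whatever the root cause turns out to be, a common mitigation is to retry the unit of work when PostgreSQL reports a deadlock. A rough sketch, assuming SQLState 40P01 (deadlock_detected) and that each attempt re-runs the whole transaction on the thread's own connection:

        import java.sql.Connection;
        import java.sql.SQLException;

        public final class DeadlockRetry {
            private static final String DEADLOCK_SQLSTATE = "40P01"; // PostgreSQL deadlock_detected

            public interface Work {
                void run(Connection conn) throws SQLException;
            }

            public static void runWithRetry(Connection conn, Work work, int maxAttempts) throws SQLException {
                for (int attempt = 1; ; attempt++) {
                    try {
                        conn.setAutoCommit(false);
                        work.run(conn);   // the INSERTs/UPDATEs of one logical unit
                        conn.commit();
                        return;
                    } catch (SQLException e) {
                        conn.rollback();
                        if (!DEADLOCK_SQLSTATE.equals(e.getSQLState()) || attempt >= maxAttempts) {
                            throw e;      // not a deadlock, or out of attempts
                        }
                        // otherwise loop and re-run the whole unit of work
                    }
                }
            }
        }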

    Read the article

  • How to create JPA EntityManager in Spring Context?

    - by HDave
    I have a JPA/Spring application that uses Hibernate as the JPA provider. In one portion of the code, I have to manually create a DAO in my application with the new operator rather than use Spring DI. When I do this, the @PersistenceContext annotation is not processed by Spring. In my code where I create the DAO I have an EntityManagerFactory which I used to set the entityManager as follows: @PersistenceUnit private EntityManagerFactory entityManagerFactory; MyDAO dao = new MyDAOImpl(); dao.setEntityManager(entityManagerFactory.createEntityManager()); The problem is that when I do this, I get a Hibernate error: Could not find UserTransaction in JNDI [java:comp/UserTransaction] It's as if the HibernateEntityManager never received all the settings I've configured in Spring: <bean id="myAppTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="myapp-core" /> <property name="persistenceUnitPostProcessors"> <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor"> <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" /> </bean> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> </props> </property> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <!-- The following use the PropertyPlaceholderConfigurer but it doesn't work in Eclipse --> <property name="database" value="$DS{hibernate.database}" /> <property name="databasePlatform" value="$DS{hibernate.dialect}" /> I am not sure, but I think the issue might be that I am not creating the entity manager correctly with the plain vanilla createEntityManager() call rather than passing in a map of properties. I say this because when Spring's LocalContainerEntityManagerFactoryBean proxy's the call to Hibernate's createEntityManager() all of the Spring configured options are missing. That is, there is no Map argument to the createEntityManager() call. Perhaps it is another problem entirely however....not sure!
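
    One thing that may be worth trying before digging into the JTA settings: instead of handing the DAO a raw createEntityManager() result (which lives outside Spring's transaction management), obtain a transaction-aware proxy from Spring. A sketch, assuming SharedEntityManagerCreator is available in the Spring version in use, and reusing MyDAO/MyDAOImpl from the snippet above:

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import org.springframework.orm.jpa.SharedEntityManagerCreator;

        public class DaoFactory {
            public static MyDAO create(EntityManagerFactory emf) {
                // Proxy that delegates to the EntityManager bound to the current
                // Spring-managed transaction (the same mechanism @PersistenceContext uses).
                EntityManager shared = SharedEntityManagerCreator.createSharedEntityManager(emf);
                MyDAO dao = new MyDAOImpl();   // manually constructed, as in the question
                dao.setEntityManager(shared);
                return dao;
            }
        }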

    Read the article

  • Bulk inserts into sqlite db on the iphone...

    - by akaii
    I'm inserting a batch of 100 records, each containing a dictonary containing arbitrarily long HTML strings, and by god, it's slow. On the iphone, the runloop is blocking for several seconds during this transaction. Is my only recourse to use another thread? I'm already using several for acquiring data from HTTP servers, and the sqlite documentation explicitly discourages threading with the database, even though it's supposed to be thread-safe... Is there something I'm doing extremely wrong that if fixed, would drastically reduce the time it takes to complete the whole operation? NSString* statement; statement = @"BEGIN EXCLUSIVE TRANSACTION"; sqlite3_stmt *beginStatement; if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &beginStatement, NULL) != SQLITE_OK) { printf("db error: %s\n", sqlite3_errmsg(database)); return; } if (sqlite3_step(beginStatement) != SQLITE_DONE) { sqlite3_finalize(beginStatement); printf("db error: %s\n", sqlite3_errmsg(database)); return; } NSTimeInterval timestampB = [[NSDate date] timeIntervalSince1970]; statement = @"INSERT OR REPLACE INTO item (hash, tag, owner, timestamp, dictionary) VALUES (?, ?, ?, ?, ?)"; sqlite3_stmt *compiledStatement; if(sqlite3_prepare_v2(database, [statement UTF8String], -1, &compiledStatement, NULL) == SQLITE_OK) { for(int i = 0; i < [items count]; i++){ NSMutableDictionary* item = [items objectAtIndex:i]; NSString* tag = [item objectForKey:@"id"]; NSInteger hash = [[NSString stringWithFormat:@"%@%@", tag, ownerID] hash]; NSInteger timestamp = [[item objectForKey:@"updated"] intValue]; NSData *dictionary = [NSKeyedArchiver archivedDataWithRootObject:item]; sqlite3_bind_int( compiledStatement, 1, hash); sqlite3_bind_text( compiledStatement, 2, [tag UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_text( compiledStatement, 3, [ownerID UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_int( compiledStatement, 4, timestamp); sqlite3_bind_blob( compiledStatement, 5, [dictionary bytes], [dictionary length], SQLITE_TRANSIENT); while(YES){ NSInteger result = sqlite3_step(compiledStatement); if(result == SQLITE_DONE){ break; } else if(result != SQLITE_BUSY){ printf("db error: %s\n", sqlite3_errmsg(database)); break; } } sqlite3_reset(compiledStatement); } timestampB = [[NSDate date] timeIntervalSince1970] - timestampB; NSLog(@"Insert Time Taken: %f",timestampB); // COMMIT statement = @"COMMIT TRANSACTION"; sqlite3_stmt *commitStatement; if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &commitStatement, NULL) != SQLITE_OK) { printf("db error: %s\n", sqlite3_errmsg(database)); } if (sqlite3_step(commitStatement) != SQLITE_DONE) { printf("db error: %s\n", sqlite3_errmsg(database)); } sqlite3_finalize(beginStatement); sqlite3_finalize(compiledStatement); sqlite3_finalize(commitStatement);

    Read the article

  • How to break a Hibernate session?

    - by Péter Török
    In the Hibernate reference, it is stated several times that "All exceptions thrown by Hibernate are fatal. This means you have to roll back the database transaction and close the current Session. You aren't allowed to continue working with a Session that threw an exception." One of our legacy apps uses a single session to update/insert many records from files into a DB table. Each record update/insert is done in a separate transaction, which is then duly committed (or rolled back in case an error occurred). Then for the next record a new transaction is opened, etc. But the same session is used throughout the whole process, even if a HibernateException was caught in the middle. We are using Oracle 9i btw with Hibernate 3.24.sp1 on JBoss 4.2. Reading the above in the book, I realized that this design may fail. So I refactored the app to use a separate session for each record update. In a unit test with a mock session factory, I could prove that it is now requesting a new session for each record update. So far, so good. However, we found no way to reproduce the session failure while testing the whole app (would this be a stress test btw, or ...?). We thought of shutting down the listener of the DB, but we realized that the app is keeping a bunch of connections open to the DB, and the listener would not affect those connections. (This is a web app, activated once every night by a scheduler, but it can also be activated via the browser.) Then we tried to kill some of those connections in the DB while the app was processing updates - this resulted in some failed updates, but then the app happily continued. Apparently Hibernate is clever enough to reopen broken connections under the hood without breaking the whole session. So this might not be a critical issue, as our app seems to be robust enough even in its original form. However, the issue keeps bugging me. I would like to know: under what circumstances does the Hibernate session really become unusable after a HibernateException was thrown? How can I reproduce this in a test? (What's the proper term for such a test?)
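
    For reference, the session-per-unit-of-work pattern the documentation implies (and which the refactoring described above moves towards) looks roughly like the following minimal sketch, with the Session discarded after any HibernateException:

        import org.hibernate.HibernateException;
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;

        public class RecordImporter {
            private final SessionFactory sessionFactory;

            public RecordImporter(SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            // One Session and one Transaction per record; both are abandoned on failure.
            public void importRecord(Object record) {
                Session session = sessionFactory.openSession();
                Transaction tx = null;
                try {
                    tx = session.beginTransaction();
                    session.saveOrUpdate(record);
                    tx.commit();
                } catch (HibernateException e) {
                    if (tx != null) {
                        tx.rollback();
                    }
                    throw e;               // never reuse the session after this point
                } finally {
                    session.close();
                }
            }
        }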

    Read the article

  • What to name a method

    - by coffeeaddict
    I'm debating what to name this method: CloseCashTransaction(Cash.Id, -1, true); or CompleteCyberCashTransaction(Cash.Id, -1, true); or is neither any good? In business terms/process, by sending in these 3 values I'm essentially "closing the transaction" or "completing the transaction" in our workflow. However, on the developer side, I can't infer what "Complete" or "Close" means. It forces me to look into the internals of the method. My struggle is that I try to name methods to convey what they are doing. Complete is just way too general and forces the consumer of the method to dive into the code every time I use words like this. When I see stuff like this all over the code, I have to take so much time to figure out what the methods are actually doing. And if the comments suck, I end up having to look at all the logic in that method because neither the comments nor the method name really conveys what's going on.

    Read the article

  • Spring WS & Validator interceptor

    - by mada
    I have an endpoint mapping a web service which is used to insert some keywords into the database: @Transactional(readOnly = false, isolation = Isolation.SERIALIZABLE) public Source saveKW(...). The input is a request. I would like to add an interceptor on the method in order to validate the parameters; this interceptor will read some values from the DB. I would like this interceptor to be EMBEDDED in the transaction declared for the endpoint (or the opposite). In other words, I want them to be in the same transaction. Ideally I'm looking for something like this with annotations: @Transactional(readOnly = false, isolation = Isolation.SERIALIZABLE) @validator("KeyWordValidator.class") public Source saveKW(...) where KeyWordValidator would be the class validating the parameters. Do you have any ideas or short examples of how to implement this, either this way or in another realistic way?

    Read the article

  • Inherited properties aren't bound during databinding

    - by Josh
    I have two interfaces, IAuditable and ITransaction. public interface IAuditable{ DateTime CreatedOn { get; } string CreatedBy { get; } } public interface ITransaction : IAuditable { double Amount{ get; } } And a class that implements ITransaction, called Transaction. public class Transaction : ITransaction{ public DateTime CreatedOn { get { return DateTime.Now; } } public string CreatedBy { get { return "aspnet"; } } public double Amount { get { return 0; } } } When I bind a list of ITransactions to a datagrid and use auto-created columns, only the Amount gets bound. CreatedBy and CreatedOn are not seen. Is there a way I can get these values visible during databinding?

    Read the article

  • SubSonic: MySqlDataReader closes connection.

    - by SchlaWiener
    I am using SubSonic 2.1 and encountered a problem while executing a Transaction with SharedDbConnectionScope and TransactionScope. The problem is that in the obj.Save() method I get a "The connection must be valid and open" exception. I tracked the problem down to this line: // Loads a SubSonic ActiveRecord object User user = new User(User.Columns.Username, "John Doe"); In this constructor of the User class a method "LoadParam" is called which eventually does if (rdr != null) rdr.Close(); It looks like the rdr.Close() implicitly closes my connection, which is fine when using the AutomaticConnection, but during a transaction it is usually not a good idea to close the connection :-) My question is whether this is by design or whether it's an error in the MySqlDataReader.

    Read the article

  • How do I enable transactions

    - by acidzombie24
    I have a similar question to how to check if you are in a transaction. Instead of checking, how do I allow nested transactions? I am using a Microsoft SQL file database with ADO.NET. I have seen examples using T-SQL and examples starting transactions using begin and using transaction names. When calling connection.BeginTransaction, I call another function, pass in the same connection, and it calls BeginTransaction again, which gives me the exception SqlConnection does not support parallel transactions. It appears many Microsoft variants allow this, but I can't figure out how to do it with my .mdf file. How do I allow nested transactions with a Microsoft SQL file database using C# and ADO.NET?

    Read the article

  • Unable to commit to Subversion

    - by Ewan Makepeace
    I have a client who had to rebuild his automated build server. He checked out his project folder from my Subversion server but is now no longer able to commit - he gets this error: Error: Commit failed (details follow): Error: Cannot write to the prototype revision file of transaction '551-1' because a Error: previous representation is currently being written by another process Finished!: I have searched Google but although this error has often been reported there is no clear explanation - does anyone on StackOverflow have a solution? UPDATE: Nobody else commits to that repository, so it was not a stuck transaction (at least not from another user). In the end we found that permissions were not set correctly. Not that you would know it from this message, but that fixed the problem.

    Read the article

  • Recordset manipulation in SSIS

    - by JSacksteder
    In my SSIS job, I have a need to accumulate a set of rows and commit them all transactionally when processing has completed successfully. If this were pure SQL, I would use a temp table inside a transaction. In SSIS there are a number of issues complicating this. It's difficult to have multiple components share the same transaction, and having temp tables that do not exist at design time is a pain. If I use Recordsets inside SSIS for this purpose, there are other issues. My understanding is that an 'Execute SQL' component will re-initialize the Recordset when it runs, so I can't use that to append an additional row. Is there a way to create an OLE DB connection that references an in-memory Recordset? Is there a better way to achieve this result?

    Read the article

  • iPhone In-App Purchase Store Kit error -1003 "Cannot connect to iTunes Store"

    - by Rei
    Hi all- I've been working on adding in-app purchases and was able to create and test in-app purchases using Store Kit (yay!). During testing, I exercised my app in a way which caused the app to crash mid purchase (so I guess the normal cycle of receiving paymentQueue:updatedTransactions and calling finishTransaction was interrupted). Now I am unable to successfully complete any transactions and instead am getting only transactions with transactionState SKPaymentTransactionStateFailed when paymentQueue:updatedTransactions is called. The transaction.error.code is -1003 and the transaction.error.localizedDescription is "Cannot connect to iTunes Store"! I have tried removing all products from iTunesConnect, and rebuilt them using different identifiers but that did not help. I have also tried using the App Store app to really connect to the real App Store and download some apps so I do have connectivity. Finally, I have visited the Settings:Store app to make sure I am signed out of my normal app store account. Any ideas? -Rei

    Read the article

  • versioning fails for onetomany collection holder

    - by Alexander Vasiljev
    Given the parent entity @Entity public class Expenditure implements Serializable { ... @OneToMany(mappedBy = "expenditure", cascade = CascadeType.ALL, orphanRemoval = true) @OrderBy() private List<ExpenditurePeriod> periods = new ArrayList<ExpenditurePeriod>(); @Version private Integer version = 0; ... } and the child one @Entity public class ExpenditurePeriod implements Serializable { ... @ManyToOne @JoinColumn(name="expenditure_id", nullable = false) private Expenditure expenditure; ... } While updating both parent and child in one transaction, org.hibernate.StaleObjectStateException is thrown: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect). Indeed, Hibernate issues two SQL updates: one changing parent properties and another changing child properties. Do you know a way to get rid of the parent update when changing the child? The update results both in inefficiency and in a false positive for the optimistic lock. Note that both child and parent save their state in the DB correctly. The Hibernate version is 3.5.1-Final.
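
    If the extra parent UPDATE is being driven purely by the version bump for the collection change, one knob to experiment with is excluding the collection from optimistic locking. A sketch, assuming org.hibernate.annotations.OptimisticLock is available in Hibernate 3.5.1, that losing version checks on the periods collection is acceptable, and with a hypothetical id field standing in for the elided identifier mapping:

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.CascadeType;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.OneToMany;
        import javax.persistence.OrderBy;
        import javax.persistence.Version;
        import org.hibernate.annotations.OptimisticLock;

        @Entity
        public class Expenditure {

            @Id @GeneratedValue
            private Long id;   // hypothetical; the original elides its identifier mapping

            // Changes to this collection no longer participate in the parent's
            // optimistic-lock check, so adding/removing periods does not bump 'version'.
            @OptimisticLock(excluded = true)
            @OneToMany(mappedBy = "expenditure", cascade = CascadeType.ALL, orphanRemoval = true)
            @OrderBy()
            private List<ExpenditurePeriod> periods = new ArrayList<ExpenditurePeriod>();

            @Version
            private Integer version = 0;
        }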

    Read the article

  • Catch all exceptions in Scala 2.8 RC1

    - by Michel Krämer
    I have the following dummy Scala code in the file test.scala: class Transaction { def begin() {} def commit() {} def rollback() {} } object Test extends Application { def doSomething() {} val t = new Transaction() t.begin() try { doSomething() t.commit() } catch { case _ => t.rollback() } } If I compile this on Scala 2.8 RC1 with scalac -Xstrict-warnings test.scala I'll get the following warning: test.scala:16: warning: catch clause swallows everything: not advised. case _ => t.rollback() ^ one warning found So, if catch-all expressions are not advised, how am I supposed to implement such a pattern instead? And apart from that why are such expressions not advised anyhow?

    Read the article
