Search Results

Search found 2993 results on 120 pages for 'distributed transactions'.

Page 8/120

  • Threaded Django task doesn't automatically handle transactions or db connections?

    - by Gabriel Hurley
    I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql "Idle In Transaction"). I looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). I tried using the various transaction decorators on my functions, no luck. I switched to manual transaction management and did the rollback manually, that worked, but still left the processes as "Idle". So then I called connection.close(), and all is well. But I'm left wondering, why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread?
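
    For reference, here is the shape of the workaround described above as a sketch (assuming a recent Django where transaction.atomic exists; in the Django of this question's era, commit_manually/commit_on_success played the same role, and my_task is a hypothetical task function):

        import threading

        from django.db import connection, transaction

        def my_task():
            """Hypothetical recurring job that does ORM reads/writes."""
            pass

        def run_task(task_func):
            """Run one task with explicit transaction and connection cleanup."""
            try:
                with transaction.atomic():   # commit on success, roll back on error
                    task_func()
            finally:
                connection.close()           # no more "Idle in Transaction" leftovers

        threading.Thread(target=run_task, args=(my_task,)).start()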


  • Distributed computing for a company? Is there such a 'free' thing?

    - by Jakub
    I am new to the whole distributed computing / cloud thing, but I had an idea at work for our multimedia work, like movie encoding and other CPU-intensive tasks (which sometimes take a few hours). Is there a 'free' (Linux?) way to start such a task from a Windows machine and offload those CPU cycles to, say, 10 servers that are generally idle (CPU-wise)? I'm just curious if there is a way to do this, or am I just grasping at straws here? My thought is that a 'cloud' setup would achieve this; however, as I stated initially, I am a total newbie when it comes to it. This is just an idea; I'm looking for some thoughts. Has anyone achieved this?


  • Healthcare and Distributed Data Don't Mix

    - by [email protected]
    How many times have you heard the story? Hard disk goes missing, USB thumb drive goes missing, laptop goes missing... Not a week goes by that we don't hear about our data going missing. Healthcare data is a big one, but we hear about credit card data, pricing info, corporate intellectual property... When I have spoken at security and IT conferences, part of my message is "Why do you give your users data to lose in the first place?" I don't suggest they can't have access to it... in fact I work for the company that provides the premiere data security and desktop solutions that DO provide access. Access isn't the issue. Keeping the data is the issue.

    We are all human - we all make mistakes... I fault no one for having their car stolen or for dropping a USB thumb drive (well, except the thieves - I can certainly find some fault there). Where I find fault is in policy (or lack thereof, sometimes) that allows users to carry around private, and important, data with them. Mr. Director of IT - it is your fault, not theirs. Ms. CSO - look in the mirror.

    It isn't like one can't find a network to access the data from. You are on a network right now. How many wireless ones (wifi, mifi, cellular...) are there around you, right now? Allowing employees to remove data from the confines of (wait for it...) THE DATA CENTER is just plain indefensible when it isn't required. The argument that the laptop had a password and the hard disk was encrypted is ridiculous. An encrypted drive tells thieves that before they sell the stolen unit for $75, they should crack the encryption and ascertain what the REAL value of the laptop is... credit card info, identity info, pricing lists, banking transactions... a veritable treasure trove of info people give away on an 'encrypted disk'.

    What started this latest rant on lack of data control was an article in Government Health IT that was forwarded to me by Denny Olson, an Oracle Principal Sales Consultant in Minnesota. The full article is here, but the point was that a couple of laptops went missing in a couple of different cases, and... well... no one knows where the data is, and yes - they were loaded with patient info. What were you thinking?

    Obviously you can't steal data from a Sun Ray appliance... since it has no data, nor any storage to keep the data on, and Secure Global Desktop allows access from Macs, Linux and Windows client devices... but in all cases, there is no keeping the data unless you explicitly allow for it in your policy. Since you can get at the data securely from any network, why would you want to take personal responsibility for it? Both Sun Rays and Secure Global Desktop are widely used in healthcare... but clearly not widely enough. We need to do a better job of getting the message out: healthcare (or insert your business type here) and distributed data don't mix. Then add hot desking and 'follow-me printing' and you have something that clinicians (and CSOs) love.

    Thanks for putting up my blood pressure, Denny.


  • Jini: single server with multiple clients

    - by user200340
    Hi all, I have a question about how to let multiple clients access a single file located on the server side while keeping the file consistent. I have a simple PhoneBook server-client Jini program running at the moment. The server only provides some getter functions to clients, such as getName(String number) and getNumber(String name), from a PhoneBook class (serializable); the phonebook data are stored in a text file (phonebook.txt) at the moment. I have tried to implement some functions that write a new record into the phonebook.txt file. If the name in the new record already exists, an integer is appended to it. For example, if the existing phonebook.txt contains

        ....
        John 01-01010101
        ....

    and the new record is "John 01-12345678", then "John_1 01-12345678" will be written into phonebook.txt. However, if I start two clients A and B (on the same machine using localhost), and A tries to write "John 01-11111111" while B tries to write "John 01-22222222", the earlier record gets overwritten by the later one. So there must be something I got completely wrong. My client and server code are just like the Jini HelloWorld example.

    My server-side code:
        1. LookupDiscovery with parameter new String[]{""};
        2. a DiscoveryListener for the LookupDiscovery;
        3. registrations are saved into a Hashtable;
        4. for every discovered lookup service, I use the registrar to register the ServiceItem; the ServiceItem contains a null attributeSets, a null serviceId, and a service.

    The client code has:
        1. LookupDiscovery with parameter new String[]{""};
        2. a DiscoveryListener for the LookupDiscovery;
        3. a ServiceTemplate with a null attributeSets, a null serviceId and a type, where the type is the interface class;
        4. for each found ServiceRegistrar, if it can find the ServiceTemplate it is looking for, the returned Object is cast into the type of the interface class.

    I have tried to google for more details, and I found JavaSpaces could be the piece I missed. But I am still not sure about it (I have only been using Jini for a very short time). Any help would be greatly appreciated.
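
    What is described above is the classic lost update from an unsynchronized read-modify-write: both clients read the same file state, and the later write clobbers the earlier one. The usual fix is to serialize all writes through one lock in the service implementation (in Java, a synchronized addRecord method on the single server-side service object). The same idea as a Python-style sketch, with hypothetical names:

        import threading

        _write_lock = threading.Lock()

        def add_record(path, name, number):
            """Append a phonebook record, renaming duplicates (John -> John_1)."""
            with _write_lock:                  # one writer at a time
                try:
                    with open(path) as f:
                        existing = {line.split()[0] for line in f if line.strip()}
                except FileNotFoundError:
                    existing = set()
                candidate, suffix = name, 0
                while candidate in existing:   # John, John_1, John_2, ...
                    suffix += 1
                    candidate = "%s_%d" % (name, suffix)
                with open(path, "a") as f:
                    f.write("%s %s\n" % (candidate, number))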


  • A leader election algorithm for an oriented hypercube

    - by mick
    I'm stuck on a problem where I have to design a leader election algorithm for an oriented hypercube. This should be done using a tournament with a number of rounds equal to the dimension D of the hypercube. In each stage d, with 0 <= d < D, two candidate leaders of neighbouring d-dimensional subcubes compete to become the single candidate leader of the (d+1)-dimensional hypercube that is the union of their respective subcubes.
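
    One way to convince yourself that the tournament maintains its invariant (after round d, every node of a (d+1)-dimensional subcube holds the same candidate) is a small centralised simulation; this is a sketch of the invariant, not the message-passing protocol itself:

        import itertools

        def elect_leader(dim, node_id):
            """node_id maps each node (a tuple of dim bits) to its identifier."""
            nodes = list(itertools.product((0, 1), repeat=dim))
            # every node starts as the candidate of its own 0-dimensional cube
            candidate = {v: v for v in nodes}
            for d in range(dim):
                new = {}
                for v in nodes:
                    w = v[:d] + (1 - v[d],) + v[d + 1:]  # neighbour across dimension d
                    # the two subcube candidates compete; the smaller id survives
                    new[v] = min(candidate[v], candidate[w], key=lambda u: node_id[u])
                candidate = new
            return candidate[nodes[0]]  # now identical at every node

        ids = {v: sum(b << i for i, b in enumerate(v))
               for v in itertools.product((0, 1), repeat=3)}
        print(elect_leader(3, ids))  # -> (0, 0, 0), the node with the smallest id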


  • NHibernate transaction management in ASP.NET MVC - how should it be done?

    - by adrin
    I am writing a simple ASP.NET MVC app using the session-per-request and transaction-per-request patterns (via a custom HttpModule). It seems to work properly, but the performance is terrible (a simple page loads in ~7 seconds). A transaction is created for every HTTP request, including those for graphical resources (all the images on the site), and that seems to inflate the loading times: without transactions, the load time per image is ~1-10 ms; with transactions it is over 1 second. What is the proper way to manage transactions in an ASP.NET MVC + NH stack? When I put all transactions into my repository methods, for some obscure reason I got 'implicit transactions' warnings in NHProf (the SQL statements were executed outside a transaction, even though the session.Save()/Update()/etc. calls were made within the transaction's 'using' scope and before the transaction.Commit() call). BTW, are implicit transactions really bad?
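
    The usual remedy is to open the session and transaction only for requests that can actually touch the database, and never for static resources such as images, CSS, or scripts. The per-request pattern sketched as framework-neutral Python-style middleware (session_factory and is_static are hypothetical stand-ins for the NHibernate session factory and a static-path check in the HttpModule):

        def transaction_per_request(handler, session_factory, is_static):
            """Wrap a request handler; skip all DB setup for static resources."""
            def wrapped(request):
                if is_static(request.path):   # images/css/js: no session, no tx
                    return handler(request)
                session = session_factory()
                try:
                    with session.begin():     # one transaction per real request
                        return handler(request)
                finally:
                    session.close()
            return wrapped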


  • Large transactions causing "System.Data.SqlClient.SqlException: Timeout expired" error?

    - by Michael
    My application requires a user to log in and allows them to edit a list of things. However, it seems that if the same user repeatedly logs in and out and edits the list, this user will run into a "System.Data.SqlClient.SqlException: Timeout expired" error. I've read comments about increasing the timeout period, but I've also read a comment about it possibly being caused by uncommitted transactions, and I do have one in the application. I'll provide the code I'm working with; there is an IF statement in there that I was a little iffy about, but it seemed like a reasonable thing to do.

    I'll just go over what's going on here: there is a list of objects to update or add into the database. New objects created in the application are given an ID of 0, while existing objects have their own IDs generated from the DB. If the user chooses to delete some objects, their IDs are stored in a separate list of Integers. Once the user is ready to save their changes, the two lists are passed into this method. By use of the IF statement, objects with an ID of 0 are added (using the Add stored procedure) and those objects with non-zero IDs are updated (using the Update stored procedure). After all this, a FOR loop goes through all the integers in the "removal" list and uses the Delete stored procedure to remove them. A transaction is used for all of this.

        Public Shared Sub UpdateSomethings(ByVal SomethingList As List(Of Something), ByVal RemovalList As List(Of Integer))
            Using DBConnection As New SqlConnection(conn)
                DBConnection.Open()
                Dim MyTransaction As SqlTransaction
                MyTransaction = DBConnection.BeginTransaction()
                Try
                    For Each SomethingItem As Something In SomethingList
                        Using MyCommand As New SqlCommand()
                            MyCommand.Connection = DBConnection
                            If SomethingItem.ID > 0 Then
                                MyCommand.CommandText = "UpdateSomething"
                            Else
                                MyCommand.CommandText = "AddSomething"
                            End If
                            MyCommand.Transaction = MyTransaction
                            MyCommand.CommandType = CommandType.StoredProcedure
                            With MyCommand.Parameters
                                If MyCommand.CommandText = "UpdateSomething" Then
                                    .Add("@id", SqlDbType.Int).Value = SomethingItem.ID
                                End If
                                .Add("@stuff", SqlDbType.VarChar).Value = SomethingItem.Stuff
                            End With
                            MyCommand.ExecuteNonQuery()
                        End Using
                    Next
                    For Each ID As Integer In RemovalList
                        Using MyCommand As New SqlCommand("DeleteSomething", DBConnection)
                            MyCommand.Transaction = MyTransaction
                            MyCommand.CommandType = CommandType.StoredProcedure
                            With MyCommand.Parameters
                                .Add("@id", SqlDbType.Int).Value = ID
                            End With
                            MyCommand.ExecuteNonQuery()
                        End Using
                    Next
                    MyTransaction.Commit()
                Catch ex As Exception
                    MyTransaction.Rollback()
                    'Exception handling goes here
                End Try
            End Using
        End Sub

    There are three stored procedures used here, as well as some looping, so I can see how something could hold everything up if the list is large enough. Other users can log in to the system at the same time just fine, though. I'm using Visual Studio 2008 to debug and SQL Server 2000 for the DB.


  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently to other places that I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact they are not on a schedule: when a feature is complete, the software is deployed to the relevant customer.

    The team only last year moved from Visual SourceSafe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix goes out into the field (even for a single customer) they simply build whatever is in TFS, test that the bug was fixed, and deploy to the customer! (Coming from a pharma and medical-devices software background myself, this is unbelievable!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem, as the company is suddenly starting to grow, with some big clients coming on board and more smaller ones.

    I have been asked to look at source control options in order to eliminate deploying buggy or unfinished code, but without sacrificing the somewhat asynchronous nature of the team's releases. I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously, most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where for a month developers work directly in Dev and then changes are merged to Test then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either CruiseControl or Team Build. In my previous job Bazaar was used sitting on top of SVN: devs worked in their own small feature branches, then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches. With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made... and these were made into branches for bug fixes to releases and merged back to dev).

    This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement, the "continuous integration" approach as I see it breaks down. To get a new feature out with continuous integration, it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev. I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released. Other feature branches can grab changes from other branches when they need or want them, so eventually all changes get absorbed into everyone else's. This fits very much the pure Bazaar model from what I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking and never re-integrating, or small late changes never getting pulled across to other branches, and developers complaining about merge disasters...

    What are people's thoughts on this? A second, final question: I am somewhat confused about the exact definition of distributed source control. Some people seem to suggest it is just about not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode), and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!


  • Any Open Source Pregel-like framework for distributed processing of large Graphs?

    - by Akshay Bhat
    Google has described a novel framework for distributed processing on massive graphs: http://portal.acm.org/citation.cfm?id=1582716.1582723. I wanted to know if, similar to Hadoop (MapReduce), there are any open source implementations of this framework? I am actually in the process of writing a pseudo-distributed one using Python and the multiprocessing module, and thus wanted to know if someone else has also tried implementing it, since public information about this framework is extremely scarce (the link above and a blog post at Google Research).
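
    For anyone attempting the same, the heart of the model is small enough to sketch in single-process Python (an illustration of the vertex-centric, superstep-driven API from the paper, not Google's implementation):

        from collections import defaultdict

        class Vertex:
            """A vertex program: compute() runs once per superstep."""
            def __init__(self, id, value, out_edges):
                self.id, self.value, self.out_edges = id, value, out_edges
                self.active = True
                self.outbox = []

            def send(self, target, message):
                self.outbox.append((target, message))

            def vote_to_halt(self):
                self.active = False

            def compute(self, messages):
                raise NotImplementedError  # subclass per algorithm (PageRank, SSSP, ...)

        def run(vertices, max_supersteps=30):
            """Synchronous supersteps: messages sent in step s arrive in step s+1."""
            inbox = defaultdict(list)
            for _ in range(max_supersteps):
                if not any(v.active or inbox[v.id] for v in vertices.values()):
                    break                   # all vertices halted and no mail pending
                next_inbox = defaultdict(list)
                for v in vertices.values():
                    if v.active or inbox[v.id]:
                        v.active = True     # an incoming message reactivates a vertex
                        v.compute(inbox[v.id])
                        for target, message in v.outbox:
                            next_inbox[target].append(message)
                        v.outbox = []
                inbox = next_inbox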


  • What is "read operations inside transactions can't allow failover" ?

    - by Kenyth
    From time to time I get the following exception message on GAE for my GAE/J app. I searched with Google; no relevant results were found. Does anyone know about this? Thanks in advance for any response! The exception message is as below:

        Nested in org.springframework.orm.jpa.JpaSystemException: Illegal argument; nested exception is javax.persistence.PersistenceException: Illegal argument: java.lang.IllegalArgumentException: read operations inside transactions can't allow failover
            at com.google.appengine.api.datastore.DatastoreApiHelper.translateError(DatastoreApiHelper.java:34)
            at com.google.appengine.api.datastore.DatastoreApiHelper.makeSyncCall(DatastoreApiHelper.java:67)
            at com.google.appengine.api.datastore.DatastoreServiceImpl$1.run(DatastoreServiceImpl.java:128)
            at com.google.appengine.api.datastore.TransactionRunner.runInTransaction(TransactionRunner.java:30)
            at com.google.appengine.api.datastore.DatastoreServiceImpl.get(DatastoreServiceImpl.java:111)
            at com.google.appengine.api.datastore.DatastoreServiceImpl.get(DatastoreServiceImpl.java:84)
            at com.google.appengine.api.datastore.DatastoreServiceImpl.get(DatastoreServiceImpl.java:77)
            at org.datanucleus.store.appengine.RuntimeExceptionWrappingDatastoreService.get(RuntimeExceptionWrappingDatastoreService.java:53)
            at org.datanucleus.store.appengine.DatastorePersistenceHandler.get(DatastorePersistenceHandler.java:94)
            at org.datanucleus.store.appengine.DatastorePersistenceHandler.get(DatastorePersistenceHandler.java:106)
            at org.datanucleus.store.appengine.DatastorePersistenceHandler.fetchObject(DatastorePersistenceHandler.java:464)
            at org.datanucleus.state.JDOStateManagerImpl.loadUnloadedFieldsInFetchPlan(JDOStateManagerImpl.java:1627)
            at org.datanucleus.state.JDOStateManagerImpl.loadFieldsInFetchPlan(JDOStateManagerImpl.java:1603)
            at org.datanucleus.ObjectManagerImpl.performDetachAllOnCommitPreparation(ObjectManagerImpl.java:3192)
            at org.datanucleus.ObjectManagerImpl.preCommit(ObjectManagerImpl.java:2931)
            at org.datanucleus.TransactionImpl.internalPreCommit(TransactionImpl.java:369)
            at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:256)
            at org.datanucleus.jpa.EntityTransactionImpl.commit(EntityTransactionImpl.java:104)
            at org.datanucleus.store.appengine.jpa.DatastoreEntityTransactionImpl.commit(DatastoreEntityTransactionImpl.java:55)
            at name.kenyth.playtweets.service.Tx.run(Tx.java:39)
            at name.kenyth.playtweets.web.controller.TwitterApiController.persistStatus(TwitterApiController.java:309)
            at name.kenyth.playtweets.web.controller.TwitterApiController.processStatusesForWebCall(TwitterApiController.java:271)
            at name.kenyth.playtweets.web.controller.TwitterApiController.getHomeTimelineUpdates_aroundBody0(TwitterApiController.java:247)
            at name.kenyth.playtweets.web.controller.TwitterApiController$AjcClosure1.run(TwitterApiController.java:1)
            at name.kenyth.playtweets.web.refine.AuthenticationEnforcement.ajc$around$name_kenyth_playtweets_web_refine_AuthenticationEnforcement$2$439820b7proceed(AuthenticationEnforcement.aj:1)
            at name.kenyth.playtweets.web.refine.AuthenticationEnforcement.ajc$around$name_kenyth_playtweets_web_refine_AuthenticationEnforcement$2$439820b7(AuthenticationEnforcement.aj:168)
            at name.kenyth.playtweets.web.controller.TwitterApiController.getHomeTimelineUpdates(TwitterApiController.java:129)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Method.java:43)
            at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.doInvokeMethod(HandlerMethodInvoker.java:710)
            at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:167)
            at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:414)
            at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:402)
            at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:771)
            at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:716)
            at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:647)
            at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:552)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:693)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
            at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
            at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:390)
            at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
            at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
            at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
            at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
            at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:327)
            at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126)
            at org.tuckey.web.filters.urlrewrite.NormalRewrittenUrl.doRewrite(NormalRewrittenUrl.java:195)
            at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:159)
            at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:141)
            at org.tuckey.web.filters.urlrewrite.UrlRewriter.processRequest(UrlRewriter.java:90)
            at org.tuckey.web.filters.urlrewrite.UrlRewriteFilter.doFilter(UrlRewriteFilter.java:417)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:71)
            at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:88)
            at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.apphosting.utils.servlet.ParseBlobUploadFilter.doFilter(ParseBlobUploadFilter.java:97)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.apphosting.runtime.jetty.SaveSessionFilter.doFilter(SaveSessionFilter.java:35)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
            at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
            at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
            at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
            at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
            at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.handle(AppVersionHandlerMap.java:238)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.Server.handle(Server.java:326)
            at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
            at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:923)
            at com.google.apphosting.runtime.jetty.RpcRequestParser.parseAvailable(RpcRequestParser.java:76)
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
            at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:135)
            at com.google.apphosting.runtime.JavaRuntime.handleRequest(JavaRuntime.java:250)
            at com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5838)
            at com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5836)
            at com.google.net.rpc.impl.BlockingApplicationHandler.handleRequest(BlockingApplicationHandler.java:24)
            at com.google.net.rpc.impl.RpcUtil.runRpcInApplication


  • Your Transaction is in Jeopardy -- and You Can't Even Know It!

    - by Adam Machanic
    If you're reading this, please take one minute out of your day and vote for the following Connect item: https://connect.microsoft.com/SQLServer/feedback/details/444030/sys-dm-tran-active-transactions-transaction-state-not-updated-when-an-attention-event-occurs If you're really interested, take three minutes: run the steps to reproduce the issue, and then check the box that says that you were able to reproduce the issue. Why? Imagine that ten hours ago you started a big transaction. You're sitting...(read more)

  • How does SQL Server treat statements inside stored procedures with respect to transactions?

    - by Sleepless
    Hi All! Say I have a stored procedure consisting of several separate SELECT, INSERT, UPDATE and DELETE statements. There is no explicit BEGIN TRAN / COMMIT TRAN / ROLLBACK TRAN logic. How will SQL Server handle this stored procedure transaction-wise? Will there be an implicit transaction for each statement, or will there be one transaction for the stored procedure? Also, how could I have found this out on my own using T-SQL and/or SQL Server Management Studio? Thanks!
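
    A quick way to check this for yourself (a sketch assuming pyodbc and a hypothetical DSN named 'mydb'): with no explicit BEGIN TRAN, @@TRANCOUNT reports 0, i.e. each statement runs in its own autocommit transaction, which is also how the statements inside such a stored procedure behave; an explicit BEGIN TRAN raises it to 1.

        import pyodbc

        conn = pyodbc.connect("DSN=mydb", autocommit=True)  # hypothetical DSN
        cur = conn.cursor()

        # no explicit transaction: every statement autocommits on its own
        print(cur.execute("SELECT @@TRANCOUNT").fetchone()[0])  # -> 0

        # an explicit transaction groups statements until COMMIT/ROLLBACK
        cur.execute("BEGIN TRAN")
        print(cur.execute("SELECT @@TRANCOUNT").fetchone()[0])  # -> 1
        cur.execute("ROLLBACK")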


  • Do I have to check the transaction count before rolling back in a T-SQL CATCH block?

    - by abatishchev
    I have the following block at the end of each of my stored procedures for SQL Server 2008:

        BEGIN TRY
            BEGIN TRAN
            -- my code
            COMMIT
        END TRY
        BEGIN CATCH
            IF (@@TRANCOUNT > 0)
            BEGIN
                ROLLBACK
                DECLARE @message NVARCHAR(MAX)
                DECLARE @state INT
                SELECT @message = ERROR_MESSAGE(), @state = ERROR_STATE()
                RAISERROR (@message, 11, @state)
            END
        END CATCH

    Is it possible to switch the CATCH block to

        BEGIN CATCH
            ROLLBACK
            DECLARE @message NVARCHAR(MAX)
            DECLARE @state INT
            SELECT @message = ERROR_MESSAGE(), @state = ERROR_STATE()
            RAISERROR (@message, 11, @state)
        END CATCH

    or just

        BEGIN CATCH
            ROLLBACK
        END CATCH

    ?


  • DO NOT BOTHER. THE PROBLEM IS NOT TRANSACTIONS. DUNNO HOW TO CLOSE / DELETE

    - by matti
    THE PROBLEM IS SOLVED. I DON'T KNOW HOW TO CLOSE THIS QUESTION. ANOTHER SERVICE IS PROBABLY RUNNING AT ALMOST THE SAME SYNC THAT DOES THE THING. MY WORK "MATE" WILL HEAR A THING OR TWO IN THE MORNING.

    I have the following code in a Windows service. The data is in the dataset and has been populated earlier; rows are then updated and inserted into the dataset tables before the code below is called.

        using (dataSetTxn = _cnctn.BeginTransaction())
        {
            try
            {
                _somePlanAdptr.UpdateCommand.Transaction = dataSetTxn;
                _someLogAdptr.InsertCommand.Transaction = dataSetTxn;
                _someLinkAdptr.InsertCommand.Transaction = dataSetTxn;
                _someDataAdptr.InsertCommand.Transaction = dataSetTxn;

                _log.WriteDebug("Updating SomePlanAction");
                _somePlanAdptr.Update(_ofDataSet, "SomePlanAction");

                _log.WriteDebug(string.Format("Inserting {0} rows to SomeLog")); // error
                _someLogAdptr.Update(_ofDataSet, "SomeLog");

                _log.WriteDebug(string.Format("Updating SomeLink with {0} rows", _ofDataSet.Tables["SomeLink"].Rows.Count));
                _someLinkAdptr.Update(_ofDataSet, "SomeLink");

                _log.WriteDebug(string.Format("Updating SomeData with {0} rows", _ofDataSet.Tables["SomeData"].Rows.Count));
                _someDataAdptr.Update(_ofDataSet, "SomeData");

                _log.WriteDebug("Commiting all changes to database.");
                dataSetTxn.Commit();
            }
            catch (Exception e)
            {
                _log.WriteError("Updating database failed -> rollback", e);
                if (dataSetTxn != null && _cnctn.State == ConnectionState.Open)
                    dataSetTxn.Rollback();
                throw e;
            }
        }

    So one of the lines causes an exception:

        _log.WriteDebug(string.Format("Inserting {0} rows to SomeLog")); // error

    Still, the first adapter's data is updated to the database. When debugged, the data is not updated until the method that calls all of this (including this bit) exits. The method that does all this is timed with System.Threading.Timer.

    Thanks & BR - Matti


  • How can I ensure that nested transactions are committed independently of each other?

    - by Caldera
    If I have a stored procedure that executes another stored procedure several times with different arguments, is it possible to have each of these calls commit independently of the others? In other words, if the first two executions of the nested procedure succeed, but the third one fails, is it possible to preserve the results of the first two executions (and not roll them back)? I have a stored procedure defined something like this in SQL Server 2000:

        CREATE PROCEDURE toplevel_proc ..
        AS
        BEGIN
            ...
            while @row_count <= @max_rows
            begin
                select @parameter ... where rownum = @row_count
                exec nested_proc @parameter
                select @row_count = @row_count + 1
            end
        END
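
    SQL Server (2000 included) has no autonomous transactions: if toplevel_proc runs inside a transaction, the nested calls all commit or roll back with it. One common workaround is to drive the loop from application code and give each call its own transaction. A sketch assuming pyodbc, a hypothetical 'mydb' DSN, and a hypothetical work_table holding the parameters:

        import pyodbc

        conn = pyodbc.connect("DSN=mydb")
        cur = conn.cursor()
        cur.execute("SELECT parameter FROM work_table ORDER BY rownum")
        parameters = [row[0] for row in cur.fetchall()]

        for parameter in parameters:
            try:
                cur.execute("EXEC nested_proc ?", parameter)
                conn.commit()        # this call's work is now permanent
            except pyodbc.Error:
                conn.rollback()      # only the failing call is lost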


  • Building a Distributed Commerce Infrastructure in the Cloud using Azure and Commerce Server

    - by Lewis Benge
    One of the biggest questions I routinely get asked is how scalable Commerce Server is. Of course, the textbook answer is that the product has been around for 10 years and powers some of the largest e-Commerce websites in the world, so it scales horizontally extremely well. One question, though, is: what if you can't predict the growth in demand on your Commerce platform, or you need the ability to scale up during busy seasons (such as Christmas in a retail environment) but are hesitant to maintain that infrastructure on a year-round basis? The obvious answer is to utilise the many elastic cloud infrastructure providers that are establishing themselves in the ever-growing market; the problem, however, is that Commerce Server is still a product with a legacy, tightly coupled dependency on Windows and IIS components.

    Commerce Server 2009 codename "R2", however, introduced the concept of an n-tier deployment of Microsoft Commerce Server, meaning you are no longer tied to the core objects API but instead have serializable Commerce Entity objects and business logic, allowing Commerce Server to be built into a WCF-based SOA architecture. Presentation layers no longer need to remain on the same physical machine as the application server, meaning you can now build the user experience in multiple technologies and host it in multiple places, leveraging the transport benefits that a WCF service may bring, such as message queuing, security, and multiple endpoints.

    All of this logic will still need to remain in your internal infrastructure, for two reasons. Firstly, cloud-based computing infrastructure does not support PCI security requirements; secondly, even though many of the legacy Commerce Server dependencies have been abstracted away in this version of the application, it is still not fully supported to deploy it exclusively into the cloud. If you do wish to benefit from the scalability of the cloud, however, you can still achieve a great Commerce Server and Azure setup by utilising the Azure AppFabric (in terms of the service bus and authentication services) together with Windows Azure to host any online presence you may require. The architecture would be something similar to this:

        [architecture diagram]

    This setup would allow you to construct your Commerce Services as part of your on-site infrastructure. These services would contain all of the channel's custom business logic and provide the overall interface back into the underlying Commerce Server components. It is recommended that services are constructed around the specific business domains of the application, which, based on your business model, would usually consist of separate services around Catalogue, Orders, Search, Profiles, and Marketing. The AppFabric service bus is then used to further abstract and aggregate the services, making them available to the cloud and subsequently secured by AppFabric's authentication services. These services are now available for consumption by any client, using any supported technology - not just .NET. You are thus now able to construct apps for iPhone, integrate with Java-based POS devices, and cover many other potential uses. This aggregation is useful and forms the basis of the further strategy around diversifying and enhancing the e-Commerce experience, but it also provides the foundation for the scalability we want to gain from utilising a cloud-based application platform.

    The Windows Azure application platform is Microsoft's solution for benefiting from the true economies of scale offered by the elasticity of the cloud. Just before the launch of the Azure platform, Domino's Pizza actually managed to run their whole Super Bowl operation on the scalability of Windows Azure, simply switching back to their traditional operation the next day with no residual infrastructure costs. The platform can also natively subscribe to services and messages exposed within the AppFabric service bus, making it an ideal solution for building and deploying a presentation layer that needs scalable infrastructure, such as a high-demand public-facing e-Commerce portal or the promotional element of a brand. Windows Azure has excellent support for ASP.NET, including its own caching providers, meaning expensive operations such as catalogue queries can persist in memory on the application server, reducing the demand on internal infrastructure and prioritising it for more business-critical operations such as receiving orders and processing payments. Windows Azure supports other languages too, meaning that with this approach you could technically build a Commerce Server presentation layer in Java, PHP, or Ruby, or equally in ASP.NET or Silverlight, without having to change any of the underlying business or Commerce Server implementation.

    This SOA-style architecture is one of the primary differentiators for Commerce Server as a product in the e-Commerce market, and now, with the introduction of a WCF capability in Commerce Server 2009/2009 R2, the opportunities for extensibility of both the user experience and integration with third parties are drastically increased, all with no effect on the underlying channel logic. So if you are looking at deployment options for your e-Commerce application to help support demand in a cost-effective way, I would highly recommend you consider looking at Windows Azure, and if you have any questions in particular about this style of deployment, please feel free to get in touch!


  • Distributed C++ game server which uses a database

    - by Slav
    Hello. My C++ turn-based game server (which uses a database) can no longer stand up to the current average number of clients (players), so I want to expand it to multiple (more than one) computers and databases, where all clients will still remain within a single game world (the servers must communicate with each other and use multiple databases). Are there tutorials/books/common standards that explain how to do this in the best way?
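
    There is no single standard for this, but a common starting point is to shard the game world by region, give each server (and its database) ownership of a set of regions, and route every action to the region's owner. A minimal Python-style sketch of the routing idea using consistent hashing, all names hypothetical:

        import hashlib
        from bisect import bisect

        class RegionRouter:
            """Consistent-hash ring mapping world regions to game servers,
            so adding a server relocates only a fraction of the regions."""
            def __init__(self, servers, replicas=64):
                self.ring = sorted(
                    (self._hash("%s-%d" % (s, i)), s)
                    for s in servers
                    for i in range(replicas)
                )
                self.keys = [h for h, _ in self.ring]

            @staticmethod
            def _hash(key):
                return int(hashlib.md5(key.encode()).hexdigest(), 16)

            def server_for(self, region_id):
                i = bisect(self.keys, self._hash(region_id)) % len(self.ring)
                return self.ring[i][1]

        router = RegionRouter(["game1:9000", "game2:9000", "game3:9000"])
        print(router.server_for("region-42"))  # every node agrees on the owner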


  • Entity Framework and distributed systems

    - by Dirk Beckmann
    I need some help, or maybe only a hint in the right direction. I've got a system that is separated into two applications: an existing VB.NET desktop client using Entity Framework 5 with a code-first approach, and an ASP.NET Web API application in C# that is being refactored right now. It should be possible to deliver OData. The system and the data model are still evolving, so migrations will happen at undefined intervals. I'm now struggling with how to manage my database access on the Web API system. My favoured approach would be to use Entity Framework on both systems, but I'm running into trouble when creating new migrations. Two solutions I've thought about:

    1. Shared data-access DLL. The first idea was to separate the data access layer into its own project and reference it from each of the systems. The context would be the same as long as the DLL is up to date in each system, and this way both solutions would be able to run a migration. The main problem is that it is much more complicated to update a Web API system than the client with its ClickOnce update solution, and not every migration is important for the Web API. This would cause more update trouble and out-of-sync libraries.

    2. Database-first on the Web API. The second idea was just to use the database-first approach on the Web API side. But it seems that all annotations will be lost with each model update.

    Other solutions involving stored procedures have been discarded because of missing OData support and maintainability. Has anyone run into the same conflicts, or does anyone have advice on how such a problem can be solved?

