Search Results

Search found 2636 results on 106 pages for 'transaction isolation'.


  • show UIAlertView when In app purchase is in progress

    - by edie
    Hi... I've added a UIAlertView that has a UIActivityIndicator as a subview in my application. This alertView only shows while a purchase is in progress. I've set up my alert view like this in my StoreObserver: - (void)paymentQueue:(SKPaymentQueue *)queue updatedTransactions:(NSArray *)transactions { for (SKPaymentTransaction *transaction in transactions) { switch (transaction.transactionState) { case SKPaymentTransactionStatePurchasing: [self stillPurchasing]; // this creates an alertView and shows break; case SKPaymentTransactionStatePurchased: [self completeTransaction:transaction]; break; case SKPaymentTransactionStateFailed: [self failedTransaction:transaction]; break; case SKPaymentTransactionStateRestored: [self restoreTransaction:transaction]; break; default: break; } } } - (void) stillPurchasing { UIAlertView *alert = [[UIAlertView alloc]initWithTitle: @"In App Purchase" message: @"Processing your purchase..." delegate: nil cancelButtonTitle: nil otherButtonTitles: nil]; self.alertView = alert; [alert release]; UIActivityIndicatorView *ind = [[UIActivityIndicatorView alloc]initWithActivityIndicatorStyle: UIActivityIndicatorViewStyleWhiteLarge]; self.indicator = ind; [ind release]; [self.indicator startAnimating]; [self.alertView addSubview: self.indicator]; [self.alertView show]; } When I tap the buy button, this UIAlertView shows together with my UIActivityIndicator. But when the transaction completes, the alertView is still on top of the view and the indicator is the only thing that gets removed. My question is: how should I release the alertView? Or where/when should I release it? I've added these commands to release my alertView and indicator in those cases: case SKPaymentTransactionStatePurchased: case SKPaymentTransactionStateFailed: case SKPaymentTransactionStateRestored: [self.indicator stopAnimating]; [self.indicator removeFromSuperview]; [self.alertView release]; [self.indicator release]; I've only added the alertView to show that the purchase is still in progress. Any suggestion on how to give users feedback would be appreciated. Thanks

    Read the article

  • Ruby on Rails - Primary and Foreign key

    - by Eef
    Hey, I am creating a site in Ruby on Rails, I have two models a User model and a Transaction model. These models both belong to an account so they both have a field called account_id I am trying to setup a association between them like so: class User < ActiveRecord::Base belongs_to :account has_many :transactions end class Transaction < ActiveRecord::Base belongs_to :account belongs_to :user end I am using these associations like so: user = User.find(1) transactions = user.transactions At the moment the application is trying to find the transactions with the user_id, here is the SQL it generates: Mysql::Error: Unknown column 'transactions.user_id' in 'where clause': SELECT * FROM `transactions` WHERE (`transactions`.user_id = 1) This is incorrect as I would like the find the transactions via the account_id, I have tried setting the associations like so: class User < ActiveRecord::Base belongs_to :account has_many :transactions, :primary_key => :account_id, :class_name => "Transaction" end class Transaction < ActiveRecord::Base belongs_to :account belongs_to :user, :foreign_key => :account_id, :class_name => "User" end This almost achieves what I am looking to do and generates the following SQL: Mysql::Error: Unknown column 'transactions.user_id' in 'where clause': SELECT * FROM `transactions` WHERE (`transactions`.user_id = 104) The number 104 is the correct account_id but it is still trying to query the transaction table for a user_id field. Could someone give me some advice on how I setup the associations to query the transaction table for the account_id instead of the user_id resulting in a SQL query like so: SELECT * FROM `transactions` WHERE (`transactions`.account_id = 104) Cheers Eef

    Read the article

  • Is something along the lines of nested memoization needed here?

    - by Daniel
    System.Transactions notoriously escalates transactions involving multiple connections to the same database to the DTC. The module and helper class, ConnectionContext, below are meant to prevent this by ensuring multiple connection requests for the same database return the same connection object. This is, in some sense, memoization, although there are multiple things being memoized and the second is dependent on the first. Is there some way to hide the synchronization and/or mutable state (perhaps using memoization) in this module, or perhaps rewrite it in a more functional style? (It may be worth nothing that there's no locking when getting the connection by connection string because Transaction.Current is ThreadStatic.) type ConnectionContext(connection:IDbConnection, ownsConnection) = member x.Connection = connection member x.OwnsConnection = ownsConnection interface IDisposable with member x.Dispose() = if ownsConnection then connection.Dispose() module ConnectionManager = let private _connections = new Dictionary<string, Dictionary<string, IDbConnection>>() let private getTid (t:Transaction) = t.TransactionInformation.LocalIdentifier let private removeConnection tid = let cl = _connections.[tid] for (KeyValue(_, con)) in cl do con.Close() lock _connections (fun () -> _connections.Remove(tid) |> ignore) let getConnection connectionString (openConnection:(unit -> IDbConnection)) = match Transaction.Current with | null -> new ConnectionContext(openConnection(), true) | current -> let tid = getTid current // get connections for the current transaction let connections = match _connections.TryGetValue(tid) with | true, cl -> cl | false, _ -> let cl = Dictionary<_,_>() lock _connections (fun () -> _connections.Add(tid, cl)) cl // find connection for this connection string let connection = match connections.TryGetValue(connectionString) with | true, con -> con | false, _ -> let initial = (connections.Count = 0) let con = openConnection() connections.Add(connectionString, con) // if this is the first connection for this transaction, register connections for cleanup if initial then current.TransactionCompleted.Add (fun args -> let id = getTid args.Transaction removeConnection id) con new ConnectionContext(connection, false)

    Read the article

  • Exit and rollback everything in script on error

    - by Jan W.
    Hey guys! I'm in a bit of a pickle here. I have a T-SQL script that makes a lot of database structure adjustments, but it's not really safe to just let it run through when something fails. To make things clear: I'm using MS SQL 2005, and it's NOT a stored procedure, just a script file (.sql). What I have is something in the following order: BEGIN TRANSACTION ALTER Stuff GO CREATE New Stuff GO DROP Old Stuff GO IF @@ERROR != 0 BEGIN PRINT 'Errors Found ... Rolling back' ROLLBACK TRANSACTION RETURN END ELSE PRINT 'No Errors ... Committing changes' COMMIT TRANSACTION (just to illustrate what I'm working with; I can't go into specifics now). The problem: when I introduce an error (to test whether things get rolled back), I get a message that the ROLLBACK TRANSACTION could not find a corresponding BEGIN TRANSACTION. This leads me to believe that something went REALLY wrong and the transaction was already killed. What I also noticed is that the script didn't fully quit on the error and thus DID try to execute every statement after the error occurred. (I noticed this when new tables showed up that I wasn't expecting, because the script should have rolled back.) Any help in this department is welcome; if more specifics are needed, ask! Greetz

    Read the article

  • Hibernate saveOrUpdate problem on char data type

    - by Yashwant Chavan
    Hi, I am using Hibernate 3.0 and facing an issue related to a char datatype field. I am trying to save a Client POJO to the database using the following method. The problem is that my client_id field is char(10) in the database. When client_id is 10 characters long it works fine, but when client_id is less than ten characters it causes a problem at update time: rather than updating the data, Hibernate tries to insert the client record again and throws a unique key exception. I drilled down into the problem and it is caused by the char(10) client_id field: the database pads the client_id value with spaces up to 10 characters. Is there any configuration to overcome this problem, other than changing client_id to varchar2? public boolean saveClient(Clnt client) { boolean lReturnValue = false; SessionFactory sessionFactory = null; Session session = null; Transaction transaction = null; try { HibernateTemplate hibernateTemplate = getHibernateTemplate(); sessionFactory = hibernateTemplate.getSessionFactory(); session = sessionFactory.getCurrentSession(); transaction = session.beginTransaction(); session.saveOrUpdate(client); transaction.commit(); lReturnValue = true; } catch (HibernateException e) { lReturnValue = false; // TODO Auto-generated catch block if (transaction != null) { transaction.rollback(); } e.printStackTrace(); } return lReturnValue; }
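
    One common workaround (just a sketch, not the poster's code: the class and property names simply mirror the question and the fixed length of 10 is an assumption) is to normalise the identifier inside the POJO itself, so the value held in memory always matches the space-padded value the CHAR(10) column returns and saveOrUpdate() treats the row as existing instead of inserting a duplicate; a custom Hibernate UserType that trims on load would be another option:

        // Hypothetical sketch of the Clnt POJO with a normalising setter.
        public class Clnt implements java.io.Serializable {

            private static final int CLIENT_ID_LENGTH = 10; // matches CHAR(10)

            private String clientId;

            public String getClientId() {
                return clientId;
            }

            public void setClientId(String clientId) {
                if (clientId == null) {
                    this.clientId = null;
                    return;
                }
                // right-pad with spaces so "ABC" is held in memory exactly as the
                // database stores it ("ABC       "), keeping dirty checks consistent
                StringBuilder padded = new StringBuilder(clientId.trim());
                while (padded.length() < CLIENT_ID_LENGTH) {
                    padded.append(' ');
                }
                this.clientId = padded.toString();
            }
        }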

    Read the article

  • What are the best practices to use NHibernate sessions in asp.net (mvc/web api)?

    - by mrt181
    I have the following setup in my project: public class WebApiApplication : System.Web.HttpApplication { public static ISessionFactory SessionFactory { get; private set; } public WebApiApplication() { this.BeginRequest += delegate { var session = SessionFactory.OpenSession(); CurrentSessionContext.Bind(session); }; this.EndRequest += delegate { var session = SessionFactory.GetCurrentSession(); if (session == null) { return; } session = CurrentSessionContext.Unbind(SessionFactory); session.Dispose(); }; } protected void Application_Start() { AreaRegistration.RegisterAllAreas(); FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters); RouteConfig.RegisterRoutes(RouteTable.Routes); BundleConfig.RegisterBundles(BundleTable.Bundles); var assembly = Assembly.GetCallingAssembly(); SessionFactory = new NHibernateHelper(assembly, Server.MapPath("/")).SessionFactory; } } public class PositionsController : ApiController { private readonly ISession session; public PositionsController() { this.session = WebApiApplication.SessionFactory.GetCurrentSession(); } public IEnumerable<Position> Get() { var result = this.session.Query<Position>().Cacheable().ToList(); if (!result.Any()) { throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound)); } return result; } public HttpResponseMessage Post(PositionDataTransfer dto) { //TODO: Map dto to model IEnumerable<Position> positions = null; using (var transaction = this.session.BeginTransaction()) { this.session.SaveOrUpdate(positions); try { transaction.Commit(); } catch (StaleObjectStateException) { if (transaction != null && transaction.IsActive) { transaction.Rollback(); } } } var response = this.Request.CreateResponse(HttpStatusCode.Created, dto); response.Headers.Location = new Uri(this.Request.RequestUri.AbsoluteUri + "/" + dto.Name); return response; } public void Put(int id, string value) { //TODO: Implement PUT throw new NotImplementedException(); } public void Delete(int id) { //TODO: Implement DELETE throw new NotImplementedException(); } } I am not sure if this is the recommended way to insert the session into the controller. I was thinking about using DI but i am not sure how to inject the session that is opened and binded in the BeginRequest delegate into the Controllers constructor to get this public PositionsController(ISession session) { this.session = session; } Question: What is the recommended way to use NHiberante sessions in asp.net mvc/web api ?

    Read the article

  • Securing php on a shared apache

    - by Jack
    I'm going to install apache+php in a server where two users, A and B, will deploy their website. I'm trying to achieve isolation of users' space for security reasons: that is no scripts from site A should be able to read files in site B. To achieve this I installed suphp. Website files of user A are owned by A:A with perm=700 and user of B are owned by B:B with perm=700. Suphp works great, but apache complains about permissions to read .htaccess. How can I let apache to read .htaccess in every dir of A and B while keeping isolation between site A and site B? I played with ownership (group = www-data) and permissions (750) but I found no way to keep isolation granted. Any idea? Maybe by running apache as root, but in this case are there any drawbacks?

    Read the article

  • Connection hangs after time of inactivity

    - by Sinuhe
    In my application, Spring manages the connection pool for database access. Hibernate uses these connections for its queries. At first glance, I have no problems with the pool: it works correctly with concurrent clients and with a pool of only one connection. I can execute a lot of queries, so I think that I (or Spring) don't leave connections open. My problem appears after some time of inactivity (sometimes 30 minutes, sometimes more than 2 hours). Then, when Hibernate runs a query, it takes far too long. Setting the log4j level to TRACE, I get these logs:
    ...
    18:27:01 DEBUG nsactionSynchronizationManager - Retrieved value [org.springframework.orm.hibernate3.SessionHolder@99abd7] for key [org.hibernate.impl.SessionFactoryImpl@7d2897] bound to thread [http-8080-Processor24]
    18:27:01 DEBUG HibernateTransactionManager - Found thread-bound Session [org.hibernate.impl.SessionImpl@8878cd] for Hibernate transaction
    18:27:01 DEBUG HibernateTransactionManager - Using transaction object [org.springframework.orm.hibernate3.HibernateTransactionManager$HibernateTransactionObject@1b2ffee]
    18:27:01 DEBUG HibernateTransactionManager - Creating new transaction with name [com.acjoventut.service.GenericManager.findByExample]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
    18:27:01 DEBUG HibernateTransactionManager - Preparing JDBC Connection of Hibernate Session [org.hibernate.impl.SessionImpl@8878cd]
    18:27:01 TRACE SessionImpl - setting flush mode to: AUTO
    18:27:01 DEBUG JDBCTransaction - begin
    18:27:01 DEBUG ConnectionManager - opening JDBC connection
    Here it freezes for about 2 - 10 minutes, but then continues:
    18:30:11 DEBUG JDBCTransaction - current autocommit status: true
    18:30:11 DEBUG JDBCTransaction - disabling autocommit
    18:30:11 TRACE JDBCContext - after transaction begin
    18:30:11 DEBUG HibernateTransactionManager - Exposing Hibernate transaction as JDBC transaction [jdbc:oracle:thin:@212.31.39.50:30998:orcl, UserName=DEVELOP, Oracle JDBC driver]
    18:30:11 DEBUG nsactionSynchronizationManager - Bound value [org.springframework.jdbc.datasource.ConnectionHolder@843a9d] for key [org.apache.commons.dbcp.BasicDataSource@7745fd] to thread [http-8080-Processor24]
    18:30:11 DEBUG nsactionSynchronizationManager - Initializing transaction synchronization
    ...
    After that, it works with no problems until another period of inactivity. IMHO, it seems like the connection pool returns an invalid/closed connection, and when Hibernate realizes that, it asks the pool for another connection. I don't know how I can solve this problem or what I can do to narrow it down. Any help will be appreciated. Thanks. EDIT: Well, it finally turned out to be due to a firewall rule. The database detects that the connection is lost, but the pool (dbcp or c3p0) does not, so it tries to query the database without success. What is still strange to me is that the timeout period is very variable. Maybe the rule is especially strange or the firewall doesn't work correctly. Anyway, I have no access to that machine and can only wait for an explanation. :(
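
    For reference, dropped connections like this are usually handled by letting the pool validate connections before handing them out. A minimal sketch for Commons DBCP, which shows up in the log above (the validation query and the timing values are only illustrative, and the password is a placeholder):

        import org.apache.commons.dbcp.BasicDataSource;

        public class PoolConfig {
            public static BasicDataSource createDataSource() {
                BasicDataSource ds = new BasicDataSource();
                ds.setDriverClassName("oracle.jdbc.OracleDriver");
                ds.setUrl("jdbc:oracle:thin:@212.31.39.50:30998:orcl"); // as seen in the log
                ds.setUsername("DEVELOP");
                ds.setPassword("...");                        // placeholder
                ds.setValidationQuery("SELECT 1 FROM DUAL");  // cheap Oracle round trip
                ds.setTestOnBorrow(true);                     // validate before each use
                ds.setTestWhileIdle(true);                    // validate idle connections too
                ds.setTimeBetweenEvictionRunsMillis(5 * 60 * 1000L); // run evictor every 5 min
                ds.setMinEvictableIdleTimeMillis(10 * 60 * 1000L);   // drop connections idle > 10 min
                return ds;
            }
        }

    A connection silently dropped by a firewall is then discarded by the pool instead of being handed to Hibernate, where it would hang until the TCP timeout.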

    Read the article

  • Suppress "running out of disk space" Message (per drive) on Windows Server 2003

    - by Shoeless
    We have a database server with separate drives for OS, various data files and the transaction log. Our transaction log spills over onto other volumes as well- this is expected behavior. The problem is that we are constantly getting popups that our transaction log drive is out of space (and that I can free space by deleting old or unnecessary files). Is there some way to prevent this message from popping up for this particular drive?

    Read the article

  • How to transform a csv to combine matching rows?

    - by Christian Wolf
    I have a CSV file with some transaction data. Let's say date, volume, price and direction (sell/buy). Additionally there is a ID for each transaction and on each closing transaction (the newer one) there is a reference to the corresponding transaction. Classical database referencing. Now I want to do some statistics and draw some plots. This could be done via Octave, LaTeX/TikZ, Gnuplot or whatever. To do this I need both buy and sell price in one row. My thought was to preprocess the CSV to get another CSV containing the needed information and then to do the statistics. In the end I'd like to have a solution based on scripts and not on a spreadsheet as data might change often (exported from online DB). My actual solution (see http://paste.ubuntu.com/6262822/ ) is a bash script that parses the CSV line by line and checks if there exists a corresponding transaction. If found, a new row is written to the destination CSV. If not a warning is printed. The bad news: For each row in the source file I have to read the whole file a few times. This causes long running times of 10sec for 300 lines. As the line number might rise soon (10k lines), this is not perfect. I am aware, that there are many shells to be opened in the script which might cause the performance problems. Now my questions: Is bash/awk/sed/.... a good way to do things? Should I first import all data into a "real" local database to use SQL? Is there an easy way to achieve the desired results?
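
    For what it's worth, the quadratic re-reading can usually be avoided by keeping the opening rows in a hash map and making a single pass over the file. A rough sketch of that idea in Java rather than bash (the column order date,volume,price,direction,id,ref is only an assumed layout, a header row is assumed absent, and picking the buy vs. sell side from the direction field is left out):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.util.HashMap;
        import java.util.Map;

        public class JoinTransactions {
            public static void main(String[] args) throws IOException {
                Map<String, String[]> openById = new HashMap<String, String[]>();
                BufferedReader in = new BufferedReader(new FileReader(args[0]));
                PrintWriter out = new PrintWriter(new FileWriter(args[1]));
                String line;
                while ((line = in.readLine()) != null) {
                    String[] f = line.split(",", -1);   // keep empty trailing fields
                    String id = f[4];
                    String ref = f.length > 5 ? f[5].trim() : "";
                    if (ref.length() == 0) {
                        openById.put(id, f);            // opening transaction: remember it
                    } else {
                        String[] open = openById.remove(ref);
                        if (open == null) {
                            System.err.println("No matching opening transaction for " + id);
                        } else {
                            // closing date, opening price, closing price, volume
                            out.println(f[0] + "," + open[2] + "," + f[2] + "," + f[1]);
                        }
                    }
                }
                out.close();
                in.close();
            }
        }

    With one pass and a map lookup per row, 10k lines should take well under a second.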

    Read the article

  • Server 2008 R2 boot is at 2 hours and counting. What now?

    - by Jesse
    This morning, we rebooted our Server 2008 R2 box. No problem, came right back up. Then we shut it down and let it install windows updates. While it was off, we added some RAM. Then we turned it back on. The system came right back up to the "press ctrl-alt-delete" screen, so far, so good. I logged in. The system got as far as "Applying Group Policy" -- then spent almost an hour applying drive mappings. Finally finished that, and has now spent 30 minutes on waiting for the Event Notification Service. I still haven't been able to log in. Remote desktop service doesn't appear to be running yet. I tried viewing the event log from another machine. I see that the box is writing to the Security log, but there are no events in System or Application in the last 45 minutes. Digging through the System log of events from 45 minutes ago, I see a bunch of timeouts: A timeout (30000 milliseconds) was reached while waiting for a transaction response from the ShellHWDetection service. [lots of these] A timeout (30000 milliseconds) was reached while waiting for a transaction response from the wuauserv service. A timeout (30000 milliseconds) was reached while waiting for a transaction response from the SessionEnv service. A timeout (30000 milliseconds) was reached while waiting for a transaction response from the Schedule service. A timeout (30000 milliseconds) was reached while waiting for a transaction response from the CertPropSvc service. What can I do? Should I try shutting it down remotely, or will that do more damage?

    Read the article

  • "bad record MAC" SSL error between Java and PortgreSQL

    - by Stéphane Bagnier
    Hello there ! We've got here a problem of random disconnections between our Java apps and our PostgreSQL 8.3 server with a "bad record MAC" SSL error. We run Debian / Lenny on both side. On the client side, we see : 2010-03-09 02:36:27,980 WARN org.hibernate.util.JDBCExceptionReporter.logExceptions(JDBCExceptionReporter.java:100) - SQL Error: 0, SQLState: 08006 2010-03-09 02:36:27,980 ERROR org.hibernate.util.JDBCExceptionReporter.logExceptions(JDBCExceptionReporter.java:101) - An I/O error occured while sending to the backend. 2010-03-09 02:36:27,981 ERROR org.hibernate.transaction.JDBCTransaction.toggleAutoCommit(JDBCTransaction.java:232) - Could not toggle autocommit org.postgresql.util.PSQLException: An I/O error occured while sending to the backend. at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:220) at org.postgresql.jdbc2.AbstractJdbc2Connection.executeTransactionCommand(AbstractJdbc2Connection.java:650) at org.postgresql.jdbc2.AbstractJdbc2Connection.commit(AbstractJdbc2Connection.java:670) at org.postgresql.jdbc2.AbstractJdbc2Connection.setAutoCommit(AbstractJdbc2Connection.java:633) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.jdbc.datasource.SingleConnectionDataSource$CloseSuppressingInvocationHandler.invoke(SingleConnectionDataSource.java:336) at $Proxy17.setAutoCommit(Unknown Source) at org.hibernate.transaction.JDBCTransaction.toggleAutoCommit(JDBCTransaction.java:228) at org.hibernate.transaction.JDBCTransaction.rollbackAndResetAutoCommit(JDBCTransaction.java:220) at org.hibernate.transaction.JDBCTransaction.rollback(JDBCTransaction.java:196) at org.hibernate.ejb.TransactionImpl.rollback(TransactionImpl.java:85) at org.springframework.orm.jpa.JpaTransactionManager.doRollback(JpaTransactionManager.java:482) at org.springframework.transaction.support.AbstractPlatformTransactionManager.processRollback(AbstractPlatformTransactionManager.java:823) at org.springframework.transaction.support.AbstractPlatformTransactionManager.rollback(AbstractPlatformTransactionManager.java:800) at org.springframework.transaction.interceptor.TransactionAspectSupport.completeTransactionAfterThrowing(TransactionAspectSupport.java:339) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:110) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:635) ... Caused by: javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLException: bad record MAC at com.sun.net.ssl.internal.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1255) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1267) at com.sun.net.ssl.internal.ssl.AppOutputStream.write(AppOutputStream.java:43) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123) at org.postgresql.core.PGStream.flush(PGStream.java:508) at org.postgresql.core.v3.QueryExecutorImpl.sendSync(QueryExecutorImpl.java:692) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:193) ... 
22 more Caused by: javax.net.ssl.SSLException: bad record MAC at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1611) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1569) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:850) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:746) at com.sun.net.ssl.internal.ssl.AppInputStream.read(AppInputStream.java:75) at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:135) at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:104) at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:186) at org.postgresql.core.PGStream.Receive(PGStream.java:445) at org.postgresql.core.PGStream.ReceiveTupleV3(PGStream.java:350) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1322) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:194) at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:254) at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208) at org.hibernate.loader.Loader.getResultSet(Loader.java:1808) at org.hibernate.loader.Loader.doQuery(Loader.java:697) at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259) at org.hibernate.loader.Loader.loadCollection(Loader.java:2015) at org.hibernate.loader.collection.CollectionLoader.initialize(CollectionLoader.java:59) at org.hibernate.persister.collection.AbstractCollectionPersister.initialize(AbstractCollectionPersister.java:587) at org.hibernate.event.def.DefaultInitializeCollectionEventListener.onInitializeCollection(DefaultInitializeCollectionEventListener.java:83) at org.hibernate.impl.SessionImpl.initializeCollection(SessionImpl.java:1743) at org.hibernate.collection.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:366) at org.hibernate.collection.PersistentSet.add(PersistentSet.java:212) ... the cypher suite SSL_RSA_WITH_RC4_128_SHA was used. We tried on the client side : the OpenJDK package the sun JDK package the sun tar package the libbcprov-java package the PostgreSQL driver 8.3 instead of 8.4 On the server side we see : 2010-03-01 08:26:05 CET [18513]: [161833-1] LOG: SSL error: sslv3 alert bad record mac 2010-03-01 08:26:05 CET [18513]: [161834-1] LOG: could not receive data from client: Connection reset by peer 2010-03-01 08:26:05 CET [18513]: [161835-1] LOG: unexpected EOF on client connection the error type seams to be SSL_R_SSLV3_ALERT_BAD_RECORD_MAC. the SSL layer is configured with : ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' and on the server side we changed the cipher suites to : 'ALL:!SSLv2:!MEDIUM:!AES:!ADH:!LOW:!EXP:!MD5:@STRENGTH' but none of these changes fixed the problem. Suggestions appreciated !
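
    When chasing cipher-suite mismatches like this, it can help to dump exactly which suites the JSSE client will offer and compare them against what the server's ssl_ciphers filter leaves enabled. A small diagnostic sketch (it only prints information, it is not a fix in itself):

        import javax.net.ssl.SSLSocketFactory;

        public class ListCipherSuites {
            public static void main(String[] args) {
                SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                System.out.println("Enabled by default on this JVM:");
                for (String suite : factory.getDefaultCipherSuites()) {
                    System.out.println("  " + suite);
                }
                System.out.println("Supported (but not necessarily enabled):");
                for (String suite : factory.getSupportedCipherSuites()) {
                    System.out.println("  " + suite);
                }
            }
        }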

    Read the article

  • SQL query mixing aggregated results and single values

    - by Paul Flowerdew
    I have a table with transactions. Each transaction has a transaction ID, and accounting period (AP), and a posting value (PV), as well as other fields. Some of the IDs are duplicated, usually because the transaction was done in error. To give an example, part of the table might look like: ID PV AP 123 100 2 123 -100 5 In this case the transaction was added in AP2 then removed in AP5. Another example would be: ID PV AP 456 100 2 456 -100 5 456 100 8 In the first example, the problem is that if I am analyzing what was spent in AP2, there is a transaction in there which actually shouldn't be taken into account because it was taken out again in AP5. In the second example, the second two transactions shouldn't be taken into account because they cancel each other out. I want to label as many transactions as possible which shouldn't be taken into account as erroneous. To identify these transactions, I want to find the ones with duplicate IDs whose PVs sum to zero (like ID 123 above) or transactions where the PV of the earliest one is equal to sum(PV), as in the second example. This second condition is what is causing me grief. So far I have SELECT * FROM table WHERE table.ID IN (SELECT table.ID FROM table GROUP BY table.ID HAVING COUNT(*) > 1 AND (SUM(table.PV) = 0 OR SUM(table.PV) = <PV of first transaction in each group>)) ORDER BY table.ID; The bit in chevrons is what I'm trying to do and I'm stuck. Can I do it like this or is there some other method I can use in SQL to do this? Edit 1: Btw I forgot to say that I'm using SQL Compact 3.5, in case it matters. Edit 2: I think the code snippet above is a bit misleading. I still want to mark out transactions with duplicate IDs where sum(PV) = 0, as in the first example. But where the PV of the earliest transaction = sum(PV), as in the second example, what I actually want is to keep the earliest transaction and mark out all the others with the same ID. Sorry if that caused confusion. Edit 3: I've been playing with Clodoaldo's solution and have made some progress, but still can't get quite what I want. I'm trying to get the transactions I know for certain to be erroneous. Suppose the following transactions are also in the table: ID PV AP 789 100 2 789 200 5 789 -100 8 In this example sum(PV) < 0 and the earliest PV < sum(PV) so I don't want to mark any of these out. If I modify Clodoaldo's query as follows: select t.* from t left join ( select id, min(ap) as ap, sum(pv) as sum_pv from t group by id having sum(pv) <> 0 ) s on t.id = s.id and t.ap = s.ap and t.pv = s.sum_pv where s.id is null This gives the result ID PV AP 123 100 2 123 -100 5 456 -100 5 456 100 8 789 100 3 789 200 5 789 -100 8 Whilst the first 4 transactions are ok (they would be marked out), the 789 transactions are also there, and I don't want them. But I can't figure out how to modify the query so that they're not included. Any ideas?

    Read the article

  • Advantage database throws an exception when attempting to delete a record using a LIKE statement

    - by ChrisR
    The code below shows that a record is deleted when the sql statement is: select * from test where qty between 50 and 59 but the sql statement: select * from test where partno like 'PART/005%' throws the exception: Advantage.Data.Provider.AdsException: Error 5072: Action requires read-write access to the table How can you reliably delete a record with a where clause applied? Note: I'm using Advantage Database v9.10.1.9, VS2008, .Net Framework 3.5 and WinXP 32 bit using System.IO; using Advantage.Data.Provider; using AdvantageClientEngine; using NUnit.Framework; namespace NetworkEidetics.Core.Tests.Dbf { [TestFixture] public class AdvantageDatabaseTests { private const string DefaultConnectionString = @"data source={0};ServerType=local;TableType=ADS_CDX;LockMode=COMPATIBLE;TrimTrailingSpaces=TRUE;ShowDeleted=FALSE"; private const string TestFilesDirectory = "./TestFiles"; [SetUp] public void Setup() { const string createSql = @"CREATE TABLE [{0}] (ITEM_NO char(4), PARTNO char(20), QTY numeric(6,0), QUOTE numeric(12,4)) "; const string insertSql = @"INSERT INTO [{0}] (ITEM_NO, PARTNO, QTY, QUOTE) VALUES('{1}', '{2}', {3}, {4})"; const string filename = "test.dbf"; var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory); using (var connection = new AdsConnection(connectionString)) { connection.Open(); using (var transaction = connection.BeginTransaction()) { using (var command = connection.CreateCommand()) { command.CommandText = string.Format(createSql, filename); command.Transaction = transaction; command.ExecuteNonQuery(); } transaction.Commit(); } using (var transaction = connection.BeginTransaction()) { for (var i = 0; i < 1000; ++i) { using (var command = connection.CreateCommand()) { var itemNo = string.Format("{0}", i); var partNumber = string.Format("PART/{0:d4}", i); var quantity = i; var quote = i * 10; command.CommandText = string.Format(insertSql, filename, itemNo, partNumber, quantity, quote); command.Transaction = transaction; command.ExecuteNonQuery(); } } transaction.Commit(); } connection.Close(); } } [TearDown] public void TearDown() { File.Delete("./TestFiles/test.dbf"); } [Test] public void CanDeleteRecord() { const string sqlStatement = @"select * from test"; Assert.AreEqual(1000, GetRecordCount(sqlStatement)); DeleteRecord(sqlStatement, 3); Assert.AreEqual(999, GetRecordCount(sqlStatement)); } [Test] public void CanDeleteRecordBetween() { const string sqlStatement = @"select * from test where qty between 50 and 59"; Assert.AreEqual(10, GetRecordCount(sqlStatement)); DeleteRecord(sqlStatement, 3); Assert.AreEqual(9, GetRecordCount(sqlStatement)); } [Test] public void CanDeleteRecordWithLike() { const string sqlStatement = @"select * from test where partno like 'PART/005%'"; Assert.AreEqual(10, GetRecordCount(sqlStatement)); DeleteRecord(sqlStatement, 3); Assert.AreEqual(9, GetRecordCount(sqlStatement)); } public int GetRecordCount(string sqlStatement) { var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory); using (var connection = new AdsConnection(connectionString)) { connection.Open(); using (var command = connection.CreateCommand()) { command.CommandText = sqlStatement; var reader = command.ExecuteExtendedReader(); return reader.GetRecordCount(AdsExtendedReader.FilterOption.RespectFilters); } } } public void DeleteRecord(string sqlStatement, int rowIndex) { var connectionString = string.Format(DefaultConnectionString, TestFilesDirectory); using (var connection = new AdsConnection(connectionString)) { 
connection.Open(); using (var command = connection.CreateCommand()) { command.CommandText = sqlStatement; var reader = command.ExecuteExtendedReader(); reader.GotoBOF(); reader.Read(); if (rowIndex != 0) { ACE.AdsSkip(reader.AdsActiveHandle, rowIndex); } reader.DeleteRecord(); } connection.Close(); } } } }

    Read the article

  • Queued Loadtest to remove Concurrency issues using Shared Data Service in OpenScript

    - by stefan.thieme(at)oracle.com
    Queued Processing to remove Concurrency issues in Loadtest ScriptsSome scripts act on information returned by the server, e.g. act on first item in the returned list of pending tasks/actions. This may lead to concurrency issues if the virtual users simulated in a load test scenario are not synchronized in some way.As the load test cases should be carried out in a comparable and straight forward manner simply cancel a transaction in case a collision occurs is clearly not an option. In case you increase the number of virtual users this approach would lead to a high number of requests for the early steps in your transaction (e.g. login, retrieve list of action points, assign an action point to the virtual user) but later steps would be rarely visited successfully or at all, depending on the application logic.A way to tackle this problem is to enqueue the virtual users in a Shared Data Service queue. Only the first virtual user in this queue will be allowed to carry out the critical steps (retrieve list of action points, assign an action point to the virtual user) in your transaction at any one time.Once a virtual user has passed the critical path it will dequeue himself from the head of the queue and continue with his actions. This does theoretically allow virtual users to run in parallel all steps of the transaction which are not part of the critical path.In practice it has been seen this is rarely the case, though it does not allow adding more than N users to perform a transaction without causing delays due to virtual users waiting in the queue. N being the time of the total transaction divided by the sum of the time of all critical steps in this transaction.While this problem can be circumvented by allowing multiple queues to act on individual segments of the list of actions, e.g. per country filter, ends with 0..9 filter, etc.This would require additional handling of these additional queues of slots for the virtual users at the head of the queue in order to maintain the mutually exclusive access to the first element in the list returned by the server at any one time of the load test. Such an improved handling of multiple queues and/or multiple slots is above the subject of this paper.Shared Data Services Pre-RequisitesStart WebLogic Server to host Shared Data ServicesYou will have to make sure that your WebLogic server is installed and started. Shared Data Services may not work if you installed only the minimal installation package for OpenScript. If however you installed the default package including OLT and OTM, you may follow the instructions below to start and verify WebLogic installation.To start the WebLogic Server deployed underneath of Oracle Load Testing and/or Oracle Test Manager you can go to your Start menu, Oracle Application Testing Suite and select the Restart Oracle Application Testing Suite Application Service entry from the Tools submenu.To verify the service has been started you can run the Microsoft Management Console for Services by Selecting Run from the Start Menu and entering services.msc. Look for the entry that reads Oracle Application Testing Suite Application Service, once it has changed it status from Starting to Started you can proceed to verify the login. Please note that this may take several minutes, I would say up to 10 minutes depending on the strength of your CPU horse-power.Verify WebLogic Server user credentialsYou will have to make sure that your WebLogic Server is installed and started. 
Next open the Oracle WebLogic Server Adminstration Console on http://localhost:8088/console.It may take a while until the application is deployed and started. It may display the following until the Administration Console has been deployed on the fly.Afterwards you can login using the username oats and the password that you selected during install time for your Application Testing Suite administrative purposes.This will bring up the Home page of you WebLogic Server. You have actually verified that you are able to login with these credentials already. However if you want to check the details, navigate to Security Realms, myrealm, Users and Groups tab.Here you could add users to your WebLogic Server which could be used in the later steps. Details on the Groups required for such a custom user to work are exceeding this quick overview and have to be selected with the WebLogic Server Adminstration Guide in mind.Shared Data Services pre-requisites for Load testingOpenScript Preferences have to be set to enable Encryption and provide a default Shared Data Service Connection for Playback.These are pre-requisites you want to use for load testing with Shared Data Services.Please note that the usage of the Connection Parameters (individual directive in the script) for Shared Data Services did not playback reliably in the current version 9.20.0370 of Oracle Load Testing (OLT) and encryption of credentials still seemed to be mandatory as well.General Encryption settingsSelect OpenScript Preferences from the View menu and navigate to the General, Encryption entry in the tree on the left. Select the Encrypt script data option from the list and enter the same password that you used for securing your WebLogic Server Administration Console.Enable global shared data access credentialsSelect OpenScript Preferences from the View menu and navigate to the Playback, Shared Data entry in the tree on the left. Enable the global shared data access credentials and enter the Address, User name and Password determined for your WebLogic Server to host Shared Data Services.Please note, that you may want to replace the localhost in Address with the hosts realname in case you plan to run load tests with Loadtest Agents running on remote systems.Queued Processing of TransactionsEnable Shared Data Services Module in Script PropertiesThe Shared Data Services Module has to be enabled for each Script that wants to employ the Shared Data Service Queue functionality in OpenScript. It can be enabled under the Script menu selecting Script Properties. On the Script Properties Dialog select the Modules section and check Shared Data to enable Shared Data Service Module for your script. 
Checking the Shared Data Services option will effectively add a line to your script code that adds the sharedData ScriptService to your script class of IteratingVUserScript.@ScriptService oracle.oats.scripting.modules.sharedData.api.SharedDataService sharedData;Record your scriptRecord your script as usual and then add the following things for Queue handling in the Initialize code block, before the first step and after the last step of your critical path and in the Finalize code block.The java code to be added at individual locations is explained in the following sections in full detail.Create a Shared Data Queue in InitializeTo create a Shared Data Queue go to the Java view of your script and enter the following statements to the initialize() code block.info("Create queueA with life time of 120 minutes");sharedData.createQueue("queueA", 120);This will create an instantiation of the Shared Data Queue object named queueA which is maintained for upto 120 minutes.If you want to use the code for multiple scripts, make sure to use a different queue name for each one here and in the subsequent steps. You may even consider to use a dynamic queueName based on filters of your result list being concurrently accessed.Prepare a unique id for each IterationIn order to keep track of individual virtual users in our queue we need to create a unique identifier from the virtual user id and the used username right after retrieving the next record from our databank file.getDatabank("Usernames").getNextDatabankRecord();getVariables().set("usernameValue1","VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}");String usernameValue = getVariables().get("usernameValue1");info("Now running virtual user " + usernameValue);As you can see from the above code block, we have set the OpenScript variable usernameValue1 to VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}} which is a concatenation of the virtual user id and the iterationnumber for general uniqueness; as well as the username from our databank, the timestamp and a random number for making it further unique and ease spotting of errors.Not all of these fields are actually required to make it really unique, but adding the queue name may also be considered to help troubleshoot multiple queues.The value is then retrieved with the getVariables.get() method call and assigned to the usernameValue String used throughout the script.Please note that moving the getDatabank("Usernames").getNextDatabankRecord(); call to the initialize block was later considered to remove concurrency of multiple virtual users running with the same userid and therefor accessing the same "My Inbox" in step 6. This will effectively give each virtual user a userid from the databank file. Make sure you have enough userids to remove this second hurdle.Enqueue and attend Queue before Critical PathTo maintain the right order of virtual users being allowed into the critical path of the transaction the following pseudo step has to be added in front of the first critical step. 
In the case of this example this is right in front of the step where we retrieve the list of actions from which we select the first to be assigned to us.beginStep("[0] Waiting in the Queue", 0);{info("Enqueued virtual user " + usernameValue + " at the end of queueA");sharedData.offerLast("queueA", usernameValue);info("Wait until the user is the first in queueA");String queueValue1 = null;do {// we wait for at least 0.7 seconds before we check the head of the// queue. This is the time it takes one user to move through the// critical path, i.e. pass steps [5] Enter country and [6] Assign// to meThread.sleep(700);queueValue1 = (String) sharedData.peekFirst("queueA");info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length() );info("The current user is '"+ usernameValue + "' " + usernameValue.getClass() + " length " + usernameValue.length() + ": indexOf " + usernameValue.indexOf(queueValue1) + " equals " + usernameValue.equals(queueValue1) );} while ( queueValue1.indexOf(usernameValue) < 0 );info("Now the user is the first in queueA");}endStep();This will enqueue the username to the tail of our Queue. It will will wait for at least 700 milliseconds, the time it takes for one user to exit the critical path and then compare the head of our queue with it's username. This last step will be repeated while the two are not equal (indexOf less than zero). If they are equal the indexOf will yield a value of zero or larger and we will perform the critical steps.Dequeue after Critical PathAfter the virtual user has left the critical path and complete its last step the following code block needs to dequeue the virtual user. In the case of our example this is right after the action has been actually assigned to the virtual user. This will allow the next virtual user to retrieve the list of actions still available and in turn let him make his selection/assignment.info("Get and remove the current user from the head of queueA");String pollValue1 = (String) sharedData.pollFirst("queueA");The current user is removed from the head of the queue. The next one will now be able to match his username against the head of the queue.Clear and Destroy Queue for FinishWhen the script has completed, it should clear and destroy the queue. This code block can be put in the finish block of your script and/or in a separate script in order to clear and remove the queue in case you have spotted an error or want to reset the queue for some reason.info("Clear queueA");sharedData.clearQueue("queueA");info("Destroy queueA");sharedData.destroyQueue("queueA");The users waiting in queueA are cleared and the queue is destroyed. If you have scripts still executing they will be caught in a loop.I found it better to maintain a separate Reset Queue script which contained only the following code in the initialize() block. I use to call this script to make sure the queue is cleared in between multiple Loadtest runs. This script could also even be added as the first in a larger scenario, which would execute it only once at very start of the Loadtest and make sure the queues do not contain any stale entries.info("Create queueA with life time of 120 minutes");sharedData.createQueue("queueA", 120);info("Clear queueA");sharedData.clearQueue("queueA");This will create a Shared Data Queue instance of queueA and clear all entries from this queue.Monitoring QueueWhile creating the scripts it was useful to monitor the contents, i.e. the current first user in the Queue. 
The following code block will make sure the Shared Data Queue is accessible in the initialize() block.info("Create queueA with life time of 120 minutes");sharedData.createQueue("queueA", 120);In the run() block the following code will continuously monitor the first element of the Queue and write an informational message with the current username Value to the Result window.info("Monitor the first users in queueA");String queueValue1 = null;do {queueValue1 = (String) sharedData.peekFirst("queueA");if (queueValue1 != null)info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length() );} while ( true );This script can be run from OpenScript parallel to a loadtest performed by the Oracle Load Test.However it is not recommend to run this in a production loadtest as the performance impact is unknown. Accessing the Queue's head with the peekFirst() method has been reported with about 2 seconds response time by both OpenScript and OTL. It is advised to log a Service Request to see if this could be lowered in future releases of Application Testing Suite, as the pollFirst() and even offerLast() writing to the tail of the Queue usually returned after an average 0.1 seconds.Debugging QueueWhile debugging the scripts the following was useful to remove single entries from its head, i.e. the current first user in the Queue. The following code block will make sure the Shared Data Queue is accessible in the initialize() block.info("Create queueA with life time of 120 minutes");sharedData.createQueue("queueA", 120);In the run() block the following code will remove the first element of the Queue and write an informational message with the current username Value to the Result window.info("Get and remove the current user from the head of queueA");String pollValue1 = (String) sharedData.pollFirst("queueA");info("The first user in queueA was currently: '" + pollValue1 + "' " + pollValue1.getClass() + " length " + pollValue1.length() );ReferencesOracle Functional Testing OpenScript User's Guide Version 9.20 [E15488-05]Chapter 17 Using the Shared Data Modulehttp://download.oracle.com/otn/nt/apptesting/oats-docs-9.21.0030.zipOracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help 11g Release 1 (10.3.4) [E13952-04]Administration Console Online Help - Manage users and groupshttp://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e13952/taskhelp/security/ManageUsersAndGroups.htm

    Read the article

  • Update: GTAS and EBS

    - by jeffrey.waterman
    Provided below are updated target date timeframes for patches supporting upcoming legislative enhancements. Dates have been pushed out from those previously provided due to changes in Treasury mandatory dates; the mandatory dates for GTAS and IPAC have changed since the previous target dates were given. These are target dates, not commitments to deliver functionality.

    Deliverable | Target Timeframe for Customer Patches | Comments

    R12
      GTAS Configuration | Apr 2012 | Patch is available
      GTAS Key Processes | Oct/Nov 2012 | Includes the GTAS processes necessary to create the GTAS interface file, migration of FACTS balances to GTAS, the GTAS Trial Balance, and the GTAS Transaction Register.
      GTAS Reports | Nov/Dec 2012 | GTAS Trial Balance; GTAS Transaction Register
      Capture of Trading Partner TAS/BETC | Apr/May 2013 | Includes the modifications necessary to capture BETC and Trading Partner TAS/BETC on relevant transactions.
      GTAS Other Processes | May/Jun 2013 | Includes GTAS Customer and Vendor update processes.
      IPAC | Aug/Sep | Includes the modifications required to IPAC to accommodate Componentized TAS and BETC.

    11i
      GTAS Configuration | May 2012 | Patch is available
      GTAS Key Processes | Nov/Dec 2012 | Includes the GTAS processes necessary to create the GTAS interface file, migration of FACTS balances to GTAS, the GTAS Trial Balance, and the GTAS Transaction Register.
      GTAS Reports | Dec/Jan 2012 | GTAS Trial Balance; GTAS Transaction Register
      Capture of Trading Partner TAS/BETC | May/Jun 2013 | Includes the modifications necessary to capture BETC and Trading Partner TAS/BETC on relevant transactions.
      GTAS Other Processes | Jun/Jul 2013 | Includes GTAS Customer and Vendor update processes.
      IPAC | Sep/Oct 2013 | Includes the modifications required to IPAC to accommodate Componentized TAS and BETC.

    Read the article

  • Oracle Flashback Technology - Webcast 9th June 2010

    - by Alex Blyth
    Hi All Here are the details for webcast on Oracle Flashback Technologies on Wednesday (9th June 2010) beginning at 1.30pm (Sydney, Australia Time). The Oracle Database architecture leverages the unique technological advances in the area of database recovery due to human errors. Oracle Flashback Technology provides a set of new features to view and rewind data back and forth in time. The Flashback features offer the capability to query historical data, perform change analysis, and perform self-service repair to recover from logical corruptions while the database is online. With Oracle Flashback Technology, you can indeed undo the past! Oracle9i introduced Flashback Query to provide a simple, powerful and completely non-disruptive mechanism for recovering from human errors. It allows users to view the state of data at a point in time in the past without requiring any structural changes to the database. Oracle Database 10g extended the Flashback Technology to provide fast and easy recovery at the database, table, row, and transaction level. Flashback Technology revolutionizes recovery by operating just on the changed data. The time it takes to recover the error is now equal to the same amount of time it took to make the mistake. Oracle 10g Flashback Technologies includes Flashback Database, Flashback Table, Flashback Drop, Flashback Versions Query, and Flashback Transaction Query. Flashback technology can just as easily be utilized for non-repair purposes, such as historical auditing with Flashback Query and undoing test changes with Flashback Database. Oracle Database 11g introduces an innovative method to manage and query long-term historical data with Flashback Data Archive. This release also provides an easy, one-step transaction backout operation, with the new Flashback Transaction capability. Webcast is at http://strtc.oracle.com (IE6, 7 & 8 supported only)Conference ID for the webcast is 6690835Conference Key: flashbackEnrollment is required. Please click here to enroll.Please use your real name in the name field (just makes it easier for us to help you out if we can't answer your questions on the call) Audio details: NZ Toll Free - 0800 888 157 orAU Toll Free - 1800420354 (or +61 2 8064 0613)Meeting ID: 7914841Meeting Passcode: 09062010 Talk to you all Wednesday 9th June Alex

    Read the article

  • How to keep background requests in sequence

    - by Jason Lewis
    I'm faced with implementing interfaces for some rather archaic systems for handling online deposits to stored value accounts (think campus card accounts for students). Here's my dilemma: step one of the process involves passing the user off to a third-party site for the credit card transaction, like old-school PayPal. Step two involves using a proprietary protocol for communicating with a legacy system for conducting the actual deposit. Step two requires that each transaction have a unique sequence number, and that the requests' seqnums are in order. Since we're logging each transaction in Postgres, my first thought was to take a number from a sequence in the DB, guaranteeing uniqueness. But since we're dealing with web requests that might come in near-simultaneously, and since latency with the return from the off-site payment processor is beyond our control, there's always the chance for a race condition in the order of requests passed back to the proprietary system, and if the seqnums are out of order, the request fails silently (brilliant, right?). I thought about enqueuing the requests in Redis and using Resque workers to process them (single worker, single process, so they are processed in order), but we need to be able to give the user feedback as to whether the transaction was processed successfully, so this seems less feasible to me. I've tried to make this application handle concurrency well (as much as possible for a Ruby on Rails app), but now we're in a situation where we have to interact with a system that is designed to be single process, single threaded, and sequential. If it at least gave an "out of order" error, I could just increment (or take the next value off the sequence), but it's designed to fail silently in the event of ANY error. We are handling timeouts in a way that blocks on I/O, but since the application uses multiple workers (Unicorn), that's no guarantee. Any ideas/suggestions would be appreciated.
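
    The application in question is Rails, but the underlying idea (funnel every legacy call through a single worker, take the sequence number inside that worker immediately before the call, and let the web request block on the result so the user still gets feedback) can be sketched in any language; here it is in Java with made-up method and class names:

        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class DepositDispatcher {

            private final ExecutorService worker = Executors.newSingleThreadExecutor();
            private long nextSeq = 1; // only ever touched by the single worker thread

            public Future<Boolean> submit(final String accountId, final long amountCents) {
                return worker.submit(new Callable<Boolean>() {
                    public Boolean call() {
                        long seq = nextSeq++; // unique and strictly increasing, in send order
                        return Boolean.valueOf(sendToLegacySystem(seq, accountId, amountCents));
                    }
                });
            }

            // Placeholder for the proprietary, sequential legacy protocol call.
            private boolean sendToLegacySystem(long seq, String accountId, long amountCents) {
                return true;
            }
        }

    A request-handling thread would call submit(...).get(), so it can still report success or failure to the user while ordering is guaranteed by the single worker.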

    Read the article

  • “Why do transactional messages all have the same priority?”

    - by John Breakwell
    I answered this question on the MSMQ forum on MSDN and thought it worth sharing here. The poster wanted to know why all transactional messages have a fixed priority of zero (instead of 0 through 7). They wanted the guaranteed delivery of messages to a queue but wanted to assign different levels of priority. Some aspects of MSMQ were defined way back in the last century and this is one of them. If I remember right, the reason was to avoid the following scenario: You have a single transaction of 3 messages (a, b and c) with priorities 5, 3 and 4 respectively. The messages are sent in order a,b,c The messages arrive in the queue and are arranged in order a,c,b to reflect priority order This breaks the guaranteed order part of the transaction.  I know that very few people send more than one message in a transaction but that is a scenario that MSMQ has to be able to handle; for the majority, including yourself, this scenario is irrelevant which is why you are surprised by the absence of transactional priorities. The options, therefore, available to the Microsoft developers were to: Implement code that allowed you to send messages with variable priority as long as any messages within the same transaction were the same priority, or Define a set priority for all transactional messages As you can understand, option 1 would be a complicated arrangement with all the necessary enforcement, error handling, user education and documentation, etc. Sure, it would have been possible to implement option 1 but I expect the product group decided to invest the development time in some other aspect of MSMQ. Now, with five versions out there, it would be confusing to change how the product operates, in addition to potentially breaking exisiting systems that have been working fine for years. So, balancing cost and risk against customer demand, I would not expect this feature to ever change.

    Read the article

  • Replacing objects, handling clones, dealing with write logs

    - by Alix
    Hi everyone, I'm dealing with a problem I can't figure out how to solve, and I'd love to hear some suggestions. [NOTE: I realise I'm asking several questions; however, answers need to take into account all of the issues, so I cannot split this into several questions] Here's the deal: I'm implementing a system that underlies user applications and that protects shared objects from concurrent accesses. The application programmer (whose application will run on top of my system) defines such shared objects like this: public class MyAtomicObject { // These are just examples of fields you may want to have in your class. public virtual int x { get; set; } public virtual List<int> list { get; set; } public virtual MyClassA objA { get; set; } public virtual MyClassB objB { get; set; } } As you can see, they declare the fields of their class as auto-generated properties (auto-generated means they don't need to implement get and set). This is so that I can go in and extend their class and implement each get and set myself in order to handle possible concurrent accesses, etc. This is all well and good, but now it starts to get ugly: the application threads run transactions, like this: The thread signals it's starting a transaction. This means we now need to monitor its accesses to the fields of the atomic objects. The thread runs its code, possibly accessing fields for reading or writing. If there are accesses for writing, we'll hide them from the other transactions (other threads), and only make them visible in step 3. This is because the transaction may fail and have to roll back (undo) its updates, and in that case we don't want other threads to see its "dirty" data. The thread signals it wants to commit the transaction. If the commit is successful, the updates it made will now become visible to everyone else. Otherwise, the transaction will abort, the updates will remain invisible, and no one will ever know the transaction was there. So basically the concept of a transaction is a series of accesses that appear to have happened atomically, that is, all at the same time, in the same instant, which would be the moment of successful commit. (This is as opposed to its updates becoming visible as it makes them) In order to hide the write accesses in step 2, I clone the accessed field (let's say it's the field list) and put it in the transaction's write log. After that, any time the transaction accesses list, it will actually be accessing the clone in its write log, and not the global copy everyone else sees. This way, any changes it makes will be done to the (invisible) clone, not to the global copy. If in step 3 the commit is successful, the transaction should replace the global copy with the updated list it has in its write log, and then the changes become visible for everyone else at once. It would be something like this: myAtomicObject.list = updatedCloneOfListInTheWriteLog; Problem #1: possible references to the list. Let's say someone puts a reference to the global list in a dictionary. When I do... myAtomicObject.list = updatedCloneOfListInTheWriteLog; ...I'm just replacing the reference in the field list, but not the real object (I'm not overwriting the data), so in the dictionary we'll still have a reference to the old version of the list. A possible solution would be to overwrite the data (in the case of a list, empty the global list and add all the elements of the clone). More generically, I would need to copy the fields of one list to the other. 
I can do this with reflection, but that's not very pretty. Is there any other way to do it? Problem #2: even if problem #1 is solved, I still have a similar problem with the clone: the application programmer doesn't know I'm giving him a clone and not the global copy. What if he puts the clone in a dictionary? Then at commit there will be some references to the global copy and some to the clone, when in truth they should all point to the same object. I thought about providing a wrapper object that contains both the cloned list and a pointer to the global copy, but the programmer doesn't know about this wrapper, so they're not going to use the pointer at all. The wrapper would be like this: public class Wrapper<T> : T { // This would be the pointer to the global copy. The local data is contained in whatever fields the wrapper inherits from T. private T thisPtr; } I do need this wrapper for comparisons: if I have a dictionary that has an entry with the global copy as a key and I look it up with the clone, like this: dictionary[updatedCloneOfListInTheWriteLog], I need it to return the entry, that is, to think that updatedCloneOfListInTheWriteLog and the global copy are the same thing. For this, I can just override Equals, GetHashCode, operator== and operator!=, no problem. However, I still don't know how to solve the case in which the programmer unknowingly inserts a reference to the clone in a dictionary. Problem #3: the wrapper must extend the class of the object it wraps (if it's wrapping MyClassA, it must extend MyClassA) so that it's accepted wherever an object of that class (MyClassA) would be accepted. However, that class (MyClassA) may be final. This is pretty horrible :$. Any suggestions? I don't need to use a wrapper, anything you can think of is fine. What I cannot change is the write log (I need to have a write log) and the fact that the programmer doesn't know about the clone. I hope I've made some sense. Feel free to ask for more info if something needs some clearing up. Thanks so much!
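
    As a point of reference for Problem #1, here is a rough sketch (not from the original post; all names are hypothetical) of the reflection-based copy the poster mentions: copying the public properties of the clone onto the global instance, so existing references to the global object keep seeing the updated state. For collection types such as List<int>, the elements would still need to be copied explicitly (e.g. Clear() followed by AddRange()).

        // Hypothetical helper: shallow-copies readable/writable public properties from
        // one object onto another of the same runtime type. Indexers are skipped.
        using System;
        using System.Reflection;

        public static class StateCopier
        {
            public static void CopyInto(object source, object target)
            {
                if (source == null || target == null)
                    throw new ArgumentNullException(source == null ? "source" : "target");

                foreach (PropertyInfo prop in target.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
                {
                    if (prop.CanRead && prop.CanWrite && prop.GetIndexParameters().Length == 0)
                    {
                        prop.SetValue(target, prop.GetValue(source, null), null);
                    }
                }
            }
        }

        // Usage at commit time: StateCopier.CopyInto(updatedCloneOfListInTheWriteLog, myAtomicObject.list);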

    Read the article

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, it’s likely that you will need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario. A different database per tenant A new database is created and assigned when a tenant is provisioned. Pros Complete isolation between tenants. All the data for a tenant lives in a database only he can access. Cons It’s not cost effective. SQL Azure databases are not cheap, and the minimum size for a database is 1GB. You might be paying for storage that you don’t really use. A different connection pool is required per database. Updates must be replicated across all the databases. You need multiple backup strategies across all the databases. Multiple schemas in a database shared by all the tenants A single database is shared among all the tenants, but every tenant is assigned to a different schema and database user. Pros You only pay for a single database. Data is isolated at the database level. If the credentials for one tenant are compromised, the rest of the data for the other tenants is not. Cons You need to replicate all the database objects in every schema, so the number of objects can increase indefinitely. Updates must be replicated across all the schemas. The connection pool for the database must maintain a different connection per tenant (or set of credentials). A different user is required per tenant, which is stored at server level. You have to back up that user independently. Centralizing the database access with stored procedures in a database shared by all the tenants A single database is shared among all the tenants, but nobody can read the data directly from the tables. All the data operations are performed through stored procedures that centralize the access to the tenant data. The stored procedures contain some logic to map the database user to a specific tenant. Pros You only pay for a single database. You only have one set of objects to maintain and back up. Cons There is no real isolation. All the data for the different tenants is shared in the same tables. You cannot use a traditional ORM like EF Code First for consuming the data. A different user is required per tenant, which is stored at server level. You have to back up that user independently. SQL Federations A single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure, which basically uses the idea of logical partitions to distribute data based on certain criteria. Pros You only have a single database with multiple federations. You can use filtering in the connections to pick the right federation, so any ORM could be used to consume the data. Cons There is no real isolation at the database level. The isolation is enforced programmatically with federations.
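
    As a rough illustration of the last approach (not from the original post; the federation name and distribution key are assumptions), a connection can be routed to a tenant's federation member before handing it to an ORM. USE FEDERATION is issued as its own batch, and FILTERING = ON scopes subsequent queries to that tenant's rows.

        // Sketch: open a connection scoped to one tenant's federation member.
        // Assumes a federation named TenantFederation with a bigint distribution key tenant_id.
        using System.Data.SqlClient;

        public static class TenantConnections
        {
            public static SqlConnection OpenForTenant(string connectionString, long tenantId)
            {
                var conn = new SqlConnection(connectionString);
                conn.Open();

                using (var cmd = conn.CreateCommand())
                {
                    // The key value is embedded as a literal here (tenantId is a long, so no injection risk).
                    cmd.CommandText = "USE FEDERATION TenantFederation (tenant_id = " + tenantId +
                                      ") WITH RESET, FILTERING = ON";
                    cmd.ExecuteNonQuery();
                }
                return conn;
            }
        }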

    Read the article

  • SQL SERVER – Fix : Error : 8501 MSDTC on server is unavailable. Changed database context to publishe

    - by pinaldave
    While configuring replication on one of the servers, I received the following error. This is a very common error and the solution is even simpler. MSDTC on server is unavailable. Changed database context to publisherdatabase. (Microsoft SQL Server, Error: 8501) Solution: Enable the “Distributed Transaction Coordinator” service. Method 1: Click on Start -> Control Panel -> Administrative Tools -> Services. Select the service “Distributed Transaction Coordinator”. Right-click on the service and choose “Start”. Method 2: Type services.msc in the Run command box and hit Enter to open the Services manager. Select the service “Distributed Transaction Coordinator”. Right-click on the service and choose “Start”. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Replication
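
    For completeness, a small sketch (not part of the original post) of doing the same thing programmatically with System.ServiceProcess; the Windows service name for the Distributed Transaction Coordinator is typically MSDTC.

        // Sketch: start the Distributed Transaction Coordinator service from code.
        // Requires a reference to System.ServiceProcess.dll and administrative rights.
        using System;
        using System.ServiceProcess;

        class StartMsdtc
        {
            static void Main()
            {
                using (var dtc = new ServiceController("MSDTC"))
                {
                    if (dtc.Status != ServiceControllerStatus.Running)
                    {
                        dtc.Start();
                        dtc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
                    }
                    Console.WriteLine("MSDTC is " + dtc.Status);
                }
            }
        }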

    Read the article

  • Authorize.Net, Silent Posts, and URL Rewriting Don't Mix

    The too long, didn't read synopsis: If you use Authorize.Net and its silent post feature and it stops working, make sure that if your website uses URL rewriting to strip or add a www to the domain name, the URL you specify for the silent post matches the URL rewriting rule, because Authorize.Net's silent post feature won't resubmit the post request to the URL specified via the redirect response. I have a client that uses Authorize.Net to manage and bill customers. Like many payment gateways, Authorize.Net supports recurring payments. For example, a website may charge members a monthly fee to access their services. With Authorize.Net you can provide the billing amount and schedule and at each interval Authorize.Net will automatically charge the customer's credit card and deposit the funds to your account. You may want to do something whenever Authorize.Net performs a recurring payment. For instance, if the recurring payment charge was a success you would extend the customer's service; if the transaction was denied then you would cancel their service (or whatever). To accommodate this, Authorize.Net offers a silent post feature. Properly configured, Authorize.Net will send an HTTP request that contains details of the recurring payment transaction to a URL that you specify. This URL could be an ASP.NET page on your server that then parses the data from Authorize.Net and updates the specified customer's account accordingly. (Of course, you can always view the history of recurring payments through the reporting interface on Authorize.Net's website; the silent post feature gives you a way to programmatically respond to a recurring payment.) Recently, this client of mine that uses Authorize.Net informed me that several paying customers were telling him that their access to the site had been cut off even though their credit cards had been recently billed. Looking through our logs, I noticed that there had been no recurring payment log activity for over a month. I figured one of two things must be going on: either Authorize.Net wasn't sending us the silent post requests anymore or the page that was processing them wasn't doing so correctly. I started by verifying that our Authorize.Net account was properly set up to use the silent post feature and that it was pointing to the correct URL. Authorize.Net's site indicated the silent post was configured and that recurring payment transaction details were being sent to http://example.com/AuthorizeNetProcessingPage.aspx. Next, I wanted to determine what information was getting sent to that URL. The application was set up to log the parsed results of the Authorize.Net request, such as what customer the recurring payment applied to; however, we were not logging the actual HTTP request coming from Authorize.Net. I contacted Authorize.Net's support to inquire if they logged the HTTP request sent via the silent post feature and was told that they did not. I decided to add a bit of code to log the incoming HTTP request, which you can do by using the Request object's SaveAs method. This allowed me to save every incoming HTTP request to the silent post page to a text file on the server. Upon the next recurring payment, I was able to see the HTTP request being received by the page: GET /AuthorizeNetProcessingPage.aspx HTTP/1.1 Connection: Close Accept: */* Host: www.example.com That was it. 
Two things alarmed me: first, the request was obviously a GET and not a POST; second, there was no POST body (obviously), which is where Authorize.Net passes along the details of the recurring payment transaction. What stuck out was the Host header, which differed slightly from the silent post URL configured in Authorize.Net. Specifically, the Host header in the above logged request pointed to www.example.com, whereas the Authorize.Net configuration used example.com (no www). About a month ago - the same time these recurring payment transaction details were no longer being processed by our ASP.NET page - we had implemented IIS 7's URL rewriting feature to permanently redirect all traffic from example.com to www.example.com. Could that be the problem? I contacted Authorize.Net's support again and asked them if their silent post algorithm would follow the 301 HTTP response and repost the recurring payment transaction details. They said, Yes, the silent post would follow redirects. Their reports didn't jibe with my observations, so I went ahead and updated our Authorize.Net configuration to point to http://www.example.com/AuthorizeNetProcessingPage.aspx instead of http://example.com/AuthorizeNetProcessingPage.aspx. And, I'm happy to report, recurring payments are correctly being processed again! If you use Authorize.Net and the silent post feature, and you notice that your processing page is no longer working, make sure you are not using any URL rewriting rules that may conflict with the silent post URL configuration. Hope this saves someone the time it took me to get to the bottom of this. Happy Programming!
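
    A minimal sketch of the logging described above (not the author's actual code; the page name and file path are assumptions) using the Request object's SaveAs method in the silent post page:

        // Sketch: dump each raw incoming HTTP request (headers included) to a file
        // so you can see exactly what Authorize.Net is sending to the silent post page.
        using System;
        using System.Web.UI;

        public partial class AuthorizeNetProcessingPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string logFile = Server.MapPath("~/App_Data/silentpost-" + DateTime.UtcNow.Ticks + ".txt");
                Request.SaveAs(logFile, true);   // true = include the HTTP headers

                // ... existing parsing of the recurring payment details would continue here ...
            }
        }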

    Read the article

  • Oracle SALT 11gR1

    - by Maurice Gamanho
    With the 11gR1 release, SALT now supports Web services transactions (WS-TX). In a nutshell, the SALT 11gR1 Web services gateway (GWWS) now supports bi-directional transactional interoperability. What this means is that Tuxedo application services can now be invoked in a global transaction context using Web services. This feature is natural to a product like Tuxedo given its history as a transaction processing monitor and its significant contribution to the X/Open (now the Open Group) XA specification. We implemented Web Services Coordination (WS-COOR) and Web Services Atomic Transaction (WS-AT). We also tested and certified with WebLogic Server 11gR1 and Microsoft WCF 3.5 (.Net Framework). For more information, please visit the Tuxedo OTN home page, where you can download a document and samples that will help you get started with WS-TX in Tuxedo. You can check the product documentation here.

    Read the article

  • I have written an SQL query but I want to optimize it [closed]

    - by ankit gupta
    Is there any way to do this using the minimum number of joins and selects? Two tables are involved in this operation: transaction_pci_details and transaction. SELECT t6.transaction_pci_details_id, t6.terminal_id, t6.transaction_no, t6.transaction_id, t6.transaction_type, t6.reversal_flag, t6.transmission_date_time, t6.retrivel_ref_no, t6.card_no, t6.card_type, t6.expires_on, t6.transaction_amount, t6.currency_code, t6.response_code, t6.action_code, t6.message_reason_code, t6.merchant_id, t6.auth_code, t6.actual_trans_amnt, t6.bal_card_amnt, t5.sales_person_id FROM TRANSACTION AS t5 INNER JOIN ( SELECT t4.transaction_pci_details_id, t4.terminal_id, t4.transaction_no, t4.transaction_id, t4.transaction_type, t4.reversal_flag, t4.transmission_date_time, t4.retrivel_ref_no, t4.card_no, t4.card_type, t4.expires_on, t4.transaction_amount, t4.currency_code, t4.response_code, t4.action_code, t3.message_reason_code, t4.merchant_id, t4.auth_code, t4.actual_trans_amnt, t4.bal_card_amnt FROM ( SELECT * FROM transaction_pci_details WHERE message_reason_code LIKE '%OUT%' || message_reason_code LIKE '%FAILED%' /*we can add date here*/ UNION ALL SELECT t2.transaction_pci_details_id, t2.terminal_id, t2.transaction_no, t2.transaction_id, t2.transaction_type, t2.reversal_flag, t2.transmission_date_time, t2.retrivel_ref_no, t2.card_no, t2.card_type, t2.expires_on, t2.transaction_amount, t2.currency_code, t2.response_code, t2.action_code, t2.message_reason_code, t2.merchant_id, t2.auth_code, t2.actual_trans_amnt, t2.bal_card_amnt FROM ( SELECT transaction_id FROM TRANSACTION WHERE transaction_type_id = 8 ) AS t1 INNER JOIN ( SELECT * FROM transaction_pci_details WHERE message_reason_code LIKE '%appro%' /*we can add date here*/ ) AS t2 ON t1.transaction_id = t2.transaction_id ) AS t3 INNER JOIN ( SELECT * FROM transaction_pci_details WHERE action_code LIKE '%REQ%' /*we can add date here*/ ) AS t4 ON t3.transaction_pci_details_id - t4.transaction_pci_details_id = 1 ) AS t6 ON t5.transaction_id = t6.transaction_id

    Read the article

< Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >