Search Results

Search found 2359 results on 95 pages for 'transaction'.

Page 10/95 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • Load Testing Java Web Application - find TPS / Avg transaction response time

    - by Steve
    I would like to build my own load-testing tool in Java, with the goal of being able to load test a web application I am building throughout the development cycle. The web application will be receiving server-to-server HTTP POST requests, and I would like to find its starting transactions-per-second (TPS) capacity along with the average response time. The POST request and response messages will be in XML (I don't think that's really applicable, though :) ). I have written a very simple Java app to send transactions and count how many transactions it was able to send in one second (1000 ms), however I don't think this is the best way to load test. Really what I want is to send any number of transactions at exactly the same time - i.e. 10, 50, 100 etc. Any help would be appreciated! Oh and here is my current test app code:

        Thread[] t = new Thread[1];
        for (int a = 0; a < t.length; a++) {
            t[a] = new Thread(new MessageLoop());
        }
        startTime = System.currentTimeMillis();
        System.out.println(startTime);
        for (int a = 0; a < t.length; a++) {
            t[a].start();
        }
        while ((System.currentTimeMillis() - startTime) < 1000) { }
        if ((System.currentTimeMillis() - startTime) > 1000) {
            for (int a = 0; a < t.length; a++) {
                t[a].interrupt();
            }
        }
        long endTime = System.currentTimeMillis();
        System.out.println(endTime);
        System.out.println("Total time: " + (endTime - startTime));
        System.out.println("Total transactions: " + count);

        private static class MessageLoop implements Runnable {
            public void run() {
                try {
                    // Test number of transactions
                    while ((System.currentTimeMillis() - startTime) < 1000) {
                        // SEND TRANSACTION HERE
                        count++;
                    }
                } catch (Exception e) { }
            }
        }
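    One way to fire a fixed number of requests at exactly the same instant is a CountDownLatch "start gate". The sketch below is only illustrative and is not taken from the question; sendTransaction() is a hypothetical stand-in for the actual HTTP POST of the XML payload. Running it at increasing concurrency levels (10, 50, 100) and watching where the average latency starts to climb gives a rough idea of the starting TPS capacity.

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.atomic.AtomicLong;

        // Sketch: release N sender threads at the same moment, then derive TPS and
        // average response time from the aggregate timings.
        public class BurstLoadTest {

            public static void main(String[] args) throws InterruptedException {
                final int concurrency = 50;                         // e.g. 10, 50, 100 ...
                final CountDownLatch startGate = new CountDownLatch(1);
                final CountDownLatch doneGate = new CountDownLatch(concurrency);
                final AtomicLong totalLatencyMs = new AtomicLong();

                ExecutorService pool = Executors.newFixedThreadPool(concurrency);
                for (int i = 0; i < concurrency; i++) {
                    pool.submit(() -> {
                        try {
                            startGate.await();                      // every thread blocks here
                            long t0 = System.currentTimeMillis();
                            sendTransaction();                      // hypothetical HTTP POST
                            totalLatencyMs.addAndGet(System.currentTimeMillis() - t0);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        } finally {
                            doneGate.countDown();
                        }
                    });
                }

                long wallStart = System.currentTimeMillis();
                startGate.countDown();                              // release all threads at once
                doneGate.await();                                   // wait for every response
                long wallMs = System.currentTimeMillis() - wallStart;

                System.out.println("TPS ~ " + (concurrency * 1000.0 / wallMs));
                System.out.println("Avg response time (ms): "
                        + (totalLatencyMs.get() / (double) concurrency));
                pool.shutdown();
            }

            private static void sendTransaction() {
                // placeholder for the real server-to-server POST
            }
        }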

    Read the article

  • Strange behaviour of Spring transaction support for JPA + Hibernate + @Transactional annotation

    - by abovesun
    I found really strange behaviour in a relatively simple use case. Probably I can't understand it because my knowledge of Spring's @Transactional nature is not deep, but this is quite interesting. I have a simple User DAO that extends Spring's JpaDaoSupport class and contains a standard save method:

        @Transactional
        public User save(User user) {
            getJpaTemplate().persist(user);
            return user;
        }

    It was working fine until I added a new method to the same class: User createSuperUser(). This method should return the user with isAdmin == true, and if there is no super user in the DB, the method should create one. This is what it looked like:

        public User createSuperUser() {
            User admin = null;
            try {
                admin = (User) getJpaTemplate().execute(new JpaCallback() {
                    public Object doInJpa(EntityManager em) throws PersistenceException {
                        return em.createQuery("select u from UserImpl u where u.admin = true").getSingleResult();
                    }
                });
            } catch (EmptyResultDataAccessException ex) {
                admin = new User("login", "password");
                admin.setAdmin(true);
                save(admin); // THIS IS THE POINT WHERE THE STRANGE THING HAPPENS
            }
            return admin;
        }

    As you can see, the code is straightforward, and I was very confused to find that no transaction was created and committed on the invocation of save(admin), and that no new user was actually created despite the @Transactional annotation. So we have this situation: when save() is invoked from outside the UserDAO class, the @Transactional annotation is honoured and the user is created successfully; but when save() is invoked from inside another method of the same DAO class, the @Transactional annotation is ignored. Here is how I changed the save() method to force it to always create a transaction:

        public User save(User user) {
            getJpaTemplate().execute(new JpaCallback() {
                public Object doInJpa(EntityManager em) throws PersistenceException {
                    em.getTransaction().begin();
                    em.persist(user);
                    em.getTransaction().commit();
                    return null;
                }
            });
            return user;
        }

    As you can see, I manually invoke begin and commit. Any ideas?
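    This is the usual limitation of Spring's proxy-based @Transactional support: a call to save() from another method of the same bean never passes through the transactional proxy, so no transaction is started. Rather than beginning and committing the EntityManager transaction by hand, one hedged alternative (a sketch only, assuming Spring 3.x with a PlatformTransactionManager in the context; not code from the question) is to run the inner persist through a TransactionTemplate:

        import org.springframework.orm.jpa.support.JpaDaoSupport;
        import org.springframework.transaction.PlatformTransactionManager;
        import org.springframework.transaction.TransactionStatus;
        import org.springframework.transaction.support.TransactionCallback;
        import org.springframework.transaction.support.TransactionTemplate;

        // Sketch: the nested save goes through an explicit TransactionTemplate, so the
        // transaction no longer depends on the call reaching the @Transactional proxy.
        // User is the question's own entity class.
        public class UserDao extends JpaDaoSupport {

            private final TransactionTemplate txTemplate;

            public UserDao(PlatformTransactionManager txManager) {
                this.txTemplate = new TransactionTemplate(txManager); // assumed to be wired by Spring
            }

            public User saveInNewTx(final User user) {
                return txTemplate.execute(new TransactionCallback<User>() {
                    public User doInTransaction(TransactionStatus status) {
                        getJpaTemplate().persist(user);
                        return user;
                    }
                });
            }
        }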

    Read the article

  • How to use a Transaction in Entity Framework?

    - by programmerist
    How can I use a transaction in Entity Framework? I read some links on Stack Overflow: http://stackoverflow.com/questions/815586/entity-framework-using-transactions-or-savechangesfalse-and-acceptallchanges But I have 3 tables, so I have 3 entities:

        CREATE TABLE Personel (
            PersonelID integer PRIMARY KEY identity not null,
            Ad varchar(30),
            Soyad varchar(30),
            Meslek varchar(100),
            DogumTarihi datetime,
            DogumYeri nvarchar(100),
            PirimToplami float);
        Go
        CREATE TABLE Prim (
            PrimID integer PRIMARY KEY identity not null,
            PersonelID integer Foreign KEY references Personel(PersonelID),
            SatisTutari int,
            Prim float,
            SatisTarihi Datetime);
        Go
        CREATE TABLE Finans (
            ID integer PRIMARY KEY identity not null,
            Tutar float);

    Personel, Prim and Finans are my tables. If you look at the Prim table you can see that Prim is a float column; if I enter a value in a textbox that is not a float, my transaction must run.

        using (TestEntities testCtx = new TestEntities())
        {
            using (TransactionScope scope = new TransactionScope())
            {
                // do something...
                testCtx.Personel.SaveChanges();
                // do something...
                testCtx.Prim.SaveChanges();
                // do something...
                testCtx.Finans.SaveChanges();
                scope.Complete();
                success = true;
            }
        }

    How can I do that?

    Read the article

  • Minimizing SQL transaction log file size on developer box running simple recovery model

    - by Anders Rask
    We have a lot of SQL Servers in our development environment where we never take backups of the databases (TFS for the code is enough). The (SharePoint) databases are all set to the simple recovery model, but the log files, especially for the SharePoint configuration database, are growing quite large and filling up the data drive on the SQL Server. Since these log files are never used for anything, I would like advice on how to best minimize their size - or even disable them if possible. I'm not completely sure why the log files grow so large even under the simple recovery model (I checked for long-running transactions with DBCC OPENTRAN but found none). I guess the reason the log files are not being truncated is that we don't take any backups, and hence checkpoints aren't reached. The autogrowth for the log files is set to grow by 10%, restricted to 2 GB, so I guess that is why the checkpoint (70%) isn't reached here either. What would be the best strategy to keep the log files small (best case 0) without sacrificing performance (e.g. VLF fragmentation)?

    Read the article

  • Identifying mail account used in CRAM-MD5 transaction

    - by ManiacZX
    I suppose this is one of those cases where the tool for identifying the problem is also the tool used for taking advantage of it. I have a mail server through which I am seeing spam being sent. It is not an open relay; the messages in question are being sent by someone authenticating to SMTP with CRAM-MD5. However, the logs only capture the actual data passed, which has been hashed, so I cannot see what user account is being used. My suspicion is a simple username/password combo, or that a user account's password has otherwise been compromised, but I cannot do much about it without knowing which user it is. Of course I can block the IP that is doing it, but that doesn't fix the real problem. I have both the CRAM-MD5 Base64 challenge string and the hashed client auth string containing the username, password and challenge string. I am looking for a way to either reverse this (which I haven't been able to find any information on), or otherwise I suppose I need a dictionary attack tool designed for CRAM-MD5: run through two lists, one for usernames and one for passwords, with the challenge string held constant, until it finds a result matching the authentication string I have logged. Any information on reversing using the data I have logged, a tool to identify the account, or any alternative methods you have used in this situation would be greatly appreciated.
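    The CRAM-MD5 response cannot be reversed, but a candidate username/password pair can be checked against a logged challenge/response, which is exactly the dictionary approach described above. A hedged sketch follows; the challenge and response values are the RFC 2195 test vector (user "tim", password "tanstaaftanstaaf"), not data from the question, and in a real run the candidates would come from word lists.

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        // Sketch: recompute base64(username + " " + hex(HMAC-MD5(password, challenge)))
        // and compare it with the logged client response.
        public class CramMd5Check {

            static String cramMd5Response(String username, String password, String challenge) throws Exception {
                Mac mac = Mac.getInstance("HmacMD5");
                mac.init(new SecretKeySpec(password.getBytes(StandardCharsets.US_ASCII), "HmacMD5"));
                byte[] digest = mac.doFinal(challenge.getBytes(StandardCharsets.US_ASCII));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                return Base64.getEncoder().encodeToString(
                        (username + " " + hex).getBytes(StandardCharsets.US_ASCII));
            }

            public static void main(String[] args) throws Exception {
                // Values as they would appear in the SMTP log (RFC 2195 example, used as a placeholder).
                String loggedChallenge = "PDE4OTYuNjk3MTcwOTUyQHBvc3RvZmZpY2UucmVzdG9uLm1jaS5uZXQ+";
                String loggedResponse  = "dGltIGI5MTNhNjAyYzdlZGE3YTQ5NWI0ZTZlNzMzNGQzODkw";
                String challenge = new String(Base64.getDecoder().decode(loggedChallenge), StandardCharsets.US_ASCII);

                // In a real dictionary run, loop these over the username and password lists.
                String candidateUser = "tim";
                String candidatePass = "tanstaaftanstaaf";

                if (cramMd5Response(candidateUser, candidatePass, challenge).equals(loggedResponse)) {
                    System.out.println("Match: " + candidateUser + " / " + candidatePass);
                } else {
                    System.out.println("No match");
                }
            }
        }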

    Read the article

  • How to repair a damaged transaction log file for Exchange 2003

    - by Markus Larsson
    Hi! Yesterday we had a power failure and the UPS did not work (it had worked perfectly before). Everything seemed to be OK when I started all the servers again, except for the mail: when I try to mount the store I get the following message: "The database files in this store are corrupted." Server: Exchange 2003 running on a Small Business Server. Latest full backup: one week old. Backup program: Backup Exec 9.0. This is what I have done:
    1. Copied every file in the MDBDATA folder (edb, stm, log).
    2. Ran Eseutil /d for priv1.edb.
    3. Ran Eseutil /p for priv1.edb (took seven hours).
    4. Ran Isinteg -fix -test alltests; at this point it breaks down. Isinteg fails with the following error: "Isinteg cannot initiate verification process. Please review the log file for more information." The problem is that no log file is created.
    5. Giving up on this route, I decided to do a restore from the backup. It fails with the error "Unable to read the header of logfile E00.log. Error -501", and the error "Information Store (5976) Callback function call ErrESECBRestoreComplete ended with error 0xC80001F5 The log file is damaged."
    My conclusion is that E00.log is damaged, so how can I repair it so that I can restore the database? Or should I give up and try some other route?

    Read the article

  • SQL Server Transaction Log RAID

    - by Eric Maibach
    We have three SQL Server machines, and each server has about five or six databases on it. We are in the process of moving these servers to a new SAN, and I am working out the best RAID configuration. Currently the log files for all of the databases share one RAID array; there is nothing else on that array except the log files, but all of the databases use this same array for their log files. I have read that it is best to have log files on separate disks. But in our case I am not sure whether it would be best to have one big array of about 8 drives holding all the log files, or whether it would be better to create four two-disk arrays and give some of the larger databases their own dedicated disks for their log files.

    Read the article

  • SQL SERVER – Concurrency Problems and their Relationship with Isolation Level

    - by pinaldave
    Concurrency is, simply put, the capability of the machine to support two or more transactions working with the same data at the same time. It usually comes up when data is being modified; plain retrieval of data is not the issue. Most concurrency problems can be avoided by SQL locks. There are four types of concurrency problems visible in normal programming.
    1) Lost Update – This problem occurs when two transactions are involved and both are unaware of each other. The transaction which occurs later overwrites the update made by the earlier transaction.
    2) Dirty Reads – This problem occurs when a transaction selects data that isn't committed by another transaction, leading to reading data which may not exist when the transactions are over. Example: Transaction 1 changes the row. Transaction 2 selects the changed row. Transaction 1 rolls back the changes. Transaction 2 has selected a row which does not exist.
    3) Nonrepeatable Reads – This problem occurs when two SELECT statements of the same data return different values, because another transaction has updated the data between the two SELECT statements. Example: Transaction 1 selects a row, which is later updated by Transaction 2. When Transaction 1 later selects the row again it gets a different value.
    4) Phantom Reads – This problem occurs when an UPDATE/DELETE is happening on one set of data while an INSERT/UPDATE is happening on the same set of data, leaving the earlier transaction with inconsistent data when both transactions are over. Example: Transaction 1 is deleting 10 rows which are marked as rows to delete, while at the same time Transaction 2 inserts a row marked as deleted. When Transaction 1 is done deleting rows, there will still be rows marked to be deleted.
    When two or more transactions are updating the data, concurrency is the biggest issue. I commonly see people toying around with the isolation level or locking hints (e.g. NOLOCK), which can very well compromise your data integrity, leading to much larger issues in the future. Here is a quick mapping of the isolation levels to the concurrency problems:

        Isolation Level     Dirty Reads   Lost Update   Nonrepeatable Reads   Phantom Reads
        Read Uncommitted    Yes           Yes           Yes                   Yes
        Read Committed      No            Yes           Yes                   Yes
        Repeatable Read     No            No            No                    Yes
        Snapshot            No            No            No                    No
        Serializable        No            No            No                    No

    I hope this short article gives some quick understanding of concurrency issues and their relation to isolation levels. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
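    For readers selecting one of these levels from application code, the names in the table map onto the constants of java.sql.Connection. A minimal, hedged JDBC sketch (the connection string and credentials are placeholders, not from the article):

        import java.sql.Connection;
        import java.sql.DriverManager;

        // Sketch: pick the isolation level from the mapping above for one transaction.
        public class IsolationDemo {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=Demo", "user", "password")) {
                    con.setAutoCommit(false);
                    // REPEATABLE READ blocks dirty reads, lost updates and nonrepeatable reads,
                    // but still allows phantom reads, per the table above.
                    con.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
                    // ... run the transaction's statements here ...
                    con.commit();
                }
            }
        }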

    Read the article

  • Entity Framework without Transaction?

    - by Sue
    Is there a way to use EF without a transaction? I have a very simple single insert and don't want to roll back when something goes wrong, as there may be a trigger that logs and then raises an error on the DB side, which I have no control over. I just want to insert and then catch any exceptions, but I don't want a rollback.

    Read the article

  • mysql innodb max size of transaction

    - by chris
    Using MySQL 5.1.41 and InnoDB, I'm doing some data import, but I can't use LOAD DATA INFILE, so I'm manually issuing INSERT statements. I found that it's much faster to disable autocommit and issue, say, 100 INSERT statements and then commit, instead of taking the implicit commit after each insert. It got me thinking: what limits are there on how much data I can put into a transaction? Is there a limit on the number of statements, or does it have to do with the size in bytes, etc.?
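    For reference, the batching pattern described above (autocommit off, commit every N inserts) looks roughly like the following in JDBC. This is only a sketch; the table import_data(id, payload) and the connection settings are invented for the illustration, and it does not answer the question of InnoDB's actual per-transaction limits.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        // Sketch: commit every batchSize rows instead of relying on the implicit commit per INSERT.
        public class BatchedImport {
            public static void main(String[] args) throws Exception {
                final int batchSize = 100;
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/test", "user", "password");
                     PreparedStatement ps = con.prepareStatement(
                             "INSERT INTO import_data (id, payload) VALUES (?, ?)")) {
                    con.setAutoCommit(false);
                    for (int i = 0; i < 10000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "row " + i);
                        ps.addBatch();
                        if ((i + 1) % batchSize == 0) {
                            ps.executeBatch();   // send the pending inserts
                            con.commit();        // keep each transaction reasonably small
                        }
                    }
                    ps.executeBatch();           // flush the final partial batch
                    con.commit();
                }
            }
        }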

    Read the article

  • How to detect a pending JDO transaction?

    - by Stevko
    I believe I am getting JDO commit exceptions because transactions are being nested, although I'm not sure. Will this detect the situation where I am starting a transaction while another is pending?

        PersistenceManager pm = PersistenceManagerFactory.get().getPersistenceManager();
        assert pm.currentTransaction().isActive() == false : "arrrgh";
        pm.currentTransaction().begin();

    Is there a better or more reliable way?
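    The isActive() check itself is the right probe, but an assert only fires when the JVM runs with -ea. A hedged sketch (not from the question) that makes the detection unconditional:

        import javax.jdo.PersistenceManager;
        import javax.jdo.Transaction;

        // Sketch: fail fast when a transaction is already pending on this PersistenceManager,
        // instead of relying on the assert keyword being enabled.
        public final class TxGuard {

            private TxGuard() { }

            public static Transaction beginFresh(PersistenceManager pm) {
                Transaction tx = pm.currentTransaction();
                if (tx.isActive()) {
                    throw new IllegalStateException("A transaction is already pending on " + pm);
                }
                tx.begin();
                return tx;
            }
        }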

    Read the article

  • Transaction Management

    - by Senthilnathan
    Hi all, in my code I am updating two tables, one after the other:

        update(table1_details);
        update(table2_details);

    So if the update on table1 fails, table2 should not be updated, or the update should be rolled back. How do I handle this situation? I know I have to use a transaction. Can someone help with code? I am using Java with Spring and Hibernate.
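    A minimal sketch of the usual Spring answer, assuming the two updates live in a Spring-managed bean with a configured transaction manager; the DAO interfaces and names below are invented for illustration, not from the question:

        import org.springframework.transaction.annotation.Transactional;

        // Sketch: both updates run in one transaction; an unchecked exception from either
        // one rolls back the whole unit of work (requires <tx:annotation-driven/> or
        // @EnableTransactionManagement plus a PlatformTransactionManager bean).
        public class DetailsService {

            public interface Table1Dao { void update(Object table1Details); }   // placeholder DAOs
            public interface Table2Dao { void update(Object table2Details); }

            private final Table1Dao table1Dao;
            private final Table2Dao table2Dao;

            public DetailsService(Table1Dao table1Dao, Table2Dao table2Dao) {
                this.table1Dao = table1Dao;
                this.table2Dao = table2Dao;
            }

            @Transactional
            public void updateBoth(Object table1Details, Object table2Details) {
                table1Dao.update(table1Details);
                table2Dao.update(table2Details);   // if this throws, the first update is rolled back too
            }
        }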

    Read the article

  • How to set the transaction isolation level of a Postgres data source

    - by Michael Wiles
    How do I set the global transaction isolation level for a Postgres data source? I'm running on JBoss and I'm using Hibernate to connect. I know that I can set the isolation level from Hibernate; does this work for Postgres? This would be done by setting the hibernate.connection.isolation property to 1, 2, 4 or 8 - the values of the relevant static fields on java.sql.Connection. I'm using org.postgresql.xa.PGXADataSource.
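    For reference, a hedged sketch of setting that property programmatically. Whether it takes effect when connections come from a JBoss-managed XA datasource rather than from Hibernate's own connection provider is exactly the open question, so treat this as illustrative only:

        import java.sql.Connection;
        import org.hibernate.cfg.Configuration;

        // Sketch: hibernate.connection.isolation takes the numeric values of the
        // java.sql.Connection isolation constants (1, 2, 4, 8).
        public class HibernateIsolationConfig {
            public static void main(String[] args) {
                Configuration cfg = new Configuration();
                cfg.setProperty("hibernate.connection.isolation",
                        String.valueOf(Connection.TRANSACTION_READ_COMMITTED));   // = 2
                // ... add mappings and other properties, then build the SessionFactory ...
            }
        }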

    Read the article

  • The PROMOTE TRANSACTION request failed because there is no local transaction active.

    - by Mark J Miller
    Under what circumstances would I see the above message? I have a single call to SQL Server which is wrapped in a TransactionScope. In our development and QA environments MSDTC is turned off and the call succeeds fine. However, in our production environment, with MSDTC turned on, this call fails. Is there something that would cause this when I am sure we are not looking at a distributed transaction call at all?

    Read the article

  • Nested Transaction issues within custom Windows Service

    - by pdwetz
    I have a custom Windows Service I recently upgraded to use TransactionScope for nested transactions. It worked fine locally on my old dev machine (XP SP3) and on a test server (Server 2003). However, it fails on my new Windows 7 machine as well as on Server 2008. It was targeting the 2.0 framework; I tried targeting 3.5 instead, but it still fails. The strange part is really in how it fails: no exception is thrown; the service merely times out. I added tracing code, and it fails when opening the connection for database lookup #2 below. I also enabled tracing for System.Transactions; it literally cuts out partway through writing the block for the failed connection. We ran a SQL trace, but only the first lookup shows up. I put in code traces, and it gets to the trace line before the second lookup, but nothing after. I've had the same experience hitting two different SQL Servers (both SQL 2005 running on Server 2003). The connection string uses a SQL account (not Windows integrated security). All connections are against the same database in this case, but given the nature of the code the transaction is being escalated to MSDTC. Here's the basic code structure (the inner variables are renamed here to avoid shadowing the outer ones):

        TransactionOptions options = new TransactionOptions();
        options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
        using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew, options))
        {
            // Database lookup #1

            TransactionOptions innerOptions = new TransactionOptions();
            innerOptions.IsolationLevel = Transaction.Current != null
                ? Transaction.Current.IsolationLevel
                : System.Transactions.IsolationLevel.ReadCommitted;
            using (TransactionScope innerScope = new TransactionScope(TransactionScopeOption.Required, innerOptions))
            {
                // Database lookup #2; fails on connection.Open()
                // Database save (never reached)
                innerScope.Complete();
            }
            scope.Complete();
        }

    My local firewall is disabled. The service normally runs as Network Service, but I also tried my user account (same results). The short of it is that I use the same general technique widely in my web applications and haven't had any issues. I pulled the code out and ran it fine within a local Windows Forms application. If anyone has any additional debugging ideas (or, even better, solutions) I'd love to hear them.

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >