Search Results

Search found 2636 results on 106 pages for 'transaction isolation'.

  • multithreading with database

    - by Darsin
    I am looking for a strategy to use multithreading (probably asynchronous delegates) to parallelize an operation that is currently synchronous. I am new to multithreading, so I will outline my scenario first. Right now the operation is done for one set of data (a portfolio) based on the parameters provided. The (pseudo-code) implementation is given below:

        public DataSet DoTests(int fundId, DateTime portfolioDate)
        {
            // Get test results for the portfolio.
            // Calls the database adapter method, which in turn is a stored procedure,
            // which in turn runs a series of "rule" stored procs, fills a local temp
            // table and returns it back.
            DataSet resultsDataSet = GetTestResults(fundId, portfolioDate);
            try
            {
                // Do some local processing on the results.
                DoSomeProcessing(resultsDataSet);

                // Save the results in the Test, TestResults and TestAllocations tables
                // in a transaction. Sets a global transaction which is provided to all
                // the adapter methods called below. It is defined in the base class.
                StartTransaction("TestTransaction");

                // Save Test and get a testId (adapter method, uses the same transaction).
                int testId = UpdateTest(resultsDataSet);

                // Update testId in the other tables in the dataset.
                UpdateTestId(resultsDataSet, testId);

                // Update TestResults (adapter method, uses the same transaction).
                UpdateTestResults(resultsDataSet);

                // Update TestAllocations (adapter method, uses the same transaction).
                UpdateTestAllocations(resultsDataSet);

                // It is defined in the base class.
                CommitTransaction("TestTransaction");
            }
            catch
            {
                RollbackTransaction("TestTransaction");
            }
            return resultsDataSet;
        }

    Now the requirement is to do this for multiple sets of data. One way would be to call the DoTests() method in a loop and get the data; I would prefer doing it in parallel. But there are certain catches: StartTransaction() creates a connection (and transaction) every time it is called, and all the underlying database tables and procedures are the same for each call of DoTests() (obviously). Thus my questions are: Will using multithreading improve performance at all? What are the chances of deadlock, especially when new testIds are being created and the Test, TestResults and TestAllocations rows are being saved? How can these deadlocks be handled? Is there any more efficient way of doing the above operation apart from looping over DoTests() repeatedly?
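
    On the last question, one set-based alternative is sketched below. This is a hedged sketch only, assuming SQL Server 2008+ (for table-valued parameters); the type, procedure and column names are hypothetical stand-ins, not the poster's actual schema:

        -- Hypothetical sketch: pass all (fundId, portfolioDate) pairs in one call
        -- and let a single stored procedure process the whole batch in one transaction.
        CREATE TYPE dbo.PortfolioKeyList AS TABLE (FundId int, PortfolioDate date);
        GO
        CREATE PROCEDURE dbo.DoTestsBatch
            @Portfolios dbo.PortfolioKeyList READONLY
        AS
        BEGIN
            SET XACT_ABORT ON;  -- any error rolls the whole batch back
            BEGIN TRANSACTION;

            INSERT INTO dbo.Test (FundId, PortfolioDate)
            SELECT FundId, PortfolioDate
            FROM @Portfolios;

            -- ... populate TestResults and TestAllocations from the new Test rows ...

            COMMIT TRANSACTION;
        END

    Doing the whole batch inside the database sidesteps both the per-call connection cost and most of the deadlock exposure, since there is only one writer.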

  • customer.name joining transactions.name vs. customer.id [serial] joining transactions.id [integer]

    - by Frank Computer
    INFORMIX-SQL 7.32 pawnshop application: a one-to-many relationship where each customer (master) can have many transactions (detail).

        customer(
            id serial,
            pk_name char(30), { PATERNAL-NAME MATERNAL-NAME, FIRST-NAME MIDDLE-NAME }
            [...]
        );
        unique index on id; unique cluster index on name;

        transaction(
            fk_name char(30),
            ticket_number serial,
            [...]
        );
        dups cluster index on fk_name; unique index on ticket_number;

    Several people have told me this is not the correct way to join master to detail. They said I should always join customer.id [serial] to transactions.id [integer]. When a customer pawns merchandise, the clerk queries the master using wildcards on name. The query usually returns several customers; the clerk scrolls until locating the right name, enters 'D' to change to the detail transactions table, all transactions are automatically queried, and the clerk then enters 'A' to add a new transaction. The problem with joining customer.id to transaction.id is that although the customer table is maintained in sorted name order, clustering the transaction table by fk_id groups the transactions by fk_id, but not in the same order as the customer names. So when the clerk is scrolling through customer names in the master, the system has to jump all over the place to locate the clustered transactions belonging to each customer. As each new customer is added, the next id is assigned to that customer, but new customers don't show up in alphabetical order. I experimented with id joins and confirmed the decrease in performance. How can I use id joins instead of name joins and still preserve the clustered transaction order by name if transactions has no name column?
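
    For reference, a minimal sketch of an id-based join that still presents details in name order. It assumes (hypothetically) that fk_name is replaced by an integer fk_id column; the wildcard value is illustrative:

        { Hypothetical: join on the surrogate key, but ORDER BY the master's
          name so the clerk still sees each customer's transactions grouped
          in name order, whatever the physical clustering is. }
        SELECT c.pk_name, t.ticket_number
        FROM customer c, transaction t
        WHERE t.fk_id = c.id
          AND c.pk_name MATCHES "GARCIA*"
        ORDER BY c.pk_name, t.ticket_number;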

  • Python: Does one of these examples waste more memory?

    - by orokusaki
    In a Django view function which uses manual transaction committing, I have:

        context = RequestContext(request, data)
        transaction.commit()
        # render_to_response() returns a Django ``HttpResponse`` object,
        # which is similar to a dictionary.
        return render_to_response('basic.html', data, context)

    I think it is a better idea to do this:

        context = RequestContext(request, data)
        response = render_to_response('basic.html', data, context)
        transaction.commit()
        return response

    If the page isn't rendered correctly in the second version, the transaction is rolled back. This seems like the logical way of doing it, albeit there likely won't be many exceptions at that point in the function once the application is in production. But... I fear that this might cost more, and the pattern will be repeated through a number of functions since the application is heavy with custom transaction handling, so now is the time to figure it out. If the HttpResponse instance is in memory already (at the point of render_to_response()), then what does another reference cost? When the function ends, doesn't the reference (the response variable) go away, so that once Django has converted the HttpResponse into a string for output, Python can immediately garbage collect it? Is there any reason I would want to use the first version (other than "it's one less line of code")?

  • Row insertion order entity framework

    - by Wouter
    I'm using a transaction to insert multiple rows into multiple tables, and I would like these rows to be inserted in the order in which I add them. Upon calling SaveChanges, all the rows are inserted out of order. When not using a transaction and saving changes after each insertion, the order is kept, but I really need a transaction for all entries.

  • How to create an entity with a composite primary key containing a generated value.

    - by David
    Using Hibernate + annotations, I'm trying to do the following: two entities, Entity1 and Entity2. Entity1 contains a simple generated-value primary key. Entity2's primary key is composed of a simple generated value plus the id of Entity1 (with a many-to-one relationship). Unfortunately, I can't make it work. Here is an excerpt of the code:

        @Entity
        public class Entity1 {
            @Id @GeneratedValue
            private Long id;
            private String name;
            ...
        }

        @Entity
        public class Entity2 {
            @EmbeddedId
            private Entity2PK pk = new Entity2PK();
            private String miscData;
            ...
        }

        @Embeddable
        public class Entity2PK implements Serializable {
            @GeneratedValue
            private Long id;
            @ManyToOne
            private Entity1 entity;
        }

        void test() {
            Entity1 e1 = new Entity1();
            e1.setName("nameE1");
            Entity2 e2 = new Entity2();
            e2.setEntity1(e1);
            e2.setMiscData("test");
            Transaction transaction = session.getTransaction();
            try {
                transaction.begin();
                session.save(e1);
                session.save(e2);
                transaction.commit();
            } catch (Exception e) {
                transaction.rollback();
            } finally {
                session.close();
            }
        }

    When I run the test method I get the following errors:

        Hibernate: insert into Entity1 (id, name) values (null, ?)
        Hibernate: call identity()
        Hibernate: insert into Entity2 (miscData, entity_id, id) values (?, ?, ?)
        07-Jun-2010 10:51:11 org.hibernate.util.JDBCExceptionReporter logExceptions
        WARNING: SQL Error: 0, SQLState: null
        07-Jun-2010 10:51:11 org.hibernate.util.JDBCExceptionReporter logExceptions
        SEVERE: failed batch
        07-Jun-2010 10:51:11 org.hibernate.event.def.AbstractFlushingEventListener performExecutions
        SEVERE: Could not synchronize database state with session
        org.hibernate.exception.GenericJDBCException: Could not execute JDBC batch update
            at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:103)
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:91)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
            at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:254)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1001)
            at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:339)
            at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
            at test.App.main(App.java:32)
        Caused by: java.sql.BatchUpdateException: failed batch
            at org.hsqldb.jdbc.jdbcStatement.executeBatch(Unknown Source)
            at org.hsqldb.jdbc.jdbcPreparedStatement.executeBatch(Unknown Source)
            at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:48)
            at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:247)
            ... 8 more

    Note that I use HSQLDB. Any ideas about what is wrong?

  • Threaded Django task doesn't automatically handle transactions or db connections?

    - by Gabriel Hurley
    I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql "Idle In Transaction"). I looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). I tried using the various transaction decorators on my functions, no luck. I switched to manual transaction management and did the rollback manually, that worked, but still left the processes as "Idle". So then I called connection.close(), and all is well. But I'm left wondering, why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread?
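
    For anyone chasing the same symptom, a diagnostic sketch against Postgres (column names are from pg_stat_activity on PostgreSQL 9.2+; on the older servers current when this was posted, the marker was current_query = '<IDLE> in transaction' and the pid column was procpid):

        -- Hypothetical diagnostic: list backends sitting idle inside an open transaction.
        SELECT pid, usename, query_start
        FROM pg_stat_activity
        WHERE state = 'idle in transaction';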

  • Multiple calculations on the same set of data: ruby or database?

    - by Pierre
    Hi, I have a model Transaction for which I need to display the results of many calculations on many fields for a subset of transactions. I've seen 2 ways to do it, but am not sure which is the best. I'm after the one that will have the least impact in terms of performance when the data set grows and the number of concurrent users increases.

        data[:total_before] = Transaction.where(xxx).sum(:amount_before)
        data[:total_after]  = Transaction.where(xxx).sum(:amount_after)
        ...

    or

        transactions = Transaction.where(xxx)
        data[:total_before] = transactions.inject(0) { |s, e| s + e.amount_before }
        data[:total_after]  = transactions.inject(0) { |s, e| s + e.amount_after }
        ...

    Which one should I choose? (Or is there a 3rd, better way?) Thanks, P.
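
    One third way, sketched below under the assumption that the model maps to a transactions table and that xxx stands for an arbitrary filter, is to have the database compute every aggregate in a single pass:

        -- Hypothetical: one table scan computes all the totals at once,
        -- instead of one query (or one in-memory loop) per total.
        SELECT SUM(amount_before) AS total_before,
               SUM(amount_after)  AS total_after
        FROM transactions
        WHERE created_at >= DATE '2010-01-01';  -- stand-in for the xxx conditions

    This keeps a single round trip and avoids loading every row into application memory as the data set grows.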

  • Application not releasing database connection Spring.net + NHibernate

    - by anupam3m
    Even after a successful transaction, the application's connection with the database persists. The NHibernate log shows:

        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null)> - executing flush
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] <(null)> - registering flush begin
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] <(null)> - registering flush end
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null)> - post flush
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null)> - before transaction completion
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] <(null)> - aggressively releasing database connection
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Connection.ConnectionProvider [(null)] <(null)> - Closing connection
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null)> - transaction completion
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Transaction.AdoTransaction [(null)] <(null)> - running AdoTransaction.Dispose()
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null)> - closing session
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.BatcherImpl [(null)] <(null)> - running BatcherImpl.Dispose(true)

    Underneath is my data configuration file: -- Risco.Rsp.Ac.RMAC.Mapping Risco.Rsp.Ac.Logging.Appenders -- Please help me out with this issue. Thanks

  • How do I combine similar method calls into a delegate pattern?

    - by Daniel T.
    I have three methods:

        public void Save<T>(T entity) {
            using (new Transaction()) {
                Session.Save(entity);
            }
        }

        public void Create<T>(T entity) {
            using (new Transaction()) {
                Session.Create(entity);
            }
        }

        public void Delete<T>(T entity) {
            using (new Transaction()) {
                Session.Delete(entity);
            }
        }

    As you can see, the only thing that differs is the method call inside the using block. How can I rewrite this so it's something like this instead:

        public void Save<T>(T entity) {
            TransactionWrapper(Session.Save(entity));
        }

        public void Create<T>(T entity) {
            TransactionWrapper(Session.Create(entity));
        }

        public void Delete<T>(T entity) {
            TransactionWrapper(Session.Delete(entity));
        }

    In other words, I pass a method call as a parameter, and the TransactionWrapper method wraps a transaction around that call.

  • Jet Database (ms access) ExecuteNonQuery - Can I make it faster?

    - by bluebill
    Hi all, I have this generic routine that I wrote that takes a list of SQL strings and executes them against the database. Is there any way I can make this work faster? Typically it'll see maybe 200 inserts or deletes or updates at a time. Sometimes there is a mixture of updates, inserts and deletes. Would it be a good idea to separate the queries by type (i.e. group inserts together, then updates and then deletes)? I am running this against an MS Access database and using VB.NET 2005.

        Public Function ExecuteNonQuery(ByVal sql As List(Of String), ByVal dbConnection As String) As Integer
            If sql Is Nothing OrElse sql.Count = 0 Then Return 0

            Dim recordCount As Integer = 0
            Using connection As New OleDb.OleDbConnection(dbConnection)
                connection.Open()
                Dim transaction As OleDb.OleDbTransaction = connection.BeginTransaction()
                'Using cmd As New OleDb.OleDbCommand()
                Using cmd As OleDb.OleDbCommand = connection.CreateCommand
                    cmd.Connection = connection
                    cmd.Transaction = transaction
                    For Each s As String In sql
                        If Not String.IsNullOrEmpty(s) Then
                            cmd.CommandText = s
                            recordCount += cmd.ExecuteNonQuery()
                        End If
                    Next
                    transaction.Commit()
                End Using
            End Using
            Return recordCount
        End Function

  • Question about Transact SQL syntax

    - by Yousui
    Hi guys, The following code works like a charm:

        BEGIN TRY
            BEGIN TRANSACTION
            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK;
            DECLARE @ErrorMessage NVARCHAR(4000), @ErrorSeverity int;
            SELECT @ErrorMessage = ERROR_MESSAGE(), @ErrorSeverity = ERROR_SEVERITY();
            RAISERROR(@ErrorMessage, @ErrorSeverity, 1);
        END CATCH

    But this code gives an error:

        BEGIN TRY
            BEGIN TRANSACTION
            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK;
            RAISERROR(ERROR_MESSAGE(), ERROR_SEVERITY(), 1);
        END CATCH

    Why?

  • Is there an equivalent to RSpec's before(:all) in MiniTest?

    - by bergyman
    Since MiniTest now seems to have replaced Test::Unit in 1.9.1, I can't seem to find an equivalent to this. There ARE times when you really just want a method to run once for the whole suite of tests. For now I've resorted to some lovely hackery along the lines of:

        class ParseStandardWindTest < MiniTest::Unit::TestCase
          @@reader ||= PolicyDataReader.new(Time.now)
          @@data ||= @@reader.parse

          def test_stuff
            transaction = @@data[:transaction]
            assert true, transaction
          end
        end

  • how to choose which row to insert with same id in sql?

    - by user1429595
    So basically I have a table called "table_1":

        ID  Index  STATUS    TIME  DESCRIPTION
        1   15     pending   1:00  Started Pending
        1   16     pending   1:05  still in request
        1   17     pending   1:10  still in request
        1   18     complete  1:20  Transaction has been completed
        2   19     pending   2:25  request has been started
        2   20     pending   2:30  in progress
        2   21     pending   2:35  in progess still
        2   22     pending   2:40  still pending
        2   23     complete  2:45  Transaction Compeleted

    I need to insert these data into my second table, "table_2", where only the start and complete times are included, so "table_2" should look like this:

        ID  Index  STATUS    TIME  DESCRIPTION
        1   15     pending   1:00  Started Pending
        1   18     complete  1:20  Transaction has been completed
        2   19     pending   2:25  request has been started
        2   23     complete  2:45  Transaction Compeleted

    If anyone can help me write the SQL query for this I would highly appreciate it. Thanks in advance.
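
    One hedged sketch of such a query (SQL Server syntax; it assumes [Index] orders the rows within each ID, and that exactly the first row and the complete row per ID are wanted):

        -- Hypothetical: keep the earliest row and the completing row for each ID.
        INSERT INTO table_2 (ID, [Index], STATUS, [TIME], DESCRIPTION)
        SELECT ID, [Index], STATUS, [TIME], DESCRIPTION
        FROM (
            SELECT t.*,
                   ROW_NUMBER() OVER (PARTITION BY ID ORDER BY [Index]) AS rn
            FROM table_1 t
        ) x
        WHERE x.rn = 1                 -- first (start) row per ID
           OR x.STATUS = 'complete';   -- completion row per ID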

  • SQL Query Theory Question...

    - by Keng
    I have a large historical transaction table (15-20 million rows, MANY columns) and a table with one row and one column. The one-row table contains a date (the last processing date) which will be used to pull the data in the transaction table ('process_date'). Question: should I inner join the 'process_date' table to the transaction table, or the transaction table to the 'process_date' table?
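
    For what it's worth, the two phrasings describe the same inner join; a sketch with hypothetical table and column names standing in for the ones described:

        -- Hypothetical names: an inner join returns the same rows whichever
        -- table is written first; the optimizer picks the physical join order.
        SELECT t.*
        FROM transactions t
        INNER JOIN process_date p ON t.process_date = p.process_date;

        SELECT t.*
        FROM process_date p
        INNER JOIN transactions t ON t.process_date = p.process_date;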

  • SQL Server error handling pattern

    - by Patrick Honorez
    Hi all. I am not an expert on SQL Server. Is this a valid pattern for handling errors in a batch of SELECT, INSERT... in SQL Server? (I use v.2008)

        BEGIN TRANSACTION
        BEGIN TRY
            -- statement 1
            -- statement 2
            -- statement 3
            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            ROLLBACK TRANSACTION
        END CATCH

    Thanks

  • How to SUM columns on multiple conditions in a GROUP BY

    - by David Liddle
    I am trying to return a list of Accounts with their Balances, Outcome and Income:

        Account        Transaction
        -------        -----------
        AccountID      TransactionID
        BankName       AccountID
        Locale         Amount
        Status

    Here is what I currently have. Could someone explain where I am going wrong?

        select a.ACCOUNT_ID, a.BANK_NAME, a.LOCALE, a.STATUS,
               sum(t1.AMOUNT) as BALANCE,
               sum(t2.AMOUNT) as OUTCOME,
               sum(t3.AMOUNT) as INCOME
        from ACCOUNT a
        left join TRANSACTION t1 on t1.ACCOUNT_ID = a.ACCOUNT_ID
        left join TRANSACTION t2 on t1.ACCOUNT_ID = a.ACCOUNT_ID and t2.AMOUNT < 0
        left join TRANSACTION t3 on t3.ACCOUNT_ID = a.ACCOUNT_ID and t3.AMOUNT > 0
        group by a.ACCOUNT_ID, a.BANK_NAME, a.LOCALE, a.[STATUS]
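
    A common way around the multiplying joins (each extra join against TRANSACTION inflates the other sums) is a single join with conditional aggregation. A sketch, keeping the column names above:

        -- Hypothetical fix: one join, with CASE deciding which bucket each
        -- amount lands in, so the three totals don't inflate each other.
        select a.ACCOUNT_ID, a.BANK_NAME, a.LOCALE, a.[STATUS],
               sum(t.AMOUNT) as BALANCE,
               sum(case when t.AMOUNT < 0 then t.AMOUNT else 0 end) as OUTCOME,
               sum(case when t.AMOUNT > 0 then t.AMOUNT else 0 end) as INCOME
        from ACCOUNT a
        left join TRANSACTION t on t.ACCOUNT_ID = a.ACCOUNT_ID
        group by a.ACCOUNT_ID, a.BANK_NAME, a.LOCALE, a.[STATUS]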

  • Assistance with CC Processing script

    - by JM4
    I am currently implementing a credit card processing script, most of it as provided by the merchant gateway. The code calls functions within a class and returns a string based on the response. The end PHP code I am using (details removed, of course) with example information is:

        <?php
        $gw = new gwapi;
        $gw->setLogin("username", "password");
        $gw->setBilling("John","Smith","Acme, Inc.","888","Suite 200", "Beverly Hills",
            "CA","77777","US","555-555-5555","555-555-5556","[email protected]",
            "www.example.com"); // "CA","90210","US","[email protected]");
        $gw->setOrder("1234","Big Order",1, 2, "PO1234","65.192.14.10");
        $r = $gw->doSale("1.00","4111111111111111","1010");
        print $gw->responses['responsetext'];
        ?>

    where setLogin allows me to log in, setBilling takes the sample consumer information, setOrder takes the order id and description, and doSale takes the amount charged, CC number and expiration date. When all the variables are validated and sent off for processing, a string is returned in the following format:

        response=1&responsetext=SUCCESS&authcode=123456&transactionid=23456&avsresponse=M&orderid=&type=sale&response_code=100

    where:

        response      = transaction approved or declined
        responsetext  = textual response
        authcode      = transaction authorization code
        transactionid = payment gateway tran id
        avsresponse   = avs response code
        orderid       = original order id passed in tran request
        response_code = numeric mapping of processor response

    I am trying to solve for the following: How do I take the data which is passed back and display it appropriately on the page? If the transaction failed, the AVS code doesn't match my liking, or something else is wrong, an error is displayed to the consumer; if the transaction processed, they are taken to a completion page and the transaction id is sent in SESSION as output to the consumer. If the response_code value matches a table of values, certain actions are taken, e.g. if code = 100, take them to the success page; if code = 300, print the specific error on the original page to the customer; etc.

  • best practices to delete a set of tables in sql 2008

    - by Hari
    Basically I want to keep the transaction very simple, but I should be able to roll back if there is any error in the later part. Something like the below:

        BEGIN TRANSACTION
        DELETE SET 1 (this will delete the first set of tables)
        COMMIT
        DELETE SET 2 (will delete the second set of tables)

    If any error occurs while deleting set 2, I should be able to roll back the set 1 transaction as well. Let me know if we have any options to do something like this. Appreciate your help.
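
    If set 1 must remain revocable until set 2 succeeds, the two sets have to live in one transaction; once set 1 is committed it cannot be rolled back. A sketch of the usual TRY/CATCH shape (table names are hypothetical):

        -- Hypothetical: both sets inside one transaction, so a failure in
        -- set 2 rolls back set 1 as well.
        BEGIN TRY
            BEGIN TRANSACTION;
                DELETE FROM dbo.Set1TableA;   -- first set of tables
                DELETE FROM dbo.Set1TableB;
                DELETE FROM dbo.Set2TableA;   -- second set of tables
                DELETE FROM dbo.Set2TableB;
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        END CATCH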

  • NHibernate - Is ITransaction.Commit really necessary?

    - by user365383
    Hi, I've just started studying NHibernate 2 days ago, and I'm looking at a CRUD method that I've written based on a tutorial. My insert method is:

        using (ISession session = Contexto.OpenSession())
        using (ITransaction transaction = session.BeginTransaction())
        {
            session.Save(noticia);
            transaction.Commit();
            session.Close();
        }

    The complete code of "Contexto" is here: http://codepaste.net/mrnoo5

    My question is: do I really need to use ITransaction transaction = session.BeginTransaction() and transaction.Commit()? I'm asking this because I've tested running the web app without those two lines, and I've successfully inserted new records. If possible, can someone also explain to me the purpose of ITransaction and the Commit method? Thanks

  • Form, function and complexity in rule processing

    - by Charles Young
    Tim Bass posted on ‘Orwellian Event Processing’. I was involved in a heated exchange in the comments, and he has more recently published a post entitled ‘Disadvantages of Rule-Based Systems (Part 1)’. Whatever the rights and wrongs of our exchange, it clearly failed to generate any agreement or understanding of our different positions. I don't particularly want to promote further argument of that kind, but I do want to take the opportunity of offering a different perspective on rule-processing and an explanation of my comments. For me, the ‘red rag’ lay in Tim’s claim that “...rules alone are highly inefficient for most classes of (not simple) problems” and a later paragraph that appears to equate the simplicity of form (‘IF-THEN-ELSE’) with simplicity of function.   It is not the first time Tim has expressed these views and not the first time I have responded to his assertions.   Indeed, Tim has a long history of commenting on the subject of complex event processing (CEP) and, less often, rule processing in ‘robust’ terms, often asserting that very many other people’s opinions on this subject are mistaken.   In turn, I am of the opinion that, certainly in terms of rule processing, which is an area in which I have a specific interest and knowledge, he is often mistaken. There is no simple answer to the fundamental question ‘what is a rule?’ We use the word in a very fluid fashion in English. Likewise, the term ‘rule processing’, as used widely in IT, is equally difficult to define simplistically. The best way to envisage the term is as a ‘centre of gravity’ within a wider domain. That domain contains many other ‘centres of gravity’, including CEP, statistical analytics, neural networks, natural language processing and so much more. Whole communities tend to gravitate towards and build themselves around some of these centres. The term 'rule processing' is associated with many different technology types, various software products, different architectural patterns, the functional capability of many applications and services, etc. There is considerable variation amongst these different technologies, techniques and products. Very broadly, a common theme is their ability to manage certain types of processing and problem solving through declarative, or semi-declarative, statements of propositional logic bound to action-based consequences. It is generally important to be able to decouple these statements from other parts of an overall system or architecture so that they can be managed and deployed independently.  As a centre of gravity, ‘rule processing’ is no island. It exists in the context of a domain of discourse that is, itself, highly interconnected and continuous.   Rule processing does not, for example, exist in splendid isolation to natural language processing.   On the contrary, an on-going theme of rule processing is to find better ways to express rules in natural language and map these to executable forms.   Rule processing does not exist in splendid isolation to CEP.   On the contrary, an event processing agent can reasonably be considered as a rule engine (a theme in ‘Power of Events’ by David Luckham).   Rule processing does not live in splendid isolation to statistical approaches such as Bayesian analytics. On the contrary, rule processing and statistical analytics are highly synergistic.   Rule processing does not even live in splendid isolation to neural networks. 
For example, significant research has centred on finding ways to translate trained nets into explicit rule sets in order to support forms of validation and facilitate insight into the knowledge stored in those nets. What about simplicity of form? Many rule processing technologies do indeed use a very simple form (‘If...Then’, ‘When...Do’, etc.). However, it is a fundamental mistake to equate simplicity of form with simplicity of function. It is absolutely mistaken to suggest that simplicity of form is a barrier to the efficient handling of complexity. There are countless real-world examples which serve to disprove that notion. Indeed, simplicity of form is often the key to handling complexity. Does rule processing offer a ‘one size fits all’? No, of course not. No serious commentator suggests it does. Does the design and management of large knowledge bases, expressed as rules, become difficult? Yes, it can do, but that is true of any large knowledge base, regardless of the form in which knowledge is expressed. The measure of complexity is not a function of rule set size or rule form. It tends to be correlated more strongly with the size of the ‘problem space’ (‘search space’), which is something quite different. Analysis of the problem space and the algorithms we use to search through that space are, of course, the very things we use to derive objective measures of the complexity of a given problem. This is basic computer science and common practice. Sailing a Dreadnought through the sea of information technology and lobbing shells at some of the islands we encounter along the way does no one any good. Building bridges and causeways between islands so that the inhabitants can collaborate in open discourse offers hope of real progress.

  • SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28

    - by pinaldave
    Locking is a mechanism used by the SQL Server Database Engine to synchronize access by multiple users to the same piece of data at the same time. In simpler words, it maintains the integrity of data by protecting (or preventing) access to the database object. From Book On-Line:

        LCK_M_BU    - Occurs when a task is waiting to acquire a Bulk Update (BU) lock.
        LCK_M_IS    - Occurs when a task is waiting to acquire an Intent Shared (IS) lock.
        LCK_M_IU    - Occurs when a task is waiting to acquire an Intent Update (IU) lock.
        LCK_M_IX    - Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock.
        LCK_M_S     - Occurs when a task is waiting to acquire a Shared lock.
        LCK_M_SCH_M - Occurs when a task is waiting to acquire a Schema Modify lock.
        LCK_M_SCH_S - Occurs when a task is waiting to acquire a Schema Share lock.
        LCK_M_SIU   - Occurs when a task is waiting to acquire a Shared With Intent Update lock.
        LCK_M_SIX   - Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock.
        LCK_M_U     - Occurs when a task is waiting to acquire an Update lock.
        LCK_M_UIX   - Occurs when a task is waiting to acquire an Update With Intent Exclusive lock.
        LCK_M_X     - Occurs when a task is waiting to acquire an Exclusive lock.

    LCK_M_XXX Explanation: I think the explanation of this wait type is the simplest. When any task is waiting to acquire a lock on any resource, this particular wait type occurs. The common reason for the task to be waiting to put a lock on the resource is that the resource is already locked and some other operation may be going on within it. This wait also indicates that resources are not available, or are occupied at the moment, due to some reason. There is a good chance that the waiting queries start to time out if this wait type is very high. Client application performance may degrade as well. You can use various methods to find blocking queries:

        EXEC sp_who2
        SQL SERVER – Quickest Way to Identify Blocking Query and Resolution – Dirty Solution
        DMV – sys.dm_tran_locks
        DMV – sys.dm_os_waiting_tasks

    Reducing LCK_M_XXX waits: Check the explicit transactions; if transactions are very long, this wait type can start building up because of other waiting transactions, so keep transactions small. Serializable isolation can build up this wait type; if that is an acceptable isolation level for your business, this wait type may be natural. The default isolation of SQL Server is Read Committed. One of my clients changed their isolation to Read Uncommitted; I strongly discourage the use of this, because it will probably lead to having lots of dirty data in the database. Identify blocking queries using the various methods described above, and then optimize them. Partitioning can be one of the options to consider, because it will allow transactions to execute concurrently on different partitions. If there are runaway queries, use a timeout (please discuss this solution with your database architect first, as timeouts can work against you). Check that there is no memory- or IO-related issue using the following counters.

    Checking memory-related Perfmon counters:

        SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2 is not good)
        SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value; benchmark)
        SQLServer: Buffer Manager\Buffer Hit Cache Ratio (higher is better; greater than 90% for a usually smooth-running system)
        SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds is not good)
        Memory: Available Mbytes (information only)
        Memory: Page Faults/sec (benchmark only)
        Memory: Pages/sec (benchmark only)

    Checking disk-related Perfmon counters:

        Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
        Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
        Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)

    Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book On-Line for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
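
    As a companion to the DMVs listed above, a sketch of one way to line up lock waiters with their blockers (standard columns of sys.dm_os_waiting_tasks; the LIKE filter simply restricts the output to lock waits):

        -- Hypothetical diagnostic: current lock waits and who is blocking them.
        SELECT wt.session_id,
               wt.blocking_session_id,
               wt.wait_type,
               wt.wait_duration_ms,
               wt.resource_description
        FROM sys.dm_os_waiting_tasks AS wt
        WHERE wt.wait_type LIKE 'LCK_M_%'
        ORDER BY wt.wait_duration_ms DESC;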

  • In-Memory OLTP Sample for SQL Server 2014 RTM

    - by Damian
    I have just found a very good resource about Hekaton (the In-Memory OLTP feature in SQL Server 2014). On the Codeplex site you can find the newest Hekaton samples - https://msftdbprodsamples.codeplex.com/releases/view/114491. The previous samples were related to the CTP2 version, but the newest will work with the RTM version. There are some fixed issues you might have hit if you tried to run the previous samples on the RTM version: Update (Apr 28, 2014): Fixed an issue where the isolation level for sample stored procedures demonstrating integrity checks was too low. The transaction isolation level for the following stored procedures was updated: Sales.uspInsertSpecialOfferProductinmem, Sales.uspDeleteSpecialOfferinmem, Production.uspInsertProductinmem, and Production.uspDeleteProductinmem.

  • What is the most secure environment for multiple CMS sites? [closed]

    - by Brian Gulino
    I wish to run about 50 Joomla or WordPress low-traffic websites on one server, or part of a server. Each website will be managed by its own naive owner, who will be able to access the Joomla or WordPress backend of the website. I am concerned about security and isolation, as my users will periodically get into trouble by not protecting their sites properly. Two alternatives I know of exist:

    1. Run one Linux system with multiple websites under Apache. Follow current Joomla and WordPress security tips. Increase the isolation of the individual sites by using mpm-itk, which will allow each website to run as its own user.

    2. Run virtualization software such as the Xen hypervisor, so that each site has its own virtual Linux system.

    I lack the experience needed to make this decision, and I am asking which path to take. Obviously, there may be other alternatives that I haven't considered.

  • javax.naming.InvalidNameException using Oracle BPM and weblogic when accessing directory

    - by alfredozn
    We are getting this exception when we start our cluster (2 managed servers, 1 admin). We have deployed only the EARs corresponding to OBPM 10.3.1 SP1 on a WebLogic 10.3. When the server cluster starts, one of the managed servers (the first to start) gets overloaded and runs out of connections to the directory DB because of this repeated error. It looks like the engine is trying to get the info from the LDAP server, but I don't know why it is building a wrong query.

        fuego.directory.DirectoryRuntimeException: Exception [javax.naming.InvalidNameException: CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp: [LDAP: error code 34 - 0000208F: NameErr: DSID-031001BA, problem 2006 (BAD_NAME), data 8349, best match of: 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp,dc=televisa,dc=com,dc=mx' ^@]; remaining name 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp'].
            at fuego.directory.DirectoryRuntimeException.wrapException(DirectoryRuntimeException.java:85)
            at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:163)
            at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:110)
            at fuego.directory.hybrid.ldap.Repository.selectById(Repository.java:38)
            at fuego.directory.hybrid.msad.MSADGroupValueProvider.getAssignedParticipantsInternal(MSADGroupValueProvider.java:124)
            at fuego.directory.hybrid.msad.MSADGroupValueProvider.getAssignedParticipants(MSADGroupValueProvider.java:70)
            at fuego.directory.hybrid.ldap.Group$7.getValue(Group.java:149)
            at fuego.directory.hybrid.ldap.Group$7.getValue(Group.java:152)
            at fuego.directory.hybrid.ldap.LDAPResult.getValue(LDAPResult.java:76)
            at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.setInfo(LDAPOrganizationGroupAccessor.java:352)
            at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.build(LDAPOrganizationGroupAccessor.java:121)
            at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.build(LDAPOrganizationGroupAccessor.java:114)
            at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.fetchGroup(LDAPOrganizationGroupAccessor.java:94)
            at fuego.directory.hybrid.HybridGroupAccessor.fetchGroup(HybridGroupAccessor.java:146)
            at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at fuego.directory.provider.DirectorySessionImpl$AccessorProxy.invoke(DirectorySessionImpl.java:756)
            at $Proxy66.fetchGroup(Unknown Source)
            at fuego.directory.DirOrganizationalGroup.fetch(DirOrganizationalGroup.java:275)
            at fuego.metadata.GroupManager.loadGroup(GroupManager.java:225)
            at fuego.metadata.GroupManager.find(GroupManager.java:57)
            at fuego.metadata.ParticipantManager.addNestedGroups(ParticipantManager.java:621)
            at fuego.metadata.ParticipantManager.buildCompleteRoleAssignments(ParticipantManager.java:527)
            at fuego.metadata.Participant$RoleTransitiveClousure.build(Participant.java:760)
            at fuego.metadata.Participant$RoleTransitiveClousure.access$100(Participant.java:692)
            at fuego.metadata.Participant.buildRoles(Participant.java:401)
            at fuego.metadata.Participant.updateMembers(Participant.java:372)
            at fuego.metadata.Participant.<init>(Participant.java:64)
            at fuego.metadata.Participant.createUncacheParticipant(Participant.java:84)
            at fuego.server.persistence.jdbc.JdbcProcessInstancePersMgr.loadItems(JdbcProcessInstancePersMgr.java:1706)
            at fuego.server.persistence.Persistence.loadInstanceItems(Persistence.java:838)
            at fuego.server.AbstractInstanceService.readInstance(AbstractInstanceService.java:791)
            at fuego.ejbengine.EJBInstanceService.getLockedROImpl(EJBInstanceService.java:218)
            at fuego.server.AbstractInstanceService.getLockedROImpl(AbstractInstanceService.java:892)
            at fuego.server.AbstractInstanceService.getLockedImpl(AbstractInstanceService.java:743)
            at fuego.server.AbstractInstanceService.getLockedImpl(AbstractInstanceService.java:730)
            at fuego.server.AbstractInstanceService.getLocked(AbstractInstanceService.java:144)
            at fuego.server.AbstractInstanceService.getLocked(AbstractInstanceService.java:162)
            at fuego.server.AbstractInstanceService.unselectAllItems(AbstractInstanceService.java:454)
            at fuego.server.execution.ToDoItemUnselect.execute(ToDoItemUnselect.java:105)
            at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304)
            at fuego.transaction.TransactionAction.startNestedTransaction(TransactionAction.java:527)
            at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:548)
            at fuego.transaction.TransactionAction.start(TransactionAction.java:212)
            at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)
            at fuego.server.execution.DefaultEngineExecution.executeAutomaticWork(DefaultEngineExecution.java:62)
            at fuego.server.execution.EngineExecution.executeAutomaticWork(EngineExecution.java:42)
            at fuego.server.execution.ToDoItem.executeAutomaticWork(ToDoItem.java:261)
            at fuego.ejbengine.ItemExecutionBean$1.execute(ItemExecutionBean.java:223)
            at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304)
            at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:470)
            at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:551)
            at fuego.transaction.TransactionAction.start(TransactionAction.java:212)
            at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)
            at fuego.server.execution.EngineExecution.executeImmediate(EngineExecution.java:66)
            at fuego.ejbengine.ItemExecutionBean.processMessage(ItemExecutionBean.java:209)
            at fuego.ejbengine.ItemExecutionBean.onMessage(ItemExecutionBean.java:120)
            at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:466)
            at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:371)
            at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:327)
            at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4547)
            at weblogic.jms.client.JMSSession.execute(JMSSession.java:4233)
            at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3709)
            at weblogic.jms.client.JMSSession.access$000(JMSSession.java:114)
            at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5058)
            at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
        Caused by: javax.naming.InvalidNameException: CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp: [LDAP: error code 34 - 0000208F: NameErr: DSID-031001BA, problem 2006 (BAD_NAME), data 8349, best match of: 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp,dc=televisa,dc=com,dc=mx' ^@]; remaining name 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp'
            at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2979)
            at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2794)
            at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1826)
            at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1749)
            at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:368)
            at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:338)
            at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:321)
            at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:248)
            at fuego.jndi.FaultTolerantLdapContext.search(FaultTolerantLdapContext.java:612)
            at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:136)
            ... 67 more
