Search Results

Search found 19499 results on 780 pages for 'transaction log'.

Page 17/780 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • MySQL: automatic rollback on transaction failure

    - by praksant
    Is there any way to configure MySQL to automatically roll back a transaction on the first error or warning? At the moment, if everything goes well the transaction commits, but on failure it leaves the transaction open, and the next START TRANSACTION implicitly commits the incomplete changes from the failed one. (I'm executing the queries from PHP, but I don't want to check for failures in PHP, as that would add extra round trips between the MySQL server and the web server.) Thank you
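
    One way to approximate this on the server side, as a sketch rather than a MySQL setting (the table, columns, and procedure name below are made up for illustration), is to move the statements into a stored procedure with an exit handler, so the rollback happens inside MySQL and PHP only makes a single call:

        DELIMITER //
        CREATE PROCEDURE transfer_funds()
        BEGIN
            -- On any SQL exception, roll back and re-raise the error to the caller
            DECLARE EXIT HANDLER FOR SQLEXCEPTION
            BEGIN
                ROLLBACK;
                RESIGNAL;   -- requires MySQL 5.5+; omit on older versions
            END;

            START TRANSACTION;
            UPDATE accounts SET balance = balance - 100 WHERE id = 1;
            UPDATE accounts SET balance = balance + 100 WHERE id = 2;
            COMMIT;
        END //
        DELIMITER ;

    PHP then issues one CALL transfer_funds(); any failure inside the procedure rolls back the whole unit without a second round trip.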

    Read the article

  • INSERT and transaction serialization in PostgreSQL

    - by Alexander
    Hello! I have a question. The transaction isolation level is set to serializable. If one user opens a transaction and INSERTs or UPDATEs data in "table1", and then another user opens a transaction and tries to INSERT data into the same table, does the second user need to wait until the first user commits the transaction?
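
    A rough way to see the behaviour for yourself (a sketch assuming a test table "table1" with a primary key on id) is to run the two transactions in separate psql sessions:

        -- Session 1
        BEGIN ISOLATION LEVEL SERIALIZABLE;
        INSERT INTO table1 (id, val) VALUES (1, 'a');   -- not committed yet

        -- Session 2, while session 1 is still open
        BEGIN ISOLATION LEVEL SERIALIZABLE;
        INSERT INTO table1 (id, val) VALUES (2, 'b');   -- returns immediately, no waiting
        INSERT INTO table1 (id, val) VALUES (1, 'x');   -- blocks: same primary key as session 1's
                                                        -- uncommitted row, so it must wait

    Plain INSERTs into the same table do not block each other; a writer only waits when it touches the same row (or a conflicting unique key) that another open transaction has already written.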

    Read the article

  • Could this server log mean my server is being used as a proxy?

    - by So Over It
    I came across the following entry in my access.log:

        58.218.199.147 - - [05/Jun/2012:12:56:04 +1000] "GET http://proxyproxys.com/ HTTP/1.1" 200 183 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

    Normally when I see a full URL in my access.log I assume it is log spam from people trying to get me to visit their site, and those entries are normally followed by a 404 response. The entry above got a 200 'success' response! From some searching it seems this can occur when someone is trying to use your server as a proxy. This disturbed me even more, especially because the URL in question has the word proxy in it. Visiting 'proxyproxys.com' (using hidemyass.com to protect my own identity), the site returns what appears to be some sort of 'proxy judge':

        ----------------------------------------
        HTTP_ACCEPT=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        HTTP_ACCEPT_LANGUAGE=en-US,en;q=0.8
        HTTP_USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.53 Safari/536.5
        HTTP_CONNECTION=close
        REMOTE_PORT=56355
        REMOTE_HOST=74.63.112.142
        REMOTE_ADDR=74.63.112.142
        ----------------------------------------
        CS_ProxyJudge Result=HIGH_ANONYMITY
        ----------------------------------------

    Questions:
    1) Does the 200 success mean that someone has been able to successfully use my server as a proxy?
    2) Are there other means of confirming whether my server is being used as a proxy?
    3) Can you refer me to documentation to help 'close up' the security gap, if there is one?
    Thanks.
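
    If the server in question is Apache, one quick check (a hedged sketch, not a confirmed fix for this exact setup) is to make sure forward proxying is explicitly disabled; a 200 on a proxied-style GET often just means your default vhost answered the request with its own content (183 bytes here) rather than relaying it:

        # httpd.conf / apache2.conf - only relevant if mod_proxy is loaded at all
        # (Apache 2.2-style access control shown; on 2.4 use "Require all denied")
        ProxyRequests Off        # never act as a forward proxy (the default, but be explicit)
        <Proxy *>
            Order deny,allow
            Deny from all
        </Proxy>

    You can also grep the access log for other requests whose target is a full http:// URL and confirm they all return your own page size rather than the remote site's.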

    Read the article

  • MSDTC - Communication with the underlying transaction manager has failed (Firewall open, MSDTC network access on)

    - by SocialAddict
    I'm having problems with my ASP.NET Web Forms system. It worked on our test server, but now that we are putting it live, one of the servers sits in a DMZ and the SQL Server is outside of it (still on our network, although on a different subnet). I have opened up the firewall completely between these two boxes to see if that was the issue, and it still gives the error message "Communication with the underlying transaction manager has failed" whenever we try to use TransactionScope. We can retrieve data fine; it's only transactions that break. We have also used msdtc ping to test the connection, and with the firewall amendments it pings successfully, but the same error still occurs. How do I resolve this error? Any help would be great as we have a system to go live today. Panic :)

    Edit: I have created a more straightforward test page with a transaction, as below, and this works fine. Could a nested transaction cause this kind of error, and if so, why would it only cause an issue when using a live box in a DMZ behind a firewall?

        AuditRepository auditRepository = new AuditRepository();
        try
        {
            using (TransactionScope scope = new TransactionScope())
            {
                auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#1", 1);
                auditRepository.Save();
                auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#2", 1);
                auditRepository.Save();
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Response.Write("Test Error For Transaction: " + ex.Message + "<br />" + ex.StackTrace);
        }
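
    For what it's worth, a single connection inside a TransactionScope (as in the test page above) stays a lightweight local transaction, so it never touches MSDTC. A rough sketch of the kind of code that does force escalation on SQL Server 2005, and therefore needs MSDTC reachable through the DMZ firewall, is two connections enlisting in the same scope (the connection string variable is assumed):

        // requires references to System.Transactions and System.Data.SqlClient
        using (var scope = new TransactionScope())
        {
            using (var conn1 = new SqlConnection(connectionString))
            {
                conn1.Open();   // enlists in the ambient transaction
                // ... run commands on conn1 ...
            }

            using (var conn2 = new SqlConnection(connectionString))
            {
                conn2.Open();   // a second connection in the same scope: on SQL Server 2005
                                // the transaction is promoted to a distributed (MSDTC) transaction
                // ... run commands on conn2 ...
            }

            scope.Complete();
        }

    So if the real pages open more than one connection (or nest repositories that each open their own), that would explain why they fail across the firewall while this simple single-connection test succeeds.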

    Read the article

  • How to troubleshoot SQL Server issues

    - by joe
    I have an ASP.NET application with a SQL Server database, and I am wondering if you can give your ideas on how to troubleshoot the following issue: I can insert / update / delete from any table, but I have one page that uses transactions to insert into different tables. The C# code is correct and very simple, but it fails. I used SQL Profiler to see how my app interacts with the DB; since the app is using stored procedures, I can catch the exec procedure statement and run it manually from SSMS and it works fine, but the same stored procedure fails from the application! This leads me to think the issue is coming from the user account and settings. I am no expert in SQL Server and wonder if anyone can explain how to verify the required settings for the user account. Thanks

    EDIT: in web.config, here is how I reference my connection:

        <connectionStrings>
          <add name="Conn"
               connectionString="Data Source=localhost;Initial Catalog=myDB;Persist Security Info=True;User ID=DbUser;Password=password1254_3"
               providerName="System.Data.SqlClient" />
        </connectionStrings>

    EDIT: I will try to describe the process here:
    1. I begin a transaction.
    2. I call a stored proc to insert (which succeeds) and return the scope identity (which will be used in the next step).
    3. I call another stored procedure to insert some more info + the scope identity from step 2, which is a foreign key here.
    4. I get an error: foreign key violation.
    5. The transaction is rolled back; now the tables are empty again... thanks
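
    One way to test the "user account" theory directly, sketched here with the DbUser login from the web.config above (the procedure and parameter names are hypothetical; substitute the actual EXEC statements captured in SQL Profiler), is to impersonate that login in SSMS and re-run the same calls. If they fail under impersonation but work as your admin login, it is a permissions problem rather than a code problem:

        -- run in SSMS against myDB, as an admin
        EXECUTE AS LOGIN = 'DbUser';                              -- impersonate the application's login
        EXEC dbo.usp_InsertParent @Name = N'test';                -- hypothetical: use the statements from Profiler
        REVERT;                                                   -- back to your own login

        -- or list what DbUser can actually do against a given procedure
        EXECUTE AS LOGIN = 'DbUser';
        SELECT * FROM fn_my_permissions('dbo.usp_InsertParent', 'OBJECT');
        REVERT;

    Given the symptom in the second EDIT, though, a foreign key violation usually points at the value passed between steps 2 and 3 (for example SCOPE_IDENTITY() coming back NULL because the first insert ran on a different scope or connection) rather than at permissions.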

    Read the article

  • NHibernate Session per Call in WCF - How to Rollback

    - by Corey Coogan
    I've implemented some components to use WCF with both an IoC container (StructureMap) and the session-per-call pattern. The NHibernate part is mostly taken from here: http://realfiction.net/Content/Entry/133. It seems to be OK, but I want to open a transaction with each call and commit at the end, rather than just calling Flush() as the article does. Here's where I am running into some problems and could use some advice. I haven't figured out a good way to roll back. I realize I can check the CommunicationState and, if there's an exception, roll back, like so:

        public void Detach(InstanceContext owner)
        {
            if (Session != null)
            {
                try
                {
                    if (owner.State == CommunicationState.Faulted)
                        RollbackTransaction();
                    else
                        CommitTransaction();
                }
                finally
                {
                    Session.Dispose();
                }
            }
        }

        void CommitTransaction()
        {
            if (Session.Transaction != null && Session.Transaction.IsActive)
                Session.Transaction.Commit();
        }

        void RollbackTransaction()
        {
            if (Session.Transaction != null && Session.Transaction.IsActive)
                Session.Transaction.Rollback();
        }

    However, I almost never return a faulted state from a service call. I would typically handle the exception, return an appropriate indicator on my response object, and roll back the transaction myself. The only way I can think of handling this would be to inject not only repositories into my WCF services, but also an ISession, so I can roll back and handle things the way I want. That doesn't sit well with me and seems kind of leaky. Is anyone else handling the same problem?

    Read the article

  • How can I make Webalizer work with rolling apache logs?

    - by Simon
    I am using Webalizer to view my site stats, and it's working OK with one exception. I have log rolling configured, so my log directory looks something like this:

        :/var/log/apache2$ ls
        access.log  access.log.1  access.log.2.gz  access.log.3.gz ...

    Webalizer ends up only storing the last ~4 days each time it runs, so I only ever get a rolling window of stats rather than the full month. How can I make Webalizer process the full set of logs?
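
    One approach, sketched under the assumption that your Webalizer build was compiled with gzip support and that you enable incremental mode (Incremental yes in webalizer.conf) so repeated runs append to the existing history instead of replacing it, is to feed the rotated files to Webalizer oldest-first, ending with the live log:

        #!/bin/sh
        # Process rotated Apache logs oldest-first, then the current log.
        # Paths and config location are examples; adjust to your setup.
        CONF=/etc/webalizer/webalizer.conf
        LOGDIR=/var/log/apache2

        for f in $(ls -tr "$LOGDIR"/access.log*); do
            webalizer -c "$CONF" "$f"
        done

    With incremental mode on, Webalizer keeps a state file and skips records it has already counted, so re-processing an overlapping file should not double-count visits.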

    Read the article

  • How to delete Tomcat Access Log after n days?

    - by Andreas
    I would like to keep only the access logs of the last n days created by the Tomcat Access Log Valve: http://tomcat.apache.org/tomcat-6.0-doc/config/valve.html#Access%20Log%20Valve. But there seems to be no configuration attribute to define how long to keep the log files. I guess this is because the Access Log Valve only creates log files and doesn't delete them; is that correct?
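
    If that is the case, one common workaround (a sketch, with an example log path and a 30-day cutoff you would adjust) is to let cron prune the old files, since the valve names them with a date suffix and never touches them again:

        #!/bin/sh
        # e.g. saved as /etc/cron.daily/tomcat-accesslog-cleanup (and made executable)
        # delete Tomcat access logs older than 30 days
        find /usr/local/tomcat/logs -name 'localhost_access_log.*.txt' -mtime +30 -delete

    The file name pattern above matches the valve's default prefix and suffix; if you set the prefix or suffix attributes on the valve, change the -name pattern to match.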

    Read the article

  • How can I have 2 ADO access methods use the same Transaction?

    - by KevinDeus
    I'm writing a test to see if my LINQ to Entities statement works. I'll be using this pattern for others if I can get the concept going. My intention here is to INSERT a record with ADO, then verify it can be queried with LINQ, and then ROLLBACK the whole thing at the end. I'm using ADO to insert because I don't want to use the object or the entity model that I am testing; I figure that a plain ADO INSERT should do fine. The problem is that they both use different types of connections. Is it possible to have these two different data access methods use the same transaction so I can roll it back?

        _conn = new SqlConnection(_connectionString);
        _conn.Open();
        _trans = _conn.BeginTransaction();

        var x = new SqlCommand("INSERT INTO Table1(ID, LastName, FirstName, DateOfBirth) values('127', 'test2', 'user', '2-12-1939');", _conn);
        x.Transaction = _trans; // the command must be associated with the open transaction
        x.ExecuteNonQuery();
        // So far, so good. Adding a record to the table.

        // At this point, we need to do _trans.Commit() here because our Entity code can't use
        // the same connection. Then I have to manually delete in the TestHarness.TearDown...
        // I'd like to eliminate this step.

        // (This code is in another object; I'll include it for brevity. Imagine that I passed the connection in.)
        // Check to see if it is there:
        using (var ctx = new XEntities(_conn)) // can't do this.. _conn is not an EntityConnection!
        {
            var retVal = (from m in ctx.Table1
                          where m.first_name == "test2"
                          where m.last_name == "user"
                          where m.Date_of_Birth == "2-12-1939"
                          where m.ID == 127
                          select m).FirstOrDefault();
            return (retVal != null);
        }

        // Do test..
        Assert.BlahBlah();

        _trans.Rollback();
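
    One way to get both access methods under a single transaction, sketched here on the assumption that the Entity Framework context can be constructed with its default connection string (XEntities and the table/column names come from the snippet above), is to drop the explicit SqlTransaction and let an ambient TransactionScope cover both connections; disposing the scope without calling Complete() rolls everything back:

        // requires a reference to System.Transactions
        using (var scope = new TransactionScope())
        {
            using (var conn = new SqlConnection(_connectionString))
            {
                conn.Open();    // enlists in the ambient transaction
                var cmd = new SqlCommand(
                    "INSERT INTO Table1(ID, LastName, FirstName, DateOfBirth) " +
                    "VALUES ('127', 'test2', 'user', '2-12-1939');", conn);
                cmd.ExecuteNonQuery();
            }

            using (var ctx = new XEntities())   // opens its own connection, also enlisted
            {
                var found = (from m in ctx.Table1
                             where m.ID == 127
                             select m).FirstOrDefault();
                Assert.IsNotNull(found);
            }

            // no scope.Complete() -> everything above is rolled back when the scope is disposed
        }

    Note that two connections inside one scope can escalate to a distributed transaction (depending on the SQL Server version and whether both connections are open at once), so MSDTC may need to be running on the test machine.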

    Read the article

  • How to repair a damaged transaction log file for Exchange 2003

    - by Markus Larsson
    Hi! Yesterday we had a power failure and the UPS did not work (it had worked perfectly before). Everything seemed to be OK when I started all the servers again, except for the mail server: when I try to mount the store I get the following message: "The database files in this store are corrupted."

    Server: Exchange 2003 running on a Small Business Server
    Latest full backup: one week old
    Backup program: Backup Exec 9.0

    This is what I have done:
    1. Copied every file in the MDBDATA folder (edb, stm, log).
    2. Ran Eseutil /d for priv1.edb.
    3. Ran Eseutil /p for priv1.edb (took seven hours).
    4. Ran Isinteg -fix -test alltests; here it breaks down. Isinteg fails with the following error: "Isinteg cannot initiate verification process. Please review the log file for more information." The problem is that no log file is created.
    5. Giving up on this route, I decided to do a restore from the backup. It fails with the following errors: "Unable to read the header of logfile E00.log. Error -501" and "Information Store (5976) Callback function call ErrESECBRestoreComplete ended with error 0xC80001F5 The log file is damaged."

    My conclusion is that E00.log is damaged, so how can I repair it so that I can restore the database? Or should I give up and try some other route?

    Read the article

  • Windows SteadyState - system's security log is full

    - by Matt
    Quick version: a new computer, attached to a Windows domain, with SteadyState and Disk Protection turned on, cannot log on as a domain user because Windows states the 'system security log is full'.

    Troubleshooting performed:
    - disabled all 'restrictions' listed in SteadyState
    - cleared the system security log
    - changed the security log settings to overwrite entries when the log becomes full
    - restarted the computer to commit the changes and verified the changes were committed - still cannot log on as a domain user
    - moved the Documents and Settings folder to another partition - still cannot log on as a domain user

    Let me know if you need a more detailed description of any of the steps performed. I appreciate any help you can give me.

    Read the article

  • WT-NMP - PHP-CGI randomly stops running with no error log

    - by alexfontaine
    We have recently installed WT-NMP and are currently running php-cgi with PHP 5.4.24. We are running fairly simple PHP scripts, and when testing, everything runs fine. Over the weekend we wanted to keep the server running to test it over a longer period of time. The server and scripts ran fine all day on Friday, but sometime late on Saturday the php-cgi process stopped running. There are no errors in the error log (C:\WT-NMP\log). In the PHP configuration (php.ini) I have the following options set:

        error_reporting = E_ALL
        display_errors = On
        display_startup_errors = On
        log_errors = On
        html_errors = On
        error_log = "c:/wt-nmp/log/php_error.log"

    We also have the standard nginx.conf error logs:

        access_log "c:/wt-nmp/log/nginx_access.log";
        error_log  "c:/wt-nmp/log/nginx_error.log" warn;

    So, since the log directory is empty, I am assuming that the running PHP scripts and general nginx operations are not what is causing php-cgi to stop. My questions are:

    1. What else could cause php-cgi to stop running?
    2. Are there any other logging options we could turn on that would help us track this down?
    3. Are there other log locations that we should be looking at?

    Thanks!

    Read the article

  • Backing Up Transaction Logs to Tape?

    - by David Stein
    I'm about to put my database in the Full recovery model and start taking transaction log backups. I am taking a full nightly backup to another server, and later in the evening this file and many others are backed up to tape. My question is this: I will take hourly (or more frequent, if necessary) t-log backups and store them on the other server as well. However, if my full backups are passing DBCC and integrity checks, do I need to put my t-logs on tape? If someone wants point-in-time recovery to yesterday at 2pm, I would need the previous full backup and the transaction logs. Other than that case, though, if I know my full backups are good, is there value in keeping the previous day's transaction log backups?
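
    For context, the two pieces needed for that 2pm restore are just the full backup plus the chain of log backups taken since it, along the lines of this sketch (database, file names, and paths are placeholders):

        -- hourly job on the primary
        BACKUP LOG [MyDb] TO DISK = N'\\backupserver\sql\MyDb_log_1400.trn';

        -- point-in-time restore, e.g. to yesterday at 2pm
        RESTORE DATABASE [MyDb] FROM DISK = N'\\backupserver\sql\MyDb_full.bak' WITH NORECOVERY;
        RESTORE LOG [MyDb] FROM DISK = N'\\backupserver\sql\MyDb_log_1300.trn' WITH NORECOVERY;
        RESTORE LOG [MyDb] FROM DISK = N'\\backupserver\sql\MyDb_log_1400.trn'
            WITH STOPAT = '2010-05-01 14:00', RECOVERY;

    So whether the .trn files belong on tape really comes down to whether you might need point-in-time recovery after the disk copies on the other server are gone; a full backup alone can only restore to the moment it was taken.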

    Read the article

  • SQL 2005 Log Shipping - Was working, now isn't!

    - by Jim
    Hello, I had log shipping working fine between two SQL 2005 servers. I suspect that a job was added to the source server which backed up the transaction log to disk (nothing to do with the existing log shipping job). As I understand it, if you do this then log shipping will stop working, and sure enough, it no longer works. I've deleted the job which had just been created. Log shipping still does not work. I've rebooted both servers and, again, log shipping does not work. I'm at a loss now... all I get is the following error:

        The log shipping secondary database XXXXXXXXXX has restore threshold of 45 minutes and is out of sync.
        No restore was performed for 5882 minutes. Restored latency is 15 minutes.
        Check agent log and logshipping monitor information.

    Any help appreciated! Thanks in advance.
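
    The usual reason deleting the job doesn't help is that the stray job already took a log backup that the log shipping copy/restore jobs never saw, so the secondary is missing a link in the log chain. Two hedged ways forward (database, path, and file names below are placeholders): either find that stray .trn/.bak file and apply it to the secondary by hand, after which the normal restore job can continue, or re-initialise log shipping from a new full backup.

        -- on the secondary, apply the missing log backup manually
        RESTORE LOG [MyDatabase]
            FROM DISK = N'D:\LogShipping\stray_log_backup.trn'
            WITH NORECOVERY;

    If the stray backup can't be found, restoring a fresh full backup of the primary onto the secondary WITH NORECOVERY and letting the log shipping jobs resume from there is the fallback.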

    Read the article

  • Under what circumstances is an SqlConnection automatically enlisted in an ambient TransactionScope Transaction?

    - by Triynko
    What does it mean for an SqlConnection to be "enlisted" in a transaction? Does it simply mean that commands I execute on the connection will participate in the transaction? If so, under what circumstances is an SqlConnection automatically enlisted in an ambient TransactionScope Transaction? See the questions in the code comments; my guess at each question's answer follows the question in parentheses.

    Scenario 1: opening connections INSIDE a transaction scope

        using (TransactionScope scope = new TransactionScope())
        using (SqlConnection conn = ConnectToDB())
        {
            // Q1: Is the connection automatically enlisted in the transaction? (Yes?)
            //
            // Q2: If I open (and run commands on) a second connection now,
            //     with an identical connection string,
            //     what, if any, is the relationship of this second connection to the first?
            //
            // Q3: Will this second connection's automatic enlistment
            //     in the current transaction scope cause the transaction to be
            //     escalated to a distributed transaction? (Yes?)
        }

    Scenario 2: using connections INSIDE a transaction scope that were opened OUTSIDE of it

        // Assume no ambient transaction is active now
        SqlConnection new_or_existing_connection = ConnectToDB(); // or passed in as a method parameter
        using (TransactionScope scope = new TransactionScope())
        {
            // The connection was opened before the transaction scope was created.
            // Q4: If I start executing commands on the connection now,
            //     will it automatically become enlisted in the current transaction scope? (No?)
            //
            // Q5: If not enlisted, will commands I execute on the connection now
            //     participate in the ambient transaction? (No?)
            //
            // Q6: If commands on this connection are not participating in the current
            //     transaction, will they be committed even if I roll back the current
            //     transaction scope? (Yes?)
            //
            // If my thoughts are correct, all of the above is disturbing, because it would
            // look like I'm executing commands in a transaction scope, when in fact I'm not
            // at all, until I do the following...

            // Now enlisting the existing connection in the current transaction
            conn.EnlistTransaction(Transaction.Current);

            // Q7: Does the above method explicitly enlist the pre-existing connection
            //     in the current ambient transaction, so that commands I execute on the
            //     connection now participate in the ambient transaction? (Yes?)
            //
            // Q8: If the existing connection was already enlisted in a transaction
            //     when I called the above method, what would happen? Might an error be thrown? (Probably?)
            //
            // Q9: If the existing connection was already enlisted in a transaction
            //     and I did NOT call the above method to enlist it, would any commands
            //     I execute on it participate in its existing transaction rather than
            //     the current transaction scope? (Yes?)
        }

    Read the article

  • Can I ask PostgreSQL to ignore errors within a transaction

    - by fmark
    I use PostgreSQL with the PostGIS extensions for ad-hoc spatial analysis. I generally construct and issue SQL queries by hand from within psql. I always wrap an analysis session within a transaction, so if I issue a destructive query I can roll it back. However, when I issue a query that contains an error, it aborts the transaction, and any further queries elicit the following warning:

        ERROR: current transaction is aborted, commands ignored until end of transaction block

    Is there a way I can turn this behaviour off? It is tiresome to roll back the transaction and rerun the previous queries every time I make a typo.
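
    For interactive psql work specifically, one option worth looking at (a sketch of psql's own facility rather than a server-side setting; the table name is just an example) is ON_ERROR_ROLLBACK, which makes psql issue an implicit SAVEPOINT before each statement and roll back to it when a statement fails, so the surrounding transaction stays usable:

        -- inside psql
        \set ON_ERROR_ROLLBACK interactive   -- or 'on'; 'interactive' only applies to typed commands

        BEGIN;
        DELETE FROM parcels WHERE id = 42;       -- fine
        SELEC * FROM parcels;                    -- typo: error, but only this statement is rolled back
        SELECT count(*) FROM parcels;            -- still works; the DELETE above is still pending
        ROLLBACK;                                -- or COMMIT when happy

    It can also go in ~/.psqlrc so every session picks it up.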

    Read the article

  • rsync option --log-file does not seem to work

    - by user1017735
    I am using the --log-file option in rsync to see the logs, but when I try to run it, it says:

        --log-file unrecognized option

    Here is my command:

        # /usr/bin/rsync -av -u --log-file="/sreeni/log.txt" --rsync-path=/usr/local/bin/rsync /sreeni nnmhpt20.ind.hp.com:/sreeni

    Could someone help me with the right syntax? I tried these variations as well:

        # /usr/bin/rsync -av -u --log-file /sreeni/log.txt --rsync-path=/usr/local/bin/rsync /sreeni nnmhpt20.ind.hp.com:/sreeni

        # /usr/bin/rsync -av -u --log-file="/sreeni/log.txt" --rsync-path=/usr/local/bin/rsync /sreeni nnmhpt20.ind.hp.com:/sreeni

    Read the article

  • Transaction & Locks Problem

    - by jay
    Within a DO TRANSACTION block, I defined a label, and inside this label I am accessing a table with EXCLUSIVE-LOCK. By the end of the label I have made all my changes to that table, but I am still inside the transaction block. Now, if I try to access that same table from another session, it shows an error: table in use by another user. So is it possible to release the table within the transaction, so that another user can access it?

    Example, session 1 (control is currently at the marked point):

        DO TRANSACTION:
            loopb:
            REPEAT:
                /* ... */
                /* <-- control is here right now */
            END. /* repeat */
            /* ... */
        END. /* do transaction */

    Session 2: I tried to access the same table, but it shows an error that the table is locked by another user.

    Read the article

  • Transaction & Locks Problem

    - by jay
    with in do transaction, i defined a label and in this label i am accessing a table with exclusive-lock.and at the end of label i have done all the changes in that table. bt now i am with in transaction block. Now, i tried to access that same table in another session.then it show an error, Table used by another user. So is it possible that, can we release teh table with in transaction,so another user can access it. Exe Session1) DO TRANSACTION: loopb: repeat: -- --------------------- control is here right now. end. /repeat/ -- end /do transaction/ Session2) I tried to access same table,bt it show an error,that table locked by another user.

    Read the article

  • Direct web URL to PayPal transaction

    - by tags2k
    Having implemented PayPal's Website Payments Standard, I'd like to link to the details view of a transaction from my site's back end - just a simple direct web URL to the PayPal side. I don't know why this is tricky, but the URL I get when logged in to the PayPal system seems very obfuscated, in this form:

        history.paypal.com/uk/cgi-bin/webscr?cmd=_history-details&info=[looks like some kind of GUID]&ptype=4&history_cache=[huge encoded string]

    I'm guessing it's by design, but it's not very helpful if you want a quick way to jump to a transaction's details. I've tried the https://www.paypal.com/vst/id=1234 form (also with co.uk, as I am UK-based) recommended on a few sites I found in my search, but I am told that:

        The transaction ID in your link is invalid.

    This happens even when copying the transaction ID directly from PayPal's back-end order listing. Is there a reliable way to link directly to an order / transaction details page in PayPal?

    Read the article

  • Log Problem and bash script

    - by GvWorker
    Hello guys, I have 11 Debian servers running on Rackspace cloud hosting, all running VHCS2 for hosting management. One server is used for the application and 10 are used only for SMTP; each SMTP server hosts one domain. My question is regarding the SMTP servers. When my clients use SMTP, logs are created in /var/log/, but within 24 hours the drives are full and the server refuses all SMTP connections. I deleted the logs and ran the following command to check the disk space:

        df -h

    but it still showed the disk as full, and the server still refused SMTP connections. Then I ran the following command to see the truth:

        du --max-depth=1 -h

    It showed the real disk space used. I then rebooted the server and it worked fine again, but after a few hours the same thing happened. So I created the following script:

        #!/bin/sh
        rm -fr /var/log/*
        rm -fr /var/log/apache2/*.log
        rm -fr /var/log/apache2/*.log.*
        rm -fr /var/log/apache2/users/*
        rm -fr /var/log/apache2/backup/*
        reboot

    It worked for days, but after that the logs are again filling up the disk. Now I would like the following, if anybody can help me:

    1. When I delete files, the disk space should free up without rebooting the server.
    2. The logs should stay within a specific range - like a fixed-size file where old data is overwritten by new data.
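
    A gentler alternative to the rm-and-reboot script, sketched here with made-up paths, size, and retention values you would tune, is to let logrotate cap the log sizes and signal the daemons so they reopen their files. (The reason df still showed a full disk after rm is that the running daemons kept the deleted files open until the reboot.)

        # /etc/logrotate.d/mail-and-apache  (example only; merge with the configs Debian already ships)
        /var/log/mail.log /var/log/mail.info /var/log/mail.warn /var/log/apache2/*.log {
            size 100M          # rotate as soon as a file reaches 100 MB
            rotate 4           # keep only 4 old copies
            compress
            missingok
            notifempty
            sharedscripts
            postrotate
                invoke-rc.d rsyslog rotate >/dev/null 2>&1 || true
                invoke-rc.d apache2 reload >/dev/null 2>&1 || true
            endscript
        }

    With the daemons told to reopen their logs in postrotate, the space is released immediately and no reboot is needed.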

    Read the article

  • With NHibernate and Transaction, do I rollback on commit failure or does it auto rollback on a single commit failure?

    - by mattcodes
    I've built the following Dispose method for my Unit of Work, which essentially wraps the active NHibernate session and transaction (the transaction is stored in a variable after opening the session, so it isn't replaced if the NHibernate session gets a new transaction after an error):

        public void Dispose()
        {
            Func<ITransaction, bool> transactionStateOkayFunc =
                trans => trans != null && trans.IsActive && !trans.WasRolledBack;

            try
            {
                if (transactionStateOkayFunc(this.transaction))
                {
                    if (HasErrored)
                    {
                        transaction.Rollback();
                    }
                    else
                    {
                        try
                        {
                            transaction.Commit();
                        }
                        catch (Exception)
                        {
                            if (transactionStateOkayFunc(transaction))
                                transaction.Rollback();
                            throw;
                        }
                    }
                }
            }
            finally
            {
                if (transaction != null)
                    transaction.Dispose();
                if (session.IsOpen)
                    session.Close();
            }
        }

    I can't help feeling that this code is a little bloated. Will a transaction automatically roll back if a discrete Commit fails, in the case of non-nested transactions? Will Commit or Rollback automatically Dispose the transaction? If not, will Session.Close() automatically dispose the associated transaction?

    Read the article

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload.

        # vmstat 1
         kthr      memory            page            disk          faults      cpu
         r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id
         1  0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21  9
         12 0 0 130160160 116381936 0 25 0 0 0 0  0  0  0  0  0 200413 117162 197250 70 20  9
         11 0 0 130160176 116381920 0 16 0 0 0 0  0  0  1  0  0 203150 119365 200249 72 21  7
         8  0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826  96144 165194 56 17 27
         0  0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1  10245   9376   9164  2  1 97
         0  0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2  15742  12401  14784  4  1 95
         0  0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0  19972  17703  19612  6  2 92
         14 0 0 130160176 116377696 0 16 0 0 0 0  0  0  0  0  0 202794 116793 199807 71 21  8
         9  0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11

    This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:

    - Unexpected JMeter behavior
    - Network contention
    - Java Garbage Collection
    - Application Server thread pool problems
    - Connection pool problems
    - Database transaction processing
    - Database I/O contention

    Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 2053.6    0.0  8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
            0.0 2162.2    0.0  8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
            0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
            0.0   74.0    0.0  7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
            0.0  568.7    0.0  6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
            0.0 1358.0    0.0  5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
            0.0 1314.3    0.0  5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0

    Here is a little more information about my database configuration:

    - The database and application server were running on two different SPARC servers.
    - Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
    - Data storage and log file were on different physical disk drives.
    - Reliable low latency I/O is provided by battery backed NVRAM.
    - Highly available: two Fibre Channel links accessed via MPxIO; two mirrored cache controllers.
    - The log file physical disks were mirrored in the storage device.
    - Database log files on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.

    Why would I be getting service time spikes in my high-end storage?
    First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:

        # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0

    The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:

        # zpool status ZFS-db-41
          pool: ZFS-db-41
         state: ONLINE
          scan: none requested
        config:
                NAME                   STATE
                ZFS-db-41              ONLINE
                  c3t60080E5...F4F6d0  ONLINE
                logs
                  c3t60080E5...F8A7d0  ONLINE

    Now, the I/O spikes look like this:

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1053.5    0.0  4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1131.8    0.0  4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1167.6    0.0  4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
            0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1247.2    0.0  4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
            0.0   41.0    0.0    70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1241.3    0.0  4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1193.2    0.0  4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0

    We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.

    References:
    - The ZFS Intent Log
    - Solaris ZFS, Synchronous Writes and the ZIL Explained
    - ZFS Evil Tuning Guide: Cache Flushes
    - ZFS Evil Tuning Guide: Tuning ZFS for Database Performance

    Read the article
