Search Results

Search found 19499 results on 780 pages for 'transaction log'.


  • Execute Statements in a Transaction - Sql Server 2005

    - by Shrewd Demon
    Hi, I need to update a database where some of the tables have changed (columns have been added). I want to perform this action in a proper transaction: if the code executes without any problem I will commit the changes, otherwise I will roll the database back to its original state. I want to do something like this: BEGIN TRANSACTION ...execute some SQL statements here... COMMIT TRANSACTION (when everything goes well) or ROLLBACK TRANSACTION (when something goes wrong). Please tell me the best way to do this. I know there is a @@TRANCOUNT variable but I don't know its exact purpose. Thanks.
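
    A minimal sketch of the TRY/CATCH pattern SQL Server 2005 supports for this; the table and column names below are placeholders, and @@TRANCOUNT (the number of open transactions on the connection) is checked so ROLLBACK only runs if a transaction is actually still open:

      BEGIN TRY
          BEGIN TRANSACTION;
          -- placeholder schema and data changes
          ALTER TABLE dbo.SomeTable ADD NewColumn INT NULL;
          UPDATE dbo.SomeTable SET SomeFlag = 1;
          COMMIT TRANSACTION;
      END TRY
      BEGIN CATCH
          -- only roll back if a transaction is still open on this connection
          IF @@TRANCOUNT > 0
              ROLLBACK TRANSACTION;
          DECLARE @msg NVARCHAR(2048);
          SET @msg = ERROR_MESSAGE();
          RAISERROR(@msg, 16, 1);  -- surface the original error to the caller
      END CATCH;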

    Read the article

  • Strategy for clients to retrieve real-time log from HTTP server

    - by Jerry Dodge
    I have an HTTP Server Service application which has its own logging mechanism. It's written in Delphi. I would like to provide a way for multiple clients to connect to this service and get a real-time update of the log. The log in the service moves rather fast; there are a lot of things to log. There may be up to 50 messages within 1 second at times. The existing log which is already implemented is not saved; it's only kept in the memory of the server service - where I will need to distribute it to any client which needs it. Once all clients have a log message, it should be deleted. I intend to use HTTP to "ask" the server for the log, and respond with an XML packet. The connections are not keep-alive. The only problem is, the server should only send the client those log records which it needs, not everything. I have no way of the server pushing the log to the clients in real-time, so each client needs to repeatedly ask the server for the latest log records. This HTTP server is very lightweight, and there is no session management. There isn't even any type of authentication. The only way I see is for a client to register itself on the server, and whenever a log is issued on the server, it creates a copy of the log for each client, where each client has a log queue (string list). However, suppose there are 100 clients connected and expecting to receive this log. That means the server must create 100 copies of each log, add this log to the end of each client log queue, and wait for the client to request it. At that point, when the server replies with the XML log, it should flush (delete) whatever's in the queue. I'm worried however that this could cause memory issues. Each client log queue might get 100 log messages before the client requests the latest logs. How should I go about doing this in the fastest way possible without hindering the performance of the server? I'm trying to avoid having to create a copy of each log for each client.
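
    One way to avoid the per-client copies, sketched here in Python for brevity rather than in the service's Delphi (all names are illustrative): keep a single sequence-numbered buffer, have each registered client poll from the last sequence number it was delivered, and prune entries once every client has passed them:

      import threading

      class SharedLogBuffer:
          """One copy of each log record; clients track their own position."""
          def __init__(self):
              self.lock = threading.Lock()
              self.records = []          # list of (seq, message)
              self.next_seq = 1
              self.client_pos = {}       # client_id -> last seq delivered

          def register(self, client_id):
              with self.lock:
                  self.client_pos[client_id] = self.next_seq - 1

          def append(self, message):
              with self.lock:
                  self.records.append((self.next_seq, message))
                  self.next_seq += 1

          def fetch(self, client_id):
              """Return records the client has not seen, then prune old ones."""
              with self.lock:
                  last = self.client_pos[client_id]
                  new = [(s, m) for s, m in self.records if s > last]
                  if new:
                      self.client_pos[client_id] = new[-1][0]
                  # drop records every registered client has already fetched
                  low_water = min(self.client_pos.values())
                  self.records = [(s, m) for s, m in self.records if s > low_water]
                  return new

    A slow client still holds up pruning under this scheme, so a real implementation would also evict clients that have not polled within some timeout.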

    Read the article

  • Data, Log and Temp file placement

    - by jchang
    First, especially for all the people with SAN storage: drive letters are of no consequence. What matters is the actual physical disk layout. Forget capacity; pay attention to the number of spindles supporting each RAID group. If the RAID group is shared with another application, make sure the SLA guarantees read and write latency. One very large company conducted a stress test in the QA environment. The SAN admin carved the LUNs from the same pool of disks as production, but thought he had a really...(read more)

    Read the article

  • Which logfile(s) log(s) errors reading a CD?

    - by Robert Vila
    I insert a CD-RW, maybe blank (I really don't know), and after a little movement inside the reader, the CD is ejected without any message at all. I would like to know what is going on and the reason why it is ejected. How can I find that out in Natty? The CD reader is working OK because I can read other CDs. It only gave me problems writing from Natty, a few days ago, but with Mac OS there was no problem. Thank you. Edit: Maybe there is no error, but then how can I find out what is on the CD, if there is anything?
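
    A few hedged places to look (the device name sr0 is an assumption; adjust to whatever dmesg reports for your drive):

      # kernel messages about the optical drive and any read errors
      dmesg | grep -i -E 'sr0|cdrom'
      # the general system log usually captures udev/CD events on Natty
      grep -i -E 'sr0|cdrom' /var/log/syslog
      # ask the disc directly what is on it, even when nothing mounts
      sudo blkid /dev/sr0

    If blkid prints nothing at all, the disc is most likely blank or unreadable.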

    Read the article

  • Setting a time limit for a transaction in MySQL/InnoDB

    - by Trevor Burnham
    This sprang from this related question, where I wanted to know how to force two transactions to occur sequentially in a trivial case (where both are operating on only a single row). I got an answer—use SELECT ... FOR UPDATE as the first line of both transactions—but this leads to a problem: If the first transaction is never committed or rolled back, then the second transaction will be blocked indefinitely. The innodb_lock_wait_timeout variable sets the number of seconds after which the client trying to make the second transaction would be told "Sorry, try again"... but as far as I can tell, they'd be trying again until the next server reboot. So: Surely there must be a way to force a ROLLBACK if a transaction is taking forever? Must I resort to using a daemon to kill such transactions, and if so, what would such a daemon look like? If a connection is killed by wait_timeout or interactive_timeout mid-transaction, is the transaction rolled back? Is there a way to test this from the console? Clarification: innodb_lock_wait_timeout sets the number of seconds that a transaction will wait for a lock to be released before giving up; what I want is a way of forcing a lock to be released. Update: Here's a simple example that demonstrates why innodb_lock_wait_timeout is not sufficient to ensure that the second transaction is not blocked by the first: START TRANSACTION; SELECT SLEEP(55); COMMIT; With the default setting of innodb_lock_wait_timeout = 50, this transaction completes without errors after 55 seconds. And if you add an UPDATE before the SLEEP line, then initiate a second transaction from another client that tries to SELECT ... FOR UPDATE the same row, it's the second transaction that times out, not the one that fell asleep. What I'm looking for is a way to force an end to this transaction's restful slumber.
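
    On the daemon question, a hedged sketch: since the InnoDB plugin (MySQL 5.1), the information_schema.INNODB_TRX view exposes open transactions and their start times, and killing a connection makes InnoDB roll back whatever transaction it holds, so a watchdog can poll for long runners. The 60-second threshold is an arbitrary example:

      -- list transactions that have been open longer than 60 seconds
      SELECT trx_mysql_thread_id, trx_started, trx_query
      FROM information_schema.INNODB_TRX
      WHERE trx_started < NOW() - INTERVAL 60 SECOND;

      -- then, for each thread id returned, terminate its connection,
      -- which rolls back its open transaction:
      -- KILL 12345;   (example id)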

    Read the article

  • Transaction log shipping SQL Server 2005 to 2008

    - by Andrew Jahn
    I have a reporting setup with SSRS on our SQL Server 2005 database. Because SQL Server 2008 is not supported by the main program which populates our database, we are stuck with 2005 on our prod database. Unfortunately, when I run our weekly check reports the web interface constantly times out because the server can't do the conversion to PDF. I've read that SQL Server 2008's SSRS is a lot better with memory management. I was wondering if I can do some kind of transaction log shipping or subscription/publication from 2005 to 2008? Am I chasing a dream here? Currently I have to open up the SSRS project in Visual Studio and run the reports inside, because it never times out when doing the PDF conversion; it only times out if I try to run it through the SSRS web interface.

    Read the article

  • Logrotate Successful, original file goes back to original size

    - by drewrockshard
    Has anyone had any issues with logrotate before that cause a log file to get rotated and then go back to the same size it originally was? Here are my findings: Logrotate Script: /var/log/mylogfile.log { rotate 7 daily compress olddir /log_archives missingok notifempty copytruncate } Verbose Output of Logrotate: copying /var/log/mylogfile.log to /log_archives/mylogfile.log.1 truncating /var/log/mylogfile.log compressing log with: /bin/gzip removing old log /log_archives/mylogfile.log.8.gz Log file after truncate happens: [root@server ~]# ls -lh /var/log/mylogfile.log -rw-rw-r-- 1 part1 part1 0 Jan 11 17:32 /var/log/mylogfile.log Literally seconds later: [root@server ~]# ls -lh /var/log/mylogfile.log -rw-rw-r-- 1 part1 part1 3.5G Jan 11 17:32 /var/log/mylogfile.log RHEL Version: [root@server ~]# cat /etc/redhat-release Red Hat Enterprise Linux ES release 4 (Nahant Update 4) Logrotate Version: [root@DAA21529WWW370 ~]# rpm -qa | grep logrotate logrotate-3.7.1-10.RHEL4 A few notes: The service can't be restarted on the fly, so that's why I'm using copytruncate. Logs are rotating every night, judging by the olddir directory having log files in it from each night.
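
    One cause worth ruling out (an inference, not from the original post): if the daemon writes through a plain file offset rather than opening the log with O_APPEND, its first write after copytruncate lands at the old 3.5G offset, and the kernel fills everything before it with a sparse hole, so the apparent length snaps back instantly while almost no disk is used. Comparing apparent size with allocated blocks shows whether that happened:

      # first column: blocks actually allocated; a sparse file shows far
      # fewer blocks than its 3.5G apparent length would suggest
      ls -lhs /var/log/mylogfile.log
      du -h /var/log/mylogfile.log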

    Read the article

  • Transaction within a Transaction in C#

    - by Rosco
    I'm importing a flat file of invoices into a database using C#. I'm using TransactionScope to roll back the entire operation if a problem is encountered. It is a tricky input file, in that one row does not necessarily equal one record. It also includes linked records. An invoice would have a header line, line items, and then a total line. Some of the invoices will need to be skipped, but I may not know it needs to be skipped until I reach the total line. One strategy is to store the header, line items, and total line in memory, and save everything once the total line is reached. I'm pursuing that now. However, I was wondering if it could be done a different way: creating a "nested" transaction around the invoice, inserting the header row and line items, then updating the invoice when the total line is reached. This "nested" transaction would roll back if it is determined the invoice needs to be skipped, but the overall transaction would continue. Is this possible and practical, and how would you set this up?
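
    True nested transactions don't exist in SQL Server, but savepoints on a single SqlTransaction give exactly the skip-one-invoice behavior described; a hedged sketch (the helper methods, exception type, and connection string are placeholders, not the original code):

      using System.Data.SqlClient;

      using (var conn = new SqlConnection(connectionString))
      {
          conn.Open();
          using (SqlTransaction tx = conn.BeginTransaction())
          {
              foreach (Invoice invoice in invoices)
              {
                  tx.Save("InvoiceStart");           // savepoint, not a real nested transaction
                  try
                  {
                      InsertHeader(conn, tx, invoice);
                      InsertLineItems(conn, tx, invoice);
                      UpdateInvoiceTotal(conn, tx, invoice);
                  }
                  catch (SkipInvoiceException)
                  {
                      tx.Rollback("InvoiceStart");   // undo this invoice only
                  }
              }
              tx.Commit();                           // everything kept commits together
          }
      }

    Rolling back to a savepoint keeps the outer transaction (and its locks) alive, which is the behavior the import needs.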

    Read the article

  • InnoDB: Error: log file ./ib_logfile0 is of different size

    - by jack
    I just added the following lines in /etc/mysql/my.cnf after I converted one database to use the InnoDB engine: innodb_buffer_pool_size = 2560M innodb_log_file_size = 256M innodb_log_buffer_size = 8M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 16 innodb_flush_method = O_DIRECT But it raised an "ERROR 2013 (HY000) at line 2: Lost connection to MySQL server during query" error when restarting mysqld, and the mysql error log shows the following: InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes InnoDB: than specified in the .cnf file 0 268435456 bytes! 100118 20:52:52 [ERROR] Plugin 'InnoDB' init function returned error. 100118 20:52:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 100118 20:52:52 [ERROR] Unknown/unsupported table type: InnoDB 100118 20:52:52 [ERROR] Aborting So I commented out this line: # innodb_log_file_size = 256M and mysql then restarted successfully. I wonder what the "5242880 bytes" log file shown in the mysql error is. It's the first database on the InnoDB engine on this server, so when and where was that log file created? In this case, how can I enable the innodb_log_file_size directive in my.cnf? EDIT: I tried to delete /var/lib/mysql/ib_logfile0 and restart mysqld but it still failed. It now shows the following in the error log: 100118 21:27:11 InnoDB: Log file ./ib_logfile0 did not exist: new to be created InnoDB: Setting log file ./ib_logfile0 size to 256 MB InnoDB: Database physically writes the file full: wait... InnoDB: Progress in MB: 100 200 InnoDB: Error: log file ./ib_logfile1 is of different size 0 5242880 bytes InnoDB: than specified in the .cnf file 0 268435456 bytes! Resolution: It works now after deleting both ib_logfile0 and ib_logfile1 in /var/lib/mysql.
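
    For background (general InnoDB behavior, not from the original thread): 5242880 bytes is the 5 MB default of innodb_log_file_size, so ib_logfile0 and ib_logfile1 were created at that size the first time InnoDB ever started on this server, and InnoDB aborts whenever my.cnf disagrees with the files already on disk; that is also why deleting only one of the two files was not enough. A hedged sketch of the usual resize procedure, assuming the default datadir and a Debian/Ubuntu-style service name:

      # stop mysqld cleanly so no redo records remain unflushed
      service mysql stop
      # move (rather than delete) both logs until the new ones prove healthy
      mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/
      # on startup InnoDB recreates both files at innodb_log_file_size
      service mysql start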

    Read the article

  • --log-slave-updates is OFF but some updates are still logged to the slave binary log?

    - by quanta
    MySQL version 5.5.14. According to the documentation, by default, a slave does not log to its binary log any updates that are received from a master server. Here is my config on the slave: # egrep 'bin|slave' /etc/my.cnf relay-log=mysqld-relay-bin log-bin = /var/log/mysql/mysql-bin binlog-format=MIXED sync_binlog = 1 log-bin-trust-function-creators = 1 mysql> show global variables like 'log_slave%'; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | log_slave_updates | OFF | +-------------------+-------+ 1 row in set (0.01 sec) mysql> select @@log_slave_updates; +---------------------+ | @@log_slave_updates | +---------------------+ | 0 | +---------------------+ 1 row in set (0.00 sec) but the slave still logs some changes to its binary logs; let's see the file sizes: -rw-rw---- 1 mysql mysql 37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256 -rw-rw---- 1 mysql mysql 25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257 -rw-rw---- 1 mysql mysql 46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258 -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259 -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260 and a sample query when reading these binary logs with the mysqlbinlog utility: #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0 SET TIMESTAMP=1333541337/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci')) /*!*/; # at 110324763 Did I miss something? Reply to @RolandoMySQLDBA: If replication brought this over, then the same query has to be in the relay logs. Please go find the relay log that has the INSERT query with the same TIMESTAMP (1333541337). There is no such query with the same TIMESTAMP in the relay logs. If you cannot find it in the relay logs, then look and see if Infobright is posting the INSERT query. In that instance, the INSERT should be recorded in the binary logs of the Slave. Looking more deeply into the binary logs, I see that almost all of the queries are CREATE/INSERT/UPDATE/DROP "temporary" tables, something like this: # at 123873315 #120405 0:42:04 server id 3 end_log_pos 123873618 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; SET @@session.pseudo_thread_id=395373/*!*/; CREATE TEMPORARY TABLE `norep_tmpcampaign` ( `campaignid` INTEGER(11) NOT NULL DEFAULT '0' , `status` INTEGER(11) NOT NULL DEFAULT '0' , `updated` DATETIME, KEY `campaignid` (`campaignid`) )ENGINE=MEMORY /*!*/; # at 123873618 #120405 0:42:04 server id 3 end_log_pos 123873755 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; DROP TABLE IF EXISTS `norep_tmpcampaign1` /* generated by server */ "Temporary" here means that they are dropped after the calculation is done.
    I also tell the slave not to replicate any statement matching the norep_ wildcard pattern: replicate-wild-ignore-table=%.norep_% But there is one exception table in the binary logs: # at 123828094 #120405 0:37:21 server id 3 end_log_pos 123828495 Query thread_id=395209 exec_time=0 error_code=0 SET TIMESTAMP=1333561041/*!*/; INSERT INTO sessions (SessionId, ApplicationName, Created, Expires, LockDate, LockId, Timeout, Locked, SessionItems, Flags) Values('pgv2exo4y4vo4ccz44vwznu0', '/', '2012-04-05 00:37:21', '2012-04-05 00:57:21', '2012-04-05 00:37:21', 0, 20, 0, 'AwAAAP////8IdXNlcm5hbWUGdXNlcmlkCHBlcm1pdGlkAgAAAAQAAAAGAAAAAQABAAEA', 0) /*!*/; Description: mysql> desc reportingdb.sessions; +-----------------+------------------+------+-----+---------------------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------------+------------------+------+-----+---------------------+-------+ | SessionId | varchar(64) | NO | PRI | | | | ApplicationName | varchar(255) | NO | | | | | Created | timestamp | NO | | 0000-00-00 00:00:00 | | | Expires | timestamp | NO | | 0000-00-00 00:00:00 | | | LockDate | timestamp | NO | | 0000-00-00 00:00:00 | | | LockId | int(11) unsigned | NO | | NULL | | | Timeout | int(11) unsigned | NO | | NULL | | | Locked | bit(1) | NO | | NULL | | | SessionItems | varchar(255) | YES | | NULL | | | Flags | int(11) | NO | | NULL | | +-----------------+------------------+------+-----+---------------------+-------+ I'm sure all these queries are posted by MySQL, not Infobright: $ mysql-ib -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 48971 Server version: 5.1.40 build number (revision)=IB_4.0.5_r15240_15370(ice) (static) Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> select * from information_schema.tables where table_name='sessions'; Empty set (0.02 sec) I've been trying some INSERT/UPDATE queries with testing tables on the master; they are copied to the relay logs, not the binary logs, on the slave: # at 311664029 #120405 0:15:23 server id 1 end_log_pos 311664006 Query thread_id=10458250 exec_time=0 error_code=0 use testuser/*!*/; SET TIMESTAMP=1333559723/*!*/; update users set email2='[email protected]' where id=22 /*!*/; Pay attention to the server id: in the relay logs the server id is the master's (1), and in the binary logs it is the slave's (3 in this case). Reply to @RolandoMySQLDBA: Thu Apr 5 10:06:00 ICT 2012 Run CREATE DATABASE quantatest; on the Master now, please. Tell me if CREATE DATABASE quantatest; showed up in the Slave's Binary Logs. As I said above, I've been trying some INSERT/UPDATE queries with testing tables on the master, and they were copied to the relay logs, not the binary logs; as you can guess, the IO thread copied them to the relay logs, not the binary logs. #120405 10:07:25 server id 1 end_log_pos 347573819 Query thread_id=10480775 exec_time=0 error_code=0 SET TIMESTAMP=1333595245/*!*/; /*!\C latin1 *//*!*/; SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/; create database quantatest /*!*/; The question should probably change to: why are some update queries still logged to the slave's binary logs although --log-slave-updates is disabled? Where do they come from?
    Here are a few of the most recent entries: /*!*/; # at 27492197 #120405 10:12:45 server id 3 end_log_pos 27492370 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; CREATE TEMPORARY TABLE norep_SplitValues ( value VARCHAR(1000) NOT NULL ) ENGINE=MEMORY /*!*/; # at 27492370 #120405 10:12:45 server id 3 end_log_pos 27492445 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; BEGIN /*!*/; # at 27492445 #120405 10:12:45 server id 3 end_log_pos 27492619 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'119577' COLLATE 'utf8_general_ci')) /*!*/; # at 27492619 #120405 10:12:45 server id 3 end_log_pos 27492695 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; COMMIT /*!*/; # at 27492918 #120405 10:12:46 server id 3 end_log_pos 27493115 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; SELECT `reportingdb`.`selfserving_get_locationad`(_utf8'3' COLLATE 'utf8_general_ci',_utf8'' COLLATE 'utf8_general_ci') /*!*/; # at 27493199 #120405 10:12:46 server id 3 end_log_pos 27493353 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; /*!\C utf8 *//*!*/; SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/; DROP TEMPORARY TABLE IF EXISTS `norep_SplitValues` /* generated by server */ /*!*/;

    Read the article

  • What is the reason for "Transaction context in use by another session"

    - by Shrike
    Hi, I'm looking for a description of the root of this error: "Transaction context in use by another session". I get it sometimes in one of my unit tests, so I can't provide repro code. But I wonder what the "by design" reason for the error is. I've found this post: http://blogs.msdn.com/asiatech/archive/2009/08/10/system-transaction-may-fail-in-multiple-thread-environment.aspx and also this: http://msdn.microsoft.com/en-us/library/ff649002.aspx But I can't understand what "Multiple threads sharing the same transaction in a transaction scope will cause the following exception: 'Transaction context in use by another session.' " means. All the words are understandable, but not the point. I actually can share a system transaction between threads. And there is even a special mechanism for this - the DependentTransaction class and the Transaction.DependentClone method. I'm trying to reproduce a use case from the first post: 1. Main thread creates a DTC transaction and receives a DependentTransaction (created using Transaction.Current.DependentClone on the main thread) 2. Child thread 1 enlists in this DTC transaction by creating a transaction scope based on the dependent transaction (passed via constructor) 3. Child thread 1 opens a connection 4. Child thread 2 enlists in the DTC transaction by creating a transaction scope based on the dependent transaction (passed via constructor) 5. Child thread 2 opens a connection with code like this: using System; using System.Threading; using System.Transactions; using System.Data; using System.Data.SqlClient; public class Program { private static string ConnectionString = "Initial Catalog=DB;Data Source=.;User ID=user;PWD=pwd;"; public static void Main() { int MAX = 100; for(int i =0; i< MAX;i++) { using(var ctx = new TransactionScope()) { var tx = Transaction.Current; // make the transaction distributed using (SqlConnection con1 = new SqlConnection(ConnectionString)) using (SqlConnection con2 = new SqlConnection(ConnectionString)) { con1.Open(); con2.Open(); } showSysTranStatus(); DependentTransaction dtx = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete); Thread t1 = new Thread(o => workCallback(dtx)); Thread t2 = new Thread(o => workCallback(dtx)); t1.Start(); t2.Start(); t1.Join(); t2.Join(); ctx.Complete(); } trace("root transaction completes"); } } private static void workCallback(DependentTransaction dtx) { using(var txScope1 = new TransactionScope(dtx)) { using (SqlConnection con2 = new SqlConnection(ConnectionString)) { con2.Open(); trace("connection opened"); showDbTranStatus(con2); } txScope1.Complete(); } trace("dependant tran completes"); } private static void trace(string msg) { Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " : " + msg); } private static void showSysTranStatus() { string msg; if (Transaction.Current != null) msg = Transaction.Current.TransactionInformation.DistributedIdentifier.ToString(); else msg = "no sys tran"; trace( msg ); } private static void showDbTranStatus(SqlConnection con) { var cmd = con.CreateCommand(); cmd.CommandText = "SELECT 1"; var c = cmd.ExecuteScalar(); trace("@@TRANCOUNT = " + c); } } It fails on the root TransactionScope's Complete call, but the error is different: Unhandled Exception: System.Transactions.TransactionInDoubtException: The transaction is in doubt. ---> ... Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. To sum up: I want to understand what "Transaction context in use by another session" means and how to reproduce it.

    Read the article

  • EJB3 - using 2 persistence units within a transaction (Exception: Local transaction already has 1 non-XA Resource)

    - by Sorcha
    I am trying to use 2 persistence units within the same transaction in a JEE application deployed on Glassfish. The 2 persistence units are defined in persistence.xml, as follows: <persistence-unit name="BeachWater"> <jta-data-source>jdbc/BeachWater</jta-data-source> ... <persistence-unit name="LIMS"> <jta-data-source>jdbc/BeachWaterLIMS</jta-data-source> ... These persistence units correspond to JDBC resources and connection pools which I had defined in Glassfish as follows (include one here as both are identical apart from names & database connection info): JDBC Resource: JNDI Name: jdbc/BeachWaterLIMS Pool Name: BEACHWATER_LIMS Connection Pool: Name: BEACHWATER_LIMS Datasource Classname: com.microsoft.sqlserver.jdbc.SQLServerConnectionPoolDataSource Resource Type: javax.sql.ConnectionPoolDataSource There are 3 stateless session beans, LimsServiceBean, AnalysisServiceBean and AnalysisDataTransformationServiceBean. Here are the relevant snippets from LimsServiceBean: @PersistenceContext(unitName = "LIMS") EntityManager em; ... public ArrayList<Sample> getLatestLIMSData() { Query q = em.createNamedQuery("Sample.findBySubTypeStatus"); return new ArrayList<Sample>(q.getResultList()); } From AnalysisServiceBean: @PersistenceContext(unitName = "BeachWater") EntityManager em; ... public ArrayList<AnalysisType> getAllAnalysisTypes() { Query q = em.createNamedQuery("AnalysisType.findAll"); return new ArrayList<AnalysisType>(q.getResultList()); } And from AnalysisDataTransformationServiceBean: @EJB private AnalysisService analysisService; @EJB private LimsService limsService; public void transformData() { List<AnalysisType> analysisTypes = analysisService.getAllAnalysisTypes(); ArrayList<Sample> samples = limsService.getLatestLIMSData(); This call to limsService.getLatestLIMSData() caused the following exception: [exec] Caused by: javax.ejb.TransactionRolledbackLocalException: Exception thrown from bean; nested exception is: Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b60e-fcs (12/23/2008))): oracle.toplink.essentials.exceptions.DatabaseException [exec] Internal Exception: java.sql.SQLException: Error in allocating a connection. Cause: java.lang.IllegalStateException: Local transaction already has 1 non-XA Resource: cannot add more resources. Having consulted this page, http://msdn.microsoft.com/en-us/library/ms378484.aspx (among many others), I tried changing the definition of the connection pools to: Connection Pool: Name: BEACHWATER_LIMS Datasource Classname: com.microsoft.sqlserver.jdbc.SQLServerXADataSource Resource Type: javax.sql.XADataSource Ping via the Glassfish admin console succeeds, but call to analysisService.getAllAnalysisTypes() now throws an exception: Caused by: javax.ejb.TransactionRolledbackLocalException: Exception thrown from bean; nested exception is: Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.1 (Build b60e-fcs (12/23/2008))): oracle.toplink.essentials.exceptions.DatabaseException Internal Exception: java.sql.SQLException: Error in allocating a connection. Cause: javax.transaction.SystemException Any ideas?

    Read the article

  • SQLite Transaction fills a table BEFORE the transaction is committed

    - by user1500403
    Hello, I have code that creates a DataTable (in memory) from a SELECT SQL statement. However, I realised that this DataTable is being filled during the procedure rather than as a result of the transaction commit statement; it does the job, but it's slow. What am I doing wrong? Inalready.Clear() 'clears a dictionary Using connection As New SQLite.SQLiteConnection(conectionString) connection.Open() Dim sqliteTran As SQLite.SQLiteTransaction = connection.BeginTransaction() Try oMainQueryR = "SELECT * FROM detailstable Where name= :name AND Breed= :Breed" Dim cmdSQLite As SQLite.SQLiteCommand = connection.CreateCommand() Dim oAdapter As New SQLite.SQLiteDataAdapter(cmdSQLite) With cmdSQLite .CommandType = CommandType.Text .CommandText = oMainQueryR .Parameters.Add(":name", SqlDbType.VarChar) .Parameters.Add(":Breed", SqlDbType.VarChar) End With Dim c As Long = 0 For Each row As DataRow In list.Rows 'this is the list with 500 names If Inalready.ContainsKey(row.Item("name")) Then Else c = c + 1 Form1.TextBox1.Text = " Fill .... " & c Application.DoEvents() Inalready.Add(row.Item("name"), row.Item("Breed")) cmdSQLite.Parameters(":name").Value = row.Item("name") cmdSQLite.Parameters(":Breed").Value = row.Item("Breed") oAdapter.Fill(newdetailstable) End If Next oAdapter.FillSchema(newdetailstable, SchemaType.Source) Dim z = newdetailstable.Rows.Count 'At this point the newdetailstable is already filled up and I haven't even committed the transaction ' sqliteTran.Commit() Catch ex As Exception End Try End Using

    Read the article

  • Unable to log into samba server

    - by Paddington
    I am unable to log into a samba server (running on Fedora Core 6) as it prompts for a username and password when I try to connect to the mapped drives from my Windows 7 machine. I decided to reset the password using the command smbpasswd paddy, and when I list users with pdbedit -L -v I see that the password was updated at the time I made this change. However, I am still unable to log in. The log file in /var/log/samba/log.paddy shows: [2012/10/11 09:55:54.605923, 1] smbd/service.c:678(make_connection_snum) create_connection_server_info failed: NT_STATUS_ACCESS_DENIED [2012/10/11 09:55:54.606635, 1] smbd/service.c:678(make_connection_snum) create_connection_server_info failed: NT_STATUS_ACCESS_DENIED How can I resolve this so that I can log in?
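
    A couple of hedged checks, run on the server itself (the user name is from the post; everything else is generic):

      # validate smb.conf and confirm the security/passdb settings in effect
      testparm -s 2>/dev/null | grep -i -E 'security|passdb'
      # try the same credentials locally, taking Windows out of the picture
      smbclient -L localhost -U paddy
      # watch the per-client log at the moment a connection fails
      tail -f /var/log/samba/log.paddy

    If smbclient from the server itself also gets NT_STATUS_ACCESS_DENIED, the problem is in the Samba/passdb configuration rather than anything on the Windows 7 side.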

    Read the article

  • How to view multiple log files as one file in unix/linux

    - by user42679
    Hi, I was wondering if there is a convenient way in linux/unix to read multiple log files as one. More specifically, I would like to view a sequence of log files (app.log, app.log.1 app.log.2, etc) as one big file using normal unix tools (vi, less, etc). When the EOF is read the tool will automatically move to the beginning of the next file. During my work I have to analyze uat/prod logs to investigate and solve problems. The fact that I need to traverse many log files disturbs my work and causes delays. Any ideas?
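
    A hedged sketch with standard tools, assuming numeric rotation suffixes where a higher number means an older file (the names are examples):

      # oldest first: app.log.2, app.log.1, then the live file
      cat $(ls -1 app.log.* | sort -t. -k3 -rn) app.log | less

      # or keep the file boundaries and hop between them inside less
      less app.log.2 app.log.1 app.log     # :n next file, :p previous file

    For compressed rotations (app.log.2.gz and so on), zcat in place of cat works the same way.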

    Read the article

  • Transaction within transaction

    - by user281521
    Hello, I want to know if opening a transaction inside another one is safe and encouraged. I have a method: def foo(): session.begin try: stuffs except Exception, e: session.rollback() raise e session.commit() and a method that calls the first one, inside a transaction: def bar(): stuffs try: foo() #<<<< there it is :) stuffs except Exception, e: session.rollback() raise e session.commit() If I get an exception in the foo method, will all the operations be rolled back? And will everything else work just fine? Thanks!!
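
    A hedged sketch, assuming the session here is SQLAlchemy's (the post's API suggests it) and a modern SQLAlchemy release: begin_nested() issues a SAVEPOINT, so the inner unit can roll back on its own while the caller's transaction survives; the do_* helpers are placeholders:

      from sqlalchemy import create_engine
      from sqlalchemy.orm import Session

      engine = create_engine("sqlite://")      # example engine

      def foo(session):
          # SAVEPOINT: an exception here undoes only foo's work
          with session.begin_nested():
              do_inner_work(session)           # placeholder

      def bar():
          with Session(engine) as session:
              with session.begin():            # the outer transaction
                  foo(session)
                  do_outer_work(session)       # placeholder
              # commits here if no exception escaped; otherwise rolls back

    Without a savepoint, calling rollback() inside foo aborts the whole outer transaction, so bar's earlier work is lost too.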

    Read the article

  • How can I remove old log entries from a log file and archive them somewhere else in Linux?

    - by Mike B
    CentOS 4.x. I apologize in advance if this is not the appropriate place to ask this question. It pertains to a Linux server / IT admin task. I've got a log file on an old CentOS 4.x server and I want to remove log entries older than a certain date and place them in a new file for archiving. Here's an example of the log format: 2012-06-07 22:32:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah 2012-06-07 22:32:03,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah 2012-06-07 22:32:04,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123| 2012-06-07 22:32:10,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah 2012-06-07 22:32:12,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah 2012-06-07 22:32:15,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123| 2012-06-07 22:32:40,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah 2012-06-07 22:32:58,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah 2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123| 2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah 2012-06-07 22:33:02,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123| Essentially, I'm looking for a one-liner that will do the following: Find any events older than a provided YYYY-MM-DD and remove them from the primary log file. Take the deleted events from step 1 and put them in a new log file (Optional) Compress the new archive log file holding the deleted events. I'm aware that there are log rotate tools that do this, but this should just be a one-time task so I'd prefer not to set that up. Additional notes: If the date part is tricky or too resource intensive, an alternative would be to just keep the last X number of lines and move the rest. I was originally thinking of something like tail -n 10000 > newfile.txt but that would mean moving the "good" logs to a new file and then doing a name swap... and then I'd still need to remove the "good" entries from the archive. This particular log file is pretty large (1 GB) so I'd prefer the task to be as resource and time efficient as possible. The extra pipes in the log concern me and I'm not sure if I'd need extra protection in the commands to keep them from causing problems.
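
    A hedged one-liner sketch: because the timestamps are ISO-formatted (YYYY-MM-DD), plain string comparison on the first whitespace-delimited field is enough, and the pipes inside the message body are never examined, so they are harmless. The cutoff date and file names are examples, and the final rename is not atomic, so ideally nothing is appending while it runs:

      awk -v cutoff="2012-06-01" \
          '{ print > ($1 < cutoff ? "archive.log" : "keep.log") }' app.log \
        && gzip archive.log \
        && mv keep.log app.log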

    Read the article

  • SQLite transaction doesn't work as expected

    - by troll
    I prepared 2 files, "1.php" and "2.php". "1.php" is like this: <?php $dbh = new PDO('sqlite:test1'); $dbh->beginTransaction(); print "aaa<br>"; sleep(55); $dbh->commit(); print "bbb"; ?> and "2.php" is like this: <?php $dbh = new PDO('sqlite:test1'); $dbh->beginTransaction(); print "ccc<br>"; $dbh->commit(); print "ddd"; ?> and I execute "1.php". It starts a transaction and waits 55 seconds. So when I immediately execute "2.php", my expectation is this: "1.php" has begun a transaction and "1" holds a database lock; "2" cannot begin a transaction; "2" cannot get the database lock, so "2" has to wait 55 seconds. But the test went another way: when I executed "2", it immediately returned its result; "2" did not wait. So I have to think that "1" could not get the transaction, or could not get the database lock. Can anyone help?
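
    One explanation consistent with this behavior (an inference, not from the original thread): SQLite's default BEGIN is deferred, so no lock is taken until the first statement actually reads or writes the database, and an empty transaction blocks nothing. A sketch of "1.php" that forces the lock up front:

      <?php
      $dbh = new PDO('sqlite:test1');
      // BEGIN IMMEDIATE acquires a write (RESERVED) lock right away,
      // instead of deferring it until the first statement touches the db
      $dbh->exec('BEGIN IMMEDIATE');
      print "aaa<br>";
      sleep(55);
      $dbh->exec('COMMIT');
      print "bbb";
      ?>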

    Read the article

  • How does rolling back an application level transaction interact with SqlDataAdapter events in ADO.NET

    - by ilasno
    When utilizing the RowUpdated event in the SqlDataAdapter class, I'm assuming that it is raised directly following the return of the database interaction that executes the update. If that update is part of an application-level transaction (utilizing the SqlTransaction class) which is then rolled back, does this affect or interact at all with the RowUpdated event? Or is the RowUpdated event not raised until after the transaction is committed (this seems unlikely, but I couldn't find documentation)? If RowUpdated has already been raised, and then the transaction is rolled back, any good ideas on how to adjust something that may have been done in RowUpdated that should then also be rolled back?

    Read the article

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem, I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can. Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to dvd weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kinda pointless without a witness server. The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate a log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this: USE dbname GO CHECKPOINT GO BACKUP LOG dbname TO DISK='NULL' WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD GO DBCC SHRINKFILE('dbname_Log', 2048) GO But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work. My Question (TL;DR) How can I shrink my transaction log file without disabling the mirror?
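
    For background (general SQL Server behavior, not specific to this thread): mirroring requires the FULL recovery model, so the only thing that marks log space as reusable is a genuine log backup; BACKUP LOG ... TO DISK='NULL' does not discard the log, it writes a backup file literally named NULL. A hedged sketch of the usual approach, with example paths, that works while the mirror stays up:

      -- a real log backup frees the inactive portion of the log
      BACKUP LOG dbname TO DISK = N'K:\Backups\dbname_log.trn';
      -- then shrink the physical file (target size is in MB)
      DBCC SHRINKFILE (N'dbname_Log', 2048);

    Scheduling the log backup regularly is what keeps the file from growing again; the shrink is only needed once, after the backlog is cleared.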

    Read the article

  • logrotate deletes all maillogs older than one day

    - by shadyabhi
    I see only two files, maillog and maillog.1, in /var/log. Grepping for maillog in the logrotate.d directory gives three files that mention maillog: syslog /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron { #/var/log/messages /var/log/secure /var/log/spooler /var/log/boot.log /var/log/cron { daily sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true endscript } syslog-ng /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron /var/log/kern.log /var/log/kern { sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true endscript } and maillog: /var/log/maillog { daily compress # rotate 365 rotate 14 sharedscripts postrotate /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true endscript } I am new to logrotate, so maybe I am missing something obvious. What could be the issue? The setup was already done when I started managing the server, so I also don't know why there are 3 mentions of maillog in logrotate.
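
    One plausible culprit (an inference from the configs above, not a confirmed diagnosis): /var/log/maillog appears in more than one stanza, and logrotate treats a second definition of the same file as a duplicate, so the effective settings may not be the ones in the dedicated maillog stanza; a stanza with no rotate count falls back to the default of rotate 0, which deletes every rotated file. A debug run shows which definition actually wins, without rotating anything:

      # dry run: prints the parsed config and warns about duplicate
      # log entries; nothing is rotated or deleted
      logrotate -d /etc/logrotate.conf 2>&1 | grep -B2 -A6 maillog

    If a duplicate warning appears, removing /var/log/maillog from the syslog-ng stanza so that only the dedicated stanza (daily, compress, rotate 14) claims it would be the minimal fix.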

    Read the article

  • Move SQL Server transaction log to another disk

    - by Jim Lahman
    When restoring a database backup, by default, SQL Server places the database files in the master database file directory. In this example, that location is L:\MSSQL10.CHTL\MSSQL\DATA, as shown by issuing sp_helpfile. Hence, the restored files for the database CHTL_L2_DB are in the same directory. Per SQL Server best practices, the log file should be on its own disk drive so that the database and log file can operate in a sequential manner and perform optimally. The steps to move the log file are as follows: record the location of the database files and the transaction log files; note the future destination of the transaction log file; get exclusive access to the database; detach from the database; move the log file to the new location; attach to the database; verify the new location of the transaction log. Record the location of the database files - to view the current location of the database files, use the system stored procedure sp_helpfile: use chtl_l2_db go sp_helpfile go Note the future destination of the transaction log file - the future destination of the transaction log file will be K:\MSSQLLog. Get exclusive access to the database - to get exclusive access to the database, alter the database access to single_user. If users are still connected to the database, remove them by using the with rollback immediate option. Note: if you had a pane connected to the database when it is placed into single_user mode, then you will be presented with a reconnection dialog box. alter database chtl_l2_db set single_user with rollback immediate go Detach from the database - now detach from the database so that we can use Windows Explorer to move the transaction log file: use master go sp_detach_db 'chtl_l2_db' go After copying the transaction log file, re-attach to the database: use master go sp_attach_db 'chtl_l2_db', 'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB.MDF', 'K:\MSSQLLog\CHTL_L2_DB_4.LDF', 'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_1.NDF', 'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_2.NDF', 'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_3.NDF' GO

    Read the article
