Search Results

Search found 19871 results on 795 pages for 'commit log'.


  • What can cause SQL 2008 Transaction Log Shipping to stop functioning?

    - by Rick
    I read somewhere that running a backup or a Maintenance Plan can cause Log Shipping to stop functioning. Is this true? Once our Transaction Log Shipping is in place, what should we watch out for that could stop it? A Log Shipping test we were doing between two databases on the same SQL 2008 server appeared to stop working without any error. When we checked the history of the LSRestore_* job, it was always ignoring the new *.trn files. Any suggestions? Thanks.
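
    A common cause of this is an out-of-band transaction log backup - for example, a Maintenance Plan running BACKUP LOG - which breaks the log chain, after which the shipped *.trn files no longer apply and the LSRestore_* job skips them. As a hedged first check on the primary (a sketch only; the database name is illustrative), list the recent log backups and look for any written somewhere other than the log-shipping folder:

        -- Recent transaction log backups and the devices they were written to.
        -- A log backup outside the log-shipping share is the usual chain-breaker.
        SELECT bs.database_name, bs.backup_start_date, bmf.physical_device_name
        FROM msdb.dbo.backupset AS bs
        JOIN msdb.dbo.backupmediafamily AS bmf
          ON bs.media_set_id = bmf.media_set_id
        WHERE bs.database_name = N'YourShippedDb'  -- hypothetical name
          AND bs.type = 'L'                        -- 'L' = transaction log backup
        ORDER BY bs.backup_start_date DESC;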

    Read the article

  • Different Apache log file for each subdomain?

    - by consolibyte
    Is it possible to configure Apache to use a different log file for each subdomain, even if all of the subdomains are within the same VirtualHost container? So, I have only one VirtualHost: <VirtualHost *:80> ServerName example.com ServerAlias test1.example.com test2.example.com test3.example.com </VirtualHost> And I want to end up with the following log files: /var/log/httpd-access_test1.log /var/log/httpd-access_test2.log /var/log/httpd-access_test3.log I know I can probably do this with a custom log format and split-logs, but I was wondering if there's a way to just have Apache do it for me.
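
    For what it's worth, one approach that should avoid post-processing entirely, sketched here on the assumption that the Host header reliably identifies the subdomain: tag each request with SetEnvIf and give each CustomLog an env= condition.

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias test1.example.com test2.example.com test3.example.com

            # Set a per-subdomain environment variable from the Host header
            SetEnvIf Host "^test1\." sub_test1
            SetEnvIf Host "^test2\." sub_test2
            SetEnvIf Host "^test3\." sub_test3

            # Each CustomLog only records requests carrying its variable
            CustomLog /var/log/httpd-access_test1.log combined env=sub_test1
            CustomLog /var/log/httpd-access_test2.log combined env=sub_test2
            CustomLog /var/log/httpd-access_test3.log combined env=sub_test3
        </VirtualHost>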

    Read the article

  • Using rsyslog to create different log files for different processes

    - by user80203
    Scenario: I am running a cluster of machines. Each machine runs various Python programs with a unique (across the cluster) but dynamically set ID. Right now, they are all logging locally. So, one machine might have logs like process_5.log and process_6.log for processes with IDs 5 and 6, while another machine may have process_20.log and process_25.log. I wish to forward these logs to a logserver running rsyslogd. Python's logging facility has a nice syslog handler, so I understand how I could connect to the remote server. What I haven't figured out is how to use templating/DynFile to maintain log separation. For example, on the logserver I will want to see process_5.log, process_6.log, process_20.log and process_25.log, corresponding to the logs of the same name on the sending machines. Is there a way to pull this off with rsyslogd templating?
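
    A hedged sketch of the receiving side, assuming each Python process tags its syslog messages with its own ident such as "process_5" (for instance via the formatter attached to logging.handlers.SysLogHandler): rsyslog can then derive the output file name from the tag with a dynamic-file template.

        # /etc/rsyslog.conf on the logserver (legacy syntax; paths illustrative)
        $ModLoad imudp
        $UDPServerRun 514

        # Name the output file after the sending program's syslog tag
        $template DynProcFile,"/var/log/cluster/%programname%.log"
        if $programname startswith 'process_' then ?DynProcFile
        & ~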

    Read the article

  • How do I restore a SQL Server database from last night's full backup and the active transaction log file?

    - by Dylan Beattie
    I have been told that it's good practise to keep your SQL Server data files and log files on physically separate disks, because it'll allow you to recover your data to the point of failure if the data drive fails. So... let's say that mydata.mdf is on drive D:, and my mydata_log.ldf is on drive E:, and it's 16:45, and drive D: has just died horribly. So - I have last night's full backup (mydata.bak). I have hourly transaction-log backups that will bring the data back up to 16:00... but that means I'll lose 45 minutes worth of updates. I still have mydata_log.ldf on the E: drive, which should contain EVERY transaction that was committed right up to the point where the drive failed. How do I go about recreating the database and restoring data from the backup file and the live transaction log, so I don't lose any updates? Is this possible?
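
    For reference, this is the classic tail-log backup scenario, and it should work as long as the .ldf on E: is intact. A hedged sketch of the sequence (all paths and file names are illustrative):

        -- 1. Back up the tail of the log from the surviving .ldf;
        --    NO_TRUNCATE works even though the data file on D: is gone.
        BACKUP LOG mydata TO DISK = N'E:\backups\mydata_tail.trn' WITH NO_TRUNCATE;

        -- 2. Restore last night's full backup without recovering.
        RESTORE DATABASE mydata FROM DISK = N'E:\backups\mydata.bak' WITH NORECOVERY;

        -- 3. Restore each hourly log backup in sequence, still WITH NORECOVERY.
        RESTORE LOG mydata FROM DISK = N'E:\backups\mydata_log_1600.trn' WITH NORECOVERY;

        -- 4. Finish with the tail backup to roll forward to the point of failure.
        RESTORE LOG mydata FROM DISK = N'E:\backups\mydata_tail.trn' WITH RECOVERY;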

    Read the article

  • How to know or change the size of the Windows Event Log from a program under Windows XP? [closed]

    - by ahmd1
    I ran into a weird problem on a Windows XP system. My local service app logs its diagnostic messages to the Windows Event Log, but at some point those messages stopped being logged. I thought the issue was in my code, but then I discovered that other processes can't log messages either. So I was wondering: is there a limit on the Windows Event Log size? PS. I guess I need to write this specifically -- I need to know/change the size from a command line or an API.
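
    For what it's worth, on XP the per-log maximum size lives in the registry rather than behind a dedicated CLI (wevtutil only arrived with Vista). A hedged sketch for the Application log; the same keys exist for the System and Security logs, and the values are read by the Event Log service:

        REM Query and raise the Application log's maximum size.
        REM The value is in bytes and is rounded to a multiple of 64 KB;
        REM administrative rights are required.
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Application" /v MaxSize
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Application" /v MaxSize /t REG_DWORD /d 1048576 /f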

    Read the article

  • Umbraco Permissions Script - Secure Version

    - by Vizioz Limited
    Back in May I blogged about how to set permissions for Umbraco, using SetACL to apply the appropriate directory permissions based on the installation recommendations. Recently I have been working on a site for a client who wanted every security item locked down as tightly as possible, so I modified the script based on the Umbraco security best practices. I thought I'd share it with everyone; if I have missed anything, or if anyone has any suggestions on how to improve this, please let me know :) Please refer to my previous post regarding the SetACL command line application that you will need. I suggest you save the following into a batch file called umbPermSecure.bat:

        @echo off
        REM Script to set up the security permissions for an Umbraco site.
        REM This script will give your machine's NETWORK SERVICE the minimum
        REM rights required for Umbraco to work.
        REM I suggest you update this script to also remove any users who do
        REM not need access to the web folders.
        REM **** Pre-requisites ****
        REM You will need to download SetACL - http://setacl.sourceforge.net/
        REM It is assumed that you have stored SetACL in a directory called
        REM C:\SetACL; if not, you will need to modify the script.
        REM **** Usage ****
        REM You need to pass in the path for the root of your Umbraco directory,
        REM e.g. umbPermSecure.bat C:\inetpub\umbracoroot

        @echo umbPermSecure.bat - Script to set Umbraco File and Directory Permissions
        @echo based on the Umbraco Security Best Practices Document (13th March 2009)
        @echo Published by Chris Houston - 19th October 2009
        @echo http://blog.vizioz.com

        @echo Adding READ only access
        SetACL.exe -on "%1" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\web.config" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\bin" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\umbraco" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"

        @echo Adding READ and EXECUTE access
        SetACL.exe -on "%1\app_code" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read_ex" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\usercontrols" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read_ex" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"

        @echo Adding READ, WRITE and MODIFY access
        SetACL.exe -on "%1\config" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\css" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\data" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\masterpages" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\media" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\python" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\scripts" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\xslt" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"

    Read the article

  • Commit is VERY slow in my NHibernate / SQLite project

    - by Tom Bushell
    I've just started doing some real-world performance testing on my Fluent NHibernate / SQLite project, and am experiencing some serious delays when I commit to the database. By serious, I mean taking 20-30 seconds to commit 30 K of data! The delay seems to get worse as the database grows. When the SQLite DB file is empty, commits happen almost instantly, but when it grows to 10 MB, I see these huge delays. The database has 16 tables, averaging 10 columns each. One possible problem is that I'm storing a dozen or so IList members, but they are typically only 200 elements long. This is a recent addition via Fluent NHibernate automapping, which stores each float in a single table row, so maybe that's a potential problem. Any suggestions on how to track this down? I suspect SQLite is the culprit, but maybe it's NHibernate? I don't have any experience with profilers, but am thinking of getting one. I'm aware of NHibernate Profiler - any recommendations for profilers that work well with SQLite? Here's the method that saves the data - it's just a SaveOrUpdate call and a Commit, if you ignore all the error handling and debug logging:

        public static void SaveMeasurement(object measurement)
        {
            Debug.WriteLine("\r\n---SaveMeasurement---");

            // Get the application's database session
            var session = GetSession();

            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    session.SaveOrUpdate(measurement);
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->SaveOrUpdate failed\r\n\r\n", e);
                }

                try
                {
                    Debug.WriteLine("\r\n---Commit---");
                    transaction.Commit();
                    Debug.WriteLine("\r\n---Commit Complete---");
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->Commit failed\r\n\r\n", e);
                }
            }
        }
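
    As a hedged first measurement (not a fix), splitting the flush from the commit inside that using block would show whether the time goes into NHibernate generating and executing the SQL or into SQLite finalizing the transaction on disk:

        // Hypothetical instrumentation around the existing calls: Flush()
        // forces NHibernate to issue its INSERT/UPDATE statements, so the
        // time left in Commit() is mostly SQLite syncing the file.
        var sw = System.Diagnostics.Stopwatch.StartNew();
        session.Flush();
        Debug.WriteLine("Flush took " + sw.ElapsedMilliseconds + " ms");
        sw.Restart();
        transaction.Commit();
        Debug.WriteLine("Commit took " + sw.ElapsedMilliseconds + " ms");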

    Read the article

  • Using git filter-branch to remove commits by their commit message

    - by machineghost
    In our repository we have a convention where every commit message starts with a certain pattern: Redmine #555: SOME_MESSAGE. We also do a bit of rebasing to bring the potential release branch's changes into a specific issue's branch. In other words, I might have branch "foo-555", but before I merge it in to branch "pre-release" I need to get any commits that pre-release has that foo-555 doesn't (so that foo-555 can fast-forward merge in to pre-release). However, because pre-release sometimes changes, we sometimes wind up with situations where you bring in a commit from pre-release, but then that commit later gets removed from pre-release. It's easy to identify commits that came from pre-release, because the number in their commit message won't match the branch number; for instance, if I see "Redmine #123: ..." in my foo-555 branch, I know that it's not a commit from my branch. So now the question: I'd like to remove all of the commits that "don't belong" to a branch; in other words, any commit that (1) is in my foo-555 branch but not in the pre-release branch (pre-release..foo-555), and (2) has a commit message that doesn't start with "Redmine #555" - though of course "555" will vary from branch to branch. Is there any way to use filter-branch (or any other tool) to accomplish this? Currently the only way I can see to do it is an interactive rebase ("git rebase -i") where I manually remove all the "bad" commits.
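
    A hedged sketch with --commit-filter, restricted to the commits that are in foo-555 but not pre-release. One caveat from the filter-branch documentation: skip_commit removes the commit object itself, but its tree changes fold into the next kept commit's diff, so if the changes must go as well, interactive rebase remains the right tool.

        # Keep only commits whose subject starts with this branch's number.
        # The range and the Redmine number would need parameterizing per branch.
        git filter-branch --commit-filter '
            if git log -1 --format=%s "$GIT_COMMIT" | grep -q "^Redmine #555"; then
                git commit-tree "$@"
            else
                skip_commit "$@"
            fi' pre-release..foo-555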

    Read the article

  • Git: Can I commit my working directory to a new branch without committing it to the current branch?

    - by Noli
    Somewhat new to Git... I am working on a project and had all of my tests passing on the master branch. I then made some changes, and when everything started failing, I realized that maybe I should have made those changes in a different branch. Is there a way I can commit the changes to a new branch without committing them to my master branch, so that master still has my passing tests?
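
    Yes - uncommitted changes travel with you when you create and switch to a new branch, so a minimal sketch (the branch name is illustrative) would be:

        # Create the new branch at the current HEAD; the working directory
        # and index come along untouched, and master stays where it was.
        git checkout -b failing-experiments
        git add -A
        git commit -m "WIP: changes that break the tests"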

    Read the article

  • Is it possible to have an inconsistent branch/tag with SVN due to concurrent commit action?

    - by maraspin
    I'm trying to understand whether Subversion has its own mechanism for regulating concurrent user activity on the trunk (i.e., a branch/tag action and a commit action happening at the same time), or if it's up to the users to sync among themselves before acting on the trunk. I've been trying to find documentation about this on the net but haven't been able to come up with anything, so I'd appreciate it if someone could enlighten me on the topic. Thank you in advance!
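
    For background: every commit in Subversion, including the one produced by svn copy, is atomic and creates its own revision, so the repository itself cannot be left half-updated. What can happen is that a copy made from HEAD picks up a concurrent commit you did not expect. A hedged way to rule that out is to pin the copy source to an explicit revision (the number below is illustrative):

        # Tag trunk exactly as it stood at r1234; a commit landing on trunk
        # at the same moment cannot slip into the tag.
        svn copy -r 1234 ^/trunk ^/tags/release-1.0 -m "Tag trunk at r1234"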

    Read the article

  • 500 error but no info about the link GET / HTTP/1.1" 500 "-"

    - by Athanatos
    I am getting the following 500s in my access logs on rare occasions:

        IP - - [05/Nov/2013:14:44:52 -0600] "-GET / HTTP/1.1" 500 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)"
        IP - - [05/Nov/2013:14:44:52 -0600] "GET / HTTP/1.1" 500 - "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)"

    However, I can't see which page is throwing the error, so I was wondering how I can go about troubleshooting this and finding the page. Thanks
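
    Since the access log records only the status code, a hedged first step is to correlate with the error log, where the module error or script failure usually lands with the same timestamp (the log path is illustrative):

        # Pull error-log entries from the minute the 500s were returned.
        grep "Nov 05 14:44" /var/log/httpd/error_log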

    Read the article

  • New SSIS tool on Codeplex – SSIS Log Analyzer

    I stumbled across a new SSIS tool on Codeplex today, the SSIS Log Analyzer, which was only released a few days ago. Whilst it is a beta release and currently only supports 2005 (2008 is promised), it looks quite interesting. It seems to be a fancy log viewer, but with some clever features and a nice looking front-end. I’ve only read the documentation so far, but it has graphs and a debug view that shows your package with the colour animations similar to when debugging in BIDS, and everyone knows, the way the pretty colours and numbers change is the best bit! I’ll quote some of the features for you here and then let you make your own mind up: is it useful in the real world? Option to analyze the logs manually by applying row and column filters over the log data or by using queries to specify more complex criterions. Automated Performance Analysis which provides a quick graphical look on which tasks spent most time during package execution. Rerun (debug) the entire sequence of events which happened during package execution showing the flow of control in graphical form, changes in runtime values for each task like execution duration etc. Support for Auto Analyzers to automatically find out issues and provide suggestions for problems which can be figured out with the help of SSIS logs and/or package. Option to analyze just log file or log and package together. Provides a lightweight environment to have a quick look at the package. Opening it in BIDS takes some time as being an authoring environment it does all sorts of validations resulting in some delay. See http://ssisloganalyzer.codeplex.com/ for more details.

    Read the article

  • SQL SERVER – master Database Log File Grew Too Big

    - by pinaldave
    A couple of days ago I received the following email, which I found very interesting and felt like sharing with all of you. Note: please read the whole email before providing your suggestions. "Hi Pinal, If you can share these details on your blog, it will help many. We understand the value of the master database and we take a regular backup of it (every day at midnight). Yesterday we noticed that our master database log file had grown very large. This is the very first time we have encountered such an issue. The master database is in the simple recovery mode, so we assumed that it would never grow big; however, we now have a big log file. We ran the following command:

        USE [master]
        GO
        DBCC SHRINKFILE (N'mastlog', 0, TRUNCATEONLY)
        GO

    We know this command will break the chain of LSNs, but as per our understanding it should not matter, as we are in the simple recovery model. After running this, the log file became very small. Just to be cautious, we took a full backup of the master database right away. We totally understand that this is not the normal practice, so if you are going to tell us the same, we are aware of it. However, here is the question for you: what operation in the master database could have caused our log file to grow so large? Thanks, [name and company name removed as per request]" Here was my response to them: "Hi [name removed], It is great that you are aware of all the right steps and methods. Taking a full backup when you are not sure is always a good practice. Regarding your question of what could have caused your master database log to grow so large, let me try to guess what could have happened. Do you have any user tables in the master database? If yes, this is not recommended and also NOT a good practice. If you have user tables in the master database and you are doing any long operation on them (maybe lots of inserts, updates, deletes, or rebuilds), it can cause this situation. You have made me curious about your scenario; do revert back. Kind Regards, Pinal" Within a few minutes I received this reply: "That was it, Pinal. We had one of the maintenance task log tables created in the master database, which had many long transactions during the night. We moved it to a newly created database named ‘maintenance’, and we will keep you updated." I was very glad to receive the email. I do not suggest that any user table be created in the master database; it should be left free of user objects. Now here is the question for you - can you think of any other reason for master log file growth? Reference: Pinal Dave (http://blog.SQLAuthority.com)
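
    A hedged one-liner for the check Pinal suggests - listing any user objects that have crept into master (is_ms_shipped = 0 filters out objects shipped with the product):

        -- Non-system objects living in master; ideally this returns nothing.
        SELECT name, type_desc, create_date
        FROM master.sys.objects
        WHERE is_ms_shipped = 0
        ORDER BY create_date DESC;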

    Read the article

  • Ubuntu 12.04 not Locking Encrypted Hard Drive on Log Out

    - by J.L.
    I'm running Ubuntu 12.04 with Gnome 3.4. I have two external hard drives. I encrypted both using Ubuntu's Disk Utility. When I use Nautilus to mount them, I'm asked for my decryption password. Regardless of whether I then click "Forget password immediately" or "Remember password until you logout", though, I find that Ubuntu does not lock the drives when I log out. Rather, when I log back in, they're still mounted. (To be clear, restarting the computer does unmount them so that they require the password on the next log in.) I'm concerned that these drives are remaining unprotected when I log out without restarting my computer. I would be grateful for help understanding whether this is a bug. Thank you!
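
    Until the desktop handles this on logout, a hedged manual workaround is to unmount and relock the mapping yourself first; the mount point and device-mapper name below are hypothetical (ls /dev/mapper should show the real one created when the drive was unlocked):

        # Unmount the filesystem, then close the LUKS mapping so the next
        # access prompts for the passphrase again.
        sudo umount /media/backup-disk
        sudo cryptsetup luksClose /dev/mapper/udisks-luks-uuid-XXXX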

    Read the article

  • Shut down/log out, me menu absent in panel

    - by Chethan S.
    I upgraded my Lucid installation to Maverick today via an alternate CD. Everything went fine, but the shutdown icon in the top right corner of the panel (which provides the options to log out, suspend, hibernate, etc.) and the Me Menu (which lets me set my chat status) are absent! I tried searching for suitable options in 'Add to Panel' but cannot find the exact solution - 'Shut Down' provides an option only to shut down, 'Log Out' only to log out, and so on.
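
    Both menus are indicators hosted by the Indicator Applet Session, so a hedged first step (package names as they existed in Maverick) is to make sure the indicator packages survived the upgrade, then right-click the panel, choose "Add to Panel...", and add "Indicator Applet Session":

        # Reinstall the session menu and the Me Menu indicators
        sudo apt-get install --reinstall indicator-session indicator-me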

    Read the article

  • Log Shipping Between SQL Server Versions (perhaps 2005 to 2008)

    - by Greg Low
    One of the discussion lists that I participate in had a brief discussion this morning about whether or not it's possible to perform log shipping between different versions of SQL Server. Specifically, can you do log shipping between SQL Server 2005 and SQL Server 2008? SQL Server does support restoring earlier-version databases on later versions of the product. The databases get upgraded along the way. This also applies to transaction logs. So, you can set up log shipping between versions, however...(read more)

    Read the article

  • Data architecture for event log metrics?

    - by elliot42
    My service has a large ongoing number of user events, and we would like to do things like "count occurrences of event type T since date D." We are trying to make two basic decisions:

    1. What to store - every event, or only aggregates? (Event-log style) log every event and count them later, vs. (time-series style) store a single aggregated "count of event E for date D" for every day.

    2. Where to store the data - in a relational database (particularly MySQL), in a non-relational (NoSQL) database, or in flat log files (collected centrally over the network via syslog-ng)?

    What is standard practice, and where can I read more about comparing the different types of systems? Additional details: the total event stream is large, potentially hundreds of thousands of entries per day, but our current need is only to count certain types of events within it. We don't necessarily need real-time access to the raw data or to the aggregation results. IMHO, "log all events to files, crawl them at a later time to filter and aggregate the stream" is a pretty standard UNIX way, but my Rails-y compatriots seem to think that nothing is real unless it's in MySQL.
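
    To make the time-series option concrete, here is a hedged sketch of the aggregate-only schema in MySQL (table and column names are illustrative): one row per (event type, day), upserted as events arrive or backfilled from the raw logs.

        -- One counter row per event type per day.
        CREATE TABLE event_counts (
            event_type VARCHAR(64) NOT NULL,
            day        DATE        NOT NULL,
            cnt        BIGINT      NOT NULL DEFAULT 0,
            PRIMARY KEY (event_type, day)
        );

        -- Increment-or-create in one statement (MySQL upsert).
        INSERT INTO event_counts (event_type, day, cnt)
        VALUES ('signup', '2013-11-05', 1)
        ON DUPLICATE KEY UPDATE cnt = cnt + 1;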

    Read the article

  • Steps to add the Log Shipping monitor to an existing SQL Server

    I have a requirement to add the Log Shipping monitor to an existing installation. I have heard that you can only do this by rebuilding the Log Shipping infrastructure. Is that true? Are there any other options? In this tip I will explain how we can add the Log Shipping monitor to a SQL Server 2005, 2008, 2008 R2 or 2012 environment without rebuilding the Log Shipping installation.

    Read the article
