Search Results

Search found 17616 results on 705 pages for 'uls log'.


  • What can cause SQL 2008 Transaction Log Shipping to stop functioning?

    - by Rick
    I read somewhere that running a backup or a Maintenance Plan can cause Log Shipping to stop functioning. Is this true? What should we watch out for, once our Transaction Log Shipping is in place, that could stop it? A Log Shipping test we were doing between two databases on the same SQL 2008 server appeared to stop working without any error. When we checked the history of the LSRestore_* job, it was always ignoring the new *.trn files. Any suggestions? Thanks.
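
    A common culprit here (only a guess from the symptoms, not from the original question) is a second job - a Maintenance Plan or ad-hoc backup - taking its own log backups and breaking the log chain, so the LSRestore job sees *.trn files that no longer fit the LSN sequence. A diagnostic sketch against msdb ('YourDb' is a placeholder):

        USE msdb;
        -- Recent log backups for the database: a physical_device_name outside
        -- the log shipping folder points at another job breaking the chain.
        SELECT bs.database_name, bs.backup_start_date,
               bs.first_lsn, bs.last_lsn,
               bmf.physical_device_name
        FROM dbo.backupset AS bs
        JOIN dbo.backupmediafamily AS bmf
            ON bs.media_set_id = bmf.media_set_id
        WHERE bs.database_name = 'YourDb'   -- placeholder
          AND bs.type = 'L'                 -- log backups only
        ORDER BY bs.backup_start_date DESC;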

    Read the article

  • Different Apache log file for each subdomain?

    - by consolibyte
    Is it possible to configure Apache to use a different log file for each subdomain, even if all of the subdomains are within the same VirtualHost container? So, I have only one VirtualHost: <VirtualHost *:80> ServerName example.com ServerAlias test1.example.com test2.example.com test3.example.com </VirtualHost> And I want to end up with the following log files: /var/log/httpd-access_test1.log /var/log/httpd-access_test2.log /var/log/httpd-access_test3.log I know I can probably do this with a custom log format and split-logs, but I was wondering if there's a way to just have Apache do it for me.
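
    For what it's worth, one way to avoid split-logs entirely (a sketch, not from the original question) is to keep the single VirtualHost, tag requests by Host header with SetEnvIf, and write one conditional CustomLog per tag:

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias test1.example.com test2.example.com test3.example.com

            # Tag each request according to the Host header it arrived on
            SetEnvIf Host "^test1\." sub_test1
            SetEnvIf Host "^test2\." sub_test2
            SetEnvIf Host "^test3\." sub_test3

            # Each CustomLog only records requests carrying its tag
            CustomLog /var/log/httpd-access_test1.log combined env=sub_test1
            CustomLog /var/log/httpd-access_test2.log combined env=sub_test2
            CustomLog /var/log/httpd-access_test3.log combined env=sub_test3
        </VirtualHost>

    Requests to the bare example.com match none of the tags, so an extra unconditional CustomLog can be added to catch those.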

    Read the article

  • Using rsyslog to create different log files for different processes

    - by user80203
    Scenario: I am running a cluster of machines. Each machine runs various python programs with a unique (across the cluster) but dynamically set ID. Right now, they are all logging locally. So, I might have logs that look like: process_5.log process_6.log for processes that had IDs 5 and 6. Another machine may have: process_20.log process_25.log I wish to forward these logs to a logserver running rsyslogd. Python's logging facility has a nice syslog handler, so I understand how I could connect to the remote server. What I haven't figured out is how to use templating/DynFile to maintain log separation. E.g. on the logserver, I will want to see: process_5.log process_6.log process_20.log process_25.log which correspond to the logs of the same name on the sending machine. Is there a way to pull this off with rsyslogd templating?
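
    A sketch of one way to pull this off (legacy rsyslog directive syntax; names are illustrative): if each Python sender prefixes its messages with a syslog tag of 'process_<ID>: ' - for instance by giving the SysLogHandler a Formatter of 'process_5: %(message)s' - rsyslog exposes that tag as %programname%, and a dynamic-file template can split the streams:

        # rsyslog.conf on the logserver: one file per sending process tag
        $template PerProcessFile,"/var/log/remote/%programname%.log"
        if $programname startswith 'process_' then ?PerProcessFile
        & ~   # discard after writing, so these lines skip the default log files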

    Read the article

  • How do I restore a SQL Server database from last night's full backup and the active transaction log file?

    - by Dylan Beattie
    I have been told that it's good practise to keep your SQL Server data files and log files on physically separate disks, because it'll allow you to recover your data to the point of failure if the data drive fails. So... let's say that mydata.mdf is on drive D:, and my mydata_log.ldf is on drive E:, and it's 16:45, and drive D: has just died horribly. So - I have last night's full backup (mydata.bak). I have hourly transaction-log backups that will bring the data back up to 16:00... but that means I'll lose 45 minutes worth of updates. I still have mydata_log.ldf on the E: drive, which should contain EVERY transaction that was committed right up to the point where the drive failed. How do I go about recreating the database and restoring data from the backup file and the live transaction log, so I don't lose any updates? Is this possible?
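
    The standard technique is a tail-log backup: because the .ldf survived, BACKUP LOG ... WITH NO_TRUNCATE can read it even though the data files are gone. A sketch (paths are placeholders):

        -- 1. Back up the tail of the log from the surviving .ldf
        BACKUP LOG mydata TO DISK = N'E:\backup\mydata_tail.trn' WITH NO_TRUNCATE;

        -- 2. Restore last night's full backup, leaving the database restoring
        RESTORE DATABASE mydata FROM DISK = N'E:\backup\mydata.bak' WITH NORECOVERY;

        -- 3. Restore each hourly log backup in sequence (repeat as needed)
        RESTORE LOG mydata FROM DISK = N'E:\backup\mydata_1600.trn' WITH NORECOVERY;

        -- 4. Finish with the tail-log backup, recovering to the point of failure
        RESTORE LOG mydata FROM DISK = N'E:\backup\mydata_tail.trn' WITH RECOVERY;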

    Read the article

  • How to know or change the size of the Windows Event Log from a program under Windows XP? [closed]

    - by ahmd1
    I ran into a weird problem on a Windows XP system. My local service app logs its diagnostic messages to the Windows Event Log, but at some point those messages stopped being logged. I thought the issue was in my code, but then I discovered that other processes can't log messages either. So I was wondering, is there a limit on the Windows Event Log size? PS. I guess I need to write this specifically -- I need to know/change the size from a command line or an API.
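
    On XP the Event Log limits live in the registry, so one approach from the command line (a sketch - back up the key first; the change is generally picked up after a restart) is:

        rem Query the current maximum size (in bytes) of the Application log
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Application" /v MaxSize

        rem Raise it to 512 KB and overwrite old events as needed (Retention = 0)
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Application" /v MaxSize /t REG_DWORD /d 524288 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Application" /v Retention /t REG_DWORD /d 0 /f

    The same keys exist for the System and Security logs; programmatically, the equivalent values can be read and written through the registry API.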

    Read the article

  • Umbraco Permissions Script - Secure Version

    - by Vizioz Limited
    Back in May I blogged about how to set permissions for Umbraco using SetACL to set the appropriate directory permissions based on the installation recommendations. Recently I have been working on a site for a client who wanted every security item to be locked down as tightly as possible, so I modified the script based on the Umbraco security best practices. I thought I'd share it with everyone - if I have missed anything, or if anyone has any suggestions on how to improve this, please let me know :) Please refer to my previous post regarding the SetACL command line application that you will need. I suggest you save the following into a batch file called umbPermSecure.bat:

        @echo off
        REM Script to set up the security permissions for an Umbraco site.
        REM This script will give your machine's NETWORK SERVICE the minimum
        REM rights required for Umbraco to work.
        REM I suggest you update this script to also remove any users who do
        REM not need access to the web folders.
        REM **** Pre-requisites ****
        REM You will need to download SetACL - http://setacl.sourceforge.net/
        REM It is assumed that you have stored SetACL in a directory called
        REM C:\SetACL; if not, you will need to modify the script.
        REM **** Usage ****
        REM You need to pass in the path for the root of your Umbraco
        REM directory, e.g. umbPermSecure.bat C:\inetpub\umbracoroot

        @echo umbPermSecure.bat - Script to set Umbraco File and Directory Permissions
        @echo based on the Umbraco Security Best Practices Document (13th March 2009)
        @echo Published by Chris Houston - 19th October 2009
        @echo http://blog.vizioz.com

        @echo Adding READ only access
        SetACL.exe -on "%1" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\web.config" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\bin" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\umbraco" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"

        @echo Adding READ and EXECUTE access
        SetACL.exe -on "%1\app_code" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read_ex" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\usercontrols" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read_ex" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"

        @echo Adding READ, WRITE and MODIFY access
        SetACL.exe -on "%1\config" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\css" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\data" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\masterpages" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\media" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\python" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\scripts" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\xslt" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"

    Read the article

  • 500 error but no info about the link GET / HTTP/1.1" 500 "-"

    - by Athanatos
    I am getting the following 500 in my access logs on rare occasions: IP - - [05/Nov/2013:14:44:52 -0600] "-GET / HTTP/1.1" 500 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)" IP - - [05/Nov/2013:14:44:52 -0600] "GET / HTTP/1.1" 500 - "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)" However, I can't see which page is throwing it, so I was wondering how I can go about troubleshooting this and finding the page. Thanks
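
    The request line shown ("GET / HTTP/1.1") says the 500 came from /, so the next step is matching it to the error log. On Apache 2.4, one hedged trick is to add the %L log ID to the access log format; any error-log entry produced by the same request carries the same ID, which ties the 500 back to its cause:

        # httpd.conf - combined format plus the error-log correlation ID
        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %L" combined_id
        CustomLog /var/log/apache2/access.log combined_id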

    Read the article

  • New SSIS tool on Codeplex – SSIS Log Analyzer

    I stumbled across a new SSIS tool on Codeplex today, the SSIS Log Analyzer, which was only released a few days ago. Whilst it is a beta release and currently only supports 2005 (2008 is promised), it looks quite interesting. It seems to be a fancy log viewer, but with some clever features and a nice-looking front-end. I’ve only read the documentation so far, but it has graphs and a debug view that shows your package with colour animations similar to when debugging in BIDS, and everyone knows the way the pretty colours and numbers change is the best bit! I’ll quote some of the features for you here and then let you make your own mind up - is it useful in the real world?

    - Option to analyze the logs manually by applying row and column filters over the log data, or by using queries to specify more complex criteria.
    - Automated Performance Analysis, which provides a quick graphical look at which tasks spent most time during package execution.
    - Rerun (debug) the entire sequence of events which happened during package execution, showing the flow of control in graphical form and changes in runtime values for each task, like execution duration.
    - Support for Auto Analyzers to automatically find issues and provide suggestions for problems which can be figured out with the help of the SSIS logs and/or package.
    - Option to analyze just the log file, or log and package together.
    - Provides a lightweight environment for a quick look at the package; opening it in BIDS takes some time since, being an authoring environment, it does all sorts of validations, resulting in some delay.

    See http://ssisloganalyzer.codeplex.com/ for more details.

    Read the article

  • SQL SERVER – master Database Log File Grew Too Big

    - by pinaldave
    A couple of days ago I received the following email, which I found very interesting and felt like sharing with all of you. Note: please read the whole email before offering your suggestions. "Hi Pinal, If you can share these details on your blog, it will help many. We understand the value of the master database and we take a regular backup of it (every day at midnight). Yesterday we noticed that our master database log file had grown very large. This is the very first time we have encountered such an issue. The master database is in simple recovery mode, so we assumed it would never grow big; however, we now have a big log file. We ran the following command USE [master] GO DBCC SHRINKFILE (N'mastlog' , 0, TRUNCATEONLY) GO We know this command will break the chain of LSNs, but as per our understanding it should not matter, as we are in the simple recovery model. After running this, the log file became very small. Just to be cautious, we took a full backup of the master database right away. We totally understand that this is not normal practice, so if you are going to tell us the same, we are aware of it. However, here is the question for you: what operation in the master database could have caused our log file to grow so large? Thanks, [name and company name removed as per request]" Here was my response to them: "Hi [name removed], It is great that you are aware of all the right steps and methods. Taking a full backup when you are not sure is always a good practice. Regarding your question of what could have caused your master database log to grow larger, let me try to guess what could have happened. Do you have any user tables in the master database? If yes, this is not recommended and NOT a good practice. If you have user tables in the master database and you are doing any long operation on them (maybe lots of inserts, updates, deletes, or rebuilding them), then it can cause this situation. You have made me curious about your scenario; do revert back. Kind Regards, Pinal" Within a few minutes I received this reply: "That was it, Pinal. We had one of the maintenance task log tables created in the master database, which had many long transactions during the night. We moved it to a newly created database named 'maintenance', and we will keep you updated." I was very glad to receive the email. I do not suggest that any user table be created in the master database; it should be left alone from user objects. Now here is the question for you - can you think of any other reason for master log file growth? Reference: Pinal Dave (http://blog.SQLAuthority.com)
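
    For anyone who wants to check their own server, a quick sketch to list non-system objects living in master:

        USE [master];
        GO
        -- Any rows returned are user tables that probably belong elsewhere
        SELECT name, create_date
        FROM sys.tables
        WHERE is_ms_shipped = 0;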

    Read the article

  • Ubuntu 12.04 not Locking Encrypted Hard Drive on Log Out

    - by J.L.
    I'm running Ubuntu 12.04 with Gnome 3.4. I have two external hard drives. I encrypted both using Ubuntu's Disk Utility. When I use Nautilus to mount them, I'm asked for my decryption password. Regardless of whether I then click "Forget password immediately" or "Remember password until you logout", though, I find that Ubuntu does not lock the drives when I log out. Rather, when I log back in, they're still mounted. (To be clear, restarting the computer does unmount them so that they require the password on the next log in.) I'm concerned that these drives are remaining unprotected when I log out without restarting my computer. I would be grateful for help understanding whether this is a bug. Thank you!
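
    As an interim workaround (a sketch only - the device and mapping names are placeholders, and this does not explain the underlying behaviour), the encrypted mapping can be torn down by hand, or from a log-out script, so the passphrase is required again on the next mount:

        #!/bin/sh
        # Unmount the filesystem, then close the dm-crypt mapping
        umount /media/mydrive                                 # placeholder mount point
        sudo cryptsetup luksClose /dev/mapper/my_crypt_disk   # placeholder mapping name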

    Read the article

  • Shut down/log out, me menu absent in panel

    - by Chethan S.
    I upgraded my Lucid installation to Maverick today via an alternate CD. Everything went fine, but the shutdown icon in the top right corner of the panel (which provides the options to log out, suspend, hibernate, etc.) and the Me menu (which lets you set your chat status) are absent! I tried searching for suitable options in 'Add to Panel' but cannot find an exact replacement - the Shut Down applet only offers shutting down, Log Out only logs out, and so on.

    Read the article

  • Log Shipping Between SQL Server Versions (perhaps 2005 to 2008)

    - by Greg Low
    One of the discussion lists that I participate in had a brief discussion this morning about whether or not it's possible to perform log shipping between different versions of SQL Server. Specifically, can you do log shipping between SQL Server 2005 and SQL Server 2008? SQL Server does support restoring earlier-version databases on later versions of the product. The databases get upgraded along the way. This also applies to transaction logs. So, you can set up log shipping between versions, however...(read more)

    Read the article

  • Data architecture for event log metrics?

    - by elliot42
    My service has a large ongoing number of user events, and we would like to do things like "count occurrence of event type T since date D." We are trying to make two basic decisions:

    What to store? Storing every event vs. only storing aggregates:
    - (Event log style) log every event and count them later, vs.
    - (Time-series style) store a single aggregated "count of event E for date D" for every day

    Where to store the data:
    - In a relational database (particularly MySQL)
    - In a non-relational (NoSQL) database
    - In flat log files (collected centrally over the network via syslog-ng)

    What is standard practice / where can I read more about comparing the different types of systems?

    Additional details:
    - The total event stream is large, potentially hundreds of thousands of entries per day
    - But our current need is only to count certain types of events within it
    - We don't necessarily need real-time access to the raw data or aggregation results

    IMHO, "log all events to files, crawl them at a later time to filter and aggregate the stream" is a pretty standard UNIX Way, but my Rails-y compatriots seem to think that nothing is real unless it's in MySQL.
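
    If the time-series style wins, the aggregate can be tiny. A minimal MySQL sketch (names are illustrative):

        -- One row per (event type, day); counters bumped as events arrive
        CREATE TABLE event_counts (
            event_type VARCHAR(64) NOT NULL,
            day        DATE        NOT NULL,
            n          BIGINT      NOT NULL DEFAULT 0,
            PRIMARY KEY (event_type, day)
        );

        -- Upsert per event, or per batch from a log crawler
        INSERT INTO event_counts (event_type, day, n)
        VALUES ('T', CURRENT_DATE, 1)
        ON DUPLICATE KEY UPDATE n = n + 1;

        -- "count occurrence of event type T since date D"
        SELECT SUM(n) FROM event_counts
        WHERE event_type = 'T' AND day >= '2012-01-01';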

    Read the article

  • Steps to add Log Shipping monitor into an existing SQL Server

    I have a requirement to add the Log Shipping Monitor for an existing installation. I have heard you can only complete this by rebuilding the Log Shipping infrastructure. Is that true? Are there any other options? In this tip I will explain how we can add the Log Shipping monitor to a SQL Server 2005, 2008, 2008 R2 or 2012 environment without rebuilding the Log Shipping installation.

    Read the article

  • How to Truncate the Log File

    - by Derek Dieter
    Sometimes after one or more large transactions, the t-log (transaction log) will become full. In these particular cases you may receive an error message indicating the transaction log is full. In order to alleviate this issue, you need to find the names of the transaction logs on your system and then shrink them. To find the [...]
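
    The usual shape of the fix (a sketch; under the full recovery model, back the log up first rather than truncating blindly):

        USE mydb;
        GO
        -- Find the logical name of the transaction log file
        SELECT name, type_desc, size FROM sys.database_files WHERE type_desc = 'LOG';

        -- Back up the log, then shrink the file to a 1 MB target
        BACKUP LOG mydb TO DISK = N'D:\backup\mydb_log.trn';
        DBCC SHRINKFILE (mydb_log, 1);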

    Read the article

  • Check SQL Server Virtual Log Files Using PowerShell

    In a previous tip on Monitor Your SQL Server Virtual Log Files with Policy Based Management, we have seen how we can use Policy Based Management to monitor the number of virtual log files (VLFs) in our SQL Server databases. However, even with that, most of the solutions I see online involve creating temporary tables and/or using cursors to get the total number of VLFs in a transaction log file. Is there a much easier solution?
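
    One simpler route (a hedged sketch - a guess at the tip's direction, not its actual content): DBCC LOGINFO returns one row per VLF, so counting its result set from PowerShell needs no temp tables or cursors. Instance and database names are placeholders:

        # Requires Invoke-Sqlcmd (sqlps / SqlServer module)
        $rows = Invoke-Sqlcmd -ServerInstance "localhost" -Database "master" `
                              -Query "DBCC LOGINFO('MyDatabase')"
        # One row per virtual log file
        "MyDatabase has $(@($rows).Count) VLFs"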

    Read the article

  • SQL Server Log File Won't Shrink because "log records are pending replication" on a non-replicated DB?

    - by user796466
    I have a non-mission-critical, 9am-5pm SQL Server database that I have set up to do nightly full backups and log backups every 30 minutes during business hours. The database is in full recovery, and normally I have no reason to truncate/shrink logs unless I do some heavy maintenance. Log backups manage the size with no issue. However, I have not been at this client for several weeks, and upon inspection I noticed that the log had grown to about 10 times the size of the .mdf file. I poked around: backups had been running, and I had not gotten any severity error alerts (SQL mail). I attempted to put the DB in simple recovery and shrink the log; this was no good. I proceeded to try a log backup and I got:

    The log was not truncated because records at the beginning of the log are pending replication or Change Data Capture. Ensure the Log Reader Agent or capture job is running or use sp_repldone to mark transactions as distributed or captured.

    Restart SQL Server, rinse, repeat, same thing... I said ??? Replication is not, nor ever has been, set up on this DB or server ??? So the log backups have not been flushing the .ldf. So I did a couple of hours of research and I found:
    http://www.sqlmonster.com/Uwe/Forum.aspx/sql-server/5445/Log-file-is-not-truncated-inspite-of-regular-log-backup
    http://www.eggheadcafe.com/software/aspnet/30708322/the-log-was-not-truncated-because-records-at-the-beginning-of-the-log-are-pending-replication.aspx
    It seems to be some kind of poorly documented bug?? The solution seems to have been to run exec sp_repldone, more precisely EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1.

    "This procedure can be used in emergency situations to allow truncation of the transaction log when transactions pending replication are present. Using this procedure prevents Microsoft SQL Server 2000 from replicating the database until the database is unpublished and republished." ~ MSDN

    When I do that, I get the following: Msg 18757, Level 16, State 1, Procedure sp_repldone, Line 1. Unable to execute procedure. The database is not published. Execute the procedure in a database that is published for replication. Which makes sense, because the DB has never been published for replication. I have several questions:

    A) First and foremost: what is going on? What is causing this? I am interested in knowing the why here. Is this genuinely a bug, or is there some aspect of the backup that is not functioning properly that causes the DB to mimic a replicated state? Someone please edify me on this.

    B) Second: do I really have to publish/replicate this DB to exec this SP and fix this? Sounds crazy - or is there some T-SQL that can put it in a published state, exec the proc, and be on my way?

    C) Third: if I do indeed have to publish this database to exec the SP to release this unneeded, mis-flagged log and get my .ldf file and backups back on track, how do I publish the database without the online host it is asking for? I don't generally do this kind of database administration and need some guidance. Sorry if this is too verbose, but just voicing the question helps me clarify it... Thank you in advance for your help.
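
    For reference, the widely circulated emergency workaround for this state (hedged: it requires the replication components to be installed, and it is a workaround rather than an explanation) is to mark the database as published just long enough for sp_repldone to run, then unpublish it:

        -- Temporarily mark the database as published
        EXEC sp_replicationdboption @dbname = N'MyDb', @optname = N'publish', @value = N'true';

        -- Mark all pending transactions as distributed
        EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1;

        -- Remove the publish flag again
        EXEC sp_replicationdboption @dbname = N'MyDb', @optname = N'publish', @value = N'false';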

    Read the article

  • Could this server log mean my server is being used as a proxy?

    - by So Over It
    I came across the following entry in my access.log:

    58.218.199.147 - - [05/Jun/2012:12:56:04 +1000] "GET http://proxyproxys.com/ HTTP/1.1" 200 183 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

    Normally when I see a full URL entry in my access.log I assume it is log spam from people trying to get me to access their site. Those entries are normally followed by a 404 response; the above entry is followed by a 200 'success' response! Doing some searching, it would seem that this can occur when someone is trying to use your server as a proxy. This disturbed me more - especially because the URL in question has the word proxy in it. Going to the site 'proxyproxys.com' (using hidemyass.com to protect my own identity), the site returns what appears to be some sort of 'proxy judge':

    ----------------------------------------
    HTTP_ACCEPT=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    HTTP_ACCEPT_LANGUAGE=en-US,en;q=0.8
    HTTP_USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.53 Safari/536.5
    HTTP_CONNECTION=close
    REMOTE_PORT=56355
    REMOTE_HOST=74.63.112.142
    REMOTE_ADDR=74.63.112.142
    ----------------------------------------
    CS_ProxyJudge Result=HIGH_ANONYMITY
    ----------------------------------------

    Questions: 1) Does the 200 success mean that someone has been able to successfully use my server as a proxy? 2) Are there other means of confirming whether my server is being used as a proxy? 3) Can you refer me to documentation to help 'close up' the security gap if there is one? Thanks.
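
    A 200 on its own doesn't prove proxying - Apache will happily serve its own content in response to an absolute-URI request (the tiny 183-byte body hints at a local page rather than proxied content). Two hedged checks: confirm the config has ProxyRequests Off (forward proxying is off by default), and test from another machine by asking the server to proxy an unrelated site:

        # If the third-party page comes back, the server really is an open
        # proxy; if you get your own site or an error page, it is not.
        curl -v -x http://your.server.ip:80 http://www.example.com/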

    Read the article

  • How can I make Webalizer work with rolling apache logs?

    - by Simon
    I am using Webalizer to view my site stats, and it's working ok with one exception; I have log rolling configured so my log directory looks something like this: :/var/log/apache2$ ls access.log access.log.1 access.log.2.gz access.log.3.gz ... Webalizer ends up only storing the last ~4 days each time it runs, so I only ever get a rolling window of stats rather than the full month. How can I make Webalizer process the full set of logs?
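
    One hedged approach (paths assume the stock Debian/Ubuntu layout): turn on incremental mode so repeated runs resume where they left off, then run Webalizer daily against the most recently rotated file, with a one-off backfill of the older rotations fed oldest-first:

        # /etc/webalizer/webalizer.conf
        Incremental yes

        # daily cron job, run after logrotate:
        webalizer -c /etc/webalizer/webalizer.conf /var/log/apache2/access.log.1

        # one-off backfill, oldest rotation first; incremental mode skips
        # records that have already been counted
        zcat /var/log/apache2/access.log.3.gz /var/log/apache2/access.log.2.gz \
            | webalizer -c /etc/webalizer/webalizer.conf -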

    Read the article

  • How to delete Tomcat Access Log after n days?

    - by Andreas
    I would like to keep only the access logs of the last n days created by the Tomcat Access Log Valve (http://tomcat.apache.org/tomcat-6.0-doc/config/valve.html#Access%20Log%20Valve), but there seems to be no configuration attribute to define how long to keep the log files. I guess this is because the Access Log Valve only creates log files and doesn't delete them - is that correct?
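
    Correct - the Tomcat 6 valve only creates files and never deletes them, so retention has to come from outside. The usual workaround is a daily cron job (a sketch; adjust the path and the pattern to your valve's directory, prefix and suffix):

        # /etc/cron.daily/tomcat-access-logs: delete access logs older than 30 days
        find /var/log/tomcat6 -name 'localhost_access_log.*.txt' -mtime +30 -delete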

    Read the article

  • Windows SteadyState - system's security log is full

    - by Matt
    Quick version: new computer, attached to a Windows domain, with SteadyState Disk Protection turned on; cannot log on as a domain user because Windows states the 'system security log is full'.

    Troubleshooting performed: disabled all 'restrictions' listed in SteadyState; cleared the system security log; changed the security log settings to overwrite entries when it becomes full; restarted the computer to commit the changes; verified the changes were committed - still cannot log on as a domain user; moved the Documents and Settings folder to another partition - still cannot log on as a domain user.

    Let me know if you need a more detailed description of any steps performed. I appreciate any help you can give me.
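
    One known cause worth ruling out (hedged - SteadyState adds its own variables): when the security log fills while the CrashOnAuditFail policy is in force, Windows flips the value to 2 and refuses non-administrator logons until it is reset and the machine rebooted:

        rem Inspect the flag: 2 means the "log full, admins only" state was tripped
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v CrashOnAuditFail

        rem Reset it (0 = disabled), clear the security log, then reboot
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v CrashOnAuditFail /t REG_DWORD /d 0 /f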

    Read the article
