Search Results

Search found 19256 results on 771 pages for 'boost log'.

Page 54/771

  • OpenVPN: log connecting client IPs

    - by TossUser
    I'm looking for the best way to log the public WAN IP address of every client that connects to my OpenVPN server, to either a text file or a database. A hack would be to make the OpenVPN server log to a separate logfile and run logtail periodically to extract the necessary information. The database I want to build would look like:
      Client_Name | Client_IP   | Connection_date
      roadwarr1   | 72.84.99.11 | 03/04/14 - 22:44:00 Sat
    Please don't recommend the commercial OpenVPN Access Server; that's not a real solution here. If the disconnection time could be determined as well, that would be even better, so I could see how long a client was connected and from where! Thank you
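
    One common way to capture exactly this is OpenVPN's client-connect / client-disconnect script hooks, which expose the client's certificate name and public address as environment variables. The sketch below is only a rough outline: the log path and script name are made up, and it assumes script-security 2 is enabled in the server config.

      #!/usr/bin/env python
      # /etc/openvpn/log-client.py (hypothetical path) -- referenced from server.conf with:
      #   script-security 2
      #   client-connect    /etc/openvpn/log-client.py
      #   client-disconnect /etc/openvpn/log-client.py
      import os, sys, time

      LOGFILE = "/var/log/openvpn-clients.log"   # assumed location

      def main():
          name = os.environ.get("common_name", "unknown")
          ip = os.environ.get("trusted_ip", "unknown")       # public WAN address the client connected from
          stamp = time.strftime("%d/%m/%y - %H:%M:%S %a",
                                time.localtime(int(os.environ.get("time_unix", time.time()))))
          duration = os.environ.get("time_duration")         # only set by the disconnect hook
          event = ("disconnect after %ss" % duration) if duration else "connect"
          with open(LOGFILE, "a") as fh:
              fh.write("%s | %s | %s | %s\n" % (name, ip, stamp, event))
          return 0   # a non-zero exit from client-connect would reject the client

      if __name__ == "__main__":
          sys.exit(main())

    Writing to a database instead of a flat file is then just a matter of swapping the file write for an INSERT.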

    Read the article

  • sa2 -A /var/log/sa/sa13: No such file or directory

    - by user53925
    I have sysstat version 7.0.2, and /etc/sysconfig/sysstat has the entry HISTORY=27. This is on a Red Hat Enterprise Linux 5.6 server. The cron setup for this is:
      # run system activity accounting tool every minute
      * * * * * root /usr/lib64/sa/sa1 1 1
      # generate a daily summary of process accounting at 23:53
      53 23 * * * root /usr/lib64/sa/sa2 -A
    The sa2 -A cron job gives me the error find: /var/log/sa/sa13: No such file or directory. Looking at the directory /var/log/sa, the files sa01 through sa10 exist (sa01 created on Sep 1, sa02 on Sep 2, and so on), and the rest of the files run from sa14 through sa31 (created Aug 14 to Aug 31). I have not made any changes on the server, so I am not sure why I am getting these errors. Is there a way to fix this? Someone suggested creating empty files from sa11 through sa14 to fix it, but I am not sure if that might mess something up.

    Read the article

  • Automate monitoring of strings in different log files

    - by EVIA
    I have a few log files on different servers, and I want to check the totals at the end of each one, e.g.:
      success: 4000
      failed: 200
    These log files are generated daily and I have to keep track of these numbers. I'd like to automate this instead of opening each file by hand and wasting so much of my time. I want to create some kind of script that would:
      go to \\serverA\C$\log_07_02_2012.txt and check this line
      go to \\serverB\C$\log_07_02_2012.txt and check some other line
      ...
    and give me the output from all of these.
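
    A rough sketch of such a script in Python, assuming the machine it runs on can read the administrative C$ shares and that the server names, file-name pattern, and date format below (taken loosely from the question) match reality:

      import datetime

      # Hypothetical list of servers exposing their logs over the C$ share.
      SERVERS = ["serverA", "serverB"]

      def last_lines(path, count=5):
          """Return the last few lines of a reasonably small text file."""
          with open(path, "r", errors="replace") as fh:
              return fh.readlines()[-count:]

      def main():
          stamp = datetime.date.today().strftime("%d_%m_%Y")   # assumed naming convention
          for server in SERVERS:
              path = r"\\{}\C$\log_{}.txt".format(server, stamp)
              print(path)
              try:
                  for line in last_lines(path):
                      print("    " + line.rstrip())            # e.g. "success: 4000" / "failed: 200"
              except OSError as exc:
                  print("    could not read: {}".format(exc))

      if __name__ == "__main__":
          main()

    Scheduled daily (Task Scheduler or cron), this gives one consolidated report instead of a manual round of the servers.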

    Read the article

  • Can rsyslog transfer all the files present in a directory?

    - by Tarun
    I have configured rsyslog and it is working as intended. These are the conf files:
    Server Side:
      $template OTHERS,"/rsyslog/test/log/%fromhost-ip%/others-log.log"
      $template APACHEACCESS,"/rsyslog/test/log/%fromhost-ip%/apache-access.log"
      $template APACHEERROR,"/rsyslog/test/log/%fromhost-ip%/apache-error.log"
      if $programname == 'apache-access' then ?APACHEACCESS
      & ~
      if $programname == 'apache-error' then ?APACHEERROR
      & ~
      *.* ?OTHERS
    Client Side:
      # Apache default access file:
      $ModLoad imfile
      $InputFileName /var/log/apache2/access.log
      $InputFileTag apache-access:
      $InputFileStateFile stat-apache-access
      $InputFileSeverity info
      $InputRunFileMonitor
      # Apache default error file:
      $ModLoad imfile
      $InputFileName /var/log/apache2/error.log
      $InputFileTag apache-error:
      $InputFileStateFile stat-apache-error
      $InputFileSeverity error
      $InputRunFileMonitor
      if $programname == 'apache-access' then @10.134.125.179:514
      & ~
      if $programname == 'apache-error' then @10.134.125.179:514
      & ~
      *.* @10.134.125.179:514
    Now, instead of defining each file separately, can I give rsyslog the whole directory, so that the client automatically sends all the log files present in /var/log/apache2 and the syslog server automatically stores them under different filenames?
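
    For what it's worth, the legacy imfile directives above need one block per file, but newer rsyslog releases (the v8 imfile module) accept wildcards, so a whole directory can be watched with a single input. A rough sketch of the client side under that assumption (tag and server address carried over from the question):

      module(load="imfile")

      input(type="imfile"
            File="/var/log/apache2/*.log"    # every log file in the directory
            Tag="apache:"
            Severity="info")

      # forward everything to the central server
      *.* @10.134.125.179:514

    Keeping the forwarded messages in separate per-file logs on the server would still need something to key off, e.g. the tag or the file-name metadata that newer imfile versions can attach.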

    Read the article

  • How can I get Transaction Logs to auto-truncate after Backup

    - by Yaakov Ellis
    Setup: SQL Server 2008 R2, databases set up with the Full recovery model. I have set up a maintenance plan that backs up the transaction logs for a number of databases on the server. It is set to create backup files in sub-directories for each database, verify backup integrity is turned on, and backup compression is used. The job is set to run once every 2 hours during business hours (8am-6pm). I have tested the job and it runs fine, creating the log backup files as it should. However, from what I have read, once the transaction log is backed up it should be possible to truncate it, and I do not see any option for doing this in the SQL Server Maintenance Plan designer. How can I set this up?
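
    One point worth noting: in the Full recovery model, a log backup already truncates the log in the sense that the inactive portion is marked reusable; what it does not do is shrink the physical file. If the goal is a smaller file, that is a separate, explicit step, sketched here in T-SQL with hypothetical database and path names:

      -- Back up the log; this alone marks the inactive log records as reusable.
      BACKUP LOG [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_log.trn';

      -- Optionally shrink the physical log file (target size in MB).
      DBCC SHRINKFILE (N'MyDatabase_log', 512);

    Routinely shrinking the log is usually discouraged, since it just grows again; the sketch is only to show where "truncation" actually happens.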

    Read the article

  • Get the "source network address" in Event ID 529 audit entries on Windows XP

    - by Make it useful Keep it simple
    In Windows Server 2003, when an Event 529 (logon failure) occurs with a logon type of 10 (remote logon), the source network IP address is recorded in the event log. On a Windows XP machine, this (and some other details) is omitted. If a bot is attempting a brute-force attack over RDP (some of my XP machines are, and need to be, exposed with a public IP address), I cannot see the originating IP address, so I don't know what to block (with a script I run every few minutes). The DC does not log this detail either when the logon attempt is to the client XP machine and the DC is only asked to authenticate the credentials. Any help getting this detail into the log would be appreciated.

    Read the article

  • Finding the most common errors in event logs using PowerShell

    - by Paul
    I have the event logs for one of our servers locally in .evtx format. I can load the log file into PowerShell using the command:
      Get-WinEvent -Path D:\Desktop\serverlogs.evtx
    What I would like to do is group events whose Message fields are mostly the same (say 80% identical). The stack traces for errors in the details will be the same, but we also log the client's IP and the URL that was accessed, which will likely differ. I want to group them so that I can work out the most common errors and prioritize fixing them, and as there are 25,000+ errors in the log file I would rather not do it manually. I think I can work out how to do most of this, but I am not sure how to do the 'group fields which are mostly the same' part. Does PowerShell have anything like this built in?
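
    There is no built-in fuzzy ("80% similar") grouping, but a crude approximation that often works for stack traces, sketched below, is to group on just the first line of each message, which usually carries the exception type and text while the variable IP/URL detail sits further down. The path is taken from the question; everything else is an assumption about the log's shape.

      Get-WinEvent -Path D:\Desktop\serverlogs.evtx |
          Where-Object { $_.Level -eq 2 } |                      # 2 = Error
          Group-Object { ($_.Message -split "`r?`n")[0] } |      # group on the first line of the message
          Sort-Object Count -Descending |
          Select-Object Count, Name -First 20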

    Read the article

  • How can I get a list of created and deleted files on the server?

    - by max
    I have an image-sharing website; users log in and upload images. Last night I lost about 30 newly uploaded images... I mean, they have been uploaded, apparently: they are in the database, but the actual images on the server are gone! The error log doesn't show anything, so I thought my best option was to check a list of created and deleted files, if there is such a thing. Is there a log file of created and deleted files on the server? I'm using DirectAdmin.

    Read the article

  • Get the "source network address" in Event ID 529 audit entries on Windows XP

    - by Make it useful Keep it simple
    In windows server 2003 when an Event 529 (logon failure) occures with a logon type of 10 (remote logon), the source network IP address is recorded in the event log. On a windows XP machine, this (and some other details) are omitted. If a bot is trying a brute force over RDP (some of my XP machines are (and need to be) exposed with a public IP address), i cannot see the originating IP address so i don't know what to block (with a script i run every few minutes). The DC does not log this detail either when the logon attempt is to the client xp machine and the DC is only asked to authenticate the credentials. Any help getting this detail in the log would be appreciated.

    Read the article

  • Log file is not getting created using JDK logging with Commons-logging

    - by Saida Dhanavath
    When I run the TestJCLLoggingService class, log messages appear on the console but no log file is created; please help me if you know the answer. The two source files are pasted below.
    TestJCLLoggingService.java:
      package com.amadeus.psp.pasd.logging;

      import org.apache.commons.logging.Log;
      import org.apache.commons.logging.LogFactory;
      import org.springframework.stereotype.Service;

      @Service
      public class TestJCLLoggingService {
          private static Log psp_log = LogFactory.getLog(TestJCLLoggingService.class);

          public static String testJCLLoggingServiceMethod() {
              psp_log.info("start of method testJCLLoggingServiceMethod class TestJCLLoggingService");
              psp_log.info("start of method testJCLLoggingServiceMethod class TestJCLLoggingService");
              return "This is a test string for JCLLogging";
          }

          public static void main(String[] args) {
              testJCLLoggingServiceMethod();
          }
      }
    logging.properties:
      handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler
      .level = ALL
      com.amadeus.psp.pasd.level = ALL
      java.util.logging.ConsoleHandler.level = ALL
      java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
      java.util.logging.FileHandler.pattern = %h/java%u.log
      java.util.logging.FileHandler.level = ALL
      java.util.logging.FileHandler.limit = 50000
      java.util.logging.FileHandler.count = 1
      java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
      java.util.logging.FileHandler.append = true
    Thanks in advance.
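
    A likely reason the file never appears is that java.util.logging does not pick up a logging.properties from the application's own directory or classpath: by default it only reads the one shipped with the JRE, unless -Djava.util.logging.config.file=/path/to/logging.properties is passed. A small sketch of an alternative is to load the file programmatically before the first log statement runs (the class name and file location below are assumptions):

      import java.io.FileInputStream;
      import java.io.IOException;
      import java.util.logging.LogManager;

      public final class LoggingBootstrap {
          private LoggingBootstrap() { }

          /** Point java.util.logging at our own properties file (assumed to sit in the working directory). */
          public static void init() throws IOException {
              try (FileInputStream in = new FileInputStream("logging.properties")) {
                  LogManager.getLogManager().readConfiguration(in);
              }
          }
      }

    Calling LoggingBootstrap.init() at the start of main(), before the first psp_log.info(...) call, should let the FileHandler defined in the properties file be created.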

    Read the article

  • How to force InnoSetup to create an uninstall log file

    - by mkva
    Hi, I'm using InnoSetup to create my application installers, and I set the SetupLogging=yes flag to always create a setup log file in the %TEMP% directory. This works fine for the installation procedure. Unfortunately, InnoSetup will not create such a log file when I uninstall the application. Is there a flag or some other way to force InnoSetup to also create an uninstall log file?

    Read the article

  • What am I missing with log4net - No log file created

    - by Saif Khan
    I am trying to use log4net in a VB.NET app, and for some unknown reason it's not creating the log file. Here is my app.config:
      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <configSections>
          <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
        </configSections>
        <log4net>
          <appender name="FileAppender" type="log4net.Appender.FileAppender">
            <file value="c:\log-file.txt" />
            <appendToFile value="true" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
            </layout>
          </appender>
          <root>
            <level value="ALL" />
            <appender-ref ref="FileAppender" />
          </root>
        </log4net>
      </configuration>
    Here is the app code:
      Imports log4net

      Public Class Form1
          Dim log As ILog

          Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
              log.Error("test")
          End Sub

          Private Sub Form1_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
              log4net.Config.XmlConfigurator.Configure()
              log = log4net.LogManager.GetLogger("TestThings")
          End Sub
      End Class
    "TestThings" is the name of the VS project. What am I missing?

    Read the article

  • Assigning console.log to another object (Webkit issue)

    - by Trevor Burnham
    I wanted to keep my logging statements as short as possible while preventing console from being accessed when it doesn't exist; I came up with the following solution:
      var _ = {};
      if (console) {
          _.log = console.debug;
      } else {
          _.log = function() { };
      }
    To me this seems quite elegant, and it works great in Firefox 3.6 (including preserving the line numbers that make console.debug more useful than console.log). But it doesn't work in Safari 4. [Update: Or in Chrome. So the issue seems to be a difference between Firebug and the WebKit console.] If I follow the above with
      console.debug('A');
      _.log('B');
    the first statement works fine in both browsers, but the second generates a "TypeError: Type Error" in Safari. Is this just a difference between how Firebug and the Safari Web Developer Tools implement console? If so, it is VERY annoying on WebKit's part. Binding the console function to a prototype and then instantiating, rather than binding it directly to the object, doesn't help. I could, of course, just call console.debug from an anonymous function assigned to _.log, but then I'd lose my line numbers. Any other ideas?
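
    A sketch of one workaround, assuming Function.prototype.bind (or a polyfill) is available: WebKit throws the TypeError because the detached console.debug is later invoked without console as its this value, so keeping the method bound to console avoids the error.

      var _ = {};
      if (window.console && console.debug) {
          // bind() keeps `this` pointing at console, which is what WebKit insists on
          _.log = console.debug.bind(console);
      } else if (window.console && console.log) {
          _.log = console.log.bind(console);
      } else {
          _.log = function () { };
      }

    Whether the original call-site line numbers survive depends on the developer tools in use, which is the trade-off the question is wrestling with.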

    Read the article

  • git: 'log master..origin/master' not behaving as expected

    - by steve jaffe
    I'm trying to compare my copy of 'master' to the one in the remote repository that it tracks. I thought that the following command would work, and often it seems to. However, sometimes it produces nothing even though I know the remote branch has many changes, which I can confirm by doing a pull:
      git log master..origin/master
    Can anyone explain this behavior and tell me what command I should be using to determine the changes between local and remote? [Another piece of data: I've had it happen that 'git log master..origin/master' produces nothing. Then I do a pull. The pull fails because I have local changes to some file. After this, 'git log master..origin/master' does show me the differences. It seems the pull has updated something locally? If so, how could I achieve this without doing (or attempting to do) a pull?]
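
    The behavior described is consistent with origin/master being a remote-tracking ref that only moves when you fetch (or pull, which fetches first): until then, master..origin/master compares against a stale snapshot and can be empty. A small sketch of the fetch-then-compare workflow:

      git fetch origin                  # update origin/* remote-tracking refs; touches nothing local
      git log master..origin/master     # commits the remote has that you don't
      git log origin/master..master     # commits you have that the remote doesn't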

    Read the article

  • Windows Forms Event Log

    - by blu
    I am writing to the event log from my Windows Forms application running on Windows 7 and am getting this message in the event log: The description for Event ID X from source Application cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: Exception Details the message resource is present but the message is not found in the string/message table My logging code is: public void Log(Exception exc) { EventLog.WriteEntry( "Application", exc.ToString(), EventLogEntryType.Error, 100); } My logging on Windows Forms is usually to a DB, but in this case decided to use the event log. I usually use the event log in ASP.NET applications, but those are on XP Pro locally and Windows Server 2003 on the web boxes. Is this a Windows 7 thing or a Windows Forms thing, and what should I do to fix this? Thanks.
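
    This warning text is not specific to Windows 7 or Windows Forms; it appears whenever an entry is written under a source (here the generic "Application" source) whose registered message resources don't contain the event ID. A sketch of the usual workaround: register a dedicated source once (which needs admin rights, so it is typically done by an installer), after which .NET supplies a placeholder message resource and the wrapper text goes away. The source name below is hypothetical.

      using System.Diagnostics;

      public static class AppEventLog
      {
          private const string Source = "MyWinFormsApp";   // hypothetical source name

          public static void LogError(string message)
          {
              // One-time registration; requires elevation, so usually done at install time.
              if (!EventLog.SourceExists(Source))
              {
                  EventLog.CreateEventSource(Source, "Application");
              }
              EventLog.WriteEntry(Source, message, EventLogEntryType.Error, 100);
          }
      }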

    Read the article

  • Creating multiple log files of different content with log4j

    - by Daniel
    Is there a way to configure log4j so that it outputs different levels of logging to different appenders? I'm trying to set up multiple log files. The main log file would catch all INFO and above messages for all classes. (In development, it would catch all DEBUG and above messages, and TRACE for specific classes.) Then, I would like to have a separate log file. That log file would catch all DEBUG messages for a specific subset of classes, and ignore all messages for any other class. Is there a way to get what I'm after? Thanks, Dan
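
    Yes; the usual pattern is one appender per file, a Threshold on the main appender, and a separate logger for the package that should be captured at DEBUG. A sketch in log4j 1.x properties syntax, with placeholder file and package names:

      # Main log: INFO and above from everything.
      log4j.rootLogger=INFO, mainFile
      log4j.appender.mainFile=org.apache.log4j.RollingFileAppender
      log4j.appender.mainFile.File=logs/app.log
      log4j.appender.mainFile.Threshold=INFO
      log4j.appender.mainFile.layout=org.apache.log4j.PatternLayout
      log4j.appender.mainFile.layout.ConversionPattern=%d %-5p %c - %m%n

      # Debug log: DEBUG and above, but only for one package.
      log4j.logger.com.example.subsystem=DEBUG, debugFile
      log4j.appender.debugFile=org.apache.log4j.FileAppender
      log4j.appender.debugFile.File=logs/subsystem-debug.log
      log4j.appender.debugFile.layout=org.apache.log4j.PatternLayout
      log4j.appender.debugFile.layout.ConversionPattern=%d %-5p %c - %m%n

    Because the subsystem logger's events still bubble up to the root appender, the Threshold=INFO line is what keeps its DEBUG chatter out of the main file; setting log4j.additivity.com.example.subsystem=false instead would keep that package out of the main log entirely.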

    Read the article

  • Application log aggregation, management and notifications...

    - by Matthew Savage
    I'm wondering what everyone is using for logging, log management and log aggregation on their systems. I am working in a company which uses .NET for all its applications, and all systems are Windows based. Currently each application looks after its own logging and notification of failures (e.g. if app A fails it will send out its own 'call for help' to an admin). While this current practice works, it's a bit hacky and hard to manage. I've been trying to find some options for making this work better, and I've come up with the following:
      - log4net & Chainsaw (ah, if it works)
      - logging via log4net or another framework into a central database and rolling our own management tool
      - logging to the Windows event log and using MOM or System Center Operations Manager to aggregate and manage each of these servers and their apps
      - a hand-rolled solution to suck all the log files into one point and work some magic across them
    Essentially what we are after is something which can pull log entries together and allow some analytics to be run across them, plus a kind of event-based system to, for example, send out a warning email when there have been 30+ warning-level logs for an application in the last x minutes. So is there anything I've missed, or something someone else can suggest?

    Read the article

  • BasicConfigurator in Apache Commons Logging

    - by adisembiring
    I use log4j as the logger for my web application. With log4j I can set the log level in log4j.properties or log4j.xml, and we get a logger instance as follows:
      static Logger logger = Logger.getLogger(SomeClass.class);
    I initialize the log4j BasicConfigurator in a servlet's init method. But I usually test the application using JUnit, so there I initialize the BasicConfigurator in the setUp method; after that I run the test and I can see the log. Because the application is deployed on WebSphere, I changed all of the logger instances to:
      private Log log = LogFactory.getLog(Foo.class);
    I don't know how to load a basic configurator through Apache Commons Logging, so I can't control the debug level for my JUnit tests. Do you have any suggestion that doesn't involve changing private Log log = LogFactory.getLog(Foo.class); back to static Logger logger = Logger.getLogger(SomeClass.class);?
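
    A small sketch of one common arrangement, assuming log4j is still on the test classpath so that commons-logging delegates to it: configure log4j once per test class and set the level on the package you care about, and the Log objects obtained via LogFactory.getLog pick it up unchanged. Package and class names below are placeholders.

      import org.apache.log4j.BasicConfigurator;
      import org.apache.log4j.Level;
      import org.apache.log4j.Logger;
      import org.junit.BeforeClass;

      public class FooTest {

          @BeforeClass
          public static void initLogging() {
              BasicConfigurator.configure();                            // simple console appender
              Logger.getLogger("com.example").setLevel(Level.DEBUG);    // raise the level for one package
          }

          // ... tests that exercise classes using LogFactory.getLog(...) ...
      }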

    Read the article

  • Recommendations for a high-volume log event viewer in a Java environment

    - by Thorbjørn Ravn Andersen
    I am in a situation where I would like to accept a LOT of log events under my control - notably from the logging agent I am preparing for slf4j - and then analyze them interactively. I am not interested in a facility that presents formatted log files as such, but in one that can accept log events as objects and let me sort and display them, e.g. by thread and timeline. Chainsaw might be an option, but it is currently not compatible with logback, which I use for technical reasons. Is there any project, either a stand-alone viewer or embedded in an IDE, that would be suitable for this kind of log handling? I am aware that I am approaching what might be suitable for a profiler, so if there is a profiler project suitable for this kind of data acquisition and display into which I can feed the event pipe, I would like to hear about it. Thanks for all feedback. Update 2009-03-19: I have found that there is no log viewer that shows what I would like (a visual display of events with coordinates determined by day and time, etc.), so I have decided to create a very terse XML format derived from the log4j XMLLayout, adapted to be as readable as possible while still consisting of valid XML snippets, and then use the Microsoft LogParser to extract the information I need for post-processing in other tools.

    Read the article

  • Spring ApplicationContextShutdownBean entries in log

    - by Eric J.
    My SpringSource dm Server log is full of lines like the following: com.springsource.server.kernel.dm.ApplicationContextShutdownBean < void com.springsource.server.kernel.dm.ApplicationContextShutdownBean.onApplicationEvent(ApplicationEvent) making it hard to spot interesting log events. How can I turn these log entries off?

    Read the article

  • Why isn't Passenger respecting my custom log format?

    - by Millisami
    I need to change the log format of my Rails app. I put this file in the lib directory and required it in the development.rb environment file:
      require 'hodel_3000_compliant_logger'
      config.logger = Hodel3000CompliantLogger.new(config.log_path)
    With that, I should get output in development.log like the following:
      Jun 28 03:05:13 millisami-notebook rails[18243]: Memory usage: 86888 | PID: 18243
    I get exactly this log format when I start my app with script/server (Mongrel). But when I run the app via Passenger, the format being logged is Rails' default. Why doesn't Passenger write to the log file like Mongrel does?

    Read the article

  • Log a user in to an ASP.net application using Windows Authentication without using Windows Authentic

    - by Rising Star
    I have an ASP.net application I'm developing authentication for. I am using an existing cookie-based log on system to log users in to the system. The application runs as an anonymous account and then checks the cookie when the user wants to do something restricted. This is working fine. However, there is one caveat: I've been told that for each page that connects to our SQL server, I need to make it so that the user connects using an Active Directory account. Because the system I'm using is cookie based, the user isn't logged in to Active Directory. Therefore, I use impersonation to connect to the server as a specific account. However, the powers that be here don't like impersonation; they say that it clutters up the code. I agree, but I've found no way around this. It seems that the only way a user can be logged in to an ASP.net application is by either connecting with Internet Explorer from a machine where the user is logged in with their Active Directory account, or by typing an Active Directory username and password. Neither of these is workable in my application. It would be nice if, when a user logs in and receives the cookie (which actually comes from a separate log on application, by the way), some code could run which tells the application to perform all network operations as the user's Active Directory account, just as if they had typed an Active Directory username and password. It seems like this ought to be possible somehow, but the solution evades me. How can I make this work?
    Update: To those who have responded so far, I apologize for the confusion I have caused. The responses I've received indicate that you've misunderstood the question, so please allow me to clarify. I have no control over the requirement that users must perform network operations (such as SQL queries) using Active Directory accounts. I've been told several times (online and in meat-space) that this is an unusual requirement and possibly bad practice. I also have no control over the requirement that users must log in using the existing cookie-based log on application. I understand that in an ideal MS ecosystem, I would simply disallow anonymous access in my IIS settings and users would log in using Windows Authentication. This is not the case. The current system is that, as far as IIS is concerned, the user logs in anonymously (even though they supply credentials which result in the issuance of a cookie), and we must programmatically check the cookie to see if the user has access to any restricted resources. In times past, we have simply used a single SQL account to perform all queries. My direct supervisor (who has many years of experience with this sort of thing) wants to change this. He says that if each user has his own AD account to perform SQL queries, it gives us more of a trail to follow if someone tries to do something wrong. The closest thing I've managed to come up with is using WIF to give the user a claim to a specific Active Directory account, but I still have to use impersonation because even then, the ASP.net process presents anonymous credentials to the SQL server. It boils down to this: can I log users in with Active Directory accounts in my ASP.net application without having the users manually enter their AD credentials? (Windows Authentication)

    Read the article
