Search Results

Search found 4489 results on 180 pages for 'logging'.

Page 36/180 | < Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >

  • Oracle: show parameters on error

    - by llappall
    When a parameterized SQL query fails, Oracle logs "?" in place of the parameters, i.e. the query as it looked before parameter substitution. For example, "SELECT * FROM table where col like '?'" fails with: SQL state [99999]; error code [29902]; ORA-29902: error in executing ODCIIndexStart() routine; ORA-20000: Oracle Text error: DRG-50901: text query parser syntax error on line 1, column 48. Is there a way to change the logging so it shows the parameter values? The information above is useless unless I can see what the actual parsing problem was. In general, is there a way to configure Oracle logging to show parameter values in errors from parameterized queries?
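
    The stack trace format above ("SQL state [99999]; error code [...]") looks like Spring JDBC's exception translation, and Oracle itself never reports the bind values, so two common approaches are a JDBC proxy driver such as P6Spy or log4jdbc (which log statements with the bound values filled in), or logging the SQL together with its parameters on the application side around the call. The sketch below assumes Spring's JdbcTemplate and slf4j; the class and method names are invented for illustration.

        import java.util.Arrays;
        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;
        import org.springframework.dao.DataAccessException;
        import org.springframework.jdbc.core.JdbcTemplate;
        import org.springframework.jdbc.core.ResultSetExtractor;

        // Hypothetical helper: on failure, log the statement together with its
        // actual bind values instead of the '?' placeholders Oracle reports.
        public class LoggingQueryRunner {

            private static final Logger log = LoggerFactory.getLogger(LoggingQueryRunner.class);

            public <T> T query(JdbcTemplate jdbc, String sql, Object[] params, ResultSetExtractor<T> extractor) {
                try {
                    return jdbc.query(sql, params, extractor);
                } catch (DataAccessException e) {
                    log.error("SQL [" + sql + "] failed with parameters " + Arrays.toString(params), e);
                    throw e;
                }
            }
        }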

    Read the article

  • IIS 6 session timing out a lot quicker than expected

    - by Echiban
    I am working with a web application whose sessions are timing out much more quickly than expected. We expected a timeout of 15 minutes, but it's timing out at 3-4 minutes. Info about the environment: IIS 6; classic ASP / COM+ app; the timeout is OK on current PROD, but much quicker in the dev / QA environments. We already disabled app pool recycling, and even put IIS in isolation mode - no effect. The HTTP error log doesn't show any entries when a session times out. We've done a close comparison of the PROD and DEV / QA environments, and given we use virtual machines for all of them, settings should be preserved. I tried to find the IIS blog notes from David Wang, but many of them now return HTTP 404 errors, and I don't know what else to do. Please help! At the very least, is there a way to get IIS to log every time a session expires? Failing that, any means of logging / debugging IIS would be useful. Thanks in advance.

    Read the article

  • Instrumenting Database Access

    - by Whisk
    Jeff mentioned in one of the podcasts that one of the things he always does is put instrumentation around database calls, so that he can tell which queries are causing slowness, etc. This is something I've measured in the past using SQL Profiler, but I'm interested in what strategies other people have used to include this as part of the application. Is it simply a case of wrapping a timer around each database call and logging the result, or is there a 'neater' way of doing it? Maybe there's a framework that already does this for you, or a flag I could enable in e.g. Linq-to-SQL that would provide similar functionality. I mainly use C#, but I would also be interested in seeing approaches from other languages, and I'd be more interested in a 'code' way of doing this than a DB platform method like SQL Profiler.

    Read the article

  • Want red-colored text for error messages in the log file

    - by swati
    Hello everyone, I have a question, but I'm not sure whether it is possible. I am using the Apache logger for my logging, which creates a log file and works fine with no issues. My question is: when I open the log file, I see messages at different levels such as INFO, DEBUG, ERROR, etc. I would like the error messages to appear as red text in my log file. Is that possible? That way, if someone opens my log file and sees something in red, they can immediately tell it is an error message. I would really appreciate it if someone could respond. Thanks, Swati
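
    Assuming "the Apache logger" here means log4j 1.x: a plain-text log file cannot carry color by itself, but if the file is usually viewed in a terminal (tail -f, less -R) the layout can wrap ERROR lines in ANSI escape codes; another option is log4j's HTMLLayout, which writes an HTML log where the level sits in its own table column. A rough sketch of the ANSI approach, with an invented class name:

        import org.apache.log4j.Level;
        import org.apache.log4j.PatternLayout;
        import org.apache.log4j.spi.LoggingEvent;

        // Sketch: wrap ERROR (and FATAL) lines in ANSI red. Only useful when the
        // file is viewed with something that interprets ANSI codes; plain editors
        // will show the escape sequences literally.
        public class RedErrorPatternLayout extends PatternLayout {

            private static final String RED = "\u001B[31m";
            private static final String RESET = "\u001B[0m";

            public RedErrorPatternLayout(String pattern) {
                super(pattern);
            }

            public String format(LoggingEvent event) {
                String line = super.format(event);
                if (event.getLevel().isGreaterOrEqual(Level.ERROR)) {
                    return RED + line + RESET;
                }
                return line;
            }
        }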

    Read the article

  • In Log4Net XML configuration is Priority the same thing as Level?

    - by Michael Levy
    I inherited some code that uses the priority element under the root in its XML configuration. This is just like the example at http://iserialized.com/log4net-for-noobs/ which shows: <root> <priority value="ALL" /> <appender-ref ref="LogFileAppender" /> <appender-ref ref="ConsoleAppender"/> </root> However, the log4net configuration examples at http://logging.apache.org/log4net/release/manual/configuration.html always show the level element being used: <root> <level value="DEBUG" /> <appender-ref ref="A1" /> </root> In this type of configuration, is <priority> the same as <level>? Can someone point me to where in the docs this is explained?

    Read the article

  • Jetty with a custom JUL logger

    - by Alan Williamson
    I feel this should be easier, or I am missing something obvious. I am trying to use our custom JUL logging library with Jetty. No matter where I put the JAR file for the custom logger, it is not found. I have tried the usual suspects: /lib/, /lib/ext/, /WEB-INF/lib/, and even manually adding it to the classpath. 2011-06-29 15:27:34.518::INFO: Started [email protected]:8080 Can't load log handler "net.aw20.logshot.client.LogShotHandler" java.lang.ClassNotFoundException: net.aw20.logshot.client.LogShotHandler java.lang.ClassNotFoundException: net.aw20.logshot.client.LogShotHandler at java.net.URLClassLoader$1.run(URLClassLoader.java:217) I am starting Jetty using the "-jar start.jar" technique. Searching around, I have spotted a couple of threads that discuss this problem, but with no resolution, or if there was one, the posters didn't share it. Can anyone help on this front? Thanks
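
    One likely culprit, offered as a guess: when a handler class is named in a JUL logging.properties file, java.util.logging's LogManager tries to instantiate it very early, typically via the system classpath, so a JAR that is only visible to Jetty's or the webapp's classloader can produce exactly this ClassNotFoundException. A workaround is to attach the handler programmatically from inside the webapp, where WEB-INF/lib is visible, for example from a ServletContextListener registered in web.xml. A minimal sketch, assuming LogShotHandler has a no-arg constructor:

        import java.util.logging.Logger;

        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        import net.aw20.logshot.client.LogShotHandler;

        // Sketch: register this class as a <listener> in web.xml so the custom
        // JUL handler is attached using the webapp's own classloader.
        public class LogShotBootstrapListener implements ServletContextListener {

            public void contextInitialized(ServletContextEvent sce) {
                // "" is the root logger; records from other loggers propagate up to it.
                Logger.getLogger("").addHandler(new LogShotHandler());
            }

            public void contextDestroyed(ServletContextEvent sce) {
                // Optionally remove the handler here to avoid duplicates on redeploy.
            }
        }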

    Read the article

  • How do I change the default format of log messages in python app engine?

    - by dazed-n-confused
    I would like to log the module and class name by default in log messages from my request handlers. The usual way to do this seems to be to set a custom format string by calling logging.basicConfig, but that can only be called once and has already been called by the time my code runs. Another approach is to create a new log Handler which can be given a new log Formatter, but this doesn't seem right either, since I want to use the existing log handler that App Engine has installed. What is the right way to have extra information added to all log messages in Python App Engine, while otherwise using the existing log format and sink?

    Read the article

  • Seam log4j credential logs

    - by Marc
    In Seam, using log4j, I would like my info, warn and error messages to always include the name of the logged-in user (when there is one), along with whatever the log message is. Since this is a cross-cutting concern, I do not want to grab the logged-in user name and prefix every message manually, so I attempted to populate the log4j NDC so the name appears as a field of the log message, pushing the user name on successful login: NDC.push(credentials.getUsername()); This works, but the NDC is managed per thread, so once another thread processes a request from the same logged-in user, the user name is lost. I would think there is a common pattern for this simple task, attaching the logged-in user to each log message, with the NDC or otherwise, so you know exactly which user triggered which action. Does anyone know the appropriate way to accomplish this?
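
    A common pattern for this is the MDC rather than the NDC, populated per request instead of per login, so it doesn't matter which pooled thread handles the request; a servlet filter is one convenient place to do it, and the pattern layout then picks the value up with %X{username}. A minimal sketch, assuming a standard servlet environment; in a Seam application the user name would more likely come from the Identity/Credentials component than from getUserPrincipal(), so treat that lookup as a placeholder:

        import java.io.IOException;
        import java.security.Principal;

        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;

        import org.apache.log4j.MDC;

        // Sketch: put the current user into the log4j MDC for the duration of each
        // request, so every log line on that thread can include %X{username}.
        public class UsernameLoggingFilter implements Filter {

            public void init(FilterConfig config) throws ServletException { }

            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                Principal principal = ((HttpServletRequest) request).getUserPrincipal();
                MDC.put("username", principal != null ? principal.getName() : "anonymous");
                try {
                    chain.doFilter(request, response);
                } finally {
                    MDC.remove("username"); // don't leak the name to the next request on this pooled thread
                }
            }

            public void destroy() { }
        }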

    Read the article

  • Monitoring .NET ASP.NET Applications

    - by James Hollingworth
    I have a number of applications running on top of ASP.NET that I want to monitor. The main things I care about are: Exceptions: We currently have some custom code which emails us when an exception occurs. If the application is failing hard it will crash our Outlook... I know (and use) ELMAH, which partly solves the problem, however it is still just a big table of exceptions with a pretty(ish) UI. I want something that makes sense of all of these exceptions (e.g. groups exceptions, alerts when new ones occur, tells me which common ones I should fix, etc.). Logging: We currently log to files which are then accessible via a shared folder, which devs grep and tail. Does anyone know of better ways of presenting this information? In an ideal world I want to associate it with exceptions. Performance: Request times, memory usage, CPU, etc.; whatever stats I can get. I'm guessing this is probably going to be solved by a number of tools; has anyone got any suggestions?

    Read the article

  • What to log when an exception occurs?

    - by Rune
    public void EatDinner(string appetizer, string mainCourse, string dessert) { try { // Code } catch (Exception ex) { Logger.Log("Error in EatDinner", ex); return; } } When an exception occurs in a specific method, what should I be logging? I see a lot of the above in the code I work with. In these cases, I always have to talk to the person who experienced the error to find out what they were doing, step through the code, and try to reproduce the error. Are there any best practices or ways I can minimize all this extra work? Should I log the parameters in each method, like this? Logger.Log("Params: " + appetizer + "," + mainCourse + "," + dessert, ex); Is there a better way to log the current environment? If I do it this way, will I need to write out all this stuff for every method in my application? Are there any best practices for scenarios like this?

    Read the article

  • Can anyone recommend a .Net XML Serialization library?

    - by James
    Can anyone recommend a .NET XML serialization library (ideally open source)? I am looking for a robust XML serialization library that I can throw any object at, and which will produce a human-readable XML representation of the public properties for logging purposes. I never need to be able to deserialize. XmlSerializer's requirement that an object have a parameterless constructor is too restrictive for what I want. DataContractSerializer does not give enough control over the output (which is not particularly human-readable). Any recommendations appreciated! Thanks

    Read the article

  • Is there a Java 1.5 varargs API for slf4j yet?

    - by Josh
    I want to get rid of this lot... public void info(String msg); public void info(String format, Object arg); public void info(String format, Object arg1, Object arg2); public void info(String format, Object[] argArray); ...and replace it with this one... public void info(String format, Object ... args); ...so that my logging syntax doesn't have to change depending on the number of arguments I want to log. There seems to be lots of discussion and work around it, but where is it? Or should I wrap the wrapper that is slf4j?
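
    Until the library itself offers varargs overloads (I believe later slf4j releases did eventually switch these signatures to Object...), a thin wrapper works, because a varargs parameter arrives as an Object[] and can be handed straight to the existing info(String, Object[]) method. A minimal sketch of such a wrapper, with an invented class name:

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        // Sketch: varargs facade over an slf4j Logger, so call sites never change
        // shape regardless of how many arguments are logged.
        public final class VarargsLog {

            private final Logger delegate;

            private VarargsLog(Logger delegate) {
                this.delegate = delegate;
            }

            public static VarargsLog getLogger(Class<?> clazz) {
                return new VarargsLog(LoggerFactory.getLogger(clazz));
            }

            public void info(String format, Object... args) {
                // The Object... parameter arrives as an Object[], which matches
                // the existing info(String, Object[]) overload on org.slf4j.Logger.
                delegate.info(format, args);
            }

            public void debug(String format, Object... args) {
                delegate.debug(format, args);
            }

            public void warn(String format, Object... args) {
                delegate.warn(format, args);
            }

            public void error(String format, Object... args) {
                delegate.error(format, args);
            }
        }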

    Read the article

  • newline in Rackspace Cron Job email

    - by senloe
    I'm running backups against multiple databases hosted at Rackspace. This is working fine. The problem I'm running into is with the results email. I'm using Response.Write to write a message to the web page, which is used for logging and is also consumed by the results mail sent out by the job. The problem is I can't seem to get newlines to appear between log messages. The log file stored on the server is correct, but only the first newline shows up in the email. The mail is in plain-text format, so I tried using "\n" and System.Environment.NewLine, and neither works. I also tried using <br/> with no luck. Does anybody have any ideas?

    Read the article

  • Why would Django fcgi just die? How can I find out?

    - by Joe
    I'm running Django on Linux using fcgi and Lighttpd. Every now and again (about once a day) the server just dies. I'm using the latest stable release of Django, Python and Lighttpd. The only thing I can think of is that my program is opening a lot of files and executing a lot of external processes, but I'm fairly sure that side of things is watertight. Looking at the error and access logs, there's nothing exceptional happening (i.e. load isn't above normal). On those occasions where I have had exceptions from Python, these have shown up in the error.log, but when this crash happens I get nothing. Is there any way of finding out why the process died? Short of putting logging statements on every single line? Obviously I can't reproduce this so I don't know exactly where to look.

    Read the article

  • how to generate the instance for logger?

    - by Elakkiya
    Here is my code:

        package com.my;

        import org.apache.log4j.spi.LoggerFactory;
        import java.io.*;
        import java.util.logging.*;

        public class Log {
            public static void main(String[] args) {
                try {
                    FileHandler hand = new FileHandler("vk.log");
                    Logger log = Logger.getLogger("log_file");
                    log.addHandler(hand);
                    log.warning("Doing carefully!");
                    log.info("Doing something ...");
                    log.severe("Doing strictily ");
                    System.out.println(log.getName());
                } catch (IOException e) {
                    System.out.println(e);
                }
            }
        }
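
    Two asides on the snippet above, offered as a sketch rather than the one right answer: the org.apache.log4j.spi.LoggerFactory import is unused (everything here is java.util.logging), and a FileHandler writes XML records by default, so attaching a SimpleFormatter gives the plain one-line-per-record output people usually expect. A small variation of the same code:

        import java.io.IOException;
        import java.util.logging.FileHandler;
        import java.util.logging.Level;
        import java.util.logging.Logger;
        import java.util.logging.SimpleFormatter;

        public class PlainTextLogExample {
            public static void main(String[] args) throws IOException {
                FileHandler hand = new FileHandler("vk.log");
                hand.setFormatter(new SimpleFormatter()); // default would be XMLFormatter
                hand.setLevel(Level.ALL);

                Logger log = Logger.getLogger("log_file");
                log.addHandler(hand);
                log.setLevel(Level.ALL); // let everything through to the handler
                log.info("Doing something ...");
            }
        }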

    Read the article

  • Boost.Log - Multiple processes to one log file?

    - by Kevin
    Reading through the doc for Boost.Log, it explains how to "fan out" into multiple files/sinks pretty well from one application, and how to get multiple threads working together to log to one place, but is there any documentation on how to get multiple processes logging to a single log file? What I imagine is that every process would log to its own "private" log file, but in addition, any messages above a certain severity would also go to a "common" log file. Is this possible with Boost.Log? Is there some configuration of the sinks that makes this easy? I understand that I will likely have the same "timestamp out of order" problem described in the FAQ here, but that's OK, as long as the timestamps are correct I can work with that. This is all on one machine, so no remote filesystem problems either.

    Read the article

  • Jboss logging issue

    - by balaji
    I'm working as a deployer and server administrator. We use JBoss 4.0.x AS to deploy our applications. The issue I'm facing is that whenever we redeploy or restart the server, server.log gets created, but after some time logging stops altogether; server.log is not updated at all. Because of this, we cannot trace the other critical issues we have. We have two separate nodes and we deploy to / restart the server separately on each node, and we see this issue in both our test and production environments. I could not work out where exactly the problem is. Could you please help me resolve it? If we have any other issues, we can check the log files; but if the log itself is not being updated, how can we go any further in analyzing those issues without recent logs? Below are the entries found in stdout.log:

        18:55:50,303 INFO [Server] Core system initialized
        18:55:52,296 INFO [WebService] Using RMI server codebase: http://kl121tez.is.klmcorp.net:8083/
        18:55:52,313 INFO [Log4jService$URLWatchTimerTask] Configuring from URL: resource:log4j.xml
        18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFTraceLogger without a class name.
        18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFMessageLogger without a class name.
        18:55:54,273 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheTraceLogger without a class name.
        18:55:54,274 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheMessageLogger without a class name.
        18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCTraceLogger without a class name.
        18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCMessageLogger without a class name.
        18:55:56,059 INFO [ServiceEndpointManager] WebServices: jbossws-1.0.3.SP1 (date=200609291417)
        18:55:56,635 INFO [Embedded] Catalina naming disabled
        18:55:56,671 INFO [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set.
        18:55:56,672 INFO [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set.
        18:55:56,843 INFO [Http11BaseProtocol] Initializing Coyote HTTP/1.1 on http-0.0.0.0-8180
        18:55:56,844 INFO [Catalina] Initialization processed in 172 ms
        18:55:56,844 INFO [StandardService] Starting service jboss.web

    Read the article

  • Log4net duplicate logging entries

    - by user210713
    I recently switched our log4net logging from using config files to being set up programmatically. This has resulted in the NHibernate entries being repeated two or sometimes three times. Here's the code. It uses a string which looks something like this: "logger1|debug,logger2|info"

        private void SetupLog4netLoggers()
        {
            IAppender appender = GetAppender();
            SetupRootLogger(appender);
            foreach (string logger in Loggers)
            {
                CommaStringList parts = new CommaStringList(logger, '|');
                if (parts.Count != 2)
                    continue;
                AddLogger(parts[0], parts[1], appender);
            }
            log.Debug("Log4net has been setup");
        }

        private IAppender GetAppender()
        {
            RollingFileAppender appender = new RollingFileAppender();
            appender.File = LogFile;
            appender.AppendToFile = true;
            appender.MaximumFileSize = MaximumFileSize;
            appender.MaxSizeRollBackups = MaximumBackups;
            PatternLayout layout = new PatternLayout(PATTERN);
            layout.ActivateOptions();
            appender.Layout = layout;
            appender.ActivateOptions();
            return appender;
        }

        private void SetupRootLogger(IAppender appender)
        {
            Hierarchy hierarchy = (Hierarchy)LogManager.GetRepository();
            hierarchy.Root.RemoveAllAppenders();
            hierarchy.Root.AddAppender(appender);
            hierarchy.Root.Level = GetLevel(RootLevel);
            hierarchy.Configured = true;
            log.Debug("Root logger setup, level[" + RootLevel + "]");
        }

        private void AddLogger(string name, string level, IAppender appender)
        {
            Logger logger = LogManager.GetRepository().GetLogger(name) as Logger;
            if (logger == null)
                return;
            logger.Level = GetLevel(level);
            logger.Additivity = false;
            logger.RemoveAllAppenders();
            logger.AddAppender(appender);
            log.Debug("logger[" + name + "] added, level[" + level + "]");
        }

    And here's an example of what we see in our logs...

        2010-05-06 15:50:39,781 [1] DEBUG NHibernate.Impl.SessionImpl - running ISession.Dispose()
        2010-05-06 15:50:39,781 [1] DEBUG NHibernate.Impl.SessionImpl - closing session
        2010-05-06 15:50:39,781 [1] DEBUG NHibernate.AdoNet.AbstractBatcher - running BatcherImpl.Dispose(true)
        2010-05-06 15:50:39,796 [1] DEBUG NHibernate.Impl.SessionImpl - running ISession.Dispose()
        2010-05-06 15:50:39,796 [1] DEBUG NHibernate.Impl.SessionImpl - closing session
        2010-05-06 15:50:39,796 [1] DEBUG NHibernate.AdoNet.AbstractBatcher - running BatcherImpl.Dispose(true)
        2010-05-06 15:50:39,796 [1] DEBUG NHibernate.Impl.SessionImpl - running ISession.Dispose()
        2010-05-06 15:50:39,796 [1] DEBUG NHibernate.Impl.SessionImpl - closing session
        2010-05-06 15:50:39,796 [1] DEBUG NHibernate.AdoNet.AbstractBatcher - running BatcherImpl.Dispose(true)

    Any hints welcome.

    Read the article

  • Logging in to a website with cURL

    - by uknowho_freeman
    I am using cURL for the first time. I need to log in to a site. I am having problems setting the cookie file and retrieving it, so that I can access that page not just once but several times. I found the code below on the web for logging in to a site and scraping a page for some detailed info, because getting that page otherwise takes too much time. I just want to know if it is OK. The code below is just for logging in; the code for scraping is not ready yet.

        <?php
        curl_login('http://mywantedsite.com/login.php','user=******&pass=******','','off');
        echo curl_grab_page('http://mywantedsite.com/somepage.php','','off');

        function curl_login($url,$data,$proxy,$proxystatus){
            $fp = fopen("cookie.txt", "w");
            fclose($fp);
            $login = curl_init();
            curl_setopt($login, CURLOPT_COOKIEJAR, "cookie.txt");
            curl_setopt($login, CURLOPT_COOKIEFILE, "cookie.txt");
            curl_setopt($login, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)");
            curl_setopt($login, CURLOPT_TIMEOUT, 40);
            curl_setopt($login, CURLOPT_RETURNTRANSFER, TRUE);
            if ($proxystatus == 'on') {
                curl_setopt($login, CURLOPT_SSL_VERIFYHOST, FALSE);
                curl_setopt($login, CURLOPT_HTTPPROXYTUNNEL, TRUE);
                curl_setopt($login, CURLOPT_PROXY, $proxy);
            }
            curl_setopt($login, CURLOPT_URL, $url);
            curl_setopt($login, CURLOPT_HEADER, TRUE);
            curl_setopt($login, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
            curl_setopt($login, CURLOPT_FOLLOWLOCATION, TRUE);
            curl_setopt($login, CURLOPT_POST, TRUE);
            curl_setopt($login, CURLOPT_POSTFIELDS, $data);
            ob_start();      // prevent any output
            return curl_exec ($login); // execute the curl command
            ob_end_clean();  // stop preventing output
            curl_close ($login);
            unset($login);
        }

        function curl_grab_page($site,$proxy,$proxystatus){
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
            if ($proxystatus == 'on') {
                curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
                curl_setopt($ch, CURLOPT_HTTPPROXYTUNNEL, TRUE);
                curl_setopt($ch, CURLOPT_PROXY, $proxy);
            }
            curl_setopt($ch, CURLOPT_COOKIEFILE, "cookie.txt");
            curl_setopt($ch, CURLOPT_URL, $site);
            ob_start();      // prevent any output
            return curl_exec ($ch); // execute the curl command
            ob_end_clean();  // stop preventing output
            curl_close ($ch);
        }

    Read the article

  • C#: Adding Functionality to 3rd Party Libraries With Extension Methods

    - by James Michael Hare
    Ever have one of those third party libraries that you love but it's missing that one feature or one piece of syntactical candy that would make it so much more useful?  This, I truly think, is one of the best uses of extension methods.  I began discussing extension methods in my last post (which you find here) where I expounded upon what I thought were some rules of thumb for using extension methods correctly.  As long as you keep in line with those (or similar) rules, they can often be useful for adding that little extra functionality or syntactical simplification for a library that you have little or no control over. Oh sure, you could take an open source project, download the source and add the methods you want, but then every time the library is updated you have to re-add your changes, which can be cumbersome and error prone.  And yes, you could possibly extend a class in a third party library and override features, but that's only if the class is not sealed, static, or constructed via factories. This is the perfect place to use an extension method!  And the best part is, you and your development team don't need to change anything!  Simply add the using for the namespace the extensions are in! So let's consider this example.  I love log4net!  Of all the logging libraries I've played with, it, to me, is one of the most flexible and configurable logging libraries and it performs great.  But this isn't about log4net, well, not directly.  So why would I want to add functionality?  Well, it's missing one thing I really want in the ILog interface: ability to specify logging level at runtime. For example, let's say I declare my ILog instance like so:     using log4net;     public class LoggingTest     {         private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingTest));         ...     }     If you don't know log4net, the details aren't important, just to show that the field _log is the logger I have gotten from log4net. So now that I have that, I can log to it like so:     _log.Debug("This is the lowest level of logging and just for debugging output.");     _log.Info("This is an informational message.  Usual normal operation events.");     _log.Warn("This is a warning, something suspect but not necessarily wrong.");     _log.Error("This is an error, some sort of processing problem has happened.");     _log.Fatal("Fatals usually indicate the program is dying hideously."); And there's many flavors of each of these to log using string formatting, to log exceptions, etc.  But one thing there isn't: the ability to easily choose the logging level at runtime.  Notice, the logging levels above are chosen at compile time.  Of course, you could do some fun stuff with lambdas and wrap it, but that would obscure the simplicity of the interface.  And yes there is a Logger property you can dive down into where you can specify a Level, but the Level properties don't really match the ILog interface exactly and then you have to manually build a LogEvent and... well, it gets messy.  I want something simple and sexy so I can say:     _log.Log(someLevel, "This will be logged at whatever level I choose at runtime!");     Now, some purists out there might say you should always know what level you want to log at, and for the most part I agree with them.  For the most party the ILog interface satisfies 99% of my needs.  
In fact, for most application logging yes you do always know the level you will be logging at, but when writing a utility class, you may not always know what level your user wants. I'll tell you, one of my favorite things is to write reusable components.  If I had my druthers I'd write framework libraries and shared components all day!  And being able to easily log at a runtime-chosen level is a big need for me.  After all, if I want my code to really be re-usable, I shouldn't force a user to deal with the logging level I choose. One of my favorite uses for this is in Interceptors -- I'll describe Interceptors in my next post and some of my favorites -- for now just know that an Interceptor wraps a class and allows you to add functionality to an existing method without changing it's signature.  At the risk of over-simplifying, it's a very generic implementation of the Decorator design pattern. So, say for example that you were writing an Interceptor that would time method calls and emit a log message if the method call execution time took beyond a certain threshold of time.  For instance, maybe if your database calls take more than 5,000 ms, you want to log a warning.  Or if a web method call takes over 1,000 ms, you want to log an informational message.  This would be an excellent use of logging at a generic level. So here was my personal wish-list of requirements for my task: Be able to determine if a runtime-specified logging level is enabled. Be able to log generically at a runtime-specified logging level. Have the same look-and-feel of the existing Debug, Info, Warn, Error, and Fatal calls.    Having the ability to also determine if logging for a level is on at runtime is also important so you don't spend time building a potentially expensive logging message if that level is off.  Consider an Interceptor that may log parameters on entrance to the method.  If you choose to log those parameter at DEBUG level and if DEBUG is not on, you don't want to spend the time serializing those parameters. Now, mine may not be the most elegant solution, but it performs really well since the enum I provide all uses contiguous values -- while it's never guaranteed, contiguous switch values usually get compiled into a jump table in IL which is VERY performant - O(1) - but even if it doesn't, it's still so fast you'd never need to worry about it. So first, I need a way to let users pass in logging levels.  Sure, log4net has a Level class, but it's a class with static members and plus it provides way too many options compared to ILog interface itself -- and wouldn't perform as well in my level-check -- so I define an enum like below.     namespace Shared.Logging.Extensions     {         // enum to specify available logging levels.         public enum LoggingLevel         {             Debug,             Informational,             Warning,             Error,             Fatal         }     } Now, once I have this, writing the extension methods I need is trivial.  Once again, I would typically /// comment fully, but I'm eliminating for blogging brevity:     namespace Shared.Logging.Extensions     {         // the extension methods to add functionality to the ILog interface         public static class LogExtensions         {             // Determines if logging is enabled at a given level.             
public static bool IsLogEnabled(this ILog logger, LoggingLevel level)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         return logger.IsDebugEnabled;                     case LoggingLevel.Informational:                         return logger.IsInfoEnabled;                     case LoggingLevel.Warning:                         return logger.IsWarnEnabled;                     case LoggingLevel.Error:                         return logger.IsErrorEnabled;                     case LoggingLevel.Fatal:                         return logger.IsFatalEnabled;                 }                                 return false;             }             // Logs a simple message - uses same signature except adds LoggingLevel             public static void Log(this ILog logger, LoggingLevel level, object message)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message);                         break;                     case LoggingLevel.Informational:                         logger.Info(message);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message);                         break;                     case LoggingLevel.Error:                         logger.Error(message);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message);                         break;                 }             }             // Logs a message and exception to the log at specified level.             public static void Log(this ILog logger, LoggingLevel level, object message, Exception exception)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message, exception);                         break;                     case LoggingLevel.Informational:                         logger.Info(message, exception);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message, exception);                         break;                     case LoggingLevel.Error:                         logger.Error(message, exception);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message, exception);                         break;                 }             }             // Logs a formatted message to the log at the specified level.              
public static void LogFormat(this ILog logger, LoggingLevel level, string format,                                          params object[] args)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.DebugFormat(format, args);                         break;                     case LoggingLevel.Informational:                         logger.InfoFormat(format, args);                         break;                     case LoggingLevel.Warning:                         logger.WarnFormat(format, args);                         break;                     case LoggingLevel.Error:                         logger.ErrorFormat(format, args);                         break;                     case LoggingLevel.Fatal:                         logger.FatalFormat(format, args);                         break;                 }             }         }     } So there it is!  I didn't have to modify the log4net source code, so if a new version comes out, i can just add the new assembly with no changes.  I didn't have to subclass and worry about developers not calling my sub-class instead of the original.  I simply provide the extension methods and it's as if the long lost extension methods were always a part of the ILog interface! Consider a very contrived example using the original interface:     // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsWarnEnabled)             {                 _log.WarnFormat("Statement {0} took too long to execute.", statement);             }             ...         }     }     Now consider this alternate call where the logging level could be perhaps a property of the class          // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // allow logging level to be specified by user of class instead         public LoggingLevel ThresholdLogLevel { get; set; }                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsLogEnabled(ThresholdLogLevel))             {                 _log.LogFormat(ThresholdLogLevel, "Statement {0} took too long to execute.",                     statement);             }             ...         }     } Next time, I'll show one of my favorite uses for these extension methods in an Interceptor.

    Read the article

  • Write-only collections in MongoDB

    - by rcoder
    I'm currently using MongoDB to record application logs, and while I'm quite happy with both the performance and with being able to dump arbitrary structured data into log records, I'm troubled by the mutability of log records once stored. In a traditional database, I would structure the grants for my log tables such that the application user had INSERT and SELECT privileges, but not UPDATE or DELETE. Similarly, in CouchDB, I could write a update validator function that rejected all attempts to modify an existing document. However, I've been unable to find a way to restrict operations on a MongoDB database or collection beyond the three access levels (no access, read-only, "god mode") documented in the security topic on the MongoDB wiki. Has anyone else deployed MongoDB as a document store in a setting where immutability (or at least change tracking) for documents was a requirement? What tricks or techniques did you use to ensure that poorly-written or malicious application code could not modify or destroy existing log records? Do I need to wrap my MongoDB logging in a service layer that enforces the write-only policy, or can I use some combination of configuration, query hacking, and replication to ensure a consistent, audit-able record is maintained?

    Read the article

  • How to know about MySQL 'refused connections'

    - by celalo
    Hello, I am using MONyog to monitor my two MySQL servers. I get alert emails from MONyog when something goes wrong. There is one error whose cause I could not find. It says: Connection History: Percentage of refused connections - 66.67%. The percentage is not important; the point is that there are refused connections at all. I get this email every half an hour, so it is more or less a constant situation. This must be my mistake, because I just set up those servers and there is no chance somebody else could be interfering with them. MONyog advises me: "Try to isolate users/applications that are using an incorrect password or trying to connect from unauthorized hosts. A client will be disallowed to connect if it takes more than connect_timeout seconds to connect. Set the value of the log_warnings system variable to 2. This will force the MySQL server to log further information about the error." I added log_warnings=2 to my.cnf and enabled logging like this:

        [mysqld_safe]
        .
        .
        log_warnings=2
        log-error = /var/log/mysql/error.log
        .
        .
        .
        .
        [mysqld_safe]
        .
        log-error=/var/log/mysqld.log
        .
        .

    I cannot see any warnings in /var/log/mysql/error.log. I can see some warnings in /var/log/mysqld.log, but they are about something else. In sum, my question is: how can I detect refused connections? Please let me know if any more info is required. Thanks in advance.
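
    Two things worth checking, offered as guesses rather than a diagnosis: log_warnings is a server variable, and as far as I recall options that mysqld_safe does not recognize are ignored when placed in the [mysqld_safe] group, so in the excerpt above it may never reach mysqld at all; it belongs in the [mysqld] group. Also, the counters shown by SHOW GLOBAL STATUS LIKE 'Aborted_%' (Aborted_connects, Aborted_clients) give a server-side view of how often connections are actually being refused or dropped. A my.cnf fragment under that assumption (paths copied from the question):

        [mysqld]
        log_warnings = 2
        log-error    = /var/log/mysql/error.log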

    Read the article

  • Assigning console.log to another object (Webkit issue)

    - by Trevor Burnham
    I wanted to keep my logging statements as short as possible while preventing console from being accessed when it doesn't exist; I came up with the following solution: var _ = {}; if (console) { _.log = console.debug; } else { _.log = function() { } } To me, this seems quite elegant, and it works great in Firefox 3.6 (including preserving the line numbers that make console.debug more useful than console.log). But it doesn't work in Safari 4. [Update: Or in Chrome. So the issue seems to be a difference between Firebug and the WebKit console.] If I follow the above with console.debug('A'); _.log('B'); the first statement works fine in both browsers, but the second generates a "TypeError: Type Error" in Safari. Is this just a difference between how Firebug and the Safari Web Developer Tools implement console? If so, it is VERY annoying on WebKit's part. Binding the console function to a prototype and then instantiating, rather than binding it directly to the object, doesn't help. I could, of course, just call console.debug from an anonymous function assigned to _.log, but then I'd lose my line numbers. Any other ideas?

    Read the article

  • log4j vs. System.out.println - logger advantages?

    - by wishi_
    Hi! I'm newly using log4j in a project. A fellow programmer told me that using System.out.println is considered bas style and that log4j is something like standard for logging matters nowadays. We do lots of JUnit testing - System.out stuff turns out to be harder to test. Therefore I began utilizing log4j for a Console controller class, that's just handling commandline parameters. // some logger config org.apache.log4j.BasicConfigurator.configure(); Logger logger = LoggerFactory.getLogger(Console.class); Category cat = Category.getRoot(); Seems to work: logger.debug("String"); Produces: 1 [main] DEBUG project.prototype.controller.Console - String I got two questions regarding this: From my basic understanding using this logger should provide me comfortable options to write a logfile with timestamps - instead of spamming the console - if debug mode is enabled at the logger? Why is System.out.println harder to test? I searched stackoverflow and found a testing recipe. So I wonder what kind of advantage I really get by using log4j.

    Read the article
