Search Results

Search found 5286 results on 212 pages for 'logs'.


  • Where is the default location where a TraceListener writes txt logs?

    - by djerry
    Hey guys, I want to log some traces in my service. When I set initializeData to a location on the d: partition, I can write with no problems. When I set initializeData to c:\, it doesn't write at all. Now I was wondering two things: 1) Does my service not have the rights to write to the c:\ partition? 2) If I don't specify the partition, where does it write to? This is the part of my app.config which works:

        <add initializeData="d:\txtServiceLog.txt"
             type="MonitoringServerService.FaultTracer, MonitoringServerService"
             name="txtListener"
             traceOutputOptions="DateTime, Timestamp, ProcessId, Callstack">
          <filter type="" />
        </add>

    When I change it to the code below, it doesn't write anymore:

        <add initializeData="c:\txtServiceLog.txt"
             type="MonitoringServerService.FaultTracer, MonitoringServerService"
             name="txtListener"
             traceOutputOptions="DateTime, Timestamp, ProcessId, Callstack">
          <filter type="" />
        </add>

    And where should I look if I do this:

        <add initializeData="txtServiceLog.txt"
             type="MonitoringServerService.FaultTracer, MonitoringServerService"
             name="txtListener"
             traceOutputOptions="DateTime, Timestamp, ProcessId, Callstack">
          <filter type="" />
        </add>

    Thanks in advance.


  • How can I disable Hibernate-cache logs?

    - by Mulone
    Hi guys, my Grails app log is being flooded with thousands of messages like:

        2010-05-21 18:54:08,261 [30462143@qtp-19943008-38] DEBUG hibernate.EhCache - key: ga_event value: 5220206380077056

    This is my log4j config:

        // log4j configuration
        log4j = {
            // Example of changing the log pattern for the default console
            // appender:
            //
            appenders {
                console name:'stdout', layout:pattern(conversionPattern: '%c{2} %m%n')
                rollingFile name:'applog', file: logDirectory+"/${appName}_main.log", maxFileSize:'10MB'
                //'null' name:'stacktrace'
                file name: 'stacktrace', file: logDirectory+"/${appName}_stacktrace.log", layout: pattern(conversionPattern: '%c{2} %m%n')
            }

            error 'org.codehaus.groovy.grails.web.servlet',        // controllers
                  'org.codehaus.groovy.grails.web.pages',          // GSP
                  'org.codehaus.groovy.grails.web.sitemesh',       // layouts
                  'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
                  'org.codehaus.groovy.grails.web.mapping',        // URL mapping
                  'org.codehaus.groovy.grails.commons',            // core / classloading
                  'org.codehaus.groovy.grails.plugins',            // plugins
                  'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
                  'org.springframework',
                  'org.hibernate',
                  stacktrace: "stacktrace"

            warn 'org.mortbay.log'

            root {
                debug 'stdout', 'applog'
                additivity = true
            }
        }

    Any idea on how to disable that log? Cheers


  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: what do you use for configurable (and preferably captured) logging inside the C++ bits of a Python project? Details follow.

    Say you have a few compiled .so modules that may need to do some error checking and warn the user of (partially) incorrect data. Currently I have a pretty simplistic setup where I'm using the logging framework from Python code and the log4cxx library from C/C++. The log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed, and I'm thinking about how to make it more flexible. A couple of choices that I see:

    One way to control it would be to have a module-wide configuration call:

        # foo/__init__.py
        import sys
        from _foo import bar, baz, configure_log
        configure_log(sys.stdout, WARNING)

        # tests/test_foo.py
        def test_foo():
            # Maybe a custom context to change the logfile for
            # the module and restore it at the end.
            with CaptureLog(foo) as log:
                assert foo.bar() == 5
                assert log.read() == "124.24 - foo - INFO - Bar returning 5"

    Or have every compiled function that does logging accept optional log parameters:

        # foo.c
        int bar(PyObject* x, PyObject* logfile, PyObject* loglevel)
        {
            LoggerPtr logger = default_logger("foo");
            if (logfile != Py_None)
                logger = file_logger(logfile, loglevel);
            ...
        }

        # tests/test_foo.py
        def test_foo():
            with TemporaryFile() as logfile:
                assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5
                assert logfile.read() == "124.24 - foo - INFO - Bar returning 5"

    Or some other way? The second one seems somewhat cleaner, but it requires altering function signatures (or using kwargs and parsing them). The first one is... probably somewhat awkward, but it sets up the entire module in one go and removes the logic from each individual function. What are your thoughts on this? I'm all ears for alternative solutions as well. Thanks,
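    A minimal sketch of what the CaptureLog helper from the first approach could look like, assuming the extension module exposes a configure_log(stream, level) hook exactly as sketched above; both the hook and the restore defaults are assumptions, not part of any real API:

        import io
        import logging
        import sys

        class CaptureLog:
            """Route a module's log output to an in-memory buffer for one test.

            Assumes the wrapped module exposes configure_log(stream, level),
            as sketched in the question; restores a module-wide default on exit.
            """
            def __init__(self, module, level=logging.DEBUG):
                self.module = module
                self.level = level
                self.buffer = io.StringIO()

            def __enter__(self):
                self.module.configure_log(self.buffer, self.level)
                return self

            def __exit__(self, *exc):
                # Restore the default set in foo/__init__.py (an assumption).
                self.module.configure_log(sys.stdout, logging.WARNING)
                return False

            def read(self):
                return self.buffer.getvalue()

    A test then reads naturally: with CaptureLog(foo) as log: assert foo.bar() == 5; assert "Bar returning 5" in log.read().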


  • Git graph with ref logs

    - by Francisco Garcia
    I am trying to improve my custom git log format string. I have almost everything I want except the ref names. I can already get a log similar to what I want:

        > git log --all --source --pretty=oneline --graph
        * b7c7ad3855b54e94ad7ac03f2d2e5b96d6e5ac1d refs/heads/b1 na
        | * 695e1482622a79230fa1d83afb8d70e86847334a refs/heads/master Merge branch 'b1'
        | |\
        | |/
        |/|
        * | ec21f370f82096c0208f43b390da234d92e8c74a refs/heads/b1 beta
        * | c6bc1f55ab3b1bd568493a5de4298dfcb4f66d8d refs/heads/b1 alfa
        * | 762dd868ae87753afc1cbf9803744c76f9a9e121 refs/heads/b1 tango
        | * 57fb27bff06ee9bb569f93ba815e9dcd69521c13 refs/heads/master little last post commit
        |/
        | * 8d613d09b43152a7263b6e02d47ec8a4304f54be refs/heads/b3 the other commit
        | * e1f32b7cb86633351df06e37c2c58ef3f9fafc40 refs/heads/b3 something
        |/
        | * 01b5c6728cf25dd576733211ce75dd3ecc29c7ba refs/heads/b2 this time a

    I am fighting to get a customized output with my own format string like this:

        > git log --pretty=format:'%h - %gD %s' --source -g
        b7c7ad3 - HEAD@{0} na
        ec21f37 - HEAD@{1} beta
        01b5c67 - HEAD@{2} this time a
        01b5c67 - HEAD@{3} this time a
        695e148 - HEAD@{4} Merge branch 'b1'
        57fb27b - HEAD@{5} little last post commit

    My main problem is that I cannot get the ref names I want. I assume it is one of the %g? format strings, but none of them seem to give me the full ref name. Another problem is that the %g? format strings are empty unless I walk the reflogs (-g). However, git refuses to combine --graph with -g. How can I reproduce the first sample with a format string which I can further customize?
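    For reference, a sketch of one way to get there: git's %d placeholder prints ref-name decorations (e.g. "(refs/heads/b1)") in a plain git log, and unlike the %g? reflog selectors it does not need -g, so it combines with --graph. A minimal wrapper, in Python, assuming git is on the PATH:

        import subprocess

        # %d prints ref-name decorations without walking the reflog, so it can
        # be combined with --graph; --all walks every branch like the first sample.
        result = subprocess.run(
            ["git", "log", "--all", "--graph",
             "--pretty=format:%H %d %s"],
            capture_output=True, text=True, check=True)
        print(result.stdout)

    The decoration only appears on commits a ref points at, rather than on every line as in the --source output, but the format string is fully customizable.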


  • How to write a script that logs into an application and checks a page

    - by josh
    Is it possible to write a script that will log in to an application using a username/password? The username/password are not passed in through GET (they don't come in the URL). The basic steps I am looking for are:

    Visit the URL
    Enter the username/password
    Click a button
    Click a link
    Get the raw HTML to make sure it does not have a 500 error

    Is that possible to do in any language? Please point me to some examples as well.
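    One possible shape for such a script, sketched in Python with the requests library; the URL and the form field names (username, password) are placeholders you would take from the real login form's HTML:

        import requests

        BASE = "http://example.com"           # hypothetical application URL
        session = requests.Session()          # keeps the login cookie across requests

        # Submit the login form; the field names here are assumptions -
        # inspect the real form's <input name="..."> attributes.
        resp = session.post(BASE + "/login",
                            data={"username": "uname", "password": "pwd"})
        resp.raise_for_status()

        # "Click" the button/link by requesting the page it points to,
        # then make sure the server did not respond with a 500.
        page = session.get(BASE + "/some/page")
        assert page.status_code != 500, "page returned a server error"
        print(page.text)                      # the raw HTML

    Headless-browser tools work too when the pages need JavaScript, but for a plain form POST a session-aware HTTP client like this is usually enough.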


  • Replacing objects, handling clones, dealing with write logs

    - by Alix
    Hi everyone, I'm dealing with a problem I can't figure out how to solve, and I'd love to hear some suggestions. [NOTE: I realise I'm asking several questions; however, answers need to take into account all of the issues, so I cannot split this into several questions]

    Here's the deal: I'm implementing a system that underlies user applications and that protects shared objects from concurrent accesses. The application programmer (whose application will run on top of my system) defines such shared objects like this:

        public class MyAtomicObject {
            // These are just examples of fields you may want to have in your class.
            public virtual int x { get; set; }
            public virtual List<int> list { get; set; }
            public virtual MyClassA objA { get; set; }
            public virtual MyClassB objB { get; set; }
        }

    As you can see, they declare the fields of their class as auto-generated properties (auto-generated means they don't need to implement get and set). This is so that I can go in and extend their class and implement each get and set myself in order to handle possible concurrent accesses, etc.

    This is all well and good, but now it starts to get ugly: the application threads run transactions, like this:

    1. The thread signals it's starting a transaction. This means we now need to monitor its accesses to the fields of the atomic objects.
    2. The thread runs its code, possibly accessing fields for reading or writing. If there are accesses for writing, we'll hide them from the other transactions (other threads), and only make them visible in step 3. This is because the transaction may fail and have to roll back (undo) its updates, and in that case we don't want other threads to see its "dirty" data.
    3. The thread signals it wants to commit the transaction. If the commit is successful, the updates it made will now become visible to everyone else. Otherwise, the transaction will abort, the updates will remain invisible, and no one will ever know the transaction was there.

    So basically the concept of a transaction is a series of accesses that appear to have happened atomically, that is, all at the same time, in the same instant, which would be the moment of successful commit. (This is as opposed to its updates becoming visible as it makes them.)

    In order to hide the write accesses in step 2, I clone the accessed field (let's say it's the field list) and put it in the transaction's write log. After that, any time the transaction accesses list, it will actually be accessing the clone in its write log, and not the global copy everyone else sees. That way, any changes it makes will be done to the (invisible) clone, not to the global copy. If in step 3 the commit is successful, the transaction should replace the global copy with the updated list it has in its write log, and then the changes become visible for everyone else at once. It would be something like this:

        myAtomicObject.list = updatedCloneOfListInTheWriteLog;

    Problem #1: possible references to the list. Let's say someone puts a reference to the global list in a dictionary. When I do...

        myAtomicObject.list = updatedCloneOfListInTheWriteLog;

    ...I'm just replacing the reference in the field list, but not the real object (I'm not overwriting the data), so the dictionary will still hold a reference to the old version of the list. A possible solution would be to overwrite the data (in the case of a list, empty the global list and add all the elements of the clone). More generically, I would need to copy the fields of one list to the other. I can do this with reflection, but that's not very pretty. Is there any other way to do it?

    Problem #2: even if problem #1 is solved, I still have a similar problem with the clone: the application programmer doesn't know I'm giving him a clone and not the global copy. What if he puts the clone in a dictionary? Then at commit there will be some references to the global copy and some to the clone, when in truth they should all point to the same object. I thought about providing a wrapper object that contains both the cloned list and a pointer to the global copy, but the programmer doesn't know about this wrapper, so they're not going to use the pointer at all. The wrapper would be like this:

        public class Wrapper<T> : T {
            // This would be the pointer to the global copy. The local data is
            // contained in whatever fields the wrapper inherits from T.
            private T thisPtr;
        }

    I do need this wrapper for comparisons: if I have a dictionary that has an entry with the global copy as key, and I look it up with the clone, like this:

        dictionary[updatedCloneOfListInTheWriteLog]

    I need it to return the entry, that is, to think that updatedCloneOfListInTheWriteLog and the global copy are the same thing. For this, I can just override Equals, GetHashCode, operator== and operator!=, no problem. However, I still don't know how to solve the case in which the programmer unknowingly inserts a reference to the clone in a dictionary.

    Problem #3: the wrapper must extend the class of the object it wraps (if it's wrapping MyClassA, it must extend MyClassA) so that it's accepted wherever an object of that class would be accepted. However, that class (MyClassA) may be final. This is pretty horrible :$. Any suggestions? I don't need to use a wrapper; anything you can think of is fine. What I cannot change is the write log (I need to have a write log) and the fact that the programmer doesn't know about the clone.

    I hope I've made some sense. Feel free to ask for more info if something needs some clearing up. Thanks so much!
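    On problem #1, one common way out is to commit by copying the clone's state into the existing global object rather than swapping references, so every alias keeps seeing the same (now updated) object. A minimal sketch in Python, with container-specific copy logic standing in for the reflection-based copy the question mentions:

        def commit_in_place(global_obj, clone):
            """Copy the clone's state into the global object at commit time.

            Because the global object's identity never changes, references
            held in dictionaries or elsewhere observe the update.
            """
            if isinstance(global_obj, list):
                global_obj[:] = clone        # overwrite contents, keep identity
            elif isinstance(global_obj, dict):
                global_obj.clear()
                global_obj.update(clone)
            else:
                # Generic fallback: copy instance attributes field by field
                # (the moral equivalent of the reflection copy above).
                global_obj.__dict__.update(clone.__dict__)

        # Usage: every existing alias sees the committed contents.
        shared_list = [1, 2, 3]
        alias = {"key": shared_list}
        commit_in_place(shared_list, [1, 2, 3, 4])
        assert alias["key"] == [1, 2, 3, 4]

    This does not solve problem #2 (aliases to the clone itself), but it removes the stale-reference half of the puzzle without any wrapper type.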


  • Seam log4j credential logs

    - by Marc
    In Seam, using log4j, I would like my info, warn and error logging to always include the logged-in user's name (if any) with whatever the log message is. Since this should be consistent, I do not want to have to grab the logged-in user name and prefix the message myself, so I attempted to populate the log4j NDC to have it as a field of the log message, pushing the user name on successful login:

        NDC.push(credentials.getUsername());

    This works, but the NDC is managed per thread, so once another thread processes a request from the same logged-in user, the trace of this user name is lost. I was thinking that there should be a common pattern to accomplish this simple task of attaching each log message to the logged-in user, using the NDC or not, to know exactly what user triggered what action. Does anyone know the appropriate way to accomplish this?
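    The usual pattern is to push the user into the diagnostic context at the start of every request and clear it at the end (a servlet filter in the Java world), rather than once at login. The same per-request idea sketched with Python's logging filters, purely as an illustration of the shape:

        import logging

        class UserContextFilter(logging.Filter):
            """Attach the current user to every log record.

            current_user() is a placeholder for however your framework
            exposes the user bound to the request being processed.
            """
            def filter(self, record):
                record.user = current_user() or "anonymous"
                return True

        def current_user():
            return None   # placeholder: look up the request's authenticated user

        logger = logging.getLogger("app")
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter("%(asctime)s [%(user)s] %(message)s"))
        handler.addFilter(UserContextFilter())
        logger.addHandler(handler)
        logger.warning("something happened")   # -> "... [anonymous] something happened"

    Because the context is re-resolved per log call (or, in the servlet-filter version, pushed and popped per request), it survives requests being handled by different threads.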


  • What do these numbers mean in Kannel SMSC logs?

    - by Hashmi
    What do these numbers represent? What do they mean?

        2013-06-27 10:39:42 [9446] [6] DEBUG: SMPP PDU 0x7f8364000a50 dump:
        2013-06-27 10:39:42 [9446] [6] DEBUG: type_name: enquire_link
        2013-06-27 10:39:42 [9446] [6] DEBUG: command_id: 21 = 0x00000015
        2013-06-27 10:39:42 [9446] [6] DEBUG: command_id: 21 = 0x00000015
        2013-06-27 10:39:42 [9446] [6] DEBUG: command_status: 0 = 0x00000000
        2013-06-27 10:39:42 [9446] [7] DEBUG: SMPP[mvoip]: Got PDU:
        2013-06-27 10:39:42 [9446] [6] DEBUG: sequence_number: 519338176 = 0x1ef478c0
        2013-06-27 10:39:42 [9446] [6] DEBUG: SMPP PDU dump ends.
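    A hedged reading of the line layout, based on Kannel's usual log format: the first bracketed number is the process id and the second is Kannel's internal thread index, followed by the level and the message. A small parser sketch:

        import re

        # Kannel log lines generally look like:
        #   <timestamp> [<pid>] [<thread-index>] <LEVEL>: <message>
        LINE = re.compile(
            r"(?P<ts>\S+ \S+) \[(?P<pid>\d+)\] \[(?P<thread>\d+)\] "
            r"(?P<level>\w+): (?P<msg>.*)")

        sample = ("2013-06-27 10:39:42 [9446] [6] DEBUG: "
                  "command_id: 21 = 0x00000015")
        m = LINE.match(sample)
        print(m.group("pid"), m.group("thread"), m.group("msg"))
        # Per the SMPP spec, command_id 0x00000015 is enquire_link (a keepalive),
        # and sequence_number is the PDU's own counter, echoed in the response.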


  • Saving form values to database after a user logs in

    - by redfalcon
    Hi. We have a form with ratings to submit for a certain restaurant. After the user has entered some values and wants to submit them, we check whether the user is logged in or not. If not, we display a login form, let the user put in his account data, and redirect him to the restaurant he wanted to submit a rating for. The problem is that after he has successfully logged in, the submitted values are not saved to the database (which works fine if the user is already logged in). So I wondered if it is possible to somehow save the data although the user is not logged in. I thought of maybe saving the filled-in values in a variable and having them automatically re-entered after we redirect the user. But I guess this won't work, because we use

        before_filter :login_required, :only => [ :create ]

    so we couldn't even access the filled-in values, since we display the login form before the method has processed the values in the form, right? Any idea how we can make Rails save the values, or at least have them automatically re-entered into the form? Thanks!
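    A common pattern is to stash the submitted values in the session before bouncing to the login page, then replay them once authentication succeeds. Sketched here in Python/Flask purely to show the shape (the Rails equivalent would stash params into session where login_required redirects); all routes, field names and helpers below are placeholders:

        from flask import Flask, redirect, request, session, url_for

        app = Flask(__name__)
        app.secret_key = "dev"               # required for session storage

        @app.route("/ratings", methods=["POST"])
        def create_rating():
            if "user_id" not in session:
                # Not logged in: keep the rating in the session, then bounce.
                session["pending_rating"] = request.form.to_dict()
                return redirect(url_for("login"))
            return save_rating(request.form.to_dict())

        @app.route("/login", methods=["GET", "POST"])
        def login():
            if request.method == "POST" and check_credentials(request.form):
                session["user_id"] = request.form["username"]
                pending = session.pop("pending_rating", None)
                if pending:
                    return save_rating(pending)   # replay the stashed submission
                return redirect("/")
            return "login form here"

        def check_credentials(form):
            return True                       # placeholder auth check

        def save_rating(data):
            return f"saved: {data}"           # placeholder DB write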


  • Cordova polluted logs with persistent.js add()

    - by slaver113
    I am using the latest PhoneGap/Cordova version 2.1, and my log in Eclipse's logcat gets polluted with code when I do:

        var allItems = Item.all();
        allItems.list(null, function (results) {
            results.forEach(function (r) {
                console.log(r.id + " " + r.lat + " " + r.long + " " + r.state);
            });
        });

    I get output like this (for hundreds of lines):

        10-29 10:56:13.270: I/Web Console(5961): } function (value) {
        10-29 10:56:13.270: I/Web Console(5961): if (value === undefined) {
        10-29 10:56:13.270: I/Web Console(5961): return getterCallback();
        10-29 10:56:13.270: I/Web Console(5961): } else {
        10-29 10:56:13.270: I/Web Console(5961): setterCallback(value);
        10-29 10:56:13.270: I/Web Console(5961): return scope;
        10-29 10:56:13.270: I/Web Console(5961): }
        ... (the same getter/setter function body repeated over and over) ...
        10-29 10:56:13.270: I/Web Console(5961): } at :1149822901


  • BlockingQueue decorator that logs removed objects

    - by scompt.com
    I have a BlockingQueue that's being used in a producer-consumer situation. I would like to decorate this queue so that every object that's taken from it is logged. I know what the straightforward implementation would look like: simply implement BlockingQueue and accept a BlockingQueue in the constructor to which all of the methods would delegate. Is there another way that I'm missing? A library perhaps? Something with a callback interface?
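    For reference, the straightforward delegating decorator is only a few lines. Here it is sketched around Python's queue.Queue (itself a blocking queue) rather than Java's BlockingQueue, just to show the shape:

        import logging
        import queue

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("queue")

        class LoggingQueue:
            """Decorator that logs every object taken from the wrapped queue."""
            def __init__(self, delegate: queue.Queue):
                self._delegate = delegate

            def put(self, item, block=True, timeout=None):
                self._delegate.put(item, block, timeout)

            def get(self, block=True, timeout=None):
                item = self._delegate.get(block, timeout)   # blocks like take()
                log.info("consumed: %r", item)
                return item

            def __getattr__(self, name):
                # Delegate everything else (qsize, task_done, join, ...) unchanged.
                return getattr(self._delegate, name)

        q = LoggingQueue(queue.Queue())
        q.put("job-1")
        print(q.get())   # logs "consumed: 'job-1'" and returns the item

    In the Java version the delegation is more verbose because BlockingQueue has many methods, but the structure is identical: wrap, forward, and log inside take()/poll().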


  • Check for several conditions when a user logs in

    - by paul
    I would like to accomplish the following:

    If the username or password field is null, notify the user.
    If the username already exists, do not insert it into the database and notify the user to create a different name.
    If the username is unique and the password is not null, return the username to the user.

    As of now it always returns "Please enter a different user name." I believe the issue has to do with the database query, but I am not sure. If anyone can have a look and see if I am making an error, I'd greatly appreciate it, thanks.

        if ($userName or $userPassword = null) {
            echo "Please enter a user name and password or return to the homepage.";
        } elseif (mysql_num_rows(mysql_query("SELECT count(userName) FROM logininfo WHERE userName = '$userName'")) == 1) {
            echo "Please enter a different user name.";
        } elseif ($userName and $userPassword != null) {
            echo "Your login name is: $userName";
        }
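    For comparison, the intended decision logic sketched in Python; note that the emptiness test is applied to each variable separately with an explicit comparison, which is what the first PHP condition above conflates (a single = there assigns rather than compares):

        def check_login(username, password, name_taken):
            """Sketch of the intended control flow.

            name_taken is a placeholder for the database lookup, e.g. a
            SELECT count(*) ... WHERE userName = %s run as a parameterized
            query rather than interpolated into the SQL string.
            """
            if not username or not password:
                return "Please enter a user name and password or return to the homepage."
            if name_taken(username):
                return "Please enter a different user name."
            return f"Your login name is: {username}"

        print(check_login("paul", "secret", lambda u: False))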


  • Jetty: How to write to access logs

    - by mdemmitt
    Hi all, in my Java servlet code I want to be able to programmatically write to the Jetty access log. I am aware that Jetty will automatically log every incoming HTTP request to the access log. However, my servlet needs to occasionally append its own line to the access log. Has anyone here done something similar? Thanks!


  • Framework for Monitoring logs of an application

    - by whyjava
    Hello, I am working on writing a web application which will monitor a Java process that moves files from one location to another. The monitoring application needs to do the following things:

    Monitor log files
    View the content of moved files

    Is there any open-source framework which provides monitoring capabilities over logging? I am building this application in Java. Thanks
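    If no framework fits, the core of "monitor a log file" is a tail-follow loop. A minimal sketch (in Python here, though the question's app is Java; the path is a placeholder) that watches a file for appended lines:

        import time

        def follow(path):
            """Yield lines appended to the file at path, like tail -f."""
            with open(path) as f:
                f.seek(0, 2)                 # start at the end of the file
                while True:
                    line = f.readline()
                    if not line:
                        time.sleep(0.5)      # nothing new yet; poll again
                        continue
                    yield line.rstrip("\n")

        for line in follow("/var/log/mover/app.log"):   # hypothetical path
            if "ERROR" in line:
                print("alert:", line)

    The Java equivalent is the same loop around a RandomAccessFile or a WatchService, and most log-monitoring tools are elaborations on exactly this pattern.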


  • Why don't cfn-init logs get sent by rsyslog?

    - by Jon M
    I just signed up for Papertrail to aggregate logs from some AWS instances I'm setting up with CloudFormation::Init. I've followed the instructions and added *.* @logs.papertrailapp.com to the end of '/etc/rsyslog.conf'. Some logs are showing up on Papertrail, but notably the contents of '/var/log/cfn-init.log' never get there, and those are the ones I'm interested in right now. Have I set up rsyslog incorrectly? Or do the CloudFormation::Init scripts just not use syslog to write log information?
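    The likely cause is that cfn-init writes straight to /var/log/cfn-init.log rather than through syslog, so rsyslog never sees those messages unless it is told to read the file (rsyslog's imfile module is the usual tool for that). As an illustration of the idea, a small Python forwarder that replays new lines from the file into the local syslog socket, where the existing *.* forward picks them up:

        import logging.handlers
        import time

        # Send to the local syslog socket; rsyslog then applies its normal
        # rules (including the *.* @logs.papertrailapp.com forward).
        handler = logging.handlers.SysLogHandler(address="/dev/log")
        log = logging.getLogger("cfn-init-forwarder")
        log.addHandler(handler)
        log.setLevel(logging.INFO)

        with open("/var/log/cfn-init.log") as f:
            f.seek(0, 2)                # only forward lines written from now on
            while True:
                line = f.readline()
                if line:
                    log.info(line.rstrip("\n"))
                else:
                    time.sleep(1)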


  • Why is filesystem preferred for logs instead of RDBMS?

    - by Yasir
    The question should be clear from its title. For example, Apache saves its access and error logs in files instead of an RDBMS, no matter how large or small the scale it is used at. With an RDBMS we just have to write SQL queries and it does the work, while for files we must decide on a particular format and then write regexes, or maybe parsers, to manipulate them. And those might even fail in particular circumstances if great care was not taken. Yet everyone seems to prefer the filesystem for maintaining logs. I am not biased against either of these methods, but I would like to know why it is practiced like this. Is it speed, or maintainability, or something else?
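    To make the parsing burden concrete, here is roughly what "write a regex for the format" means for Apache's common log format; the pattern is a sketch, and real-world entries (escaped quotes, missing fields) are exactly the circumstances where naive versions break:

        import re

        # Apache common log format:
        #   host ident authuser [date] "request" status bytes
        COMMON_LOG = re.compile(
            r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
            r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
            r'(?P<status>\d{3}) (?P<size>\d+|-)')

        line = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
                '"GET /apache_pb.gif HTTP/1.0" 200 2326')
        m = COMMON_LOG.match(line)
        print(m.group("status"), m.group("request"))
        # With an RDBMS the same question would just be:
        #   SELECT status, request FROM access_log WHERE status = '200';

    The flip side is that appending a line to a file is far cheaper and more failure-tolerant than a database insert, which is the usual argument for the filesystem.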


  • Rotating WebLogic Server logs to avoid large files using WLST.

    - by adejuanc
    By default, when WebLogic Server instances are started in development mode, the server automatically renames (rotates) its local server log file as SERVER_NAME.log.n. For the remainder of the server session, log messages accumulate in SERVER_NAME.log until the file grows to a size of 500 kilobytes. Each time the server log file reaches this size, the server renames the log file and creates a new SERVER_NAME.log to store new messages. By default, the rotated log files are numbered in order of creation (filenameNNNNN, where filename is the name configured for the log file). You can configure a server instance to include a time and date stamp in the file name of rotated log files; for example, server-name-%yyyy%-%mm%-%dd%-%hh%-%mm%.log.

    By default, when server instances are started in production mode, the server rotates its server log file whenever the file grows to 5000 kilobytes in size. It does not rotate the local server log file when the server is started. For more information about changing the mode in which a server starts, see "Change to production mode" in the Administration Console Online Help.

    You can change these default settings for log file rotation. For example, you can change the file size at which the server rotates the log file, or you can configure a server to rotate log files based on a time interval. You can also specify the maximum number of rotated files that can accumulate. After the number of log files reaches this number, subsequent file rotations delete the oldest log file and create a new log file with the latest suffix.

    Note: WebLogic Server sets a threshold size limit of 500 MB before it forces a hard rotation to prevent excessive log file growth.

    To rotate via WLST:

        # invoke WLST
        C:\> java weblogic.WLST
        # connect WLST to an Administration Server
        wls:/offline> connect('username','password')
        # navigate to the ServerRuntime MBean hierarchy
        wls:/mydomain/serverConfig> serverRuntime()
        wls:/mydomain/serverRuntime> ls()
        # navigate to the server LogRuntimeMBean
        wls:/mydomain/serverRuntime> cd('LogRuntime/myserver')
        wls:/mydomain/serverRuntime/LogRuntime/myserver> ls()
        -r--   Name               myserver
        -r--   Type               LogRuntime
        -r-x   forceLogRotation   java.lang.Void :
        # force the immediate rotation of the server log file
        wls:/mydomain/serverRuntime/LogRuntime/myserver> cmo.forceLogRotation()
        wls:/mydomain/serverRuntime/LogRuntime/myserver>

    The server immediately rotates the file and prints the following messages:

        <Mar 2, 2012 3:23:01 PM EST> <Info> <Log Management> <BEA-170017> <The log file C:\diablodomain\servers\myserver\logs\myserver.log will be rotated. Reopen the log file if tailing has stopped. This can happen on some platforms like Windows.>
        <Mar 2, 2012 3:23:01 PM EST> <Info> <Log Management> <BEA-170018> <The log file has been rotated to C:\diablodomain\servers\myserver\logs\myserver.log00001. Log messages will continue to be logged in C:\diablodomain\servers\myserver\logs\myserver.log.>

    To specify the location of the archived log files, set the -Dweblogic.log.LogFileRotationDir Java startup option:

        java -Dweblogic.log.LogFileRotationDir=c:\foo
             -Dweblogic.management.username=installadministrator
             -Dweblogic.management.password=installadministrator
             weblogic.Server

    For more information, read the following documentation:

    Using the WebLogic Scripting Tool
    http://download.oracle.com/docs/cd/E13222_01/wls/docs103/config_scripting/using_WLST.html

    Configuring WebLogic Logging Services
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/logging/config_logs.html
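    To change the rotation policy itself (rather than forcing a one-off rotation), the same WLST session can edit the server's LogMBean. The attribute names below follow the standard LogMBean, but treat the session as a sketch to adapt rather than a verified recipe:

        # WLST (Jython) sketch: rotate myserver's log by size, keep 10 files.
        connect('username', 'password')
        edit()
        startEdit()
        cd('/Servers/myserver/Log/myserver')
        cmo.setRotationType('bySize')        # or 'byTime'
        cmo.setFileMinSize(5000)             # rotate once the file reaches 5000 KB
        cmo.setNumberOfFilesLimited(true)    # cap the number of archived files
        cmo.setFileCount(10)                 # delete the oldest beyond 10
        save()
        activate()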


  • How to go about rotating logs which are arbitrarily named and placed in deeply nested directories?

    - by Roman Grazhdan
    I have a couple of hosts which are basically a playground for developers. On these hosts, each of them has a directory under /tmp where he is free to do whatever he wants: store files, write logs, etc. Of course, the logs are to be rotated, or else the disk will be 100% full in a week. The files can be plenty, but I've dealt with that with paths like /tmp/[a-e]*/* and so on, and lived happily for a while. But as they try new cool stuff on the machine, the logrotate rules grow ugly and unmanageable, and it's getting more difficult to understand which files hit the glob. Also, logrotate will segfault if asked to rotate a socket. I don't feel like trying to enforce naming policies in that environment; I think it would take quite a lot of time, get people annoyed, and still fail at some point. And I still need to manage the logs, not just rm the dirs at night. So is it a good idea, in circumstances like these, to write a script which would handle these temporary files? I prefer sticking with standard utilities whenever possible, but here I think logrotate is getting less and less manageable. Has anyone heard of logrotate alternatives which would work well in such an environment? I don't need emailed logs or other advanced features, so theoretically some well-commented find | xargs would do. P.S. I do have a log aggregator, but this stuff is not going to touch my little cute logstash machine.
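    Along the lines of the well-commented find | xargs idea, a small sketch that walks the developer directories, skips sockets (the logrotate segfault case), and gzips anything over a size threshold; the root path and limits are placeholders to adapt:

        import gzip
        import os
        import shutil
        import stat

        ROOT = "/tmp"                        # the developers' playground
        MAX_BYTES = 10 * 1024 * 1024         # rotate anything over 10 MB

        for dirpath, _dirnames, filenames in os.walk(ROOT):
            for name in filenames:
                path = os.path.join(dirpath, name)
                st = os.lstat(path)
                # Skip sockets, symlinks, devices: only rotate regular files.
                if not stat.S_ISREG(st.st_mode):
                    continue
                # Skip already-rotated archives and files under the threshold.
                if name.endswith(".gz") or st.st_size < MAX_BYTES:
                    continue
                # Compress a copy, then truncate the original in place so any
                # process holding the file open keeps a valid descriptor
                # (copytruncate semantics: lines written in between are lost).
                with open(path, "rb") as src, gzip.open(path + ".1.gz", "wb") as dst:
                    shutil.copyfileobj(src, dst)
                with open(path, "w"):
                    pass                     # truncate to zero length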


  • A lot of 408 errors in apache logs - how to prevent them?

    - by Robert Grezan
    I see a lot of 408 errors in my Apache 2 logs. I increased RequestReadTimeout and KeepAliveTimeout, but the errors are still there. The errors look like this:

        xx.xx.xx.xx - - [05/Dec/2012:19:33:56 +0100] "-" 408 4561 "-" "-"
        xx.xx.xx.xx - - [05/Dec/2012:19:33:56 +0100] "-" 408 4561 "-" "-"

    I heard that these errors are related to a Chrome optimization, and some users did report our site returning a 408 error. It is interesting that we get two 408 errors in sequence from the same IP, and then that IP starts working.


  • How to receive alerts when you centralize your SQL Server Event Logs.

    Learn how you can get alerts when you centralize the Event Log. This is part 2 of the previous article "How to centralize your SQL Server Event Logs."


  • How to handle CNAME host redirect to virtual directory?

    - by esac
    I have an internal website and a virtual directory, http://server2012/logs. I created a CNAME on my DNS server mapping LOGS to server2012. I would like to set it up so that http://LOGS redirects to http://server2012/logs. Ideally, all pages would still appear in the browser as being served from the LOGS URL: http://LOGS/network.html?site=32 is what is displayed in the browser, but it is really being served from http://server2012/logs/network.html?site=32. I've looked at URL Rewrite, but can't seem to get it to work.


  • rsyslog server - Can you split up and organize logs?

    - by Jakobud
    I recently set up one of our servers as an rsyslog server, and I now have our firewall logging everything to it. But there doesn't seem to be any organization of the logs: all the firewall logs are just being dumped into /var/log/messages on the rsyslog server. I was expecting them to at least end up in a machine-specific log file or directory. How can I organize the incoming logging? If I set up 20 servers to all log everything to a central rsyslog server, I really don't want everything dumped into one big file or a few files. How can I set up rsyslog to tell it where to log what, so that, for example, all the logs for a specific server go to their own directory/file? Is this possible?


  • What is the best way to handle the multitude of different logs created all around the place?

    - by Low Kian Seong
    I run a few applications which create their own logs. I also run cron scripts on the same server to import data for my app, and when these cron jobs error out, the default is to send email to the user that runs the cron job. There are just too many places I need to check, between logs and mail, for things that might have gone wrong. My question is: what is the best way to handle this? Even better would be something like a log-parsing application that goes through all the system logs when something really goes wrong, instead of me having to go through them daily.

