Search Results

Search found 5019 results on 201 pages for 'jakarta commons logging'.

Page 49/201 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • Where should I keep my log files?

    - by ripper234
    We keep most of our logs in a dedicated database table. We have written custom appenders for log4j and log4net, have a fixed log schema with lots of handy columns, and are quite happy with it. Is that the "best practice" (for sites smaller in scale than Facebook, where a simple DB table just won't scale)?
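
    For readers who have not written one, a custom database appender in log4j 1.x is a small amount of code. The sketch below is illustrative only: the table and column names (app_log, logged_at, and so on) and the injected DataSource are assumptions, and a production version would batch inserts and handle connection failures. log4j 1.2 also ships an org.apache.log4j.jdbc.JDBCAppender, though a custom appender like the poster's gives full control over the schema.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.Timestamp;
        import javax.sql.DataSource;
        import org.apache.log4j.AppenderSkeleton;
        import org.apache.log4j.spi.LoggingEvent;

        public class DatabaseAppender extends AppenderSkeleton {

            private final DataSource dataSource; // configured/injected elsewhere

            public DatabaseAppender(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            @Override
            protected void append(LoggingEvent event) {
                // Hypothetical table and columns, one row per event.
                String sql = "INSERT INTO app_log (logged_at, level, logger, message) VALUES (?, ?, ?, ?)";
                try (Connection con = dataSource.getConnection();
                     PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setTimestamp(1, new Timestamp(event.getTimeStamp()));
                    ps.setString(2, event.getLevel().toString());
                    ps.setString(3, event.getLoggerName());
                    ps.setString(4, event.getRenderedMessage());
                    ps.executeUpdate();
                } catch (Exception e) {
                    errorHandler.error("Failed to write log event to database", e, 0);
                }
            }

            @Override
            public boolean requiresLayout() { return false; }

            @Override
            public void close() { /* nothing held open here to release */ }
        }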

    Read the article

  • How to log messages to a log file in a specific path from a bash script

    - by Erik
    How do you log messages to a log file in a specific path from a bash script? A naive implementation would be a command like:

        echo "My message" >> /my/custom/path/to/my_script.log

    but this has obvious drawbacks (no log rotation, for example). I could use the logger command, but as far as I know it does not support custom log paths, and it is not easy to configure when you have lots of bash scripts that could use a custom log file. In a scripting language like Ruby all of this is quite easy: https://github.com/rudionrails/yell/wiki/101-the-datefile-adapter. I could write my own logger command based on that Ruby library and call it from my bash scripts, but I suspect there is already a well-known solution that provides similar behaviour for shell scripts?

    Read the article

  • What caused the Rails application crash?

    - by so1o
    I'm sure someone can explain this. We have an application that has been in production for a year. Recently we saw an increase in support requests from people having difficulty signing into the system. After scratching our heads because we couldn't reproduce the problem in development, we decided to switch on the debug logger in production for a month. That was June 5th. The application worked fine with that change and we waited. Then yesterday we noticed the log files were getting huge, so we made another change in production:

        config.logger = Logger.new("#{RAILS_ROOT}/log/production.log", 50, 1048576)

    After this change, the application started crashing while processing a particular file. The offending line of code was:

        RAILS_DEFAULT_LOGGER.info "Payment Information Request: ", request.inspect

    As you can see, there is a comma where a plus sign should be. This code was introduced in March. The question is: why did the application only fail now? If changing the debug level caused this line of code to be executed, it should have started failing on June 5th, not today. Please help us. Are we missing something obvious? If you don't have an answer, at least let us know we aren't the only ones going bonkers.

    Read the article

  • Know of any Java garbage collection log analysis tools?

    - by braveterry
    I'm looking for a tool or a script that will take the console log from my web app, parse out the garbage collection information, and display it in a meaningful way. I'm starting up a Sun Java 1.4.2 JVM with the following flags:

        -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails

    The log output looks like this:

        54.736: [Full GC 54.737: [Tenured: 172798K->18092K(174784K), 2.3792658 secs] 257598K->18092K(259584K), [Perm : 20476K->20476K(20480K)], 2.4715398 secs]

    Making sense of a few hundred of these log entries would be much easier with a tool that visually graphs garbage-collection trends.
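
    In the absence of a dedicated tool, a small parser can at least turn lines like the one quoted into rows suitable for a spreadsheet or plotting library. This is a rough sketch only: the regular expression is tuned to the single sample line above (total heap before/after, total capacity, and the overall pause), and real -XX:+PrintGCDetails output has more variants than it handles.

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class GcLineParser {

            // Captures timestamp, GC type, total heap before/after, capacity, and pause.
            private static final Pattern LINE = Pattern.compile(
                "^(\\d+\\.\\d+): \\[(Full GC|GC).* (\\d+)K->(\\d+)K\\((\\d+)K\\), .*?([\\d.]+) secs\\]\\s*$");

            public static void main(String[] args) {
                String sample = "54.736: [Full GC 54.737: [Tenured: 172798K->18092K(174784K), "
                        + "2.3792658 secs] 257598K->18092K(259584K), "
                        + "[Perm : 20476K->20476K(20480K)], 2.4715398 secs]";
                Matcher m = LINE.matcher(sample);
                if (m.matches()) {
                    System.out.printf("t=%ss %s: heap %sK -> %sK (capacity %sK), pause %ss%n",
                            m.group(1), m.group(2), m.group(3), m.group(4), m.group(5), m.group(6));
                }
            }
        }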

    Read the article

  • Where can I find project repositories with continuous testing?

    - by Jenny Smith
    I am interested in studying test logs from different projects in order to build and test an application for school. I need to analyze which parts of the code are tested, the bugs that appeared in those parts, and how they were eventually resolved. For this I need repositories from different (open source) projects. Can someone help me with ideas, links, or any kind of test logs that might be useful? I really need some resources, so any help is appreciated.

    Read the article

  • AppleScript Editor: write a message to the "Result" window

    - by Patrick
    I am using the Mac OS X AppleScript Editor, and while debugging, instead of writing a lot of display dialog statements, I'd like to write the results of some calculations to the pane below, called "Result" (I have the German UI here, so the translation is a guess). So is there a write/print statement that I can use to put messages in this "standard out" window? I am not asking to put the messages in a logfile on the file system; it is purely temporary.

    Read the article

  • What is the proper way to use a Logger in a Serializable Java class?

    - by Tim Visher
    I have the following (doctored) class in a system I'm working on, and FindBugs is generating an SE_BAD_FIELD warning; I'm trying to understand why it would say that before I fix it in the way I had planned. The reason I'm confused is that the description seems to indicate I have used no other non-serializable instance fields in the class, but bar.model.Foo is also not serializable and is used in exactly the same way (as far as I can tell), yet FindBugs generates no warning for it.

        import bar.model.Foo;

        import java.io.File;
        import java.io.Serializable;
        import java.util.List;

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        public class Demo implements Serializable {
            private final Logger logger = LoggerFactory.getLogger(this.getClass());

            private final File file;
            private final List<Foo> originalFoos;
            private Integer count;
            private int primitive = 0;

            public Demo() {
                for (Foo foo : originalFoos) {
                    this.logger.debug(...);
                }
            }

            ...
        }

    My initial thought for a solution is to get a logger reference from the factory right where I use it:

        public DispositionFile() {
            Logger logger = LoggerFactory.getLogger(this.getClass());
            for (Foo foo : originalFoos) {
                logger.debug(...);
            }
        }

    That doesn't seem particularly efficient, though. Thoughts?
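
    One commonly suggested way to avoid the warning, rather than re-fetching the logger inside each method, is to make the logger a static per-class field so it is never part of any instance's serialized state; marking the instance field transient and re-acquiring it after deserialization is the other usual option. A minimal sketch of the static form:

        import java.io.Serializable;
        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        public class Demo implements Serializable {

            private static final long serialVersionUID = 1L;

            // static: owned by the class, created once, never serialized
            private static final Logger LOGGER = LoggerFactory.getLogger(Demo.class);

            private int primitive = 0;

            public void doWork() {
                LOGGER.debug("primitive = {}", primitive);
            }
        }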

    Read the article

  • How do I see the whole HTTP request in Rails

    - by akafazov
    I have a Rails application, and after some time of development and debugging I realized it would be very helpful to be able to see the whole HTTP request in the log files (log/development.log), not just the parameters. I would also like a separate log file per user, not per session. Any ideas will be appreciated!

    Read the article

  • logback - no end of line delimiter

    - by binary_runner
    I'm using logback 0.9.21. Unfortunately it prints all messages on a single line; there is no end-of-line character, not even a wrong one. As far as I know the pattern is set correctly:

        <pattern>%d{HH:mm:ss.SSS} %-5level %class (%thread) [%logger{36}] -- %msg%n</pattern>

    What's the catch?
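
    As a sanity check that %n itself emits the platform line separator, the sketch below configures an encoder programmatically with roughly the same pattern, using the encoder API logback introduced around 0.9.19. The class and appender choices are illustrative only, not a claim about the poster's actual configuration.

        import ch.qos.logback.classic.Logger;
        import ch.qos.logback.classic.LoggerContext;
        import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
        import ch.qos.logback.classic.spi.ILoggingEvent;
        import ch.qos.logback.core.ConsoleAppender;
        import org.slf4j.LoggerFactory;

        public class PatternCheck {
            public static void main(String[] args) {
                LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

                PatternLayoutEncoder encoder = new PatternLayoutEncoder();
                encoder.setContext(context);
                encoder.setPattern("%d{HH:mm:ss.SSS} %-5level [%logger{36}] -- %msg%n");
                encoder.start();

                ConsoleAppender<ILoggingEvent> appender = new ConsoleAppender<ILoggingEvent>();
                appender.setContext(context);
                appender.setEncoder(encoder);
                appender.start();

                Logger logger = context.getLogger(PatternCheck.class.getName());
                logger.addAppender(appender);
                logger.setAdditive(false);   // use only this appender for the test

                logger.info("first line");
                logger.info("second line");  // %n terminates each entry with the platform line separator
            }
        }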

    Read the article

  • Getting the download count of a specific S3 object

    - by phidah
    I've got a number of S3 objects that are available to my customers. Since I'd like to bill my customers by usage, I wondered if there is any smart way to get the number of times a given file has been downloaded. Alternatively, I suppose I could parse the log files provided by S3, but with 10m+ fetches per customer this might be a bit of a task. Any ideas?

    Read the article

  • How to read log4j output to a web page?

    - by Ran
    I have a web page, used for admin purposes, which runs a task (fetching images from a remote site). In order to be able to debug the task using only the browser (no ssh etc.), I'd like to read all log output from the executing thread and print it to the web page. The task boils down to:

    - changing the log level for the current thread at the beginning of the call and restoring it when the call is done;
    - reading all log output produced by the current thread and storing it in a string.

    So in pseudocode my execute() method would look like this (I'm using Struts2):

        public String execute() throws Exception {
            turnLoggingLevelToDebugOnlyForThisThread();
            // ... do stuff ...
            restoreLoggingLevelForThisThread();
            String logs = readAllLogsByThisThread();
        }

    Can this be done with log4j? I'm using Tomcat, Struts2, log4j and slf4j.
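
    With log4j 1.x this can be approximated by attaching a StringWriter-backed WriterAppender for the duration of the task and temporarily lowering the logger threshold, as in the hedged sketch below. The helper name captureWhile is made up, and on a busy server a custom Filter keyed on the current thread would also be needed so output from other requests is not captured.

        import java.io.StringWriter;
        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;
        import org.apache.log4j.PatternLayout;
        import org.apache.log4j.WriterAppender;

        public class LogCapture {

            public static String captureWhile(Runnable task) {
                Logger root = Logger.getRootLogger();
                Level previousLevel = root.getLevel();

                StringWriter buffer = new StringWriter();
                WriterAppender appender =
                    new WriterAppender(new PatternLayout("%d{HH:mm:ss} %-5p [%t] %m%n"), buffer);

                root.addAppender(appender);
                root.setLevel(Level.DEBUG);       // turnLoggingLevelToDebug...
                try {
                    task.run();                   // ... do stuff ...
                } finally {
                    root.setLevel(previousLevel); // restoreLoggingLevel...
                    root.removeAppender(appender);
                    appender.close();
                }
                return buffer.toString();         // readAllLogs... equivalent
            }
        }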

    Read the article

  • Is extending a singleton class wrong?

    - by Anwar Shaikh
    I am creating a logger for an application, using a third-party logging library in which the logger is implemented as a singleton. I extended that logger class because I want to add some more static functions. Inside those static functions I use the single instance of the Logger class I inherited from. I neither create an instance of MyLogger nor re-implement the getInstance() method of the superclass. But I still get warnings such as: the destructor of MyLogger cannot be created because the parent class (Logger) destructor is not accessible. Am I doing something wrong? Is inheriting from a singleton wrong, or something that should be avoided?
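
    A common alternative to subclassing a vendor singleton is composition: keep the extra static helpers in a separate facade that delegates to the singleton's instance. The Java-flavoured sketch below uses a hypothetical ThirdPartyLogger stand-in for the third-party class; the same shape works in C++ with free functions or a namespace.

        // Stand-in for the vendor's singleton (hypothetical API).
        class ThirdPartyLogger {
            private static final ThirdPartyLogger INSTANCE = new ThirdPartyLogger();
            private ThirdPartyLogger() { }
            public static ThirdPartyLogger getInstance() { return INSTANCE; }
            public void info(String msg)  { System.out.println("INFO  " + msg); }
            public void error(String msg) { System.err.println("ERROR " + msg); }
        }

        // Composition instead of inheritance: a static facade that adds the
        // extra helpers but never subclasses the singleton.
        public final class MyLogger {
            private MyLogger() { }  // no instances, no destructor/constructor access issues

            public static void info(String msg) {
                ThirdPartyLogger.getInstance().info(msg);
            }

            public static void errorWithContext(String context, Throwable cause) {
                ThirdPartyLogger.getInstance().error(context + ": " + cause);
            }
        }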

    Read the article

  • Why doesn't nginx log mobile requests from an app developed with Titanium Appcelerator?

    - by Vicheanak
    I have sent requests from both the iPhone and Android versions of an app developed with Titanium Appcelerator to an nginx server (nginx/0.7.67 + Phusion Passenger 2.2.15) that uses this log format:

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

    but when I check the access log configured in conf/nginx.conf, no entries appear for the mobile requests. However, when I make the same request from a desktop browser, the log entry does appear. Has anyone run into this problem? Any suggestions are appreciated. Thank you.

    Read the article

  • Why isn't my log4net appender buffering?

    - by Eric
    I've created a custom log4net appender. It descends from log4net.Appender.SmtpAppender, which descends from log4net.Appender.BufferingAppenderSkeleton. I programmatically set up the following parameters in its constructor:

        this.Lossy = false;                                          // don't drop any messages
        this.BufferSize = 3;                                         // buffer up to 3 messages
        this.Threshold = log4net.Core.Level.Error;                   // append messages of Error or higher
        this.Evaluator = new log4net.Core.LevelEvaluator(Level.Off); // don't flush the buffer early for any message, regardless of level

    I expect this to buffer 3 events of level Error or higher and deliver those events when the buffer is filled. However, I'm finding that the events are not buffered at all; instead, SendBuffer() is called immediately every time an error is logged. Is there a mistake in my configuration? Thanks

    Read the article

  • git: changelog day by day

    - by takeshin
    How do I generate a changelog of commits grouped by date, in this format:

        [date today]
        - commit message1
        - commit message2
        - commit message3
        ...
        [date day+3]
        - commit message1
        - commit message2
        - commit message3
        ...
        (skip this day if there are no commits)
        [date day+1]
        - commit message1
        - commit message2
        - commit message3
        ...
        [date since]
        - commit message1
        - commit message2
        - commit message3

    Any git log command, or smart bash script?

    Read the article

  • Tracing logs with a macro: variadic parameter is always null (C++, Windows)

    - by sxingfeng
    I am using the following macro to log a function's execution time:

        #define TIME_COST(message, ...)                              \
            char szMessageBuffer[2048] = {0};                        \
            va_list ArgList;                                         \
            va_start(ArgList, message);                              \
            vsprintf_s(szMessageBuffer, 2048, message, ArgList);     \
            va_end(ArgList);                                         \
            string strMessage(szMessageBuffer);                      \
            CQLogTimer t(strMessage);

        // CQLogTimer is a scope guard: its destructor logs the object's
        // lifetime and prints szMessageBuffer.

    However, when I use the macro like this:

        void fun()
        {
            TIME_COST("hello->%s", filePath);
            // ...
        }

    the generated message is always hello->(null). Can anyone help? Many thanks!

    Read the article

  • Replacing objects, handling clones, dealing with write logs

    - by Alix
    Hi everyone, I'm dealing with a problem I can't figure out how to solve, and I'd love to hear some suggestions. [NOTE: I realise I'm asking several questions; however, answers need to take into account all of the issues, so I cannot split this into several questions.]

    Here's the deal: I'm implementing a system that underlies user applications and protects shared objects from concurrent accesses. The application programmer (whose application will run on top of my system) defines such shared objects like this:

        public class MyAtomicObject {
            // These are just examples of fields you may want to have in your class.
            public virtual int x { get; set; }
            public virtual List<int> list { get; set; }
            public virtual MyClassA objA { get; set; }
            public virtual MyClassB objB { get; set; }
        }

    As you can see, they declare the fields of their class as auto-generated properties (auto-generated means they don't need to implement get and set). This is so that I can extend their class and implement each get and set myself in order to handle possible concurrent accesses, etc.

    This is all well and good, but now it starts to get ugly: the application threads run transactions, like this:

    1. The thread signals it's starting a transaction. This means we now need to monitor its accesses to the fields of the atomic objects.
    2. The thread runs its code, possibly accessing fields for reading or writing. If there are accesses for writing, we hide them from the other transactions (other threads) and only make them visible in step 3. This is because the transaction may fail and have to roll back (undo) its updates, and in that case we don't want other threads to see its "dirty" data.
    3. The thread signals it wants to commit the transaction. If the commit is successful, the updates it made now become visible to everyone else. Otherwise, the transaction aborts, the updates remain invisible, and no one will ever know the transaction was there.

    So basically a transaction is a series of accesses that appear to have happened atomically, that is, all at the same time, in the same instant, which would be the moment of successful commit (as opposed to its updates becoming visible as it makes them).

    In order to hide the write accesses in step 2, I clone the accessed field (let's say it's the field list) and put it in the transaction's write log. After that, any time the transaction accesses list, it actually accesses the clone in its write log, not the global copy everyone else sees. That way, any changes it makes are done to the (invisible) clone, not to the global copy. If in step 3 the commit is successful, the transaction should replace the global copy with the updated list in its write log, and then the changes become visible to everyone else at once. It would be something like this:

        myAtomicObject.list = updatedCloneOfListInTheWriteLog;

    Problem #1: possible references to the list. Let's say someone puts a reference to the global list in a dictionary. When I do

        myAtomicObject.list = updatedCloneOfListInTheWriteLog;

    I'm only replacing the reference held in the field list, not the actual object (I'm not overwriting the data), so the dictionary still holds a reference to the old version of the list. A possible solution would be to overwrite the data (in the case of a list, empty the global list and add all the elements of the clone). More generally, I would need to copy the fields of one object onto the other. I can do this with reflection, but that's not very pretty. Is there any other way to do it?

    Problem #2: even if problem #1 is solved, I still have a similar problem with the clone: the application programmer doesn't know I'm giving him a clone and not the global copy. What if he puts the clone in a dictionary? Then at commit time there will be some references to the global copy and some to the clone, when in truth they should all point to the same object. I thought about providing a wrapper object that contains both the cloned list and a pointer to the global copy, but the programmer doesn't know about this wrapper, so they're not going to use the pointer at all. The wrapper would be like this:

        public class Wrapper<T> : T {
            // This would be the pointer to the global copy. The local data is
            // contained in whatever fields the wrapper inherits from T.
            private T thisPtr;
        }

    I do need this wrapper for comparisons: if I have a dictionary with an entry whose key is the global copy, and I look it up with the clone, like this:

        dictionary[updatedCloneOfListInTheWriteLog]

    I need it to return the entry, that is, to consider updatedCloneOfListInTheWriteLog and the global copy to be the same thing. For that I can just override Equals, GetHashCode, operator== and operator!=, no problem. However, I still don't know how to handle the case in which the programmer unknowingly inserts a reference to the clone into a dictionary.

    Problem #3: the wrapper must extend the class of the object it wraps (if it's wrapping MyClassA, it must extend MyClassA) so that it's accepted wherever an object of that class would be accepted. However, that class (MyClassA) may be sealed. This is pretty horrible :$.

    Any suggestions? I don't need to use a wrapper; anything you can think of is fine. What I cannot change is the write log (I need to have a write log) and the fact that the programmer doesn't know about the clone. I hope I've made some sense. Feel free to ask for more info if something needs clearing up. Thanks so much!
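
    For what it's worth, here is a minimal Java-flavoured sketch (not the asker's C# system) of the write-log idea being discussed: each transaction keeps a per-thread clone, reads and writes go to the clone, and commit copies the clone's contents back into the shared object in place (clear + addAll) so outside references stay valid, which is one answer to problem #1. All names are hypothetical and synchronization is kept minimal for brevity.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class WriteLogSketch {

            // Shared, globally visible state.
            private final List<Integer> sharedList = new ArrayList<>();

            // Per-thread write log: shared object -> private clone.
            private final ThreadLocal<Map<Object, List<Integer>>> writeLog =
                    ThreadLocal.withInitial(HashMap::new);

            // Transactional access: hand out the clone, never the shared copy.
            List<Integer> workingCopy() {
                return writeLog.get().computeIfAbsent(sharedList, k -> new ArrayList<>(sharedList));
            }

            // Commit: overwrite the shared data in place (clear + addAll) instead of
            // swapping the reference, so outside references to sharedList see the update.
            void commit() {
                List<Integer> clone = writeLog.get().remove(sharedList);
                if (clone != null) {
                    synchronized (sharedList) {
                        sharedList.clear();
                        sharedList.addAll(clone);
                    }
                }
            }

            // Abort: drop the clone; the shared state never saw the writes.
            void rollback() {
                writeLog.get().remove(sharedList);
            }
        }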

    Read the article

  • Does a Rails production.log persist indefinitely?

    - by Trip
    If it doesn't, what's its half-life? If it does, where can I find that information? On my server I found a few logs for each of my releases, but they only date back a few days. Specifically, I am looking for emails that were sent while my mail server was down two weeks ago.

    Read the article

  • Rails log shows unexpected data about the time spent on DB work

    - by Arhimed
    I'm running WinXP + Ruby 1.8.6 + Rails 2.3.5 (frozen to the project) in the development environment. Looking at development.log I see inconsistent figures for the time spent on database work.

    Example #1 (good):

        Processing PagesController#index (for 127.0.0.1 at 2010-05-11 12:15:54) [GET]
          Parameters: {"action"=>"index", "controller"=>"pages"}
          City Columns (563.0ms)  SHOW FIELDS FROM `cities`
          City Load (15.0ms)  SELECT * FROM `cities` WHERE (`cities`.`short_name` = 'NY') LIMIT 1
        Redirected to http://xyz:3000/sightings
        Completed in 953ms (DB: 578) | 302 Found [http://xyz/]

    Example #2 (unexpected):

        Processing PagesController#index (for 127.0.0.1 at 2010-05-11 12:15:36) [GET]
          Parameters: {"action"=>"index", "controller"=>"pages"}
          City Columns (0.0ms)  SHOW FIELDS FROM `cities`
          City Load (0.0ms)  SELECT * FROM `cities` WHERE (`cities`.`short_name` = 'NY') LIMIT 1
        Redirected to http://xyz:3000/sightings
        Completed in 47ms (DB: 32) | 302 Found [http://xyz/]

    Example #2 reports 32ms spent on the DB even though there were just two SQL queries, each reported as taking zero time.

    Example #3 (unexpected):

        Processing PagesController#index (for 127.0.0.1 at 2010-05-11 11:21:24) [GET]
          Parameters: {"action"=>"index", "controller"=>"pages"}
          City Columns (63.0ms)  SHOW FIELDS FROM `cities`
          City Load (62.0ms)  SELECT * FROM `cities` WHERE (`cities`.`short_name` = 'NY') LIMIT 1
        Redirected to http://xyz:3000/sightings
        Completed in 1187ms (DB: 297) | 302 Found [http://xyz/]

    Example #3 reports 297ms while the queries took 63ms and 62ms (125ms in total). I can't understand it. Could someone explain? Thanks in advance.

    Read the article
