Search Results

Search found 4489 results on 180 pages for 'logging'.

Page 48/180 | < Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >

  • SQL Server 2008 error message from stored procedure

    - by George2
    Hello everyone, I am using SQL Server 2008 Enterprise. We get the following error message from a stored procedure: Msg 1205, Level 13, State 52, Procedure Pr_FooV2, Line 9: Transaction (Process ID 111) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction. I am wondering whether such messages are stored in log files? I searched the Log folder of my SQL Server 2008 installation root (in my environment, it is C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log), but cannot find such messages there. Thanks in advance, George
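
    By default SQL Server returns error 1205 to the client but does not write the deadlock details to the error log. A minimal sketch, assuming you have sysadmin rights, that turns on the deadlock trace flags so the deadlock graph is written to the ERRORLOG files in that same Log folder:

        -- enable deadlock reporting for the whole instance (-1 = global)
        DBCC TRACEON (1204, -1);  -- classic deadlock report
        DBCC TRACEON (1222, -1);  -- XML-style deadlock report (SQL Server 2005 and later)

    Once the flags are on, the deadlock details should appear in the ERRORLOG files in the Log folder you already checked.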

    Read the article

  • How to use the Zend_Log instance that was created using the Zend_Application_Resource_Log in a model

    - by Alex
    Our Zend_Log is initialized by only adding the following lines to application.ini:
        resources.log.stream.writerName = "Stream"
        resources.log.stream.writerParams.mode = "a"
    So Zend_Application_Resource_Log will create the instance for us. We are already able to access this instance in controllers via the following:
        public function getLog() {
            $bootstrap = $this->getInvokeArg('bootstrap');
            //if (is_null($bootstrap)) return false;
            if (!$bootstrap->hasPluginResource('Log')) {
                return false;
            }
            $log = $bootstrap->getResource('Log');
            return $log;
        }
    So far, so good. Now we want to use the same log instance in model classes, where we cannot access the bootstrap. Our first idea was to register the very same Log instance in Zend_Registry, to be able to use Zend_Registry::get('Zend_Log') everywhere we want. In our Bootstrap class:
        protected function _initLog() {
            if (!$this->hasPluginResource('Log')) {
                throw new Zend_Exception('Log not enabled');
            }
            $log = $this->getResource('Log');
            assert( $log != null);
            Zend_Registry::set('Zend_Log', $log);
        }
    Unfortunately this assertion fails, i.e. $log IS NULL. But why? It is clear that we could just initialize the Zend_Log manually during bootstrapping, without using the automatism of Zend_Application_Resource_Log, so that kind of answer will not be accepted.
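
    One thing worth checking (an assumption about the cause, not confirmed by the poster): _init* methods can run before the 'log' plugin resource has been executed, in which case getResource('Log') returns null. A minimal sketch that forces the resource to run first:

        protected function _initLog()
        {
            // make sure the Zend_Application_Resource_Log resource has been executed
            $this->bootstrap('log');
            $log = $this->getResource('log');

            if ($log === null) {
                throw new Zend_Exception('Log resource not configured');
            }

            // expose the same instance to models via the registry
            Zend_Registry::set('Zend_Log', $log);
        }

    With that in place, models can call Zend_Registry::get('Zend_Log') as planned.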

    Read the article

  • Log your SQL in Rails application inside unit test

    - by Phuong Nguyễn
    I want to install a logger so that I can dump all executed SQL of my Rails application. The problem is that such a logger is associated with AbstractAdapter, which is initialized very early in test mode and thus cannot be set by my initializer code. I tried putting ActiveRecord::Base.logger = MyCustomLogger.new(STDOUT) at the end of environment.rb like someone advised, but it only works when run in the console environment (kicked off by script/console), not when run in test mode. I wonder if there is any way to configure such a logger so that it is sure to be used in any environment (test, development, production, console).
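
    A minimal sketch for the test case (assumptions: Rails 2.x conventions, the hypothetical MyCustomLogger from the question, and the fact that the 2.3 adapters cache the logger in an internal @logger instance variable): reassign the loggers in test_helper.rb, which every test run loads after the environment:

        # test/test_helper.rb -- loaded after config/environment.rb in every test run
        ENV["RAILS_ENV"] = "test"
        require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
        require 'test_help'

        sql_logger = MyCustomLogger.new(STDOUT)
        ActiveRecord::Base.logger = sql_logger
        # the already-established connection caches its own logger, so swap that too
        ActiveRecord::Base.connection.instance_variable_set(:@logger, sql_logger)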

    Read the article

  • Adding Timestamp to Java's GC messages in Tomcat 6

    - by ripper234
    I turned on Java's GC log options -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintGCDetails, which print out these messages to standard output (catalina.out):
        314.884: [CMS-concurrent-mark-start]
        315.014: [CMS-concurrent-mark: 0.129/0.129 secs] [Times: user=0.14 sys=0.00, real=0.13 secs]
        315.014: [CMS-concurrent-preclean-start]
        315.016: [CMS-concurrent-preclean: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
        315.016: [CMS-concurrent-abortable-preclean-start]
        332.055: [GC 332.055: [ParNew: 17128K->84K(19136K), 0.0017700 secs] 88000K->70956K(522176K) icms_dc=4 , 0.0018660 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
        CMS: abort preclean due to time 352.253: [CMS-concurrent-abortable-preclean: 0.023/37.237 secs] [Times: user=0.78 sys=0.02, real=37.23 secs]
    How can I make these log lines appear with an actual timestamp (including date) instead of these numbers, which presumably mean "time since the JVM started"?
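
    On reasonably recent JVMs (Java 6u4 and later, if I recall correctly) there is a flag for exactly this: -XX:+PrintGCDateStamps. A minimal sketch of the Tomcat options, assuming you set them via a bin/setenv.sh (the file name and gc.log path are assumptions; any place CATALINA_OPTS is set will do):

        # bin/setenv.sh
        # -XX:+PrintGCDateStamps prefixes each GC line with a wall-clock date/time
        export CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGC -XX:+PrintGCDetails \
            -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps \
            -Xloggc:$CATALINA_BASE/logs/gc.log"

    That produces lines starting with something like 2010-04-22T10:05:59.123+0200: 314.884: [CMS-concurrent-mark-start].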

    Read the article

  • Creating a new log file each day in C#

    - by Jason T.
    As the title implies, how can I create a new log file each day in C#? The program may not necessarily run all day and night, but only gets invoked during business hours. So I need to do two things. 1) How can I create a new log file each day? The log file will have a name in the format MMDDYYYY.txt. 2) How can I create it just after midnight, in case the program is running into all hours of the night?
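
    A minimal sketch (the DailyLog helper and the C:\Logs directory are my own, not from the question): derive the file name from the current date on every write, so a new file appears automatically on the first message after midnight, whether or not the process ran overnight:

        using System;
        using System.IO;

        static class DailyLog
        {
            // Appends a line to a file named after today's date, e.g. 10272012.txt
            public static void Write(string message)
            {
                string fileName = DateTime.Now.ToString("MMddyyyy") + ".txt";
                string path = Path.Combine(@"C:\Logs", fileName);   // assumed log directory
                File.AppendAllText(path,
                    DateTime.Now.ToString("HH:mm:ss") + "  " + message + Environment.NewLine);
            }
        }

    Because the name is recomputed on every call, nothing special has to happen at midnight; File.AppendAllText creates the next day's file on its first use.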

    Read the article

  • Jetty: How to write to access logs

    - by mdemmitt
    Hi all, In my Java servlet code, I want to be able to programmatically write to the Jetty access log. I am aware that Jetty will automatically log every incoming HTTP request to the access log. However, my servlet needs to occasionally append its own line to the access log. Has anyone here done something similar? Thanks!
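
    One low-tech option, sketched below under the assumption that the access log lives at a known path (the path and the exact line format here are hypothetical): open the same file in append mode and write an NCSA-style line yourself, rather than going through Jetty's request-log machinery:

        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.text.SimpleDateFormat;
        import java.util.Date;

        public final class AccessLogAppender {
            private static final String ACCESS_LOG = "/var/log/jetty/access.log"; // assumed location

            // Appends one extra NCSA-style line to the access log file.
            public static synchronized void append(String remoteAddr, String requestLine,
                                                   int status, long bytes) throws IOException {
                String ts = new SimpleDateFormat("dd/MMM/yyyy:HH:mm:ss Z").format(new Date());
                try (PrintWriter out = new PrintWriter(new FileWriter(ACCESS_LOG, true))) {
                    out.printf("%s - - [%s] \"%s\" %d %d%n", remoteAddr, ts, requestLine, status, bytes);
                }
            }
        }

    Since Jetty keeps the same file open and writes to it as well, the extra lines are best treated as best-effort annotations rather than an exact merge.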

    Read the article

  • Perl Capture and Modify STDERR before it prints to a file

    - by MicrobicTiger
    I have a Perl script which performs multiple external commands and prints the output from STDERR and STDOUT to a logfile, along with a series of my own print statements to act as documentation of the process. My problem is that STDERR repeats nearly identical lines, as in the example below. I'd like to capture this before it prints and replace it with the final result for each of the commands I run.
        blocks evaluated : 0
        blocks evaluated : 10000
        blocks evaluated : 20000
        blocks evaluated : 30000
        ...
        blocks evaluated : 3420000
        blocks evaluated : 3428776
    Here's how I'm redirecting STDOUT and STDERR:
        my $logfile = "Logfile.log"; #log file name

        #--- Open log file for append if specified ---
        if ( $logfile ) {
            open ( OLDOUT, ">&", STDOUT ) or die "ERROR: Can't backup STDOUT location.\n";
            close STDOUT;
            open ( STDOUT, ">", $logfile ) or die "ERROR: Logfile [$logfile] cannot be opened.\n";
        }
        if ( $logfile ) {
            open ( OLDERR, ">&", STDERR ) or die "ERROR: Can't backup STDERR location.\n";
            close STDERR;
            open ( STDERR, '>&STDOUT' ) or die "ERROR: failed to pass STDERR to STDOUT.\n";
        }
    and how I'm closing them:
        close STDERR;
        open ( STDERR, ">&", OLDERR ) or die "ERROR: Can't fix that first thing you broke!\n";
        close STDOUT;
        open ( STDOUT, ">&", OLDOUT ) or die "ERROR: Can't fix that other thing you broke!\n";
    How do I access STDERR when each print is occurring, so I can do the replacement? Or prevent a line from printing if it isn't the last of the batch. Many thanks in advance.
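
    Since the noisy lines come from the external commands themselves, one approach (a sketch, assuming the progress lines always look like "blocks evaluated : N") is to read each command's combined output through a pipe and only pass the interesting lines on to the already-redirected STDOUT:

        sub run_and_log {
            my ($cmd) = @_;
            # read the command's STDOUT and STDERR line by line through a pipe
            open( my $fh, '-|', "$cmd 2>&1" ) or die "ERROR: cannot run [$cmd]: $!\n";
            my $last_progress;
            while ( my $line = <$fh> ) {
                if ( $line =~ /^blocks evaluated/ ) {
                    $last_progress = $line;    # remember only the most recent one
                } else {
                    print STDOUT $line;        # STDOUT already points at the logfile
                }
            }
            close $fh;
            print STDOUT $last_progress if defined $last_progress;
        }

    run_and_log('some_external_command and its args') would then replace however the external commands are currently launched.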

    Read the article

  • Database solution for 200 million writes/day, monthly summarization queries

    - by sb
    Hello. I'm looking for help deciding on which database system to use. (I've been googling and reading for the past few hours; it now seems worthwhile to ask for help from someone with firsthand knowledge.) I need to log around 200 million rows (or more) per 8 hour workday to a database, then perform weekly/monthly/yearly summary queries on that data. The summary queries would be for collecting data for things like billing statements, eg. "How many transactions of type A did each user run this month?" (could be more complex, but that's the general idea). I can spread the database amongst several machines, as necessary, but I don't think I can take old data offline. I'll definitely need to be able to query a month's worth of data, maybe a year. These queries would be for my own use, and wouldn't need to be generated in real-time for an end-user (they could run overnight, if needed). Does anyone have any suggestions as to which databases would be a good fit? P.S. Cassandra looks like it would have no problem handling the writes, but what about the huge monthly table scans? Is anyone familiar with Cassandra/Hadoop MapReduce performance?

    Read the article

  • Cakephp: how do I know what route was used

    - by Jason
    So I am a total CakePHP newb, and one of the first things I expected to see was basic info about each page request being logged; more specifically, the route data, including which controller/method is being used. Obviously I did not find what I was expecting, and about the only meaningful info I can find is from the Apache logs. What I expected was something similar to the first log entry for a Rails app request. Does CakePHP not log this kind of data?
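
    CakePHP does not appear to write per-request routing info out of the box, but you can add it yourself. A minimal sketch for app_controller.php (beforeFilter and $this->log() are real CakePHP hooks; the message format is my own):

        class AppController extends Controller {

            function beforeFilter() {
                // $this->params holds the resolved route: controller, action, passed args, etc.
                $this->log(
                    'Request: ' . $this->params['controller'] . '/' . $this->params['action'] .
                    ' url=' . $this->here,
                    LOG_DEBUG
                );
            }
        }

    The entries should end up in app/tmp/logs/debug.log rather than the Apache logs.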

    Read the article

  • Android Terminal and Log Dumping

    - by J3hova
    I am trying to send terminal commands programmatically from an Android activity. At the moment I'm using something like the following:
        Process process = null;
        DataOutputStream os = null;
        process = Runtime.getRuntime().exec("su");
        os = new DataOutputStream(process.getOutputStream());
        os.writeBytes("./data/program1\n");
        os.writeBytes("./data/program2\n");
        os.writeBytes("exit\n");
        os.flush();
    However, my program1 is failing to run successfully, and I believe it is due to inadequate user permissions. Now for my question: does anyone know how I can dump the terminal output to a file and save it on the phone or sdcard? My program ties into the terminal to feed it commands; I want a way to open a connection the other way and capture the output that would normally be visible on a terminal screen.
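
    Two sketches of ways to get at that output (the paths are assumptions, and java.io.BufferedReader/InputStreamReader plus android.util.Log are assumed to be imported): either let the shell redirect it, or read the process's streams back in Java:

        // Option 1: let the shell do the redirection, one file for stdout+stderr
        os.writeBytes("./data/program1 >> /sdcard/terminal.log 2>&1\n");

        // Option 2: read what the su shell prints back to us
        BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            Log.d("ShellOutput", line);   // or write it to a file on the sdcard
        }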

    Read the article

  • Control Debug Level in C++ Library - Linux

    - by rursw1
    Hi all, I have a C++ library which is used on both Linux and Windows. I want to enable the user to control the debug level (0 - no debug, 1 - only critical errors ... 5 - informative debug information). The debug log is printed to a text file. On Windows, I can do it using a registry value (DWORD DebugLevel). What would be a good replacement that also works on Linux? (Without third-party tools, for example a Linux "registry".) Thanks in advance!
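
    A common convention on Linux is to read the level from an environment variable or a small config file under the user's home directory. A sketch of the environment-variable route (the variable name is made up), which needs nothing beyond the standard library:

        #include <cstdlib>

        // Returns the debug level 0..5; defaults to 0 when MYLIB_DEBUG_LEVEL is unset or invalid.
        int debug_level()
        {
            const char* value = std::getenv("MYLIB_DEBUG_LEVEL");  // hypothetical variable name
            if (!value) return 0;
            int level = std::atoi(value);
            if (level < 0) level = 0;
            if (level > 5) level = 5;
            return level;
        }

    A dotfile such as ~/.mylib.conf parsed at startup would work just as well if an environment variable feels too transient.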

    Read the article

  • Squid logs on mongodb

    - by user306241
    Hi, I'm planning to log my Squid instances to MongoDB, but the actual problem is that we have a huge amount of traffic to be logged, with every access authenticated by user/pass. Eventually we have to produce some reports based on the logs. I was thinking of grouping the inserts by month and by user, so my collection would look like this:
        {month: 'april', users: [{user: 'loop0', logs: [{timestamp: 12345678.9, url: 'http://stackoverflow.com/question/ask', ... }]}]}
    So if I want to generate my reports for April, I just have to fetch the right month instead of scanning zillions of lines to find the ones whose timestamps fall between April 1 and April 30. Of course this type of insert will be slower than just inserting the log line directly. So my question is: is there a better way to do this? Nowadays we have around 12 million log lines per day.
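
    For what it's worth, the more common pattern (sketched below in mongo-shell syntax; the collection and field names are mine) is one flat document per log line plus a compound index, letting the index do the month filtering and avoiding ever-growing array documents:

        // one small document per access-log line
        db.squid_logs.insert({
            user: 'loop0',
            ts: new Date(),                       // real timestamp, not a month bucket
            url: 'http://stackoverflow.com/questions/ask',
            bytes: 3521
        });

        // compound index so per-user, per-month report queries are simple index range scans
        db.squid_logs.ensureIndex({ user: 1, ts: 1 });

        // "everything user loop0 did in April 2010"
        db.squid_logs.find({
            user: 'loop0',
            ts: { $gte: new Date(2010, 3, 1), $lt: new Date(2010, 4, 1) }
        });

    The monthly report then becomes a ranged query (or a map/reduce) over that index instead of a rewrite-heavy nested document.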

    Read the article

  • Can I get parameter names/values procedurally from the currently executing function?

    - by Pwninstein
    I would like to do something like this:
        public MyFunction(int integerParameter, string stringParameter){
            //Do this:
            LogParameters();
            //Instead of this:
            //Log.Debug("integerParameter: " + integerParameter +
            //          ", stringParameter: " + stringParameter);
        }

        public LogParameters(){
            //Look up 1 level in the call stack (if possible),
            //Programmatically loop through the function's parameters/values
            //and log them to a file (with the function name as well).
            //If I can pass a MethodInfo instead of analyzing the call stack, great.
        }
    I'm not even sure what I want to do is possible, but it would be very nice to be able to automatically output parameter names/values at runtime to a file without explicitly writing the code to log them. Thanks!
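
    Parameter names are reachable through the stack, but parameter values are not: reflection over a StackFrame gives you the caller's MethodBase and its ParameterInfo list, not the arguments that were passed. A minimal sketch of what is and is not available (values still have to be handed in explicitly):

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Reflection;

        static class ParameterLogger
        {
            // Logs the caller's name and parameter names; values must be supplied by the caller.
            public static void LogParameters(params object[] values)
            {
                MethodBase caller = new StackTrace().GetFrame(1).GetMethod();
                ParameterInfo[] parameters = caller.GetParameters();

                string line = caller.Name + "(" + string.Join(", ",
                    parameters.Select((p, i) =>
                        p.Name + ": " + (i < values.Length ? Convert.ToString(values[i]) : "?"))
                    .ToArray()) + ")";

                Console.WriteLine(line);   // or append to a file
            }
        }

    Calling ParameterLogger.LogParameters(integerParameter, stringParameter) from MyFunction would then print something like MyFunction(integerParameter: 42, stringParameter: abc); note that JIT inlining can make the stack-frame lookup unreliable in optimized release builds.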

    Read the article

  • How to log the raw SQL from the Oracle OCCI C++ API?

    - by savanna
    One of our customers is complaining that our application is not working. Their reasoning is that our SQL function call to their Oracle database is not getting the "expected" result. Sometimes a call should fail, but our application gets success back from their database. It's really frustrating, because it's their database and we cannot run any tests on it. We are using the C++ Oracle OCCI API. Is there any way we can log the raw SQL from our end? That would be very helpful: we could ship the script to them and let them debug it in their system to figure out the problem. Thanks in advance.
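
    OCCI itself has no built-in statement tracing that I know of, but since every statement goes through Connection::createStatement, a thin wrapper of your own can record each SQL string before it is executed. A sketch (the log file name and the use of std::ofstream as the sink are my own choices):

        #include <occi.h>
        #include <fstream>
        #include <string>

        // Creates the statement as usual, but appends the raw SQL text to a log file first.
        oracle::occi::Statement* createLoggedStatement(oracle::occi::Connection* conn,
                                                       const std::string& sql)
        {
            static std::ofstream sqlLog("raw_sql.log", std::ios::app);  // assumed log location
            sqlLog << sql << ";\n";
            sqlLog.flush();
            return conn->createStatement(sql);
        }

    Bind variable values set via setInt/setString and friends would still need to be logged separately next to those calls.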

    Read the article

  • Getting information about where c++ exceptions are thrown inside of catch block?

    - by tfinniga
    I've got a C++ app that wraps large parts of the code in try blocks. When I catch exceptions I can return the user to a stable state, which is nice. But I'm no longer receiving crash dumps. I'd really like to figure out where in the code the exception is taking place, so I can log it and fix it. Being able to get a dump without halting the application would be ideal, but I'm not sure that's possible. Is there some way I can figure out where the exception was thrown from within the catch block? If it's useful, I'm using native MSVC++ on Windows XP and higher. My plan is to simply log the crashes to a file on the various users' machines, and then upload the crash logs once they get to a certain size.
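
    For exceptions you throw yourself, the classic low-tech answer (a sketch, not tied to any particular library) is to bake the location in at the throw site with a macro, so the catch block can log it; it will not help for exceptions thrown by third-party code:

        #include <stdexcept>
        #include <sstream>
        #include <string>

        // Exception that carries the file/line it was thrown from.
        class located_error : public std::runtime_error {
        public:
            located_error(const std::string& msg, const char* file, int line)
                : std::runtime_error(format(msg, file, line)) {}
        private:
            static std::string format(const std::string& msg, const char* file, int line) {
                std::ostringstream os;
                os << file << "(" << line << "): " << msg;
                return os.str();
            }
        };

        #define THROW_LOCATED(msg) throw located_error((msg), __FILE__, __LINE__)

    A catch block logging e.what() then gets something like foo.cpp(42): message; exceptions raised by code you don't control still won't carry a location.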

    Read the article

  • A simple log file format

    - by hgulyan
    Hi, I'm not sure if this was asked before, but I couldn't find anything like it. My program uses a simple .txt file for logging purposes; it just creates/opens a file and appends lines. After some time I started to log quite a lot of activity, so the file became too large and hardly readable. I know that it's not the right way to do this, but I simply need to have a readable file. So I thought maybe there's a simple file format for log files and a tool to view it, or perhaps you have other suggestions on this question? Thanks for the help in advance.

    Read the article

  • rails log rotation behaves weird (rails version 2.3.5)

    - by robodo
    I'm trying to set up log rotation in Rails. I have put this in my config/environments/development.rb: config.logger = Logger.new("#{RAILS_ROOT}/log/#{ENV['RAILS_ENV']}.log", 1, 5*1048576) Two files are created :-) but it looks like Rails is writing to them randomly, and to both at the same time. This creates messy log files :-( What am I missing?
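
    Ruby's built-in Logger rotation is known to misbehave when several processes (multiple Mongrel or Passenger workers, for example) share the same log file: each process rotates independently and they end up writing to different generations at once. A common workaround is to let the OS rotate instead. A sketch of an external logrotate config (the path and retention are assumptions):

        # /etc/logrotate.d/myrailsapp   (hypothetical file)
        /var/www/myrailsapp/shared/log/*.log {
            daily
            rotate 14
            compress
            missingok
            notifempty
            copytruncate      # truncate in place so Rails keeps its open file handle
        }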

    Read the article

  • How can I get Forever to write to a different log file every day?

    - by user1438940
    I have a cluster of production servers running a Node.JS app via Forever. As far as I can tell, my options for log files are as follows: 1) let Forever do it on its own, in which case it will log to ~/.forever/XXXX.log, or 2) specify one specific log file for the entire life of the process. What I'd like to do, however, is have it log to a different file every day, e.g. 20121027.log, 20121028.log, etc. Is this possible? If so, how can it be done?
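
    One way that sidesteps Forever's own log handling (a sketch; the module name and log directory are mine): have the app write through a tiny wrapper that reopens its stream whenever the calendar date changes:

        // dailylog.js -- append-only logger that switches files at midnight
        var fs = require('fs');

        var stream = null;
        var currentFile = null;

        function fileNameFor(d) {
            function pad(n) { return n < 10 ? '0' + n : '' + n; }
            return '' + d.getFullYear() + pad(d.getMonth() + 1) + pad(d.getDate()) + '.log';
        }

        exports.log = function (message) {
            var now = new Date();
            var name = fileNameFor(now);
            if (name !== currentFile) {          // first write of a new day: reopen the stream
                if (stream) stream.end();
                stream = fs.createWriteStream('/var/log/myapp/' + name, { flags: 'a' }); // assumed dir
                currentFile = name;
            }
            stream.write(now.toISOString() + ' ' + message + '\n');
        };

    The app then calls require('./dailylog').log('...') wherever it currently writes to the console, and Forever's own log file stays small as long as the app stops writing to stdout.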

    Read the article

  • In ufw is there any way to disable logging for a particular rule?

    - by thomasrutter
    I am using UFW with a default logging policy of "low". I would like to keep this logging on for the default deny action, but disable it for a particular IP address only. So I'd like to create one particular new rule that doesn't have logging. Is there a way to achieve this? I have a rather uncomplicated ufw setup so far, like this:
        Status: active
        Logging: on (low)
        Default: deny (incoming), allow (outgoing)
        New profiles: skip

        To                         Action      From
        --                         ------      ----
        22/tcp                     LIMIT       Anywhere
        80/tcp                     ALLOW       Anywhere
        443/tcp                    ALLOW       Anywhere
        22/tcp                     ALLOW       Anywhere (v6)
        80/tcp                     ALLOW       Anywhere (v6)
        443/tcp                    ALLOW       Anywhere (v6)

    Read the article

  • Is it possible to override the default logging for Glassfish v3?

    - by kgrad
    Related to this question. It appears that Glassfish is exporting slf4j into my application and overriding my logging solution. Is it possible for me to override Glassfish's logging and have my own logging solution take precedence? After searching, I have only found ways to modify the log using logging.properties. I am not married to my current implementation, but I am interested in making it work. thanks.

    Read the article

< Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >