Search Results

Search found 17616 results on 705 pages for 'uls log'.


  • Fedora Tomcat log file path

    - by Kamil
    My log file is inside: kamil@localhost tomcat$ grep "logs/" ./* ./log4j.properties:log4j.appender.R.File=${catalina.home}/logs/tomcat.log my CATALINA_HOME is kamil@localhost tomcat$ sudo grep "CATALINA" ./* ... ./tomcat.conf:CATALINA_HOME="/usr/share/tomcat" The above suggests that my log file is here, and there it is: kamil@localhost tomcat$ sudo ls /usr/share/tomcat/logs/ | grep .out catalina.out So why can't I start the server? kamil@localhost tomcat$ sudo tomcat start /usr/sbin/tomcat: line 30: /logs/catalina.out: No such file or directory
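
    The error suggests that /usr/sbin/tomcat builds the catalina.out path from an empty $CATALINA_HOME when the script is invoked directly. A minimal sketch of two workarounds, assuming the service wrapper sources /etc/tomcat/tomcat.conf and that the service name is simply "tomcat":

        # Let the init/service wrapper set up the environment from tomcat.conf:
        sudo service tomcat start

        # Or, when calling the script by hand, pass CATALINA_HOME explicitly:
        sudo env CATALINA_HOME=/usr/share/tomcat /usr/sbin/tomcat start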

    Read the article

  • ESET Remote Administrator Console showing infected files on a client, but threat log is empty

    - by Aron Rotteveel
    We recently deployed ESET NOD32 Antivirus on our small domain network and use the Remote Administrator to manage everything remotely. On a recent full system scan, one of the clients shows 10 infected files in the scan log, of which 4 have been cleaned. The strange thing, however, is that the threat log is empty. Is there any reason why the threat log is empty? What has happened to the 6 remaining uncleaned files? Where can I view information on which files are infected and what they have been infected with? I know this can be done through the scan log properties screen, but with 958790 files scanned, I obviously do not want to browse through this list. Any help is appreciated.

    Read the article

  • How to HIDE "client denied by server configuration:" error in log

    - by Keith
    I want to block access to my web server by default as a precaution but I keep getting the following errors showing up in my error log. [Wed Jun 27 23:30:54 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar [Wed Jun 27 23:32:40 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar [Wed Jun 27 23:35:39 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar [Thu Jun 28 01:01:17 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php [Thu Jun 28 02:34:57 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxy.php [Thu Jun 28 05:41:33 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php [Thu Jun 28 06:55:10 2012] [error] [client 180.76.6.20] client denied by server configuration: /home/www/default/ [Thu Jun 28 07:31:26 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar [Thu Jun 28 07:32:25 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar [Thu Jun 28 07:36:10 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar I don't really want these errors to show up but whatever I do, I can't get rid of them. Does anyone know how I can achieve this? Here is a copy of my configuration. <VirtualHost *:80> DocumentRoot /home/www/default <Directory /> AllowOverride None Order Deny,Allow Deny from all </Directory> #ErrorLog /var/log/apache2/error.log #LogLevel warn CustomLog /var/log/apache2/access.log combined </VirtualHost>
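
    One hedged approach for Apache 2.2 (matching the config shown): the question's own samples show "client denied by server configuration" being written at the [error] level, so raising the vhost's LogLevel hides those entries while keeping more serious messages. A sketch, with the directives shown as comments because they belong inside the existing block:

        # Inside the existing <VirtualHost *:80> block:
        #     ErrorLog /var/log/apache2/error.log
        #     LogLevel crit
        # Check the syntax and reload Apache afterwards:
        sudo apachectl configtest && sudo /etc/init.d/apache2 reload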

    Read the article

  • cron doesn't execute any task, but writes to its log as if executed

    - by FractalizeR
    I have a strange problem on one of my servers. Cron does not execute any task, but it writes to its log that the task has been executed successfully, as if some simulation mode were activated... Apr 30 03:03:08 nd-10049 crond[13387]: (root) CMD (php /usr/local/frb/backup.php) Apr 30 03:05:01 nd-10049 crond[13397]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php>/home/support/public_html/cron/hourly.log) Apr 30 03:09:01 nd-10049 crond[19108]: (root) CMD (/etc/webmin/cron/tempdelete.pl ) Apr 30 03:10:01 nd-10049 crond[19467]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php>/home/support/public_html/cron/hourly.log) Apr 30 03:14:44 nd-10049 crontab[21154]: (root) BEGIN EDIT (root) Apr 30 03:15:01 nd-10049 crond[21309]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php>/home/support/public_html/cron/hourly.log) Apr 30 03:15:38 nd-10049 crontab[21154]: (root) REPLACE (root) Apr 30 03:15:38 nd-10049 crontab[21154]: (root) END EDIT (root) Apr 30 03:16:01 nd-10049 crond[14961]: (root) RELOAD (cron/root) Apr 30 03:20:02 nd-10049 crond[22620]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php) There are no cron-related errors in the common log (messages). The OS is CentOS. What can I do to diagnose this? What could the problem be?
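
    A rough way to narrow this down, using the path from the question and a hypothetical canary file: first prove whether crond executes anything at all, then re-run the real command under a cron-like environment to catch differences that only appear outside a login shell.

        # Canary: if /tmp/cron-canary is not touched every minute, crond itself is broken.
        ( crontab -l; echo '* * * * * /bin/touch /tmp/cron-canary' ) | crontab -

        # Re-run the real job with a minimal, cron-like environment to expose
        # PATH or php-binary problems the interactive shell hides.
        env -i SHELL=/bin/sh PATH=/usr/sbin:/usr/bin:/bin HOME=/root php /usr/local/frb/backup.php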

    Read the article

  • Best tools for "ssh tail -f" style log file monitoring and analysis

    - by dougnukem
    I'm looking for a tool to monitor custom PHP error logs/Apache logs and possibly Java logs on remote development servers. I'm not looking for a full production log system like Splunk, but something that's a little more flexible than an ssh terminal doing a "tail -f". Perhaps something that will: mirror multiple log files to my local machine for later searching/analysis; raise "alerts" when certain strings appear in a log; and provide some kind of tabbed/dashboard view of the multiple logs being monitored (fewer than 10 logs in total).
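
    Short of a dedicated tool (multitail and similar utilities cover much of the dashboard side), the pattern can be sketched in plain shell; the host name, log path, and alert string below are placeholders:

        # Keep a local copy of the remote log and raise a desktop alert on matches.
        ssh dev1 'tail -F /var/log/httpd/error_log' \
          | tee -a ~/logs/dev1-error.log \
          | grep --line-buffered -i 'fatal' \
          | while read -r line; do
                notify-send "dev1 error" "$line"    # or pipe to mail(1) instead
            done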

    Read the article

  • Name of log file where boot process is logged

    - by ant2009
    Hello, CentOS 5.3. After booting up, I am wondering: what is the name of the log file that records whether all services were loaded successfully or not? For example, when the computer boots you get a list of starting services, and each can be OK or FAILED. Is there a log file where this information is kept? I had a look in /var/log/, but I am not sure which file contains the information that I need. Many thanks for any advice.
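
    On CentOS 5 the [ OK ]/[FAILED] lines from the init scripts normally end up in /var/log/boot.log, with much of the same information duplicated in /var/log/messages; a quick check might look like this:

        grep -i failed /var/log/boot.log      # service start-up results from the init scripts
        dmesg | less                          # kernel messages from the same boot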

    Read the article

  • Can the mysql slow query log show milliseconds?

    - by Chase Seibert
    The MySQL slow query log shows query time in whole seconds. # Query_time: 0 Lock_time: 0 Rows_sent: 177 Rows_examined: 177 SELECT ... # Query_time: 1 Lock_time: 0 Rows_sent: 56 Rows_examined: 208 SELECT ... There was a microsecond patch that allows MySQL to be configured to log queries that take longer than X microseconds to run. But is there a way to have the log output the query time in either milliseconds or microseconds?

    Read the article

  • Hardening a server: disallow password login for sudoers and log unusual IPs

    - by Fabian Zeindl
    Two questions regarding sudo logins on an Ubuntu system (Debian tips welcome as well): Is it possible to require sudoers on my box to log in only with public-key authentication? Is it possible to log which IPs sudoers log in from and check that for "unusual activity", or take action on it? I'm thinking about temporarily removing sudo rights if sudoers don't log in from whitelisted IPs. Or is that too risky and easily exploited?
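
    For the first question, sshd's Match block (OpenSSH 4.4 and later) can turn off password authentication per group; the sketch below assumes the sudoers all belong to a group named admin, so adjust the group name and service name to your setup:

        printf 'Match Group admin\n    PasswordAuthentication no\n' | sudo tee -a /etc/ssh/sshd_config
        sudo /etc/init.d/ssh reload    # the service is called "sshd" on some distributions

    For the second question, the SSH logins that precede any sudo use are already recorded with their source IPs in /var/log/auth.log, so one low-tech option is a periodic job that greps that file against a whitelist and alerts on anything else.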

    Read the article

  • Benefits of log rotation

    - by Manfred Moser
    I have been using log rotation for years and never thought of it as a problem until I came across a question on Stack Overflow (http://stackoverflow.com/questions/1508734/disable-java-log-rotation/) where someone wants to disable log rotation. Having had build servers and even production servers cleaned up manually because logs were not rotated, disks were running out, and machines suddenly came to a halt, disabling it seems crazy to me, but it occurred to me that maybe the benefits are not so obvious after all. So what are the benefits of log rotation? And what are the drawbacks (e.g. perhaps more difficult debugging/analysis)? What tools do you find useful for working with rotated log files? Splunk, I assume, but what else?

    Read the article

  • Squid Log Rotation and Sarg

    - by beakersoft
    We have just set up Squid as our proxy, and I was going to use Sarg to analyze the log files. I had initially set the Squid logs to rotate every day so they don't get huge. The problem is I can't see an option in the Sarg config to read a folder full of Squid log files (say *.log). Is there an easy way to do this, or am I going to have to write a bash script or something to merge them all into one before I get Sarg to read it? Cheers, Luke
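
    One workaround, sketched with assumed Debian-style paths, is to merge the rotated files into a single temporary log and feed that to sarg with its -l option:

        # Glob order is only approximate; sort -n the combined file if exact ordering
        # matters (squid access.log lines start with a Unix timestamp).
        cat /var/log/squid/access.log.[0-9]* /var/log/squid/access.log > /tmp/squid-combined.log
        sarg -l /tmp/squid-combined.log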

    Read the article

  • Boot log for Windows XP

    - by JasCav
    Where can I find a step-by-step boot log of my Windows XP machine? I'm looking for something akin to the boot log you would get in Linux (with what is running at what times, how long it runs, etc.). I am specifically interested in what happens after I get out of the initial boot phase (i.e., after the Windows XP logo goes away and I move to the generic blue background, and as I log in as a user on the machine).

    Read the article

  • Nginx max worker_connections and access log

    - by MotoTribe
    I'm troubleshooting an issue with my site. I can see in the nginx error log that the max worker_connections limit was reached when the site went down. I'm not seeing an increase in requests during that time in the nginx access log. Does that mean the MySQL database had a bottleneck at that time that caused the requests to queue up? Or would nginx not log any requests that were made after the max worker_connections limit was reached?

    Read the article

  • Can't remove ZFS log device from pool

    - by netmano
    I run a FreeBSD 9.0 server with ZFS pool version 28 and ZFS version 5. I had two pools, each with a log on one of two partitions of an SSD. These pools were created on FreeBSD 8.2 with ZFS pool version 15 and ZFS version 4. After I upgraded to the new ZFS version, I tried to remove the SSD log device from both pools; both commands were successful (no error message). The log was removed from one of the pools, but on the other it is still there. I shut down the server, removed the SSD physically, and hoped it would be forgotten by the zpool. The zpool became degraded because the SSD was missing. I tried to remove it again: no error message, but the log device entry is still there. After that, to bring the pool online again, I created a file on the root UFS partition and replaced the missing device with this file. That was successful, and the pool is online again. However, I can't remove the log device from the pool. Where should I look for error messages? (There is nothing about it in dmesg, and zfs remove doesn't give any error message either; it seems as though it was successful.)
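
    For reference, the usual sequence looks like the sketch below (the pool name and log path are placeholders); if the path no longer matches what the pool remembers, the GUID that zdb reports for the log vdev may work where the path does not:

        zpool status -v tank                 # note the exact name shown for the log vdev
        zpool remove tank /path/to/log-file  # log and cache vdevs are removable by name
        # If the name no longer resolves, "zdb -C tank" prints each vdev's guid,
        # and "zpool remove tank <guid>" may be accepted in its place.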

    Read the article

  • Place PHP errors in a log file

    - by Gatura
    I am running Mac OS X 10.6.4 on an iMac and am using it as a development server. I have Apache and Entropy PHP 5 installed. When I write my applications, some pages won't run when PHP has errors, but these errors are not recorded in a log file. I created php_errors.log and entered the following in the php.ini file: error_log = /usr/local/php5/logs/php_errors.log. However, errors are not written to this file, and I have log_errors = true. What could be the problem?
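
    A few things worth checking, sketched as shell commands with the paths from the question; note that the CLI can read a different php.ini than Apache's mod_php, so phpinfo() in a browser page is the authoritative view of what the web server actually uses:

        php --ini                                  # which php.ini the CLI loads; compare with phpinfo()
        php -i | grep -E 'error_log|log_errors'    # the directives as the CLI sees them
        # The log file must exist and be writable by the web server user (_www on Mac OS X):
        sudo touch /usr/local/php5/logs/php_errors.log
        sudo chown _www /usr/local/php5/logs/php_errors.log
        sudo apachectl graceful                    # reload Apache so php.ini changes take effect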

    Read the article

  • How to log nginx vhost bandwidth?

    - by bwizzy
    I'm looking for a way to track the bandwidth of multiple vhosts on an nginx web server. I'm guessing there is a way to set up the log files to output this information, and then I can write a script to parse the log files and add up the bytes transferred. If that is the case, does anyone know the correct log format, and whether there is already a script out there that does this?
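
    One approach, sketched rather than taken verbatim from the nginx docs: define a log_format that includes $bytes_sent, give each vhost its own access_log using it, and sum the byte column afterwards. File names below are placeholders.

        # nginx.conf, http block (shown as comments):
        #     log_format bandwidth '$host [$time_local] "$request" $status $bytes_sent';
        # each server block:
        #     access_log /var/log/nginx/example.com.bw.log bandwidth;
        # Then total the last column, e.g. in megabytes:
        awk '{ total += $NF } END { printf "%.1f MB\n", total / 1048576 }' /var/log/nginx/example.com.bw.log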

    Read the article

  • IIS6 Log time recording problems

    - by Hafthor
    On three separate occasions on two separate servers, at nearly the same times, 6.9 hours seemingly went by without any data being written to the IIS logs; on closer inspection, it appears that it was all recorded at once. Here are the facts as I know them: Windows Server 2003 R2 with IIS6; logging uses GMT, server local time is GMT-7; the application was still operating, and I have SQL data to prove that; the time gaps appear within a log file, not across two; # headers appear at the gap; the load balancer pings every 30 seconds; no caching. Here is info on a particular case: an entry appears for 2009-09-21 18:09:27, then the # headers; the next entry is for 2009-09-22 01:21:54, and so are the next 1600 entries in this log file and 370 in the next log file. About half of the ~2000 entries stamped 2009-09-22 01:21:54 are load balancer pings (estimated at 2/min for 6.9 hrs = 828 pings); after that, entries are recorded as normal. I believe that these events may coincide with me deploying an ASP.NET application update onto those machines. Here is some relevant content from the logs in question: ex090921.log line 3684 2009-09-21 17:54:40 GET /ping.aspx - 80 404 0 0 3733 122 0 2009-09-21 17:55:11 GET /ping.aspx - 80 404 0 0 3733 122 0 2009-09-21 17:55:42 GET /ping.aspx - 80 404 0 0 3733 122 0 2009-09-21 17:56:13 GET /ping.aspx - 80 404 0 0 3733 122 0 2009-09-21 17:56:45 GET /ping.aspx - 80 404 0 0 3733 122 0 #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2009-09-21 18:04:37 #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken 2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 3078 2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 109 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 3828 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0 ... continues until line 5449 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0 <eof> ex090922.log #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2009-09-22 00:00:16 #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0 ... continues until line 367 2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0 2009-09-22 01:04:30 GET /ping.aspx - 80 200 0 0 277 122 0 ... back to normal behavior Note the seemingly correct date/time written to the # header of the new log file. Also note that /ping.aspx returned 404, then switched to 200 just as the problem started. I rename the "I'm alive" page so the load balancer stops sending requests to the server while I'm working on it; what you see here is me renaming it back so the load balancer will use the server again. So this problem definitely coincides with me re-enabling the server. Any ideas?

    Read the article

  • Count requests from access log for the last 7 days

    - by RoboForm
    I would like to parse an access log file and have it return the number of requests for the last 7 days. I have this command: cut -d'"' -f3 /var/log/apache/access.log | cut -d' ' -f2 | sort | uniq -c | sort -rg Unfortunately, this command returns the number of requests since the creation of the file and groups them by HTTP status code. I would like just a single number, no categories, and only for the last 7 days. Thanks.
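
    A sketch using GNU date and GNU awk (for mktime): it filters on the [dd/Mon/yyyy:...] timestamp in the fourth field of a common/combined-format log and prints a single count. The log path is the one from the question.

        since=$(date -d '7 days ago' +%s)
        awk -v since="$since" '{
            split(substr($4, 2), t, /[\/:]/)                               # dd Mon yyyy HH MM SS
            m = (index("JanFebMarAprMayJunJulAugSepOctNovDec", t[2]) + 2) / 3
            if (mktime(t[3] " " m " " t[1] " " t[4] " " t[5] " " t[6]) >= since)
                count++
        } END { print count + 0 }' /var/log/apache/access.log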

    Read the article

  • After deleting log files, Ubuntu server still says there is no space

    - by Mark
    My Ubuntu server has stopped due to a lack of disk space. I deleted some log files which had grown huge very quickly, but df -h still shows I have no space left. When I run du -sh /*, I can see that I should have plenty of disk space left after deleting the logs. I ran lsof +L1 and it brought up two files: /var/log/mail.log and /var/log/mail.err. These are two logs I had deleted. I restarted apache, postfix and mysql (mysql won't restart because of the lack of disk space, I think), but df -h still shows no space.
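
    This is the classic symptom of deleted-but-still-open files: df only returns the blocks once every process holding them closes its file descriptor. Since lsof +L1 already names the culprits, a sketch of the two usual ways out (the daemon name, PID and FD below come from that output, not from this answer):

        sudo lsof +L1                         # the PID and FD columns identify the holder of each deleted file
        sudo /etc/init.d/rsyslog restart      # restart whichever daemon lsof shows (rsyslog, syslogd, postfix, ...)
        # Or truncate the still-open file in place, which frees the space immediately:
        # sudo sh -c ': > /proc/<PID>/fd/<FD>'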

    Read the article

  • Error while taking Transaction log backup

    - by Divya Kapoor
    Hello, I have scheduled a transaction log backup, but the backup is not happening. The error in the logs is this: Transaction Log Backup.Subplan_1,Error,0,ARCOTDB1\ARCOT_DB_INST1,Transaction Log Backup.Subplan_1,(Job outcome),,The job failed. Unable to determine if the owner (ARCOT-DB1\Superuser) of job Transaction Log Backup.Subplan_1 has server access (reason: Could not obtain information about Windows NT group/user 'ARCOT-DB1\Superuser', error code 0x534. [SQLSTATE 42000] (Error 15404)) Please help!

    Read the article

  • Script to gather all the files ending in .log and create a tar.gz file.

    - by Oscar Reyes
    I'm currently using this script line to find all the log files in a given directory structure and copy them to another directory where I can easily compress them. find . -name "*.log" -exec cp \{\} /tmp/allLogs/ \; The problem I have is that the directory/subdirectory information gets lost, because I'm copying only the files. For instance I have: ./product/install/install.log ./product/execution/daily.log ./other/conf/blah.log And I end up with: /tmp/allLogs/install.log /tmp/allLogs/daily.log /tmp/allLogs/blah.log And I would like to have: /tmp/allLogs/product/install/install.log /tmp/allLogs/product/execution/daily.log /tmp/allLogs/other/conf/blah.log
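
    Two common ways to keep the directory structure, both relying on standard GNU tools:

        # cp --parents recreates the source path under the target directory:
        find . -name '*.log' -exec cp --parents {} /tmp/allLogs/ \;

        # rsync does the same; -m (--prune-empty-dirs) skips directories with no .log files:
        rsync -am --include='*/' --include='*.log' --exclude='*' . /tmp/allLogs/

    Either way, the tree under /tmp/allLogs can then be packed with tar czf logs.tar.gz -C /tmp allLogs.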

    Read the article

  • How does Windows 7 determine that a system was not shut down correctly (Kernel-Power Event ID 41)

    - by Erik
    I have a strange situation in which Windows 7 thinks it was not shut down correctly and gives me a warning in the system event log like this: http://support.microsoft.com/kb/2028504/de What the KB article does not explain is how Windows actually determines that situation. Does it parse its own system event log after reboot? Or where does it get that information from? I am currently investigating an issue where I believe the system fails to write the system event log correctly (it stops having entries, although other logs like the application event log still have entries), and after a reboot, Windows thinks it was not shut down correctly. Does anybody have any experience with this? And can you confirm that Windows determines whether the previous system shutdown was correct by parsing its own system event log on startup?

    Read the article

  • SQL Server 2014 – delayed transaction durability

    - by Michael Zilberstein
    As I’m downloading SQL Server 2014 CTP2 at this very moment, I’ve noticed new fascinating feature that hadn’t been announced in CTP1 : delayed transaction durability . It means that if your system is heavy on writes and on another hand you can tolerate data loss on some rare occasions – you can consider declaring transaction as DELAYED_DURABILITY = ON . In this case transaction would be committed when log is written to some buffer in memory – not to disk as usual. This way transactions can become...(read more)

    Read the article
