Search Results

Search found 4489 results on 180 pages for 'logging'.

Page 20/180 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • How do you enable syslogd to accept incoming connections on Snow Leopard from remote loggers?

    - by Emmel
    How do I get syslogd to accept incoming connections from remote hosts on Snow Leopard? I'd like to centralize logging such that various devices and systems send logs to Snow Leopard's syslogd, which normally hangs out on UDP 514. However, I'm unable to get them accepted by good ole syslogd. I tcpdumped on the Snow Leopard box to verify that packets are being spouted to port 514 -- they are. I checked whether syslogd is listening on 514 -- it's not. Googling around told me that, on older versions of OS X (don't you love the way things change so rapidly on OS X), one just had to add a flag to the syslogd daemon to allow remote logging; one did this in com.apple.syslogd.plist. However, the syslogd daemon has no flags (at least in its man page) that suggest any remote anything. What's the solution to this? Secondary, less important but relevant question: what's 'newsyslog'? I see a plist file, but it's not running (apparently). Thanks

    Read the article

  • Simple recursive DNS resolver for debugging (app or VM)

    - by notpeter
    I have an issue which I believe is caused by incorrect DNS queries (doubled subdomains like _record.host.subdomain.tld.subdomain.tld) when querying for SRV records. So I need an alternate DNS server with heavy logging, so I can see every query (especially the stupid ones), acting as a recursive resolver with the ability to create records which override real DNS records, so I can not only find the records it's (wrongly) looking for, but populate those records as well. I know I could install a DNS server on yet another Linux box, but I feel like this is the sort of thing for which someone may already have set up a simple Python script or single-use VM just for this purpose.

    Read the article

  • Smarter System Alerts

    - by mellowsoon
    We have a pretty simple system setup where I get text messages when there is a system problem. It's nothing fancy: I send an email to my phone number from within my logging class for alert-level events. It works well enough, but it has one major flaw: a small hiccup in the system/site can turn into dozens of rapid-fire text messages, sometimes non-stop until I log into the system and fix the problem. So I'm looking for pointers on software or services that deal with alerts in a smarter way, perhaps something that only sends me alerts X number of times within Y number of minutes. I'm not looking for a full monitoring suite; we already handle that in house. I'm only looking to tackle this single problem.
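
    The "at most X alerts within Y minutes" behaviour is easy to bolt onto an existing logging class. Below is a minimal, hedged sketch in Python (the asker's language isn't stated); the class name, SMTP host and addresses are made-up placeholders, and suppressed alerts are simply counted and mentioned in the next message that does go out:

      import smtplib, time
      from collections import deque
      from email.message import EmailMessage

      class ThrottledAlerter:
          """Send at most max_alerts emails per window seconds; count and drop the rest."""

          def __init__(self, smtp_host, sender, recipient, max_alerts=5, window=600):
              self.smtp_host, self.sender, self.recipient = smtp_host, sender, recipient
              self.max_alerts, self.window = max_alerts, window
              self.sent_times = deque()   # timestamps of alerts that actually went out
              self.suppressed = 0         # alerts swallowed since the last delivered one

          def alert(self, subject, body):
              now = time.time()
              while self.sent_times and now - self.sent_times[0] > self.window:
                  self.sent_times.popleft()          # forget sends outside the window
              if len(self.sent_times) >= self.max_alerts:
                  self.suppressed += 1               # over the limit: swallow this alert
                  return
              if self.suppressed:
                  body += "\n(%d similar alerts were suppressed)" % self.suppressed
                  self.suppressed = 0
              msg = EmailMessage()
              msg["From"], msg["To"], msg["Subject"] = self.sender, self.recipient, subject
              msg.set_content(body)
              with smtplib.SMTP(self.smtp_host) as smtp:
                  smtp.send_message(msg)
              self.sent_times.append(now)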

    Read the article

  • Apache SSL Log Incomplete SSL Handshake

    - by Raymond Berg
    Scenario: we're running some experiments in our classroom around trusted connections and SSL, and I want to demonstrate the SSL handshake during a man-in-the-middle attack. I have an Apache server with a self-signed cert. Everything works fine, but the logging seems incomplete, as there is no way to get a list of SSL attempts. Once the client accepts the 'exception', I get normal access-log messages for every request; however, I need to know which SSL request caused the failure. Here are my log directives:
      LogLevel warn
      ErrorLog logs/ssl_error_log
      CustomLog logs/ssl_access_log combined
      # the 'combined' is your average custom log
    What I want is a list of every SSL handshake attempted. What am I missing that could produce something like the following? (Obviously the exact words aren't needed, just something in the ballpark.)
      0/0/0 00:00:00 - 192.168.1.10 - hijk.lmnop.edu - SSL Mismatch

    Read the article

  • Log centralization, display, transport and aggregation at scale v2

    - by Eric DANNIELOU
    This is a duplicate of Log transport and aggregation at scale and http://stackoverflow.com/questions/1737693/whats-the-best-practice-for-centralised-logging, but the answers might differ now: the software described in 2009 may have changed since then (for example Octopussy evolved from version 0.9 to 1.0.5), rsyslog has become the default on most Linux distros, and requirements have changed (security, software configuration management, ...). I'd like to ask the following questions: How do you centralize, display and archive system logs? How would you do it now if you had to start over? Most Linux distros use rsyslog nowadays, which can provide reliable log transport, but some older Unices, network devices and maybe Windows boxes still use the old UDP RFC-style transport. How did you manage to get reliable transport? Storing logs for a few months can represent a huge amount of disk space. How do you store them? An RDBMS? Compressed and encrypted text files?

    Read the article

  • Monitor the shell activity of a user on your Unix system?

    - by Joseph Turian
    Trust, but verify. Let's say I want to hire someone as a sysadmin and give them root access to my Unix system. I want to disable X Windows for them and only allow shell usage (through SSH, maybe), so that all operations they perform will be through the shell (not mouse operations). I need a tool that will log all commands they issue to a remote server, as they issue them. So even if they install a back door and cover their tracks, the commands will still have been logged remotely. How do I disable everything but shell access? Is there a tool for instantaneously and remotely logging commands as they are issued?

    Read the article

  • Hacking prevention, forensics, auditing and countermeasures

    - by tmow
    Recently (but it is also a recurrent question) we saw three interesting threads about hacking and security: "My server's been hacked EMERGENCY", "Finding how a hacked server was hacked" and "File permissions question". The last one isn't directly related, but it highlights how easy it is to mess up web server administration. As there are several things that can be done before something bad happens, I'd like to have your suggestions in terms of good practices to limit the side effects of an attack and how to react in the sad case that one does happen. It's not just a matter of securing the server and the code, but also of auditing, logging and countermeasures. Do you have a good-practices list, or do you prefer to rely on software, or on experts that continuously analyze your web server(s) (or on nothing at all)? If yes, can you share your list and your ideas/opinions?

    Read the article

  • How do I log file system read/writes by filename in Linux?

    - by Casey
    I'm looking for a simple method that will log file system operations. It should display the name of the file being accessed or modified. I'm familiar with powertop, and it appears this works to an extent, insofar as it shows the user files that were written to. Are there any other utilities that support this feature? Some of my findings:
      powertop: best for write-access logging, but more focused on CPU activity
      iotop: shows real-time disk access by process, but not the file name
      lsof: shows the open files per process, but not real-time file access
      iostat: shows the real-time I/O performance of disks/arrays, but does not indicate file or process
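
    For illustration only, a crude polling sketch in Python that reports which files under a directory changed between scans. It is nothing like the kernel-level tracing (inotify, audit and friends) this question is really after, and the watched path and interval are arbitrary placeholders:

      import os, time

      def snapshot(root):
          """Map each file path under root to its last-modification time."""
          mtimes = {}
          for dirpath, _dirnames, filenames in os.walk(root):
              for name in filenames:
                  path = os.path.join(dirpath, name)
                  try:
                      mtimes[path] = os.stat(path).st_mtime
                  except OSError:
                      pass  # file vanished between listing and stat
          return mtimes

      def watch(root, interval=2):
          """Print created/modified/deleted file names by comparing snapshots."""
          before = snapshot(root)
          while True:
              time.sleep(interval)
              after = snapshot(root)
              for path in after.keys() - before.keys():
                  print("created:", path)
              for path in before.keys() - after.keys():
                  print("deleted:", path)
              for path in after.keys() & before.keys():
                  if after[path] != before[path]:
                      print("modified:", path)
              before = after

      if __name__ == "__main__":
          watch("/var/www")   # placeholder directory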

    Read the article

  • Web Server Users - Best Practice

    - by Toby
    I was wondering what is considered best practice when several developers/administrators require access to the same web server. Should there be one non-root user, with a secure username and password unique to the web server, which everyone logs in as, or should there be a username for each person? I am leaning towards a username for each person to aid in logging etc., but then should each person keep the same credentials across several servers, or should at least their password change depending on the server they are on? Should any non-root user of the system be added to the sudoers file, or is it best practice to leave everyone off it and only let root perform certain tasks? Any help would be greatly appreciated.

    Read the article

  • Filtering Client IP from Access Log for Urchin

    - by Ram Prasad
    I have some Apache logs to process, and since the webserver is behind two levels of reverse proxies, I am getting two IPs in the X-Forwarded-For header, for example:
      208.34.234.55, 127.0.0.1 - - [29/Oct/2009:21:38:13 -0500] "GET /monkey.html HTTP/1.0" 200 20845 0 0 "http://www.monkey.com/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.15) Gecko/2009101601 Firefox/3.0.15 (.NET CLR 3.5.30729)"
    Now, how do I filter this in Urchin (or remove it in the Apache logging) so that 127.0.0.1 is excluded from processing? Currently Urchin is not able to recognize the multiple IP addresses, so it does not log the remote IP.
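
    One hedged approach is to pre-process the log before Urchin sees it, keeping only the first (client) address of the comma-separated list, on the assumption that the proxies always append their own addresses after it. A minimal Python sketch; the script and file names are placeholders:

      import re, sys

      # Matches the leading comma-separated IP list of a combined-format log line,
      # e.g. "208.34.234.55, 127.0.0.1 - - [...]" -> keep only the first address.
      XFF_PREFIX = re.compile(r'^(\d{1,3}(?:\.\d{1,3}){3})(?:,\s*\d{1,3}(?:\.\d{1,3}){3})+\s')

      def strip_proxy_ips(line):
          return XFF_PREFIX.sub(r'\1 ', line)

      if __name__ == "__main__":
          # Usage sketch: python strip_xff.py < access_log > access_log.clean
          for line in sys.stdin:
              sys.stdout.write(strip_proxy_ips(line))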

    Read the article

  • Preventing logrotate's dateext from overwriting files

    - by Thirler
    I'm working with a system where I would like to use the dateext function of logrotate (or some other mechanism) to add the date to a logfile when it is rotated. However, on this system it is important that no logging goes missing, and dateext will overwrite any existing files (which will happen if logrotate is called twice in one day). Is there a reliable way to prevent dateext from overwriting existing files and instead make another file? It is acceptable that either no rotation happens or a file is created with a less predictable name (the date with an extra number, or the time, or something).

    Read the article

  • Nothing is written in php5-fpm.log

    - by jaypabs
    I have two servers, one running Ubuntu 12.04 and one running Ubuntu 14.04. On the new Ubuntu 14.04 server I enabled the php-fpm log file in /etc/php5/fpm/php-fpm.conf, which reads as follows:
      error_log = /var/log/php5-fpm.log
    I noticed that most of what gets logged on Ubuntu 12.04 is not written on 14.04. For example, if I restart php5-fpm on Ubuntu 12.04, a restart entry is logged; this does not happen on 14.04. Other entries I am missing on 14.04 are the following:
      [23-Aug-2014 16:23:03] NOTICE: [pool web42] child 118098 exited with code 0 after 12983.480191 seconds from start
      [23-Aug-2014 16:23:03] NOTICE: [pool web42] child 147653 started
      [23-Aug-2014 17:27:31] WARNING: [pool web8] child 76743, script '/var/www/mysite.com/web/wp-comments-post.php' (request: "POST /wp-comments-post.php") executing too slow (12.923022 sec), logging
    I really want this kind of log so I will know how long a slow script has been executing. Does anyone know if there are other settings in Ubuntu 14.04 that I need to change in addition to /etc/php5/fpm/php-fpm.conf?

    Read the article

  • Iptables: how do I LOG what's not being ACCEPTED and limit what gets logged?

    - by Kris
    How do I log what's not being accepted by the following rule?
      iptables -A OUTPUT -p icmp --icmp-type 3 -m limit --limit 10/minute -j ACCEPT
    And how do I limit what's being logged, because I don't want to log thousands of pings? My first thought was:
      iptables -A OUTPUT -p icmp --icmp-type 3 -m limit --limit 50/day -j LOG
      iptables -A OUTPUT -p icmp --icmp-type 3 -m limit --limit 10/minute -j ACCEPT
    But that doesn't seem right to me. I think this limits the logging to 50/day, but not necessarily to what is not being accepted -- or am I wrong?

    Read the article

  • How to filter Varnish logs based on XID?

    - by Martijn Heemels
    I'm running into infrequent 503 errors which are proving hard to pinpoint. varnishlog is driving me mad, since I can't seem to get the information I want out of it. I'd like to see both the client- and backend-side communication as seen by Varnish. I thought the XID number, which is logged on Varnish's default error page, would let me filter the exact request out of the logging buffer. However, no combination of varnishlog parameters gives me the output I need. The following only shows the client-side communication:
      varnishlog -d -c -m ReqStart:1427305652
    while this only shows the resulting backend communication:
      varnishlog -d -b -m TxHeader:1427305652
    Is there a one-liner to show the entire request?

    Read the article

  • "Punch card" application

    - by icelava
    Our main-con manager is looking for a "punch card" type of attendance-logging application. We need to take attendance every day, and the most "automatic" method is simply to track when people unlock their Windows desktop screens (not when they log on, because many simply leave the computer on indefinitely) and report that to a remote location/repository, where the administrator can see which users unlocked their screens each day. Has anybody come across such an application suite? It likely has to be a Windows service so that it operates regardless of who is logged into the system.

    Read the article

  • Linux: Tool to monitor every process and executed command; in short, to monitor what's happening at the moment

    - by Bevor
    Hello, due to a freeze problem with my Ubuntu 10.10 (it is not isolatable), I thought about logging everything the kernel executes to some file, so that the next time a freeze occurs I can see what happened last and not lose valuable information. I found acct, but this is obviously not what I'm looking for: it just logs user commands and that sort of thing. I need something which logs at a much "deeper" level. The best would be some kind of script which records every interrupt. Does anybody know of a tool like that?

    Read the article

  • Postfix - searching emails (Logstash, Graylog or other solution)

    - by Yarik Dot
    We currently have ~100 servers, all of which use remote syslog, so we have aggregated all logs on one server. The question our support team asks most is: has an email from .... to ... been delivered? I'd like to give our support team access to some logging tool, along with a guide for searching the logs. What would you recommend? Or do you know any other alternatives worth testing? The problem with grepping the logs is that the sender and recipient address are not on the same line, so I suppose there might be some way to aggregate by mail ID.
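
    A rough sketch of that aggregation idea in Python, assuming the usual Postfix mail.log layout in which every line carries the queue ID after the daemon name (e.g. "postfix/qmgr[123]: 4F1A2B3C4D: from=<a@b>"); the regex, file name and search term are illustrative, not a finished tool:

      import re
      import sys
      from collections import defaultdict

      # Typical Postfix syslog line: "... postfix/smtp[2211]: 4F1A2B3C4D: to=<x@y>, ... status=sent ..."
      QUEUE_ID = re.compile(r'postfix/\S+\[\d+\]: ([0-9A-F]{6,}): (.*)')

      def index_by_queue_id(lines):
          """Group every log line under its Postfix queue ID."""
          messages = defaultdict(list)
          for line in lines:
              m = QUEUE_ID.search(line)
              if m:
                  messages[m.group(1)].append(m.group(2).rstrip())
          return messages

      if __name__ == "__main__":
          # Usage sketch: python mail_search.py /var/log/mail.log someone@example.com
          logfile, needle = sys.argv[1], sys.argv[2]
          with open(logfile, errors="replace") as fh:
              messages = index_by_queue_id(fh)
          for qid, events in messages.items():
              if any(needle in ev for ev in events):   # the address appears as sender or recipient
                  print(qid)
                  for ev in events:
                      print("   ", ev)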

    Read the article

  • How to log PHP errors in a separate file?

    - by Question Overflow
    I just did an upgrade of my server to Fedora 17 and merged some configuration files containing .rpmnew into the existing ones. I had been successfully logging my PHP errors to a separate log file by keeping the following in php.ini:
      log_errors = On
      error_log = /var/log/php-errors.log
    I am not sure why the errors are being logged to /var/log/httpd/error_log after the upgrade despite keeping the settings above. Also,
      $ ls -l /var/log/php-errors.log
      -rwxrwxr--. 1 apache myself 232 Dec 13 16:49 /var/log/php-errors.log
    shows that apache does own the PHP error log file. What could be causing PHP errors to be logged to the Apache error log file?

    Read the article

  • log4j additivity, category logging level and appender threshold

    - by GBa
    I'm having trouble understanding the relation between additivity, category logging level and appender threshold... here's the scenario (my log4j.properties file):
      log4j.category.GeneralPurpose.classTypes=INFO, webAppLogger
      log4j.additivity.GeneralPurpose.classTypes=true
      log4j.category.GeneralPurpose=ERROR, defaultLogger
      log4j.additivity.GeneralPurpose=false
      log4j.appender.webAppLogger=org.apache.log4j.RollingFileAppender
      log4j.appender.webAppLogger.File=webapps/someWebApp/logs/webApp.log
      log4j.appender.webAppLogger.MaxFileSize=3000KB
      log4j.appender.webAppLogger.MaxBackupIndex=10
      log4j.appender.webAppLogger.layout=org.apache.log4j.PatternLayout
      log4j.appender.webAppLogger.layout.ConversionPattern=%d [%t] (%F:%L) %-5p - %m%n
      log4j.appender.webAppLogger.Encoding=UTF-8
      log4j.appender.defaultLogger=org.apache.log4j.RollingFileAppender
      log4j.appender.defaultLogger.File=logs/server.log
      log4j.appender.defaultLogger.MaxFileSize=3000KB
      log4j.appender.defaultLogger.MaxBackupIndex=10
      log4j.appender.defaultLogger.layout=org.apache.log4j.PatternLayout
      log4j.appender.defaultLogger.layout.ConversionPattern=%d [%t] (%F:%L) %-5p - %m%n
      log4j.appender.defaultLogger.Encoding=UTF-8
    Insights:
      category GeneralPurpose.classTypes is INFO
      category GeneralPurpose.classTypes has additivity TRUE
      category GeneralPurpose is ERROR
      category GeneralPurpose has additivity FALSE
    With this configuration I would have assumed that INFO messages sent to the GeneralPurpose.classTypes.* categories would be logged only to webAppLogger, since the parent logger (category) is set to the ERROR level. However, this is not the case: the message is logged twice (once in each log file). It looks like the ERROR level of the parent category is not taken into consideration when the event is passed on through additivity... is my observation correct, or am I missing something? How should I alter the configuration in order to get only ERROR-level logging in server.log? thanks, GBa.

    Read the article

  • Logging class using delegates (NullReferenceException)

    - by Leroy Jenkins
    I have created a small application, but I would now like to incorporate some type of logging that can be viewed via a listbox. The source of the data can be sent from any number of places. I have created a new logging class that will be passed a delegate. I think I'm close to a solution, but I'm receiving a NullReferenceException and I don't know the proper fix. Here is an example of what I'm trying to do:
    Class1, where the inbound streaming data is received:
      class myClass
      {
          OtherClass otherClass = new OtherClass();
          otherClass.SendSomeText(myString);
      }
    Logging class:
      class OtherClass
      {
          public delegate void TextToBox(string s);
          TextToBox textToBox;

          public OtherClass()
          {
          }

          public OtherClass(TextToBox ttb)
          {
              textToBox = ttb;
          }

          public void SendSomeText(string foo)
          {
              textToBox(foo);
          }
      }
    The form:
      public partial class MainForm : Form
      {
          OtherClass otherClass;

          public MainForm()
          {
              InitializeComponent();
              otherClass = new OtherClass(this.TextToBox);
          }

          public void TextToBox(string pString)
          {
              listBox1.Items.Add(pString);
          }
      }
    Whenever I receive data in myClass, it's throwing an error. Any help you could give would be appreciated.

    Read the article

  • Asterisk Outgoing CDR Logging To MySQL

    - by user3295551
    I'm trying to utilize CDR logging (to MySQL) with custom fields. The problem I am facing occurs only when an outbound call is placed; during inbound calls I can log the custom field with no problem. The reason I am having an issue is that the custom CDR field I need is a unique value for each user on the system.
    sip.conf
      ...
      ...
      [sales_department](!)
      type=friend
      host=dynamic
      context=SalesAgents
      disallow=all
      allow=ulaw
      allow=alaw
      qualify=yes
      qualifyfreq=30
      ;; company sales agents:
      [11](sales_agent)
      secret=xxxxxx
      callerid="<...>"
      [12](sales_agent)
      secret=xxxxxx
      callerid="<...>"
      [13](sales_agent)
      secret=xxxxxx
      callerid="<...>"
      [14](sales_agent)
      secret=xxxxxx
      callerid="<...>"
    extensions.conf
      [SalesAgents]
      include => Services
      ; Outbound calls
      exten => _1NXXNXXXXXX,1,Dial(SIP/${EXTEN}@myprovider)
      ; Inbound calls
      exten => 100,1,NoOp()
       same => n,Set(CDR(agent_id)=11)
       same => n,CELGenUserEvent(Custom Event)
       same => n,Dial(${11_1},25)
       same => n,GotoIf($["${DIALSTATUS}" = "BUSY"]?busy:unavail)
       same => n(unavail),VoiceMail(11@asterisk)
       same => n,Hangup()
       same => n(busy),VoiceMail(11@asterisk)
       same => n,Hangup()
      exten => 101,1,NoOp()
       same => n,Set(CDR(agent_id)=12)
       same => n,CELGenUserEvent(Custom Event)
       same => n,Dial(${12_1},25)
       same => n,GotoIf($["${DIALSTATUS}" = "BUSY"]?busy:unavail)
       same => n(unavail),VoiceMail(12@asterisk)
       same => n,Hangup()
       same => n(busy),VoiceMail(12@asterisk)
       same => n,Hangup()
      ...
      ...
    For the inbound section of the dialplan above I am able to insert the custom CDR field (agent_id). But as you can see, for the outbound section of the dialplan I am stumped on how to tell the dialplan which agent_id is making the outbound call. My question: how do I take agent_id=11, agent_id=12, agent_id=13, agent_id=14 and so on, and use that as a custom CDR field on outbound calls?

    Read the article

  • Logback to log different messages to two files

    - by Aly
    I am using Logback/SLF4J to do my logging. I want to parse my log file to analyze some data, so instead of parsing one great big file (mostly consisting of debug statements) I want to have two logger instances, each logging to a separate file: one for analytics and one for all-purpose logging. Does anyone know if this is possible with Logback, or with any other logger for that matter?
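
    Separate loggers bound to separate appenders (with additivity turned off so events don't also flow to the root logger's file) is the usual way to do this in Logback. Since the asker is open to any logger, here is the same idea as a minimal sketch using Python's standard logging module; the logger names and file paths are placeholders:

      import logging

      def build_logger(name, path, level=logging.INFO):
          """Create a named logger that writes to its own file and nowhere else."""
          logger = logging.getLogger(name)
          logger.setLevel(level)
          logger.propagate = False          # don't also bubble events up to the root logger
          handler = logging.FileHandler(path)
          handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
          logger.addHandler(handler)
          return logger

      app_log = build_logger("app", "application.log", logging.DEBUG)
      analytics_log = build_logger("analytics", "analytics.log")

      app_log.debug("chatty debug detail")        # ends up only in application.log
      analytics_log.info("user=42 action=click")  # ends up only in analytics.log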

    Read the article
