Search Results

Search found 19881 results on 796 pages for 'log analysis'.

Page 6/796 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Static analysis tool customization for any language

    - by Sam
    Hi, we are using a tool in our project that has its own language, similar to Java. I am looking for a static analysis tool that can be applied to this language. Are there any static analysis tools that can be customized for arbitrary languages? Or is there any documentation or reference on how to develop a static analysis tool for our own language? Thanks.

    Read the article

  • Static analysis framework for eclipse?

    - by autobiographer
    I wanted to use Eclipse TPTP, a framework for static code analysis, but support for code analysis ended with TPTP 4.5.0. 1. It seems that this version cannot be integrated into the current Eclipse Galileo. Am I right? 2. Which language-independent static analysis framework for Eclipse would you use as an alternative to TPTP that works with Eclipse Galileo?

    Read the article

  • Analysis and Design for Functional Programming

    - by edalorzo
    How do you deal with the analysis and design phases when you plan to develop a system using a functional programming language like Haskell? My background is in imperative/object-oriented programming languages, so I am used to use case analysis and UML to document the design of a program. But UML is inherently tied to the object-oriented way of doing software, and I am intrigued about what would be the best way to develop documentation and define software designs for a system that is going to be developed using functional programming. Would you still use use case analysis, or perhaps structured analysis and design instead? How do software architects define the high-level design of the system so that developers follow it? What do you show to your clients or to new developers when you are supposed to present a design of the solution? How do you document a picture of the whole thing without having first to write it all? Is there anything comparable to UML in the functional world?

    Read the article

  • Cannot write log file 'ffmpeg2pass-0.log' for pass-1 encoding: Permission denied

    - by matt_tm
    Our PHP application is installed as 'root' on a RedHat 5/CentOS system at /var/www/html/beta/. After disabling SELinux in order to allow these scripts to execute other programs on the system - http://serverfault.com/questions/192951/what-permissions-are-needed-to-run-a-system-command-within-a-php-script-that-wr - I ran into the following error in the Apache error_log: Cannot write log file 'ffmpeg2pass-0.log' for pass-1 encoding: Permission denied. (A possible workaround is sketched below.)

    Read the article
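
    A possible workaround, assuming the error comes from ffmpeg's two-pass encoding: ffmpeg writes ffmpeg2pass-0.log into its current working directory by default, which the Apache user frequently cannot write to. The paths below are only illustrative; the relevant part is the -passlogfile prefix, which redirects the pass log to a writable location (alternatively, chdir() to a writable directory in the PHP script before invoking ffmpeg).

        # pass 1: send the two-pass log to a directory www-data/apache can write to
        ffmpeg -y -i input.mp4 -pass 1 -passlogfile /tmp/ffmpeg2pass -an -f null /dev/null
        # pass 2: read back the same pass-log prefix
        ffmpeg -i input.mp4 -pass 2 -passlogfile /tmp/ffmpeg2pass output.mp4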

  • Boost.Log - Multiple processes to one log file?

    - by Kevin
    The Boost.Log documentation explains pretty well how to "fan out" into multiple files/sinks from one application, and how to get multiple threads working together to log to one place, but is there any documentation on how to get multiple processes logging to a single log file? What I imagine is that every process would log to its own "private" log file, but in addition, any messages above a certain severity would also go to a "common" log file. Is this possible with Boost.Log? Is there some configuration of the sinks that makes this easy? I understand that I will likely have the same "timestamp out of order" problem described in the FAQ here, but that's OK; as long as the timestamps are correct I can work with that. This is all on one machine, so no remote filesystem problems either. (A rough two-sink sketch follows below.)

    Read the article
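
    A rough sketch of the two-sink idea, assuming each process opens a private file named after its PID plus a shared file in append mode that only receives warnings and above. File names and the severity threshold are arbitrary; note that Boost.Log does not coordinate writes between processes, so records from different processes may interleave in the common file.

        #include <boost/log/trivial.hpp>
        #include <boost/log/expressions.hpp>
        #include <boost/log/utility/setup/file.hpp>
        #include <boost/log/utility/setup/common_attributes.hpp>
        #include <ios>
        #include <string>
        #include <unistd.h>

        namespace logging  = boost::log;
        namespace keywords = boost::log::keywords;

        int main()
        {
            // Private per-process log: embed the PID in the file name so
            // processes never share a file.
            logging::add_file_log(
                keywords::file_name = "private_" + std::to_string(getpid()) + ".log",
                keywords::auto_flush = true);

            // Shared log: opened in append mode; a per-sink filter lets only
            // warnings and above through.
            auto common = logging::add_file_log(
                keywords::file_name = "common.log",
                keywords::open_mode = std::ios_base::app,
                keywords::auto_flush = true);
            common->set_filter(
                logging::trivial::severity >= logging::trivial::warning);

            logging::add_common_attributes();

            BOOST_LOG_TRIVIAL(info)    << "private log only";
            BOOST_LOG_TRIVIAL(warning) << "private and common logs";
            return 0;
        }

    (Typically built with something like -DBOOST_LOG_DYN_LINK -lboost_log_setup -lboost_log -lboost_thread -lpthread.)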

  • How do I show a log analysis in Splunk?

    - by Vinod K
    I have made my Ubuntu server a centralized log server. Splunk is installed in the /opt directory of the server, and one of the other machines is sending its logs to it. In the Splunk interface I have added a network input on UDP port 514, and I have also added /var/log under the "file and directory" input. The client has been configured properly as well. How do I actually display an analysis of the logs?

    Read the article

  • Excel techniques for perfmon csv log file analysis

    - by Aszurom
    I have perfmon running against several servers, outputting to .csv files data like CPU %time, memory bytes free, and hard disk I/O metrics such as s/write and writes/s. The collectors on the SQL servers also gather SQL stats, and the web servers collect .NET-relevant counters. I am aware of PAL and actually used it as a template for what data to capture based on server type. I just don't think the output it generates is detailed or flexible enough - but it does a pretty remarkable job of parsing logs and making graphs. I'm borderline incompetent with Excel, so I'm hoping to be directed to some knowledge of how to take a perfmon output .csv and mine it in Excel to produce numbers that are meaningful to me as a sysadmin. I could of course just pick a range of data, assemble a graph out of it, and look for spikes and trends, but I'm convinced there is some technique that makes this more manageable than staring at a monstrous spreadsheet of numbers and trying to graph it. Plus, that is pretty time consuming and not something I can do as a "take a glance at the servers" sort of routine. I'm also graphing CPU, disk use, network b/sec, etc. in Cacti, which is nice for seeing big trends; the problem is that it uses 5-minute averages, so a server could have an intermittent problem that washes out in a 5-minute average. What do you do with perfmon data that I could learn from?

    Read the article

  • Log Shipping breaking daily on SQL Server 2005

    - by IT2
    I am facing a somewhat serious problem with log shipping on SQL Server 2005 and am having trouble correcting it, so I will ask SF's experts for help. I have a Windows 2003 server (PROD) that ships transaction log backups to two other servers: STAND1, a Windows 2003 server with SQL Server 2005, and STAND2, a Windows 2008 R2 server with SQL Server 2005. The problem is that log shipping to STAND2 breaks for roughly 90 minutes at certain times of the day and then recovers without intervention. The breaks occur at times when the backup file is larger (after reindexing, etc.). I can see the message below logged on the COPY job: *** Error: The specified network name is no longer available. The copy agent used to break dozens of times a day, and only to the STAND2 server; after the changes below it "only" breaks about twice a day: the frequency of the backup job was changed from 5 minutes to 10 minutes; instead of backing up the 4 databases to the same folder, the log backups are now saved in separate folders for each database; the backup job no longer runs 24 hours a day, only for the 14 hours a day when people are working on the database; and I configured the SQL Server instances on the three servers to limit their memory, leaving more memory to the OS. Now I don't know what to do. Any help will be much appreciated! Thanks!

    Read the article

  • apache log analysis tools for multiple virtual hosts?

    - by shreddd
    I am interested in getting a side-by-side usage comparison of all the virtual hosts served by my Apache server. In the simplest case, I want a list (or bar chart) with each virtual host and the number of requests/traffic on that site. I've been playing around with Webalizer and AWStats, but I haven't been able to compare multiple virtual hosts in the same infographic. Does anyone have suggestions on tools for doing this (or on how I might use the above tools to do so)? (One possible logging setup is sketched below.)

    Read the article
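
    One common prerequisite for this kind of comparison, sketched here as an assumption rather than as the specific answer: log every virtual host into a single combined file with the host name (%v) as the first field, so a report tool can split or group requests per site. The file path and format name are arbitrary.

        # Combined log format with the virtual host name (%v) prepended.
        LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        # One shared access log for all virtual hosts; defined at server level so
        # every vhost without its own CustomLog falls back to it.
        CustomLog /var/log/apache2/all-vhosts.log vhost_combined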

  • Reading log files from web application

    - by Egorinsk
    I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Assume that log files can be really large, like hundreds of megabytes. I have some ideas: write a shell script that is run via sudo and tails the last 512 KB of the log into a separate file the application can read - ineffective, because it forks a new process and the data has to be read twice; add www-data to the adm group (which can read the logs) - insecure; start a PHP process via cron every minute to read the logs - not great, because it doesn't allow real-time monitoring, and the script runs even when I'm not reading logs and consumes CPU time (the server is in the cloud, so I'll have to pay for it); create hard links to all the log files with lowered permissions - I guess that won't work because logrotate can recreate the log files, changing their inode numbers; or start a separate nginx/Apache server under a privileged user that may read the logs. Does anyone have a better solution?

    Read the article

  • Successful su for user by root in /var/log/auth.log

    - by grs
    I have these sorts of entries in my /var/log/auth.log:
        Apr 3 12:32:23 machine_name su[1521]: Successful su for user1 by root
        Apr 3 12:32:23 machine_name su[1654]: Successful su for user2 by root
        Apr 3 12:32:24 machine_name su[1772]: Successful su for user3 by root
    Situation: all users are real accounts in /etc/passwd; none of the users has their own crontab; all of those users logged in to the machine some time ago via SSH or NoMachine (the time varies from a few minutes to a few hours); no cron jobs are scheduled to run at that time, and anacron is removed. I can see similar entries on other days and at other times. The common factor is that the users are logged in when the entries appear. They do not appear during login, but some time afterwards. This machine has a similar setup to a few others, but it is the only one where I see these entries. What causes them? Thanks

    Read the article

  • Apache SSL Log Incomplete SSL Handshake

    - by Raymond Berg
    Scenario: we're running some experiments in our classroom around trusted connections and SSL, and I want to demonstrate the SSL handshake during a man-in-the-middle attack. I have an Apache server with a self-signed cert. Everything works fine, but the logging seems incomplete, as there is no way to get a list of SSL handshake attempts. Once the client accepts the 'exception', I get normal access log messages for every request; however, I need to know which SSL request caused it to fail. Here are my log directives:
        LogLevel warn
        ErrorLog logs/ssl_error_log
        CustomLog logs/ssl_access_log combined   # the combined is your average custom log
    What I want is a list of every SSL handshake attempted. What am I missing that could produce something like the following (the exact wording isn't needed, just something in the ballpark)?
        0/0/0 00:00:00 - 192.168.1.10 - hijk.lmnop.edu - SSL Mismatch
    (A possible configuration sketch follows below.)

    Read the article
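
    A possible configuration sketch, assuming mod_ssl is in use: raise the error-log verbosity for mod_ssl so handshake attempts and failures are recorded, and log the negotiated SSL variables for requests that do complete. The paths mirror the ones in the question; the per-module LogLevel syntax requires Apache 2.4.

        # Handshake activity (including aborted handshakes) appears in the error
        # log once mod_ssl logs at a more verbose level.
        LogLevel warn ssl:info
        ErrorLog logs/ssl_error_log

        # For connections that complete, record the negotiated protocol and cipher.
        CustomLog logs/ssl_access_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"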

  • Permissions needed to read event log messages remotely?

    - by Neolisk
    When running under a limited account, local event log messages display fine, but for a remote computer I am getting this error: The description for Event ID ( xxxxx ) in Source ( yyyyy ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer. You may be able to use the /AUXSOURCE= flag to retrieve this description; see Help and Support for details. The following information is part of the event: zzzzz. The same remote computer works fine under a domain administrator account. I am currently experimenting with just the Event Viewer, using Run As; the original issue is a PowerShell script that calls Get-EventLog. Are there any special permissions that need to be in place to be able to read event log messages remotely? Supposedly there is a simple solution in Windows 2008 and higher, i.e. just add the user to the Event Log Readers group. Is there anything like that for Windows 2003?

    Read the article

  • Windows 2008 server inaccessible without traces in the event log

    - by Rob
    I am trying to figure out why a Windows 2008 server became inaccessible in terms of RDP and access to a web application. The server was turned off and then on. Looking at the event log around the time it went offline, I can't find anything, and judging from miscellaneous application logs the system was running normally after it went offline. It has to be said that the firewall had been switched off earlier by mistake, so there had been a lot of attempts to access the SQL Server with the sa user as well as RDP login attempts - but those attempts had been going on for days, so nothing new there. Besides the event logs, is there anywhere else I can look to examine the cause of this? I am also in doubt whether a DoS attack or similar would show up in the event log at all. From a log for a backup application running on this server I can see that an attempt was made to access a remote IP after the server went offline, but it got no connection.

    Read the article

  • How to add recently set cookies to nginx's access log

    - by etoleb
    I'd like to include cookie data in an nginx access log like so (simplified example):
        log_format foo '$remote_addr "$request" $cookie_bar';
        access_log /var/log/nginx/access.log foo;
    This works great on requests that already carry a cookie "bar", but for the first request to my server nginx reports "-" as the value of "bar". It seems the problem is that nginx is looking at the request headers for the cookie value. Is there a way to check for a Set-Cookie header in the response and use that as a fallback? (A possible map-based workaround is sketched below.)

    Read the article
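
    One possible (untested) direction, assuming the "bar" cookie is set by this same server: fall back to the response's Set-Cookie header, which nginx exposes as $sent_http_set_cookie, whenever the request carried no cookie. Note that $sent_http_set_cookie holds the entire header value (name, path, expiry and so on), not just the cookie's value, so further trimming may be needed.

        # In the http{} block: pick the request cookie when present, otherwise the
        # response's Set-Cookie header (resolved at log time, after the response
        # headers have been produced).
        map $cookie_bar $log_bar {
            ""      $sent_http_set_cookie;
            default $cookie_bar;
        }

        log_format foo '$remote_addr "$request" $log_bar';
        access_log /var/log/nginx/access.log foo;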

  • Linux monitor logs and email alerts?

    - by Physikal
    I have a server with a faulty power button that likes to reboot itself. Usually there are warning signs, like the acpid log file in /var/log starting to spam garbage for about 10 hours or so beforehand. Is there an easy way to have something monitor the acpid log and email me when it shows new activity? I wouldn't consider myself extremely advanced, so any "guides" you may have for accomplishing something like this would be very helpful and much appreciated. Thank you!

    Read the article

  • Windows Server 2008 Send Error Message on Event Log Error

    - by erich
    We currently have a Windows Server 2008 machine that we have configured with a Custom Task to send an email whenever an error occurs in a certain Event Log. The trigger works perfectly, and sends emails whenever we need them to. HOWEVER, we cannot find a way to get the email to contain information about the error, particularly the error message. Is there any way to have the message change based on the contents of the event-log error?

    Read the article

  • Log requests in apache upon arrival

    - by Patrick Cornelissen
    Is it possible to log requests in Apache when they arrive rather than after they have been processed? We have a pretty big web-based application, and the way it's currently logged is sometimes confusing to read, because each request with its subrequests is logged "upside down": Apache logs a request after it has been processed, so the initial request is the last entry for that "query". We're wondering whether there is a way to log on arrival. (One option is sketched below.)

    Read the article
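
    mod_log_config only writes its entry once a request completes, but one option - assuming the stock mod_log_forensic module is available - is sketched below: it writes a "+id" line with the full request headers as soon as the request is received and a matching "-id" line when processing finishes, so arrival order is preserved in the log.

        # Module path and name vary by distribution.
        LoadModule log_forensic_module modules/mod_log_forensic.so
        # Every request is logged twice: once on arrival, once on completion,
        # correlated by a unique forensic ID.
        ForensicLog logs/forensic_log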

  • Log Aggregation solutions

    - by pdaddy
    Hello, we are currently evaluating log aggregation solutions at my company. I understand that Splunk is one of the best solutions, but what are some of the "negatives" of using Splunk? Is there anything else out there that maybe does a better job of log aggregation?

    Read the article

  • Log parser for ftp server

    - by Sergei
    Can anyone suggest good log reporting software for ProFTPD? I am looking for something at least as good as http://xferlogdb.sourceforge.net where the log is fed into a database and dynamic web pages are built to retrieve historical data and statistics per user, time period, and so on. Xferlogdb is very helpful, but unfortunately the latest release is from 2004.

    Read the article
