Search Results

Search found 19881 results on 796 pages for 'log analysis'.

  • How can I get Haproxy to not log local requests?

    - by coneybeare
    I am trying to clean some of the log clutter out of my machines, starting with requests that the servers generate themselves. I have cache warmers running around the clock and I don't want these polluting the logs. I was able to stop Apache from logging local requests by setting a dontlog variable for the local IP:

        SetEnvIf Remote_Addr "RE\.DA\.CT\.ED" dontlog
        CustomLog "|logger -p local3.info -t http" combined env=!dontlog

    Now I am looking for something similar to put in the HAProxy configuration. How can I prevent requests from 127.0.0.1 from being written to the HAProxy log?

    UPDATE 2/15/11: I use the excellent Loggly service to pull my logs into the cloud, but I am seeing tons of entries like this:

        2011 Feb 15 06:09:42.000 ip-10-251-194-96 http: RE.DA.CT.ED - - [15/Feb/2011:06:09:42 -0500] "HEAD /search/Nevad/predictive/txt HTTP/1.0" 200 - "-" "Wget/1.10.2 (Red Hat modified)"
        2011 Feb 15 06:09:42.000 127.0.0.1 haproxy[10390]: 127.0.0.1:58408 [15/Feb/2011:06:09:42] www i-5dd7a331.0 0/0/0/8/8 200 210 - - --NI 0/0/0 0/0 "HEAD /search/Nevad/predictive/txt HTTP/1.1"

    I want them gone. This question focuses on how to stop that haproxy log line from being written to the server-side log in the first place.
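
    A sketch of one approach, assuming an HAProxy version that supports the http-request set-log-level action (newer than the 1.4 releases current when this was asked; the frontend name below is hypothetical):

        frontend www
            bind *:80
            # cache warmers connect from the proxy host itself
            acl from_localhost src 127.0.0.1
            # suppress the log line entirely for those requests
            http-request set-log-level silent if from_localhost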

    Read the article

  • Getting the general log to work in MySQL 5.6.8

    - by Benjamin
    I can't get the general log to work in this version of MySQL. I added the following lines to /usr/my.cnf:

        general_log = 1
        general_log_file = "/var/log/mysql.log"

    Then restarted the server:

        [root@localhost ~]# service mysql restart
        Shutting down MySQL.. SUCCESS!
        Starting MySQL. SUCCESS!

    The settings seem to be taken into account:

        mysql> SHOW VARIABLES LIKE 'general_log%';
        +------------------+--------------------+
        | Variable_name    | Value              |
        +------------------+--------------------+
        | general_log      | ON                 |
        | general_log_file | /var/log/mysql.log |
        +------------------+--------------------+
        2 rows in set (0.01 sec)

    But the log is never created:

        [root@localhost ~]# mysqladmin flush-logs
        [root@localhost ~]# ls -al /var/log/mysql.log
        ls: cannot access /var/log/mysql.log: No such file or directory

    Any idea why?
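
    One way to rule out config-file handling (a diagnostic sketch, not a confirmed fix): the general log can be switched on at runtime, and the file should appear as soon as a statement is logged. If it still doesn't, check that mysqld has permission to create files in /var/log.

        -- run as a privileged user
        SET GLOBAL general_log_file = '/var/log/mysql.log';
        SET GLOBAL general_log = 'ON';
        SELECT 1;  -- generates an entry in the general log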

    Read the article

  • SQL Server 2000, large transaction log, almost empty, performance issue?

    - by Mafu Josh
    For a company whose database I have been helping troubleshoot: in SQL Server 2000, the database is about 120 GB. Something caused the transaction log to grow MUCH larger than normal, to over 100 GB: some hung transaction that didn't commit or roll back for a few days. That has been resolved, and the log now stays around 1% full or less thanks to its hourly transaction log backups. It IS my understanding that a GROWING transaction log file can cause performance issues. But what I am a little paranoid about is the size. Although mainly empty, MIGHT it be having a negative effect on performance? I haven't found any documentation that suggests this is true. I did find this link: http://www.bigresource.com/MS_SQL-Large-Transaction-Log-dramatically-Slows-down-processing-any-idea-why--2ahzP5wK.html but in that post I can't tell whether their log was full or empty, and there are no replies. So I am guessing it is not a problem. Does anyone know for sure?
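
    If the oversized (but mostly empty) file itself turns out to be the concern, it can be shrunk after a log backup; a sketch for SQL Server 2000, where the database and logical file names are hypothetical (query sysfiles for the real ones):

        USE MyDatabase
        -- find the logical name and current size (in 8 KB pages) of each file
        SELECT name, size FROM sysfiles

        -- back up the log, then shrink the physical log file to ~2 GB
        BACKUP LOG MyDatabase TO DISK = 'D:\backup\MyDatabase_log.bak'
        DBCC SHRINKFILE (MyDatabase_log, 2000)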

    Read the article

  • Getting SEC to only monitor latest version of a log file?

    - by user439407
    I have been tasked with running SEC to help correlate PHP logs. The basic setup is pretty straightforward; the problem I'm having is that we want to monitor a log file whose name contains the date (php-2012-10-01.log, for instance). How can I tell SEC to only monitor the latest version of the file (and, of course, switch to the newest log file every day at midnight)? I could create a symlink that points at the latest file and run a cron job at midnight to update the link, but I am looking for a more elegant solution.
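
    For reference, a minimal sketch of that symlink workaround (paths are illustrative); SEC would then watch the stable name. Note that SEC holds its input files open, so you may also need to signal it after the link moves; check the sec man page for the details.

        #!/bin/sh
        # rotate-sec-link.sh: point a stable name at today's PHP log
        ln -sf "/var/log/php/php-$(date +%Y-%m-%d).log" /var/log/php/php-current.log

        # crontab entry: refresh the link every midnight
        # 0 0 * * * /usr/local/bin/rotate-sec-link.sh

        # run SEC against the stable name
        # sec -conf=/etc/sec/php.conf -input=/var/log/php/php-current.log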

    Read the article

  • SQL SERVER – Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace) – Puzzle for You

    - by pinaldave
    First of all: if you are going to say this is a very old subject, I agree, this is a very (very) old subject. In earlier times, this used to be our only option for monitoring log space. As new versions of SQL Server were released, we were equipped with DMVs, performance counters, Extended Events and many more enhancements. However, through all these years I have always used DBCC SQLPERF(logspace) to get the details of the logs. It may be because it is the command I remember from when I started my career, and it has done what I wanted every time. Recently I received an interesting question, and I thought I should request your help. Before I do, let us see the traditional usage of DBCC SQLPERF(logspace). Every time I need the details of the log, I run the following script. Additionally, I like to store the time when each log file snapshot was taken, so I can go back and review the history of log file growth. This gives me a fair estimate of when the log file was growing.

        CREATE TABLE dbo.logSpaceUsage
        (
            id INT IDENTITY (1,1),
            logDate DATETIME DEFAULT GETDATE(),
            databaseName SYSNAME,
            logSize DECIMAL(18,5),
            logSpaceUsed DECIMAL(18,5),
            [status] INT
        )
        GO

        INSERT INTO dbo.logSpaceUsage (databaseName, logSize, logSpaceUsed, [status])
        EXEC ('DBCC SQLPERF(logspace)')
        GO

        SELECT * FROM dbo.logSpaceUsage
        GO

    I used to record the details of log file growth every hour of the day and then plot charts using Reporting Services (and Excel in much earlier times). As you can see, the script above is very simple. Now here is the puzzle for you.

    Puzzle 1: Write a script, based on this table, that gives you the time period with the highest growth recorded in the stored data.

    Puzzle 2: Write a script, based on this table, that gives you the amount of log file growth from the beginning of the table to the latest recording.

    You may have to run the script above at some interval to get enough samples of the log file to answer these puzzles. To make things simple, I am giving you sample data, with the expected answers for both puzzles described below. Here is the sample script for the puzzle:

        -- This is sample data for the puzzle
        CREATE TABLE dbo.logSpaceUsage
        (
            id INT IDENTITY (1,1),
            logDate DATETIME DEFAULT GETDATE(),
            databaseName SYSNAME,
            logSize DECIMAL(18,5),
            logSpaceUsed DECIMAL(18,5),
            [status] INT
        )
        GO

        INSERT INTO dbo.logSpaceUsage (databaseName, logDate, logSize, logSpaceUsed, [status])
        SELECT 'SampleDB1', '2012-07-01 7:00:00.000', 5, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 9:00:00.000', 16, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 11:00:00.000', 9, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 14:00:00.000', 18, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-01 7:00:00.000', 5, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-04 7:00:00.000', 15, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-09 7:00:00.000', 25, 10, 0
        GO

    Expected result of Puzzle 1: you will notice two entries for database SampleDB3, as there were two instances of the log file growing by the same value.

    Expected result of Puzzle 2: the growth of each database's log from the first recording to the latest.

    Please leave a comment with a valid answer, and I will post the valid answers, with due credit, next week. Not to mention that winners will get a surprise gift from me.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
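
    For readers who want to attempt the puzzles before the answers are posted, a sketch of one possible approach (my own, not from the post; it assumes ids are consecutive per database, as in the sample data):

        -- Puzzle 1: growth between consecutive samples, keeping each
        -- database's largest jump
        SELECT cur.databaseName,
               prev.logDate AS growthStart,
               cur.logDate AS growthEnd,
               cur.logSize - prev.logSize AS growth
        FROM dbo.logSpaceUsage cur
        INNER JOIN dbo.logSpaceUsage prev
            ON cur.databaseName = prev.databaseName
           AND cur.id = prev.id + 1
        WHERE cur.logSize - prev.logSize =
              (SELECT MAX(c.logSize - p.logSize)
               FROM dbo.logSpaceUsage c
               INNER JOIN dbo.logSpaceUsage p
                   ON c.databaseName = p.databaseName
                  AND c.id = p.id + 1
               WHERE c.databaseName = cur.databaseName)
        GO

        -- Puzzle 2: growth from the first recording to the latest, per database
        SELECT u.databaseName,
               MAX(CASE WHEN u.logDate = x.maxDate THEN u.logSize END)
             - MAX(CASE WHEN u.logDate = x.minDate THEN u.logSize END) AS totalGrowth
        FROM dbo.logSpaceUsage u
        INNER JOIN (SELECT databaseName,
                           MIN(logDate) AS minDate,
                           MAX(logDate) AS maxDate
                    FROM dbo.logSpaceUsage
                    GROUP BY databaseName) x
            ON u.databaseName = x.databaseName
        GROUP BY u.databaseName
        GO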

    Read the article

  • How To View and Write To System Log Files on Ubuntu

    - by Chris Hoffman
    Linux logs a large number of events to disk, mostly stored in the /var/log directory in plain text. Most log entries go through the system logging daemon, syslogd, and are written to the system log. Ubuntu includes a number of ways of viewing these logs, either graphically or from the command line. You can also write your own log messages to the system log, which is particularly useful in scripts.
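
    As a quick illustration of that last point, the standard logger(1) utility sends a message to syslog from the shell; the tag and priority below are just examples:

        #!/bin/sh
        # write a tagged, prioritized message to the system log, then check
        # the result (on Ubuntu it typically lands in /var/log/syslog)
        logger -t mybackup -p user.info "Nightly backup finished OK"
        tail -n 1 /var/log/syslog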

    Read the article

  • Function inside .profile results in no log-in

    - by bioShark
    I've created a custom function in my .profile, added right at the bottom, after my custom aliases:

        # custom functions
        function eclipse-gtk {
            cd ~/development/eclipse-juno
            ./eclipse_wb.sh &
            cd -
        }

    The function starts a custom version of my Eclipse. After adding it, because I didn't want to log out and back in, I reloaded my profile with:

        . ~/.profile

    I then tested the function by calling eclipse-gtk, and it worked without any issue. Today when I booted, I couldn't log in. After providing my password, within a few seconds I was back at the log-in screen. Dropping to the command line using CTRL + ALT + F1, I commented out the function in my .profile, and logging in worked without any issue. My question is: what did I do wrong when I wrote the function? And if there is something wrong, why did it work yesterday after reloading the profile? Thanks in advance. Using: Ubuntu 12.04
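
    A likely explanation (my inference; the snippet above doesn't include an answer): at graphical login, Ubuntu 12.04 sources ~/.profile with /bin/sh, which is dash, and dash accepts neither the function keyword nor a hyphen in a function name, so the session script aborts and you are dropped back at the log-in screen. The interactive test worked because . ~/.profile ran inside bash, which accepts both. A POSIX-compatible rewrite:

        # POSIX sh: no "function" keyword, no hyphen in the name
        eclipse_gtk() {
            cd ~/development/eclipse-juno || return
            ./eclipse_wb.sh &
            cd - > /dev/null
        }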

    Read the article

  • Ubuntu One Files for Android will not let me log in

    - by user20867
    I installed Ubuntu One Files on my Nexus One phone. When I tap Log in on the main screen, the app tries to log in, then after a few seconds returns the following message: "Log-in failed, please try again later." I have an Ubuntu One account, and when I tap Register on the main screen of Ubuntu One Files, I can log in using my phone's Web browser. But if I go back to the app and try to log in, I get the same error. Again, my phone is a Nexus One running Android 2.3.4. The phone is not rooted or modded in any way.

    Read the article

  • Write client IP in IIS 7.0 log over firewalls

    - by Guy Bertental
    I need a solution for IIS 7.0, running on Windows Server 2008 64-bit, that writes my clients' IPs to the IIS logs while the server is behind firewalls and proxies (i.e. passes the X-Forwarded-For header value through). I've tried to install an ISAPI filter written by Joe Pruitt. It works great on Windows Server 2003 32-bit with IIS 6.0, but seems to do nothing at all on Windows Server 2008 64-bit with IIS 7.0. Has anyone tried this ISAPI filter on this version of the OS, or found another solution? Link to Joe Pruitt's (from F5) ISAPI filter: http://devcentral.f5.com/weblogs/Joe/archive/2009/08/19/x_forwarded_for_log_filter_for_windows_servers.aspx Best regards, Guy Bertental

    Read the article

  • Open Source Log Aggregators?

    - by Dean J
    I have sixteen servers producing Log4j logs, accessible by ssh. I want to see the output of all the logs on my desktop machine. Apache Chainsaw can presumably do this, but the documentation isn't getting me there. "Put all the jars into your ~/.chainsaw directory": got that. "Chainsaw will automatically use the functionality in those JARs"? Nope. Chainsaw isn't picking up log4j-chainsaw-vfs.jar, by the look of it, so sftp is out. Any suggestions other than Chainsaw?

    Read the article

  • iPhone crash log with dSYM not loading debug information

    - by AngeDeLaMort
    Hello, I was trying to see why my application crashed on the device (iPhone) using the dSYM generated along with the executable (in Ad Hoc), but I don't know why there isn't any useful information. "Organizer" seems able to find the appropriate dSYM and translate some data into something more readable, but when it comes to my application, I just get an address. Since I know how to reproduce the crash, I've tried to set up my build so it can help me in the future. I've checked that all the proper flags are set in the project build properties, and everything seems fine. After doing some research, it seems that all the information is stripped at link time, and the dSYM seems completely useless. I've played with some flags, but nothing changed. So, is there something special to do in order to get the crash file human-readable? Or is it impossible with the Ad Hoc settings? The closest thing to working that I've managed was to build a debug version and look up the address in it. At least it seems to give the right file. So, I made a sample app, and here is what I have (the line I want is #4):

        Thread 0 Crashed:
        0   libobjc.A.dylib   0x00003ebc objc_msgSend + 20
        1   UIKit             0x0005c970 -[UIView dealloc] + 60
        2   UIKit             0x0005c840 -[UIImageView dealloc] + 76
        3   CoreFoundation    0x0003963a -[NSObject release] + 28
        4   MyApplication     0x000046a6 0x1000 + 13990
        5   UIKit             0x00069750 -[UIViewController view] + 44
        6   MyApplication     0x000053fa 0x1000 + 17402

    The crash is caused by two successive releases of an object. Thanks in advance.
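
    For manual symbolication, the atos tool can translate such an address given the matching dSYM and the binary's load address (0x1000 in this log); a sketch, assuming the dSYM's UUID matches the Ad Hoc build:

        # symbolicate frame 4 by hand; use the arch the crash log reports
        # (armv6/armv7), and the load address from the log
        atos -arch armv6 \
             -o MyApplication.app.dSYM/Contents/Resources/DWARF/MyApplication \
             -l 0x1000 0x000046a6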

    Read the article

  • Can Apache httpd be made to log errors to console instead of log files under Windows?

    - by Vilx-
    I'm doing infrequent development with Apache/PHP on my Windows machine, so I've opted to run Apache as a console process instead of a service. It would be nice if errors could be logged to the console window instead of a logfile so I can see them immediately. Can this be done somehow? Apache doesn't seem to have such a capability built in, and I can't find a mod that would do this either.
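
    One untested idea (an assumption on my part, not a documented Apache feature): ErrorLog takes an arbitrary file path, and Windows exposes the console as the reserved device name CON, so the two might combine when httpd runs as a console process:

        # httpd.conf: experimental, verify before relying on it.
        # CON is the Windows console device; whether httpd can open it
        # for writing is an assumption to test.
        ErrorLog "CON"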

    Read the article

  • Snort analysis of a Wireshark capture

    - by Ben Voigt
    I'm trying to identify trouble users on our network. ntop identifies high-traffic and high-connection users, but malware doesn't always need high bandwidth to really mess things up. So I am trying to do offline analysis with snort (I don't want to burden the router with inline analysis of 20 Mbps of traffic). Apparently snort provides a -r option for this purpose, but I can't get the analysis to run. The analysis system is Gentoo, amd64, in case that makes any difference. I've already used oinkmaster to download the latest IDS signatures. But when I try to run snort, I keep getting the following error:

        % snort -V

           ,,_     -*> Snort! <*-
          o"  )~   Version 2.9.0.3 IPv6 GRE (Build 98) x86_64-linux
           ''''    By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
                   Copyright (C) 1998-2010 Sourcefire, Inc., et al.
                   Using libpcap version 1.1.1
                   Using PCRE version: 8.11 2010-12-10
                   Using ZLIB version: 1.2.5

        %> snort -v -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/
        (snip)
        273 out of 1024 flowbits in use.

        [ Port Based Pattern Matching Memory ]
        +- [ Aho-Corasick Summary ] -------------------------------------
        | Storage Format    : Full-Q
        | Finite Automaton  : DFA
        | Alphabet Size     : 256 Chars
        | Sizeof State      : Variable (1,2,4 bytes)
        | Instances         : 314
        |     1 byte states : 304
        |     2 byte states : 10
        |     4 byte states : 0
        |     Characters    : 69371
        |     States        : 58631
        |     Transitions   : 3471623
        |     State Density : 23.1%
        | Patterns          : 3020
        | Match States      : 2934
        | Memory (MB)       : 29.66
        |     Patterns      : 0.36
        |     Match Lists   : 0.77
        |     DFA
        |       1 byte states : 1.37
        |       2 byte states : 26.59
        |       4 byte states : 0.00
        +----------------------------------------------------------------
        [ Number of patterns truncated to 20 bytes: 563 ]

        ERROR: Can't find pcap DAQ!
        Fatal Error, Quitting..

    net-libs/daq is installed, but I don't even want to capture traffic, I just want to process the capture file. What configuration options should I be setting/unsetting in order to do offline analysis instead of real-time capture?
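
    A hedged suggestion: Snort 2.9 delegates all input, including pcap file readback, to a DAQ module, so this error usually means the pcap DAQ library isn't on the module search path. Selecting it explicitly and pointing at the DAQ directory may help (/usr/lib64/daq is a guess for a Gentoo amd64 box; check where net-libs/daq installed daq_pcap.so):

        # select the pcap DAQ and tell snort where the DAQ modules live
        snort --daq pcap --daq-dir /usr/lib64/daq \
              -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/

        # list the DAQ modules snort can actually find
        snort --daq-dir /usr/lib64/daq --daq-list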

    Read the article

  • Attend free workshop on 3/16: Architecture Analysis Patterns

    On Tuesday, 3/16/2010, Headspring is offering another free monthly workshop. This month, I am leading the workshop, and the topic is: Architecture Analysis Patterns: How to reason about the structure of an application. Layering is a fundamental concept of software architecture: layers help to separate dependencies and decouple concerns. Most of the industry does layering in name only; it's lip service. In 23 slides and accompanying commentary, we will explore the fundamental concept of separating...

    Read the article

  • SQL Server Analysis Services 2005 crash when disk is full?

    - by squillman
    One of our SQL boxes ran itself out of disk space last night. This particular server has both the database engine and Analysis Services on it. The database engine was not happy about having no disk space on the volume where all the data files are, but Analysis Services just plain died. At least, the only thing I have to blame is the full volume. Has anyone experienced an SSAS crash that they've been able to tie directly to having no disk space? I've got nothing else in the SQL or event logs to blame...

    Read the article

  • Free SEO Analysis using IIS SEO Toolkit

    In my spare time I've been thinking about new ideas for the SEO Toolkit, and it occurred to me that rather than continuing to figure out more reports and better diagnostics against random fake sites, it could be interesting to ask openly for anyone who wants a free SEO analysis report of their site, and test-drive some of it against real sites. So what is in it for you: I will analyze your site to look for common SEO errors, I will create a digest of actions to do and other...

    Read the article

  • Best way to go about cost-benefit analysis on hardware

    - by Michael
    I'm looking to build a low-end computational server (my jargon in this field is limited, so if someone can state that better, please correct it to match the jargon). I'll basically be running computational fluid dynamics programs, large matrix computations, and bioinformatics code. What would be the best way to approach a cost/benefit analysis of what to put in the system? Perhaps even more generally: how does one approach a cost/benefit analysis of hardware theoretically (doing the analysis before building the machine)?

    Read the article

  • VMMap - awesome memory analysis tool

    VMMap is a process virtual and physical memory analysis utility. It shows a breakdown of a process's committed virtual memory types as well as the amount of physical memory (working set) assigned by the operating system to those types. Besides graphical representations of memory usage, VMMap also shows summary information and a detailed process memory map. Powerful filtering and refresh capabilities allow you to identify the sources of process memory usage and the memory cost of application features. Besides flexible views for analyzing live processes, VMMap supports the export of data in multiple forms, including a native format that preserves all the information so that you can load it back in. It also includes command-line options that enable scripting scenarios. VMMap is the ideal tool for developers wanting to understand and optimize their application's memory resource usage.

    Read the article

  • Do I need to retain SharePoint usage analysis log files?

    - by dunxd
    Our SharePoint installation currently has 30 GB of usage analysis log files, dating back about six months. I have configured SharePoint to run Usage Analysis Processing every night, so I am wondering whether I need to keep these files for so long. SharePoint doesn't seem to clean up these files automatically; I think six months ago I had to clear out logs due to disk space issues. So my question is: do I need to retain these files in order to get decent usage analysis reports, or can I delete them as soon as the usage analysis processing has completed?

    Read the article

  • BigQuery: Simple example of a data collection and analysis pipeline + Your questions

    Join Michael Manoochehri and Ryan Boyd live to talk about Google BigQuery. We'll give an overview of how we're using our cars, phones, App Engine and BigQuery to collect and analyze data. We'll discuss our trusted tester feature, which allows analyzing data from the App Engine datastore. We'll also review some of the more interesting questions from Stack Overflow and take questions via Google Moderator. From: GoogleDevelopers (26:53)

    Read the article

  • 1000 most visited sites on the web: A Google Analysis

    Google has released an analysis of the 1000 most visited sites on the web. Considering that we (Microsoft) own/operate 3 of the top 10 sites and have a significant interest in Facebook, plus this recent report stating that Microsoft employees are the most social-media-savvy, this goes a long way to show how well we can operate in our cloud and social media integration and collaboration strategies. William Tay 2000-2010 | Swinging Technologist http://www.softwaremaker.net/blog...

    Read the article
