Search Results

Search found 17971 results on 719 pages for 'log analyzer'.

Page 3/719 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Activity log manager is not preventing Zeitgeist from logging files

    - by Vivek
    I am running Gnome Shell and I do not like Zeitgeist indexing all my files. This makes the search in the dash very slow. I do not want the dash to search recent files, so I installed Activity Log Manager to prevent Zeitgeist's logging activity. I configured the log manager as below. But even after adding every folder, the files keep appearing in the dash under Recent Items. Is there any other software or tweak that will instruct Zeitgeist to index only the applications installed on my system and not my recent files?
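    A heavier-handed sketch, assuming Zeitgeist's default data location under ~/.local/share/zeitgeist: stop the daemon and delete the event database, so the dash has no recent-file history left to show. (The session may restart the daemon on next login.)

        zeitgeist-daemon --quit                          # stop the running daemon
        rm -f ~/.local/share/zeitgeist/activity.sqlite   # wipe the logged events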

    Read the article

  • Schedule sending a log file's contents by mail

    - by user3215
    I have the Postfix mail agent installed and a Gmail relay configured, and I can send mail from the terminal as below:
        root@statino1:~# mail -s "subject_here" [email protected]
        CC: <hit Enter for an empty CC>
        Type the message here, press Ctrl+D
    I have to send a log file's contents as a mail and schedule it to run every day. How do I send the log file contents as the mail message, and how do I automate the inputs of the mail command so that I can schedule it? Does anybody have any idea?
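    A minimal sketch, assuming the log lives at /var/log/myapp.log and the recipient address is a placeholder: redirecting stdin into mail replaces the interactive prompts entirely, which is what makes the command schedulable.

        # Send the log file contents as the message body, no prompts
        mail -s "Daily log report" admin@example.com < /var/log/myapp.log

        # Schedule it daily at 07:00 (add with: crontab -e)
        0 7 * * * mail -s "Daily log report" admin@example.com < /var/log/myapp.log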

    Read the article

  • How to prevent the system from generating log files

    - by shantanu
    My question may be a little surprising, but I need this. I am using a laptop with a slow processor, and I have found that the HDD has some bad sectors and its response has become slow, although disk health is OK according to SMART tools. I cannot replace the HDD right now, so I have decided to reduce disk operations. How do I prevent the system from generating log files, or any other files that are used to keep history? I know log files are very important, but I don't care about them right now. Please help.
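    A blunt sketch for an Ubuntu system where rsyslog writes most of /var/log (the Upstart override path is an assumption): stopping the logger removes the bulk of routine history writes, at the cost of losing troubleshooting information.

        sudo service rsyslog stop                           # stop logging now
        echo manual | sudo tee /etc/init/rsyslog.override   # don't start it at boot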

    Read the article

  • How do I fix a custom Event Viewer Log that merges automatically with the Application log?

    - by NightOwl888
    I am trying to create a custom event log for a Windows Service on Windows Server 2003. I would like to name the custom log "(ML) Startup Commands". However, when I add a registry key with that name to HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\, it adds a log, but the event viewer shows the exact same events that are in the Application log. If I add a registry key with the name "(ML) Startup Commands 2" instead, it shows a blank event log as expected. In fact, any other name works correctly except for the one I want. I have searched the registry for other keys containing the string "(ML)" and removed all other references to this key name, yet I continue to get merged results in the viewer when I create a key with this name. My question is: how can I fix the server so I can create a custom event log with this name that shows only the events from my application, not the events from the default Application event log that is installed with Windows? Update: I rebooted the server and, wouldn't you know it, the log started acting normally. I got a strange error message in the Application log:
        The EventSystem sub system is suppressing duplicate event log entries for a duration of 86400 seconds. The suppression timeout can be controlled by a REG_DWORD value named SuppressDuplicateDuration under the following registry key: HKLM\Software\Microsoft\EventSystem\EventLog. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    I can only hope this error doesn't mean the problem will come back after 86400 seconds. I guess I will have to wait and see.

    Read the article

  • --log-slave-updates is OFF but updates received from master are still logged to slave binary log?

    - by quanta
    MySQL version 5.5.14. According to the documentation, by default a slave does not log to its binary log any updates that are received from a master server. Here is my config on the slave:
        # egrep 'bin|slave' /etc/my.cnf
        relay-log=mysqld-relay-bin
        log-bin = /var/log/mysql/mysql-bin
        binlog-format=MIXED
        sync_binlog = 1
        log-bin-trust-function-creators = 1

        mysql> show global variables like 'log_slave%';
        +-------------------+-------+
        | Variable_name     | Value |
        +-------------------+-------+
        | log_slave_updates | OFF   |
        +-------------------+-------+
        1 row in set (0.01 sec)

        mysql> select @@log_slave_updates;
        +---------------------+
        | @@log_slave_updates |
        +---------------------+
        |                   0 |
        +---------------------+
        1 row in set (0.00 sec)
    But the slave still logs the updates that are received from the master to its binary logs. Let's see the file sizes:
        -rw-rw---- 1 mysql mysql  37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256
        -rw-rw---- 1 mysql mysql  25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257
        -rw-rw---- 1 mysql mysql  46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258
        -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259
        -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260
    And here is a sample query when reading these binary files with the mysqlbinlog utility:
        #120404 19:08:57 server id 3  end_log_pos 110324763  Query  thread_id=382435  exec_time=0  error_code=0
        SET TIMESTAMP=1333541337/*!*/;
        INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci'))
        /*!*/;
        # at 110324763
    Did I miss something?
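    One hedged diagnostic, assuming the binary logs are readable on the slave: events that arrived through replication carry the master's server id (3 in the sample above), while statements executed directly against the slave carry the slave's own server id, so counting them shows where the logged writes really come from.

        # How many events in this binlog are stamped with the master's server id?
        mysqlbinlog /var/log/mysql/mysql-bin.001260 | grep -c 'server id 3'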

    Read the article

  • A few questions on packet sniffers/analyzers

    - by user37652
    I have a few questions about packet sniffers. I'm using a ZyXEL P-600 series modem and a hub to distribute the internet connection. Can I use a packet sniffer here to determine whether a user is downloading something? Can I determine whether a user is downloading a file from the modem alone? (The lights blink faster.) Is there an application I could use on the modem or the hub to limit or prevent direct downloads? Details: OS: Ubuntu 9.10 and Windows 7.
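    A minimal sniffer sketch, assuming the Ubuntu box hangs off the same hub (a hub repeats all traffic to every port) and that its interface is eth0: a sustained stream of large TCP packets is a reasonable signature of a download in progress.

        # Show only large TCP packets; a steady burst suggests a file transfer
        sudo tcpdump -i eth0 -nn 'tcp and greater 1000'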

    Read the article

  • syslog not showing log levels in messages

    - by user837208
    Here is sample output of my syslog messages in /var/log/syslog:
        Nov 15 20:20:48 ubuntu winbindd[915]: [2011/11/15 20:20:48.940063, 0] winbindd/idmap_tdb.c:287(idmap_tdb_open_db)
        Nov 15 20:20:48 ubuntu winbindd[915]: Upgrade of IDMAP_VERSION from -1 to 2 is not possible with incomplete configuration
    How do I see what the level of a message was, like info, warn, error, etc.? I am using Ubuntu 10.04 LTS with rsyslog package version 5.8.1-1ubuntu2.
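    A hedged sketch for rsyslog 5.x (legacy configuration syntax): the stock file format simply omits the severity, but a custom template can print it via the %syslogseverity-text% property. The template and property names below are from rsyslog's documented legacy syntax.

        # In /etc/rsyslog.conf: define a template that includes the severity name,
        # bind it to the syslog file rule, then restart rsyslog
        $template WithSeverity,"%timegenerated% %HOSTNAME% %syslogseverity-text% %syslogtag%%msg%\n"
        *.*;auth,authpriv.none    -/var/log/syslog;WithSeverity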

    Read the article

  • Static Analyzer says I have a leak... why?

    - by Walter
    I think this code should be fine, but the Static Analyzer doesn't like it. I can't figure out why and was hoping that someone could help me understand. The code works fine; the analyzer result just bugs me.
        Coin *tempCoin = [[Coin alloc] initalize];
        self.myCoin = tempCoin;
        [tempCoin release];
    Coin is a plain NSObject subclass and it has an initalize method. myCoin is a property of the current view and is of type Coin. I assume it is telling me I am leaking tempCoin. In my view's .h I have declared myCoin as a property with nonatomic, retain. I've tried autorelease as well as this normal release, but the Static Analyzer continues to say:
        1. Method returns an Objective-C object with a +1 retain count (owning reference)
        2. Object allocated on line 97 is no longer referenced after this point and has a retain count of +1 (object leaked)
    Line 97 is the first line that I show.

    Read the article

  • Learn More About the PO Approvals Analyzer

    - by LuciaC
    You may think that the PO Approvals Analyzer for Release 12 is only for diagnosing problems when you have a single Purchase Order or Requisition stuck in process, but it offers valuable information to keep your Procurement environment healthy. Consider this:
        - The analyzer will list all Procurement critical patches that have not been applied.
        - It will report Procurement invalid objects with their error messages and provide solutions.
        - It validates setup and database conditions, for example max extents and space issues.
    Also, the analyzer can be run on all Purchasing documents starting from a date you enter. This multiple-document check provides validations on:
        - Data corruption issues.
        - Workflow errors with generic messages, i.e. document manager errors.
        - Documents with workflows in error that cannot be progressed via the application.
    And, unlike other diagnostics, the analyzer provides known solutions to the problems indicated! So access the Analyzer today and run it on your instance! Access it now via Doc ID 1525670.1.

    Read the article

  • How to keep haproxy log messages out of /var/log/syslog

    - by itsadok
    I set up haproxy logging via rsyslogd using the tips from this article, and everything seems to be working fine. The log files get the log messages. However, every log message from haproxy also shows up in /var/log/syslog. This means that once the server goes live, the syslog will be quite useless, as it will be overrun with haproxy log messages. I would like to filter those messages out of /var/log/syslog. After going over the rsyslogd documentation, I tried to change the file /etc/rsyslog.d/50-default.conf thus:
        *.*;auth,authpriv.none;haproxy.none    -/var/log/syslog
    I simply added the ;haproxy.none part. After restarting rsyslogd it stopped working completely until I reverted my changes. What am I doing wrong?
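    A hedged sketch of the likely fix: "haproxy" is not a valid syslog facility name, which is why rsyslogd choked on the edited line. HAProxy normally logs to one of the local facilities (whatever "log 127.0.0.1 local0" in haproxy.cfg names; local0 is assumed here), so exclude that facility instead, or discard by program name before the catch-all rule.

        # /etc/rsyslog.d/50-default.conf: exclude the facility haproxy logs to
        *.*;auth,authpriv.none;local0.none    -/var/log/syslog

        # Alternative (must appear before the catch-all rule): drop by program name
        :programname, isequal, "haproxy" ~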

    Read the article

  • Book review: SQL Server Transaction Log Management

    - by Hugo Kornelis
    It was an offer I could not resist. I was promised a free copy of one of the newest books from Red Gate Books, SQL Server Transaction Log Management (by Tony Davis and Gail Shaw), with the caveat that I should write a review after reading it. Mind you, not a commercial, “make sure we sell more copies” kind of review, but a review of my actual thoughts. Yes, I got explicit permission to be my usual brutally honest self. A total win/win for me! First, I get a free book – and free is always good,...(read more)

    Read the article

  • weird postgresql log entries

    - by hyperboreean
    I am trying to figure out why I get some weird entries in my postgresql log after I do a restart:
        2010-05-14 11:30:25 EEST LOG: database system was shut down at 2010-05-14 11:30:22 EEST
        2010-05-14 11:30:25 EEST LOG: autovacuum launcher started
        2010-05-14 11:30:25 EEST LOG: database system is ready to accept connections
        2010-05-14 11:30:25 EEST LOG: incomplete startup packet
        2010-05-14 11:30:40 EEST WARNING: there is already a transaction in progress
        2010-05-14 11:30:40 EEST LOG: could not receive data from client: Connection reset by peer
        2010-05-14 11:30:40 EEST LOG: unexpected EOF on client connection
    First, there's the "incomplete startup packet" entry, which bugs me. Does anyone have any idea why this happens? And this one is also very strange:
        2010-05-14 11:30:40 EEST WARNING: there is already a transaction in progress
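    For the first entry, a hedged note: "incomplete startup packet" usually just means something opened a TCP connection to the PostgreSQL port and closed it without completing the protocol handshake, a monitoring probe for example. It is easy to reproduce (default port assumed):

        # Open and immediately close a connection to postgres;
        # the server should log "incomplete startup packet" for it
        nc -z localhost 5432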

    Read the article

  • Unable to log iptables

    - by ActuatedCrayon
    I'm having trouble getting iptables to log to any file. My iptables rules look like:
        Chain INPUT (policy ACCEPT 1366 packets, 433582 bytes)
         pkts bytes target  prot opt in     out  source     destination
          869 60656 LOG     icmp --  venet0 *    0.0.0.0/0  0.0.0.0/0    LOG flags 0 level 7
    Syslogd is the only log helper running. The default syslog.conf didn't work, so I tried adding "kern.=debug -/var/log/iptables.log", but the file already has "kern.* -/var/log/kern.log". There are recent syslog entries, so it's not a permissions thing. I'm running Ubuntu 12.04.1 with kernel 2.6.32-042stab061.2.
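    A hedged first check: confirm the kernel is emitting the LOG messages at all before tuning syslogd. One caveat worth noting: the -042stab kernel string suggests an OpenVZ/Virtuozzo container, and inside such containers the iptables LOG target typically writes to the host's kernel log rather than the guest's, so the messages may never reach the container's syslogd at all.

        # LOG target output lands in the kernel ring buffer first;
        # if nothing matching the rule shows up here, syslogd never sees it either
        sudo dmesg | tail -n 20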

    Read the article

  • Using DEBUG Mode in Oracle SQL Developer to Log SQL

    - by thatjeffsmith
    Curious how we’re getting the data you see in SQL Developer when you click on something? While many of the dialogs provide a ‘SQL’ panel that shows you the SQL ABOUT to be generated, I’d rather see the SQL AS it’s executed. True, you could set a TRACE or fire up a Monitor Sessions report, but both of those solutions leave me hungry for more. Did you know that SQL Developer has a ‘debug’ mode? It slows the tool down a bit and spits out a lot of information you don’t care about, but it ALSO shows you ALL the SQL that is sent to the database, as you click around the tool!
        [Image: See ALL the SQL that SQL Developer sends to the database on your behalf]
    Enable DEBUG Mode
    When you see the splash screen as SQL Developer fires up, frantically hit Up, Up, Down, Down, Left, Right, Left, Right, B, A, SELECT, Start. Wait, wrong game. No, all you need to do is go to your SQL Developer directory and navigate down to the ‘bin’ directory. In that directory, find the ‘sqldeveloper.conf’ file:
        Install Directory
          - sqldeveloper
            - bin
              - sqldeveloper.conf
    Open it with a text editor. Find this line:
        IncludeConfFile sqldeveloper-nondebug.conf
    And replace it with this line:
        IncludeConfFile sqldeveloper-debug.conf
    Save the file. Start up SQL Developer.
    Observe the Logging Page – Log Panel for the SQL
    There’s going to be more than just SQL here. You’ll actually see a LOT of other information. If you’re having general problems with the tool and you want to see the nitty-gritty of what’s going on, then this is a good place to satisfy your curiosity and might help us diagnose your issue if you post to the forums or open a ticket with My Oracle Support. You’ll find ‘INFO’ entries that look a little something like this:
        [Image: This is the query used to populate your Tables list in the connection tree]
    You can double-click on the sql text and get a pop-up window that’s much easier to read. See all that typing we’re saving you? I don’t recommend running in DEBUG mode all the time. Capturing this information and displaying it is more expensive than not doing so. And it provides a lot of information you don’t normally need to see. But when you DO want to know what’s going on and why, this is an excellent way of getting that information. When you’re ready to go back to ‘normal’ mode, just close SQL Developer, go back to your .conf file, and add the ‘nondebug’ bit back.
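    A shell sketch of the same edit, assuming a Linux install and a hypothetical install path (back the file up first):

        cd /path/to/sqldeveloper/bin                 # hypothetical install path
        cp sqldeveloper.conf sqldeveloper.conf.bak   # keep a way back to 'normal' mode
        sed -i 's/sqldeveloper-nondebug\.conf/sqldeveloper-debug.conf/' sqldeveloper.conf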

    Read the article

  • auth.log is empty (Ubuntu)

    - by Vinicius Pinto
    The /var/log/auth.log file on my Ubuntu 9.04 is empty. syslogd is running and the content of /etc/syslog.conf is as follows. Any help? Thanks.
        #  /etc/syslog.conf     Configuration file for syslogd.
        #
        #                       For more information see syslog.conf(5)
        #                       manpage.
        #
        # First some standard logfiles.  Log by facility.
        #
        auth,authpriv.*                 /var/log/auth.log
        *.*;auth,authpriv.none          -/var/log/syslog
        #cron.*                         /var/log/cron.log
        daemon.*                        -/var/log/daemon.log
        kern.*                          -/var/log/kern.log
        lpr.*                           -/var/log/lpr.log
        mail.*                          -/var/log/mail.log
        user.*                          -/var/log/user.log
        #
        # Logging for the mail system.  Split it up so that
        # it is easy to write scripts to parse these files.
        #
        mail.info                       -/var/log/mail.info
        mail.warning                    -/var/log/mail.warn
        mail.err                        /var/log/mail.err
        # Logging for INN news system
        #
        news.crit                       /var/log/news/news.crit
        news.err                        /var/log/news/news.err
        news.notice                     -/var/log/news/news.notice
        #
        # Some `catch-all' logfiles.
        #
        *.=debug;\
                auth,authpriv.none;\
                news.none;mail.none     -/var/log/debug
        *.=info;*.=notice;*.=warning;\
                auth,authpriv.none;\
                cron,daemon.none;\
                mail,news.none          -/var/log/messages
        #
        # Emergencies are sent to everybody logged in.
        #
        *.emerg                         *
        #
        # I like to have messages displayed on the console, but only on a virtual
        # console I usually leave idle.
        #
        #daemon,mail.*;\
        #       news.=crit;news.=err;news.=notice;\
        #       *.=debug;*.=info;\
        #       *.=notice;*.=warning    /dev/tty8
        # The named pipe /dev/xconsole is for the `xconsole' utility.  To use it,
        # you must invoke `xconsole' with the `-file' option:
        #
        #    $ xconsole -file /dev/xconsole [...]
        #
        # NOTE: adjust the list below, or you'll go crazy if you have a reasonably
        #       busy site..
        #
        daemon.*;mail.*;\
                news.err;\
                *.=debug;*.=info;\
                *.=notice;*.=warning    |/dev/xconsole
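    A few hedged checks, assuming the classic sysklogd that the syslogd/syslog.conf pairing on 9.04 suggests (the init script name is an assumption): restart the logger so the config is re-read, then send a test message to the auth facility.

        sudo /etc/init.d/sysklogd restart      # init script name assumed
        logger -p auth.info "auth.log test"    # should append a line to auth.log
        tail /var/log/auth.log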

    Read the article

  • Which is the best Postfix log analyzer?

    - by Anto Binish Kaspar
    Which is the best Postfix log analyzer? We are looking for a good log analyzer for Postfix. We need to analyze the following:
        - How many mails are queued?
        - How many mails were not delivered?
        - Why were mails not delivered?
    Also, is it possible to view the subject alongside each mail's status instead of just the message ID? I mean, to review the status of a single mail. We are using the Sawmill analyzer now, but management is not satisfied with its report, since it is missing single-message status and subject.
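    One hedged starting point: pflogsumm, a widely used Postfix log summarizer (the mail log path is assumed). It counts received, delivered, deferred, and bounced messages and groups the deferral/bounce reasons; like any tool that works from the log alone, though, it cannot report subjects, since Postfix does not log them by default.

        # Summarize today's Postfix activity from the mail log
        pflogsumm -d today /var/log/mail.log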

    Read the article

  • Why does Windows Event Log stop logging events before maximum log size is reached?

    - by Tuure Laurinolli
    I have a service that produces a lot of event log output. Currently the event log is configured to overwrite old events as needed to keep the log from ever getting full. We have also increased the event log size considerably (to about 600 MB). Recently the service started reporting errors to its clients, and the error message it was sending them is "The event log file is full". How can this be, when the event log is configured to overwrite as necessary? In our hurry to get the service back up we cleared the event log without saving its contents, but most likely it had not reached 600 MB yet, judging from the sizes of some earlier log dumps. There is also MS KB entry 312571, which reports that a hotfix for a similar issue is available, but the configuration that the fix applies to is not exactly the same as ours. Specifically, the fix only applies if event logs are configured to never overwrite old events. I wonder if this has something to do with the fact that the log files apparently are memory-mapped. What happens if the system runs out of address space to map files to?

    Read the article

  • Zabbix doesn't update value from file neither with log[] nor with vfs.file.regexp[] item

    - by tymik
    I am using Zabbix 2.2. I have a very specific environment where I have to generate the desired data to a file via script, upload that file to FTP from the host, and download it to the Zabbix server from FTP. After the file is downloaded, I check it with log[] and vfs.file.regexp[] items. I use these items as below:
        log[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,"all",\1]
        vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1]
    The line I am parsing looks like this:
        C: 8195Mb 5879Mb 2316Mb 28.2
    The value I want to extract is the 28.2 at the end of the line. The problem I am currently trying to solve is that when I update the file (upload from the host to FTP, then download from FTP to the Zabbix server), the value does not update. At first I tried only log[], but I suspect that log[] treats the file as a real log file and doesn't re-check lines it has already seen (although, following the documentation, it should with the "all" value), so I added the vfs.file.regexp[] item too. The log[] item received a value in the past, but it doesn't update. The vfs.file.regexp[] item hasn't received any value so far. file.txt has been re-uploaded and re-downloaded several times and the situation doesn't change. It seems that log[] reads only new lines in the file; it doesn't re-check lines it has already caught, even if they change. The zabbix_agentd.log file doesn't report any problem with access to the file, nor with the regexp construction (it did report "unsupported" for the log[] key when I had something set up wrong). I use the debug logging level for the agent, and I haven't found any interesting info about the problem there. I have no idea what I might be doing wrong or what I don't know about how Zabbix performs these checks. I see two workarounds: appending lines to the file instead of replacing it, or creating new files and checking them with logrt[], but neither satisfies my requirements. Any help is greatly appreciated. Of course I will provide additional information if requested; for now I don't know what else might be useful.
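    A hedged sanity check outside Zabbix, to separate regexp problems from item-update problems: run the same pattern over the downloaded file on the Zabbix server and confirm it extracts the value.

        # GNU grep with PCRE support; \K discards the part matched before it
        grep -oP 'C.*\s\K[0-9]+\.[0-9]$' /path/to/file.txt   # should print 28.2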

    Read the article

  • When I load Ubuntu I get the logon screen, but even with the correct password I can't log on (I have logged in successfully several times before)

    - by mybox
    I have been using Ubuntu 12.04 for a few months now, but have come across a problem that I can't get past. I am stuck at the logon screen: I enter my password, a black screen flashes up, and the logon prompt reappears. I tried logging in from a terminal and that actually works, but not the main logon screen. I can't get to my desktop because the logon prompt is always there. When I put in a wrong password I get an invalid password message; with the correct password the logon screen just reappears. Please help.
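    A hedged sketch of one common cause of the 12.04 login loop: a root-owned .Xauthority in your home directory (often left behind by a misused sudo), which kills the X session right after authentication succeeds. From a text console (Ctrl+Alt+F1), where logging in works:

        ls -l ~/.Xauthority                      # check whether root owns the file
        sudo chown $USER:$USER ~/.Xauthority     # hand it back to your own user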

    Read the article

  • Strategy for clients to retrieve real-time log from HTTP server

    - by Jerry Dodge
    I have an HTTP Server Service application which has its own logging mechanism. It's written in Delphi. I would like to provide a way for multiple clients to connect to this service and get a real-time update of the log. The log in the service moves rather fast; there are a lot of things to log, and there may be up to 50 messages within one second at times. The existing log is not saved; it's only kept in the memory of the server service, from which I need to distribute it to any client that wants it. Once all clients have a log message, it should be deleted. I intend to use HTTP to "ask" the server for the log and respond with an XML packet. The connections are not keep-alive. The only problem is that the server should send each client only the log records it needs, not everything. I have no way of pushing the log to the clients in real time, so each client needs to repeatedly ask the server for the latest log records. This HTTP server is very lightweight, and there is no session management; there isn't even any type of authentication. The only way I see is for a client to register itself on the server, and whenever a log record is issued on the server, it creates a copy of the record for each client, where each client has a log queue (string list). However, suppose there are 100 clients connected and expecting to receive this log. That means the server must create 100 copies of each record, add them to the end of each client's log queue, and wait for the client to request them. At that point, when the server replies with the XML log, it should flush (delete) whatever's in the queue. I'm worried, however, that this could cause memory issues. Each client's log queue might get 100 log messages before the client requests the latest logs. How should I go about doing this in the fastest way possible without hindering the performance of the server? I'm trying to avoid having to create a copy of each record for each client.

    Read the article

  • Minimizing SQL transaction log file size on developer box running simple recovery model

    - by Anders Rask
    We have a lot of SQL Servers in our development environment where we never take backups of the databases (TFS for the code is enough). The (SharePoint) databases are all set to the simple recovery model, but the log files, especially for the SharePoint configuration database, are growing quite large and filling up the data drive on the SQL Server. Since these log files are never used for anything, I would like advice on how best to minimize their size, or even disable them if possible. I'm not completely sure why the log files grow so large even under simple recovery; I checked for long-running transactions (DBCC OPENTRAN) but found none. I guess the reason the log files are not being truncated is that we don't take any backups, and hence checkpoints aren't reached. The autogrowth for the log files is set to grow by 10%, restricted to 2 GB, so I guess that is why the checkpoint (70%) isn't reached here either. What would be the best strategy to keep the log files small (best case 0) without sacrificing performance (e.g. VLF fragmentation)?
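    A hedged diagnostic and cleanup sketch (the database name is a placeholder; file id 2 is assumed to be the log file): first ask SQL Server why the log cannot be truncated, then shrink it once the reported wait reason is NOTHING.

        # Why is the log not truncating? (LOG_BACKUP, ACTIVE_TRANSACTION, etc.)
        sqlcmd -Q "SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'SharePoint_Config'"
        # Shrink the log file down to ~1 GB once log_reuse_wait_desc is NOTHING
        sqlcmd -Q "USE [SharePoint_Config]; DBCC SHRINKFILE (2, 1024);"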

    Read the article

  • Data, Log and Temp file placement

    - by jchang
    First, especially for all the people with SAN storage: drive letters are of no consequence. What matters is the actual physical disk layout. Forget capacity; pay attention to the number of spindles supporting each RAID group. If the RAID group is shared with other applications, make sure the SLA guarantees read and write latency. One very large company conducted a stress test in the QA environment. The SAN admin carved the LUNs from the same pool of disks as production, but thought he had a really...(read more)

    Read the article
