Search Results

Search found 17851 results on 715 pages for 'log aggregation'.


  • Minimizing SQL transaction log file size on developer box running simple recovery model

    - by Anders Rask
    We have a lot of SQL servers in our development environment where we never take backups of the databases (TFS for the code is enough). The (SharePoint) databases are all set to the simple recovery model, but the log files, especially for the SharePoint configuration database, are growing quite large and filling up the data drive on the SQL server. Since these log files are never used for anything, I would like advice on how best to minimize their size - or even disable them, if possible. I'm not completely sure why the log files grow so large even under simple recovery (I checked for long-running transactions with DBCC OPENTRAN but found none). I guess the reason the log files are not being truncated is that we don't take any backups, and hence checkpoints aren't reached. Autogrowth for the log files is set to grow by 10%, restricted to 2 GB, so I guess that is why the checkpoint threshold (70%) isn't reached here either. What would be the best strategy to keep the log files small (best case 0) without sacrificing performance (e.g. VLF fragmentation)?
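    Under the simple recovery model the log is truncated at checkpoints, not by backups, so a one-off shrink after a manual checkpoint is usually all that is needed. A minimal T-SQL sketch (the database and logical file names here are hypothetical; look the real ones up first):

        USE WSS_Config;
        -- find the logical name and current size of the log file
        SELECT name, size * 8 / 1024 AS size_mb
        FROM sys.database_files
        WHERE type_desc = 'LOG';
        -- under SIMPLE recovery, a checkpoint marks the inactive log as reusable
        CHECKPOINT;
        -- shrink to a sensible fixed size (128 MB here) rather than to zero,
        -- so the file does not immediately have to autogrow again
        DBCC SHRINKFILE (WSS_Config_log, 128);

    Leaving the file at a fixed modest size rather than "best case 0" also keeps the VLF count low, which addresses the fragmentation concern.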

    Read the article

  • Data, Log and Temp file placement

    - by jchang
    First, especially for all the people with SAN storage: drive letters are of no consequence. What matters is the actual physical disk layout. Forget capacity; pay attention to the number of spindles supporting each RAID group. If the RAID group is shared with another application, make sure the SLA guarantees read and write latency. One very large company conducted a stress test in the QA environment. The SAN admin carved the LUNs from the same pool of disks as production, but thought he had a really...(read more)

    Read the article

  • Which logfile(s) log(s) errors reading a CD?

    - by Robert Vila
    I insert a CD-RW, maybe blank (I really don't know), and after a little movement inside the reader, the CD is ejected without any message at all. I would like to know what is going on and why it is ejected. How can I find that out in Natty? The CD reader is working OK, because I can read other CDs. It only gave me problems writing from Natty a few days ago, but with MacOS there was no problem. Thank you. Edit: Maybe there is no error, but then how can I know what is on the CD, if anything?
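    Optical-drive errors generally land in the kernel ring buffer rather than in a dedicated log, so a quick check looks like this (the device name sr0 is an assumption; yours may differ):

        dmesg | grep -iE 'sr0|cdrom' | tail -n 20   # recent kernel messages about the drive
        tail -f /var/log/kern.log                   # or watch live while inserting the disc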

    Read the article

  • Creating an entity as an aggregation

    - by Jamie Dixon
    I recently asked about how to separate entities from their behaviour, and the main answer linked to this article: http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/ The ultimate concept written about there is OBJECT AS A PURE AGGREGATION. I'm wondering how I might go about creating game entities as pure aggregation using C#. I've not quite grasped how this might work yet. (Perhaps the entity is an array of objects implementing a certain interface or base type?) My current thinking still involves having a concrete class for each entity type that then implements the relevant interfaces (IMoveable, ICollectable, ISpeakable, etc.). How can I go about creating an entity purely as an aggregation, without having any concrete type for that entity?
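    For illustration, a minimal C# sketch of one common shape this takes (all names here are invented): the entity is nothing but a keyed bag of components, and behaviour lives in whatever systems query those components.

        // Sketch only: an entity as a pure aggregation of components.
        using System;
        using System.Collections.Generic;

        interface IComponent { }

        class Position : IComponent { public float X, Y; }
        class Velocity : IComponent { public float Dx, Dy; }

        class Entity
        {
            private readonly Dictionary<Type, IComponent> components =
                new Dictionary<Type, IComponent>();

            public void Add<T>(T component) where T : IComponent
            {
                components[typeof(T)] = component;
            }

            public T Get<T>() where T : class, IComponent
            {
                IComponent c;
                return components.TryGetValue(typeof(T), out c) ? (T)c : null;
            }

            public bool Has<T>() where T : IComponent
            {
                return components.ContainsKey(typeof(T));
            }
        }

        class Demo
        {
            static void Main()
            {
                // "Player" is not a class; it is whatever we choose to aggregate.
                var player = new Entity();
                player.Add(new Position { X = 0, Y = 0 });
                player.Add(new Velocity { Dx = 1, Dy = 0 });

                if (player.Has<Velocity>())  // a movement "system" in miniature
                {
                    var p = player.Get<Position>();
                    var v = player.Get<Velocity>();
                    p.X += v.Dx;
                    p.Y += v.Dy;
                    Console.WriteLine("Player at (" + p.X + ", " + p.Y + ")");
                }
            }
        }

    Interfaces such as IMoveable then stop being facets of a monolithic class: the component set itself declares what an entity can do.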

    Read the article

  • Logrotate successful, but the original file goes back to its original size

    - by drewrockshard
    Has anyone had any issues with logrotate before that causes a log file to get rotated and then go back to the same size it originally was? Here are my findings:

    Logrotate script:

        /var/log/mylogfile.log {
            rotate 7
            daily
            compress
            olddir /log_archives
            missingok
            notifempty
            copytruncate
        }

    Verbose output of logrotate:

        copying /var/log/mylogfile.log to /log_archives/mylogfile.log.1
        truncating /var/log/mylogfile.log
        compressing log with: /bin/gzip
        removing old log /log_archives/mylogfile.log.8.gz

    Log file after the truncate happens:

        [root@server ~]# ls -lh /var/log/mylogfile.log
        -rw-rw-r-- 1 part1 part1 0 Jan 11 17:32 /var/log/mylogfile.log

    Literally seconds later:

        [root@server ~]# ls -lh /var/log/mylogfile.log
        -rw-rw-r-- 1 part1 part1 3.5G Jan 11 17:32 /var/log/mylogfile.log

    RHEL version:

        [root@server ~]# cat /etc/redhat-release
        Red Hat Enterprise Linux ES release 4 (Nahant Update 4)

    Logrotate version:

        [root@DAA21529WWW370 ~]# rpm -qa | grep logrotate
        logrotate-3.7.1-10.RHEL4

    A few notes: the service can't be restarted on the fly, so that's why I'm using copytruncate. Logs are rotating every night, according to the olddir directory having log files in it from each night.
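    One thing worth checking here (a diagnostic sketch, not a fix): with copytruncate, a writer that keeps its old file offset can leave the file sparse after the truncate, so the apparent size snaps back while the actually allocated space stays small.

        ls -lh /var/log/mylogfile.log   # apparent size (what the question shows)
        du -h  /var/log/mylogfile.log   # blocks actually allocated; much smaller if the file is sparse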

    Read the article

  • InnoDB: Error: log file ./ib_logfile0 is of different size

    - by jack
    I just added the following lines to /etc/mysql/my.cnf after I converted one database to the InnoDB engine:

        innodb_buffer_pool_size = 2560M
        innodb_log_file_size = 256M
        innodb_log_buffer_size = 8M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 16
        innodb_flush_method = O_DIRECT

    But restarting mysqld raised "ERROR 2013 (HY000) at line 2: Lost connection to MySQL server during query", and the MySQL error log shows the following:

        InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes
        InnoDB: than specified in the .cnf file 0 268435456 bytes!
        100118 20:52:52 [ERROR] Plugin 'InnoDB' init function returned error.
        100118 20:52:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        100118 20:52:52 [ERROR] Unknown/unsupported table type: InnoDB
        100118 20:52:52 [ERROR] Aborting

    So I commented out this line:

        # innodb_log_file_size = 256M

    and MySQL restarted successfully. What is the "5242880 bytes" log file mentioned in the MySQL error? This is the first database using the InnoDB engine on this server, so when and where was that log file created? And in this case, how can I enable the innodb_log_file_size directive in my.cnf?

    EDIT: I tried deleting /var/lib/mysql/ib_logfile0 and restarting mysqld, but it still failed. It now shows the following in the error log:

        100118 21:27:11 InnoDB: Log file ./ib_logfile0 did not exist: new to be created
        InnoDB: Setting log file ./ib_logfile0 size to 256 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Progress in MB: 100 200
        InnoDB: Error: log file ./ib_logfile1 is of different size 0 5242880 bytes
        InnoDB: than specified in the .cnf file 0 268435456 bytes!

    Resolution: it works now, after deleting both ib_logfile0 and ib_logfile1 in /var/lib/mysql.
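    For context, 5242880 bytes is 5 MB, the default innodb_log_file_size: InnoDB creates the redo logs at that size the first time it initializes, which is why they exist before any InnoDB table does. The usual resize procedure, sketched for a Debian-style layout (paths and init script are assumptions; back up first):

        mysql -u root -p -e "SET GLOBAL innodb_fast_shutdown = 0"       # force a full clean shutdown
        /etc/init.d/mysql stop
        mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/  # move BOTH files, don't delete yet
        # raise innodb_log_file_size in /etc/mysql/my.cnf, then:
        /etc/init.d/mysql start   # InnoDB recreates both files at the new size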

    Read the article

  • --log-slave-updates is OFF but some updates are still logged to the slave binary log?

    - by quanta
    MySQL version 5.5.14. According to the documentation, by default a slave does not log to its own binary log any updates that are received from a master server. Here is my config on the slave:

        # egrep 'bin|slave' /etc/my.cnf
        relay-log=mysqld-relay-bin
        log-bin = /var/log/mysql/mysql-bin
        binlog-format=MIXED
        sync_binlog = 1
        log-bin-trust-function-creators = 1

        mysql> show global variables like 'log_slave%';
        +-------------------+-------+
        | Variable_name     | Value |
        +-------------------+-------+
        | log_slave_updates | OFF   |
        +-------------------+-------+
        1 row in set (0.01 sec)

        mysql> select @@log_slave_updates;
        +---------------------+
        | @@log_slave_updates |
        +---------------------+
        |                   0 |
        +---------------------+
        1 row in set (0.00 sec)

    But the slave still logs some changes to its binary logs. Look at the file sizes:

        -rw-rw---- 1 mysql mysql  37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256
        -rw-rw---- 1 mysql mysql  25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257
        -rw-rw---- 1 mysql mysql  46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258
        -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259
        -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260

    And here is a sample query from reading these binary files with the mysqlbinlog utility:

        #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0
        SET TIMESTAMP=1333541337/*!*/;
        INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci'))
        /*!*/;
        # at 110324763

    Did I miss something?

    Reply to @RolandoMySQLDBA: "If replication brought this over, then the same query has to be in the relay logs. Please go find the relay log that has the INSERT query with the same TIMESTAMP (1333541337)." - There is no such query with the same TIMESTAMP in the relay logs. "If you cannot find it in the relay logs, then look and see if Infobright is posting the INSERT query. In that instance, the INSERT should be recorded in the binary logs of the Slave." - Looking more deeply into the binary logs, I see that almost all of the queries are CREATE/INSERT/UPDATE/DROP of "temporary" tables, something like this:

        # at 123873315
        #120405  0:42:04 server id 3 end_log_pos 123873618 Query thread_id=395373 exec_time=0 error_code=0
        SET TIMESTAMP=1333561324/*!*/;
        SET @@session.pseudo_thread_id=395373/*!*/;
        CREATE TEMPORARY TABLE `norep_tmpcampaign` (
            `campaignid` INTEGER(11) NOT NULL DEFAULT '0',
            `status` INTEGER(11) NOT NULL DEFAULT '0',
            `updated` DATETIME,
            KEY `campaignid` (`campaignid`)
        ) ENGINE=MEMORY
        /*!*/;
        # at 123873618
        #120405  0:42:04 server id 3 end_log_pos 123873755 Query thread_id=395373 exec_time=0 error_code=0
        SET TIMESTAMP=1333561324/*!*/;
        DROP TABLE IF EXISTS `norep_tmpcampaign1` /* generated by server */
        /*!*/;

    "Temporary" here means that they are dropped after the calculation is done.
    I also tell the slave not to replicate any statement matching the norep_ wildcard pattern: replicate-wild-ignore-table=%.norep_%. But there is one exception table in the binary logs:

        # at 123828094
        #120405  0:37:21 server id 3 end_log_pos 123828495 Query thread_id=395209 exec_time=0 error_code=0
        SET TIMESTAMP=1333561041/*!*/;
        INSERT INTO sessions (SessionId, ApplicationName, Created, Expires, LockDate, LockId, Timeout, Locked, SessionItems, Flags)
        Values('pgv2exo4y4vo4ccz44vwznu0', '/', '2012-04-05 00:37:21', '2012-04-05 00:57:21', '2012-04-05 00:37:21', 0, 20, 0, 'AwAAAP////8IdXNlcm5hbWUGdXNlcmlkCHBlcm1pdGlkAgAAAAQAAAAGAAAAAQABAAEA', 0)
        /*!*/;

    Description:

        mysql> desc reportingdb.sessions;
        +-----------------+------------------+------+-----+---------------------+-------+
        | Field           | Type             | Null | Key | Default             | Extra |
        +-----------------+------------------+------+-----+---------------------+-------+
        | SessionId       | varchar(64)      | NO   | PRI |                     |       |
        | ApplicationName | varchar(255)     | NO   |     |                     |       |
        | Created         | timestamp        | NO   |     | 0000-00-00 00:00:00 |       |
        | Expires         | timestamp        | NO   |     | 0000-00-00 00:00:00 |       |
        | LockDate        | timestamp        | NO   |     | 0000-00-00 00:00:00 |       |
        | LockId          | int(11) unsigned | NO   |     | NULL                |       |
        | Timeout         | int(11) unsigned | NO   |     | NULL                |       |
        | Locked          | bit(1)           | NO   |     | NULL                |       |
        | SessionItems    | varchar(255)     | YES  |     | NULL                |       |
        | Flags           | int(11)          | NO   |     | NULL                |       |
        +-----------------+------------------+------+-----+---------------------+-------+

    I'm sure all these queries are posted by MySQL, not Infobright:

        $ mysql-ib -u root -p
        Enter password:
        Welcome to the MySQL monitor. Commands end with ; or \g.
        Your MySQL connection id is 48971
        Server version: 5.1.40 build number (revision)=IB_4.0.5_r15240_15370(ice) (static)
        Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

        mysql> select * from information_schema.tables where table_name='sessions';
        Empty set (0.02 sec)

    I've been trying some INSERT/UPDATE queries against test tables on the master; they are copied to the relay logs, not to the binary logs on the slave:

        # at 311664029
        #120405  0:15:23 server id 1 end_log_pos 311664006 Query thread_id=10458250 exec_time=0 error_code=0
        use testuser/*!*/;
        SET TIMESTAMP=1333559723/*!*/;
        update users set email2='[email protected]' where id=22
        /*!*/;

    Pay attention to the server id: in the relay logs the server id is the master's (1), and in the binary logs it is the slave's (3 in this case).

    Reply to @RolandoMySQLDBA (Thu Apr 5 10:06:00 ICT 2012): "Run CREATE DATABASE quantatest; on the Master now, please. Tell me if CREATE DATABASE quantatest; showed up in the Slave's Binary Logs." - As I said above, I've been trying some INSERT/UPDATE queries with test tables on the master and they were copied to the relay logs, not the binary logs, and as you can guess, the IO thread copied CREATE DATABASE quantatest to the relay logs, not the binary logs:

        #120405 10:07:25 server id 1 end_log_pos 347573819 Query thread_id=10480775 exec_time=0 error_code=0
        SET TIMESTAMP=1333595245/*!*/;
        /*!\C latin1 *//*!*/;
        SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/;
        create database quantatest
        /*!*/;

    The question should probably change to: why are some update queries still logged to the slave's binary logs although --log-slave-updates is disabled? Where do they come from?
    Here are a few of the most recent entries:

        /*!*/;
        # at 27492197
        #120405 10:12:45 server id 3 end_log_pos 27492370 Query thread_id=410353 exec_time=0 error_code=0
        SET TIMESTAMP=1333595565/*!*/;
        CREATE TEMPORARY TABLE norep_SplitValues (
            value VARCHAR(1000) NOT NULL
        ) ENGINE=MEMORY
        /*!*/;
        # at 27492370
        #120405 10:12:45 server id 3 end_log_pos 27492445 Query thread_id=410353 exec_time=0 error_code=0
        SET TIMESTAMP=1333595565/*!*/;
        BEGIN
        /*!*/;
        # at 27492445
        #120405 10:12:45 server id 3 end_log_pos 27492619 Query thread_id=410353 exec_time=0 error_code=0
        SET TIMESTAMP=1333595565/*!*/;
        INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'119577' COLLATE 'utf8_general_ci'))
        /*!*/;
        # at 27492619
        #120405 10:12:45 server id 3 end_log_pos 27492695 Query thread_id=410353 exec_time=0 error_code=0
        SET TIMESTAMP=1333595565/*!*/;
        COMMIT
        /*!*/;
        # at 27492918
        #120405 10:12:46 server id 3 end_log_pos 27493115 Query thread_id=410353 exec_time=0 error_code=0
        SET TIMESTAMP=1333595566/*!*/;
        SELECT `reportingdb`.`selfserving_get_locationad`(_utf8'3' COLLATE 'utf8_general_ci',_utf8'' COLLATE 'utf8_general_ci')
        /*!*/;
        # at 27493199
        #120405 10:12:46 server id 3 end_log_pos 27493353 Query thread_id=410353 exec_time=0 error_code=0
        SET TIMESTAMP=1333595566/*!*/;
        /*!\C utf8 *//*!*/;
        SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
        DROP TEMPORARY TABLE IF EXISTS `norep_SplitValues` /* generated by server */
        /*!*/;
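    A quick way to quantify where the events in a given binary log originated (a sketch): mysqlbinlog prints the originating server id for every event, so events executed on the slave itself carry its id (3 here), while replicated ones would carry the master's (1).

        mysqlbinlog /var/log/mysql/mysql-bin.001260 | grep -oE 'server id [0-9]+' | sort | uniq -c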

    Read the article

  • Real-time aggregation of files from multiple machines to one

    - by dmitry-kay
    I need a tool which takes a list of machine names and file wildcards, connects to all these machines (over SSH), and monitors changes (appends to the end) in each file matched by the wildcards. New lines in each such file are saved on the local machine to a file with the same name. (This is a real-time log collection task.) I could use ssh + tail -f, of course, but it is not very robust: if the monitoring process dies and then restarts, some data from the remote files may be lost (because tail -f does not save the position at which it finished). I could write this tool myself, but first I'd like to know whether such a tool already exists.
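    For comparison, a sketch of the position-tracking idea (host and path names invented; GNU tail and stat assumed; log rotation not handled): persist the byte offset locally so that a restart resumes exactly where the last run finished.

        HOST=web01; FILE=/var/log/app.log; STATE=./web01.offset
        OFFSET=$(cat "$STATE" 2>/dev/null || echo 0)
        SIZE=$(ssh "$HOST" "stat -c %s '$FILE'")              # snapshot the size first
        ssh "$HOST" "tail -c +$((OFFSET + 1)) '$FILE'" \
            | head -c $((SIZE - OFFSET)) >> ./web01.app.log   # copy only up to the snapshot
        echo "$SIZE" > "$STATE"                               # run periodically from cron or a loop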

    Read the article

  • Unable to log into samba server

    - by Paddington
    I am unable to log into a Samba server (running on Fedora Core 6): it prompts for a username and password when I try to connect to the mapped drives from my Windows 7 machine. I decided to reset the password using the command smbpasswd paddy, and when I list users with pdbedit -L -v I see that the password was updated at the time I made this change. However, I am still unable to log in. The log file /var/log/samba/log.paddy shows:

        [2012/10/11 09:55:54.605923, 1] smbd/service.c:678(make_connection_snum)
        create_connection_server_info failed: NT_STATUS_ACCESS_DENIED
        [2012/10/11 09:55:54.606635, 1] smbd/service.c:678(make_connection_snum)
        create_connection_server_info failed: NT_STATUS_ACCESS_DENIED

    How can I resolve this so that I can log in?
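    A few checks that often narrow down NT_STATUS_ACCESS_DENIED at connection time (a diagnostic sketch, run on the Samba host):

        testparm -s                            # validate smb.conf and print the effective share definitions
        smbclient -L localhost -U paddy        # test authentication locally, taking Windows out of the picture
        pdbedit -L -v | grep -B 2 -A 10 paddy  # inspect the account's flags and password timestamps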

    Read the article

  • How to view multiple log files as one file in Unix/Linux

    - by user42679
    Hi, I was wondering if there is a convenient way in Linux/Unix to read multiple log files as one. More specifically, I would like to view a sequence of log files (app.log, app.log.1, app.log.2, etc.) as one big file using normal Unix tools (vi, less, etc.): when EOF is reached, the tool should automatically move to the beginning of the next file. In my work I have to analyze UAT/prod logs to investigate and solve problems, and having to traverse many log files interrupts my work and causes delays. Any ideas?
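    Two plain-tools sketches that come close (file names follow the example above):

        cat app.log.2 app.log.1 app.log | less   # oldest first, paged as one chronological stream
        less app.log.2 app.log.1 app.log         # keeps file boundaries; :n and :p jump between files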

    Read the article

  • How can I remove old log entries from a log file and archive them somewhere else in Linux?

    - by Mike B
    CentOS 4.x. I apologize in advance if this is not the appropriate place to ask this question; it pertains to a Linux server / IT admin task. I've got a log file on an old CentOS 4.x server, and I want to remove log entries older than a certain date and place them in a new file for archive. Here's an example of the log format:

        2012-06-07 22:32:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:03,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:04,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:32:10,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:12,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:15,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:32:40,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:58,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:33:02,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|

    Essentially, I'm looking for a one-liner that will do the following:

    1. Find any events older than a provided YYYY-MM-DD and remove them from the primary log file.
    2. Take the deleted events from step 1 and put them in a new log file.
    3. (Optional) Compress the new archive log file holding the deleted events.

    I'm aware that there are log rotation tools that do this, but this should just be a one-time task, so I'd prefer not to set that up. Additional notes: if the date part is tricky or too resource-intensive, an alternative would be to just keep the last X number of lines and move the rest. I was originally thinking of something like tail -n 10000 > newfile.txt, but that would mean moving the "good" logs to a new file and then doing a name swap... and then I'd still need to remove the "good" entries from the archive. This particular log file is pretty large (1 GB), so I'd prefer the task to be as resource- and time-efficient as possible. The extra pipes in the log concern me, and I'm not sure whether I'd need extra protection in the commands to keep them from causing problems.
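    A sketch of the awk approach (it works here because YYYY-MM-DD timestamps compare correctly as strings, and the pipes never reach the shell since awk does the field splitting; stop the writer first, or accept losing lines written during the swap):

        CUTOFF='2012-06-01'
        awk -v d="$CUTOFF" '$1 < d { print > "archive.log"; next } { print > "current.log" }' app.log \
            && mv current.log app.log \
            && gzip archive.log   # the optional step 3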

    Read the article

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding transaction log backups in SQL Server 2008. I am currently taking full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday, and then on Monday I also put the 1st transaction log backup in the same folder. On Tuesday, before I take the 2nd transaction log backup, I move the first one from folder1 into folder2, and then I take the 2nd transaction log backup and put it in folder1. Same thing on Wednesday, Thursday and so on. Basically, folder1 always has the latest full backup and the latest transaction log backup, while the other transaction log backups are in folder2. My question is: when SQL Server is about to take, let's say, the 4th (Thursday) transaction log backup, does it look for the previous transaction log backups (1st, 2nd and 3rd) so that the new backup will only include the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? Basically, I am asking because all my transaction log backups seem to be about the same size, and I thought their size would depend on the amount of transactions since the last transaction log backup. Can anyone please explain whether my assumptions are right? Thanks...
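    Worth noting: SQL Server never inspects the previous backup files on disk, so moving them between folders changes nothing. The log chain is tracked by LSNs recorded in the database and in msdb's backup history, and each log backup simply picks up where the log was last truncated. A sketch for checking the chain (the database name is a placeholder):

        -- each log backup starts at the previous one's last_lsn,
        -- regardless of where the earlier .trn files were moved
        SELECT backup_start_date, type, first_lsn, last_lsn
        FROM msdb.dbo.backupset
        WHERE database_name = 'MyDatabase'
        ORDER BY backup_start_date DESC;

    Similar sizes across days therefore just mean similar daily transaction volume.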

    Read the article

  • logrotate deletes all maillogs older than one day

    - by shadyabhi
    I see only two files, maillog and maillog.1, in /var/log. Grepping for maillog in the logrotate.d directory gives three files that mention maillog.

    syslog:

        /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron {
        #/var/log/messages /var/log/secure /var/log/spooler /var/log/boot.log /var/log/cron {
            daily
            sharedscripts
            postrotate
                /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
                /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
            endscript
        }

    syslog-ng:

        /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron /var/log/kern.log /var/log/kern {
            sharedscripts
            postrotate
                /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
                /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
            endscript
        }

    and maillog:

        /var/log/maillog {
            daily
            compress
            # rotate 365
            rotate 14
            sharedscripts
            postrotate
                /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
                /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
            endscript
        }

    I am new to logrotate, so I may be missing something obvious. What could the issue be? The setup was already in place when I started managing the server, so I also don't know why there are three mentions of maillog in logrotate.d.
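    One way to see which of the three overlapping definitions logrotate actually applies to maillog is debug mode, which makes no changes (a sketch). Note also that a stanza with no rotate directive defaults to rotate 0, which removes old logs instead of keeping them - worth checking for whichever definition wins.

        logrotate -d /etc/logrotate.conf 2>&1 | grep -B 2 -A 8 maillog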

    Read the article

  • Hierarchy-based aggregation

    - by Ganapathy Subramaniam
    I have a hierarchy table in SQL Server 2005 which contains employees - managers - department - location - state. Sample data for the hierarchy table:

        ID  Name        ParentID  Type
        1   PA          NULL      0 (group)
        2   Pittsburgh  1         1 (subgroup)
        3   Accounts    2         1
        4   Alex        3         2 (employee)
        5   Robin       3         2
        6   HR          2         1
        7   Robert      6         2

    The second is a fact table which contains employee salary details (ID and Salary). Sample data for the fact table:

        ID  Salary
        4   6000
        5   5000
        7   4000

    Is there a good way to display the hierarchy from the hierarchy table with the aggregated sum of salaries of the employees under each node? The expected result is:

        Name        Salary
        PA          15000  (Pittsburgh + others, if any)
        Pittsburgh  15000  (Accounts + HR)
        Accounts    11000  (Alex + Robin)
        Alex         6000  (direct value)
        Robin        5000
        HR           4000
        Robert       4000

    In my production environment, the hierarchy table may contain 23,000+ rows and the fact table 300,000+ rows, so I thought of passing a group ID at any level to the query to retrieve just its children and their corresponding aggregated values. Any better solution?
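    A sketch of the standard recursive-CTE approach, which SQL Server 2005 supports (the table names Hierarchy and Salaries stand in for the two tables above): first expand every node into the set of its descendants, then join to the fact table and aggregate.

        WITH Descendants AS (
            SELECT ID AS RootID, ID AS NodeID FROM Hierarchy  -- every node reaches itself
            UNION ALL
            SELECT d.RootID, h.ID                             -- ...and everything below it
            FROM Hierarchy h
            JOIN Descendants d ON h.ParentID = d.NodeID
        )
        SELECT r.Name, SUM(s.Salary) AS Salary
        FROM Descendants d
        JOIN Hierarchy r ON r.ID = d.RootID
        JOIN Salaries  s ON s.ID = d.NodeID
        GROUP BY r.ID, r.Name
        ORDER BY r.ID;

    Adding WHERE d.RootID = @GroupID before the GROUP BY restricts the rollup to one subtree, which covers the "any level" requirement for the larger production tables.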

    Read the article

  • Django aggregation query on related one-to-many objects

    - by parxier
    Here is my simplified model:

        class Item(models.Model):
            pass

        class TrackingPoint(models.Model):
            item = models.ForeignKey(Item)
            created = models.DateField()
            data = models.IntegerField()

    In many parts of my application I need to retrieve a set of Items and annotate each item with the data field from the latest TrackingPoint (ordered by the created field) for that item. For example, instance i1 of class Item has 3 TrackingPoints:

        tp1 = TrackingPoint(item=i1, created=date(2010,5,15), data=23)
        tp2 = TrackingPoint(item=i1, created=date(2010,5,14), data=21)
        tp3 = TrackingPoint(item=i1, created=date(2010,5,12), data=120)

    I need a query that retrieves instance i1 annotated with the tp1.data field value, as tp1 is the latest tracking point ordered by created. The query should also return Items that don't have any TrackingPoints at all. If possible, I'd prefer not to use the QuerySet's extra method for this. Here's what I've tried so far... and failed :(

        Item.objects.annotate(
            max_created=Max('trackingpoint__created'),
            data=Avg('trackingpoint__data')
        ).filter(trackingpoint__created=F('max_created'))

    Any ideas?
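    One pattern that avoids extra() is two queries plus a dictionary merge in Python (a sketch; items with no tracking points come back with None):

        from django.db.models import Max

        # latest `created` per item, in one grouped query
        latest = {}
        for row in TrackingPoint.objects.values('item').annotate(max_created=Max('created')):
            latest[row['item']] = row['max_created']

        items = list(Item.objects.all())

        # fetch candidate points, keep only the one matching each item's latest date
        data = {}
        for p in TrackingPoint.objects.filter(created__in=set(latest.values())):
            if latest.get(p.item_id) == p.created:
                data[p.item_id] = p.data

        for item in items:
            item.latest_data = data.get(item.pk)  # None when the item has no points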

    Read the article

  • How to completely disable the Apache access log? [closed]

    - by Miljenko Barbir
    I'm running a WAMP server on Windows Server 2003 with Apache 2.2, and I would like to completely disable writing to the access log. It would be neat if I could do the following, but I'm on Windows:

        CustomLog "|/dev/null" common

    All I get in the error log is "piped log program '/dev/null' failed unexpectedly", although I kinda expected this... Is there a Windows alternative to this, or any other way to just disable writing the access log?
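    For what it's worth, Apache only writes an access log where a CustomLog (or TransferLog) directive is in effect, so the most reliable "Windows alternative" is simply to comment the directive out (an httpd.conf sketch):

        # with no CustomLog in effect, Apache writes no access log at all
        # CustomLog "logs/access.log" common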

    Read the article

  • Odd log entries when starting up PostgreSQL

    - by Shadow
    When restarting pgSQL, I get the following log entries:

        2010-02-10 16:08:05 EST LOG:  received smart shutdown request
        2010-02-10 16:08:05 EST LOG:  autovacuum launcher shutting down
        2010-02-10 16:08:05 EST LOG:  shutting down
        2010-02-10 16:08:05 EST LOG:  database system is shut down
        2010-02-10 16:08:07 EST LOG:  database system was shut down at 2010-02-10 16:08:05 EST
        2010-02-10 16:08:07 EST LOG:  autovacuum launcher started
        2010-02-10 16:08:07 EST LOG:  database system is ready to accept connections
        2010-02-10 16:08:07 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:07 EST LOG:  incomplete startup packet
        2010-02-10 16:08:07 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:07 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:08 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:08 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:08 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:08 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:09 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:09 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:09 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:09 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:10 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:10 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:10 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:10 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:11 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:11 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:11 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:11 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:12 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:12 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:12 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:12 EST LOG:  incomplete startup packet

    My question regarding a potential consequence of this is posted here: http://stackoverflow.com/questions/2238954/mdb2-says-connection-failed-db-logs-say-otherwise , but I didn't realize this was happening when I asked that question, and I figured this [part of the] problem is for SF. Edit: I can connect to the database and manipulate things normally with the psql CLI and the postgres user.

    Read the article

  • Weird stuff in my /var/log/auth.log

    - by xXx
    I just checked the logs on my dedicated server and spotted some weird entries in auth.log:

        Jun 17 22:27:01 mutualab CRON[16249]: pam_unix(cron:session): session opened for user user by (uid=0)
        Jun 17 22:27:01 mutualab CRON[16249]: pam_unix(cron:session): session closed for user user
        Jun 17 22:28:01 mutualab CRON[16253]: pam_unix(cron:session): session opened for user user by (uid=0)
        Jun 17 22:28:01 mutualab CRON[16253]: pam_unix(cron:session): session closed for user alain
        Jun 17 22:29:01 mutualab CRON[16257]: pam_unix(cron:session): session opened for user user by (uid=0)
        Jun 17 22:29:01 mutualab CRON[16257]: pam_unix(cron:session): session closed for user user

    It looks like somebody logs in - and succeeds? - but logs out instantly. I've been getting the same entries for hours now... Do you know what is happening? N.B.: it's a 10.04 Ubuntu server.

    Read the article

  • Refreshing a user's group membership in Active Directory without log-off/log-on

    - by Serge
    So, when a user logs in to their workstation, they receive the SIDs of the groups they are members of, and that set is used for the length of the session, until they log off. Is there a way to refresh the membership SID information without actually having to log off and log on again? I've added myself to a group, but I can't log off without interrupting a running process that requires these permissions. I don't want to have to go through these steps again...

    Read the article

  • New event log nowhere to be found after creating it in PowerShell

    - by Mega Matt
    Through PowerShell, I am attempting to create a new event log and write a test entry to it, but it is not showing up in the Event Viewer. This is the command I'm using to create the new event log:

        new-eventlog -logname TestLog -source TestLog

    And to write a new event to it:

        write-eventlog TestLog -source TestLog -eventid 12345 -message "Test message"

    After running the first command, there is no "TestLog" log in the Event Viewer anywhere, and I would expect it to show up in the Applications and Services Logs section. After running the second command, same result. However, I am seeing a registry key for the log at HKLM\SYSTEM\services\eventlog\TestLog - just not seeing anything in the Event Viewer. So, two questions: when should I see the event log - after it gets created, or after I write the first event to it? And, more importantly, why am I not seeing it at all? I'm using Windows Server 2008 R2 and am logged in and running PowerShell as an administrator. Thanks.
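    Two PowerShell checks that separate "the log doesn't exist" from "Event Viewer isn't showing it" (the log should be registered as soon as New-EventLog returns; Event Viewer often needs a refresh, or a restart of the console, to pick up a newly created log):

        Get-EventLog -List | Where-Object { $_.Log -eq 'TestLog' }   # is the log registered?
        Get-EventLog -LogName TestLog -Newest 5                      # did the test event land in it?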

    Read the article

  • Very large log files in IIS 7.x

    - by Neal
    Hello, I had a site stop working today, and when I RDP'd into the server I saw a warning about low disk space. The first thing I checked was the inetpub folder where the log files are stored, and sure enough it was huge: 40 GB huge. I do clean the files monthly, but what is causing a day's worth of logging on a medium-activity site (www.vbdotnetforums.com) to create 300-500 MB log files? I do have everything being logged so that my SmarterStats software gives me the most info, but are there specific things I should/can turn off that are causing the most growth in these log files? Also, it sure would be nice if Microsoft someday had some sort of log file management, such as deleting log files after they exceed a certain size (total), X days, etc. We all have to come up with some solution to delete the old ones manually. Thanks Neal

    Read the article

  • Bash Script to Compress / Transfer / Remove Log Files

    - by Jason
    I am currently using chronolog to set log file names for Apache with date. They are in the following format: /WEB/LOGS/APACHE_ACCESS_YYYY-MM-DD.log /WEB/LOGS/APACHE_ERROR_YYYY-MM-DD.log I would like to have a script that runs on the first of every month and compresses the log files from the previous month, transfers them to another host (via SCP) and then deletes the compressed file. find . -name '*.log' -mtime +1 -type f I've found several examples like the one above that allow you to select files x days old, but I need all files from the previous month. I am the first to admit my bash scripting skills are weak so would really appreciate any help and guidance.
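    A sketch of the monthly job (GNU date assumed for the previous-month arithmetic; the host, user and script path are placeholders; note it also removes the source logs, one step beyond the wording above):

        #!/bin/bash
        # previous month as YYYY-MM, e.g. 2012-05 (GNU date)
        PREV=$(date -d "$(date +%Y-%m-01) -1 day" +%Y-%m)
        cd /WEB/LOGS || exit 1
        tar czf "apache_$PREV.tar.gz" APACHE_ACCESS_$PREV-*.log APACHE_ERROR_$PREV-*.log \
            && scp "apache_$PREV.tar.gz" archiveuser@archivehost:/ARCHIVE/ \
            && rm -f "apache_$PREV.tar.gz" APACHE_ACCESS_$PREV-*.log APACHE_ERROR_$PREV-*.log
        # cron entry, 02:00 on the 1st of each month:
        # 0 2 1 * * /usr/local/bin/archive_apache_logs.sh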

    Read the article

  • Thoughts on Apache log file sizes?

    - by Nathan Long
    Do you place any limits on the size of Apache log files - access.log and error.log? Specifically, can you give:

    Reasons to limit log file sizes:
    - Disk space
    - Any other?

    Reasons NOT to limit log file sizes:
    - Research into performance issues or security breaches
    - Any other?

    Methods of doing so:
    - Cron job that periodically deletes the file, or the first N lines?
    - Any other?

    Anything you might salvage before deleting:
    - For example, grep out how many times a file was downloaded before deleting the access logs

    I'd like to get the thoughts of experienced sysadmins before I do anything. (Marking as community wiki since this may be a matter of opinion.)
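    On the "salvage before deleting" point, the grep idea is a one-liner (the path is hypothetical):

        grep -c 'GET /downloads/setup.exe' access.log   # request count for one file before rotating the log away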

    Read the article
