Search Results

Search found 31501 results on 1261 pages for 'event log'.

  • uWSGI log file...permission denied to read file

    - by bkev
    I have a server running Django/Nginx/uWSGI with uWSGI in emperor mode, and the error log for it (the vassal-level error log, not the emperor-level log) has a continual permissions error every time it spawns a new worker, like so:

      Tue Jun 26 19:34:55 2012 - Respawned uWSGI worker 2 (new pid: 9334)
      Error opening file for reading: Permission denied

    Problem is, I don't know what file it's having trouble opening; it's not the log file, obviously, since I'm looking at it and it's writing to that without issue. Any way to find out? I'm running the apt-get version of uWSGI 1.0.3-debian through Upstart on Ubuntu 12.04. The site is working successfully, aside from what seems like a memory leak... hence my looking at the log file.

    My Upstart conf file:

      description "uWSGI"
      start on runlevel [2345]
      stop on runlevel [06]
      respawn

      env UWSGI=/usr/bin/uwsgi
      env LOGTO=/var/log/uwsgi/emperor.log

      exec $UWSGI \
          --master \
          --emperor /etc/uwsgi/vassals \
          --die-on-term \
          --auto-procname \
          --no-orphans \
          --logto $LOGTO \
          --logdate

    My vassal ini file:

      [uwsgi]
      # Variables
      base = /srv/env/mysiteenv
      # Generic Config
      uid = uwsgi
      gid = uwsgi
      socket = 127.0.0.1:5050
      master = true
      processes = 2
      reload-on-as = 128
      harakiri = 60
      harakiri-verbose = true
      auto-procname = true
      plugins = http,python
      cache = 2000
      home = %(base)
      pythonpath = %(base)/mysite
      module = wsgi
      logto = /srv/log/mysite/uwsgi_error.log
      logdate = true
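
    Not part of the original question, but one way to pin down the offending file is to trace the emperor and the workers it forks and watch for the failing open() call. A sketch, assuming the emperor process can be matched by name:

      # Follow forks (-f) so newly respawned workers are traced too;
      # EACCES marks the open() that is being denied
      sudo strace -f -e trace=open -p "$(pgrep -o -f 'uwsgi.*emperor')" 2>&1 | grep EACCES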

  • cannot log into mysql locally

    - by Lostsoul
    When I try to log into mysql locally using the command mysql -u root -p I get this error:

      ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

    I can access the server remotely (not as root) and my web pages are using mysql fine, but locally I cannot log on (which I need because I need to create some users). The only change I made was to attach another drive to the server and move the SQL data there. Here's my.cnf:

      [mysqld]
      datadir=/media/ephemeral0/data/mysql
      socket=/media/ephemeral0/data/mysql/mysql.sock
      user=mysql
      # Disabling symbolic-links is recommended to prevent assorted security risks
      symbolic-links=0
      # adding more config
      skip-external-locking
      long_query_time=1
      slow_query_log
      slow_query_log_file=/var/log/log-slow-queries.log
      log-bin=mysql-bin
      server-id= 1

      [mysqld_safe]
      log-error=/var/log/mysqld.log
      pid-file=/var/run/mysqld/mysqld.pid
      myisam_recover_options

    I read I need to edit the socket info in my.cnf to make sure it points to the right socket file. I double-checked and the file exists (although it starts with an "s" when I do ls -l: "srwxrwxrwx 1 mysql mysql 0 Jun 21 03:43 mysql.sock"). I'm not really sure how to resolve this. I have tried to reboot and ran yum update to make sure I was running the latest packages. Please help!
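
    A likely direction (an assumption from the symptoms, not a confirmed answer): the socket line under [mysqld] only tells the server where to create the socket; the local mysql client still looks in the default /var/lib/mysql location. Either pass the socket explicitly or persist it for all local clients:

      # One-off: point the client at the relocated socket
      mysql -u root -p --socket=/media/ephemeral0/data/mysql/mysql.sock

      # Or add this to my.cnf so every local client picks it up
      [client]
      socket=/media/ephemeral0/data/mysql/mysql.sock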

  • How do I permanently delete /var/log/lastlog?

    - by GregB
    My /var/log/lastlog file is huge. I know it's really only a few kilobytes, but tar isn't smart enough to know that, so when I image a virtual machine, my restore fails because it thinks I'm trying to load more data than I have capacity on my disk. I want to delete /var/log/lastlog and stop any and all logging to the file. I'm aware of the security implications. This logging needs to stop to preserve my backup strategy. I've made a change to /etc/pam.d/login which I was told would disable logging to /var/log/lastlog, but it does not appear to work, as /var/log/lastlog keeps growing:

      # Prints the last login info upon succesful login
      # (Replaces the `LASTLOG_ENAB' option from login.defs)
      #session optional pam_lastlog.so

    Any ideas?

    EDIT: For anyone interested, I use Centrify Express to authenticate my users via LDAP. Centrify Express is "free", but one of the drawbacks is that I can't manage user UIDs via LDAP, so they are given a dynamic UID when they log in to a server. Centrify picks some crazy high UID values (so they don't conflict with local users on the server, presumably). /var/log/lastlog is indexed by UID and grows to accommodate the largest UID on the system. This means that when a Centrify user logs in, they get a UID in the upper end of the UID range, which causes lastlog to allocate an obscene amount of space, according to the file system:

      ~$ ll /var/log/lastlog
      -rw-rw-r-- 1 root root 291487675780 Apr 10 16:37 /var/log/lastlog
      ~$ du -h /var/log/lastlog
      20K /var/log/lastlog

    More info: sparse files.
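
    A sketch of an alternative that would keep lastlog intact (assuming GNU tar is doing the imaging): lastlog is a sparse file, so telling tar to preserve the holes avoids the inflated restore in the first place.

      # --sparse records holes as holes instead of expanding them to zeros
      tar --sparse -czf image.tar.gz /var/log/lastlog ...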

  • Selective Suppression of Log Messages

    - by Duncan Mills
    Those of you who regularly read this blog will probably have noticed that I have a strange predilection for logging-related topics, so why break the habit? Here's an issue which came up recently that I thought was a good one to mention in a brief post. The scenario really applies to production applications where you are seeing entries in the log files which are harmless: you know why they are there and are happy to ignore them, but at the same time you either can't or don't want to risk changing the deployed code to "fix" it to remove the underlying cause. (I'm not judging here.) The good news is that the logging mechanism provides a filtering capability which can be applied to a particular logger to selectively "let a message through" or suppress it. This is the technique outlined below.

    First, create your filter. You create a logging filter by implementing the java.util.logging.Filter interface. This is a very simple interface that basically defines one method, isLoggable(), which simply has to return a boolean value. A return of false will suppress that particular log message and not pass it on to the handler. The method is passed the log record of type java.util.logging.LogRecord, which provides you with access to everything you need to decide whether to let this log message pass through or not, for example getLoggerName(), getMessage() and so on. So an example implementation might look like this if we wanted to filter out all the log messages that start with the string "DEBUG" when the logging level is not set to FINEST:

      public class MyLoggingFilter implements Filter {
          public boolean isLoggable(LogRecord record) {
              if (!record.getLevel().equals(Level.FINEST) && record.getMessage().startsWith("DEBUG")) {
                  return false;
              }
              return true;
          }
      }

    Deploying: This code needs to be put into a JAR and added to your WebLogic classpath. It's too late to load it as part of an application, so instead you need to put the JAR file into the WebLogic classpath using a mechanism such as the PRE_CLASSPATH setting in your domain setDomainEnv script. Then restart WLS, of course.

    Using: The final piece is to actually assign the filter. The simplest way to do this is to add the filter attribute to the logger definition in the logging.xml file. For example, you may choose to define a logger for a specific class that is raising these messages and only apply the filter in that case:

      <logger name="some.vendor.adf.ClassICantChange"
              filter="oracle.demo.MyLoggingFilter"/>

    You can also apply the filter using WLST if you want a more script-y solution.
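
    For illustration, the setDomainEnv change might look like this (the JAR path here is hypothetical):

      # setDomainEnv.sh - prepend the filter JAR to the server classpath
      PRE_CLASSPATH=/u01/app/lib/my-logging-filter.jar
      export PRE_CLASSPATH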

  • How can I display and log PHP errors on IIS7?

    - by Ben
    We're running PHP 5.2.5 on an IIS 7 server and we're having problems making PHP errors visible... At the moment, whenever we have a PHP error the server sends back a 500 error with the message "The page cannot be displayed because an internal server error has occurred." This might be a good setting for production websites, but it's rather annoying on a development server... ;-) I have tried configuring php.ini to display errors on the screen as well as log them to a specific folder, but it seems that the server catches all errors first and prevents any handling by PHP... Does someone know what we have to do to make IIS display PHP errors on screen? Any links, tips or tutorials on the subject would be appreciated!
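
    A commonly cited fix (a sketch to verify against your setup): besides display_errors = On in php.ini, IIS 7 has to be told to pass through responses that already carry an error status instead of substituting its own page. In the site's web.config:

      <configuration>
        <system.webServer>
          <!-- Let the PHP-generated error body through instead of the IIS 500 page -->
          <httpErrors existingResponse="PassThrough" />
        </system.webServer>
      </configuration>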

  • How to rotate log files when running pyapns and twistd

    - by Henrik
    I use pyapns (an iPhone push server) which in turn uses Twisted (the twistd daemon). The twistd daemon produces twistd.log files. It rotates them to twistd.log.1, twistd.log.2 and so on when twistd.log reaches 1MB, but it doesn't use logrotate, so I guess the rotation is built in. The problem is that this continues forever and old log files are never deleted, which eventually fills my disk. I've tried to use logrotate or similar to rotate the logs, but then I would need to run logrotate very, very often, since I need to rotate BEFORE twistd.log reaches 1MB. This could happen within a second for all I know, depending on how much log is produced. So how can I rotate the logs without hacking the pyapns/twistd scripts?
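
    Since twistd already rotates the files itself, one low-touch sketch (the log directory and retention period are assumptions) is to leave rotation alone and merely prune the numbered backups on a schedule:

      # crontab entry: hourly, delete rotated twistd logs older than 7 days
      0 * * * * find /var/log/pyapns -name 'twistd.log.*' -mtime +7 -delete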

  • Is there a convenient way to manually copy the Log Shipping *.trn files from one SQL 2008 server to another?

    - by Rick
    We have a remote SQL 2008 server (ServerB) that needs to keep a warm (a 15-minute interval is OK) copy of production data (ServerA). ServerA is also SQL 2008. Log Shipping looks like it will do the job. We can only get to the destination ServerB with remote desktop. Is there a way to set this up when we can't get to both servers from one Management Studio? We want to be able to temporarily (until a VPN is set up between our network and the ServerB network) manually export a small .trn file, copy it via remote desktop to ServerB and then manually import the transactions from the .trn file. My supervisor says he saw a post saying this is possible. We were just trying to avoid doing a full database backup and copying that every time. Thanks in advance for any suggestions on this.
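
    A manual version of the same idea, sketched with assumed database and file names (this is hand-rolled log shipping, not the built-in feature): restore a one-time full backup on ServerB WITH NORECOVERY, then repeatedly back up, copy, and restore the log.

      -- On ServerA (database must use the FULL recovery model)
      BACKUP LOG ProductionDB TO DISK = 'C:\Backups\ProductionDB_0001.trn';

      -- On ServerB, after copying the file across via remote desktop:
      -- NORECOVERY keeps the database restoring so later .trn files can still be applied
      RESTORE LOG ProductionDB FROM DISK = 'C:\Restore\ProductionDB_0001.trn' WITH NORECOVERY;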

  • SQL Server 2008 Log-shipping: Without a UNC drive: how?

    - by samsmith
    My real question here is: is there a tool I can use? (E.g. I have a lot to do, and would prefer not to script it all up myself!) Is anyone using the Redgate one? (Hmmm, they had a tool for this, but I do not see it on their web site now...) I have a primary web app at Rackspace and am setting up a backup copy of the app in another data center. I want to use SQL log replication to sync the db, using SQL Server Web Edition. TIA for suggestions and insight!

  • Nothing is written in php5-fpm.log

    - by jaypabs
    I have two servers, one running Ubuntu 12.04 and one running Ubuntu 14.04. On the new Ubuntu 14.04 server I enabled the php-fpm log file in /etc/php5/fpm/php-fpm.conf:

      error_log = /var/log/php5-fpm.log

    I noticed that most of what gets logged on Ubuntu 12.04 is not written on 14.04. For example, if I restart php5-fpm on Ubuntu 12.04, a restart log entry is written; this does not happen on 14.04. Other log lines I miss on 14.04 are the following:

      [23-Aug-2014 16:23:03] NOTICE: [pool web42] child 118098 exited with code 0 after 12983.480191 seconds from start
      [23-Aug-2014 16:23:03] NOTICE: [pool web42] child 147653 started
      [23-Aug-2014 17:27:31] WARNING: [pool web8] child 76743, script '/var/www/mysite.com/web/wp-comments-post.php' (request: "POST /wp-comments-post.php") executing too slow (12.923022 sec), logging

    I really want this kind of log so I know how long a slow script has executed. Does anyone know if there are other settings in Ubuntu 14.04 that I need to change in addition to /etc/php5/fpm/php-fpm.conf?
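
    Two settings worth checking (a sketch; the pool file name is an assumption): the "executing too slow" warnings are only produced when a slowlog is configured for the pool, and log_level controls whether NOTICE lines are written at all.

      ; /etc/php5/fpm/php-fpm.conf
      log_level = notice

      ; /etc/php5/fpm/pool.d/www.conf
      slowlog = /var/log/php5-fpm.slow.log
      request_slowlog_timeout = 10s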

  • Windows log file monitor that supports custom events (eg. sending an email when it detects the string "ERROR")

    - by ilitirit
    I know this question has been asked several times before but I can't seem to find a solution for my requirements. I currently use BareTail, which works wonderfully except that it doesn't support custom events besides line highlighting. I'm also trying TailForWin32. It has a SMTP plugin but it seems to be in beta status, and the highlighting seems limited. It also doesn't handle rolling log files very well (a blocking dialog box pops up, whereas BareTail just rolls over naturally). All I really need is something like BareTail that supports custom events. First prize would be a tool with a plugin-based architecture so I can use my own messaging plugins, but anything that supports SMTP mail would be fine as well.
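
    In the absence of a ready-made tool, a rough PowerShell 3.0+ sketch of the idea (addresses and server names are placeholders):

      # Tail the log and mail each new line containing ERROR
      Get-Content app.log -Wait -Tail 0 |
        Where-Object { $_ -match 'ERROR' } |
        ForEach-Object {
          Send-MailMessage -To 'ops@example.com' -From 'monitor@example.com' `
            -Subject 'Log ERROR detected' -Body $_ -SmtpServer 'smtp.example.com'
        }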

  • SQL SERVER – Introduction to Extended Events – Finding Long Running Queries

    - by pinaldave
    The job of an SQL consultant is always interesting. The month before, I was busy doing query optimization and performance tuning projects for our clients, and this month I am busy delivering my Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning course. I recently read a white paper about Extended Events by SQL Server MVP Jonathan Kehayias. You can read the white paper here: Using SQL Server 2008 Extended Events. I also read another appealing chapter by Jonathan in the book Professional SQL Server 2008 Internals and Troubleshooting. After reading these excellent notes by Jonathan, I decided to upgrade my course and include Extended Events as one of the modules. This week, I delivered the Extended Events session twice, and attendees really liked it. They really think Extended Events is one of the most powerful tools available. Extended Events can do many things. I suggest that you read the white paper I mentioned to learn more about this tool. Instead of writing a long theory, I am going to write a very quick script for Extended Events. This event session captures all the longest running queries ever since the event session was started. One of the many advantages of Extended Events is that it can be configured very easily, and it is a robust method to collect the necessary information in terms of troubleshooting. There are many targets where you can store the information, including the XML file target, which I really like. In the following event session, we write the details of the event to two locations: 1) the ring buffer; and 2) an XML file. It is not necessary to write to both places; either of the two will do.

      -- Extended Event for finding *long running query*
      IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='LongRunningQuery')
          DROP EVENT SESSION LongRunningQuery ON SERVER
      GO
      -- Create Event
      CREATE EVENT SESSION LongRunningQuery ON SERVER
      -- Add event to capture event
      ADD EVENT sqlserver.sql_statement_completed
      (
          -- Add action - event property
          ACTION (sqlserver.sql_text, sqlserver.tsql_stack)
          -- Predicate - duration over 1000 milliseconds
          WHERE sqlserver.sql_statement_completed.duration > 1000
      )
      -- Add target for capturing the data - XML File
      ADD TARGET package0.asynchronous_file_target(
          SET filename='c:\LongRunningQuery.xet', metadatafile='c:\LongRunningQuery.xem'),
      -- Add target for capturing the data - Ring Buffer
      ADD TARGET package0.ring_buffer (SET max_memory = 4096)
      WITH (max_dispatch_latency = 1 seconds)
      GO
      -- Enable Event
      ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=START
      GO
      -- Run long query (longer than 1000 ms)
      SELECT * FROM AdventureWorks.Sales.SalesOrderDetail
      ORDER BY UnitPriceDiscount DESC
      GO
      -- Stop the event
      ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=STOP
      GO
      -- Read the data from Ring Buffer
      SELECT CAST(dt.target_data AS XML) AS xmlLockData
      FROM sys.dm_xe_session_targets dt
      JOIN sys.dm_xe_sessions ds ON ds.Address = dt.event_session_address
      JOIN sys.server_event_sessions ss ON ds.Name = ss.Name
      WHERE dt.target_name = 'ring_buffer' AND ds.Name = 'LongRunningQuery'
      GO
      -- Read the data from XML File
      SELECT
          event_data_XML.value('(event/data[1])[1]','VARCHAR(100)') AS Database_ID,
          event_data_XML.value('(event/data[2])[1]','INT') AS OBJECT_ID,
          event_data_XML.value('(event/data[3])[1]','INT') AS object_type,
          event_data_XML.value('(event/data[4])[1]','INT') AS cpu,
          event_data_XML.value('(event/data[5])[1]','INT') AS duration,
          event_data_XML.value('(event/data[6])[1]','INT') AS reads,
          event_data_XML.value('(event/data[7])[1]','INT') AS writes,
          event_data_XML.value('(event/action[1])[1]','VARCHAR(512)') AS sql_text,
          event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS tsql_stack,
          CAST(event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS XML).value('(frame/@handle)[1]','VARCHAR(50)') AS handle
      FROM (
          SELECT CAST(event_data AS XML) event_data_XML, *
          FROM sys.fn_xe_file_target_read_file ('c:\LongRunningQuery*.xet', 'c:\LongRunningQuery*.xem', NULL, NULL)) T
      GO
      -- Clean up. Drop the event
      DROP EVENT SESSION LongRunningQuery ON SERVER
      GO

    Just run the script above; afterwards, the final SELECT returns a result set containing the queries that ran for over 1000 ms. In our example, I used the XML file target, which does not reset when the SQL service or the computer restarts (if you use the ring buffer DMV, it resets when the SQL service restarts). This event session can be very helpful for troubleshooting. Let me know if you want me to write more about Extended Events. I am totally fascinated with this feature, so I’m planning to acquire more knowledge about it so I can determine its other usages.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

  • How to specify the Event Log Source where ASP.NET writes unhandled exceptions?

    - by Knagis
    By default, ASP.NET writes any unhandled exception to the default "ASP.NET X.Y.Z.0" event log source. Is it possible to specify, via configuration, that the events and exceptions for a particular application be logged to a specific event log source? The reason is that I would want all issues directly related to my application to be stored under a separate event log source that can then be filtered on.

  • How can I dynamically inject code to event handlers in Delphi?

    - by mjustin
    For debugging / performance tests I would like to dynamically add logging code to all event handlers of components of a given type. For example, for all dataset components located on a TDataModule, I would like to add some code to the BeforeOpen and AfterOpen events to store the start and end time, and send a line to a logger with the elapsed time in the AfterOpen event. I would prefer to do this dynamically (no component subclassing), so that I can add this to all existing datamodules and forms with minimal effort, only when needed. Iterating all components and filtering by their type is easy, but for the components which already have event handlers assigned, I need a way to store the existing event handlers and assign a new, modified event handler which first does the logging and then invokes the original code. Is there a design pattern which can be applied, or even some example code which shows how to implement this in Delphi?
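
    Not from the original thread, but a sketch of one way to do the wrap-and-delegate part (same-unit visibility assumed; error handling omitted). Each hooked dataset gets a small helper object that remembers the original handlers and times the open:

      uses
        Windows, SysUtils, Classes, DB, DateUtils;

      type
        TOpenLogger = class(TComponent)
        public
          FStart: TDateTime;
          FOldBeforeOpen: TDataSetNotifyEvent;
          FOldAfterOpen: TDataSetNotifyEvent;
          procedure LoggedBeforeOpen(DataSet: TDataSet);
          procedure LoggedAfterOpen(DataSet: TDataSet);
        end;

      procedure TOpenLogger.LoggedBeforeOpen(DataSet: TDataSet);
      begin
        FStart := Now;
        if Assigned(FOldBeforeOpen) then
          FOldBeforeOpen(DataSet); // run the original handler, if any
      end;

      procedure TOpenLogger.LoggedAfterOpen(DataSet: TDataSet);
      begin
        // Swap OutputDebugString for your logging framework of choice
        OutputDebugString(PChar(Format('%s opened in %d ms',
          [DataSet.Name, MilliSecondsBetween(Now, FStart)])));
        if Assigned(FOldAfterOpen) then
          FOldAfterOpen(DataSet);
      end;

      // Hook every TDataSet owned by a given datamodule or form
      procedure HookDataSets(AOwner: TComponent);
      var
        i: Integer;
        DS: TDataSet;
        Logger: TOpenLogger;
      begin
        for i := 0 to AOwner.ComponentCount - 1 do
          if AOwner.Components[i] is TDataSet then
          begin
            DS := TDataSet(AOwner.Components[i]);
            Logger := TOpenLogger.Create(AOwner); // freed together with the owner
            Logger.FOldBeforeOpen := DS.BeforeOpen; // keep the originals
            Logger.FOldAfterOpen := DS.AfterOpen;
            DS.BeforeOpen := Logger.LoggedBeforeOpen;
            DS.AfterOpen := Logger.LoggedAfterOpen;
          end;
      end;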

  • What server log file (with CentOS 5) must I check to locate specific IP activity?

    - by makmour
    I lease a dedicated server running CentOS 5. Just recently I set up a blog website that grabs feeds from other news websites and presents them on my blog. Some days after, I see high server loads that can't be coming just from the cron jobs that I run every now and then to grab feeds from other websites. That's why I'm trying to look through some log files in order to get a better look at the whole problem. These last days I noticed (from the StatCounter service) the same IP address visiting my blog website many times per day, so I want to find out what it is trying to do. I tried looking in all the /var/log log files and httpd too, but no luck. Is there any other log file I should open, or any other procedure to track this IP's activity on the server?
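
    A quick sketch of the usual first step (the IP below is a placeholder): per-request records live in Apache's access log on CentOS 5, not in the error logs.

      # What has this IP been requesting, and how often?
      grep '203.0.113.7' /var/log/httpd/access_log | awk '{print $7}' | sort | uniq -c | sort -rn | head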

  • When designing an event, is it a good idea to prevent listeners from being added twice?

    - by Matt
    I am creating an event-based API where a user can subscribe to an event by adding listener objects (as is common in Java or C#). When the event is raised, all subscribed listeners are invoked with the event information. I initially decided to prevent adding an event listener more than once. If a listener is added that already exists in the listener collection, it is not added again. However, after thinking about it some more, it doesn't seem that most event-based structures actually prevent this. Was my initial instinct wrong? I'm not sure which way to go here. I guess I thought that preventing addition of an existing listener would help to avoid a common programming error. Then again, it could also hide a bug that would lead to code being run multiple times when it shouldn't.
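
    For illustration, the "add once" behaviour described above amounts to backing the subscriptions with a set rather than a list (a sketch; the class and method names are made up):

      import java.util.LinkedHashSet;
      import java.util.Set;
      import java.util.function.Consumer;

      public class EventSource<E> {
          // LinkedHashSet: no duplicates, but invocation order still matches registration order
          private final Set<Consumer<E>> listeners = new LinkedHashSet<>();

          public void addListener(Consumer<E> listener) {
              listeners.add(listener); // silently a no-op if already subscribed
          }

          public void raise(E event) {
              for (Consumer<E> l : listeners) {
                  l.accept(event);
              }
          }
      }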

  • Is it possible to log the first line of the response in apache?

    - by Jeppe Mariager
    We have a Tomcat server where we're trying to log the HTTP version with which the response is sent. We've seen a few times that it seems to be HTTP/0.9, which kills the content (not supported, I guess?). We would like to get some stats on this by using the access log in Apache. However, since the status line isn't prefixed by anything, we cannot use the %{xxx}o logging. Is there a way to get this? An example response:

      HTTP/1.1 503 This application is not currently available
      Server: Apache-Coyote/1.1
      Content-Type: text/html;charset=utf-8
      Content-Length: 1090
      Date: Wed, 12 May 2010 12:53:16 GMT
      Connection: close

    We'd like to catch "HTTP/1.1" (or alternatively the whole line, "HTTP/1.1 503 This application is not currently available"). Is this possible? We do not have access to the application being served, so we need to do this either as a Java filter or in the Tomcat access log - preferably in the access log.

  • Why isn't the pathspec magic :(exclude) excluding the files I specify from git log's output?

    - by Jubobs
    This is a follow-up to "Ignore files in git log -p" and is also related to "Making 'git log' ignore changes for certain paths". I'm using Git 1.9.2. I'm trying to use the pathspec magic :(exclude) to specify that some patches should not be shown in the output of git log -p. However, patches that I want to exclude still show up in the output. Here is a minimal working example that reproduces the situation:

      cd ~/Desktop
      mkdir test_exclude
      cd test_exclude
      git init
      mkdir testdir
      echo "my first cpp file" >testdir/test1.cpp
      echo "my first xml file" >testdir/test2.xml
      git add testdir/
      git commit -m "added two test files"

    Now I want to show all patches in my history except those corresponding to XML files in the testdir folder. Therefore, following VonC's answer, I run

      git log --patch -- . ":(exclude)testdir/*.xml"

    but the patch for my testdir/test2.xml file still shows up in the output:

      commit 37767da1ad4ad5a5c902dfa0c9b95351e8a3b0d9
      Author: xxxxxxxxxxxxxxxxxxxxxxxxx
      Date: Mon Aug 18 12:23:56 2014 +0100

          added two test files

      diff --git a/testdir/test1.cpp b/testdir/test1.cpp
      new file mode 100644
      index 0000000..3a721aa
      --- /dev/null
      +++ b/testdir/test1.cpp
      @@ -0,0 +1 @@
      +my first cpp file

      diff --git a/testdir/test2.xml b/testdir/test2.xml
      new file mode 100644
      index 0000000..8b7ce86
      --- /dev/null
      +++ b/testdir/test2.xml
      @@ -0,0 +1 @@
      +my first xml file

    What am I doing wrong? What should I do to tell git log -p not to show the patch associated with all XML files in my testdir folder?

  • HTG Explains: Why You Shouldn’t Log Into Your Linux System As Root

    - by Chris Hoffman
    On Linux, the root user is equivalent to the Administrator user on Windows. However, while Windows has long had a culture of average users logging in as Administrator, you shouldn’t log in as root on Linux. Microsoft tried to improve Windows security practices with UAC – you shouldn’t log in as root on Linux for the same reason you shouldn’t disable UAC on Windows.

  • An XEvent a Day (23 of 31) – How it Works – Multiple Transaction Log Files

    - by Jonathan Kehayias
    While working on yesterday’s blog post, The Future – fn_dblog() No More? Tracking Transaction Log Activity in Denali, I did a quick Google search to find a specific blog post by Paul Randal to use as a reference, and in the results another blog post, titled Investigating Multiple Transaction Log Files in SQL Server, caught my eye, so I opened it in a new tab in IE and went about finishing the blog post. It probably wouldn’t have gotten my attention if it hadn’t been on the SqlServerPedia...(read more)

  • Log Growing Pains

    Understanding the transaction log seems to be a very difficult concept for most DBAs to grasp. Jason Brimhall brings us a new article that helps to troubleshoot the cause of log growths.

  • Free eBook - Control Your Transaction Log so it Doesn't Control You

    Download your free copy of SQL Server Transaction Log Management and see why understanding how log files work can make all the difference in a crisis.

  • Scanning the Error Log with PowerShell

    - by AllenMWhite
    One of the most important things you can do as a DBA is to keep tabs on the errors reported in the error log, but there's a lot of information there and sometimes it's hard to find the 'good stuff'. You can open the errorlog file directly in a text editor and search for errors but that gets tedious, and string searches generally return just the lines with the error message numbers, and in the error log the real information you want is in the line after that. PowerShell 2.0 introduced a new cmdlet...(read more)
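
    A sketch of that idea in PowerShell (the ERRORLOG path is an assumption; adjust it for your instance):

      # Print each error line together with the detail line that follows it
      $lines = Get-Content 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG'
      for ($i = 0; $i -lt $lines.Count; $i++) {
          if ($lines[$i] -match 'Error: \d+') {
              $lines[$i]        # the error number line
              $lines[$i + 1]    # the real information on the next line
          }
      }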

  • Could not log-in properly but shows no error in joomla

    - by saeha
    I added some fields to the user registration in Joomla, including an image field which is blob type. All data is saving properly and I can also display the image, but I cannot log in properly using an account with an uploaded image in the database, and no error is shown. And when I log in as administrator in the backend, I see that account listed as logged in. Any idea what causes this problem? Please help. Thank you.
