Search Results

Search found 20299 results on 812 pages for 'git log'.

  • Hardening a server: disallow password login for sudoers and log unusual IPs

    - by Fabian Zeindl
    Two questions regarding sudo logins on an Ubuntu system (Debian tips welcome as well): Is it possible to require sudoers on my box to log in only with public-key authentication? Is it possible to log which IPs sudoers log in from and check that for "unusual activity", or take action? I'm thinking about temporarily removing sudo rights if sudoers don't log in from whitelisted IPs. Or is that too risky and open to exploitation?
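
    A minimal sshd_config sketch for the first question, assuming the sudo users share a Unix group (here "sudo") and OpenSSH 6.2 or newer for AuthenticationMethods:

        # /etc/ssh/sshd_config (sketch): force key-only logins for the sudo group
        Match Group sudo
            PasswordAuthentication no
            AuthenticationMethods publickey

    For the second question, sshd already records the source IP of every accepted login in /var/log/auth.log ("Accepted publickey for <user> from <ip>"), so a whitelist check could be a small script or fail2ban-style watcher over that file.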

  • Can the mysql slow query log show milliseconds?

    - by Chase Seibert
    The MySQL slow query log reports query times as whole seconds:

        # Query_time: 0 Lock_time: 0 Rows_sent: 177 Rows_examined: 177 SELECT ...
        # Query_time: 1 Lock_time: 0 Rows_sent: 56 Rows_examined: 208 SELECT ...

    There was a microsecond patch that allowed MySQL to be configured to log queries that take longer than X microseconds to run. But is there a way to have the log output the query time in either milliseconds or microseconds?
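
    A sketch of the relevant my.cnf settings, assuming MySQL 5.1.21 or newer (or a build with the microsecond patch), where long_query_time accepts fractional seconds and the file-based slow log writes Query_time with microsecond precision:

        # my.cnf (sketch)
        [mysqld]
        slow_query_log      = 1
        slow_query_log_file = /var/log/mysql/mysql-slow.log
        long_query_time     = 0.5    # seconds; fractional values are allowed here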

  • Benefits of log rotation

    - by Manfred Moser
    I have been using log rotation for years and never thought of it as being a problem until I came across a question on Stack Overflow (http://stackoverflow.com/questions/1508734/disable-java-log-rotation/) where someone wants to disable log rotation. Having had build servers and even production servers cleaned up manually because logs were not rotated, disks were filling up, and machines suddenly ground to a halt, disabling rotation seems crazy to me; but it occurred to me that maybe it is not so obvious after all. So what are the benefits of log rotation? And what are the drawbacks (e.g. more difficult to debug/analyze, maybe)? What tools do you find useful for working with rotated log files? Splunk, I assume, but what else?
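
    For concreteness, a minimal logrotate stanza of the kind being discussed; the path and limits are illustrative, not from the question:

        # /etc/logrotate.d/myapp (sketch)
        /var/log/myapp/*.log {
            weekly
            rotate 8          # keep eight rotated generations, then discard
            compress
            delaycompress     # leave the most recent rotation uncompressed for quick reading
            missingok
            notifempty
        }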

  • Squid Log Rotation and Sarg

    - by beakersoft
    We have just set up Squid as our proxy, and I was going to use Sarg to analyze the log files. I had initially set the Squid logs to rotate every day so they don't get huge. The problem is I can't see an option in the Sarg config to read a folder full of Squid log files (say *.log). Is there an easy way to do this, or am I going to have to write a bash script or something to process them all into one file before I get Sarg to read it? Cheers, Luke
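
    A shell sketch of the concatenate-then-analyze approach; the log locations and the sarg flags are assumptions, so check `man sarg` for your build:

        # gather the current plus rotated logs into one file (use zcat for .gz rotations)
        cat /var/log/squid/access.log* > /tmp/squid-access-all.log
        # -l names the input log, -o the report output directory
        sarg -l /tmp/squid-access-all.log -o /var/www/sarg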

  • Boot log for Windows XP

    - by JasCav
    Where can I find a step-by-step boot log of my Windows XP machine? I'm looking for something akin to the boot log you would get on Linux (with what is running at what times, how long it runs, etc.). I am specifically interested in what happens after the initial boot phase, i.e. after the Windows XP logo goes away and I move to the generic blue background, and as I log in as a user on the machine.

  • Can't remove zfs log device from pool

    - by netmano
    I run a FreeBSD 9.0 server with ZFS pool version 28 and ZFS version 5. I had two pools, each with a log on one of an SSD's two partitions. These pools were created on FreeBSD 8.2 with ZFS pool version 15 and ZFS version 4. After I upgraded to the new ZFS version, I tried to remove the SSD log device from both pools; both commands were successful (no error message). The log was removed from one of the pools, but on the other it was still there. I shut the server down, removed the SSD physically, and hoped the zpool would forget it. The zpool became degraded because the SSD was missing. I tried the remove again: no error message, but the log device entry was still there. After that, to bring the pool online again, I created a file on the root UFS partition and replaced the missing device with this file. That was successful and the pool is online again. However, I still can't remove the log device from the pool. Where do I have to look for error messages? (There is nothing about it in dmesg, and zpool remove doesn't print any error message either; it seems like it was successful.)
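
    A sketch of the commands one might use to poke at this; the pool name and device path below are placeholders, not taken from the question:

        zpool status -v tank            # shows the logs section of the pool and the device's state
        zpool remove tank gpt/slog0     # retry removal of the stubborn log vdev
        zpool history -il tank          # internal, long-format history: was the remove recorded at all?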

  • Nginx max worker_connections and access log

    - by MotoTribe
    I'm troubleshooting an issue with my site. I can see in the nginx error log that the max worker_connections limit was reached when the site went down, but I'm not seeing an increase in requests during that time in the nginx access log. Does that mean the MySQL database had a bottleneck at that time that caused the requests to queue up? Or would nginx not log any requests that were made after the max worker_connections limit had been reached?
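
    For reference, a sketch of where the limit lives; the number is illustrative, not a recommendation:

        # nginx.conf (sketch)
        events {
            worker_connections  4096;   # per-worker cap; connections refused once it is hit
                                        # show up in the error log ("worker_connections are
                                        # not enough"), not in the access log
        }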

  • Place PHP errors in log file

    - by Gatura
    I am running Mac OS X 10.6.4 on an iMac and am using it as a development server. I have Apache and Entropy PHP 5 installed. When I write my applications, some pages won't run when PHP has errors, but these errors are not recorded in a log file. I created one, php_errors.log, and entered the following in the php.ini file: error_log = /usr/local/php5/logs/php_errors.log. However, errors are not written to this file, and I have log_errors = true. What could be the problem?
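
    A minimal sketch of the settings and permissions that usually have to line up; the error_log path is from the question, the ownership fix is an assumption (on Mac OS X 10.6 Apache normally runs as _www):

        ; php.ini (sketch)
        log_errors = On
        error_log  = /usr/local/php5/logs/php_errors.log
        ; the file must exist and be writable by the web server user, e.g.:
        ;   sudo touch /usr/local/php5/logs/php_errors.log
        ;   sudo chown _www /usr/local/php5/logs/php_errors.log
        ; then restart Apache so the edited php.ini is picked up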

  • How to log nginx vhost bandwidth?

    - by bwizzy
    I'm looking for a way to track the bandwidth of multiple vhosts on an nginx web server. I'm guessing there is a way to set up the log files to output this information, and then I can write a script to parse the log files and add up the response sizes. If that is the case, does anyone know the correct log format, and whether there is already a script out there that does this?
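
    A sketch of one way to do it; the vhost name and paths are placeholders, and $body_bytes_sent counts response bodies only (headers excluded):

        # nginx.conf (sketch): per-vhost access log with the bytes sent as the last field
        http {
            log_format bw '$remote_addr [$time_local] "$request" $status $body_bytes_sent';

            server {
                server_name example.com;
                access_log  /var/log/nginx/example.com-access.log bw;
            }
        }

        # summing a vhost's traffic is then a one-liner:
        awk '{ total += $NF } END { print total, "bytes" }' /var/log/nginx/example.com-access.log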

  • IIS6 Log time recording problems

    - by Hafthor
    On three separate occasions, on two separate servers, at nearly the same times, 6.9 hours seemingly went by without any data being written to the IIS logs; but on closer inspection it appears that it was all recorded at once. Here are the facts as I know them: Windows Server 2003 R2 with IIS6, logging using GMT, server local time GMT-7. The application was still operating, and I have SQL data to prove that. The time gaps appear within a log file, not across two. # headers appear at the gap. The load balancer pings every 30 seconds. No caching. Here's info on a particular case: an entry appears for 2009-09-21 18:09:27, then # headers; the next entry is for 2009-09-22 01:21:54, and so are the next 1600 entries in this log file and 370 in the next log file. About half of the ~2000 entries on 2009-09-22 01:21:54 are load balancer pings (estimated at 2/min for 6.9 hrs = 828 pings); then entries are recorded as normal. I believe these events may coincide with me deploying an ASP.NET application update onto those machines. Here is some relevant content from the logs in question:

        ex090921.log, from line 3684:
        2009-09-21 17:54:40 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:55:11 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:55:42 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:56:13 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:56:45 GET /ping.aspx - 80 404 0 0 3733 122 0
        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-09-21 18:04:37
        #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
        2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 3078
        2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 109
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 3828
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
        ... continues until line 5449 ...
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        <eof>

        ex090922.log:
        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-09-22 00:00:16
        #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        ... continues until line 367 ...
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        2009-09-22 01:04:30 GET /ping.aspx - 80 200 0 0 277 122 0
        ... back to normal behavior ...

    Note the seemingly correct date/time written to the # header of the new log file. Also note that /ping.aspx returned 404 and then switched to 200 just as the problem started. I rename the "I'm alive" page so the load balancer stops sending requests to the server while I'm working on it; what you see here is me renaming it back so the load balancer will use the server again. So this problem definitely coincides with me re-enabling the server. Any ideas?

  • Count requests from access log for the last 7 days

    - by RoboForm
    I would like to parse an access log file and get the number of requests for the last 7 days. I have this command:

        cut -d'"' -f3 /var/log/apache/access.log | cut -d' ' -f2 | sort | uniq -c | sort -rg

    Unfortunately, this command returns the number of requests since the creation of the file and sorts them into HTTP status code categories. I would like just a number, no categories, and only for the last 7 days. Thanks.
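
    A sketch for the common Apache combined log format, where the timestamp field looks like [21/Sep/2009:18:09:27; GNU date is assumed for the -d option:

        # build the date strings for today and the previous six days, then count matching lines
        for i in $(seq 0 6); do date -d "-$i day" +%d/%b/%Y; done > /tmp/last7days
        grep -F -f /tmp/last7days /var/log/apache/access.log | wc -l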

  • After deleting log files, Ubuntu server still saying there is no space

    - by Mark
    My Ubuntu server has stopped due to a lack of disk space. I deleted some log files which had grown huge very quickly, but df -h still shows I have no space left. When I run du -sh /* I can see that I should have plenty of disk space left after deleting the logs. I ran lsof +L1 and it brought up two files: /var/log/mail.log and /var/log/mail.err, the two logs I had deleted. I restarted Apache, Postfix and MySQL (MySQL won't restart because of the lack of disk space, I think), but df -h still shows no space.
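
    A sketch of the usual way out: the space is still held by processes that keep the deleted files open, so either restart the daemon that owns them or truncate them through /proc. The PID and FD below are examples, not from the question:

        lsof +L1 | grep '/var/log/mail'   # note the PID and FD columns for each deleted-but-open file
        : > /proc/1234/fd/3               # truncate the open file via its descriptor (here PID 1234, FD 3)
        service rsyslog restart           # or restart the syslog daemon that writes mail.log/mail.err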

  • Error while taking Transaction log backup

    - by Divya Kapoor
    Hello, I have scheduled a transaction log backup, but the backup is not happening. The error in the logs is this:

        Transaction Log Backup.Subplan_1,Error,0,ARCOTDB1\ARCOT_DB_INST1,Transaction Log Backup.Subplan_1,(Job outcome),,The job failed. Unable to determine if the owner (ARCOT-DB1\Superuser) of job Transaction Log Backup.Subplan_1 has server access (reason: Could not obtain information about Windows NT group/user 'ARCOT-DB1\Superuser', error code 0x534. [SQLSTATE 42000] (Error 15404))

    Please help!
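
    Error 15404 means SQL Server Agent could not resolve the job owner's Windows account. One commonly suggested direction (an assumption on my part, not from the post) is to re-own the job to a login the instance can always resolve, using msdb's stock procedure:

        -- T-SQL sketch; the job name is taken from the error above
        USE msdb;
        EXEC dbo.sp_update_job
            @job_name         = N'Transaction Log Backup.Subplan_1',
            @owner_login_name = N'sa';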

  • GitHub - commit local changes in local branch to remote branch

    - by user62046
    I use Git Shell on Windows 7, working in a branch named Save-Rotation. Then I used git push origin Save-Rotation to push the changes to the remote. The result is posted at the end and seems good. But when I went to my repository on the GitHub site, which is https://github.com/chiapas/sumatrapdf/tree/Save-Rotation, I can't see any change in the repository tree or commit tree. How can I know whether the push to the remote was successful, and why is the repository page not updated? Here is the result on the command line:

        C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation]> git push origin Save-Rotation
        Counting objects: 167, done.
        Delta compression using up to 8 threads.
        Compressing objects: 100% (18/18), done.
        Writing objects: 100% (119/119), 27.43 KiB, done.
        Total 119 (delta 101), reused 119 (delta 101)
        To https://github.com/chiapas/sumatrapdf
         * [new branch] Save-Rotation -> Save-Rotation
        C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation +2 ~17 -0 !]> git push origin Save-Rotation
        Everything up-to-date
        C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation +2 ~17 -0 !]>
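
    A short sketch of how to check what actually reached GitHub; the commands are standard git, and the branch name comes from the question:

        git ls-remote origin Save-Rotation         # does the remote branch exist, and at which commit?
        git fetch origin
        git log --oneline -5 origin/Save-Rotation  # what the remote branch now contains
        git status                                 # anything still uncommitted was never pushed

    Worth noting (an observation, not from the original post): git push only uploads commits that already exist locally, so local edits that were never committed will not appear on GitHub even if the push itself succeeded.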

  • SSH does not allow the use of a key with group readable permissions

    - by scjr
    I have a development git server that deploys to a live server when the live branch is pushed to. Every user has their own login, and therefore the post-receive hook which does the live deployment runs under their own user. Because I don't want to have to maintain the users' public keys as authorized keys on the remote live server, I have made up a set of keys that 'belongs' to the git system to add to remote live servers (in the post-receive hook I am using $GIT_SSH to set the private key with the -i option). My problem is that because any of the users might want to deploy to live, the git system's private key has to be at least group-readable, and SSH really doesn't like this. Here's a sample of the error:

        XXXX@XXXX /srv/git/identity % ssh -i id_rsa XXXXX@XXXXX
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        Permissions 0640 for 'id_rsa' are too open.
        It is required that your private key files are NOT accessible by others.
        This private key will be ignored.
        bad permissions: ignore key: id_rsa

    I've looked around expecting to find a way of forcing ssh to just go through with the connection, but I've found nothing but people blindly saying that you just shouldn't allow access to anything but a single user.
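
    OpenSSH's private-key permission check cannot be switched off, so one workaround is a $GIT_SSH wrapper that copies the shared key to a per-user 0600 copy before handing off to ssh. A sketch; the wrapper path, key location and group arrangement are assumptions:

        #!/bin/sh
        # /srv/git/bin/deploy-ssh (hypothetical): used as GIT_SSH by the post-receive hook
        KEY="$HOME/.git-deploy-key"
        # make a private copy the permission check will accept (install copies and chmods in one step)
        install -m 600 /srv/git/identity/id_rsa "$KEY"
        exec ssh -i "$KEY" "$@"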

  • Tarballing without git metadata

    - by zaf
    My source tree contains several directories which are under git source control, and I need to tarball the whole tree, excluding any references to the git metadata or custom log files. I thought I'd have a go with a combo of find/egrep/xargs/tar, but somehow the tar file still contains the .git directories and the *.log files. This is what I have:

        find -type f . | egrep -v '\.git|\.log' | xargs tar rvf ~/app.tar

    Can someone explain my misunderstanding here? Why is tar processing the files that find and egrep are filtering out? I'm open to other techniques as well.
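
    One alternative sketch, assuming GNU tar 1.20 or newer, which lets tar do the excluding itself instead of piping a file list through xargs:

        # --exclude-vcs skips .git/.svn/CVS metadata; --exclude='*.log' skips the log files
        tar czf ~/app.tar.gz --exclude-vcs --exclude='*.log' .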

  • Script to gather all the files ending in .log and create a tar.gz file.

    - by Oscar Reyes
    I'm currently using this script line to find all the log files in a given directory structure and copy them to another directory where I can easily compress them:

        find . -name "*.log" -exec cp \{\} /tmp/allLogs/ \;

    The problem I have is that the directory/subdirectory information gets lost, because I'm copying only the file. For instance, I have:

        ./product/install/install.log
        ./product/execution/daily.log
        ./other/conf/blah.log

    And I end up with:

        /tmp/allLogs/install.log
        /tmp/allLogs/daily.log
        /tmp/allLogs/blah.log

    And I would like to have:

        /tmp/allLogs/product/install/install.log
        /tmp/allLogs/product/execution/daily.log
        /tmp/allLogs/other/conf/blah.log
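
    Two sketches that preserve the paths, both assuming GNU tools (cp --parents and rsync filters are not in POSIX):

        # GNU cp: --parents recreates each file's directory chain under the destination
        find . -name "*.log" -exec cp --parents {} /tmp/allLogs/ \;

        # or rsync: copy only *.log, keeping the directory structure and pruning empty dirs
        rsync -am --include='*/' --include='*.log' --exclude='*' . /tmp/allLogs/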

  • How does Windows 7 determine that a system was not shut down correctly (Kernel-Power Event ID 41)

    - by Erik
    I have a strange situation where Windows 7 thinks it was not shut down correctly and gives me a warning in the system event log like this: http://support.microsoft.com/kb/2028504/de What the KB article does not tell is how Windows actually determines that situation. Does it parse its own system event log after reboot? Or where does it get that information from? I am currently investigating an issue where I believe the system fails to write the system event log correctly (it stops getting entries, although other logs like the application event log still get entries), and after a reboot Windows thinks it was not shut down correctly. Does anybody have any experience with this? And can you confirm that Windows determines the previous correct system shutdown by parsing its own system event log on startup?

  • SQL Server 2014 – delayed transaction durability

    - by Michael Zilberstein
    As I’m downloading SQL Server 2014 CTP2 at this very moment, I’ve noticed a fascinating new feature that hadn’t been announced in CTP1: delayed transaction durability. It means that if your system is heavy on writes and, on the other hand, you can tolerate data loss on some rare occasions, you can consider declaring a transaction with DELAYED_DURABILITY = ON. In this case the transaction is committed when the log is written to a buffer in memory, not to disk as usual. This way transactions can become...(read more)
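
    A syntax sketch of the feature as documented for SQL Server 2014; the database name is a placeholder:

        -- allow (or force) delayed durability at the database level
        ALTER DATABASE SalesDb SET DELAYED_DURABILITY = ALLOWED;   -- DISABLED | ALLOWED | FORCED

        -- then opt in per transaction: the commit returns once the log record is in the
        -- in-memory log buffer, before it has been hardened to disk
        BEGIN TRANSACTION;
            -- ... write-heavy work ...
        COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);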

  • GUI for watching logs (tail and grep)

    - by Grzegorz Oledzki
    Could you recommend a GUI application with powerful log-watching capabilities? Essentially it would work like tail -f in a GUI, but on top of that the following features would be very useful: filtering out some lines based on (regular) expressions; coloring some lines based on (regular) expressions; interactive search; saveable configuration easily applicable to different files; notifications based on (regular) expressions. A similar tool on Windows is BareTail and its paid version, BareTailPro.
