Search Results

Search found 31501 results on 1261 pages for 'event log'.


  • A specific user is unable to log in to vsftpd

    - by HackToHell
    I am setting up a new user; let's call him ftpguy. He has access to only one directory, /var/www/xxx, which I have already chowned so that he has read and write privileges. The user cannot log in via SSH because I disabled that by changing his shell to /sbin/nologin, and in the vsftpd config I have enabled chroot_local_user. Now whenever I log in over FTP, I get an authentication error:
      Connect socket #1008 to xxxxxxxx, port 21...
      220 Welcome to blah FTP service.
      USER ftpguy
      331 Please specify the password.
      PASS **********
      530 Login incorrect.
    I have changed the password several times with the passwd command, but nothing changes; I still get the same error. However, I can log in to the FTP server with my own SSH credentials without any problems (I do not use a key).
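
    A frequent culprit with /sbin/nologin accounts is vsftpd's PAM stack rejecting any shell that is not listed in /etc/shells (via pam_shells), which produces exactly this "530 Login incorrect" even though the password is right. A minimal diagnostic sketch, assuming a Debian/Ubuntu-style layout (on Red Hat the auth log is /var/log/secure):
      # Does vsftpd's PAM config enforce a "valid" login shell?
      grep -n pam_shells /etc/pam.d/vsftpd
      # Is the nologin shell considered valid?
      grep nologin /etc/shells
      # If not, listing it keeps SSH blocked but lets FTP authenticate
      # (assumption: /sbin/nologin matches the shell set for ftpguy)
      echo /sbin/nologin >> /etc/shells
      # Watch the auth log while retrying the FTP login to see the real reason
      tail -f /var/log/auth.log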

    Read the article

  • Disk / system configuration for log collection / syslog server

    - by Konrads
    I am looking into building a syslog/logging infrastructure and am pondering some architecture best practices. Essentially, a syslog system needs to support two conflicting workloads: log collection (potentially massive streams of data that must be written to disk and indexed quickly) and log querying (logs will be queried both by fixed fields such as date and source, and by full-text search). What is the best disk/system setup, assuming I'd like to keep it to a single server for now? Should I use SSDs or a ramdisk to offload some processing? Some disks striped and some in RAID 5? I am particularly eyeing Graylog2 with ElasticSearch/MongoDB.
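
    One pattern worth considering is to decouple the two workloads: let a plain syslog daemon handle the fast sequential ingest with a disk-assisted queue on the fast device, and forward asynchronously to Graylog2/ElasticSearch so that a slow query/index layer never blocks collection. A rough rsyslog sketch under those assumptions (Graylog2 listening for syslog on localhost:5514, /srv/spool sitting on the SSD or striped volume, file name arbitrary):
      # /etc/rsyslog.d/10-collector.conf (hypothetical)
      $ModLoad imudp
      $UDPServerRun 514                 # accept syslog from the network
      $WorkDirectory /srv/spool         # queue spool on the fast device
      $ActionQueueType LinkedList       # asynchronous, disk-assisted queue
      $ActionQueueFileName fwd
      $ActionQueueMaxDiskSpace 1g
      $ActionResumeRetryCount -1        # never drop, keep retrying the indexer
      *.* @@127.0.0.1:5514              # forward everything to Graylog2 over TCP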

    Read the article

  • PHP 5.3.3 access log

    - by irolla
    Hi, I'm using php-fpm. On 5.3.2, opening the phpinfo page produces one line in the access log:
      ip - - [26/Aug/2010:16:35:32 +0400] "GET /phpinfo.php HTTP/1.1" 200 13322 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"
    But on 5.3.3 I'm getting:
      ip - - [26/Aug/2010:16:30:30 +0400] "GET /phpinfo.php HTTP/1.1" 200 11891 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"
      ip - - [26/Aug/2010:16:30:30 +0400] "GET /phpinfo.php?=PHPE9568F34-D428-11d2-A769-00AA001ACF42 HTTP/1.1" 200 2536 "http://site.com/phpinfo.php" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"
      ip - - [26/Aug/2010:16:30:30 +0400] "GET /phpinfo.php?=SUHO8567F54-D428-14d2-A769-00DA302A5F18 HTTP/1.1" 200 2825 "http://site.com/phpinfo.php" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"
      ip - - [26/Aug/2010:16:30:30 +0400] "GET /phpinfo.php?=PHPE9568F35-D428-11d2-A769-00AA001ACF42 HTTP/1.1" 200 2158 "http://site.com/phpinfo.php" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5"
    Why are there four lines instead of one, and what does "?=PHPE..." mean? Is it PHP sessions? My PHP 5.3.3 FPM config:
      [global]
      pid = /var/run/php5-fpm.pid
      error_log = /var/log/php5-fpm.log
      log_level = notice
      [pool_0]
      listen = 127.0.0.1:9000
      listen.backlog = -1
      listen.allowed_clients = 127.0.0.1
      user = www-data
      group = www-data
      pm = dynamic
      pm.max_children = 50
      pm.min_spare_servers = 5
      pm.max_spare_servers = 35
      pm.max_requests = 500
      pm.status_path = /pool_0/status
      rlimit_files = 1024
      rlimit_core = 0
      catch_workers_output = yes
      php_admin_flag[register_globals] = true
      php_admin_value[error_reporting] = E_ALL & ~E_DEPRECATED
      php_admin_value[max_execution_time] = 15
      php_admin_flag[short_open_tag] = true
      php_admin_flag[display_errors] = false
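
    Those "?=PHPE..." query strings are not sessions: phpinfo() embeds the PHP, Zend, and (here) Suhosin logos as image links with fixed GUID-style query strings, and the three extra hits are simply the browser fetching those images, which is also why their referer is phpinfo.php itself. A quick check, assuming the page is reachable as in the log, is to request one of them directly and look at the Content-Type, which should come back as an image:
      curl -sI "http://site.com/phpinfo.php?=PHPE9568F34-D428-11d2-A769-00AA001ACF42" | grep -i content-type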

    Read the article

  • Stop Munin messages from /var/log/syslog

    - by Sparsh Gupta
    Hello, I am using Munin on a system where the munin-node cron job adds an entry to syslog every time it executes. It is not a big problem, but it sometimes makes spotting other errors difficult. Every 5 minutes there are entries like:
      Feb 28 07:05:01 li235-57 CRON[2634]: (root) CMD (if [ -x /etc/munin/plugins/apt_all ]; then /etc/munin/plugins/apt_all update 7200 12 >/dev/null; elif [ -x /etc/munin/plugins/apt ]; then /etc/munin/plugins/apt update 7200 12 >/dev/null; fi)
    I was wondering how I can stop these messages from going into syslog. For Munin-specific errors I keep an eye on /var/log/munin/* anyway. Thanks, Sparsh
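
    Those lines are written by cron itself (the CRON[pid] "CMD" records), so one option is to filter them out in rsyslog before the catch-all rule writes /var/log/syslog. A hedged sketch for a Debian/Ubuntu-style rsyslog setup, where files in /etc/rsyslog.d are read in name order and a 20-* file therefore runs before the default 50-default.conf rules (file name and match string are assumptions):
      # /etc/rsyslog.d/20-drop-munin-cron.conf
      :msg, contains, "/etc/munin/plugins/apt" ~
    Reload rsyslog afterwards (e.g. "service rsyslog restart"). The trailing "~" is rsyslog's discard action, so only the matching cron lines are dropped and everything else still reaches syslog.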

    Read the article

  • AWStats log format for tomcat access logs which has X-Forwarded-For

    - by Nix
    What should the AWStats LogFormat be for the Tomcat access logs below? I tried these formats, but the external IP addresses do not show up in the AWStats reports:
      LogFormat="%host %other %logname %time1 %methodurl %code %bytesd %refererquot %uaquot %referer %other %other"
      LogFormat="%other %other %logname %time1 %methodurl %code %bytesd %refererquot %uaquot %host_proxy"
    Tomcat valve settings:
      pattern="%h %l %{USER_ID}s %t &quot;%r&quot; %s %b &quot;%{Referer}i&quot; &quot;%{User-Agent}i&quot; &quot;X-Forwarded-For=%{X-Forwarded-For}i&quot; &quot;JSESSIONID=%{JSESSIONID}c&quot; %D"
    Log entry:
      127.0.0.1 - - [04/Nov/2013:13:39:55 +0000] "GET / HTTP/1.1" 200 12345 "https://www.google.com/url?some_url" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36" "X-Forwarded-For=real_ip, proxy_server_internal_ip" "JSESSIONID=-" 12345
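
    Because the first field (%h) here is always the proxy's address, one workaround that avoids wrestling with AWStats format tags is to rewrite the log before feeding it to AWStats, substituting the first address from the X-Forwarded-For value for the leading field. A sketch assuming GNU awk and made-up file names:
      # Pull the real client IP out of "X-Forwarded-For=real_ip, proxy_ip"
      # and put it in place of the leading 127.0.0.1 field.
      gawk '{
          if (match($0, /X-Forwarded-For=([^,"]+)/, m)) { $1 = m[1] }
          print
      }' localhost_access_log.2013-11-04.txt > access_for_awstats.log
    AWStats can then be pointed at access_for_awstats.log with a plain %host-based LogFormat. Another route is Tomcat's RemoteIpValve, which makes %h itself carry the forwarded address, but that changes the valve configuration rather than the AWStats side.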

    Read the article

  • sudo: cd: command not found when trying to get to /var/log/apache2

    - by Piers
    I'm running Ubuntu 10.04 and am having issues getting to the log files in /var/log/apache2. I can cd to most other places (I haven't tried every single directory, obviously), but when I try to get to the above directory I get the error message "sudo: cd: command not found". I've just tried something else, and it turns out I can't use cd at all in conjunction with sudo. I can use sudo for things like apt-get, but it seems I can't change directory when using sudo. I haven't been on this server for a while, but I know I used to be able to do this.
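
    The error is not specific to /var/log/apache2: cd is a shell builtin rather than an executable on disk, so sudo has no "cd" program to run (on Ubuntu that directory is typically mode 750 root:adm, which is why an ordinary cd into it is refused in the first place). A few equivalent ways around it, as a sketch:
      # Act on the directory without entering it
      sudo ls -l /var/log/apache2
      # Run a whole shell as root so the builtin cd happens inside it
      sudo sh -c 'cd /var/log/apache2 && ls -l'
      # Or start an interactive root shell, then cd normally
      sudo -i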

    Read the article

  • create log for an encrypted tar

    - by magiza83
    I want to create an encrypted tar, but I also want a log of what tar has compressed. I'm using the following command:
      tar -cvvf - --files-from=/root/backup.cfg | openssl des3 -salt -k backuppass | dd of=/root/tmp/back.encrypted
    I need a log of tar's verbose listing, but I don't know how to get it: if I redirect tar's output in that command, the data openssl receives is no longer correct. I've also checked the tar manual hoping to find an option to write the listing to a file, but I found nothing. Any help? Thanks & regards.
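
    With the archive going to stdout, GNU tar sends the -v/-vv listing to stderr instead, so the listing can be captured without touching the data stream that openssl reads. A sketch of the same pipeline with the listing logged (log path assumed):
      tar -cvvf - --files-from=/root/backup.cfg 2> /root/tmp/back.log \
        | openssl des3 -salt -k backuppass \
        | dd of=/root/tmp/back.encrypted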

    Read the article

  • Monitor a log file on Linux and send each line to another program

    - by mlambie
    I run an apt-cacher-ng server on Ubuntu Linux which writes logs in the following format:
      1299745593|O|149406|XXX.XXX.XXX.XXX|uburep/pool/main/t/tiff/libtiff4_3.9.2-2ubuntu0.4_amd64.deb
      1299745593|O|10154976|XXX.XXX.XXX.XXX|uburep/pool/main/l/linux-firmware/linux-firmware_1.34.4_all.deb
      1299748529|O|39368|XXX.XXX.XXX.XXX|uburep/pool/main/n/nagios-nrpe/nagios-nrpe-server_2.12-4ubuntu1_amd64.deb
      1300155440|O|680100|XXX.XXX.XXX.XXX|uburep/pool/main/t/tzdata/tzdata_2011c-0ubuntu0.10.04_all.deb
    It shows the timestamp, direction (in or out), byte count, IP and filename. Every time a line is written, I'd like to send that line to another program, which will insert it into a database so that I can crunch some statistics about how much bandwidth we're saving by operating a caching server. I do not want to cat the log file every X minutes (via cron) looking for new entries, as that would be somewhat computationally uneconomical. Instead I'd prefer to have a daemon monitor the log and, when a change is detected, send each new line to my database-insertion script. Will swatch achieve this, or are there better options?
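
    swatch can do this, but for "run a script on every new line" the plainest option is tail in follow mode piped into the insertion script, so each line is handed off as soon as it is written and log rotation is survived. A sketch (log path and script name are assumptions):
      # -F follows the file across rotations; -n 0 skips existing content
      tail -F -n 0 /var/log/apt-cacher-ng/apt-cacher.log | \
      while IFS= read -r line; do
          /usr/local/bin/insert-log-line.sh "$line"   # hypothetical DB-insertion script
      done
    Wrapping that loop in an init/upstart job (or running it under something like daemontools) turns it into the monitoring daemon described above.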

    Read the article

  • log execution of certain commands on Linux

    - by jlsksr
    I maintain a Debian system on which several users are allowed to install programs, so I would like to log, for example, whenever anyone executes "apt-get install" or "apt-get purge", so that I can keep track of manually installed packages. I'm looking for a general way to achieve this; it's not just APT, but several programs, scripts, etc. Any ideas? Edit: a Google search with a few different keywords brought up these:
      http://serverfault.com/questions/201221/how-to-log-every-linux-command-to-a-logserver
      http://stackoverflow.com/questions/15698590/how-to-capture-all-the-commands-typed-in-unix-linux-by-any-user
      http://sourceforge.net/projects/rootsh/
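
    Alongside rootsh-style shell logging, the audit subsystem can record every execution of particular binaries no matter which user, shell, or script invokes them; APT itself also already writes /var/log/apt/history.log for install/remove actions. A hedged auditd sketch (the key name "pkg-mgmt" is arbitrary):
      apt-get install auditd
      # Watch these binaries for execution and tag matching records
      auditctl -w /usr/bin/apt-get -p x -k pkg-mgmt
      auditctl -w /usr/bin/dpkg    -p x -k pkg-mgmt
      # Later: who ran what, with decoded fields
      ausearch -k pkg-mgmt -i
    Putting the -w rules into the audit rules file makes them persistent across reboots.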

    Read the article

  • Server unresponsive, messages shown on console but not in log files

    - by raistlin majere
    I'm using Ubuntu Server 10.04.4, and once in a while (roughly once a month) the server hangs and becomes totally unresponsive. The tty is flooded with messages like these, but the problem is that those messages are not in my log files after a reboot. How can I log them so that I can analyse them later? In the current logs I can't see anything that would tell me why this is happening. I would also appreciate it if anybody could tell from those messages what is going on. This server is a guest virtual machine; the host is also Ubuntu Server 10.04 with KVM/QEMU.
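
    Since the guest dies before anything reaches its own disk, the usual approach is to push console/kernel output somewhere outside the guest, which is straightforward here because the host runs KVM/QEMU. A sketch, with placeholder names and addresses:
      # In the guest, add a serial console to the kernel command line (GRUB):
      #   console=tty0 console=ttyS0,115200
      # In the libvirt domain XML on the host (virsh edit <guest>), log that
      # serial port to a file:
      #   <serial type='file'>
      #     <source path='/var/log/libvirt/qemu/guest-console.log'/>
      #     <target port='0'/>
      #   </serial>
      # Alternatively, netconsole streams kernel messages to another machine
      # (the IPs, interface and MAC below are placeholders):
      modprobe netconsole netconsole=6665@192.168.1.10/eth0,514@192.168.1.1/00:11:22:33:44:55
    Either way, the oops or panic text that currently only flashes by on the tty ends up in a file that survives the reboot.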

    Read the article

  • Log into AD account through Command Line

    - by CranialPain
    Our SBS 2003 server likes to lose its connection every so often. This appears to 'kick' everyone off, so that no one can access the server or its shared folders without logging off and back on. It usually brings up an error message stating that Windows 7 (on the client machines) cannot find the server, even though it is pingable. Is there a way to log in from the command line, so I can write a batch file for the users to double-click and enter their credentials into, instead of them closing programs and logging out and in over and over?

    Read the article

  • Windows XP, have to use ctrl+alt+delete to log on as local administrator

    - by wickedj
    Hey, I have a weird issue. A user was logging into a laptop with the local admin account, which was working fine. I had to create another account on the system, also an admin account; when I did, the 'Administrator' account disappeared from the 'choose an account to log in with' screen. A quick workaround is available: if the user presses Ctrl+Alt+Delete, it brings up the screen where you can type in a username and password, so by manually typing 'administrator' they can log in. Normally this would be easy to fix; I figured the admin account had somehow been disabled on the local system, but I checked all the settings and it is set up fine. The laptop is not part of a domain, so I used the management console to delete the new account, and all that succeeded in doing was making the 'choose an account to log in with' screen display no accounts to choose from. So far I see nothing else to try; the option to change the default logon screen to the style where you type the username and password also seems to be missing. Any ideas?

    Read the article

  • How to log Windows server share connections?

    - by sbussinger
    Can anyone suggest the best way to log connections and disconnections from Windows workstations to a Windows Server 2003 file share? We're having issues with workstations that have a drive mapped to the server: they seem to work fine for a while and then suddenly appear to get disconnected from the server (with files open). Needless to say, this causes some data corruption and error messages. It would help me troubleshoot the problem if we could somehow monitor and log the session connections and disconnections, to correlate the connectivity issues with what the user was doing at the time and what the server was doing. I just haven't been able to find a way to do this. Specifically, I'm talking about the same information that is displayed in the Computer Management console under "System Tools | Shared Folders | Sessions". Thanks!

    Read the article

  • syslog log of TCP packet

    - by com
    Occasionally I notice a lot of the following messages in syslog:
      Nov {datetime} hostname kernel: [8226528.586232] AIF:PRIV TCP packet: IN=eth0 OUT= MAC={mac} SRC={sourceip} DST={destinationip} LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=20361 DF PROTO=TCP SPT=39950 DPT=37 WINDOW=14600 RES=0x00 SYN URGP=0
    On the Internet I found that a DoS attack may cause this type of output, but unfortunately I don't understand what the log line means. The only thing that is clear to me is that it is network-related. The source host is the host where Nagios is installed. Does that mean Nagios is somehow misbehaving, and what does the message mean at all?
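
    The line is a netfilter/iptables log entry (the "AIF:" prefix suggests Arno's iptables firewall wrote it): a TCP SYN arrived on eth0 from SRC to DST, source port 39950, destination port 37, and matched a logging rule. A couple of commands help decode and quantify it, assuming the messages land in /var/log/syslog:
      # What normally listens on destination port 37?
      grep -w '37/tcp' /etc/services        # -> time 37/tcp  (the old TIME protocol)
      # How often does it happen, and from which sources?
      grep 'AIF:PRIV TCP packet' /var/log/syslog | grep -o 'SRC=[^ ]*' | sort | uniq -c | sort -rn
    If the only source is the Nagios host, the likeliest explanation is a Nagios check probing a port the firewall blocks (for example a time-service check) rather than a DoS attack; the check definitions on the Nagios side would confirm that.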

    Read the article

  • Windows 2003: log on no matter how it was turned off

    - by Arya
    I have a Windows Server 2003 machine which I connect to with TeamViewer. When there is a power failure and the computer restarts, it will not load TeamViewer: it gets stuck on a screen asking why the computer was turned off. There is no keyboard or mouse connected to the server, and I need TeamViewer to start every time, so I never want that screen to appear. I turned off the Shutdown Event Tracker by doing the following: open gpedit.msc, go to Computer Configuration > Administrative Templates > System, click Display Shutdown Event Tracker, and select the Disabled radio button. Is this enough to prevent that screen from showing up again, or is there anything else I need to change?

    Read the article

  • git log throws error "ambiguous argument"

    - by LonelyPixel
    This used to work about a year ago; now it doesn't:
      git log --abbrev=6
    The expected result is all commit hashes abbreviated to 6 characters. The actual result is now this error message:
      fatal: ambiguous argument '6': unknown revision or path not in the working tree. Use '--' to separate paths from revisions, like this: 'git [...] -- [...]'
    I have the impression that Git doesn't even know about that argument and silently ignores its name but not its value. I'm using Git 1.8.1.msysgit.1 on Windows 7. Addition: it fails on other parameters too. The entire command is:
      git log --abbrev=6 --format=format:"----- Commit %%h on %%ci by %%an -----%%n%%n%%B"
    If I just leave the abbrev part out, it still returns another error: fatal: Invalid object name 'format'.
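
    Both errors look less like git rejecting --abbrev and more like the option values being split off before git sees them: git is handed a bare '6' (and, in the second case, a bare 'format') and tries to resolve it as a revision. That kind of splitting is typical when the command line is re-expanded by a Windows batch file or another wrapper, since cmd treats '=' as an argument delimiter during that expansion, and the doubled %% in the format string does suggest a .bat/.cmd file is involved. A hedged thing to try, quoting each option as a single argument:
      git log "--abbrev=6" "--pretty=format:----- Commit %h on %ci by %an -----%n%n%B"
    Inside a batch file the percent signs still need to be doubled (%%h, %%ci, ...), and --pretty=format: is interchangeable with --format=format: here. If the same command works when typed directly into a Git Bash prompt, the wrapper's argument handling, not git, is the culprit.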

    Read the article

  • vsftpd allow anonymous log-in

    - by user1817081
    I'm setting up an FTP server that will allow anonymous users to read and write. Here is my configuration:
      anonymous_enable=YES
      local_enable=YES
      write_enable=YES
      anon_upload_enable=YES
      anon_mkdir_write_enable=YES
      xferlog_enable=YES
      connect_from_port_20=YES
      xferlog_file=/var/log/xferlog
      xferlog_std_format=YES
      ftpd_banner=Welcome to blah FTP service.
      listen=YES
      pam_service_name=vsftpd
      userlist_enable=NO
      tcp_wrappers=YES
      no_anon_password=YES
    I set the permissions on /var/ftp to 755. When I tried setting them to 777, I got the following error when logging in:
      500 OOPS: vsftpd: refusing to run with writeable anonymous root
      Login failed.
    Do I need to set up anything else to allow read/write for anonymous?
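
    vsftpd refuses a world-writable anonymous root by design, so the usual layout is to keep the root itself non-writable (755, owned by root) and give anonymous users a dedicated upload subdirectory. A sketch, assuming the anonymous root is /var/ftp and that vsftpd maps anonymous logins to the default "ftp" user:
      chown root:root /var/ftp
      chmod 755 /var/ftp
      # writable area for anonymous uploads/mkdir
      mkdir -p /var/ftp/incoming
      chown ftp:ftp /var/ftp/incoming
      chmod 775 /var/ftp/incoming
    With anon_upload_enable and anon_mkdir_write_enable already set as above, anonymous clients can then read everywhere under /var/ftp but write only inside /var/ftp/incoming.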

    Read the article

  • OpenVPN: log connecting client IPs

    - by TossUser
    I am looking for the best way to log, to either a text file or a database, the IP address of every client that logs in to my OpenVPN server. By IP I mean the public WAN address on the Internet they are connecting from. A hack would be to make the OpenVPN server log to a separate log file and run logtail periodically to extract the necessary information. The database I want to build would look like:
      Client_Name | Client_IP    | Connection_date
      roadwarr1   | 72.84.99.11  | 03/04/14 - 22:44:00 Sat
    Please don't recommend the commercial OpenVPN Access Server; that's not a real solution here. If the disconnection time could be determined as well, that would be even better, so I could see how long a client was connected and from where. Thank you.
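
    OpenVPN can call an external script on every connect and disconnect, passing the client's details in environment variables (common_name, trusted_ip, and, on disconnect, time_duration), which is enough to build exactly that table without parsing logs. A minimal sketch; the file paths are assumptions:
      # additions to /etc/openvpn/server.conf
      #   script-security 2
      #   client-connect    /etc/openvpn/log-conn.sh
      #   client-disconnect /etc/openvpn/log-conn.sh

      # /etc/openvpn/log-conn.sh
      #!/bin/sh
      # script_type is "client-connect" or "client-disconnect"
      echo "${common_name} | ${trusted_ip} | $(date '+%d/%m/%y - %H:%M:%S %a') | ${script_type} | ${time_duration:-}" \
          >> /var/log/openvpn-clients.log
      exit 0
    The same script could just as easily pipe an INSERT statement to a database client; exiting non-zero from a client-connect script would reject the connection, hence the explicit exit 0.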

    Read the article

  • sa2 -A /var/log/sa/sa13: No such file or directory

    - by user53925
    I have sysstat version 7.0.2, and /etc/sysconfig/sysstat has the entry HISTORY=27. This is on a Red Hat Enterprise Linux 5.6 server. The cron setup for it is:
      # run system activity accounting tool every minute
      * * * * * root /usr/lib64/sa/sa1 1 1
      # generate a daily summary of process accounting at 23:53
      53 23 * * * root /usr/lib64/sa/sa2 -A
    I get the following error from the sa2 -A cron job: "find: /var/log/sa/sa13: No such file or directory". Looking at the /var/log/sa directory, the files sa01 through sa10 exist (sa01 created on Sep 1, sa02 on Sep 2, and so on), and then the files sa14 through sa31 (created from Aug 14 to Aug 31). I have not made any changes on the server, so I am not sure why I am getting these error messages, and is there a way to fix this? Someone suggested creating empty files sa11 through sa14 to fix it, but I am not sure if that might mess something up.
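
    The error appears to mean only that no daily data file exists for that day-of-month: sa11 through sa13 are missing, presumably because sadc was not running on Aug 11-13, so sa2's housekeeping pass stumbles over the gap. It should clear up by itself once new files for those day numbers are written. A small sketch to confirm the state and regenerate today's data by hand:
      # which daily files exist, and when were they last written?
      ls -l /var/log/sa/
      # force a data point for today (creates /var/log/sa/saDD if absent)
      /usr/lib64/sa/sa1 1 1
      # re-run the daily summary to see whether the error persists for today
      /usr/lib64/sa/sa2 -A
    Creating empty sa11-sa14 files is probably not a good idea; sar and sa2 would likely complain about them being invalid data files.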

    Read the article

  • Automate monitoring a string in different log files

    - by EVIA
    I have a few log files on different servers, and I want to check the output at the end of each of them, e.g.:
      success: 4000
      failed: 200
    These log files are generated daily, and I have to keep track of these numbers. Is there any way to automate this, instead of going and checking the files by hand and wasting so much of my time? I want to create some kind of script like:
      go to \\serverA\C$\log_07_02_2012.txt and check this line
      go to \\serverB\C$\log_07_02_2012.txt and check some other line
    ...and it should give me the output from all of these.
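
    Since the logs live on Windows admin shares, one sketch of an approach is to mount each C$ share somewhere (for example with mount.cifs from a Linux box, or run the equivalent loop with UNC paths on a Windows machine) and pull the trailing counters from every file in one pass. Assuming the shares are mounted under /mnt/<server> and the date in the file name is day_month_year:
      #!/bin/sh
      DATE=$(date +%d_%m_%Y)            # assumption about the log's date format
      for SRV in serverA serverB; do
          LOG="/mnt/${SRV}/log_${DATE}.txt"
          printf '%-10s ' "$SRV"
          tail -n 2 "$LOG" | tr '\n' ' '   # e.g. "success: 4000 failed: 200"
          echo
      done
    Dropping that into cron (or a scheduled task on the Windows side) and mailing the output would remove the manual checking entirely.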

    Read the article

  • A simple event bus for .NET

    - by chikak
    Hello, I want to make a very simple event bus which will allow any client to subscribe to a particular type of event; when a publisher pushes an event onto the bus using an EventBus.PushEvent() method, only the clients that subscribed to that particular event type should receive it. I am using C# on .NET 2.0. Any help or pointers would be greatly appreciated. Thanks, Pradeep

    Read the article

  • Consumer Electronics Show (CES):CRM for High Technology Firms

    - by charles.knapp
    The Consumer Electronics Show, opening Thursday, showcases product innovations that stem from best practices in design, manufacturing, and distribution. Oracle and IBM invite you to learn best practices from peers, as well as why it matters to use CRM tailored for high technology firms -- offered only by Oracle. On Wednesday, January 5, 1-7 pm at the Bellagio Hotel Las Vegas, learn from peers at IBM, VTech, Plantronics, Cisco, Symantec, and Oracle about how to improve:
      Channel sales, marketing, and operations management - maximize new product introductions (NPI), sales, forecasts, training, channel promotions, and settlement
      Winning the deal - determine the right price for the right deal for the "perfect quote," capture the order, and manage orders
      Collaborative and rapid supply chain planning - improve agility, inventory turns, and profits
    Please join us for the Oracle/IBM CES High Technology Summit and make useful connections with your peers at the evening networking reception. Register now for this FREE event.

    Read the article

  • Watch Customer Concepts TV and Find Out How Leading Organizations Are Creating Engaging Customer Journeys

    - by Jeri Kelley
    The customer journey has changed dramatically. Customers have far more knowledge and far more power. Managing the new customer experience isn’t just about increasing profitability. For many organizations it’s about survival. To survive, organizations must deliver relevant, personalized experiences that engage customers at each step in their journey, but where do organizations start? To learn more, I’m looking forward to tomorrow's Customer Concepts Web TV show. On October 23rd, experts from Oracle and various successful businesses such as Euroffice will discuss how the customer journey has fundamentally changed and will share best practices for adapting your organization so you can truly engage customers. These Customer Concepts Web TV programs are an excellent way of keeping up with the very latest thinking in the field of customer experience. Register for tomorrow’s event now at: http://bit.ly/RqPSL3

    Read the article
