Search Results

Search found 3070 results on 123 pages for 'cron jobs'.


  • Exim queue in WHM

    - by Xobb
    Hi fellas, I've got a CentOS server with WHM, and the mail server is Exim. I need Exim to put all messages in the queue instead of delivering them immediately, so I added the queue_only option to the Exim configuration, and messages are now collected in the queue. Afterwards I found out that something is calling exim -q to process the queue every once in a while. I found the following cron job: 0 6 * * * /scripts/exim_tidydb > /dev/null 2>&1 which I believe has been used to process the queue, and I suspect that script was installed alongside WHM. Naturally I commented it out and expected everything to work fine, but that didn't happen: the queue still gets processed once in a while. Am I missing anything? What else could be processing my Exim queue? Here is the output of cat /etc/exim.conf | grep queue: queue_only deliver_queue_load_max = 3 Thanks
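
    A sketch of how one might hunt for whatever is still running the queue (the paths are typical cPanel/WHM locations and may differ):

      # look for queue-runner invocations in every cron location
      grep -r exim /etc/cron* /var/spool/cron 2>/dev/null

      # cPanel builds often start a daemonized queue runner ("exim -q1h" or
      # similar) from the init script, independently of cron
      ps aux | grep "exim -q"

      # confirm queue_only is active in the running configuration
      exim -bP queue_only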

  • iptables logging to a different file via syslog-ng

    - by rahrahruby
    I have the following configuration in my iptables and syslog files:

    IPTABLES

      -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 222 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 3306 -j ACCEPT
      -A INPUT -j DROP
      -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

    SYSLOG-NG

      destination d_iptables { file("/var/log/iptables/iptables.log"); };
      filter f_iptables { facility(kern) and match("IN=" value("MESSAGE")) and match("OUT=" value("MESSAGE")); };
      filter f_messages { level(info,notice,warn) and not facility(auth,authpriv,cron,daemon,mail,news) and not filter(f_iptables); };
      log { source(s_src); filter(f_iptables); destination(d_iptables); };

    I restart syslog-ng and the log is not written.
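
    One thing worth checking, offered as a hedged observation rather than a confirmed diagnosis: in the rule set above the unconditional DROP precedes the LOG rule, so no packet ever reaches LOG and nothing is handed to syslog-ng. A sketch of the same rules with the order fixed:

      -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 222 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 3306 -j ACCEPT
      # log first, then drop; the kernel's log lines contain both "IN=" and
      # "OUT=", so the f_iptables filter above should then match them
      -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
      -A INPUT -j DROP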

  • logrotate by size outside the daily schedule

    - by Josh Smeaton
    We have a couple of applications that generate huge log files. It's not enough to rotate those logs daily, so I created the following logrotate conf:

      /var/log/ourapp/*log {
          compress
          copytruncate
          missingok
          size 200M
          rotate 10
      }

    The idea is that we can keep 2GB of logs for this one application, no matter how quickly those files are filling up. The problem, though, is that logrotate only runs once daily. AFAIK, when logrotate kicks off at 4am, it will check to see that the size is at least 200M and rotate it if so. Ideally logrotate would run every minute, check the size, and rotate if the size is greater. Is there a standard way for rotating based on size outside of the daily cron schedule?
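
    One common workaround, sketched here assuming a Debian-style layout (file names are placeholders): call logrotate against just this config from cron every few minutes. With the size directive, logrotate only rotates once the threshold is crossed, so the frequent runs are cheap no-ops:

      # /etc/cron.d/ourapp-logrotate (hypothetical file)
      */5 * * * * root /usr/sbin/logrotate /etc/logrotate.d/ourapp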

  • Is there a lightweight MTA for Ubuntu 9.10 Desktop?

    - by Joe Casadonte
    I'm writing a Perl script to run as a cron job, and I want to email results & errors to a local account on the laptop. I'd like something that can talk SMTP (do any MTAs not adhere to SMTP?). I use Thunderbird 3, so I'll also need a POP/IMAP server (unless T-Bird can read straight from an mbox file; I'll have to check into that). No need for spam controls as I'll lock it down real tight, only accepting mail originating from the laptop itself. Thanks!
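
    A minimal sketch of one way to do this on Ubuntu, assuming Postfix as the MTA and Dovecot serving the standard mbox spool over IMAP (package names and paths are Ubuntu defaults; the Dovecot 1.x config style shipped around 9.10 is assumed):

      sudo apt-get install postfix dovecot-imapd

      # only accept SMTP from the laptop itself
      sudo postconf -e 'inet_interfaces = loopback-only'

      # in /etc/dovecot/dovecot.conf:
      #   protocols = imap
      #   mail_location = mbox:~/mail:INBOX=/var/mail/%u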

  • How to send local mail via postfix to another server

    - by scotts
    I have Postfix set up as an SMTP server to send mail for my domain, but I receive email for the domain via Gmail/Google Apps. The reason I run my own Postfix is that I send out a lot of transactional emails to my customers from a web app, and the volume would exceed what Google allows on its SMTP servers. Everything works fine, except that cron and system mail gets routed to local users on the server, not to the appropriate accounts at Google Mail. How can I route this system mail to the respective Google Mail accounts instead?
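
    One common approach, sketched with a placeholder address: alias the local system users to external addresses in /etc/aliases, which Postfix consults for local deliveries, then rebuild the alias database:

      # /etc/aliases
      root: admin@example.com        # hypothetical Google-hosted address

      # rebuild the alias database
      sudo newaliases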

  • Syslog permissions

    - by Niels Kristian
    I'm using the $InputFile directives in rsyslog to monitor various log files scattered around my Ubuntu 12.04 server: nginx, unicorn, rails, postgres, cron, etc. Now my problem is that some of these log files are created with -rw-r----- permissions, so rsyslog doesn't have read rights. I installed most of the programs with apt-get and didn't change anything from the defaults. In other words, I would rather not modify every single log file / daemon to have the right permissions if I could instead give syslog read access to all of them at once. But the question is: can I do that, and is it the "right thing to do"?
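
    A sketch of the usual shortcut on Ubuntu, assuming the daemons follow the common root:adm 0640 convention for their log files: put rsyslog's unprivileged user (syslog on Ubuntu) into the adm group, so it gains group read access to all of them at once:

      sudo usermod -a -G adm syslog
      sudo service rsyslog restart
      # any log file not group-readable by adm would still need its own
      # permission tweak (e.g. via the daemon's logrotate "create" line)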

  • Web-based HPC cluster node management

    - by Skuja
    Hello, I am working on my school diploma thesis. The main goal is to create a web-based application where logged-in users can see free and busy nodes, turn them on and off, see what processes they are running, etc. I figured I could do something like this: write a daemon (or cron job) that runs every 30 seconds or so and pings each node to find out whether it is on or off, then writes the results to a file. Then my web app (which I will write in PHP) could read the info from that file. Would that be a good solution? How would you suggest I do it? And finally, are there any existing solutions (not necessarily web-based) for management of cluster nodes?
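
    A sketch of the collector idea described above (file paths are hypothetical): ping every node on a list and atomically publish a status file for the PHP frontend to read:

      #!/bin/bash
      # poll all cluster nodes and record which ones answer
      OUT=/var/www/cluster/status.txt
      : > "$OUT.tmp"
      while read -r node; do
          if ping -c 1 -W 1 "$node" > /dev/null 2>&1; then
              echo "$node up" >> "$OUT.tmp"
          else
              echo "$node down" >> "$OUT.tmp"
          fi
      done < /etc/cluster/nodes
      mv "$OUT.tmp" "$OUT"   # atomic rename so PHP never reads a half-written file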

  • Unix shell script to monitor a process [on hold]

    - by SIJAR
    I have to make sure that one process on a server never goes down, so I'm thinking of cron vs. a daemon. Please provide a good example of a Unix shell script that runs as a daemon process; I'm trying to avoid the nonsense of permission issues with crontab, and there isn't much good material on the web for this. Will such a daemon process automatically start after a server/system restart? If not, how can I achieve that?
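
    A sketch of a simple watchdog loop (the process name and paths are placeholders). Started from /etc/inittab with the respawn action on a SysV-init system, it also survives reboots, which answers the restart question:

      #!/bin/bash
      # restart the monitored process whenever it disappears
      # inittab line (hypothetical): ws:2345:respawn:/usr/local/bin/watchdog.sh
      while true; do
          if ! pgrep -x myserver > /dev/null; then
              /usr/local/bin/myserver &
          fi
          sleep 10
      done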

  • runas without asking for a password

    - by Gregory MOUSSAT
    On a Windows server which is in a domain, I have a script I run from Scheduled Tasks, and I want it to run under the mydomain\peter user account. That is simple to do with Scheduled Tasks if you know Peter's password, but once set up, the task breaks whenever Peter decides to (or has to) change his password. On Linux, a cron job can run as any user account without knowing the corresponding password, and root can run anything on behalf of another user (with su and sudo). Is there any way to do this with Windows? My need is for an old Windows 2003 server, but I can arrange to run it from another computer.
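
    If the script does not actually need Peter's own credentials, one workaround (a sketch; task name, path, and time are placeholders) is to run it as the built-in SYSTEM account, which needs no stored password:

      schtasks /create /tn "NightlyScript" /tr "C:\scripts\nightly.cmd" /sc daily /st 02:00 /ru SYSTEM

      REM note: the Windows 2003-era schtasks may expect the start time
      REM as HH:MM:SS rather than HH:MM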

  • Dell Dimension running Fedora 12 does a "Sleeping Beauty" and I am not a "handsome prince"!

    - by Jim Dobbs
    Dell Dimension 2350 with a Pentium IV processor and integrated video and network chips, running Fedora 12, does a "Sleeping Beauty" and I, apparently, am not a "handsome prince"! The system puts the video and network to sleep, and it will not wake up. I have heard of this problem on laptops, but this is a tower. Any ideas or help are appreciated. I tried to ping the network card from another system and the ping fails, yet the logs indicate that the system continues to be active. Pressing keyboard shortcut keys makes the disk light blink, but neither the video nor the network card comes alive. Failing all else, are there any Linux commands that I could schedule in cron to pulse the video and network adapters hourly and keep them awake? Or should I wait for Fedora 13? Before this machine, I built a Dimension 2400 with a Pentium IV and it had the same problem. Fedora 9 on the same hardware is fine.
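
    A sketch of the kind of cron "pulses" asked about (interface name and ping target are assumptions); setterm can also disable console blanking outright, which may address the root cause:

      # e.g. in /etc/rc.local: stop the console from blanking or powering down
      setterm -blank 0 -powersave off -powerdown 0

      # /etc/crontab entries: hourly activity on the NIC
      0 * * * * root ping -c 1 192.168.1.1 > /dev/null 2>&1
      5 * * * * root ethtool eth0 > /dev/null 2>&1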

  • rsync bash script to back up specific directories nightly to a remote server

    - by Janice Young
    Hello, I am looking for an rsync script that will back up specific directories from my home machine to a remote server nightly, say /home/me/Pictures to [email protected]/Pictures over ssh on port 6587. It would be nice if it could look for changes, but I'm not so worried about that aspect; the main thing is having a script that runs at a certain time of night, with cron or however. I have googled and found scripts, but they were specific to their creators' setups. Any help would be happily accepted, as the scripting part really throws me off. Thank you, Janice
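
    A sketch of such a script, assuming passwordless ssh keys are set up for the backup user (host, user, and directory list are placeholders):

      #!/bin/bash
      # nightly-backup.sh: mirror selected directories to the remote server
      DEST="user@backup.example.com"
      for dir in /home/me/Pictures /home/me/Documents; do
          rsync -az --delete -e "ssh -p 6587" "$dir" "$DEST:backups/"
      done

      # crontab entry to run it at 02:30 every night:
      # 30 2 * * * /home/me/bin/nightly-backup.sh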

  • Free DNS software with failover support?

    - by Lin
    I'm looking for DNS software that can accomplish the following:

    - Check the health of all A records at set intervals
    - If a server is unresponsive after multiple successive checks, replace its A record with a working server
    - When a server is down, check it periodically; once it's up, restore the normal A records

    Here's an equivalent I thought of:

    - Run DNS servers with a very low TTL (minutes)
    - Use a cron job to periodically query all webservers
    - Use sed to replace A records if need be, and then restart the DNS server

    I have a hard time believing there isn't already something that can accomplish the above. I'm not looking for a paid service, and I'm restricted to anything I can run with root access to a VPS. Any suggestions would be great. Thanks!
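
    A sketch of the cron-based equivalent for a BIND-style setup (addresses, zone path, and the single-check logic are all simplifications; a real version would require several failed checks before switching, as described above):

      #!/bin/bash
      ZONE=/etc/bind/db.example.com
      PRIMARY=203.0.113.10
      BACKUP=203.0.113.20
      # pick whichever server answers HTTP within 5 seconds
      if curl -fs -m 5 "http://$PRIMARY/" > /dev/null; then
          NEW=$PRIMARY
      else
          NEW=$BACKUP
      fi
      sed -i "s/^www.*/www IN A $NEW/" "$ZONE"
      # the zone serial must also be bumped for any slaves to notice
      rndc reload example.com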

  • How to install and configure an MTA on Linux [closed]

    - by Umair Mustafa
    I need to know which MTA is better and simpler to handle and configure on Linux, because I need to run a script via cron that sends me the output of a command whenever it runs. The case is this: every day I have to manually check the disk space on more than 30 servers, which is a headache, and document it. So I simply want to add the command df -h to cron and have its output sent to my email. Now, if you get the story, tell me which MTA is better, sendmail or postfix, with some instructions on how to install and configure it. And after configuring it, how do I add df -h so that it starts sending me the output by email? Thanks in advance.
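
    A sketch of the whole pipeline on a RHEL/CentOS-style box, with a placeholder address (Postfix plus the mailx client is one common pairing; sendmail works the same way from cron's point of view):

      yum install postfix mailx
      service postfix start

      # crontab entry: mail the disk usage report every morning at 08:00
      0 8 * * * df -h | mail -s "disk usage report" me@example.com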

  • username and password for rsync in script

    - by sims
    I'm creating a cron job to keep two dirs in sync. I'm using rsync, and I'm running an rsync daemon. I read the manual and it says:

    RSYNC_PASSWORD: Setting RSYNC_PASSWORD to the required password allows you to run authenticated rsync connections to an rsync daemon without user intervention. Note that this does not supply a password to a shell transport such as ssh.

    USER or LOGNAME: The USER or LOGNAME environment variables are used to determine the default username sent to an rsync daemon. If neither is set, the username defaults to 'nobody'.

    I have something like:

      #!/bin/bash
      USER=name
      RSYNC_PASSWORD=pass
      DEST="server::module"
      /usr/bin/rsync -rltvvv . $DEST

    I also tried exporting (dangerous, I know) USER and RSYNC_PASSWORD. I also tried with LOGNAME. Nothing works. Am I doing this correctly?
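
    A hedged sketch of two fixes: plain shell assignments are not visible to the rsync child process unless exported, and the username can be given inline in the destination instead of via USER:

      #!/bin/bash
      export RSYNC_PASSWORD=pass
      /usr/bin/rsync -rltvvv . name@server::module

      # or keep the password out of the environment entirely
      # (the file must not be world-readable):
      # /usr/bin/rsync -rltvvv --password-file=/etc/rsyncd.pass . name@server::module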

  • How to find date/time used by Cassandra

    - by JDI Lloyd
    Earlier this morning I noticed that one of the nodes in our Cassandra cluster is writing logs an hour in the future, despite the date/time being correct on the OS. A couple of other nodes I checked appear, from their logs, to be writing at the correct time. I now need to go through each node in our 80-node cluster and make sure Cassandra is running on the correct time; the problem is that some of the nodes don't write to their logs very often because they aren't doing much. So the question is: is there some tool or utility (i.e. nodetool) that can tell me the time Cassandra is running on? All the systems' dates/times are correct, and an ntpdate cron job has been in place for a while. Servers are set to the Belize timezone to avoid DST changes, so it's nothing to do with that.
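
    Since the OS clocks are right, one hedged guess is that the JVM on the odd node has a different timezone (e.g. a -Duser.timezone flag or a stale TZ). A sketch of checking every node without waiting for log traffic (the host list file is a placeholder):

      for h in $(cat cassandra-nodes.txt); do
          echo "== $h"
          # OS time, then any explicit timezone flag on the Cassandra JVM
          ssh "$h" 'date; tr "\0" " " < /proc/$(pgrep -f CassandraDaemon)/cmdline | grep -o "user.timezone=[^ ]*"'
      done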

  • OOM-Killer called every now and then..

    - by SpyrosP
    Hello there, I have a dedicated server where I've installed apache2 as well as Rails Passenger. Although I have 2GB of RAM, and most of the time about 1.5GB is free, there are random times when I lose ssh and general connectivity because oom-killer is killing processes. I suppose there is a memory leak, but I cannot find out where it comes from. oom-killer kills apache2, mysql, passenger, whatever. Yesterday I did cat syslog | grep -c oom-killer and got 57 occurrences! It seems that something seriously destroys the memory. Once I reboot, everything comes back to normal. I suspect it may be related to Passenger, but I'm still trying to figure it out. Can you think of anything else, or do you have anything to suggest that would make identifying the leak easier? (I was even thinking of writing a bash script, to be run with cron every 5 minutes or so.)
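
    A sketch of the 5-minute snapshot script mentioned above (the log path is a placeholder); comparing the snapshots taken just before an oom-killer event usually points at the leaking process:

      #!/bin/bash
      # append a timestamped memory snapshot to a log
      {
          date
          free -m
          ps aux --sort=-rss | head -n 15   # 15 biggest resident-memory users
          echo
      } >> /var/log/mem-snapshot.log

      # crontab entry: */5 * * * * /usr/local/bin/mem-snapshot.sh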

  • Log "date -s" command

    - by LinuxPenseur
    Hi, I know that the date -s <STRING> command sets the time described by the string STRING. What I want is to log that command into the file /tmp/log/user.log whenever it is used to set the time. In my Linux distribution the logging is done by syslog-ng, and I already have some logs going into /tmp/log/user.log. This is the relevant content of /etc/syslog-ng/syslog-ng.conf on my system:

      destination d_notice { file("/tmp/log/user.log"); };
      filter f_filter10 { level(notice) and not facility(mail,authpriv,cron); };
      log { source(s_sys); filter(f_filter10); destination(d_notice); };

    What should I do so that the date -s command is also logged into /tmp/log/user.log?
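
    One approach, sketched with the caveat that it only catches interactive use of the date command (not other programs that set the system clock): a wrapper earlier in $PATH that reports to syslog with the user facility at notice level, which f_filter10 above already accepts:

      #!/bin/sh
      # /usr/local/bin/date -- hypothetical wrapper; /usr/local/bin must
      # precede /bin in PATH for this to be picked up
      case "$1" in
          -s) logger -p user.notice "date -s invoked by $(whoami): $*" ;;
      esac
      exec /bin/date "$@"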

  • How do I prevent libvirt from adding iptables rules for guest NAT networks?

    - by Jack Douglas
    Similar to this old request on BugZilla for Fedora 8, I'm hoping something has changed since then or someone knows another way. I want to manage the iptables rules by hand; the one-size-fits-all automatic rules don't suit me at all. These rules seem to be added and removed when a network is started and destroyed. Is there a way of either preventing these rules from being added at all, or hooking a script into the network start that restores the default rules afterwards? For now, I'm using a very crude method with cron, but I hope there is a better way:

      * * * * * root iptables-restore < /etc/sysconfig/iptables
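
    A sketch of the hook alternative, assuming a libvirt new enough to ship the network hook (older builds only have the daemon/qemu hooks, and the exact arguments vary by version):

      #!/bin/sh
      # /etc/libvirt/hooks/network -- invoked with: <network> <operation> ...
      # re-apply the hand-maintained rules after libvirt rewrites iptables
      if [ "$2" = "started" ]; then
          iptables-restore < /etc/sysconfig/iptables
      fi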

  • Simplest way to expose UNIX mailboxes via IMAP or POP3 on RHEL 5.6

    - by db2
    We've got a web server running RedHat Enterprise Linux 5.6, and it has all the usual local UNIX mailboxes. As is typical, the root mailbox gets all the cron output, logwatch results, etc. I'd like an easier way to keep an eye on this mailbox besides running mail via ssh. What should I install/enable to allow access to these system mailboxes via IMAP or POP3 with minimal fuss? Either protocol would be fine for what I'm doing, as I could then add it as an account in Outlook. A bit of searching led me to cyrus-imapd and dovecot, but it seems like they are meant for more serious mail hosting operations. Either they use their own mailbox system exclusively, or don't have a simple way of making the UNIX mailboxes available. If I'm wrong about that, then I'm fine with using either of them as long as I can get to the mailboxes of the existing accounts on the box.
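
    Dovecot can serve the standard spool directly; a minimal sketch for the Dovecot 1.0.x that RHEL 5 ships (note that Dovecot refuses root logins by default, so the usual trick is aliasing root's mail to a normal user in /etc/aliases):

      # /etc/dovecot.conf
      protocols = imap pop3
      mail_location = mbox:~/mail:INBOX=/var/mail/%u

      # then: service dovecot start && chkconfig dovecot on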

  • VMWare Server lck file keeps coming back

    - by muncherelli
    I am running VMware Server 2.0 on a Debian Lenny system as the host OS, and I get this error when I try to start a virtual machine: Cannot open the disk '/var/lib/vmware/Virtual Machines//.vmdk' or one of the snapshot disks it depends on. Reason: Failed to lock the file. I looked around on the web and found that I need to delete the .lck folder and file to get past this error. This seems to happen any time I reboot my Debian server; the virtual machines sometimes do not recover, and this lck file is causing problems. Should I create a cron script that does rm *.lck on each of my machines at reboot? Looking for any direction on how to resolve this. It seems that when I run the reboot command, it may not be gracefully shutting down the VMware containers, so the lock files are left intact?
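
    A sketch of the reboot-time cleanup described above (the path matches the error message; adjust as needed). The longer-term fix would be making sure the vmware init script gets a clean stop on shutdown so the locks are released normally:

      # root's crontab: clear stale lock files before any VM is started
      @reboot find "/var/lib/vmware/Virtual Machines" -name "*.lck" -exec rm -rf {} +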

  • Check IMAP via PHP for messages in order to interact with them via PHP?

    - by Roger
    I use Google to administer my e-mail and I run Nginx + PHP. If I want to check incoming mail and interact with each message subject (to trigger an action), as far as I know I have only two options: 1) access Google via IMAP from PHP in a cron job, or 2) forward the messages to my server, where a subdomain is set up to pipe them to a PHP script. Frankly, I'd prefer the first option because I already have all the IMAP functions tested and done. And if the second solution is really the best, I'd prefer to use Postfix. However, I'd like some light from somebody who has already navigated these waters. Thank you.

  • Command line audio library manager for Linux

    - by Ketil
    Hi all. Here is my set-up: I have a Linux server that is running Music Player Daemon, and all the audio files are under a dir (/muzik) which is exported by NFS. To add files to the MPD database, I just drop them into the /muzik NFS share and update the MPD db. So far so good, but I would like to keep the directory structure below /muzik in some sort of order. To achieve this I am using Amarok, which I start on my laptop, and then use the organise-files command to sort the files into a sensible directory structure based on their tags. Do you know of any command line utility that can do the same thing I am using Amarok for, so I can run it from cron on the server and automate the process? I hope that this makes sense.
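
    One candidate worth evaluating (named here as a suggestion, not something confirmed by the thread) is beets, which can file tracks into a tag-based directory layout non-interactively. A minimal sketch:

      # ~/.config/beets/config.yaml
      directory: /muzik
      import:
          move: yes

      # cron: file anything dropped into an incoming dir, without prompting
      0 * * * * beet import -q /muzik/incoming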

  • How to set up a staging apt repository to securely manage upgrades

    - by andreash
    Hello, I would like to be able to run automatic apt-get upgrade (once per hour) on our servers (Ubuntu 10.04), so that I don't have to do it manually on all of them (about 15). However, for production machines, that's not a good idea ... So here's my idea: Set up a local repository for all 'approved' updates for critical packages. I would then push updated packages from upstream to our local repo after I tested them, and all servers could automatically (apt-cron?) upgrade from this repository. So my question is this: How do I configure apt on the clients so that they use the local repository only for all packages which exist on the local repository, and the upstream one for all other packages? Does this actually make sense? Or am I missing something? Anyways, thanks for your insight! Andreas.
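
    A sketch of the usual apt pinning approach (repository host and suite are placeholders): give the internal repo a priority above 1000 so its versions always win, while every package it doesn't carry still comes from upstream:

      # /etc/apt/sources.list.d/internal.list
      deb http://apt.internal.example.com/ubuntu lucid main

      # /etc/apt/preferences.d/internal
      Package: *
      Pin: origin "apt.internal.example.com"
      Pin-Priority: 1001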
