Search Results

Search found 1134 results on 46 pages for 'cron'.

Page 38 of 46

  • limiting the rate of emails using python

    - by Ali
    I have a Python script which reads email addresses from a database for a particular date (for example, today) and sends out an email message to them one by one. It reads data from MySQL using the MySQLdb module, stores all results in a dictionary, and sends out emails using: rows = cursor.fetchall() #All email addresses that are supposed to go out on today's date. for row in rows: #send email However, my hosting service only lets me send out 500 emails per hour. How can I limit my script so that only 500 emails are sent in an hour, and then have it check the database whether more emails are left for today and send them in the next hour? The script is activated using a cron job.
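
    One way to stay under the cap (a sketch, not the poster's actual code): have the hourly cron run pull at most 500 unsent addresses and mark each one as sent, so the next run picks up the remainder. The table, the `sent` flag column, and the credentials below are hypothetical.

        import MySQLdb
        import smtplib

        HOURLY_LIMIT = 500  # the host's per-hour cap

        db = MySQLdb.connect(host="localhost", user="dbuser", passwd="dbpass", db="mailing")
        cursor = db.cursor()

        # Fetch only as many unsent addresses for today as we may send this hour.
        cursor.execute(
            "SELECT id, email FROM recipients WHERE send_date = CURDATE() AND sent = 0 LIMIT %s",
            (HOURLY_LIMIT,))
        rows = cursor.fetchall()

        smtp = smtplib.SMTP("localhost")
        for row_id, address in rows:
            smtp.sendmail("noreply@example.com", [address], "Subject: Hello\n\nMessage body")
            # Mark the row as sent so the next hourly run skips it.
            cursor.execute("UPDATE recipients SET sent = 1 WHERE id = %s", (row_id,))
            db.commit()
        smtp.quit()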

    Read the article

  • How can I delete a file in Sinatra after it has been sent via send_file?

    - by John Reilly
    I have a simple sinatra application that needs to generate a file (via an external process), send that file to the browser, and finally, delete the file from the filesystem. Something along these lines: class MyApp < Sinatra::Base get '/generate-file' do # calls out to an external process, # and returns the path to the generated file file_path = generate_the_file() # send the file to the browser send_file(file_path) # remove the generated file, so we don't # completely fill up the filesystem. File.delete(file_path) # File.delete is never called. end end It seems, however, that the send_file call completes the request, and any code after it does not get run. Is there some way to ensure that the generated file is cleaned up after it has been successfully sent to the browser? Or will I need to resort to a cron job running a cleanup script on some interval?
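
    One workaround (a sketch, assuming the generated files are small enough to buffer in memory): skip send_file, read the file yourself, delete it immediately, and return the data as the response body.

        require 'sinatra/base'

        class MyApp < Sinatra::Base
          get '/generate-file' do
            file_path = generate_the_file()     # hypothetical helper from the question

            data = File.binread(file_path)      # buffer the contents first...
            File.delete(file_path)              # ...so the file can be removed right away

            content_type 'application/octet-stream'
            attachment File.basename(file_path) # sets Content-Disposition, like send_file does
            data                                # returned as the response body
          end
        end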

    Read the article

  • Python os.path.walk() method

    - by Aaron Moodie
    I'm currently using the walk method in a uni assignment. It's all working fine, but I was hoping that someone could explain something to me. In the example below, what is the a parameter used for in the myvisit function? >>> from os.path import walk >>> def myvisit(a, dir, files): ... print dir,": %d files"%len(files) >>> walk('/etc', myvisit, None) /etc : 193 files /etc/default : 12 files /etc/cron.d : 6 files /etc/rc.d : 6 files /etc/rc.d/rc0.d : 18 files /etc/rc.d/rc1.d : 27 files /etc/rc.d/rc2.d : 42 files /etc/rc.d/rc3.d : 17 files /etc/rc.d/rcS.d : 13 files
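
    For what it's worth, that first parameter is os.path.walk's arg: whatever is passed as the third argument to walk (None in the example) is handed to the visit function on every call, so it can carry shared state. A small Python 2 sketch (os.path.walk was removed in Python 3, where os.walk replaces it):

        from os.path import walk  # Python 2 only

        def count_files(counter, dirname, names):
            # `counter` is the arg passed to walk(); it arrives first on every call
            counter[0] += len(names)

        total = [0]
        walk('/etc', count_files, total)
        print "%d files under /etc in total" % total[0]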

    Read the article

  • How to automate rsync without asking for password prompt

    - by Rijosh K
    I would like to automate the rsync task as a cron job, but since it needs the passphrase I am not able to run it unattended. I need to either specify the passphrase along with the rsync command, or store the passphrase in a file and read it from there. My command will look like this: rsync -aPe "ssh -i ' . $server-{'ssh_key'} . '" ' . $server_lock_dir; So where do I put the password? Please help.
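
    The usual alternative (a sketch, with placeholder paths and hosts) is to skip the passphrase entirely: generate a dedicated key without one, authorize it for the backup user on the remote side, and point rsync's ssh at it from cron.

        # one-time setup
        ssh-keygen -t rsa -N "" -f ~/.ssh/rsync_backup_key
        ssh-copy-id -i ~/.ssh/rsync_backup_key.pub backupuser@remote.example.com

        # crontab entry (crontab -e): run the sync every night at 02:30
        30 2 * * * rsync -aPe "ssh -i $HOME/.ssh/rsync_backup_key" /data/ backupuser@remote.example.com:/backups/data/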

    Read the article

  • How to nightly update a copy folder of the main website?

    - by HollerTrain
    I have a main website, and I also keep a backup of the main site in another folder on the server; if the main site goes down I can push traffic to this copy. With that said, I want to update the copy site nightly with the prior day's changes to the main site (edited text files and new images). What is the best method of doing this automatically every 24 hours? I assume a cron script could be created for this? Any help would be greatly appreciated.
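
    A cron job is indeed the usual answer. A minimal sketch with rsync (the paths are placeholders for the real docroots):

        # crontab entry (crontab -e): mirror the live site into the copy at 03:15 every night
        15 3 * * * rsync -a --delete /var/www/mainsite/ /var/www/copysite/ >> /var/log/site-mirror.log 2>&1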

    Read the article

  • Notify the user on Facebook at some time

    - by martin.malek
    Let's say I have a Facebook application which monitors friends' birthdays. I want to notify the user that her friend will have a birthday in the next two days. The first part is just a cron job which checks the dates, but is there any way to notify the user? I didn't find anything for this. It was there a year ago, but with all of the API changes it looks like they removed all offline messages. I don't want to send an email to the user; it would be much better to keep everything on Facebook.

    Read the article

  • Getting rails to execute root level file edits on system files without compromising security.

    - by voxobscuro
    I'm writing a Rails 3 application that needs to be able to trigger modifications to unix system config files. I'd like to insulate the file modifications from the consumer side by running them in a background process. I've considered writing out a temp file in rails and then copying the file with a bash script but that doesn't really insulate the system. I've also considered pulling from the database manually with a cron based script and updating the configs. But what I would really like is a component that can hook into the rails environment, read out what is needed from the database, and update the config files. This process needs to be run as root because the config files mostly live in /etc/whatever. Any suggestions? Thanks!
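
    One variant of the cron idea, sketched below: a small script run from root's crontab that loads the Rails environment with the runner, reads pending changes from the database, and rewrites the config file, so the web-facing app itself never needs root. The model name and the target path are hypothetical.

        # script/apply_system_config.rb -- run as root, e.g. from root's crontab:
        #   */5 * * * * cd /srv/myapp && script/rails runner script/apply_system_config.rb -e production
        PendingConfigChange.where(:applied => false).find_each do |change|
          File.open('/etc/whatever/app.conf', 'a') do |f|
            f.puts change.config_line            # append the generated line
          end
          change.update_attributes(:applied => true)
        end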

    Read the article

  • My company is a Rackspace Cloud client (provided to us for free) and I'm trying to find some way to set up version control

    - by Nick S.
    As the title says, my (small) business is provided a free Rackspace Cloud client account. We receive a decent amount of traffic but I haven't been able to put together a business case to move to our own server yet. However, we are developing some complex apps and I'm frustrated with not having the ability to even ssh into the remote server. Ultimately, I'd like to set up some sort of version control (at this point, I'll take anything, git or otherwise). I have control over databases, can FTP, set up cron jobs, and perform a few other basic functions. I can't think of any way to set up git or something similar without ssh access. A thought went through my mind that maybe some sort of PHP version control exists that I might be able to set up, but I haven't had any luck finding it yet. Do you guys have any ideas, thoughts, or advice?
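
    Without ssh there is no way to run git on the server itself, but one workaround (a sketch, with placeholder credentials and paths) is to keep the repository on a local machine or a hosted service and treat the Cloud account purely as a deploy target, pushing releases over FTP:

        # commit locally as usual
        git add -A && git commit -m "release"

        # mirror the working tree to the FTP docroot with lftp
        lftp -u ftpuser,ftppass ftp.example.com -e "mirror -R --delete ./public_html /public_html; quit"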

    Read the article

  • Run three shell scripts simultaneously

    - by user1419563
    I have three shell scripts which I am running as below: sh -x script1.sh sh -x script2.sh sh -x script3.sh Each script is executed sequentially, one at a time, after the previous one finishes executing. Problem statement: is there any way I can execute all three of the above scripts at the same time from a single window? Think of a cron job scheduling script1 at 3 AM, script2 at 3 AM, and script3 at 3 AM (all three scripts at the same time). That's what I need: to execute all three scripts simultaneously.
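
    A minimal sketch: launch each script in the background with & so all three run concurrently, and wait if the calling shell needs to block until they finish.

        sh -x script1.sh &
        sh -x script2.sh &
        sh -x script3.sh &
        wait    # optional: block here until all three background jobs complete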

    Read the article

  • How to synchronize two folders (on different servers) after a (classic) ASP upload.

    - by MaxBlack
    Hi, I'm using a customized backend to upload images to a server. The issue is that the client has a load-balanced setup, so I need to synchronize the upload folders on both servers. I'm not able to know which server the backend is executing on. I'm just wondering what the best way is to compare and synchronize the folders on the two servers. An FTP script run from the Windows equivalent of a cron job (a scheduled task)? A shell script activated by ASP? An ASP script that uses FTP commands? Thanks.
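
    A sketch of the scheduled-task route on Windows: a small batch file run every few minutes that copies new files in both directions with robocopy (server names, shares, and the log path are placeholders).

        REM sync-uploads.cmd -- run as a Scheduled Task every few minutes
        REM copy anything newer from server1 to server2, then the reverse
        robocopy \\server1\uploads \\server2\uploads /E /XO /R:2 /W:5 /LOG+:C:\logs\upload-sync.log
        robocopy \\server2\uploads \\server1\uploads /E /XO /R:2 /W:5 /LOG+:C:\logs\upload-sync.log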

    Read the article

  • rkhunter 1.4: different results than the version before?

    - by dschinn1001
    with rkhunter version before ubuntu-update from 12.04 to 12.10 I had NOT these warnings like listed here: Performing file properties checks Checking for prerequisites [ Warning ] /usr/sbin/adduser [ Warning ] /usr/sbin/chroot [ Warning ] /usr/sbin/cron [ Warning ] /usr/sbin/groupadd [ Warning ] /usr/sbin/groupdel [ Warning ] /usr/sbin/groupmod [ Warning ] /usr/sbin/grpck [ Warning ] /usr/sbin/nologin [ Warning ] /usr/sbin/pwck [ Warning ] /usr/sbin/rsyslogd [ Warning ] /usr/sbin/tcpd [ Warning ] /usr/sbin/useradd [ Warning ] /usr/sbin/userdel [ Warning ] /usr/sbin/usermod [ Warning ] /usr/sbin/vipw [ Warning ] /usr/bin/awk [ Warning ] /usr/bin/basename [ Warning ] /usr/bin/chattr [ Warning ] /usr/bin/curl [ Warning ] /usr/bin/cut [ Warning ] /usr/bin/diff [ Warning ] /usr/bin/dirname [ Warning ] /usr/bin/dpkg [ Warning ] /usr/bin/dpkg-query [ Warning ] /usr/bin/du [ Warning ] /usr/bin/env [ Warning ] /usr/bin/file [ Warning ] /usr/bin/find [ Warning ] /usr/bin/GET [ Warning ] /usr/bin/groups [ Warning ] /usr/bin/head [ Warning ] /usr/bin/id [ Warning ] /usr/bin/killall [ Warning ] /usr/bin/last [ Warning ] /usr/bin/lastlog [ Warning ] /usr/bin/ldd [ Warning ] /usr/bin/less [ Warning ] /usr/bin/locate [ Warning ] /usr/bin/logger [ Warning ] /usr/bin/lsattr [ Warning ] /usr/bin/lsof [ Warning ] /usr/bin/lynx [ Warning ] /usr/bin/mail [ Warning ] /usr/bin/md5sum [ Warning ] /usr/bin/mlocate [ Warning ] /usr/bin/newgrp [ Warning ] /usr/bin/passwd [ Warning ] /usr/bin/perl [ Warning ] /usr/bin/pgrep [ Warning ] /usr/bin/pkill [ Warning ] /usr/bin/pstree [ Warning ] /usr/bin/rkhunter [ Warning ] /usr/bin/rpm [ Warning ] /usr/bin/runcon [ Warning ] /usr/bin/sha1sum [ Warning ] /usr/bin/sha224sum [ Warning ] /usr/bin/sha256sum [ Warning ] /usr/bin/sha384sum [ Warning ] /usr/bin/sha512sum [ Warning ] /usr/bin/size [ Warning ] /usr/bin/sort [ Warning ] /usr/bin/stat [ Warning ] /usr/bin/strace [ Warning ] /usr/bin/strings [ Warning ] /usr/bin/sudo [ Warning ] /usr/bin/tail [ Warning ] /usr/bin/test [ Warning ] /usr/bin/top [ Warning ] /usr/bin/touch [ Warning ] /usr/bin/tr [ Warning ] /usr/bin/uniq [ Warning ] /usr/bin/users [ Warning ] /usr/bin/vmstat [ Warning ] /usr/bin/w [ Warning ] /usr/bin/watch [ Warning ] /usr/bin/wc [ Warning ] /usr/bin/wget [ Warning ] /usr/bin/whatis [ Warning ] /usr/bin/whereis [ Warning ] /usr/bin/which [ Warning ] /usr/bin/who [ Warning ] /usr/bin/whoami [ Warning ] /usr/bin/unhide.rb [ Warning ] /usr/bin/gawk [ Warning ] /usr/bin/lwp-request [ Warning ] /usr/bin/heirloom-mailx [ Warning ] /usr/bin/w.procps [ Warning ] /sbin/depmod [ Warning ] /sbin/fsck [ Warning ] /sbin/ifconfig [ Warning ] /sbin/ifdown [ Warning ] /sbin/ifup [ Warning ] /sbin/init [ Warning ] /sbin/insmod [ Warning ] /sbin/ip [ Warning ] /sbin/lsmod [ Warning ] /sbin/modinfo [ Warning ] /sbin/modprobe [ Warning ] /sbin/rmmod [ Warning ] /sbin/route [ Warning ] /sbin/runlevel [ Warning ] /sbin/sulogin [ Warning ] /sbin/sysctl [ Warning ] /bin/bash [ Warning ] /bin/cat [ Warning ] /bin/chmod [ Warning ] /bin/chown [ Warning ] /bin/cp [ Warning ] /bin/date [ Warning ] /bin/df [ Warning ] /bin/dmesg [ Warning ] /bin/echo [ Warning ] /bin/ed [ Warning ] /bin/egrep [ Warning ] /bin/fgrep [ Warning ] /bin/fuser [ Warning ] /bin/grep [ Warning ] /bin/ip [ Warning ] /bin/kill [ Warning ] /bin/less [ Warning ] /bin/login [ Warning ] /bin/ls [ Warning ] /bin/lsmod [ Warning ] /bin/mktemp [ Warning ] /bin/more [ Warning ] /bin/mount [ Warning ] /bin/mv [ Warning ] /bin/netstat [ Warning ] /bin/ping [ 
Warning ] /bin/ps [ Warning ] /bin/pwd [ Warning ] /bin/readlink [ Warning ] /bin/sed [ Warning ] /bin/sh [ Warning ] /bin/su [ Warning ] /bin/touch [ Warning ] /bin/uname [ Warning ] /bin/which [ Warning ] /bin/dash [ Warning ] It seems that rkhunter 1.4 is somehow oversensitive about changed binary files? chkrootkit finds nothing and reports no warnings either.
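
    If the binaries changed legitimately (which a 12.04 to 12.10 upgrade would explain), rkhunter's stored file-properties baseline is simply stale. The usual fix, sketched below, is to rebuild the baseline and re-run the check:

        sudo rkhunter --propupd      # refresh the file-properties database against the current binaries
        sudo rkhunter --check --sk   # re-run the scan; --sk skips the keypress prompts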

    Read the article

  • Designing Mobile SMS text advertising system

    - by Ramraj Edagutti
    Currently, I am working on a product that has an SMS text advertising system: we set up advertising campaigns for clients, and these campaigns are later sent to the end users. It is very similar to Google AdWords, but targeted at mobile users via SMS. To give an overview of the system: each campaign is mapped to an advertiser; a campaign has a start date and an end date; a campaign has filter condition(s) or a query to select the target user base from our database (the users we send the campaign to); the target user base can be fixed (e.g. send the campaign to 10,000 users) or dynamic, based on a query condition (e.g. send the campaign to users who are active and from a particular state, district, town, etc., so the user base keeps changing on a daily basis); a campaign can have multiple campaign messages; each campaign message has a start date and an end date; and each campaign message can have multiple message texts for different locales (e.g. English, Hindi, Telugu). After creating an advertising campaign, we run a nightly job to provision the target user base for that particular campaign into a separate table, and another daily job runs in the morning, checks the provisioned table for campaigns and targeted users, and sends the campaign to the users via SMS. The problem is that the current UI for creating advertising campaigns is designed in a very technical manner; a normal user, business owner, or client cannot use it to create a campaign. The reasons the UI is so technical: the filter condition / query input field takes user IDs, mobile numbers, or SQL queries, and almost every time we use big SQL queries, so we end up storing SQL queries in the database for a campaign and later using them to fetch the targeted user base; and for scheduling the campaigns we have an input field on the UI which takes Quartz cron expression(s) (e.g. send the campaign on "0 0 9 1-10 MAR 2012"), again very technical in nature. Because normal users and business owners cannot use the UI for the reasons above, we (the developers) are currently helping clients set up and create campaigns ourselves. We are trying to redesign the UI to make it user-friendly enough that any user can go to the UI and create an advertising campaign by himself. I am thinking of redesigning the current UI to be similar to the Google AdWords interface, especially for selecting target users based on geography such as country, state, and city. I also need to select users based on user subscription(s), which might make the system even more complex. And for campaign scheduling, I am thinking of using weekdays with hours: for example, I will show Monday to Sunday on the UI, and the user can select the from hours, to hours, etc. Any better ideas or suggestions on how to design the UI in a user-friendly manner, and what design should be followed in the server-side code (we write the backend in Java/JPA/Spring/Quartz)? I am also looking for ideas or design patterns on how to build SQL queries programmatically on the server side (using JPA/Hibernate), based on various conditions such as country, state, town, village, and user subscriptions.
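
    For the last part, one common pattern (a sketch, not the product's actual schema; the User entity and its fields are hypothetical) is to build the target query with the JPA Criteria API instead of storing raw SQL strings, adding one predicate per filter the UI exposes:

        import javax.persistence.EntityManager;
        import javax.persistence.criteria.CriteriaBuilder;
        import javax.persistence.criteria.CriteriaQuery;
        import javax.persistence.criteria.Predicate;
        import javax.persistence.criteria.Root;
        import java.util.ArrayList;
        import java.util.List;

        public class CampaignTargetQuery {
            private final EntityManager em;

            public CampaignTargetQuery(EntityManager em) { this.em = em; }

            // Builds the target-user query from whatever filters the campaign UI selected.
            public List<User> findTargets(String state, boolean activeOnly, int maxUsers) {
                CriteriaBuilder cb = em.getCriteriaBuilder();
                CriteriaQuery<User> cq = cb.createQuery(User.class);
                Root<User> user = cq.from(User.class);

                List<Predicate> predicates = new ArrayList<Predicate>();
                if (state != null) {
                    predicates.add(cb.equal(user.get("state"), state));
                }
                if (activeOnly) {
                    predicates.add(cb.isTrue(user.<Boolean>get("active")));
                }
                cq.select(user).where(predicates.toArray(new Predicate[0]));

                return em.createQuery(cq).setMaxResults(maxUsers).getResultList();
            }
        }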

    Read the article

  • Help writing server script to ban IP's from a list

    - by Chev_603
    I have a VPS that I use as an openvpn and web server. For some reason, my apache log files are filled with thousands of these hack attempts: "POST /xmlrpc.php HTTP/1.0" 404 395 These attack attempts fill up 90% of my logs. I think it's a WordPress vulnerability they're looking for. Obviously they are not successful (I don't even have WordPress on my server), but it's annoying and probably resource consuming as well. I am trying to write a bash script that will do the following: Search the apache logs and grab the offending IP's (even if they try it once), Sort them into a list with each unique IP on a separate line, And then block them using the IP table rules. I am a bash newb, and so far my script does everything except Step 3. I can manually block the IP's, but that's tedious and besides, this is Linux and it's perfectly capable of doing it for me. I also want the script to be customizable so that I (or anyone else who wants to use it) can change the variables to suit whatever situation I/they may deal with in the future. Here is the script so far: #!/bin/bash ##IP LIST GENERATOR ##Author Chev Young ##Script to search Apache logs and list IP's based on custom filters ## ##Define our variables: DIRECT=~/Script ##Location of script&where to put results/temp files LOGFILE=/var/log/apache2/access.log ## Logfile to search for offenders TEMPLIST=xml_temp ## Temporary file name IP_LIST=ipstoban ## Name of results file FILTER1=xmlrpc ## What are we looking for? (Requests we want to ban) cd $DIRECT if [ ! -f $TEMPLIST ];then touch $TEMPLIST ##Create temp file fi cat $LOGFILE | grep $FILTER1 >> $DIRECT/$TEMPLIST ## Only interested in the IP's, so: sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d $DIRECT/$TEMPLIST | sort | uniq > $DIRECT/$IP_LIST rm $TEMPLIST ## Clean temp file echo "Done. Results located at $DIRECT/$IP_LIST" So I need help with the next part of the script, which should ban the IP's (incoming and perhaps outgoing too) from the resulting $IP_LIST file. I don't care if it utilizes UFW or IPTables directly, as long as it bans the IP's. I'd probably run it as a cron task. What I'm having trouble with is understanding how to use each line of the result file as a separate variable to do something like: ufw deny $IP1 $IP2 $IP3, etc. Any ideas? Thanks.
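
    A sketch of the missing step 3: read the result file line by line and block each address (this uses ufw; the variables match the ones the script above already defines).

        ## Step 3: block every address in the results file
        while read -r ip; do
            ufw deny from "$ip" to any     ## drop all incoming traffic from this address
        done < "$DIRECT/$IP_LIST"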

    Read the article

  • Ubuntu 12.04 - PPTP VPN is the only Internet Access

    - by user212553
    I know this has been covered. I've read dozens of posts but still have questions. I have a work server whose traffic should never leave my house without encryption. The VPN is PPTP. Currently I have a cron job that checks the status of the ppp0 adapter each minute. If the connection drops, which it does fairly often, it shuts key components down. It's fairly easy to restart PPTP with "nmcli con up id 'myVPNServer'" but there's no assurance it will reconnect and I need a better way to stop traffic (other than killing apps) when ppp0 is down. The two options I've seen discussed are the firewall (UFW, Firestarter, IPTables) or the route tables. I could be easily swayed to consider the firewall option but I focused on the route tables since no new function needs to be started. My questions involve the way the route tables change and then specifics on rules. When I start the PPTP VPN the route tables change. That suggests that if the VPN drops, the table will change back, defeating my stated intent of preventing external traffic. How can I make "sticky" changes to the route table that will persist even if the VPN connection drops? Perhaps the check boxes "Ignore automatically obtained routes" or "Use this connection only for resources on it's network" (which are part of the VPN configuration options)? It would seem that, if I can force the active VPN route table to stay in effect, even when the VPN drops, that this will effectively kill any external traffic should the VPN drop. This will give me the latitude to run a routine to restart the VPN from the command line (assuming the route table rules don't prevent me re-establishing the connection). My route table, with the VPN active is (ip route list): Any comments on what 10.10.1.1 is? $ ip route list default dev ppp0 proto static 10.10.1.1 dev ppp0 proto kernel scope link src 10.10.1.11 VPN_Server_IP_Address via 192.168.1.1 dev eth0 proto static VPN_Server_IP_Address via 192.168.1.1 dev eth0 src 192.168.1.60 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.60 metric 1
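
    If the firewall option turns out to be easier than fighting the route table, a kill-switch sketch with ufw looks roughly like this (the VPN server address and LAN range are placeholders; the host-wide rule for the endpoint covers both TCP 1723 and GRE, which PPTP needs):

        sudo ufw default deny outgoing
        sudo ufw default deny incoming
        sudo ufw allow out on eth0 to VPN_SERVER_IP     # the PPTP endpoint itself
        sudo ufw allow out on eth0 to 192.168.1.0/24    # local LAN / router
        sudo ufw allow out on ppp0 to any               # everything else must use the tunnel
        sudo ufw enable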

    Read the article

  • uninstall google chrome in fedora

    - by tbleckert
    Yesterday I installed Fedora 15 Beta with GNOME 3 - it works well. One problem though is that I installed Chrome 32-bit (which was wrong, should have been the 64-bit version) and now I can't uninstall it. I can't find it in Add/Remove Software, and I also can't install the correct version of Chrome because it complains about my other copy of Chrome. Any ideas how I can remove the existing copy and get the 64-bit version installed? Here's the message I get when trying to install: Test Transaction Errors: file /etc/cron.daily/google-chrome from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/chrome from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/chrome-sandbox from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/libffmpegsumo.so from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/libpdf.so from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/libppGoogleNaClPluginChrome.so from install of google-chrome-stable-11.0.696.65-84435.x8...
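
    If the package manager GUI can't see it, the 32-bit build can usually be removed from the command line using the package name shown in the transaction error, and the 64-bit build installed afterwards (a sketch; the download filename follows Google's current naming and may differ):

        sudo rpm -e google-chrome-stable-11.0.696.65-84435.i386     # remove the 32-bit package
        wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
        sudo yum localinstall google-chrome-stable_current_x86_64.rpm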

    Read the article

  • Ubuntu virtual memory caches suck up memory

    - by Tom
    Hey all, I've got an Ubuntu 9.10 64-bit server that seems to use up all available memory. According to my munin graphs, almost all of the memory used up is in the swap cache, cache, and slab cache. (I take this to mean virtual memory caches, am I right in assuming this?) Once memory usage approaches 100%, some (although not all) system services such as SSH become sluggish and unresponsive. After rebooting the system, performance and memory usage become normal for a time. Some interesting tidbits: The system runs Apache 2, MySQL, Munin, and sshd. The memory usage spikes happen at the same time every night (at 10 PM sharp.) There appears to be nothing in the crontab for any of the users, and nothing in /etc/cron.d/* out of the ordinary, let alone something that would occur at 10 PM. My question is, how do I figure out what is causing the memory suckage? I've tried the usual utilities (e.g. ps, top, etc) but I can't seem to find anything unusual. Any ideas? Thanks in advance!
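
    A couple of quick checks worth running (a sketch): confirm how much of the "used" memory is actually reclaimable cache, and hunt for whatever kicks off at 22:00, including per-user crontabs.

        free -m                                   # the -/+ buffers/cache line shows real application usage
        grep -i slab /proc/meminfo                # how big the slab cache actually is
        sudo ls /var/spool/cron/crontabs/         # per-user crontabs (not covered by /etc/cron.d)
        grep -r " 22 " /etc/crontab /etc/cron.d/  # any job whose hour field is 22 (10 PM)?
        # diagnostic only: drop page/dentry/inode caches and see whether "used" memory falls
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches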

    Read the article

  • How can I filter /var/adm/wtmpx on Solaris 10?

    - by Yanick Girouard
    Some of our Solaris 10 servers are monitored using SiteScope, which uses Telnet to probe certain ports (SSH is one of them) every few minutes. This is creating an insane number of lines in /var/adm/wtmpx, and eventually makes it so big (2.5 GB+) that we can no longer run the last command, and the uptime command is unable to accurately show the true uptime of the server. The error we get when trying to run the last command is this: /var/adm/wtmpx: Value too large for defined data type I have found ways we can clean this accounting log using a cron job (with the command /usr/lib/acct/fwtmp), and this works. This is not the issue. I was wondering if there would be a way to simply prevent connections from the monitoring user (in our case, user monsite) from creating entries in this accounting log at all. Is this possible, and if so, how can I do it? I've looked around and searched Google for a while, but couldn't find an answer to this question. NOTE: We are very well aware that the monitoring solution we employ is perhaps not the best one, but we cannot change it at this time. Therefore, suggesting that we change it is not pertinent to this question. If you want to read more on the SiteScope monitoring solution we employ for those servers, please see its documentation here and look for Port Monitor, and Connecting to remote UNIX servers, which explains how it works.

    Read the article

  • wget not working with domain on local machine

    - by user568829
    Basically - I have some PHP scripts that need to be run as cron jobs. Lets say the script needing to be run is: http://admin.somedomain.com/cron_jobs/get_stats If I run the script from the local machine it gives me a 404 Not Found error. So I entered the following into /etc/hosts XX.XX.XX.45 admin.somedomain.com Now wget works fine from the local machine to that domain. However when I restart Apache that domain no longer works. Here is the config for that site in /etc/apache2/sites-available NameVirtualHost XX.XX.XX.45:80 <VirtualHost XX.XX.XX.45:80> ServerName admin.somedomain.com DocumentRoot /var/www/admin.somedomain.com/ <Directory "/var/www/admin.somedomain.com"> allowoverride all Options Indexes order deny,allow allow from all </Directory> ErrorLog /var/log/apache2/admin.somedomain.com-error_log CustomLog /var/log/apache2/admin.somedomain.com-access_log combined </VirtualHost> It just goes to the default site config showing "It Works". If I take out that setting in /etc/hosts and restart apache the website at that domain works fine again. Can anyone point me in the right direction here? Thanks
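
    One thing worth trying while debugging (a sketch): take DNS and /etc/hosts out of the picture entirely by pointing wget at the server's own IP and supplying the Host header the name-based vhost expects.

        wget --header="Host: admin.somedomain.com" \
             "http://XX.XX.XX.45/cron_jobs/get_stats" -O /dev/null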

    Read the article

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync+cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this. The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance backlog is at the disk (top shows the cp using a lot of RAM but not a lot of CPU and mostly in uninterruptible-sleep state) We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.
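
    Besides the btrfs and inode-placement questions, one change that often removes the separate cp -al pass entirely is letting rsync build the hard-link snapshot itself with --link-dest, roughly like this (paths and host are placeholders; the first run simply copies everything if yesterday's directory does not exist):

        TODAY=$(date +%F)
        YESTERDAY=$(date -d yesterday +%F)
        rsync -a --delete \
              --link-dest=/backup/volume1/$YESTERDAY \
              server:/srv/volume1/ \
              /backup/volume1/$TODAY/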

    Read the article

  • Puppet classes out of order despite explicit arrow operator use

    - by Alexandr Kurilin
    Absolute puppet beginner here. I'm experiencing an interesting behavior with my puppet manifests and would love to know what I'm doing wrong. Let's for example say I'm configuring the instance with the following ordered classes: class { 'update_system': } -> class { 'facter': } -> class { 'user_sshkey': user => 'ubuntu', type => 'rsa', } -> class { 'tmux': user => 'ubuntu', } -> class { 'vim': user => 'ubuntu', } -> class { 'bashrc': user => 'ubuntu' } -> notify {"Configuring DB role":} -> class { 'postgresql': } when I run the manifest with the --debug switch, by looking at notify statements I can see the classes be executed in the following order: 1. update_system starts 2. a cron type inside of postgresql class (the very **last** class in that ordered list above) is executed 3. postgres::install starts 5. facter starts installing 6. postgres::configure and postgres::service start 7. the vim class is executed 8. "Configuring DB role" notification is made. All the way at the end here. etc Basically the thing is all over the place, the order doesn't seem to follow the arrow operators in any way. I'm guessing I'm missing something here that would force the classes to execute one at a time. Could it be that I'm missing some kind of anchor pattern here? Invalid containment?
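
    It is indeed the containment problem the question guesses at: the arrows order the class resources themselves, but not the resources declared inside those classes unless the classes contain them. Before the contain function existed, the stdlib anchor pattern was the usual fix; a sketch (the module internals are illustrative, not the real postgresql module):

        class postgresql {
          # anchors from puppetlabs-stdlib pin the inner classes inside this class,
          # so an arrow pointing at Class['postgresql'] really does wait for them
          anchor { 'postgresql::begin': } ->
          class  { 'postgresql::install': } ->
          class  { 'postgresql::configure': } ->
          class  { 'postgresql::service': } ->
          anchor { 'postgresql::end': }
        }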

    Read the article

  • Sendmail delivering locally instead of to MTA in MX record

    - by CreativeNotice
    Ok, so I've got a box named websrv1.mydomain.com. It's a web server running ubuntu, apache2, sendmail, etc. My email is outsourced to a third party. So in my DNS I've got MX set to mx.thirdparty.net. I've no reason to accept incoming mail on my web server; every email should be sent to the third party. This works correctly except when sending mail from the webserver (i.e. via cron or the console). So from my web server, if I send an email to [email protected], it just disappears. No errors, nothing in dead.letter, nothing. I can send to any other address with no issues. If I send to [email protected] it's delivered locally which is fine. 1) Doing an nslookup shows the mx record is correct. 2) Running /mx mydomain.com from sendmail -bt returns the correct result. 3) Running sendmail -bv [email protected] returns: sudo sendmail -bv [email protected] [email protected]... deliverable: mailer esmtp, host mydomain.com., user [email protected] 4) Running 3,0 [email protected], returns: 3,0 [email protected] canonify input: me @ mydomain . com Canonify2 input: me Canonify2 returns: me canonify returns: me parse input: me Parse0 input: me Parse0 returns: me Parse1 input: me MailerToTriple input: me MailerToTriple returns: me Parse1 returns: $# esmtp $@ mydomain . com . $: me parse returns: $# esmtp $@ mydomain . com . $: me So I'm at a loss. Sendmail seems to see the mx record, but it's not using it.
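
    Two things worth checking (a sketch, assuming a stock sendmail layout and that FEATURE(mailertable) is enabled in sendmail.mc): whether the domain is listed as a locally-delivered one, and whether a mailertable entry can force it out to the third-party MX.

        # is the domain treated as local? (it should not appear here)
        grep -i mydomain.com /etc/mail/local-host-names

        # force the domain to the third-party relay regardless of DNS/local config
        echo "mydomain.com    esmtp:[mx.thirdparty.net]" >> /etc/mail/mailertable
        makemap hash /etc/mail/mailertable < /etc/mail/mailertable
        service sendmail restart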

    Read the article

  • Selenium server causes crazy load on server - how to prevent?

    - by Eric
    I'm running this linux: Linux host.themepark.com 2.6.32-220.4.1.el6.x86_64 #1 SMP Tue Jan 24 02:13:44 GMT 2012 x86_64 x86_64 x86_64 GNU/Linux And I run the Selenium stand-alone server on my box with this command: java -jar /home/l/cron/selenium-server-standalone-2.24.1.jar > /logs/selenium.log 2>&1 & Here's the problem: as soon as I do that, the server load starts skyrocketing. I even went back and downloaded older versions of the Selenium server, but same results with 2.23.1, 2.23.0, and 2.19.0. Note that the server load starts going nuts before I issue ANY commands to Selenium or do anything else. All I'm doing is firing up the server, per the command above. This used to work perfectly on my server without causing massive server load, so something has changed, but I'm not sure what. My server is a managed VPS so I don't know if there is some kind of auto-update script that kicked in or what... but it's a problem. (Incidentally, even though the server load climbs like crazy, everything still works: after firing up Selenium, my server creates a screen with Xvfb so Firefox will be happy, then a PHP script talks to Selenium to do what it needs to do before shutting everything down. It takes a LONG time, and the load gets all the way up to 8 [!!!] before it is finished, which kills my web server and makes the main site horribly unresponsive... but it does get everything done.) Any suggestions for what is going on, why it's started doing this and/or, most importantly, how I can make Selenium not kill the server when it starts up... would be GREATLY appreciated!
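
    Not an explanation for the regression, but a containment sketch: starting the server at reduced CPU and I/O priority keeps a runaway session from starving Apache while you track down what changed.

        nice -n 15 ionice -c3 \
            java -jar /home/l/cron/selenium-server-standalone-2.24.1.jar \
            > /logs/selenium.log 2>&1 &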

    Read the article

  • Server high memory usage at same time every day

    - by Sam Parmenter
    Right, we moved one of our main sites onto a new AWS box with plenty of grunt, as it would allow us more control than we had before and future-proof ourselves. About a month ago we started running into issues with high memory usage at the same time every day. In the morning an export is run to dump data to a file, which is then FTPed to a local machine for processing. The issues were coinciding with the rough time of the export, but when we didn't run the export one day, the server still ran into the same issues. The export has since been run at other times in the day while monitoring memory usage, to see if it spikes. The conclusion is that the export is fine and barely touches the sides memory-wise; there is no noticeable change in memory usage. When the issue happens, its effect is to kill mysql and require us to restart the process. We think it might be a mysql memory issue, but it might just be that mysql is the first to feel it. Looking at the logs, there is no particular query run before the memory usage hits 90%. When it strikes at about 9:20am, the memory usage spikes from a near-constant 25% to 98%, and the box very quickly kills mysql to save itself. It usually takes about 3-4 minutes to die. There are no cron jobs running at that time of the day and we haven't noticed a spike in traffic over the period of the issues. Any help would be massively appreciated! Thanks.
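
    Two quick diagnostics worth adding (a sketch): confirm whether it is the kernel OOM killer taking mysql down, and snapshot the biggest memory consumers every minute around 09:20 so the culprit is visible afterwards.

        # was mysql killed by the OOM killer?
        grep -iE "out of memory|oom-killer" /var/log/syslog /var/log/kern.log

        # crontab entry: log the top memory consumers every minute during the 9 o'clock hour
        * 9 * * * ps aux --sort=-rss | head -n 20 >> /var/log/mem-snapshots.log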

    Read the article

  • Managing Apache to Compensate for WebDAV's Security Masking

    - by Tohuw
    When a user creates a file via WebDAV, the default behavior is that the file is owned by the user and group running the Apache process, with a umask of 022. Unfortunately, this makes it impossible for unprivileged users to write to the files by other means without being a member of the group Apache runs under (which strikes me as a particularly bad idea). My current solution is to set umask 000 in Apache's envvars and remove all world permissions from the webdav parent directory for the user. So, if the WebDAV share is /home/foo/www, then /home/foo/www is owned by www-data:foo with permissions of 770. This keeps other unprivileged users out, more or less, but it's hokey at best and a security disaster waiting to happen at worst. From my research and poking around at mod_dav and Apache, I cannot find a reasonable solution short of a cron job flipping all the permissions back (I'd rather not have the load and increased complexity on the server). SuExec won't work, either, because WebDAV operations are not going to execute as a different user. Any thoughts on this? Thank you.
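
    One alternative to the umask 000 hack (a sketch; it assumes the filesystem is mounted with ACL support): put a default ACL on the WebDAV tree so files Apache creates are group-writable for the owner's group regardless of the daemon's umask.

        setfacl -R    -m g:foo:rwx /home/foo/www   # fix existing files and directories
        setfacl -R -d -m g:foo:rwx /home/foo/www   # default ACL: inherited by anything created later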

    Read the article

  • Magento Apache Config & Memory Issues

    - by cheshirepine
    I have a Magento installation on a VPS that is giving me a headache. This particular VPS has a reasonable spec - 2 GB of memory and 50 GB of storage. It runs a single domain, with a single Magento install - and nothing else. About 5 months ago we started having issues. Every so often (about once every 2 or 3 weeks) the VPS would crash - all processes stopped and the only way to restart the container is via Virtuozzo. Now, however, it's 2 or 3 times a week. My VPS hosts confirm I am breaching the 2 GB memory limit, at which point all VPS processes are killed to stop it bringing the entire node down. I have not made any config changes to it at all - I was running New Relic on it for a short while, but have removed that in case it was contributing to the issues. I can see nothing in the logs which indicates an issue and we have no cron jobs running at the time the crashes happen. The site generates steady, but not huge, amounts of traffic (usually averaging less than 100 visits per day). Is there anything in particular I should have done to the Apache or PHP configs to help? I'm not a massively experienced Apache admin, but know more than enough to solve most problems... Failing that, any other ideas that might help? Can't afford for this site to be down this much.
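
    A common first step on a 2 GB container (a sketch; the numbers are illustrative starting points, not tuned values): cap Apache's prefork workers so the worst case of MaxClients times Magento's per-process PHP footprint (often 60-100 MB) stays under the memory limit, and cap PHP itself.

        # /etc/apache2/apache2.conf (or the mpm_prefork config section)
        <IfModule mpm_prefork_module>
            StartServers          3
            MinSpareServers       3
            MaxSpareServers       5
            MaxClients           15
            MaxRequestsPerChild 500    # recycle workers so leaks don't accumulate
        </IfModule>

        # php.ini
        memory_limit = 256M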

    Read the article
