Search Results

Search found 3615 results on 145 pages for 'cron daily'.

Page 89/145

  • How to backup data on debian vps to dropbox?

    - by IBr
    I have a really simple private VPS with some web pages and a music server. I want to back up some configs and scripts to Dropbox or a similar service. The server has no GUI (except simple SSH X forwarding, which is neither convenient for constant use nor a full desktop); everything is controlled through SSH. So my question is: is it possible to set up a Dropbox client for command-line use? How? Are there any alternatives to Dropbox that have command-line clients? Also, is it possible to incorporate the backup into a script for a cron job?
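
    A minimal sketch of a cron-driven backup, assuming rclone (a third-party command-line sync tool) is installed and a Dropbox remote named "dropbox" has already been set up with rclone config; the paths, remote name and schedule below are placeholders, not a definitive setup:

        #!/bin/sh
        # backup.sh - tar up configs and scripts, then push the archive to Dropbox.
        # Assumes an rclone remote called "dropbox" already exists (rclone config).
        BACKUP=/tmp/vps-backup-$(date +%F).tar.gz
        tar czf "$BACKUP" /etc/nginx /home/user/scripts
        rclone copy "$BACKUP" dropbox:vps-backups/
        rm -f "$BACKUP"

    Scheduled from cron, e.g.: 30 3 * * * /home/user/bin/backup.sh >> /var/log/backup.log 2>&1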


  • lighttpd silently stops logging

    - by Max Cantor
    I'm on a Slicehost 256MB VPS with Ubuntu 9.04 (Jaunty). lighttpd is the only web server process running; it listens on port 80. My lighttpd.conf can be found here. I'm using Ubuntu's default logrotate setup for lighty. At seemingly random times, lighttpd will stop logging. It is not correlated with log rotation--that is, the errors do not occur when logrotate kicks in. What happens is, I will verify that the server is serving files by hitting a URL with my browser, and I will verify that it is not logging by checking access.log and seeing that the GET request I just made is not there. Using init.d to restart the process starts logging again, without truncating or rotating the log file. That is, new requests will be logged at the end of the existing access.log file. There are no cron jobs running on this box. Any ideas?


  • What's going on with my server? High load, lots of idle CPU time, low disk utilization

    - by Jonathan
    I run a web site and send a legitimate opt-in, daily email newsletter to subscribers. Both the web hosting and email sending are done by the same machine. I have about 100,000 subscribers who have opted in to my daily email newsletter. My PHP script did a pretty good job sending mail to all of them until fairly recently, but as the list has grown I can't keep up. When I run top, I have very high load--usually at least 6 or 7, sometimes as high as 15--even though I only have two CPUs. However, when I run sar, my CPU is idle an average of about 30% of the time. So, it seems I'm not CPU bound. When I run iostat, it seems as though I'm not disk bound because my %util for each device is very low (no more than 5%). Given that I don't seem to be CPU bound or disk bound, why is top reporting such high load? Additionally, since I don't seem to be CPU bound or disk bound, why is my email sending script not able to keep up?

    Here's what I see when running top:

        top - 11:33:28 up 74 days, 18:49, 2 users, load average: 7.65, 8.79, 8.28
        Tasks: 168 total, 5 running, 162 sleeping, 0 stopped, 1 zombie
        Cpu(s): 38.9%us, 58.6%sy, 0.8%ni, 0.0%id, 0.7%wa, 0.2%hi, 0.8%si, 0.0%st
        Mem:  3083012k total, 2144436k used,  938576k free,  281136k buffers
        Swap: 2048248k total,   39164k used, 2009084k free, 1470412k cached

    Here's what I see when running iostat -mx:

        avg-cpu: %user %nice %system %iowait %steal %idle
                 34.80  1.20   55.24    0.37   0.00  8.38

        Device: rrqm/s wrqm/s  r/s   w/s  rMB/s wMB/s avgrq-sz avgqu-sz  await svctm %util
        sda       0.19  71.70 1.59 29.45   0.02  0.07     5.90     0.55  17.82  1.16  3.59
        sda1      0.00   0.00 0.00  0.00   0.00  0.00     7.10     0.00  13.80 13.72  0.00
        sda2      0.05  50.45 1.13 24.57   0.01  0.29    24.25     0.35  13.43  1.15  2.97
        sda3      0.05  10.17 0.20  2.33   0.01  0.05    43.75     0.05  20.96  2.45  0.62
        sda4      0.00   0.00 0.00  0.00   0.00  0.00     2.00     0.00  70.50 70.50  0.00
        sda5      0.07   0.22 0.03  0.07   0.00  0.00    32.84     0.08 856.19  8.03  0.08
        sda6      0.02   5.45 0.03  0.72   0.00  0.02    67.55     0.02  26.72  5.26  0.39
        sda7      0.00   1.56 0.00  0.42   0.00  0.01    38.04     0.00   8.88  5.84  0.24
        sda8      0.01   3.84 0.20  1.35   0.00  0.02    28.55     0.05  31.90  4.08  0.63

    Here's what I see when running sar:

        09:40:02 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
        09:50:01 AM  all  30.59   1.01    49.80     0.23    0.00  18.37
        10:00:08 AM  all  31.73   0.92    51.66     0.13    0.00  15.55
        10:10:06 AM  all  30.43   0.99    48.94     0.26    0.00  19.38
        10:20:01 AM  all  29.58   1.00    47.76     0.25    0.00  21.42
        10:30:01 AM  all  29.37   1.02    47.30     0.18    0.00  22.13
        10:40:06 AM  all  32.50   1.01    52.94     0.16    0.00  13.39
        10:50:01 AM  all  30.49   1.00    49.59     0.15    0.00  18.77
        11:00:01 AM  all  29.43   0.99    47.71     0.17    0.00  21.71
        11:10:07 AM  all  30.26   0.93    49.48     0.83    0.00  18.50
        11:20:02 AM  all  29.83   0.81    48.51     1.32    0.00  19.52
        11:30:06 AM  all  31.18   0.88    51.33     1.15    0.00  15.47
        Average:     all  26.21   1.15    42.62     0.48    0.00  29.54

    Here are the top handful of processes listed at the particular time I happened to run top -c:

        PID   USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
        8180  mysql    16   0 57448  19m 2948 S 26.6  0.7  4702:26 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/bristno.pid --skip-external-locking
        26956 brristno 17   0     0    0    0 Z  8.0  0.0  0:00.24 [php] <defunct>
        26958 brristno 17   0 94408  43m  37m R  5.0  1.4  0:00.15 /usr/bin/php /home/brristno/public_html/dbv.php
        22852 nobody   16   0  9628 2900 1524 S  0.7  0.1  0:00.17 /usr/local/apache/bin/httpd -k start -DSSL
        8591  brristno 34  19 96896  13m 6652 S  0.3  0.4  0:29.82 /usr/local/bin/php /home/brristno/bin/mailer.php 1qwqyb6 i0gbor
        24469 nobody   16   0  9628 2880 1508 S  0.3  0.1  0:00.08 /usr/local/apache/bin/httpd -k start -DSSL
        25495 nobody   15   0  9628 2876 1500 S  0.3  0.1  0:00.06 /usr/local/apache/bin/httpd -k start -DSSL
        26149 nobody   15   0  9628 2864 1504 S  0.3  0.1  0:00.04 /usr/local/apache/bin/httpd -k start -DSSL


  • Is there a software package that safely allows SSH via web on simple web host?

    - by spoulson
    I want to be able to use a secured web page on my shared web host to make SSH connections out to any destination. A shared web host is cheap and easy to maintain, and usually allows SSH to the web server. There are times I'd like to SSH into my web server but don't have direct SSH connectivity. I'm aware of consoleFISH, Ajaxterm, and Anyterm. The problem is that consoleFISH is a man-in-the-middle by design, and Ajaxterm/Anyterm require running a daemon process on the hosting server. Web hosts can usually support cron jobs, but not continuously running daemon processes. Additional Apache modules are usually out, too, as they require reconfiguration of the server and affect all other customers. Are there any software packages out there I can run on my shared web hosting account that provide a true SSH experience within these limitations?


  • Successful su for user by root in /var/log/auth.log

    - by grs
    I have these sorts of entries in my /var/log/auth.log:

        Apr 3 12:32:23 machine_name su[1521]: Successful su for user1 by root
        Apr 3 12:32:23 machine_name su[1654]: Successful su for user2 by root
        Apr 3 12:32:24 machine_name su[1772]: Successful su for user3 by root

    Situation: all users are real accounts in /etc/passwd; none of the users has its own crontab; all of those users logged in to the machine some time earlier via SSH or NoMachine - the time varies from a few minutes to a few hours; no cron jobs are scheduled to run at that time, and anacron is removed. I can see similar entries for other days and at other times. The common factor is that the users are logged in when it appears. It does not appear during login, but some time afterwards. This machine has a similar setup to a few others, but it is the only one where I see these entries. What causes them? Thanks


  • Solution to Manage and Monitor (Ubuntu) Machines

    - by Elmar Weber
    I'm looking for a tool like Canonical's Landscape (system management and monitoring for Ubuntu) that is open source and free. The goal is to manage a dozen or so KVM machines for private testing purposes. I know of Puppet and Munin or RHQ as separate tools to manage and monitor, but I'd prefer something integrated. Any tips? Basic requirements would be:

    - system package management and updates (individual selection for each managed node)
    - configuration of basic system services (users, NFS, cron, ideally also Apache)
    - monitoring (charting of system resources: disk, IO, memory, etc.) and alerting, ideally with a default configuration of sensible alert values


  • How can I update Firefox add-ons automatically?

    - by Maelstrom
    Similar to this question, is it possible to update installed plugins via the command line? I'm running YSlow with beacon reporting as a nightly cron job under OS X:

        /Applications/Firefox.app/Contents/MacOS/firefox-bin -no-remote -P YSlow http://www.example.com/ &
        PID=$!
        sleep 300
        kill $PID

    This dumps Firefox into the background and grabs the PID, waits 300 seconds (for the page to load), then kills it. If there is an update pending, the browser "hangs" waiting for a confirmation. If I do click on the "install updates" link, everything works, and then Firefox launches a new process - the $! returned by the shell is no longer valid. Can I update a plugin from the command line without confirmation? Can I curl the XPI into a file and install it without confirmation?


  • How long do uploaded files stay in the tmp folder in Linux Ubuntu?

    - by Jean-Nicolas Boulay Desjardins
    I am building a web application where my users will be able to upload files. After the files are uploaded I need to send them to two other servers, and after that they will be deleted from the server they were just uploaded to. I am wondering: is it a good idea to keep the uploaded files in the tmp/ folder for the time it takes to send them to the other two servers, or should I move them to another folder in case they get deleted? I am also wondering because I would like to know whether I have to build a cron script to get rid of the files that have been transferred to the other servers, so that I get my disk space back.
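
    If the uploads go into a dedicated spool directory rather than tmp/, a small cron job can reclaim the space once files are old enough to have been pushed to both servers. A sketch only; the path and age threshold are assumptions:

        #!/bin/sh
        # /etc/cron.daily/clean-upload-spool (sketch)
        # Delete spooled uploads older than one day; by then they should have
        # been copied to both destination servers.
        find /var/spool/uploads -type f -mtime +1 -delete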


  • How to delete files quicker than rm -rf?

    - by Byakugan
    Is there any way to delete folders/files quicker than with the command rm -rf? It seems my disk is filled with billions of files (PHP5 sessions) which were not deleted by cron, so I need to delete them manually, but it takes hours and still isn't reducing the amount. Thank you. My command:

        rm -rf /var/lib/php5/*

    I have also tried these commands:

        find /var/lib/php5 -name "sess_*" -exec rm {} \;

    and

        perl -e 'chdir "/var/lib/php5/" or die; opendir D, "."; while ($n = readdir D) { unlink $n }'
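
    Two approaches that are often faster than rm -rf on directories with huge numbers of files, shown here only as a sketch: GNU find's -delete (which unlinks files itself instead of forking rm for each one, unlike -exec rm {} \;), and rsync'ing an empty directory over the full one:

        # Let find unlink the session files directly
        find /var/lib/php5 -maxdepth 1 -type f -name 'sess_*' -delete

        # Or mirror an empty directory onto the full one, deleting its contents
        mkdir /tmp/empty
        rsync -a --delete /tmp/empty/ /var/lib/php5/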


  • Monitor a log file on Linux and send each line to another program

    - by mlambie
    I run an apt-cacher-ng server on Ubuntu Linux which writes logs in the following format:

        1299745593|O|149406|XXX.XXX.XXX.XXX|uburep/pool/main/t/tiff/libtiff4_3.9.2-2ubuntu0.4_amd64.deb
        1299745593|O|10154976|XXX.XXX.XXX.XXX|uburep/pool/main/l/linux-firmware/linux-firmware_1.34.4_all.deb
        1299748529|O|39368|XXX.XXX.XXX.XXX|uburep/pool/main/n/nagios-nrpe/nagios-nrpe-server_2.12-4ubuntu1_amd64.deb
        1300155440|O|680100|XXX.XXX.XXX.XXX|uburep/pool/main/t/tzdata/tzdata_2011c-0ubuntu0.10.04_all.deb

    It shows the timestamp, direction (in or out), byte count, IP and filename. Every time a line is written to it, I'd like to also send that line to another program. I will have this program insert the line into a database so that I can crunch some statistics about how much bandwidth we're saving through operating a caching server. I do not want to cat the log file every X minutes (via cron) looking for new entries as it'd be somewhat computationally uneconomical. Instead I'd prefer to have a daemon monitor the log, and when a change is detected, each line is sent to my database-insertion script. Will swatch achieve this, or are there better options?
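
    Besides swatch, one lightweight possibility is to let tail -F follow the log and hand each new line to the insertion script as it appears; a sketch, with the log path and handler name as assumptions:

        #!/bin/sh
        # follow-acng-log.sh - stream new apt-cacher-ng log lines into a handler.
        # tail -F keeps following the file across log rotation renames.
        tail -n 0 -F /var/log/apt-cacher-ng/apt-cacher.log | \
        while IFS= read -r line; do
            /usr/local/bin/insert-log-line.sh "$line"
        done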


  • Is there a lightweight MTA for Ubuntu 9.10 Desktop?

    - by Joe Casadonte
    I'm writing a Perl script to run as a cron job, and I want to email results & errors to a local account on the laptop. I'd like something that can talk SMTP (do any MTAs not adhere to SMTP?). I use Thunderbird 3, so I'll also need a POP/IMAP server (unless T-Bird can read straight from an mbox file; I'll have to check into that). No need for spam controls as I'll lock it down real tight, only accepting mail originating from the laptop itself. Thanks!


  • Is there a way to rsync in batches?

    - by Chris
    I have a huge chunk of data (11G) in a Subversion repository that I'm migrating to Alfresco with rsync; Lucene indexes the new files as they hit the file system. I'm using a DAV mount as a proxy to allow me to rsync. The issue I'm having is that the post-rsync indexing is quite an expensive operation for such a huge chunk of data, so I was wondering whether there's a way I could logically separate the rsync into identically-sized batches (say 500MB each) so I could schedule them in cron. At the moment I'm traversing the top-level folders and taking the smallest ones across first, but once I'm done with those, the much larger sub-directories are going to be quite troublesome. Please let me know if you need any further info. Thanks in advance.
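
    rsync has no built-in size-based batching, but one workaround is to split the file list into chunks and feed each chunk to rsync with --files-from; the sketch below batches by file count rather than exact megabytes, and the source and mount paths are assumptions:

        #!/bin/sh
        # batch-rsync.sh - copy the repository to the Alfresco DAV mount in chunks,
        # so each pass only triggers indexing for one batch of files.
        SRC=/data/svn-export
        DST=/mnt/alfresco-dav
        cd "$SRC" || exit 1
        find . -type f | sort > /tmp/all-files.txt
        split -l 500 /tmp/all-files.txt /tmp/batch.
        for list in /tmp/batch.*; do
            rsync -a --files-from="$list" . "$DST"
            sleep 60   # give the indexer a breather between batches
        done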


  • Backup Script - Could Not Open Input File

    - by Iestyn
    This is the backup script that I've got going: http://pastebin.com/4g4E6wUz

    This is the cron info:

        /usr/local/bin/php /home/backups/backup-db.php --filename-dated ALL

    No matter what I do, I keep on getting this error:

        Could not open input file: /home/backups/backup-db.php

    That's the correct location of the file. I just don't know what else to try. I feel I've been working on this for so long now that I've explored every avenue; on the other hand, sometimes I think that the time I've spent on it is clouding my thoughts and I'm missing something stupidly obvious. Just wondering if someone can give me a few pointers? Also, on a last note, does anyone know of a way/article to auto-generate a full backup of cPanel every * amount of days and store it in a location that I want? Kind regards.


  • automated printouts from a wireless printer

    - by Piotr
    I have a wireless printer which is always on, and an always-on fanless Linux server. Looking at the mprinter project on Kickstarter, I started to wonder if there is already an app somewhere on the internet that will prepare an automated daily printout based on some settings. Things to be printed could include: the weather forecast for my locations, todos scheduled for that day, a "quote of the day" or "word of the day", stats from Google Analytics for my site, and many more. I would set the printout for 6:15 every work day so it's on my printer when I am already up, having a coffee. Anyone know of something that can be used for such a purpose? While I know this can be done by combining the power of TeX, cron and a scripting language to manage the dynamic part of the PDF, I believe this is a use case someone has already addressed.
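
    There may be no ready-made package, but the cron-plus-script approach is small enough to sketch; this assumes CUPS knows the wireless printer under the name "office", and the content sources are placeholders:

        #!/bin/sh
        # morning-sheet.sh - assemble a daily text page and send it to the printer.
        OUT=/tmp/morning-$(date +%F).txt
        {
            date '+%A, %d %B %Y'
            echo "--- Weather ---"
            curl -s 'http://weather.example.com/today.txt'   # placeholder feed
            echo "--- Word of the day ---"
            shuf -n 1 /usr/share/dict/words
        } > "$OUT"
        lp -d office "$OUT"

    With a crontab entry such as: 15 6 * * 1-5 /home/piotr/bin/morning-sheet.sh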


  • chroot for insecure program execution

    - by attwad
    Hi, I have never set up a chroot-jailed environment before and I am afraid I need some help to do it well. To explain shortly what this is all about: I have a webserver to which users send Python scripts to process various files that are stored on the server (the system is for research purposes). Every day a cron job starts the execution of the uploaded scripts via a command of this kind:

        /usr/bin/python script_file.py

    All of this is really insecure, and I would like to create a jail into which I would copy the necessary files (uploaded scripts, files to process, the Python binary and its dependencies). I already looked at various utilities to create jails, but none of them seemed up to date or they lacked solid documentation (i.e. the links proposed in "How can I run an untrusted python script"). Could anyone guide me to a viable solution to my problem? Like a working example of a script that creates a jail, puts some files in it and executes a Python script? Thank you very much.
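
    A very rough sketch of the idea: copy the Python binary plus the libraries ldd reports into a jail directory, then run the uploaded script inside it with chroot. A real jail also needs Python's standard library, /dev/null and so on, so treat this purely as a starting point:

        #!/bin/bash
        # make-jail.sh - minimal chroot containing a python interpreter (sketch).
        JAIL=/srv/pyjail
        mkdir -p "$JAIL"/{usr/bin,lib,lib64,work}
        cp /usr/bin/python "$JAIL/usr/bin/"
        # Copy the shared libraries the interpreter links against
        ldd /usr/bin/python | awk '/\//{print $(NF-1)}' | while read -r lib; do
            mkdir -p "$JAIL$(dirname "$lib")"
            cp "$lib" "$JAIL$lib"
        done
        cp /home/upload/script_file.py "$JAIL/work/"
        chroot "$JAIL" /usr/bin/python /work/script_file.py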


  • User account automatically filling up with dead.letter file

    - by jeroen
    I have one user account, on a server with about 400 accounts, that is filling up automatically. The dead.letter file in the user's home directory grows automatically until the account is full (about 10-40 MB per day). The user is using Microsoft Outlook to send and receive mail. What can be causing this and how can I keep it from happening? Right now I have an emergency cron job to delete the file, but I would like a "real" solution. Edit: the server version is Red Hat Enterprise Linux ES release 4 (Nahant Update 4). Edit 2: it seems to be mainly spam; I see different mailer headers (from PHP to Outlook Express) and a frequently appearing header is [email protected]. Update: I have asked the hosting provider where I use that dedicated server to look into the problem as well, as it's their control panel that could be a cause of the problem.


  • windows VPS running apache and mysql, php scripts running slow.. but cpu usage is 1-3%..

    - by Roeland
    So every night I run some cron jobs. They take probably about 20 minutes to process all the records. I gather the script does something like 10,000 SQL queries. I figured this task was just that intense and needs time to complete, but I looked at CPU and memory usage, and it is super low. CPU usage is between 1-3% and once in a while bounces to 50-ish for 2-3 seconds. This VPS is running Windows Server 2003 with Apache and MySQL. Does this sound right?


  • vps running out of memory, 200MB free

    - by demon
    At the beginning of this year I took a VPS for my website because I was running up against the resource limits of shared hosting. Here are the things I know: 2GB memory, with 1GB swap; Debian x64 server edition installed; software running on the webserver: mysql, apache, postfix, pop3, imap, amavisd, clamd, cron, fail2ban, munin-node, pure-ftpd, spamd, nginx. Now for the setup: nginx listens on port 80 and handles the static files; the PHP side is done by apache2 running mod_php in combination with APC (no variable caching!). I am using a pretty 'busy' Drupal and phpBB stack on the server; for Drupal I am using Boost and Authcache to handle the server load, with a Pressflow stack. phpBB is just phpBB3 with some mods installed, and has at most 30 users online at a time. The problem is that it starts to use the swap a few days after a reboot, and thus the site becomes slower. I've added pictures of Monit and Munin, so maybe somebody can help me out... (Monit and Munin graphs were attached to the original post.)


  • Backup of images

    - by Sam Kong
    I've just installed Ubuntu for a file server. It will share a folder (Samba) and employees of my company will save photos to it. Currently the total amount of photos is about 100GB, and every day about 20MB will be added. My question is about the backup plan. I want to back up the photos to a remote server using a cron job. I can think of two things: rsync and git. Image files won't be changed, so rsync will do. But as people say, I should git all my data. What would you do? Thanks. Sam
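
    Since the photos never change once uploaded, a nightly rsync over SSH is usually enough; a sketch, with the host name and paths as placeholders:

        # /etc/cron.d/photo-backup (sketch): push new photos to the remote box at 02:30
        30 2 * * * root rsync -a --ignore-existing /srv/photos/ backup@remote.example.com:/backup/photos/

    rsync only transfers files the remote side doesn't have yet; --ignore-existing additionally makes sure files already backed up are never overwritten.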


  • Mongo Scripting the shell

    - by cKendrick
    On my production stack, I have a front-end server and a Mongo server. I would like to be able to set up a cron job on the front-end server to create some logs daily. I wrote a script that does this:

        ./mongo server:27017/dbname --quiet my_commands.js

    If I run it from the Mongo server as above, it works fine. However, I would like to be able to run it from the front-end server. When I try to do that, I get:

        -bash: mongo: command not found

    Since mongo is not installed on the front-end server, it gives me that error. Is it possible to somehow bind mongo to my mongo on the Mongo server?
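
    Two common ways around this, sketched with placeholder names: install just the client shell on the front-end box (on Debian/Ubuntu the shell has lived in a package separate from the server, e.g. mongodb-clients), or leave the shell where it is and invoke it over SSH from the front-end's cron (which assumes a passwordless SSH key):

        # Option 1: install only the client tools on the front-end server
        apt-get install mongodb-clients

        # Option 2: run the existing script remotely from the front-end's crontab
        0 1 * * * ssh mongo-server '/usr/bin/mongo localhost:27017/dbname --quiet /home/user/my_commands.js' >> /var/log/mongo-report.log 2>&1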


  • Looking for a Linux stream ripper that can be scheduled

    - by Anthony D
    I have an MP3 stream I want to schedule a recording of. I can do it using wget to a file; it's just a straight MP3 stream. However, I'd like to use a command-line stream ripper that will do a better job. Anyone know of one? Update 1: wget is grabbing whatever part of the stream it comes in on, which may not really be the start of a frame in the MP3 file. Also, wget is not really schedule-ready: I experimented with starting it from a cron job and then killing it later, and this produced a file that didn't really start and stop where I wanted.
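
    streamripper is one command-line ripper that understands MP3 frame boundaries and has a built-in time limit, which makes it easier to schedule than wget; a sketch only (flags quoted from memory, so verify against streamripper --help), remembering that % must be escaped in crontab entries:

        # Record one hour starting at 20:00 every day (crontab entry, sketch)
        0 20 * * * streamripper http://stream.example.com:8000/ -d /srv/recordings -a "show_$(date +\%F).mp3" -l 3600 -s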


  • I'd like to archive files from Ubuntu to Windows between two computers on a shared home network

    - by Wabbitseason
    I have an old laptop running Ubuntu 9.10 which I use as a LAMP environment for web development, and I have a comfortable, powerful desktop computer with Windows 7 installed on it. These two are connected to a home router so both can access the internet. I have been able to set up Samba so I can mount my Apache home directory, so it is accessible from Windows and is mapped as a network drive. What I'd like to do is access some Windows folders from Linux so I could automatically create backups (with cron scripts) of my work to physically different locations on the Windows box. Perhaps at a later time I'd set up a local Subversion repository, but I'd love to keep backups of that on the Windows drives too. Using Ubuntu's Places/Network menu I can see my desktop, but I'm unable to log in to it despite having created the correct username and password on Windows. All I can get is the following error message: "Unable to mount location. Failed to retrieve share list from server." What could be misconfigured?
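
    One way is to mount the Windows 7 shared folder on the Ubuntu box over CIFS and let cron rsync into it; a sketch, assuming a share named Backup exists on the Windows machine, with the IP, user and password as placeholders:

        # Install the CIFS mount helper (the package was smbfs on 9.10-era Ubuntu,
        # cifs-utils on later releases)
        sudo apt-get install smbfs

        # Mount the Windows share
        sudo mkdir -p /mnt/winbackup
        sudo mount -t cifs //192.168.1.20/Backup /mnt/winbackup \
            -o username=wabbitseason,password=secret,iocharset=utf8

        # A cron entry can then copy the work across, e.g.:
        # 0 1 * * * rsync -a /var/www/ /mnt/winbackup/www/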


  • How to automatically set default quota limits for users on XFS filesystem, when the new account is created

    - by acidburn2k
    I guess the title explains the problem pretty well. Do you have an idea for a mechanism which will automatically assign default quota values to every new account created (sort of how the skel scheme works, but in this area)? I am looking for a generic, clean solution, not some ugly cron-based scripts or wrapper scripts for creating users. I would also like to avoid any external, unmaintained stuff (like forgotten PAM modules and such). Anything that could lead to overhead and extra work in the future isn't really a solution, nor is checking for new accounts every minute.
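
    One conventional trick (not XFS-specific) is a prototype account whose limits get copied to every new user; on Debian-style systems adduser can do the copy itself via the QUOTAUSER setting in /etc/adduser.conf, so nothing has to run from cron. A sketch, with the limits and mount point as assumptions:

        # Create a prototype user that never logs in, and give it the default limits
        useradd -r -s /usr/sbin/nologin quotaproto
        setquota -u quotaproto 1000000 1200000 0 0 /home

        # Debian/Ubuntu: copy quotaproto's limits to every account adduser creates
        # by setting this in /etc/adduser.conf:
        #   QUOTAUSER="quotaproto"

        # Elsewhere, the same copy can be done explicitly per user:
        edquota -p quotaproto newuser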


  • Can expire_logs_days be less than 1 day in MySQL?

    - by Scott
    So... yesterday I received an "after the fact" email about a campaign that has started for one of the services I run. Now the DB server is getting hammered, hard, to the tune of about 300MB/min in binary logging for replication. As you can imagine, this is chewing up space at a fairly tremendous rate. My normal 7-day expiry of binary logs just isn't cutting it. I've resorted to truncating the logs to just the last 4 hours (I'm verifying that replication is up to date with mk-heartbeat) with:

        PURGE MASTER LOGS BEFORE DATE_SUB( NOW(), INTERVAL 4 HOUR);

    I'm just running that from cron every few hours to weather the storm, but it made me question the minimum value for expire_logs_days. I haven't come across a value that is less than 1, but that doesn't mean it isn't possible. http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_expire_logs_days gives the type as numeric, but doesn't indicate whether it's expecting integers.
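
    As a stop-gap while the expire_logs_days question is open, the manual purge can at least be automated; a sketch of a cron entry that keeps roughly the last 4 hours of binary logs (credentials assumed to come from root's .my.cnf):

        # /etc/cron.d/purge-binlogs (sketch): trim binary logs every 2 hours
        0 */2 * * * root mysql -e "PURGE MASTER LOGS BEFORE DATE_SUB(NOW(), INTERVAL 4 HOUR);"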


  • Auto-scaling EC2 Servers and Updating Code

    - by jstats
    We've come to the point where we need to set up autoscaling for our web server, and I'm unsure how to go about the process of scaling servers and updating the existing code without building a new AMI and changing the autoscale config to use it. I've read a bit about people bundling the new code, uploading it to S3, and having new servers grab the bundle on boot-up, but that doesn't seem all that pleasant either. Currently the web app's files live in a git repo, and when we update the code, we push it to GitHub, SSH into the web app and run a hook to bring down the latest code. So I was thinking that another option could be to just run that hook on an hourly or daily cron task. Unfortunately that doesn't cover every type of update (for example, new blog posts' images and such, which aren't included in the git repo), but it's something. Could anyone provide some advice on what a common solution is, or anything as to why my proposed solution is a bad idea? Thanks all
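
    A sketch of the cron-driven variant described above, with the repository path and branch as assumptions; the assets that live outside the repo would still need a separate rsync or S3 sync step:

        #!/bin/sh
        # deploy-pull.sh - run from cron on each web node, e.g. */15 * * * *
        cd /var/www/app || exit 1
        git fetch origin
        git reset --hard origin/master   # take whatever is on the deploy branch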

