Search Results

Search found 1134 results on 46 pages for 'cron'.

  • FTP upload for a PHP file hosting site: how to connect ProFTPD to a MySQL database?

    - by Igor
    I'm running a file upload service, and users have requested FTP upload. Basically, I need to allow users to log in via FTP to an FTP daemon (say, ProFTPD) using the username and password stored in a MySQL database. After they log in, I'll take care of the files with a cron job. I'm stuck on how to make ProFTPD read users and passwords from my database. Any ideas?
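
    A minimal proftpd.conf sketch using mod_sql and mod_sql_mysql (both modules must be compiled in; the database, table, and column names below are assumptions, not ProFTPD defaults):

        SQLBackend        mysql
        # database@host, then the MySQL account ProFTPD connects as
        SQLConnectInfo    filehost@localhost proftpd secret
        # how the passwords in your users table are hashed
        SQLAuthTypes      Crypt
        # table name, then username/password/uid/gid/homedir/shell columns
        SQLUserInfo       ftpusers username passwd uid gid homedir shell
        RequireValidShell off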

    Read the article

  • How can I make an alias expand to a list of recipients returned by a command?

    - by Frerich Raabe
    I have a rarely used /etc/aliases entry:

        vmailusers: :include:/usr/local/etc/vmailusers

    The /usr/local/etc/vmailusers file is generated by a cron job executing:

        ls /home/vmail | grep -v lists > /usr/local/etc/vmailusers
        chmod 0640 /usr/local/etc/vmailusers
        chown mailnull:mail /usr/local/etc/vmailusers

    Is there a way to avoid the cron job and instead execute the ls command at the very moment the vmailusers alias is used?

    Read the article

  • Method to calculate downtime percentage

    - by Chris
    I need a calculation to work out the downtime percentage of a server. I'm writing a script that runs via cron every minute to check the uptime of a remote server. The two values I have to play with are the number of checks run and the number of times the checks failed (outages). Is taking the ratio of those two values a plausible way of calculating it? I think it must be, but I can't be too sure, as my maths skills are slipping away from me with age!
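
    The arithmetic itself is just the failure ratio; a one-line sketch (the sample values are made up):

        checks_total=43200      # e.g. one check per minute for 30 days
        checks_failed=52
        awk -v t="$checks_total" -v f="$checks_failed" \
            'BEGIN { printf "downtime: %.3f%%\n", f / t * 100 }'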

    Read the article

  • ls hangs after NFS server reboot

    - by Apikot
    I've got server A and server B; B acts as an NFS server and A mounts from it. Both run on EC2. Sometimes I have to shut down B and start a new, identical instance. After B is back up, doing anything inside the mounted directory on A (ls, for example) just hangs. I'm trying to set up a cron job that checks the status of the mount and remounts if anything is wrong. Is there any way to check the status of a mount?
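
    A hedged sketch of such a cron check (the mount point, timeout, and force/lazy unmount fallback are all assumptions): a plain ls can hang forever on a stale NFS handle, so the probe has to be wrapped in a timeout. Note that a process blocked on a hard NFS mount can ignore signals; soft or intr mounts behave better here.

        MOUNT=/mnt/serverb
        if ! timeout 10 stat -t "$MOUNT" >/dev/null 2>&1; then
            umount -f -l "$MOUNT"     # drop the dead handle (force + lazy)
            mount "$MOUNT"            # remount using /etc/fstab
        fi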

    Read the article

  • monitoring service to detect when email is not received

    - by DGM
    I would like to monitor an email server: not whether the port is open and accepting connections, but whether a "canary" message sent every so often actually arrives somewhere else. I once had a server get firewalled off, and no one noticed for a few weeks that cron jobs were no longer sending mail from the machine. Of course, the machine itself cannot send out a notification if it is having problems, so this requires an outside service. Any ideas?
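
    A sketch of the receiving end (all paths and the two-hour window are assumptions, and the canary is assumed to land in a Maildir on an independent machine):

        CANARY=/home/monitor/Maildir/new
        if [ -z "$(find "$CANARY" -type f -mmin -120)" ]; then
            echo "no canary mail received in 2h" \
                | mail -s "mail canary failed" admin@example.com
        fi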

    Read the article

  • Getting SEC to monitor only the latest version of a log file?

    - by user439407
    I have been tasked with running SEC to help correlate PHP logs. The basic setup is pretty straightforward; the problem I'm having is that we want to monitor a log file whose name contains the date (php-2012-10-01.log, for instance). How can I tell SEC to monitor only the latest version of the file (and, of course, switch to the newest log file every day at midnight)? I could create a link that points to the latest version of the file and run a cron job at midnight to update the link, but I'm looking for a more elegant solution.
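
    A minimal sketch of the symlink fallback described above (the log directory is an assumption); cron repoints the link at midnight and SEC watches the stable name. SEC would still need to reopen its input when the link changes, e.g. by restarting it from the same cron job.

        LOGDIR=/var/log/php
        ln -sfn "$LOGDIR/php-$(date +%F).log" "$LOGDIR/php-latest.log"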

    Read the article

  • Executing a command in vim from the command line

    - by TK
    I would like to run :helptags ~/.vim/doc in vim, but from the command line. The purpose is to run the command occasionally, along with other commands, to keep my tools up to date (probably in a cron job on my development machine). I looked through man vim but cannot figure out which option I need to pass. I think this is a general vim question, but I'm using Mac and Ubuntu for development.
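
    One approach, as a sketch: run vim in silent Ex mode and quit when the command finishes.

        vim -e -s -c 'helptags ~/.vim/doc' -c 'qa!'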

    Read the article

  • Is there a way to replicate very large file shares in real time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended to its name. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and lives on a Samba file share. Here's how the folder structure looks:

        \\server\Version.1
        \\server\Version.2
        \\server\Version.3
        ...
        \\server\Version.24

    The contents of each new folder usually don't change much compared to the previous one, since this is an hourly job. Now you might be thinking I'm an idiot for dreaming this up. Truth is, I just found out about it. It's actually been used for years, and it is so incredibly simple that anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files), and it would actually be faster to restore by moving the latest copy back to the source than the deletion took. Brilliant!

    Now, to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real time as possible -- think hot spare, disaster recovery, etc.

    My first thought was rsync. Total failure: rsync sees the deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough metadata to do this intelligently. The best thing I can find "out of the box" is Windows Server's Distributed File System Replication, which uses Remote Differential Compression. After reading the background information on how it works, it actually looks like exactly what I need. Problem: both servers are running Linux. D'oh!

    One approach I'm looking at, say it's 5 AM and the cron job has just finished:

    1. The new Version.5 folder arrives on the local server.
    2. SSH to the remote server and copy Version.4 to Version.5.
    3. Run rsync on the local server, pushing changes to the remote server. Rsync finally knows to do a differential copy between Version.4 and Version.5 (see the sketch below).

    Is there a smarter way to replicate Samba shares as close to real time as possible? Anything out there that does Remote Differential Compression on Linux?
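
    A sketch of steps 2 and 3 above (host names and paths are assumptions). cp -al seeds the remote Version.5 as a hard-link copy of Version.4, which is nearly free; rsync's default write-to-temp-then-rename behavior then breaks the hard links only for files that actually changed, so only the differences cross the WAN.

        ssh remote "cp -al /data/Version.4 /data/Version.5"
        rsync -a --delete /source/Version.5/ remote:/data/Version.5/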

    Read the article

  • file transfer automation

    - by rizen
    Server A generates a file and scp's it to Server B. I have a cron job on Server B that looks every minute for new files that were copied over. My question is: how can Server B ensure that the file that was copied over is actually done being copied? I don't want to start processing the file unless it has been fully written. Is this possible to determine?
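
    scp alone gives the receiver no completion signal, but a common convention does (a sketch; the file names are assumptions): have Server A upload under a temporary name and rename when done, since the rename is atomic on Server B's filesystem.

        # on Server A
        scp output.dat serverB:/incoming/output.dat.part
        ssh serverB 'mv /incoming/output.dat.part /incoming/output.dat'
        # the cron job on Server B then simply ignores *.part files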

    Read the article

  • Linux: Create files and directories but not delete them

    - by Peraz
    I have a process that creates directories and files inside a working directory, e.g.:

        /workingdir/file1
        /workingdir/file2
        /workingdir/dir1/file1
        /workingdir/dir1/dir2/file1
        /workingdir/dir1/file2

    I need to prevent that user from deleting or overwriting the created folders and files, while still allowing subsequent creation of folders, subfolders, and files. I tried permissions, gid, and ACLs with no luck. What is the correct way to do this? (I can use a cron job if needed.)
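
    One possibility, as a sketch (ext2/3/4 file attributes; needs root, and the cron job mentioned above could re-apply it to newly created paths): append-only directories accept new entries but refuse deletes and renames, and immutable files cannot be overwritten. If the process must keep appending to its own files, use chattr +a on the files instead of +i.

        find /workingdir -type f -exec chattr +i {} +
        find /workingdir -type d -exec chattr +a {} +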

    Read the article

  • Shell script for daily disk usage report

    - by Master
    I am doing backups of my local drives, which are mounted under /media. Now I want to run a daily cron job that reports, in table format, how much disk space each folder uses and how much free space is left on each drive. It would be good if I could also insert that info into a database and view it on a web page on localhost. This is on Ubuntu 10.
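
    A sketch of the report step (paths assumed); the same script could feed the numbers into MySQL instead of a flat file served from localhost:

        {
            date
            du -sh /media/*/     # space used per backup folder
            df -h /media/*       # free space per drive
        } > /var/www/disk-report.txt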

    Read the article

  • `find` command not available on web host: how to delete based on modification time using other commands?

    - by CalumJEadie
    I'm creating a simple database backup solution for a client using web hosting at DataFlame. The hosting account provides access to cron but not a shell. I have a database backup script creating regular backups, and I want to automatically remove those more than N days old. I attempted to use

        find "$backup_dir" -mtime +$keep_days -name "*db.tar.gz" -delete

    but the user executing the script does not have permission to run find. Can you suggest how to implement this without the find command?
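
    A sketch that avoids find entirely, using shell globbing plus GNU stat (whether stat is available on the host is itself an assumption worth checking); $backup_dir and $keep_days are the variables from the script above.

        cutoff=$(( $(date +%s) - keep_days * 86400 ))
        for f in "$backup_dir"/*db.tar.gz; do
            [ -e "$f" ] || continue            # skip if the glob matched nothing
            if [ "$(stat -c %Y "$f")" -lt "$cutoff" ]; then
                rm -f -- "$f"
            fi
        done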

    Read the article

  • MySQL 5.5 - Lost connection to MySQL server during query

    - by bully
    I have an Ubuntu 12.04 LTS server running at a German hoster (virtualized system).

        # uname -a
        Linux ... 3.2.0-27-generic #43-Ubuntu SMP Fri Jul 6 14:25:57 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    I want to migrate a web CMS called Contao. It's not my first migration, but it is the first where I've had connection issues with MySQL. The migration itself went successfully; I have the same Contao version running (it's more or less just copy/paste). For the database, I did:

        apt-get install mysql-server phpmyadmin

    I set a root password and added a user for the CMS which has enough rights on its own database (and only its database) to do what it has to do. Data import via phpMyAdmin worked just fine, and I can access the backend of the CMS (which already talks to the database). If I try to access the frontend, however, I get the following error:

        Fatal error: Uncaught exception Exception with message Query error:
        Lost connection to MySQL server during query
        (<query statement here, nothing special, just a select>)
        thrown in /var/www/system/libraries/Database.php on line 686

    (Keep in mind: I can access MySQL with phpMyAdmin and through the backend, which work like a charm; it's just the frontend call causing errors.) If I hammer F5 in my browser, I can sometimes even kill the MySQL daemon. If I run

        # mysqld --log-warnings=2

    I get this:

        ...
        120921  7:57:31 [Note] mysqld: ready for connections.
        Version: '5.5.24-0ubuntu0.12.04.1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
        05:57:37 UTC - mysqld got signal 4 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help
        diagnose the problem, but since we have already crashed, something is
        definitely wrong and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=151
        thread_count=1
        connection_count=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346679 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        Thread pointer: 0x7f1485db3b20
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 7f1480041e60 thread_stack 0x30000
        mysqld(my_print_stacktrace+0x29)[0x7f1483b96459]
        mysqld(handle_fatal_signal+0x483)[0x7f1483a5c1d3]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f1482797cb0]
        /lib/x86_64-linux-gnu/libm.so.6(+0x42e11)[0x7f14821cae11]
        mysqld(_ZN10SQL_SELECT17test_quick_selectEP3THD6BitmapILj64EEyyb+0x1368)[0x7f1483b26cb8]
        mysqld(+0x33116a)[0x7f148397916a]
        mysqld(_ZN4JOIN8optimizeEv+0x558)[0x7f148397d3e8]
        mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0xdd)[0x7f148397fd7d]
        mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f1483985d2c]
        mysqld(+0x2f4524)[0x7f148393c524]
        mysqld(_Z21mysql_execute_commandP3THD+0x293e)[0x7f14839451de]
        mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f1483948bef]
        mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1365)[0x7f148394a025]
        mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f14839ec7cd]
        mysqld(handle_one_connection+0x50)[0x7f14839ec830]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f148278fe9a]
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f1481eba4bd]

        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (7f1464004b60): is an invalid pointer
        Connection ID (thread ID): 1
        Status: NOT_KILLED

    From /var/log/syslog:

        Sep 21 07:17:01 s16477249 CRON[23855]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Sep 21 07:18:51 s16477249 kernel: [231923.349159] type=1400 audit(1348204731.333:70): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=23946 comm="apparmor_parser"
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23990]: Upgrading MySQL tables if necessary.
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysql' as: /usr/bin/mysql
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: This installation of MySQL is already upgraded to 5.5.24, use --force if you still need to run mysql_upgrade
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24004]: Checking for insecure root accounts.
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24009]: Triggering myisam-recover for all MyISAM tables

    I'm using MyISAM tables all over, nothing with InnoDB. Starting and stopping MySQL is done via:

        sudo service mysql start
        sudo service mysql stop

    After some googling I experimented a little with timeouts and the correct socket path in /etc/mysql/my.cnf, but nothing helped. There are some old (2008) Gentoo bugs where simply recompiling solved the problem. I already reinstalled MySQL via:

        sudo apt-get remove mysql-server mysql-common
        sudo apt-get autoremove
        sudo apt-get install mysql-server

    without any result. This is the first time I've run into this problem, and I'm not very experienced with this kind of MySQL administration, so I'd appreciate any help. Is it a MySQL bug? Is something broken in the Ubuntu repositories? Is this one of those mysterious use-TCP-instead-of-a-socket-because-sockets-are-flaky-on-virtualized-machines problems? Or am I completely on the wrong track and have simply misconfigured something?

    Remember: phpMyAdmin and access to the backend (which also uses the database) work just fine. Maybe something with Apache? What can I do? Any help is appreciated, so thanks in advance :)
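
    One safe next diagnostic, as a sketch (the SELECT here is only a placeholder for the actual query from the error message, and the user/database names are placeholders too): run the same query from the mysql command line. If that alone kills mysqld, Apache and PHP are ruled out and the crash can be reported against the server itself.

        mysql -u cms_user -p cms_db -e 'SELECT ...'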

    Read the article

  • How can videos on an Ubuntu drive appear on a web page and play in an embedded Flash player?

    - by nLinked
    I have a shared folder on Ubuntu Server 12.04. Users drop videos in, and Ubuntu runs a cron task to convert them to FLV format and put them in another folder. All I would like is a simple web page that lists all the files in that folder as links; when you click the link for the video you want, it should play on the same page in an embedded Flash/SWF player. Sounds really basic, but I'm actually struggling to find a simple solution. Any thoughts appreciated!
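
    A sketch of one way to generate that page from the same cron task (every path is an assumption, and player.swf stands in for any FLV-capable Flash player that accepts a file parameter):

        OUT=/var/www/videos/index.html
        {
            echo '<html><body>'
            for f in /var/www/videos/*.flv; do
                n=$(basename "$f")
                echo "<p><a href=\"player.swf?file=$n\">$n</a></p>"
            done
            echo '</body></html>'
        } > "$OUT"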

    Read the article

  • How to keep "dot files" under version control?

    - by andrewsomething
    Etckeeper is a great tool for keeping track of changes to your configuration files in /etc. A few key things about it really stand out: it can be used with a wide variety of VCSs (git, mercurial, darcs, or bzr); it auto-commits daily and whenever you install, remove, or upgrade a package; and it keeps track of file permissions and user/group ownership metadata. I would like to keep the "dot files" in my home directory under version control as well, preferably with bazaar. Does anyone know if a tool like etckeeper exists for this purpose? Worst case, I imagine a simple cron job running bzr add && bzr ci once or twice a day would do, along with adding ~/Documents, ~/Music, etc. to .bzrignore. Is anyone already doing something similar with a script? While I'd prefer bazaar, other options might be interesting.
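
    The worst-case cron job sketched out (the commit message format is an assumption; bzr commit exits non-zero when there is nothing to commit, hence the || true):

        cd "$HOME" || exit 1
        bzr add -q
        bzr commit -q -m "auto snapshot $(date +%F)" || true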

    Read the article

  • Unmounted disk still spins up regularly

    - by Erik Johansson
    I just added a disk with partitions, but none of them are mounted. The disk still spins up every now and then. It goes like this:

        ### disk spins up
        hdparm -Y /dev/sdb; date
        /dev/sdb:
         issuing sleep command
        9 feb 2011 23.37.08 CET
        ### disk spins up
        hdparm -Y /dev/sdb; date
        /dev/sdb:
         issuing sleep command
        9 feb 2011 23.46.12 CET

    It also always spins up when I shut down the computer. Any tips are welcome, e.g. how can I figure out which process is accessing the disk? Are there any daemons doing this? I know it isn't a cron job.
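
    One way to catch the culprit, as a sketch (assuming the kernel still offers vm.block_dump, which logs every block I/O to the kernel log; enable it only briefly, since it is noisy):

        echo 1 > /proc/sys/vm/block_dump
        sleep 600                          # wait for the next spin-up
        dmesg | grep sdb                   # shows pid/command touching sdb
        echo 0 > /proc/sys/vm/block_dump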

    Read the article

  • AWStats: cannot access /var/log/apache2/access.log

    - by Joril
    I installed awstats on my new Ubuntu Lucid server, but when cron tries to run it as user www-data, it complains: cannot access /var/log/apache2/access.log: Permission denied. In /usr/share/doc/awstats/README.Debian there's this paragraph:

        By default Apache stores (since version 1.3.22-1) logfiles with
        uid=root and gid=adm, so you need to either...

        1) Change the rights of the logfiles in /etc/logrotate.d/apache so
           that www-data has at least read access.

        2) As 1) but change to a specific user, and use the suEXEC feature of
           Apache to run as same user (and either change the right of
           /var/lib/awstats as well or use another directory). This is more
           complicated, but then the logs are not generally accessible to the
           server (which was probably the point of the Apache default).

        3) Change awstats.pl to group adm (but beware that you are then taking
           the risk of allowing a CGI-script access to admin stuff on the
           machine!).

    I'd go with 1), but what are the recommended permissions to grant?
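
    A low-risk variant of option 1), as a sketch: leave the logs root:adm 0640 and add the awstats user to the adm group instead of loosening the file mode.

        usermod -a -G adm www-data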

    Read the article

  • Unable to add a host running Ubuntu to Nagios monitoring?

    - by karthick87
    I am unable to add an Ubuntu server to Nagios monitoring. I am getting "CHECK_NRPE: Socket timeout after 40 seconds." for a few services: CPU Load, Cron File Check, Current Users, Disk Check, NTP Daemon, Time Check, Total Processes, and Zombie Processes.

    Details: I installed the NRPE plugin on the Ubuntu host. Running the command below on the Ubuntu host itself (not the Nagios server) gives the following output:

        root@ubuntu-cacher:~# /usr/local/nagios/libexec/check_nrpe -H localhost
        NRPE v2.13

    But from the Nagios server I still get "CHECK_NRPE: Socket timeout after 40 seconds."

    Additional information: I am running NRPE under xinetd. The following command gives no output:

        root@ubuntu-cacher:~# netstat -at | grep nrpe

    but I do get output when checking the port:

        root@ubuntu-cacher:~# netstat -ant | grep 5666
        tcp   0   0 0.0.0.0:5666      0.0.0.0:*          LISTEN
        tcp   0   0 172.29.*.*:5666   172.29.*.*:33693   ESTABLISHED
        tcp   0   0 172.29.*.*:5666   172.29.*.*:33692   ESTABLISHED
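
    A useful next step, as a sketch (substitute the Ubuntu host's real address, and any command defined in its nrpe.cfg): run check_nrpe from the Nagios server with an explicit timeout, to separate firewall problems from slow checks.

        /usr/local/nagios/libexec/check_nrpe -H <ubuntu-host> -t 40 -c check_load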

    Read the article

  • Why does my CD Tray keep popping open?

    - by Anton
    I have an Ubuntu server (Ubuntu 10.04.3 LTS) and I can't figure out why its CD tray keeps opening. I have looked in /var/log/auth.log and the cron lists and found nothing. The eject command closes the tray, and then it opens again. The server runs a LAMP (Linux-Apache-MySQL-PHP) setup, and I can't afford to restart it right now. How can I find out who or which program is popping the tray open? Which programs can cause this behavior?
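
    Two quick probes, as a sketch (the device name is an assumption; it may be /dev/cdrom or /dev/scd0 on this system, and the opener may only hold the device briefly):

        lsof /dev/sr0          # which process has the drive open right now
        fuser -v /dev/sr0      # the same question, via fuser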

    Read the article

  • How will Deja-Dup operate when backing up to an external USB drive?

    - by Little Bobby Tables
    I want to set up regular backups, and deja-dup seems like a nice tool. However, I want to put my backups on an external USB drive that I have, not in a remote network location. Naturally, this drive is not always connected. If I configure deja-dup to back up to a directory on this drive (e.g. /media/extention/backup), what will happen? Will it prompt me to connect the drive when it is missing (the desired behavior), or just fail silently? Is there some way to tweak it to do so? I can roll my own cron-based backup script that checks whether the drive is mounted, but I would really prefer an existing, integrated tool.
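
    The cron fallback mentioned above, sketched (deja-dup is a frontend to duplicity, so duplicity is called directly here; the paths are the ones from the question):

        mountpoint -q /media/extention/backup || exit 0    # drive not plugged in
        duplicity "$HOME" file:///media/extention/backup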

    Read the article

  • Is there a taskbar applet to show the status of a remote host?

    - by Mathew
    At the end of the day I would like to copy files to my home PC, just in case I feel inspired to work on them in the evening. But I only want to do this if the PC is already on. (I can wake the PC remotely with wake-on-LAN, but I don't want to do that every time.) I would like a taskbar applet that shows the status of the PC and whether I can ssh into it. It would also be interesting to have an idea of how long it has been on while I am at work, as that gives a good indication of whether anyone is in or not. However, being able to unobtrusively copy files to the remote machine is the main objective. Perhaps another approach is to run rsync from cron; if the remote host is not up, I guess it will simply fail. Is that correct? If anyone has other ideas on how best to sync a work and home PC, please do tell.
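
    The rsync-from-cron idea, sketched (host name and paths are assumptions): yes, rsync does simply fail when the host is down, but probing with ssh first keeps cron's error mail quiet.

        ssh -o ConnectTimeout=5 -o BatchMode=yes homepc true 2>/dev/null &&
            rsync -a ~/work-outbox/ homepc:inbox/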

    Read the article
