Search Results

Search found 192 results on 8 pages for 'cronjob'.

Page 7/8 | < Previous Page | 3 4 5 6 7 8  | Next Page >

  • phpMyAdmin Cron to Delete Temporary Files

    - by JoeC
    I have a folder on my hosting which I periodically upload something to - /public_html/uploads - and I'd like to set up a cronjob through phpMyAdmin to empty it out on a regular basis. The current cron I have in pMA is find /public_html/uploads -maxdepth 1 -ctime 1 -exec rm -f {} \; (Ignore the fact that it's running every minute for now, it's so I can test it :) ) I know very little about what this command is actually doing, but it looks like "not very much". Can anyone help me fix it? :) Thanks.
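 
    For reference, a hedged sketch of a cron-friendly variant of that cleanup: the command as written matches on inode-change time and, with no -type filter, also tries to match the uploads directory itself. The path is copied from the question and may need to be absolute on the host.
 
      # Remove regular files in the uploads directory last modified more than 24 hours ago.
      # -type f skips the directory itself; -mmin +1440 means "older than 1440 minutes".
      find /public_html/uploads -maxdepth 1 -type f -mmin +1440 -exec rm -f {} \;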

    Read the article

  • Starting an http request, but dropping out if no response after a certain time

    - by nbv4
    I'm trying to write a Python script that does the following from within a minutely cronjob: try to fetch a URL; if there is no response after 10 seconds, abandon the request and immediately issue a command via os.system to restart the webserver. The problem is that when my server crashes, it doesn't return a response at all. If I were to just have my script wait for the response, the script would go on for 10 minutes or more. I want it to issue the restart immediately once it detects a slow response. I know such a script could be written in probably less than 5 lines of code, but I have no idea how to go about it.
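 
    If a plain shell one-liner in the crontab is acceptable instead of Python, curl's --max-time option already gives the timeout behaviour described. A minimal sketch, assuming the site answers on http://localhost/ and Apache is restarted via its init script (both are assumptions, not from the question):
 
      # Minutely crontab entry (assumes root's crontab): probe the site; if no complete
      # response arrives within 10 seconds, restart the web server.
      * * * * * curl --silent --fail --max-time 10 http://localhost/ > /dev/null || /etc/init.d/apache2 restart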

    Read the article

  • Perl Email package

    - by aidan
    I'm looking for a simple (OO?) approach to email creation and sending. Something like $e = Email->new(to => "test <[email protected]>", from => "from <[email protected]>"); $e->plain_text($plain_version); $e->html($html_version); $e->attach_file($some_file_object); I've found Email::MIME::CreateHTML, which looks great in almost every way, except that it does not seem to support file attachments. Also, I'm considering writing these emails to a database and having a cronjob send them at a later date. This means that I would need a $e->as_text() sub to return the entire email, including attachments, as raw text which I could stuff into the db. And so I would then need a way of sending the raw emails - what would be a good way of achieving this? Many thanks

    Read the article

  • How to automate rsync without asking for password prompt

    - by Rijosh K
    I would like to automate the rsync task as a cron job. Since it needs the passphrase, I am not able to run it as a cronjob. I need to either specify the passphrase along with the rsync command, or store the passphrase in a file and read it from there. My command will look like this: rsync -aPe "ssh -i ' . $server->{'ssh_key'} . '" ' . $server_lock_dir; So where do I put the password? Please help.
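 
    A common way around this is a dedicated SSH key with an empty passphrase, so nothing has to be typed or stored on the command line at all. A sketch under that assumption (key path, user and host are placeholders, not taken from the question); if the existing key must keep its passphrase, ssh-agent is the usual alternative.
 
      # Generate a key with no passphrase, used only for this backup job.
      ssh-keygen -t rsa -N "" -f ~/.ssh/rsync_backup_key
      # Install the public key on the remote host (or append it to authorized_keys by hand).
      ssh-copy-id -i ~/.ssh/rsync_backup_key.pub user@remote.example.com
      # The cron job can then run rsync non-interactively with that key.
      rsync -aPe "ssh -i ~/.ssh/rsync_backup_key" /local/dir/ user@remote.example.com:/remote/dir/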

    Read the article

  • Cron job running successfully suddenly reports script is not found

    - by Ted B
    What might cause cron to suddenly report a file it is supposed to run is "not found," when the file hasn't been touched, and in fact, the entire system hasn't been touched since it last ran successfully? I have a cronjob schedule I define by doing sudo crontab -e. In it, I have dozens of cron jobs that run successfully. I do not have a PATH specified, and I use absolute paths to call all my scheduled scripts, setting the PATH in them as needed. I do not specify a SHELL in the crontab. All scripts identify the shell as their first line. Without me touching the system, a particular job defined in the middle of other jobs will suddenly stop running. To debug this, I added an output redirection to a log file. In that, the output clearly shows the output of the script successfully running time after time for weeks, and then suddenly the following appears: /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found If I do the ls command, copying and pasting that exact path from the error message, it clearly reports the file is still there (no surprise). Yet the log continues to report that file is "not found" until I take action. I can run the script manually and it runs just fine. If I do sudo crontab -e and save the file, the job runs on the next scheduled time, putting its output in the log, no longer reporting the script is "not found". It seems to me the contents of the script trying to be run are irrelevant since cron doesn't even process the file because it is "not found". I have a job scheduled below the one that is encountering this problem that I know is continuing to run, because I have its output mailed to me. So I know cron is running and continues to run at least one other job, even after it suddenly reports this job's script is "not found". All my lines end with a newline. I had no periods in the crontab until I added the redirection to a log file. I have now added a PATH specification, but left the absolute paths in the jobs. Unfortunately, I have no idea if and when this problem will occur. It will likely be weeks from now. By the way, I am running a script to synchronize the clock, and I see the time is exactly what it should be.
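 
    Two hedged things worth ruling out when /bin/sh reports an existing script as "not found": a carriage return or other stray byte on the script's shebang line makes the kernel look for an interpreter that does not exist (dash then reports the script itself as not found), and a home directory on an automounted or network filesystem can be briefly absent at the moment cron fires. A small diagnostic sketch, using the path from the log:
 
      # Make stray carriage returns visible (a trailing ^M on the first line means CRLF endings):
      head -n 1 /home/iupress/bin/sync-email_images | cat -A
      # file reports the interpreter named in the shebang and flags CRLF line terminators if present:
      file /home/iupress/bin/sync-email_images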

    Read the article

  • PowerShell: New-PSDrive error handling

    - by mazebuhu
    Hello, I have a script where I mount a network drive with the command "New-PSDrive". Now, since the script is running as a "cronjob" on a server, I want to have some error detection. If for any reason the command New-PSDrive fails, the script should stop executing and notify that something went wrong. I have the following code: Try { New-PSDrive -Name A -PSProvider FileSystem -Root \\server\share } Catch { ... handle error case ... } ... other code ... For testing reasons I specified a wrong server name and I get the following error: "New-PSDrive : Drive root "\\wrongserver\share" does not exist or it's not a folder". Which is OK, since the server does not exist. But the script does not go into the Catch clause and stop. It happily continues to run and ends up in a mess since no drive is mounted :-) So my question: why? Is exception handling in PowerShell somehow different? I should also note that I'm a noob in PowerShell scripting. Bye, Martin

    Read the article

  • Websites down EC2 inaccessible via SSH CPU utilisation 100% last few hours - what should I do?

    - by fuzzybee
    I have multiple websites hosted on 1 single EC2 instance. 1 website "abc" was down for a few hours; it sometimes threw a database connection error and sometimes just took too long to respond. 1 website "def" was incredibly slow but still up and running; the rest of the websites had the same symptoms as "abc". I can afford 15 min or less of downtime for "def". Should I then (in the AWS console) reboot my instance, or create an AMI image from my instance, launch it and associate my elastic IP to the new instance, or "launch more like this"? Background on what may have happened to my EC2: the last time I made changes was 21 hours ago. A cronjob to create snapshots ran around 19 hours ago and it has been running for a long time. Google Analytics shows traffic to my websites such as kidlander.sg has been nothing exceptional. Are there any other actions I should take or better options I could have? (I have already contacted AWS support but their turnaround is 12 hours, so I appreciate all the help I could get.) Update: I got everything back up and running and CPU utilisation back to normal, around 30%. There is 1 difference between "def" and "abc" as well as my other websites: "def"'s database is hosted on RDS; "abc"'s database is hosted on an EC2 instance (different from my web server instance) configured by myself. Nevertheless, I checked the EC2 instance I'm using as MySQL server yesterday and it was absolutely fine during the incident: low CPU utilisation, and I could log in using the Linux command line.

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have setup an SVN-Mirror via cronjob (because it needs to go from inside to outside of a firewall, so no post-commit-hook possible) and svnsync. We installed a pre-revprop-hook just as told. Everything seems to work fine, except that it doesn't. E.g. when manually executing the script. # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd> Committed revision 19817. Copied properties for revision 19817. No error, no complaints. But if checking for the revision properties it says: # svnlook info <path-to-mirror> 0 # svn info -r HEAD file://<path-to-mirror> 2>&1 Path: <root-of-mirror> URL: file://<path-to-mirror> Repository Root: file://<path-to-mirror> Repository UUID: <uid> Revision: 19817 Node Kind: directory Last Changed Rev: 19817 So somehow the author and timestamp information gets lost. But we need that information for our internal processes. Since no error or warning is produced I have absolutely no idea even where to start to look. Everything is local (except for the remote master), so there are no server-logs to look at. I also tried to manually recopy via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says Copied properties for revision 19885. But when I query them, it's just the same. Any ideas how I could approach that problem, or even better -- how to solve it? Any ideas appreciated.
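 
    One thing worth noting: svn info only prints "Last Changed Author" and "Last Changed Date" when the corresponding revision properties are present, so querying the revprops directly gives a clearer picture of what svnsync actually wrote. A sketch using the placeholder path from the question:
 
      # List every revision property stored for that revision on the mirror:
      svn proplist --revprop -r 19817 file://<path-to-mirror>
      # Or ask for the two that appear to go missing:
      svn propget --revprop -r 19817 svn:author file://<path-to-mirror>
      svn propget --revprop -r 19817 svn:date file://<path-to-mirror>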

    Read the article

  • Port Forwarding failing only to Ubuntu servers from Draytek router

    - by Rufinus
    I know this is a kind of unusual question, but Draytek support (which is very eager to solve the issue) seems to have reached its limits. Scenario: Draytek Vigor multi-WAN router with current firmware; multiple WAN IP aliases on one of the WAN ports; DMZ (or port forwarding, doesn't matter) from a WAN IP alias to an internal host. Currently I have two internal hosts: 192.168.0.51 (Ubuntu) and 192.168.0.53 (Debian). Both should be accessible from outside via one of the WAN IP aliases, and both are accessible via their internal IPs at all times (!). If the router gets restarted, both external IPs forward to their internal hosts. But after a few minutes up to 2 hours, the Ubuntu host is no longer reachable via its external interface. The Debian host, on the other hand, is reachable. In what way does Ubuntu differ from Debian? I know of at least one user with the exact same problem, see http://ubuntuforums.org/showthread.php?p=10994279 Any ideas? TIA EDIT: via ping diagnostics directly on the Vigor, 192.168.0.53 is pingable, 192.168.0.51 is not. But both hosts are perfectly reachable from anywhere inside the network. If I restart Ubuntu networking it works again for a short time... I'm out of ideas. EDIT 2: after further investigation, I noticed a ping from .51 to the network (or a host on the internet) is enough to make the port forwarding work again. So I will add a cronjob as a "keep-alive" ping. This will solve the problem, but the reason for this behaviour is still in the dark. Thanks to all commenters.
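 
    For reference, the keep-alive the poster settles on can be a one-line crontab entry on the Ubuntu host. A sketch assuming the router's LAN address is 192.168.0.1 (an assumption; the symptom pattern suggests the router's ARP or session entry for .51 expires and outbound traffic refreshes it):
 
      # On 192.168.0.51: ping the router every 5 minutes so its ARP/NAT entry for this host stays fresh.
      */5 * * * * ping -c 1 192.168.0.1 > /dev/null 2>&1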

    Read the article

  • How to get rid of "Maxback Engine" for good?

    - by Jonik
    I used to have a Maxtor Shared Storage II network drive; it broke down long ago already. (Later I tried to recover some data from it, and partially succeeded, but haven't yet fully documented it on that question.) Anyway, I just noticed there are still some lingering bits remaining of the (thoroughly crappy) software that came with the Maxtor device: a background process called "MaxBack Engine". I googled around a bit and found something related but not very useful: http://www.straitmac.com/jforum/posts/list/600.page http://discussions.apple.com/thread.jspa?threadID=725692 Under /Applications I found "Maxtor EasyManage.app" which I used to use for controlling the drive, and showed it some "rm -rf". Before deleting, I noted that the bundle did contain "MaxBack Engine.app" under Content/Resources. But still, after reboot, the "MaxBack Engine" process is back. I did notice though that it only appears when logging in with my usual user account; with another account it wasn't launched. So, dear Mac gurus, what could I do about this pest? I guess I could fall back to some Unix hackery and write a cronjob that kills any process with that name, but obviously it'd be nicer to be able to clean up from my computer everything left behind by Maxtor's piece of software.
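 
    Since the process only appears for one account, it is most likely registered as a per-user login item or launch agent rather than a system daemon. A hedged sketch of where to look (the grep pattern is an assumption):
 
      # Look for Maxtor/MaxBack launch agents and daemons, per-user and system-wide:
      ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null | grep -i -E 'maxtor|maxback'
      # Also check System Preferences > Accounts > Login Items for that user,
      # the other common place per-user background helpers get registered.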

    Read the article

  • How to find out what is causing a slow down of the application on this server?

    - by Jan P.
    This is not the typical serverfault question, but I'm out of ideas and don't know where else to go. If there are better places to ask this, just point me there in the comments. Thanks. Situation: We have this web application that uses Zend Framework, so runs in PHP on an Apache web server. We use MySQL for data storage and memcached for object caching. The application has a very unusual usage and load pattern. It is a mobile web application where every full hour a cronjob looks through the database for users that have some information waiting or action to do and sends this information to an (external) notification server, that pushes these notifications to them. After the users get these notifications, they go to the app and use it, mostly for a very short time. An hour later, the same thing happens. Problem: In the last few weeks usage of the application really started to grow. In the last few days we encountered very high load and a doubling of application response times during and after the sending of these notifications (so basically every hour). The server doesn't crash or stop responding to requests, it just gets slower and slower and often takes 20 minutes to recover - until the same thing starts again at the full hour. We have extensive monitoring in place (New Relic, collectd) but I can't figure out what's wrong; I can't find the bottleneck. That's where you come in: Can you help me figure out what's wrong and maybe how to fix it? Additional information: The server is a 16 core Intel Xeon (8 cores with hyperthreading, I think) and 12GB RAM running Ubuntu 10.04 (Linux 3.2.4-20120307 x86_64). Apache is 2.2.x and PHP is Version 5.3.2-1ubuntu4.11. If any configuration information would help analyze the problem, just comment and I will add it. Graphs: links to phpinfo(), APC status, memcache status, collectd graphs (processes, CPU, Apache, load, MySQL, vmem, disk) and New Relic graphs (application performance, server overview, processes, network, disks). (Sorry the graphs are GIFs and not from the same time period, but I think the most important info is in there.)
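 
    Since the slowdown reliably starts at the top of the hour, one low-effort addition to the existing monitoring is a cron entry that snapshots what the system, Apache and MySQL are doing at exactly that moment. A sketch (the log path is an assumption, and mysqladmin needs credentials, e.g. from ~/.my.cnf):
 
      # Every full hour, record load, the MySQL process list and the top CPU consumers.
      0 * * * * { date; uptime; mysqladmin processlist; ps aux --sort=-pcpu | head -n 20; } >> /var/log/hourly-snapshot.log 2>&1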

    Read the article

  • mysqldump skip one table

    - by danneth
    I'm running a cronjob to back up our system using mysqldump. The database contains 90 or so tables. One of these tables is HUGE and every once in a while causes the dump to fail. From the manual I see that you can specify specific tables to dump: shell> mysqldump [options] db_name [tbl_name ...] This got me thinking. What if I have two jobs, one for dumping the huge table, and one for all the others? To accomplish this it would be nice if I could do something like shell> mysqldump -u backupuser -p database huge_table > db_huge_table.sql shell> mysqldump -u backupuser -p database --skip huge_table > db_rest.sql Unfortunately I'm not seeing such an option. I could of course explicitly state the 90 tables, but that just seems like a mess. Another option would be a script of some sort, but before checking that route I'll try this resource. MySQL is 5.1.61 on CentOS 6.2
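 
    mysqldump does have a flag for exactly this: --ignore-table=db_name.tbl_name, which can be repeated for several tables. Based on the commands sketched in the question:
 
      # Dump only the huge table:
      mysqldump -u backupuser -p database huge_table > db_huge_table.sql
      # Dump everything else, skipping it:
      mysqldump -u backupuser -p database --ignore-table=database.huge_table > db_rest.sql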

    Read the article

  • Fresh install CentOS 6.4 64b with directadmin slowly consumes all memory and crashes

    - by Coen Ponsen
    Dear server fault community, This is my first question on server fault, I'm new to server (mis)configuration so please forgive me for asking something stupid :) I'm running Directadmin on a CentOS 6.4 64b with 4GB memory and over 10000Gh virtual machine. I migrated my websites because my former VPS couldn't keep up anymore. Only half of the websites from this 1GB machine were migrated yet. So the migration is still in progress and already my server crashes every day. The server performance up until that moment is perfect. The directadmin log files show nothing out of the ordinary. Yesterday only the mysql server crashed, but it also crashed the entire machine before. The memory usage in DA seems to be normal: directadmin directadmin (pid 3923 22158 22159 22160 22161 22162 ) 8.75 MB dovecot dovecot (pid 3851 ) 47.8 MB exim exim (pid 1350 ) 1.29 MB httpd (pid 21525 21528 21529 21530 21531 21532 21546 21571 21742 21743 21744 ) 490.4 MB mysqld mysqld (pid 1299 ) 287.8 MB named named (pid 3807 ) 16.3 MB proftpd proftpd (pid 1481 ) 1.91 MB sshd sshd (pid 1173 21494 ) 5.16 MB Restarting services immediately frees up memory, but slowly over time the memory usage increases (about 24 hours to crash). The commands: # sync # echo 3 > /proc/sys/vm/drop_caches will free all memory correctly. I could just create a cronjob but it seems the wrong way around to me. I can't seem to pinpoint the cause. Any advice, references or tips are highly appreciated! Greetings, Coen edit: free -m : after drop_caches: total used free shared buffers cached Mem: 3830 735 3095 0 0 21 -/+ buffers/cache: 712 3117 Swap: 991 0 991 I'll post another one this evening.
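 
    One way to tell whether a specific process is leaking, rather than the kernel simply filling otherwise idle memory with page cache (which is what drop_caches releases and which is harmless by itself), is to log the biggest memory consumers over a day and see which one grows. A sketch (the log path is an assumption):
 
      # Hourly snapshot of the top resident-memory processes, to see which one grows over time.
      0 * * * * ps aux --sort=-rss | head -n 15 >> /var/log/memory-top.log 2>&1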

    Read the article

  • Virtual Server HDD shrinks without apparent reason

    - by Christian
    We have a virtual hosted Linux server, and in the last few months every now and then the HDD shrinks from 400GB down to the exact byte count that is in use. All existing data can be downloaded and displayed without a problem, but we can't upload or edit any files because of the "full" hard drive. Here is a screenshot, where "size" should be 400GB: This has happened twice before, and again today. The last times, when I reported the issue to the host, they said "that isn't possible, you must be doing it wrong", but soon after the call, the problem vanished without us doing anything, so I suppose that they have some kind of problem they're not willing to admit. Even after the fact, they acted like nothing was wrong and wrote me a mail in which they explained that I can use "df -h" to view available disk space (well duh, how do you think I noticed this particular issue?). Questions about whether and what they had done were ignored. It has happened around the 25th to 28th of the month, so I suspect that they might have a cronjob running every 30 days or so which wreaks havoc with some VM configs. I just want to understand the problem, but the host support hasn't been very helpful in that regard. I have tried Googling the issue, but any combination of search terms I can come up with just gives me tutorials on how to change HDD size in a virtual machine. a) What could be the cause of shrinking HDD size in an Ubuntu 12.04.3 LTS server? Could there be anything in our virtual machine or is it more likely to be an issue with the VM host? b) Can I do anything about it without needing to contact the host's support? c) Is there any way I can prevent this from happening at all?

    Read the article

  • VMWare server: virtual machine start-up reaches 95% and hangs

    - by Magsol
    This problem cropped up today, after updating my eVGA motherboard's chipset in order to try and fix an unrelated issue. After installing the chipset update (contained SATA and Ethernet drivers), every time I've tried to start my Ubuntu VM, it reaches 95% in the web interface and then just hangs. I'm using VMWare Server 2.0.2, running it within Windows 7 64-bit. I haven't had any issues up until now, and I suspect it has something to do with the chipset update. I've already tried reinstalling VMWare itself, removing the VM from the inventory and re-adding it, and neither has proved successful. I'm also not sure how to kill the VMware server process itself once the start-up hangs; I've only been able to try again by rebooting (as none of the VMWare Services listed kill the server process itself). Any insights? Edit: Uhhh...as an addendum: I have a cron job set up on my Ubuntu VM that runs every 20 minutes. The VM is still listed in the VMWare web interface as at 95% of startup, and the start/stop buttons are still disabled, but the cronjob just ran. I also tested SSH, and I was able to tunnel into the Ubuntu VM as well. Now I'm really confused. Edit #2: I just started a thread on the VMWare Server support forums on this same topic. Hopefully between the two communities, we can come up with an answer: http://communities.vmware.com/thread/251033 Edit #3: In lieu of a specific fix, I've switched over to VirtualBox, and all is working just fine.

    Read the article

  • PHP Site Deployment Suggestion

    - by TheOnly92
    I'm currently quite troubled by the deployment approach my team is using... It's very old-fashioned and I know it doesn't work very well. But I don't exactly know how to change it, so please give some suggestions about it... Here is our current setup: 2 webservers, 1 database server, 1 test server. Current deployment process: 1. We develop and work on the test server; every change is uploaded manually to the test server. 2. When a change or feature is complete, we then commit the changes to the SVN repository. 3. After committing the changes, we then upload our changes to the first webserver, where there is a cronjob running every minute to sync the files between the servers. Something very annoying is, whenever we upload a file just as the syncing job starts, the file that is synced will appear corrupted, since it is only half-uploaded. Another thing is whenever there is a deployment fault, it is extremely difficult to revert. These are basically the problems I'm facing; what should I do?
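 
    Since the code already lives in SVN, one way to avoid both the half-uploaded files and the hard rollbacks is to let each webserver pull a complete revision from the repository instead of mirroring whatever happens to be in the first server's docroot at sync time. A sketch (repository URL and docroot are assumptions):
 
      # One-time setup on each webserver:
      svn checkout http://svn.example.com/repo/trunk /var/www/site
      # Each deploy (run from cron or triggered manually) then updates to a single consistent
      # revision, and rolling back is just "svn update -r <previous-revision>":
      svn update /var/www/site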

    Read the article

  • Reasons for & against a Database

    - by dbemerlin
    Hi, I had a discussion with a coworker about the architecture of a program I'm writing and I'd like some more opinions. The situation: the program should update in near-realtime (+/- 1 minute). It involves the movement of objects on a coordinate system. There are some events that occur at regular intervals (e.g. creation of the objects). Movements can change at any time through user input. My solution was: build a server that runs continuously and stores the data internally. The server dumps the program state at regular intervals to protect against power failures and/or crashes. He argued that the program requires a database and I should use cronjobs to update the data. I can store movement information by storing start point, end point and speed, and update the position in the cronjob (and calculate collisions with other objects there) by calculating direction and speed. His reasons: requires more CPU & memory because it runs constantly; power failures/crashes might destroy data; databases are faster. My reasons against this are mostly: not very precise, as events can only occur at full minutes (wouldn't be that bad though); requires (possibly costly) transformation of data on every run from relational data to objects; an RDBMS is a general solution for a specialized problem, so a specialized solution should be more efficient; power failures (or other crashes) can leave the data in an undefined state with only partially updated data unless (possibly costly) precautions (like transactions) are taken. What are your opinions about that? Which arguments can you add for either side?

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have setup an SVN-Mirror via cronjob (because it needs to go from inside to outside of a firewall, so no post-commit-hook possible) and svnsync. We installed a pre-revprop-hook just as told. Everything seems to work fine, except that it doesn't. E.g. when manually executing the script. # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd> Committed revision 19817. Copied properties for revision 19817. No error, no complaints. But if checking for the revision properties it says: # svnlook info <path-to-mirror> 0 # svn info -r HEAD file://<path-to-mirror> 2>&1 Path: <root-of-mirror> URL: file://<path-to-mirror> Repository Root: file://<path-to-mirror> Repository UUID: <uid> Revision: 19817 Node Kind: directory Last Changed Rev: 19817 So somehow the author and timestamp information gets lost. But we need that information for our internal processes. Since no error or warning is produced I have absolutely no idea even where to start to look. Everything is local (except for the remote master), so there are no server-logs to look at. Any ideas how I could approach that problem, or even better -- how to solve it? Any ideas appreciated.

    Read the article

  • FQL query using stream table doesn't accept app access token

    - by tougher
    I've searched stackoverflow all day long to find an answer without luck, so here we go. I'm trying to fetch data from the stream table like this: FQL: SELECT post_id, message, created_time FROM stream WHERE source_id = 131559313586863 URL: http:// graph.facebook.com/fql?q=SELECT+post_id%2C+message%2C+created_time+FROM+stream+WHERE+source_id+%3D+131559313586863&access_token=10669xxxxx74470|PF-7GSdBx0Nxxxxxkdi1KwSQG-w But I get a 400 Bad Request as response with the error message: "An access token is required to request this resource.". I'm fetching an application access token with this url: https:// graph.facebook.com/oauth/access_token?client_id=FACEBOOK_APP_ID&client_secret=FACEBOOK_APP_SECRET&grant_type=client_credentials Facebook states in this blog post that "You will need to pass a valid app or user access token to access this functionality.". Functionality refers to /feed and /posts (the stream table). Furthermore this wiki tells the same story about using the stream table, "From June 3 2011 a token is required to query this table. You can use any application or user token to make the query.". Does anyone see my hopefully obvious flaw? Please note: The profile in the FQL query is public. I need this to run userless through a cronjob. No user interaction is possible. The request works if I replace the app access token with my own user token from https:// developers.facebook.com/tools/explorer Sorry for breaking the URLs. I need more reputation to post more than 2 links :-/

    Read the article

  • PostgreSQL: return select count(*) from old_ids;

    - by Alexander Farber
    Hello, please help me with 1 more PL/pgSQL question. I have a PHP script that runs as a daily cronjob and deletes old records from 1 main table and a few further tables referencing its "id" column: create or replace function quincytrack_clean() returns integer as $BODY$ begin create temp table old_ids (id varchar(20)) on commit drop; insert into old_ids select id from quincytrack where age(QDATETIME) > interval '30 days'; delete from hide_id where id in (select id from old_ids); delete from related_mks where id in (select id from old_ids); delete from related_cl where id in (select id from old_ids); delete from related_comment where id in (select id from old_ids); delete from quincytrack where id in (select id from old_ids); return select count(*) from old_ids; end; $BODY$ language plpgsql; And here is how I call it from the PHP script: $sth = $pg->prepare('select quincytrack_clean()'); $sth->execute(); if ($row = $sth->fetch(PDO::FETCH_ASSOC)) printf("removed %u old rows\n", $row['count']); Why do I get the following error? SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "select" at character 9 QUERY: SELECT select count(*) from old_ids CONTEXT: SQL statement in PL/PgSQL function "quincytrack_clean" near line 23 Thank you! Alex

    Read the article

  • Application logic for invoicing and subscriptions?

    - by Industrial
    Hi everyone, We're just in the planning stage of a web app that offers subscriptions to our customers. The subscription periods vary and can be prolonged indefinitely by our customers, but are always at least one month (30 days). When a customer signs up, the customer information (billing address, phone number and so on) is stored in a customers table and a subscription is created in the subscriptions table: id | start_date | end_date | customer_id -------------------------------------------------------- 1 | 2010-12-31 | 2011-01-31 | 1 Every month we'll loop through the subscriptions table (cronjob preferably) and create invoices for the past subscription period, which are housed in their own table - invoices. Depending on the customer, invoices are manually printed out and sent by mail, or just emailed to the customer. Due to the nature of our customers and the product, we need to offer a variety of different payment alternatives including wire transfer and card payments, hence some invoices may need to be manually handled and registered as paid by our staff. On the 15th of every month, the invoices table is looped through, and if no payment has been marked for the invoice in question, the corresponding subscription will be removed. If there's a payment registered, the end_date in the subscriptions table is incremented by another 30 days (or whatever period our customer has chosen). Are we looking at headaches by incrementing dates forwards and backwards to handle non-paying customers and extending subscriptions? Would it be a better idea to add new subscriptions as customers extend their subscriptions?

    Read the article

  • Change timer interval in Windows service

    - by AyKarsi
    I have a timer job inside a Windows service, for which the interval should be incremented when errors occur. My problem is that I can't get the timer.Change method to actually change the interval. "DoSomething" is always called after the initial interval. This is probably something simple. Code follows: protected override void OnStart(string[] args) { //job = new CronJob(); timerDelegate = new TimerCallback(DoSomething); seconds = secondsDefault; stateTimer = new Timer(timerDelegate, null, 0, seconds * 1000); } public void DoSomething(object stateObject) { AutoResetEvent autoEvent = (AutoResetEvent)stateObject; if(!Busker.BitCoinData.Helpers.BitCoinHelper.BitCoinsServiceIsUp()) { secondsDefault += secondsIncrementError; if (seconds >= secondesMaximum) seconds = secondesMaximum; Loggy.AddError("BitcoinService not available. Incrementing timer to " + secondsDefault + " s",null); stateTimer.Change(seconds * 100, seconds * 100); return; } else if (seconds > secondsDefault) { // reset the timer interval if the bitcoin service is back up... seconds = secondsDefault; Loggy.Add ("BitcoinService timer increment has been reset to " + secondsDefault + " s"); } // do the the actual processing here }

    Read the article

  • PHP shell_exec() - Run directly, or perform a cron (bash/php) and include MySQL layer?

    - by Jimbo
    Sorry if the title is vague - I wasn't quite sure how to word it! What I'm Doing I'm running a Linux command to output data into a variable, parse the data, and output it as an array. Array values will be displayed on a page using PHP, and this PHP page output is requested via AJAX every 10 seconds so, in effect, the data will be retrieved and displayed/updated every 10 seconds. There could be as many as 10,000 characters being parsed on every request, although this is usually much lower. Alternative Idea I want to know if there is a better* alternative method of retrieving this data every 10 seconds, as multiple users (<10) will be having this command executed automatically for them. A cronjob running on the server could execute either bash or php (which is faster?) to grab the data and store it in a MySQL database. Then, any AJAX calls to the PHP output would return values in the MySQL database rather than making a direct call to execute server code every 10 seconds. Why? I know there are security concerns with running execs directly from PHP, and (I hope this isn't micro-optimisation) I'm worried about CPU usage on the server. The server is running a sempron processor. Yes, they do still exist. Having this only execute when the user is on the page (idea #1) means that the server isn't running code that doesn't need to be run. However, is this slow and insecure? Just in case the type of linux command may be of assistance in determining it's efficiency: shell_exec("transmission-remote $host:$port --auth $username:$password -l"); I'm hoping that there are differences in efficiency and level of security with the two methods I have outlined above, and that this isn't just micro-micro-optimisation. If there are alternative methods that are better*, I'd love to learn about these! :)
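 
    A lighter variant of the cron idea in the question skips the MySQL layer entirely: a minutely cron job writes the command's output to a cache file, and the PHP endpoint that AJAX polls every 10 seconds just reads that file instead of exec'ing anything. A sketch (host, port, credentials and the cache path are placeholders):
 
      # Refresh the cached list once a minute; write to a temp file and rename it into place
      # so readers never see a half-written file.
      * * * * * transmission-remote host:9091 --auth user:password -l > /var/cache/transmission.tmp 2>/dev/null && mv /var/cache/transmission.tmp /var/cache/transmission-list.txt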

    Read the article

  • Apache server completely freezes until it gets restarted

    - by nbv4
    My server does this every few days. What sucks is that it always seems to do this right after I go to bed, so when I wake up, I'm greeted with the fact that my server has been down for the past 6 or 7 hours. When I first noticed this, I added a cronjob that tries to restart the server every 15 minutes, but I guess that didn't fix it. Once I noticed the server was down, I ran this command: /etc/init.d/apache2 restart * Restarting web server apache2 apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName ... waiting ...........................................................apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName httpd (pid 17597) already running ...which is odd, because a restart should restart the server, even if it's already running, correct? I eventually had to "stop" then "start" to get it working again. I then looked through the logs, and found something very weird. It seems that around the time the server crashed, the logs have entries that are wildly out of order. It looks a little like this: xx.xxx.xxx.x - - [21/Apr/2010:06:32:05 -0400] "GET / blah" xx.xxx.xxx.x - - [21/Apr/2010:06:51:25 -0400] "GET / blah" x.xx.xxx.xxx - - [21/Apr/2010:06:38:23 -0400] "GET / blah" xxx.xx.xx.xx - - [21/Apr/2010:06:31:56 -0400] "GET / blah" xxx.xx.xx.xx - - [21/Apr/2010:06:51:49 -0400] "GET / blah" xx.xx.xxx.xx - - [21/Apr/2010:06:33:20 -0400] "GET / blah" I don't think the problem is memory, because this graph tells me that right before the crash, memory usage is fine. I'm running Apache with the worker MPM; here are the settings for that: <IfModule mpm_worker_module> StartServers 1 MaxClients 100 MinSpareThreads 5 MaxSpareThreads 10 ThreadsPerChild 10 MaxRequestsPerChild 3000 </IfModule> This Apache server is running a bunch of stuff, but most of the traffic comes from a Django project I'm hosting, which uses mod_wsgi. There is also a Simple Machines forum running off mod_fcgid. Those settings are below: <IfModule mod_fcgid.c> MaxRequestsPerProcess 500 MaxProcessCount 3 AddHandler fcgid-script .php .fcgi AddHandler cgi-script .cgi .pl FCGIWrapper "/usr/bin/php-cgi" .php </IfModule> Anyone know of anything else I can check? I've just about tweaked every single setting I can think of, yet these freezes still happen.
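 
    Given that a plain restart reported "already running" while stop-then-start actually recovered the server, the existing 15-minute cron watchdog could check responsiveness and do a full stop/start instead. A minimal sketch, assuming the site answers on http://localhost/ (URL and init-script paths are assumptions):
 
      # Every 15 minutes: if the site does not answer within 20 seconds, stop Apache completely,
      # wait, then start it again (rather than relying on restart).
      */15 * * * * curl --silent --fail --max-time 20 http://localhost/ > /dev/null || { /etc/init.d/apache2 stop; sleep 10; /etc/init.d/apache2 start; }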

    Read the article

< Previous Page | 3 4 5 6 7 8  | Next Page >