Search Results

Search found 8979 results on 360 pages for 'backup sessions'.

Page 84/360

  • how to create a backup of a file once the current one is changed using php

    - by SkyLar
    I'm playing around with file info in PHP and I'm wondering if the following is possible: once a file is changed, get its contents and put them into another file, a kind of "backup system", so each change to the original overwrites the duplicate. I'm trying to do this by storing the modification time in the DB. Is this possible? I don't know what to test against. More clearly, I would like to execute a script once a specified file is changed on the server.
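
    One testable approach is to compare filemtime() against the timestamp stored in the DB; here is a minimal sketch meant to run from cron (the PDO connection details, the file_watch table, and the paths are all hypothetical):

        <?php
        // Hypothetical connection and paths -- adjust to your setup.
        $pdo    = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
        $file   = '/path/to/original.txt';
        $backup = '/path/to/original.txt.bak';

        // Fetch the last modification time we recorded for this file.
        $stmt = $pdo->prepare('SELECT mtime FROM file_watch WHERE path = ?');
        $stmt->execute([$file]);
        $lastSeen = (int) $stmt->fetchColumn();

        clearstatcache();
        $mtime = filemtime($file);

        if ($mtime !== $lastSeen) {
            copy($file, $backup); // overwrite the duplicate with the new contents
            $upd = $pdo->prepare('REPLACE INTO file_watch (path, mtime) VALUES (?, ?)');
            $upd->execute([$file, $mtime]);
        }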

    Read the article

  • Using memcached/APC for session storage?

    - by Industrial
    Hi everybody, I had some thoughts a while back about using memcached for session storage, but came to the conclusion that it wouldn't be sufficient if one or more of the servers in the memcached pool went down. A hybrid version, meant to spare the main database (MySQL) the read load, would be a function that tries to fetch the data from the cache pool and, if that fails, gets it from the database. After putting some more thought into it, I started to think about using APC for session-related data. If our web server went down, sessions would be lost either way, so storing them in local APC or a localhost memcached server maybe isn't that bad? What are your experiences?
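
    The read-through fallback described above can be quite small; a sketch assuming the pecl Memcached extension and a PDO connection (the table, key prefix, and TTL are hypothetical):

        <?php
        $mc = new Memcached();
        $mc->addServer('127.0.0.1', 11211);
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

        function readSession($id, Memcached $mc, PDO $pdo) {
            $data = $mc->get("sess:$id");
            if ($data !== false) {
                return $data;                      // cache hit
            }
            // cache miss (or a pool node is down): fall back to MySQL
            $stmt = $pdo->prepare('SELECT data FROM sessions WHERE id = ?');
            $stmt->execute([$id]);
            $data = $stmt->fetchColumn();
            if ($data !== false) {
                $mc->set("sess:$id", $data, 1440); // repopulate for next time
            }
            return $data === false ? '' : $data;
        }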

    Read the article

  • Cron job execute backup.bash

    - by leejava
    Dear all, I want cron to execute backup.bash, but when I create the cron entry below it doesn't execute my script at all:

        */1 * * * * /var/www/mango_gis/delete_snapshot.bash > /dev/null

    Here is my script:

        #!/bin/bash
        get() {
            local pos=$1
            shift
            eval 'echo ${'$pos'}'
        }
        length() {
            echo $#
        }
        find_snapshots() {
            echo $(ec2-describe-snapshots | xargs -n1 basename)
        }
        snapshots=$(find_snapshots)
        len=$(length $snapshots)
        row_count=$(($len/6))
        if (($row_count > 6)); then
            delete_count=$(($row_count-6))
            for (( i=1; i<=$delete_count; i++ )); do
                ec2-delete-snapshot $(echo $(get $((2+$((6*$(($i-1)))))) $snapshots)) > /dev/null
            done
        fi

    Please advise... Leakhina
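
    A common reason a script like this runs fine from a shell but not from cron is the environment: cron provides a minimal PATH and none of the variables the EC2 command-line tools need. A hedged sketch of a wrapper that makes the environment explicit (all the paths below are assumptions; also make sure the script is chmod +x):

        #!/bin/bash
        # Wrapper for cron: export what the classic EC2 API tools expect.
        export JAVA_HOME=/usr/lib/jvm/default-java
        export EC2_HOME=/opt/ec2-api-tools
        export EC2_PRIVATE_KEY=/root/.ec2/pk.pem
        export EC2_CERT=/root/.ec2/cert.pem
        export PATH="$PATH:$EC2_HOME/bin"

        /var/www/mango_gis/delete_snapshot.bash >> /var/log/delete_snapshot.log 2>&1

    Logging to a file instead of /dev/null also captures whatever error message cron was hiding.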

    Read the article

  • Want to improve my simple site/db backup script with auto removal of old backups.

    - by Robert Robb
    I have the following simple script for backing up my website files and db. The script is run each day via a cron job.

        #!/bin/sh
        NOW=$(date +"%Y-%m-%d")
        mysqldump --opt -h localhost -u username -p'password' dbname > /path/to/folder/backup/db-backup-$NOW.sql
        gzip -f /path/to/folder/backup/db-backup-$NOW.sql
        tar czf /path/to/folder/backup/web-backup-$NOW.tgz /path/to/folder/web/content/

    It works great, but I don't want loads of old backups clogging my system. How can I modify the script to remove any backups older than a week when the script is run?
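
    A minimal sketch of the usual answer, appended to the end of the script (assuming GNU find, whose -delete action does the removal):

        # remove backups older than 7 days
        find /path/to/folder/backup -name 'db-backup-*.sql.gz' -mtime +7 -delete
        find /path/to/folder/backup -name 'web-backup-*.tgz' -mtime +7 -delete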

    Read the article

  • Should I stick only to AWS RDS Automated Backup or DB Snapshots?

    - by James Wise
    I am using AWS RDS for MySQL. When it comes to backup, I understand that Amazon provides two types: automated backups and database (DB) snapshots. The difference is explained here - http://aws.amazon.com/rds/faqs/#23. However, I am still confused about whether I should stick to automated backups only, or use both automated backups and manual DB snapshots. What do you think, guys? What's your own setup? I've heard from others that automated backups are not reliable, since some databases were unrecoverable after the DB instance crashed, so DB snapshots are the way to rescue you. If I take daily DB snapshots on a schedule similar to the automated backups, I'm going to pay quite a few extra bucks. Hope anyone can enlighten me or advise me on the right setup. Thanks. James

    Read the article

  • An XEvent a Day (16 of 31) – How Many Checkpoints are Issued During a Full Backup?

    - by Jonathan Kehayias
    This wasn’t my intended blog post for today, but last night a question came across #SQLHelp on Twitter from Varun ( Twitter ). #sqlhelp how many checkpoints are issued during a full backup? The question was answered by Robert Davis (Blog|Twitter) as: Just 1, at the very start. RT @ 1sql : #sqlhelp how many checkpoints are issued during a full backup? This seemed like a great thing to test out with Extended Events so I ran through the available Events in SQL Server 2008, and the only Event related...(read more)
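
    For readers who want to reproduce the test, a hedged sketch of the kind of event session involved (event and target names as they appear in SQL Server 2008; verify them against your build):

        CREATE EVENT SESSION [TrackCheckpoints] ON SERVER
        ADD EVENT sqlserver.checkpoint_begin,
        ADD EVENT sqlserver.checkpoint_end
        ADD TARGET package0.ring_buffer;

        ALTER EVENT SESSION [TrackCheckpoints] ON SERVER STATE = START;

        -- run BACKUP DATABASE ... here, then inspect the ring buffer
        SELECT CAST(t.target_data AS xml)
        FROM sys.dm_xe_sessions s
        JOIN sys.dm_xe_session_targets t ON s.address = t.event_session_address
        WHERE s.name = 'TrackCheckpoints';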

    Read the article

  • How can I install Ubuntu on my Nexus 7 while being able to recover from a Nandroid backup?

    - by MagicFab
    I use CyanogenMod and ClockworkMod Recovery on my Nexus 7. How can my existing full Nandroid backup be used to restore my device after installing Ubuntu? The instructions assume "recovery" means re-flashing the vanilla image to factory, data-wiped condition. It would be useful to provide a .zip that can be flashed via ClockworkMod (or another) recovery, using ROM Manager or by booting into recovery, to get back to whatever Nandroid backup there is, much as any other ROM is provided/used.

    Read the article

  • WCF Reliable Session Timeout

    - by RemotecUk
    Hi, when do reliable sessions time out? My session class is defined as follows:

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]

    and in my app.config:

        <bindings>
          <netTcpBinding>
            <binding name="FTS_netTcpBinding">
              <reliableSession enabled="true" inactivityTimeout="00:00:30"/>
            </binding>
          </netTcpBinding>
        </bindings>

    I have put a timer in the constructor of my session class that simply outputs a count (1..2..3...) to the console for every second the session is active. I have tested it so far by faulting my channel. I would have imagined that the session class would die after ~30 seconds (as specified in the inactivityTimeout parameter) and hence the timer would stop, but it was still going after a minute. Each session in my app will hold significant resources, so I need to make sure they are cleaned up when something goes wrong. Thanks.
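
    One hedged suggestion (an assumption about the cause, not a confirmed diagnosis): inactivityTimeout only controls when the reliable session faults; the binding's receiveTimeout, which defaults to 10 minutes, also governs how long the service side keeps an idle session's resources, so a 30-second inactivityTimeout alone may not release the instance. A sketch of setting both:

        <binding name="FTS_netTcpBinding" receiveTimeout="00:00:30">
          <reliableSession enabled="true" inactivityTimeout="00:00:30"/>
        </binding>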

    Read the article

  • Making a database backup to SDCard on Android

    - by Pentium10
    I am using the below code to write a backup copy to the SD card, and I get java.io.IOException: Parent directory of file is not writable: /sdcard/mydbfile.db

        private class ExportDatabaseFileTask extends AsyncTask<String, Void, Boolean> {
            private final ProgressDialog dialog = new ProgressDialog(ctx);

            // can use UI thread here
            protected void onPreExecute() {
                this.dialog.setMessage("Exporting database...");
                this.dialog.show();
            }

            // automatically done on worker thread (separate from UI thread)
            protected Boolean doInBackground(final String... args) {
                File dbFile = new File(Environment.getDataDirectory()
                        + "/data/com.mypkg/databases/mydbfile.db");
                File exportDir = new File(Environment.getExternalStorageDirectory(), "");
                if (!exportDir.exists()) {
                    exportDir.mkdirs();
                }
                File file = new File(exportDir, dbFile.getName());
                try {
                    file.createNewFile();
                    this.copyFile(dbFile, file);
                    return true;
                } catch (IOException e) {
                    Log.e("mypck", e.getMessage(), e);
                    return false;
                }
            }

            // can use UI thread here
            protected void onPostExecute(final Boolean success) {
                if (this.dialog.isShowing()) {
                    this.dialog.dismiss();
                }
                if (success) {
                    Toast.makeText(ctx, "Export successful!", Toast.LENGTH_SHORT).show();
                } else {
                    Toast.makeText(ctx, "Export failed", Toast.LENGTH_SHORT).show();
                }
            }

            void copyFile(File src, File dst) throws IOException {
                FileChannel inChannel = new FileInputStream(src).getChannel();
                FileChannel outChannel = new FileOutputStream(dst).getChannel();
                try {
                    inChannel.transferTo(0, inChannel.size(), outChannel);
                } finally {
                    if (inChannel != null) inChannel.close();
                    if (outChannel != null) outChannel.close();
                }
            }
        }
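
    A first thing to rule out (an assumption about the likely cause): writing to external storage requires a manifest permission, and the media must actually be mounted. In AndroidManifest.xml:

        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    and before writing, a guard along these lines:

        // skip the export if the SD card is not available for writing
        if (!Environment.MEDIA_MOUNTED.equals(Environment.getExternalStorageState())) {
            return false;
        }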

    Read the article

  • PHP cookies in a session handler

    - by steve
    I have run into a very interesting problem trying to debug my custom PHP session handler. For some reason unknown to me, I can set cookies all the way through the session handler right up until the very start of the write function. As far as I know, session handler calls go in this order: open - read - write - close. The open function sets a cookie just fine:

        function open($save_path, $session_name) {
            require_once('database.php');
            require_once('websiteinfo.php');
            mysql_connect($sqllocation, $sql_session_user, $sql_session_pass);
            @mysql_select_db($sql_default_db);
            date_default_timezone_set('America/Los_Angeles');
            setcookie("test", "test");
            return TRUE;
        }

    The read function can set a cookie right up until the very moment it returns a value:

        function read($session_id) {
            $time = time();
            $query = "SELECT * FROM `sessions` WHERE `expires` > '$time'";
            $query_result = mysql_query($query);
            $data = '';
            /* fetch the array and start sifting through it */
            while ($session_array = mysql_fetch_array($query_result)) {
                /* strip the slashes from the session array */
                $session_array = $this->strip($session_array);
                /* authenticate the user and if so return the session data */
                if ($this->auth_check($session_array, $session_id)) {
                    $data = $session_array['data'];
                }
            }
            setcookie("testcookie1", "value1", time() + 1000, '/');
            return $data;
        }

    The very first line of my write function sets another cookie, and it cannot, because the headers are already sent. Any ideas, anyone? Thanks in advance
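
    A likely explanation (offered as an assumption): PHP normally invokes the write handler during script shutdown, after output has been sent, so headers - and therefore cookies - can no longer be set from write(). A common workaround is to force the write to happen while headers can still be sent:

        // call this before producing any output: it runs write() and
        // close() immediately instead of waiting for script shutdown
        session_write_close();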

    Read the article

  • Sessions enabled - do we have to clean them up ourselves?

    - by user246114
    Hi, when we turn sessions on in Google App Engine, like so in appengine-web.xml:

        <sessions-enabled>true</sessions-enabled>

    does App Engine automatically clean up expired sessions, or do we have to do it ourselves? After turning them on, I see entries like _ah_SESSION being generated in the datastore, and I'm wondering if those are them? Thanks
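
    For what it's worth, the commonly cited approach (hedged: the servlet class name below comes from the App Engine Java SDK and should be verified against your SDK version) is that expired _ah_SESSION entities are not removed automatically, and the SDK ships a cleanup servlet that can be wired to cron in web.xml:

        <servlet>
          <servlet-name>SessionCleanup</servlet-name>
          <servlet-class>com.google.apphosting.utils.servlet.SessionCleanupServlet</servlet-class>
        </servlet>
        <servlet-mapping>
          <servlet-name>SessionCleanup</servlet-name>
          <url-pattern>/_cleanup_sessions</url-pattern>
        </servlet-mapping>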

    Read the article

  • How to run 2 X sessions with different display managers?

    - by ved2254
    I read a little about virtual terminals and that gave me an idea. I searched for a way to have two X sessions simultaneously, and had a look at these questions: 1. How to run multiple user X sessions on the same computer, at the same time? 2. How to drag windows between 2 X servers? I tried startx -- :1, but my earlier session (on Ctrl+Alt+F7) hung up. How do I ensure this does not happen? My main goal is to get Unity on :0 and GNOME Shell on :1 and switch between them like workspaces. As per 2., dragging between servers is not recommended because it is not easy, but if I get at least 1. working I'd be happy. I have Ubuntu 12.04 64-bit.
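
    A hedged sketch of one way to try this (session names and paths vary by release; check /usr/share/gnome-session/sessions for what your install actually provides): leave Unity on :0, switch to a text console with Ctrl+Alt+F1, and start a second server on :1 pinned to an unused VT:

        # from the text console; Unity keeps running on :0 (vt7)
        startx /usr/bin/gnome-session --session=gnome -- :1 vt8

    Then Ctrl+Alt+F7 and Ctrl+Alt+F8 switch between the two sessions.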

    Read the article

  • What is the suggested approach to Syncing/Backing up/Restoring from SQL Server 2008 to SQL Server 2005?

    - by Eoin Campbell
    I only have SQL Server 2008 (Dev Edition) on my development machine. I only have SQL Server 2005 available with my hosting company (and I don't have direct connection access to this database). I'm just wondering what the best approach is for getting the initial DB structure & data into production, and for keeping any structural/data changes in sync in future. As far as I can see:

    Replication - not an option because I can't connect to the production DB.

    Restoring a backup - not an option because, as far as I can see, you cannot export a DB from 2008 that is restorable in 2005 (even with the 2008 DB set to 2005 compatibility mode), and it wouldn't make sense to restore production over the top of my dev version anyway.

    Dump all the scripts from my 2008 database, revert my dev machine from 2008 to 2005, and recreate the database from the scripts, then just use backup & restore to get the initial DB into production, and run scripts through the web panel from that point onwards.

    Dump all the scripts from my 2008 database and generate the entire 2005 DB from scripts in production, then run scripts through the web panel from that point onwards.

    With the last two options, I'd probably need to script all the data inserts as well using some tool (which I presume exists on the web). Are there any other possible solutions that I'm not considering?

    Read the article

  • Time Machine for Windows?

    - by Blub
    Finally got convinced to start using some kind of version control for my code instead of zipping down a copy of the project at the end of each day. Downloaded TortoiseSVN and used it to create a repository locally on my HDD. I've been using it for 2 days now, but I have to say that using it is actually more hassle than just copying the project manually in Explorer. Sure, you only store incremental changes, but with the cheap disks of today that's not much of an argument when you only have small projects. I haven't really found a quick way to browse the older versions of my files either. What I want is an infinite undo that is completely transparent while I code: if I save the file, I want a backup. I don't want to check out, check in, and don't even get me started on moving files. I haven't tried Time Machine for OS X, but it looks like exactly what I'm looking for. Does such a program exist for Windows? Preferably free, and with some kind of tagging system so I can tag a timestamp when the project is working, etc. Maybe I should add that I mostly work alone on a single computer. Update: Some of you asked why I want backup. Since I work alone, it's mostly to let me quickly hack up a solution without worrying that something will screw up.

    Read the article

  • Is the RESTORE process dependent on schema?

    - by Martin Aatmaa
    Let's say I have two database instances: InstanceA, the production server, and InstanceB, the test server. My workflow is to deploy new schema changes to InstanceB first, test them, and then deploy them to InstanceA. So, at any one time, the relationship looks like this: InstanceA - Schema Version 1.5; InstanceB - Schema Version 1.6 (new version being tested). An additional part of my workflow is to keep the data in InstanceB as fresh as possible. To do that, I take database backups of InstanceA and restore them to InstanceB. My question is: how does the schema version affect the restore process? I know I can back up InstanceA at Schema Version 1.5 and restore it to InstanceB at Schema Version 1.5. But can I back up InstanceA at Schema Version 1.5 and restore it to InstanceB at Schema Version 1.6 (the new version being tested)? If not, what would the failure look like? If yes, would the type of schema change matter? For example, if Schema Version 1.6 differed from 1.5 by just an altered stored proc, I imagine that this type of change shouldn't affect the restore process. On the other hand, if 1.6 differed from 1.5 by a different table definition (say, an additional column), I imagine this would affect the restore process. I hope I've made this clear enough. Thanks in advance for any input!
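
    One point worth stating plainly (shown as a sketch with hypothetical database and file names, not specific to this setup): a full RESTORE does not merge into the target's existing schema; it replaces the entire database, schema and data alike. So restoring the 1.5 backup over the 1.6 test database succeeds, but leaves InstanceB back at Schema Version 1.5, and the 1.6 changes would have to be reapplied:

        RESTORE DATABASE MyDb
        FROM DISK = N'C:\backups\InstanceA_MyDb.bak'
        WITH REPLACE,  -- overwrite the existing (v1.6) database
             MOVE 'MyDb_Data' TO N'C:\data\MyDb.mdf',
             MOVE 'MyDb_Log' TO N'C:\data\MyDb.ldf';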

    Read the article

  • Uniquely identify files/folders in NTFS, even after move/rename

    - by Felix Dombek
    I haven't found a backup (synchronization) program which does what I want so I'm thinking about writing my own. What I have now does the following: It goes through the data in the source and for every file which has its archive bit set OR does not exist in the destination, copies it to the destination, overwriting a possibly existing file. When done, it checks for all files in the destination if it exists in the source, and if it doesn't, deletes it. The problem is that if I move or rename a large folder, it first gets copied to the destination even though it is in principle already there, just has a different path. Then the folder which was already there is deleted afterwards. Apart from the unnecessary copying, I frequently run into space problems because my backup drive isn't large enough to hold the original data twice. Is there a way to programmatically identify such moved/renamed files or folders, i.e. by NTFS ID or physical location on media or something else? Are there solutions to this problem? I do not care about the programming language, but hints for doing this with Python, C++, C#, Java or Prolog are appreciated.
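
    One hedged possibility (Python 3.5+ on Windows, offered as a sketch): NTFS gives every file a volume-unique file reference number, which os.stat() exposes as st_ino, and it survives renames and moves within the same volume. Recording (st_dev, st_ino) pairs on each run lets the next run recognize a moved or renamed folder instead of re-copying it:

        import os

        def file_id(path):
            # (volume id, NTFS file reference number): stable across
            # renames and moves within the same volume
            st = os.stat(path)
            return (st.st_dev, st.st_ino)

        # Map ids to current paths; comparing this against the ids
        # recorded on the previous run reveals moves and renames.
        ids = {}
        for root, _, files in os.walk(r"C:\data"):
            for name in files:
                path = os.path.join(root, name)
                ids[file_id(path)] = path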

    Read the article

  • Error during Time Machine backups on OS X Lion

    - by user92401
    After I turn on my machine, the first couple of Time Machine backups seem to go OK, but after about an hour I get this error: Unable to complete backup. An error occurred while creating the backup folder. Latest successful backup: 7/31/11 at 12:32 PM I'm running 10.7. Time Machine is backing up an internal HD to an external USB HD. I've already run Disk Utility to repair the Time Machine partition. It's a relatively new hard drive and didn't have any issues. Here's what I've found in the Console's log filtered for backupd: 7/31/11 12:31:21.223 PM com.apple.backupd: Starting standard backup 7/31/11 12:31:21.447 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb 7/31/11 12:31:29.146 PM com.apple.backupd: 983.7 MB required (including padding), 391.90 GB available 7/31/11 12:32:19.471 PM com.apple.backupd: Copied 3156 files (36.0 MB) from volume Macintosh HD. 7/31/11 12:32:20.017 PM com.apple.backupd: Copied 3173 files (36.0 MB) from volume LI. 7/31/11 12:32:20.136 PM com.apple.backupd: 934.8 MB required (including padding), 391.86 GB available 7/31/11 12:32:54.755 PM com.apple.backupd: Copied 916 files (117.8 MB) from volume Macintosh HD. 7/31/11 12:32:54.894 PM com.apple.backupd: Copied 933 files (117.8 MB) from volume LI. 7/31/11 12:32:55.937 PM com.apple.backupd: Starting post-backup thinning 7/31/11 12:32:55.937 PM com.apple.backupd: No post-back up thinning needed: no expired backups exist 7/31/11 12:32:55.960 PM com.apple.backupd: Backup completed successfully. 7/31/11 1:21:28.624 PM com.apple.backupd: Starting standard backup 7/31/11 1:21:28.631 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb 7/31/11 1:21:28.682 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37 7/31/11 1:21:28.683 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37 7/31/11 1:21:38.694 PM com.apple.backupd: Backup failed with error: 2

    Read the article

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related. First, my Trash would not empty. It seems to be getting stuck on certain files, because I will reset my Macbook and some of the files will be deleted, and then if I remove a file or two at random, more can be deleted. Some of these files had strange characters in their names. I tried changing the names to single characters, but this did not help. Next, I attempted to backup my Macbook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the trash problem as well, and whether or not these problems are related. EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me. 6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup 6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target 6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19 6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup 6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb 6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD 6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank 6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db| 6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled. 6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected. 6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup 6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target 6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19

    Read the article

  • Ajax progress with PHP session

    - by FFish
    I have an app that processes images and uses jQuery to display progress to the user. I did this by writing to a text file each time an image is processed, then reading that status with a setInterval. Because no images are actually written during the processing (I do it in PHP's memory), I thought a log.txt would be a solution, but I am not sure about all the fopen and fread calls. Is this prone to issues? I also tried PHP sessions, but can't seem to get that to work, and I don't see why. HTML:

        <a class="download" href="#">request download</a>
        <p class="message"></p>

    JS:

        $('a.download').click(function() {
            var queryData = {images : ["001.jpg", "002.jpg", "003.jpg"]};
            $("p.message").html("initializing...");
            var progressCheck = function() {
                $.get("dynamic-session-progress.php", function(data) {
                    $("p.message").html(data);
                });
            };
            $.post('dynamic-session-process.php', queryData,
                function(intvalId) {
                    return function(data) {
                        $("p.message").html(data);
                        clearInterval(intvalId);
                    }
                } (setInterval(progressCheck, 1000))
            );
            return false;
        });

    process.php:

        // session_start();
        $arr = $_POST['images'];
        $arr_cnt = count($arr);
        $filename = "log.txt";
        for ($i = 1; $i <= $arr_cnt; $i++) {
            $content = "processing $val ($i/$arr_cnt)";
            $handle = fopen($filename, 'w');
            fwrite($handle, $content);
            fclose($handle);
            // $_SESSION['counter'] = $content;
            sleep(3); // to mimic image processing
        }
        echo "<a href='#'>download zip</a>";

    progress.php:

        // session_start();
        $filename = "log.txt";
        $handle = fopen($filename, "r");
        $contents = fread($handle, filesize($filename));
        fclose($handle);
        echo $contents;
        // echo $_SESSION['counter'];
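
    A hedged guess at why the session version fails: PHP's default handler locks the session for the whole request, so progress.php blocks until process.php finishes, and session data is only persisted at shutdown anyway. A sketch of working around both inside the processing loop:

        // inside the loop in process.php, for each image:
        session_start();                  // reacquire the session
        $_SESSION['counter'] = $content;  // record progress
        session_write_close();            // persist now and release the lock

    The log-file approach avoids the lock entirely; wrapping the writes in flock() would guard against a reader catching a half-written line.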

    Read the article

  • sybase bcp import fails

    - by chromeplatedbanana
    We're trying to export some tables from our production database to our test database using bcp. The bcp export seems to work fine, but the import always fails with a data type error (see below). We tested on our test database by exporting the table contents to a file, then importing them again immediately, but that failed too. E.g.,

        bcp TABLENAME out ~/tempfile -S servername -U username

    generates a file as expected. If we use the -c option, the number of lines is as expected. However,

        bcp TABLENAME in ~/tempfile -S servername -U username

    fails with

        CTLIB Message: - L0/0D/S0/N0/0/0: blk_int(): blk_layer: CT library error:
        Cannot find an equivalent CS_TYPE for this TDS data type 49
        blk_init failed.

    We get this whenever we try to copy into TABLENAME, whether from the production or test table dump file. I don't understand why export and import for the same TABLENAME generates a data type error. What am I doing wrong here? Thanks
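
    One hedged thing to try (an assumption based on the error, not a confirmed fix): native-format bulk copy is sensitive to datatype differences between the two servers, while character mode transfers everything as text and sidesteps the CS_TYPE mapping. The flag has to be given on both sides:

        bcp TABLENAME out ~/tempfile -c -S servername -U username
        bcp TABLENAME in ~/tempfile -c -S servername -U username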

    Read the article

  • What's the most efficient way to reclaim disk space after deleting lots of data from a database on Sybase ASE 15?

    - by Ernie Longmire
    As I understand it, based on some research but zero real-world experience with Sybase ASE, the only way to reclaim disk space once it's been allocated to a database is to export that database, create a new DB with the same schema, and reload all the exported data to the new database. Is this correct, or is there some other method? Then: assuming the above is correct and a full export-recreate-reload is required, what's the most efficient way to do that? Are there tools that will automate all or part of that process? I'm being told we would have to write separate bcp export and import commands for each and every object in the database, which if true sounds easily scriptable by someone who knows Sybase ASE well enough. (I don't.) This seems to me like a really basic housekeeping task, and it feels like I'm missing something obvious.
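
    For the scripting half, a rough sketch (assuming the isql and bcp Sybase client tools are on the PATH; the server, login, and directory names are placeholders) that generates one bcp out per user table from sysobjects, so nothing has to be written by hand:

        # list user tables, then dump each one in character mode
        isql -U sa -P secret -S MYSERVER -D mydb -b > tables.txt <<'EOF'
        set nocount on
        select name from sysobjects where type = 'U'
        go
        EOF
        while read -r t; do
            [ -n "$t" ] && bcp mydb.."$t" out "dump/$t.bcp" -c -U sa -P secret -S MYSERVER
        done < tables.txt

    The matching bcp in commands against the freshly created database would be generated the same way.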

    Read the article

  • export shared services from MOSS

    - by vittocia
    Hello, using the stsadm command I have been able to export a MOSS website and restore it on a different server, which works fine. I tried the same for the shared services; it gave no errors, but not all the import connections are there when I check. Is there a better way to export and restore shared services, or a way to sync the import connections and user list?

    Read the article

  • MySQL equivalent to .pgpass, or automatic authentication in a cron job for mySQL

    - by Ibrahim
    I'm writing a bash script to back up my databases. Most are postgresql, and in postgres there's a way to avoid having to authenticate by creating a ~/.pgpass file which contains the postgres password. I put this in root's home directory and made it chmod 0600, so that root could dump the postgres databases without having to authenticate. Now I want to do something similar for mysql, although I only have one mysql database. How can I do this? I don't want to specify the password on the command line for mysqldump because this is part of a script that might be somewhat visible to other users. Is there a better way (i.e. built in to mysql) to do this than make a file that only root can read and then read that to get the mysql password, and then use that in the bash script as a variable?
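
    MySQL's counterpart to .pgpass is the client option file: a [client] section in root's ~/.my.cnf (chmod 0600) is read automatically by mysqldump, so no password appears on the command line or in the script. A minimal sketch (the password is a placeholder):

        [client]
        user = root
        password = s3cret

    With that in place, the backup line reduces to something like:

        mysqldump --opt dbname | gzip > /path/to/folder/backup/mysql-backup-$(date +%F).sql.gz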

    Read the article
