Search Results

Search found 5747 results on 230 pages for 'backup'.


  • Synchronize the same set of files to 2 different locations with 2 different programs for 2 different purposes

    - by Hedgetrimmer
    Because of stupid, questionable IT policies at my not-to-be-named place of occupation, I have been (and will be, for the foreseeable future) carrying on an external hard drive a unison-synchronized copy of all of my documents and code, including code which resides in some of my "dotfiles" and other code which resides in ~/bin (things I've made are there because ~/bin is in my $PATH), along with some cruft generated (and to be generated) by conscript and its related "giter8" templating system for Scala project boilerplates. Despite this, I do use a symlinking program to store all of my important dotfiles in a subdirectory. Thanks to that somewhat complicated setup, I have resorted to making a directory full of symlinks to every directory (or file, as is the case with stuff under ~/bin) that I want synchronized, and setting follow = True in my unison profile. It happens that this collection of odds and ends—plus an automatically generated text file listing every package installed on my system—is everything under ~ that needs to be backed up to a remote (rsync-over-ssh) host with client-side encryption and signing from GPG. I already believe that duplicity is the most appropriate program to do that. What isn't as clear-cut is how to make duplicity use the exact same set of files when it runs a backup; it would be simple if duplicity followed symlinks, but it does not, and the manpage lists no option for enabling any such behavior. Comparing unison's file-selection algorithm to duplicity's, I don't think I can write a program that could compute a ruleset for one program given one for the other. For the record, I would rather not keep the symlinks manually synchronized with duplicity file-selection rules, as they can change thanks to the above-mentioned complications regarding ~/bin. I don't think running duplicity on the external hard disk is such a good idea either; I usually keep that hard disk unmounted and unplugged in case of a power failure or other physical problem with the computer, plus I'm not sure about duplicity's performance given that: the hard disk is NTFS-formatted in order to be usable at my Windows-imprisoned place of occupation, and despite being a USB 3.0 disk, my computer has no USB 3.0 ports, so it acts as a USB 2.0 disk. How can I have duplicity (or is there a better program that I have overlooked?) back up the exact same set of files that is bidirectionally synchronized with my external hard disk?
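
    For illustration only, one hedged way to derive duplicity's file selection from the same directory of symlinks that unison follows might look like the sketch below. The directory name, GPG key ID and remote URL are placeholders, not taken from the question:

        #!/bin/bash
        SYNCSET="$HOME/syncset"          # hypothetical directory of symlinks used by unison
        LIST=$(mktemp)
        # Resolve every symlink to its real target so duplicity is handed concrete paths.
        for link in "$SYNCSET"/*; do
            readlink -f "$link" >> "$LIST"
        done
        duplicity --encrypt-key 0xDEADBEEF --sign-key 0xDEADBEEF \
            --include-filelist "$LIST" --exclude '**' \
            "$HOME" rsync://user@backuphost//srv/backups/home
        rm -f "$LIST"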

    Read the article

  • Is there a way to replicate a very large file shares in real-time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended on the end. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and is on a samba file share. Here's how the folder structure looks: \\server\Version.1 \\server\Version.2 \\server\Version.3 ... \\server\Version.24 The contents of each new folder compared to the last one usually don't change very much, since this is an hourly job. Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out. It's actually been used for years and is so incredibly simple, anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files) and it would actually be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant! Now to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real-time as possible -- think hot spare, disaster recovery, etc. My first thought was rsync. Total failure. Rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough metadata to do this intelligently. The best thing that I can find "out of the box" to do this is Windows Server "Distributed Filesystem Replication", which uses "Remote Differential Compression" -- after reading the background information on how this works, it actually looks like exactly what I need. Problem: both servers are running Linux. D'oh! One approach I'm looking at is this; say it's 5AM and the cron job finishes: (1) the new Version.5 folder arrives on the local server; (2) SSH to the remote server and copy Version.4 to Version.5; (3) run rsync on the local server pushing changes to the remote server -- rsync finally knows to do a differential copy between Version.4 and Version.5. Is there a smarter way to replicate Samba shares as close to real-time as possible? Anything out there that does "Remote Differential Compression" on Linux?
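
    A rough sketch of that copy-then-rsync idea (the hostname and paths below are made up for illustration):

        HOUR=5
        PREV=$((HOUR - 1))
        # Seed the new remote folder from the previous hour using hard links (cheap),
        # then push only what actually changed from the freshly written local copy.
        ssh backuphost "cp -al /data/Version.$PREV /data/Version.$HOUR"
        rsync -a --delete /srv/share/Version.$HOUR/ backuphost:/data/Version.$HOUR/

    rsync's --link-dest option can do the hard-link seeding and the differential copy in one step, which may remove the need for the separate SSH round trip.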

    Read the article

  • How to move partition with Users to a new SATA controller?

    - by Al Kepp
    Our current situation: X79 chipset. Windows 7 installed on a SATA SSD running on a Marvell controller. C:\Users contains just the Public and computer-name folders; all the other users are created in E:\Users. That partition is on two HDDs running on an Intel RAID controller. (The motherboard has got two SATA/RAID onboard controllers.) The goal is: without reinstallation, I want to move the SSD to a standard SATA controller. Then I want to get rid of RAID1, move one HDD to the same internal Intel SATA controller, and move the current E:\Users to that new place. I want it to stay as E:\Users, but I need to reinstall the HDD to let it work in SATA mode without RAID. So I face several problems. I am sure all are solvable with free software utilities, but I don't know how exactly to do it. I can see these particular problems: I have got all users at E:\Users. When I turn off that E:\ disk, I won't be able to log in to Windows. I need at least one admin account to be placed back at C:\Users. The current C: runs on a standard 120 GB SSD, but it is connected to a Marvell SATA/RAID controller. I am afraid Windows won't let me put it on the Intel SATA controller due to a hardware/licence check, and I won't be able to use a standard W7 recovery disk either, because there probably isn't a Marvell SATA/RAID driver on it. I haven't tried anything yet, because I am afraid I could end up with a computer that doesn't work at all. (I want to move it to a standard ICH10 Intel SATA controller so that we have no problems with it in the future. I think it is not very safe when we use any nonstandard hardware to boot the computer.) I need to somehow back up the current E:\ disk and restore it to a new E:. I hope this will be the easy part (as long as the admin account resides on C:). The E: RAID array is very large, but it is almost empty (less than 100 GB of data), so I can make the partition smaller so it can easily fit on a single SATA HDD.

    Read the article

  • Daily Backups for a single table in Microsoft SQL Server

    - by James Horton
    Hello, I have a table in a database that I would like to back up daily, keeping the backups of the last two weeks. It's important that only this single table is backed up. I couldn't find a way of creating a maintenance plan or a job that will back up a single table, so I thought of creating a stored procedure job that will run the logic I mentioned above by copying rows from my table to a database on a different server, and deleting old rows from that destination database. Unfortunately, I'm not sure if that's even possible. Any ideas on how I can accomplish what I'm trying to do would be greatly appreciated. Thank you.
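
    As a purely hypothetical sketch (it assumes the bcp command-line tool and a Unix-style scheduler; the server, database, table and paths are placeholders), a daily single-table export with two weeks of retention could look like:

        STAMP=$(date +%Y-%m-%d)
        # -T uses a trusted connection, -n keeps SQL Server's native format
        bcp MyDatabase.dbo.MyTable out /backups/MyTable-$STAMP.bcp -S sqlserver01 -T -n
        # keep only the last 14 daily exports
        find /backups -name 'MyTable-*.bcp' -mtime +14 -delete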

    Read the article

  • Marking Changes to database...

    - by KoolKabin
    Hi guys... I am developing an application to be run on a central server and on distributed computers. I am supposed to write an application to back up the data from the distributed machines and merge it on the central server. I thought of compressing the whole local database and sending it to the server for merging. But as the database size grows, the size of the compressed file also begins to grow. So is there any way to merge data on the central server without sending the whole database? I need to do it on a daily basis: take a backup daily and send it to the server.
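
    The question doesn't name the database engine, but as one illustration of shipping only the changed rows instead of the whole database (assuming MySQL and a last-modified timestamp column, both of which are assumptions):

        # Export only rows touched in the last day, then ship the much smaller file.
        mysqldump --no-create-info \
            --where="updated_at >= NOW() - INTERVAL 1 DAY" \
            branchdb orders > daily-delta.sql
        gzip daily-delta.sql
        scp daily-delta.sql.gz central-server:/incoming/branch01/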

    Read the article

  • Backing Up Database from Remote Server to Local in VB.NET

    - by Pradeep
    Hi, I am making a VB.NET application that can download/back up the database that is currently on a remote server. I have the remote server's IP, username, password and database name. I am also able to connect to it, but I don't know what to do after connecting to it. I don't know which files need to be backed up (I think the database and the log file both must be backed up, but I am not sure). Please let me know the basic commands that I will need to back up the whole database. Thanks in advance.
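
    Assuming the remote database is SQL Server (the question doesn't actually say), the core T-SQL statement is BACKUP DATABASE; a hedged sketch run through sqlcmd might look like the line below. Note that the .bak file is written on the server's own disk, not on the machine running the command:

        sqlcmd -S 203.0.113.10 -U backupuser -P secret \
            -Q "BACKUP DATABASE [MyDatabase] TO DISK = N'C:\Backups\MyDatabase.bak' WITH INIT"

    A full backup like this already contains everything needed to restore the database; a separate BACKUP LOG is only required if the database uses the full recovery model and point-in-time restores are wanted.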

    Read the article

  • mySQL database password change now crashes Joomla.

    - by casim
    I have a MySQL database behind a Joomla install. I changed the database password because I forgot it, but now Joomla crashes looking for the database. I guess Joomla has the password written somewhere - if anyone knows where, I might be able to manually edit it and enter the new database password. Otherwise I'm hoping a manual install of a backup of the original database will work. I need to know: does a backup include the database password? If yes, will reinstating my original database solve the problem for me by reverting the system back to its original password? Please help. thx, s.
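
    For what it's worth, Joomla keeps its database credentials in configuration.php in the site root, so the value there can be edited to match the new password; a quick way to locate the line (the path below is only an example):

        grep -n "password" /var/www/joomla/configuration.php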

    Read the article

  • Copying data from STDOUT to a remote machine using SFTP

    - by freddie
    In order to back up large database partitions to a remote machine using SFTP, I'd like to use the database's dump command and send the output directly over SFTP to a remote location. This is useful when dumping large data sets and you don't have enough local disk space to create the backup file first and then copy it to a remote location. I've tried using python + paramiko, which provides this functionality, but the performance is much worse than using the native openssh/sftp binary to transfer files. Does anyone have any idea how to do this either with the native sftp client on linux, or with some library like paramiko (but one that performs close to the native sftp client)?
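
    For illustration, a sketch of streaming a dump straight to the remote host with the stock OpenSSH client, with pg_dump standing in for whichever dump command the database provides (all names and paths are placeholders). The OpenSSH sftp client itself cannot read the upload body from stdin, which is why the first variant pipes through ssh instead:

        # Nothing is written to the local disk; the dump goes straight over the wire.
        pg_dump -Fc mydatabase | ssh backup@remote-host 'cat > /backups/mydatabase.dump'

        # If only SFTP is allowed, curl built with SFTP support can usually upload
        # from a pipe (-T - reads the upload body from stdin).
        pg_dump -Fc mydatabase | curl -T - sftp://backup@remote-host/backups/mydatabase.dump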

    Read the article

  • The best dvd ripper software in 2014 review

    - by user328170
    The top 3 DVD ripping tools in 2014. Nowadays everyone may have several smart mobile devices, such as an iPhone, iPad Air, iPad mini, Samsung Galaxy or Sony Xperia. If you want to take your movies with you on your mobile devices, or sometimes just want to back up those classic physical discs on your notebook or workstation at high quality, you need fast and stable software to rip them and convert them to the format you like. Fortunately, there are plenty of software products designed to make the process easy and turn a DVD into files that are playable on any mobile device you choose. We have done a full review of dozens of products; here are three of the best, based on our review. We tested the software on ripping speed, ease of use, reliability and ripping capability. The top one is still WinX DVD Ripper Platinum. We tested its 6.1 version two years ago for its ability to quickly and easily rip DVDs and Blu-ray discs to high-quality MKV files with a single click, and it made a deep impression in that test. This time we tested its latest 7.3.5 version. Besides ease of use and speed, we tested its ability to decrypt all kinds of discs with different protection methods, for example Disney X-project DRM, Sony ArccOS, RCE and region codes. The results show that WinX DVD Ripper Platinum still maintains its advantages in all these areas. WinX DVD Ripper Platinum is a focused DVD ripping tool whose basic duty is to rip and convert DVDs. The UI has a modern, technical look; all the main functions are shown prominently while other specials are hidden for advanced users, making it clear and convenient to choose options. There are two companies, Weisoft Limited and Digiarty, who provide the software: Weisoft Limited focuses on the USA, UK and Australia markets, Digiarty on the others. Ratings: ripping speed 5/5, ease of use 5/5, reliability 5/5, ripping capability 5/5. The second one is DVDFab. DVDFab is also very robust at ripping discs and can decrypt most of the discs on the market; its weaker points are still ease of use and speed. We'd note that the app is frequently updated to cut through the copy protection on even the latest DVDs and Blu-ray discs. The app is shareware, meaning most features are free, including decrypting and ripping to your hard drive. Many of you note that you use another app for compression and authoring, but many of you also say that hey, storage is cheap, and the rips from DVDFab are easy, one-click, and work. Ratings: ripping speed 3/5, ease of use 4/5, reliability 5/5, ripping capability 5/5. The third one is Handbrake. Handbrake is our favorite video encoder for a reason: it's simple, easy to use, easy to install, and offers lots of options to get a high-quality file as a result. If you're scared by them, you don't even have to use them - the app will compensate for you and pick some settings it thinks you'll like based on your destination device. So many of you like Handbrake that many of you use it in conjunction with another app (like VLC, which makes ripping easy) - you'll let another app do the rip and crack the DRM on your discs, and then process the file through Handbrake for encoding. The app is fast, can make the most of multi-core processors to speed up the process, and is completely open source. Ratings: ripping speed 3/5, ease of use 4/5, reliability 4/5, ripping capability 4/5.

    Read the article

  • What's the best practice for taking MySQL dump, encrypting it and then pushing to s3?

    - by HalogenCreative
    This current project requires that the DB be dumped, encrypted and pushed to s3. I'm wondering what might be some "best practices" for such a task. As of now I'm using a pretty straight-ahead method but would like to have some better ideas where security is concerned. Here is the start of my script:

        mysqldump -u root --password="lepass" --all-databases --single-transaction > db.backup.sql
        tar -c db.backup.sql | openssl des3 -salt --passphrase foopass > db.backup.tarfile
        s3put backup/db.backup.tarfile db.backup.tarfile
        # Let's pull it down again and untar it for kicks
        s3get surgeryflow-backup/db/db.backup.tarfile db.backup.tarfile
        cat db.backup.tarfile | openssl des3 -d -salt --passphrase foopass | tar -xvj

    Obviously the problem is that this script contains everything an attacker would need to raise hell. Any thoughts, critiques and suggestions for this task will be appreciated.
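
    One hedged variation on the same pipeline that avoids keeping a passphrase in the script is to encrypt to a GPG public key; the key recipient, bucket and the use of the aws CLI below are illustrative assumptions, not part of the original setup:

        # mysqldump reads its credentials from ~/.my.cnf instead of the command line
        mysqldump --all-databases --single-transaction \
            | gzip \
            | gpg --encrypt --recipient backups@example.com \
            > db.backup.sql.gz.gpg
        aws s3 cp db.backup.sql.gz.gpg s3://example-bucket/db.backup.sql.gz.gpg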

    Read the article

  • Firebird .NET: Database backup not working (small file)

    - by Norbert
    Hi, I am trying to back up my Firebird 2.5 database file from code:

        FbBackup backupSvc = new FbBackup();
        backupSvc.ConnectionString = MyConnectionManager.buildConnectionString();
        backupSvc.BackupFiles.Add(new FbBackupFile(backupPathFilenameAndExtension, 2048));
        backupSvc.Verbose = true;
        backupSvc.Options = FbBackupFlags.IgnoreLimbo;
        backupSvc.Execute();

    The database gets saved to the specified directory. However, the saved file is only 168 kB, while the original database is nearly 7 MB in size. What goes wrong? Thanks, Norbert

    Read the article

  • SSAS 2008 backup/restore fails with a GetOverlappedResult 'Insufficient system resources exist to complete the requested service' error

    - by Anant Aneja
    Hi, on my SSAS 2008 instance, if a backup/restore of any database is made to/from a UNC path I get an error: The following system error occurred from a call to GetOverlappedResult for Physical file: '\\server\share\OLAPDB.abf', Logical file: '': Insufficient system resources exist to complete the requested service. Server: The operation has been cancelled. (Microsoft.AnalysisServices) Creating/copying/moving a file of any size on the share using Explorer or the command prompt works. The most useful link I could find is: http://www.tech-archive.net/Archive/Development/microsoft.public.win32.programmer.kernel/2004-07/0475.html Can anyone shed more light on what could be causing this error? (I've posted the same question on the SSAS forums - just a heads up)

    Read the article

  • How do you backup your localhost ?

    - by justjoe
    I have a method to back up my work on localhost on a weekly basis. I use multiple DOS commands and save them in a .bat file. I use commands such as copy and xcopy to save my localhost to another place. As my server grows larger, I think it takes too much space. So is there a way to solve this problem? Maybe a piece of software that can track changes to our PHP code? EDIT: I use Windows XP SP2, with XAMPP (Apache, PHP 5.2.1). The localhost refers to my laptop; I installed the localhost server there.

    Read the article

  • How to create a backup from SqlAlchemy?

    - by swilliams
    I'm writing a Pylons app, and am trying to create a simple backup system where every table is serialized and tarred up into a single file for an administrator to download, and used to restore the app should something bad happen. I can serialize my table data just fine using the SqlAlchemy serializer, and I can deserialize it fine as well, but I can't figure out how to commit those changes back to the database. In order to serialize my data I am doing this:

        from myproject.model.meta import Session
        from sqlalchemy.ext.serializer import loads, dumps

        q = Session.query(MyTable)
        serialized_data = dumps(q.all())

    In order to test things out, I go ahead and truncate MyTable, and then attempt to restore using serialized_data:

        from myproject.model import meta

        restore_q = loads(serialized_data, meta.metadata, Session)

    This doesn't seem to do anything... I've tried calling a Session.commit after the fact, and individually walking through all the objects in restore_q and adding them, but nothing seems to work. What am I missing? Or is there a better way to do what I'm aiming for? I don't want to shell out and directly touch the database, since SqlAlchemy supports different database engines.

    Read the article

  • Batch backup a harddrive without modifying access times C#

    - by johnathan-doena
    I'm trying to write a simple program that will back up my flash drive. I want it to work automatically and silently in the background, and I also want it to be as quick as possible. The thing is, resetting all the access times is useless to me, and something I want to avoid. I know I can read the access times and set them back, but I bet that will fail one day in the future. It would be much simpler to read the files without ever changing them. Also, what is the fastest way to do this? What differences would there be between, say, a flash drive and an external hard drive? I am writing this in C#, as it is the simplest way to do it and it will probably last through more generations of Windows.

    Read the article

  • how to create a backup of a file once the current one is changed using php

    - by SkyLar
    I'm playing around with file info in PHP and I'm wondering if it's possible to do the following: once a file is changed, get the contents of the file, then put the contents of the file into another file - kinda like a "backup system", so once you make a change to the file a duplicate one is created, and each change to the original file overwrites the duplicate one. I'm trying to do this by storing the time in the db; is this possible? I don't know what there is to test against. More clearly, I would like to do the following: execute a script once a specified file is changed on the server.

    Read the article

  • Cron job execute backup.bash

    - by leejava
    Dear all, I wish to let cron execute backup.bash, but when I try to create the cron entry as below:

        */1 * * * * /var/www/mango_gis/delete_snapshot.bash > /dev/null

    it doesn't execute my script at all. Here is my script:

        #!/bin/bash
        get() {
          local pos=$1
          shift
          eval 'echo ${'$pos'}'
        }
        length() {
          echo $#
        }
        find_snapshots() {
          echo $(ec2-describe-snapshots | xargs -n1 basename)
        }
        snapshots=$(find_snapshots)
        len=$(length $snapshots)
        row_count=$(($len/6))
        if (($row_count > 6)); then
          delete_count=$(($row_count-6))
          for (( i=1; i<=$delete_count; i++ )); do
            ec2-delete-snapshot $(echo $(get $((2+$((6*$(($i-1)))))) $snapshots)) > /dev/null
          done
        fi

    Please advise... Leakhina

    Read the article

  • Want to improve my simple site/db backup script with auto removal of old backups.

    - by Robert Robb
    I have the following simple script for backing up my website files and db. The script is run each day via a cron job.

        #!/bin/sh
        NOW=$(date +"%Y-%m-%d")
        mysqldump --opt -h localhost -u username -p'password' dbname > /path/to/folder/backup/db-backup-$NOW.sql
        gzip -f /path/to/folder/backup/db-backup-$NOW.sql
        tar czf /path/to/folder/backup/web-backup-$NOW.tgz /path/to/folder/web/content/

    It works great, but I don't want loads of old backups clogging my system. How can I modify the script to remove any backups older than a week when the script is run?
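
    One common way to add the pruning (a sketch that reuses the same paths as the script above) is to append a couple of find calls:

        # Delete database dumps and site tarballs older than 7 days.
        find /path/to/folder/backup -name 'db-backup-*.sql.gz' -mtime +7 -delete
        find /path/to/folder/backup -name 'web-backup-*.tgz' -mtime +7 -delete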

    Read the article

  • Should I stick only to AWS RDS Automated Backup or DB Snapshots?

    - by James Wise
    I am using AWS RDS for MySQL. When it comes to backup, I understand that Amazon provides two types - automated backups and database (DB) snapshots. The difference is explained here - http://aws.amazon.com/rds/faqs/#23. However, I am still confused about whether I should stick to automated backups only, or use both automated backups and manual DB snapshots. What do you think guys? What's your own setup? I have heard from others that automated backups are not reliable, because the backup can be unrecoverable when the DB instance crashes, so DB snapshots are the way to rescue you. If I take daily DB snapshots on a schedule similar to the automated backups, I am going to pay quite a few extra bucks. Hope anyone could enlighten me or advise me on the right setup. Thanks. James
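
    For reference, manual snapshots can be scripted, so a daily snapshot doesn't have to be taken by hand; a hedged example with the AWS CLI (the instance and snapshot identifiers are placeholders):

        aws rds create-db-snapshot \
            --db-instance-identifier mydb-prod \
            --db-snapshot-identifier mydb-prod-$(date +%F)

    Unlike automated backups, which expire with the retention window, manual snapshots are kept until they are explicitly deleted.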

    Read the article

  • An XEvent a Day (16 of 31) – How Many Checkpoints are Issued During a Full Backup?

    - by Jonathan Kehayias
    This wasn’t my intended blog post for today, but last night a question came across #SQLHelp on Twitter from Varun ( Twitter ). #sqlhelp how many checkpoints are issued during a full backup? The question was answered by Robert Davis (Blog|Twitter) as: Just 1, at the very start. RT @ 1sql : #sqlhelp how many checkpoints are issued during a full backup? This seemed like a great thing to test out with Extended Events so I ran through the available Events in SQL Server 2008, and the only Event related...(read more)

    Read the article

  • How can I install Ubuntu on my Nexus 7 while being able to recover from a nandroid backup?

    - by MagicFab
    I use CyanogenMod and ClockworkMod Recovery on my Nexus 7. How can my existing full nandroid backup be used to restore my device after installing Ubuntu? The instructions assume "recovery" would mean re-flashing the vanilla image, at factory, data-wiped condition. It would be useful to provide a .zip that can be flashed via Clockwork (or other) recovery back to whatever nandroid backup there is - much as any other ROM is provided/used.

    Read the article

  • How can I install Ubuntu on my Nexus 7 while being able to recover from a Nandroid backup?

    - by MagicFab
    I use CyanogenMod and ClockworkMod Recovery on my Nexus 7. How can my existing full nandroid backup be used to restore my device after installing Ubuntu? The instructions assume "recovery" would mean re-flashing the vanilla image, at factory, data-wiped condition. It would be useful to provide a .zip that can be flashed via Clockwork (or other) recovery, using ROM Manager or booting into recovery, back to whatever nandroid backup there is - much as any other ROM is provided/used.

    Read the article

  • Making a database backup to SDCard on Android

    - by Pentium10
    I am using the below code to write a backup copy to the SD card, and I get java.io.IOException: Parent directory of file is not writable: /sdcard/mydbfile.db

        private class ExportDatabaseFileTask extends AsyncTask<String, Void, Boolean> {
            private final ProgressDialog dialog = new ProgressDialog(ctx);

            // can use UI thread here
            protected void onPreExecute() {
                this.dialog.setMessage("Exporting database...");
                this.dialog.show();
            }

            // automatically done on worker thread (separate from UI thread)
            protected Boolean doInBackground(final String... args) {
                File dbFile = new File(Environment.getDataDirectory() + "/data/com.mypkg/databases/mydbfile.db");
                File exportDir = new File(Environment.getExternalStorageDirectory(), "");
                if (!exportDir.exists()) {
                    exportDir.mkdirs();
                }
                File file = new File(exportDir, dbFile.getName());
                try {
                    file.createNewFile();
                    this.copyFile(dbFile, file);
                    return true;
                } catch (IOException e) {
                    Log.e("mypck", e.getMessage(), e);
                    return false;
                }
            }

            // can use UI thread here
            protected void onPostExecute(final Boolean success) {
                if (this.dialog.isShowing()) {
                    this.dialog.dismiss();
                }
                if (success) {
                    Toast.makeText(ctx, "Export successful!", Toast.LENGTH_SHORT).show();
                } else {
                    Toast.makeText(ctx, "Export failed", Toast.LENGTH_SHORT).show();
                }
            }

            void copyFile(File src, File dst) throws IOException {
                FileChannel inChannel = new FileInputStream(src).getChannel();
                FileChannel outChannel = new FileOutputStream(dst).getChannel();
                try {
                    inChannel.transferTo(0, inChannel.size(), outChannel);
                } finally {
                    if (inChannel != null) inChannel.close();
                    if (outChannel != null) outChannel.close();
                }
            }
        }

    Read the article

  • What is the suggested approach to Syncing/Backing up/Restoring from SQL Server 2008 to SQL Server 2005?

    - by Eoin Campbell
    I only have SQL Server 2008 (Dev Edition) on my development machine. I only have SQL Server 2005 available with my hosting company (and I don't have direct connection access to this database). I'm just wondering what the best approach is for: getting the initial DB structure & data into production, and keeping any structural changes/data changes in sync in future. As far as I can see... Replication - not an option because I can't connect to the production DB. Restoring a backup - not an option because, as far as I can see, you cannot export a DB from 2008 that is restorable in 2005 (even with the 2008 DB set in 2005 compatibility mode), and it wouldn't make sense to be restoring production over the top of my dev version anyway. Dump all the scripts from my 2008 database, revert my dev machine from 2008 to 2005, recreate the database from the scripts, then just use backup & restore to get the initial DB into production, then run scripts through the web panel from that point onwards. Or: dump all the scripts from my 2008 database and generate the entire 2005 DB from scripts in production, then run scripts through the web panel from that point onwards. With the last two options, I'd probably need to script all the data inserts as well, using some tool (which I presume exists on the web). Are there any other possible solutions that I'm not considering?

    Read the article

  • Time Machine for Windows?

    - by Blub
    Finally got convinced to start using some kind of version control for my code instead of zipping down a copy of the project at the end of each day. I downloaded TortoiseSVN and used it to create a repository locally on my hdd. I've been using it for 2 days now, but I have to say that using it is actually more hassle than just copying the project manually in Explorer. Sure, you only store incremental changes, but with the cheap disks of today I can't really say that's an argument when you only have small projects. I haven't really found a quick way to browse the older versions of my files either. What I want is an infinite undo that is completely transparent while I code: if I save the file, I want a backup. I don't want to check out, check in, and don't even get me started on moving files. I haven't tried Time Machine for OS X, but it looks like it's exactly what I'm looking for. Does such a program exist for Windows? Preferably free and with some kind of tagging system so I can tag a timestamp when the project is working, etc. Maybe I should add that I mostly work alone on a single computer. Update: Some of you asked why I want backup. Since I work alone, it's mostly to allow me to quickly hack up a solution without worrying that something will screw up.

    Read the article
