Search Results

Search found 6798 results on 272 pages for 'continuous backup'.

Page 93/272 | < Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >

  • How to backup/restore full-disk encryption on Ubuntu 11.10?

    - by ggc
    How do I backup/restore full-disk encryption on Ubuntu 11.10? I would like to take the raw encrypted file system and restore it on another computer. Encryption details: crypt set up via the Ubuntu alternate CD installer; the only thing unencrypted is /boot. File system setup: boot - j; swap - swap; everything else - ext4. Any suggestions? I have considered backing up the file system stripped of encryption, but I would prefer to keep the OS encrypted while transferring. Thanks for any help!
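    Since everything below the LUKS layer is just opaque blocks, a raw image taken from a live CD/USB session transfers the system still encrypted. A minimal sketch, assuming /dev/sda is the disk (check yours with sudo fdisk -l) and an external drive is mounted at /media/external:

      # boot a live session so nothing on the disk is in use, then:
      sudo dd if=/dev/sda of=/media/external/disk.img bs=4M conv=noerror,sync
      # on the target machine (its disk must be the same size or larger):
      sudo dd if=/media/external/disk.img of=/dev/sda bs=4M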

    Read the article

  • How to do a partial database backup and restore?

    - by Workshop Alex
    Simple problem. I'm working on a single SQL Server database which is shared between several offices. Each office has its own schema inside this database, dividing the database into logical pieces (plus one schema that is shared between multiple offices). The database is stored on a dedicated server, and we use a single database to keep the backup/restore procedure easier. The problem, however, is that the Accounting office might be modifying a lot of data when the Secretary office makes a mistake that requires restoring a backup. Unfortunately, restoring the backup means that Accounting will lose their recently added data. So the alternative is restoring the backup into a new database, removing the data from the old accounting schema, and moving only the accounting data from the backup to the original database. This is the current solution, and it's time-consuming and error-prone. So, is there a way to make backups of a single schema, possibly through code? And then to restore just that schema, probably through code too?
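    One hedged approach, if each office's tables can live on their own filegroup: SQL Server can back up and restore individual filegroups, which maps neatly onto the per-office split. All names below are invented, and a filegroup restore needs the full recovery model plus the subsequent log backups to bring it current:

      -- one-off setup: a filegroup per office, with that office's tables created ON it
      ALTER DATABASE OfficeDB ADD FILEGROUP SecretaryFG;
      ALTER DATABASE OfficeDB ADD FILE
          (NAME = SecretaryData, FILENAME = 'D:\Data\SecretaryData.ndf')
          TO FILEGROUP SecretaryFG;

      -- back up just that office's filegroup
      BACKUP DATABASE OfficeDB FILEGROUP = 'SecretaryFG' TO DISK = 'D:\Backup\SecretaryFG.bak';

      -- restore just that filegroup, then roll it forward with log backups
      RESTORE DATABASE OfficeDB FILEGROUP = 'SecretaryFG'
          FROM DISK = 'D:\Backup\SecretaryFG.bak' WITH NORECOVERY;
      RESTORE LOG OfficeDB FROM DISK = 'D:\Backup\OfficeDB.trn' WITH RECOVERY;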

    Read the article

  • Proper setup of shared folders for users

    - by user221486
    First, thanks for helping. I have a huge problem setting up correct permissions for shared folders. I have:

    Windows 7 x64 Ent., name: backupfb, joined to the domain, with a shared folder on drive E: (e:\backup)
    50 clients/laptops with TSM Tivoli FastBack for Workstations which save files to the shared folder

    I need to configure permissions on my shared folders so that only the owner of a folder can access it. The folder structure is:

    e:\backup <- shared as the "backup" folder (\\backupfb\backup\)
    e:\backup\BackupAdmin <- used by the Tivoli Storage Manager FastBack for Workstations client to download revisions and configurations. Nodes require read-only access to these directories.
    e:\backup\RealTimeBackup <- enables client accounts to create directories that are only accessible by the account that created them. As a result, the directory that contains data for a node is not created until that node connects to the server.

    So the permissions should look like this (taken from the instructions), with inheritable permissions from the object's parents DISABLED. Permission entries:

    \\backupfb\backup\BackupAdmin
      Allow Users: Read, Execute (this folder, subfolders, and files):
        Traverse Folder / Execute: Allow
        List Folder / Read Data: Allow
        Read Attributes: Allow
        Read Extended Attributes: Allow
        Delete Subfolders and Files: Allow
        Delete: Allow
        Read Permissions: Allow
      Allow Administrators: Full Control (this folder, subfolders, and files)

    Both folders have the option "apply these permissions to objects and/or containers within this container only" enabled. Here everything works fine.

    \\backupfb\backup\RealTimeBackup
      Allow Administrators: Full Control (this folder, subfolders, and files)
      Allow CREATOR OWNER: Full Control (this folder, subfolders, and files)
      Allow Users (from domain): Special (this folder only):
        Traverse Folder / Execute: Allow
        List Folder / Read Data: Allow
        Read Attributes: Allow
        Read Extended Attributes: Allow
        Create Files / Write Data: Allow
        Create Folders / Append Data: Allow
        Delete Subfolders and Files: Allow
        Read Permissions: Allow
      Allow OWNER RIGHTS*: Full Control (this folder, subfolders, and files)

    Here I have a huge problem with CREATOR OWNER: I'm able to set Full Control, but I can only apply it to "Subfolders and files only". When I change the scope to "This folder, subfolders and files" and save, it changes back to "Subfolders and files only". So I tried using icacls to set up the permissions:

      @echo off
      takeown /F E:\backup\ /R /A
      for /D %%i IN (E:\backup\RealTimeBackup\*) DO icacls E:\backup\RealTimeBackup\%%~nxi /grant:r cloud\%%~nxi:F /T /C
      pause

    But after that, users are able to create just one folder in \\backupfb\backup\RealTimeBackup\userfolder; the problem is with subfolders. In the log I have:

      FBW5022E Unable to access the specified file
      Explanation: The file specified is unable to be accessed. Possibly spelled incorrectly, or bad path, or permissions.
      User response: Ensure the user has the proper permissions for the file and directories involved and that the file and directory exist.

    Any ideas?? pls help ;-) thanks
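    For what it's worth, CREATOR OWNER ACEs are inherit-only by design, which is why Windows keeps flipping the entry back to "Subfolders and files only". A hedged variation on the script above that adds inheritance flags, so each per-user grant propagates to subfolders the node creates later ("cloud" is the domain taken from the original script):

      @echo off
      takeown /F E:\backup\ /R /A
      rem (OI)=object inherit, (CI)=container inherit: the ACE now applies to
      rem files and subfolders created under each user's folder later, not
      rem just to the folder itself
      for /D %%i IN ("E:\backup\RealTimeBackup\*") DO icacls "E:\backup\RealTimeBackup\%%~nxi" /grant:r "cloud\%%~nxi":(OI)(CI)F /T /C
      pause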

    Read the article

  • What is a 'best practice' backup plan for a website?

    - by HollerTrain
    I have a website which is very large and has a large user base. I'm trying to think of a best-practice way to create a backup or mirror site, so that if something happens on domain.com, I can quickly point the site to backup.domain.com via a 302 (temporary) redirect. This would give me time to troubleshoot domain.com while everyone views backup.domain.com without knowing the difference. Is my method the ideal one, or have you found better ways of creating a backup site? I don't want to have the site go down and then get yelled at every minute while I'm trying to fix it. Ideally I would just 'flip the switch' and it would redirect users to the backup. Any insight would be greatly appreciated.
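    A hedged nginx sketch of that switch, assuming nginx fronts the site (hostnames are placeholders); a 302 keeps clients and crawlers from caching the redirect permanently:

      server {
          listen 80;
          server_name domain.com;
          # the "switch": enable this return during an outage, then reload nginx
          return 302 http://backup.domain.com$request_uri;
      }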

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I'll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let's describe some disaster recovery scenarios where it's useful.

    About SQL Server disaster recovery

    Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one's immune. What you can do is take all the actions needed to be ready for a disaster and get through it with minimal data loss and downtime. Besides creating a recovery plan, it's necessary to have a list of steps that will be executed when a disaster occurs, and to test them before a disaster. This way, you'll know that the plan is good and viable. Testing can also serve as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don't meet your recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes most probably require changes and tweaks to the recovery plan itself.

    What is a good SQL Server disaster recovery plan?

    A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes; the frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until the next Friday, when you create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans.

    Why are transaction logs important?

    Transaction log backups contain the transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs enable you to repair a SQL Server database and bring it to its state before the disaster. Be aware that even with regular backups, there will be some data missing: the transactions made between the last transaction log backup and the time of the disaster. In some situations, it's not necessary to re-create the database from its last backup to repair it. The database might still be online, and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore-to-a-point-in-time feature is available in SQL Server, but for large databases it is very time-consuming, as SQL Server first restores a full database backup and then restores transaction log backups, one after another, up to the recovery point.
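    In plain T-SQL, the backup schedule and the point-in-time restore described above look roughly like this (database name, paths, and timestamp are invented; in practice the backups would be scheduled via maintenance plans):

      -- nightly full backup (or weekly, with nightly differentials)
      BACKUP DATABASE SalesDB TO DISK = 'D:\Backup\SalesDB_full.bak' WITH INIT;
      BACKUP DATABASE SalesDB TO DISK = 'D:\Backup\SalesDB_diff.bak' WITH DIFFERENTIAL, INIT;
      -- every hour or every 15 minutes; requires the full recovery model
      BACKUP LOG SalesDB TO DISK = 'D:\Backup\SalesDB_log.trn';

      -- point-in-time restore: full backup first, then each log backup in order
      RESTORE DATABASE SalesDB FROM DISK = 'D:\Backup\SalesDB_full.bak' WITH NORECOVERY;
      RESTORE LOG SalesDB FROM DISK = 'D:\Backup\SalesDB_log.trn'
          WITH STOPAT = '2014-03-01 10:15:00', RECOVERY;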
    During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it's important that you haven't manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them.

    How to read a SQL Server transaction log?

    SQL Server doesn't provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge and return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it's necessary to match them to values in the MDF file. When you have finally read the transactions executed, you still have to create a script for them.
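    For the curious, a hedged example of the undocumented route mentioned above; the column set is not contractual and changes between versions:

      -- dump the active portion of the online transaction log
      SELECT [Current LSN], Operation, [Transaction ID], [Transaction Name]
      FROM fn_dblog(NULL, NULL);  -- NULL, NULL = no start/end LSN filter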
    How to easily repair a SQL database?

    The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the transactions it reads. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both the MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available.

    First, restore the last full database backup on SQL Server using SQL Server Management Studio. I'll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon to the right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don't have transaction log backups, but the LDF file hasn't been lost during the SQL Server disaster, add it using Add.

    To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That's why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transaction, operation type (e.g. to read only data inserts), table name, the SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I'll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored and the database will not be properly repaired; the data repair for modified tables will fail. In the Tables tab, I'll make sure all tables are selected. I will uncheck the Show operations on dropped tables option, to reduce the number of transactions. Click Next.

    ApexSQL Log offers three options. Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and create a redo script by clicking the Create redo script icon in the menu. For a large number of transactions, and in a critical situation when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be scripted directly into a redo file, without being shown in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but is quite useful in daily auditing scenarios.

    To repair your SQL database, all you have to do is execute the generated redo script, using an integrated development environment such as SQL Server Management Studio, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered, restored, or transactions rolled back.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Copy Ubuntu distro with all settings from one computer to a different one

    - by theFisher86
    I'd like to copy my exact setup from my computer at work to my computer at home, and I'm trying to figure out how to go about doing that. So far I've figured this much out. On the source computer:

    Run dpkg --get-selections > installed-software and back up the installed-software file
    Back up /etc/apt/sources.list
    Back up /usr/share/applications/ to save all my custom Quicklists
    Back up /etc/fstab to save all my network mounts
    Back up /usr/share/themes/ to save the customization I've done to my themes

    I'm also going to back up my entire HOME directory. On the destination computer, I'm going to first do a fresh install of 11.10. Then I'll copy over my HOME directory, /etc/apt/sources.list, /usr/share/applications, /etc/fstab and /usr/share/themes/. Then I'm going to run dpkg --set-selections < installed-software, followed by dselect, which should install all of my apps for me. I'm wondering if there's a way (or a need) to back up dconf and gconf settings from the source computer; I guess that's my ultimate question (see the sketch below). I'd also like notes on anything else that might need to be backed up before I undertake this project. I hope this post is legit; I figured other people would be interested in this process, and I don't see any other questions that really document it here. I'd also like to further this project and have each computer routinely back up the necessary files so that both computers are basically identical at all times. That's stage 2, though...
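    On the dconf/gconf question, a hedged sketch, assuming the dconf command-line tool (dconf-tools package) is installed; gconftool-2 covers the older GConf-based apps:

      # on the source computer
      dconf dump / > dconf-settings.ini
      gconftool-2 --dump / > gconf-settings.xml

      # on the destination computer, after copying both files over
      dconf load / < dconf-settings.ini
      gconftool-2 --load gconf-settings.xml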

    Read the article

  • Why isn't the backup file created when running sqlcmd from a remote machine?

    - by Ed Gl
    I tried running sqlcmd from a remote host to do a simple backup of a SQL 2008 database. The command goes something like this:

      sqlcmd -S xxx.xxx.xxx.xx -U username -P some_password -Q "BACKUP DATABASE [db] TO DISK = 'c:\test_backup.bak' WITH FORMAT"

    I get a success message, but the file isn't created. When I run the same statement in SQL Server Management Studio on the server itself, it works. I thought it was a permission problem, but I'm using the same username in both cases. Any thoughts?
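    One thing worth knowing: BACKUP DATABASE ... TO DISK always writes on the machine running SQL Server, so 'c:\test_backup.bak' lands on the server's C: drive, not on the remote host issuing the command. A hedged variant that writes to a share on the client instead (the share name is invented; the SQL Server service account needs write access to it):

      sqlcmd -S xxx.xxx.xxx.xx -U username -P some_password -Q "BACKUP DATABASE [db] TO DISK = '\\myclient\backups\test_backup.bak' WITH FORMAT"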

    Read the article

  • HTG Explains: What are Shadow Copies and How Can I Use Them to Copy or Backup Locked Files?

    - by Jason Faulkner
    When trying to create simple file copy backups in Windows, a common problem is locked files which can trip up the operation. Whether the file is currently opened by the user or locked by the OS itself, certain files have to be completely unused in order to be copied. Thankfully, there is a simple solution: Shadow Copies. Using our simple tool, you can easily access shadow copies, which allows access to point-in-time copies of the currently locked files as created by Windows Restore.
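    If you'd rather poke at shadow copies by hand, a hedged sketch from an elevated prompt (the snapshot number varies per machine, and creating new snapshots on demand with vssadmin is limited to the Server editions, so on a desktop you'd browse the copies System Restore already made):

      vssadmin list shadows
      rem expose a snapshot as a folder; the trailing backslash is required
      mklink /d C:\shadow \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\
      copy C:\shadow\Users\me\locked.pst D:\backup\
      rmdir C:\shadow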

    Read the article

  • How to backup a dev & QA folder website structure?

    - by novicePrgrmr
    A site I just became in charge of uses a really simple two-folder structure to host the dev site and the QA site. The sites are hosted on the company servers, so I just have the sites' folders mapped on my desktop. I would like to run some kind of backup scheme, but I am finding it hard to think of a way to do this effectively. The problem is that we aren't using any revision control software, and since the servers aren't controlled by me, I don't think I will be able to implement anything like that. Or could I? The entire site is static too, so there are no DBs or anything besides HTML, images, PDFs, etc.
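    On the "or could I?" part: since the sites' folders are just mapped drives, nothing stops you from running a version control client purely on your own machine against them. A hedged sketch with git (drive letter and message are placeholders):

      cd /d Z:\dev-site
      git init
      git add -A
      git commit -m "baseline of dev site"
      rem a scheduled "git add -A" + "git commit" from then on gives cheap
      rem point-in-time backups you can diff against and roll back to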

    Read the article

  • SQL Server Express backup/restore error: The Media Family on Device is Incorrectly Formed.

    - by Chris
    Basically, I'm having this issue: http://www.sqlcoffee.com/Troubleshooting047.htm What I'm doing is running a script I found online (http://pastebin.com/3n0ZfybL) to do a full backup, then rar'ing up the file and moving it to my computer. The CRC of the backup file inside the rar is correct on both computers, so the data isn't being corrupted in transfer. But when I try to restore the database on my dev computer here, I get the errors "sql server cannot process this media family" ... "msg 3013". Why is this happening? I'd test out the backup on the server I'm getting it from, but it's a production server. Edit: I was about to say how I wasn't doing anything stupid like trying to restore the database to an earlier version of SQL Server, but apparently I am. From:

      Microsoft SQL Server 2008 (SP1) - 10.0.2531.0 (Intel X86) Mar 29 2009 10:27:29
      Copyright (c) 1988-2008 Microsoft Corporation
      Express Edition on Windows NT 6.0 (Build 6002: Service Pack 2)

    To:

      Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86) Feb 9 2007 22:47:07
      Copyright (c) 1988-2005 Microsoft Corporation
      Express Edition on Windows NT 6.0 (Build 6001: Service Pack 1)

    Let me get back to this post after I reinstall.
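    Before reinstalling, a quick hedged way to confirm a version mismatch is to ask the backup file itself which build wrote it (the path is a placeholder):

      -- SoftwareVersionMajor / DatabaseVersion in the output identify the
      -- source build; SQL Server restores older backups, never newer ones
      RESTORE HEADERONLY FROM DISK = 'C:\backups\mydb.bak';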

    Read the article

  • Advice needed: warm backup solution for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a warm backup server for a SQL Server Express instance running a single database? Sitting beside my production SQL Server 2008 Express box I have a second physical box currently doing nothing. I want to use this second box as a warm backup server by somehow replicating my production database in near real time (a little data loss is acceptable). The database is very small and resources are utilized very lightly. If the production server dies, I would manually reconfigure my application to point to the backup server instead. Although Express doesn't support log shipping natively, I am thinking I could manually script a poor man's version of it, using batch files to take the logs, copy them across the network, and apply them to the second server at 5-minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do? Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think it is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
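    A hedged sketch of that poor man's log shipping, as a batch file run every 5 minutes via Task Scheduler (Express has no SQL Agent). Server names and paths are invented; the standby database must be initialized from a full backup restored WITH NORECOVERY and kept in that state so it continues accepting logs:

      @echo off
      rem 1. take a log backup on the production box
      sqlcmd -S PROD\SQLEXPRESS -Q "BACKUP LOG MyDb TO DISK = 'C:\ship\MyDb.trn' WITH INIT"
      rem 2. pull it across the network
      copy \\PROD\ship\MyDb.trn C:\ship\MyDb.trn
      rem 3. apply it, leaving the warm copy restoring for the next log
      sqlcmd -S STANDBY\SQLEXPRESS -Q "RESTORE LOG MyDb FROM DISK = 'C:\ship\MyDb.trn' WITH NORECOVERY"
      rem a production script would use numbered files so one missed run
      rem can't break the log chain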

    Read the article

  • How do I restore a Windows Server 2008 R2 bare metal backup to a Windows Server 2012 R2 Hyper-V instance?

    - by Michael J. Gray
    I have been trying to find a simple way to migrate a physical Windows Server 2008 R2 installation over to a virtual machine hosted on Windows Server 2012 R2 Datacenter Edition with Hyper-V. I came across the bare metal backup feature on Windows Server 2008 R2 and assumed I would be able to back it up and simply restore it into a new virtual machine by booting the installation media and getting into the Windows recovery process. When I attempted this, Hyper-V got into a network-based restore process, but I do not have a PXE server or anything like that, and I would rather not set one up. I tried mounting the VHD produced by the bare metal backup, just to see if it would somehow work, but of course it did not; it failed with an error related to an incorrect boot device. I checked the virtual machine's BIOS settings and everything looked fine. I did not expect this to work anyway, so I stopped pursuing that method. Is there a way to take my bare metal backup and restore it into a virtual machine without a PXE server or SCVMM? I am open to using proprietary tools, but the last time I did this I used Norton Ghost, which is no longer supported, so I figured I would try doing it with what is readily available.

    Read the article

  • How do I back up Hyper-V VMs with Windows Server Backup on Windows Server 2008 R2?

    - by Chris
    I've searched this site and google, and I CAN find information about how to back up Hyper-V virtual machines by using Windows Server Backup from the Hyper-V host in Windows Server 2008. You have to set up a registry key to enable the Hyper-V VSS writer, and then you can take online backups of your VMs. However, all the information I have found is about a year old, and none of it has been updated for Windows Server 2008 R2. I tried to run the "FixIt" .msi found here: http://support.microsoft.com/kb/958662 ... but it said that it was not applicable to my operating system. So I am thinking either Windows Server 2008 R2 already has its VSS service for Hyper-V enabled, or it still needs to be enabled but the FixIt package doesn't feel comfortable operating on an OS that wasn't RTM at the time. I went ahead and scheduled a windows server backup for 9pm tomorrow. It said it would take 86 GB, which means it MUST be counting those VMs. But will this backup fail? Can anyone confirm whether you have to apply the same registry changes for R2?
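    For reference, a hedged rendering of the registry change the KB 958662 FixIt applies on the original 2008 release (the GUID identifies the Hyper-V VSS writer; whether R2 still needs it is exactly the open question here):

      reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WindowsServerBackup\Application Support\{66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}" /v "Application Identifier" /t REG_SZ /d "Hyper-V"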

    Read the article

  • Script to email file contents

    - by Tarun
    I have created a shell script that takes backups every day and emails whether the run was successful or unsuccessful. Now I want it to also send the contents of the log file it creates in the mail. I have seen how to send a file as an attachment, but I want the contents of the file as the email message, not the file itself. Please help. The code is like:

      #Email Settings
      Message_Success="Database Backup generated successfully"
      Message_Failure="Problem occurred while generating Database Backup please verify"
      Subject="Database Backup Status Mail"
      Recipients="[email protected]"

      #Verify Backup Created
      if [ -f "$Path_Mysql_Dump" ]; then
          echo "Database Backup Created" >> $Path_Log_File
          echo "$Message_Success" | mail -s "$Subject" "$Recipients"
      else
          echo "Database Backup not created please verify the process will terminate" >> $Path_Log_File
          echo "$Message_Failure" | mail -s "$Subject" "$Recipients"
          exit -1
      fi
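    To make the log the message body, feed the file to mail's stdin instead of piping an echo; a hedged tweak of the success branch:

      echo "Database Backup Created" >> "$Path_Log_File"
      mail -s "$Subject" "$Recipients" < "$Path_Log_File"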

    Read the article

  • Should You Delete Windows 7 Service Pack Backup Files to Save Space?

    - by The Geek
    After you install the Windows 7 Service Pack 1 that we mentioned yesterday, you might be wondering how to reclaim some of the lost drive space—which we’ll show you how today—but should you actually do it? Note: If you haven’t installed the new SP1 release yet, be sure to read our post explaining what it entails before you do. Spoiler: it’s mostly bugfixes.
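    For reference, the reclaim itself boils down to one command from an elevated prompt; once it runs, SP1 can no longer be uninstalled:

      DISM /online /Cleanup-Image /SpSuperseded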

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume?

      mgorven@moab:~% sudo lvdisplay /dev/moab/backup
        --- Logical volume ---
        LV Name                /dev/moab/backup
        VG Name                moab
        LV UUID                nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5
        LV Write Access        read/write
        LV Status              available
        # open                 1
        LV Size                500.00 GiB
        Current LE             128000
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     2048
        Block device           252:3

      mgorven@moab:~% sudo cryptsetup status backup
      /dev/mapper/backup is active and is in use.
        type:    LUKS1
        cipher:  aes-cbc-essiv:sha256
        keysize: 256 bits
        device:  /dev/mapper/moab-backup
        offset:  3072 sectors
        size:    1048572928 sectors
        mode:    read/write

      mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup
      tune2fs 1.42 (29-Nov-2011)
      Filesystem volume name:   backup
      Last mounted on:          /srv/backup
      Filesystem UUID:          63877e0e-0549-4c73-8535-b7a81eb363ed
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
      Filesystem flags:         signed_directory_hash
      Default mount options:    (none)
      Filesystem state:         clean with errors
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              32768000
      Block count:              131071616
      Reserved block count:     0
      Free blocks:              112894078
      Free inodes:              32044830
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Reserved GDT blocks:      992
      Blocks per group:         32768
      Fragments per group:      32768
      Inodes per group:         8192
      Inode blocks per group:   512
      RAID stride:              128
      RAID stripe width:        128
      Flex block group size:    16
      Filesystem created:       Sun Mar 11 19:24:53 2012
      Last mount time:          Sat May 19 13:29:27 2012
      Last write time:          Fri Jun  1 11:07:22 2012
      Mount count:              0
      Maximum mount count:      100
      Last checked:             Fri Jun  1 11:03:50 2012
      Check interval:           31104000 (12 months)
      Next check after:         Mon May 27 11:03:50 2013
      Lifetime writes:          118 GB
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:               256
      Required extra isize:     28
      Desired extra isize:      28
      Journal inode:            8
      Default directory hash:   half_md4
      Directory Hash Seed:      383bcbc5-fde9-4720-b98e-2d6224713ecf
      Journal backup:           inode blocks
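    A hedged sketch of the shrink order, with the filesystem unmounted. The sizes are examples; the sector count must leave room for the 3072-sector LUKS1 header shown above (100GiB = 209715200 sectors, minus 3072 for the header). Double-check the math before running this against real data:

      sudo umount /srv/backup
      sudo e2fsck -f /dev/mapper/backup
      sudo resize2fs /dev/mapper/backup 95G            # shrink fs below the target
      sudo cryptsetup resize backup --size 209712128   # shrink the mapping (512-byte sectors)
      sudo lvreduce -L 100G /dev/moab/backup           # now shrink the LV itself
      sudo cryptsetup resize backup                    # regrow the mapping to fill the LV
      sudo resize2fs /dev/mapper/backup                # regrow the fs to fill the mapping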

    Read the article

  • Hard linking takes a lot of space

    - by mr_schlomo
    I made an rsync incremental backup script for my server that copies a MySQL database backup and a specified folder path to a remote server. Here's the code on Github. Code excerpt from lines 53-57:

      ############### Create most current hard link
      echo "Creating most current hard link on backup server $most_recent_backup_link"
      ssh $remote_backup_server rm -rf ${most_recent_backup_link}
      ssh $remote_backup_server cp -alv ${remote_backup_folder}/backup-${backup_folder_name}/ ${most_recent_backup_link}

    I'm having a problem with creating the most current hard links on the backup server (lines 53-57 in the program). Everything works, and rsync only copies about 1-2MB of data. But the hard-link copy process uses about 30MB of data. I get a huge laundry list of files that haven't changed, and the only ones that have changed are very small. Normally this isn't a problem, but when you back up every hour, the backup should be as small as possible. For example, in the last backup I did, rsync transferred 1.3MB, but the backup directory grew 35MB. Why are the hard links taking up so much hard drive space?
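    An aside on the approach: directories can't be hard-linked, so every snapshot re-creates the whole directory tree, and on a tree with many folders that metadata alone adds up. rsync can also build the hard-link farm itself via --link-dest, skipping the separate cp -al pass; a hedged sketch (paths invented, --link-dest resolves on the receiving side):

      rsync -a --delete --link-dest=/backups/latest/ /data/ backupserver:/backups/2012-06-01/
      ssh backupserver "ln -sfn /backups/2012-06-01 /backups/latest"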

    Read the article

  • Redirect as a backup trick? w/o modifying DNS?

    - by acidzombie24
    I specifically looked up how to do something like this (Can you set a backup IP for your server in DNS?) and the answer basically was that you can't. If I, say, specify two IP addresses, could I somehow use an HTTP response header to ignore one temporarily (say 5 minutes) and go to the other IP address? Or maybe I can play dead, though I'm unsure how to play dead using nginx. I would then like the backup box to notice the main box is down and come up as some kind of read-only server. I'm sure something like this has been implemented; I am just wondering how I might do it with two boxes. I'm sure it isn't very difficult? How might I redirect traffic from a backup box to my main server without modifying the DNS?
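    If one of the boxes (or a small box in front) runs nginx as a reverse proxy, the failover part is built in; a hedged sketch where the second server only receives traffic while the first is considered down (IPs invented):

      upstream site {
          server 10.0.0.1:8080 max_fails=3 fail_timeout=300s;
          server 10.0.0.2:8080 backup;  # the read-only standby
      }
      server {
          listen 80;
          location / {
              proxy_pass http://site;
          }
      }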

    Read the article

  • Should I use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.) Here's the setup: a brand-new dedicated server (8GB RAM, 140+ GB disk, RAID 1 via HW controller, 15000 RPM). It's a production web server (with MySQL on it too, not just serving web requests), not a personal desktop computer or similar. Ubuntu Server 64bit 10.04 LTS. We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3 via AWS' console. We are now migrating to the dedicated server, and I want to be able to back up our data to Amazon's S3. The main reason is the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of:

    do a "simple" file-based backup with rsync, dumping the database and other files, and uploading to Amazon via S3 API commands, or to an EC2 instance, or something.
    do a file-system "freeze" (using XFS) with the usual EBS/EC2 snapshot tool to take part of the file system, take a snapshot, and upload it to Amazon.

    Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4, or should I use something else? Would it then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do, anyway? Any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume. My intention is to do it on a "real"/metal dedicated server. Update: I just saw this on Wikipedia: "XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager." I had always assumed that I could choose two ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize the two options are more like:

    With XFS: 1) do xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze xfs.
    With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze xfs; 4) somehow back up the LVM snapshot.

    Thanks a lot in advance. Let me know if I need to clarify something.
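    A hedged sketch of the LVM-plus-XFS variant from the update, assuming the data sits on an LVM volume with spare extents for a snapshot (names are invented, and s3cmd stands in for "upload to Amazon somehow"):

      sudo xfs_freeze -f /srv/data                        # flush and pause writes
      sudo lvcreate -s -L 10G -n data_snap /dev/vg0/data  # point-in-time block copy
      sudo xfs_freeze -u /srv/data                        # resume writes immediately
      sudo mount -o ro,nouuid /dev/vg0/data_snap /mnt/snap  # nouuid: XFS refuses duplicate UUIDs otherwise
      tar czf /tmp/data-$(date +%F).tar.gz -C /mnt/snap .
      s3cmd put /tmp/data-$(date +%F).tar.gz s3://mybucket/
      sudo umount /mnt/snap && sudo lvremove -f /dev/vg0/data_snap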

    Read the article

  • What settings to use for fastest reloading of a MySQL backup?

    - by Alex R
    I have a MySQL database which dumps to a 3.5 GB backup (mysqldump) in about 10 minutes. But reloading this backup on a standby / test server takes upwards of 12 hours. What are some settings that would maximize reloading performance? The most promising appear to be innodb_buffer_pool_size, innodb_additional_mem_pool_size, and innodb_log_buffer_size... but I'm reaching the limits of my trial-and-error approach. Which of these settings "should" be the most important? Through trial-and-error I was not able to get more than 70% CPU utilization and 63% memory utilization. I'd like both at 100% during a reload. All tables are InnoDB.
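    A hedged starting point for the standby box's my.cnf while reloading (values sized for a box that comfortably holds a 3.5 GB dump; loosen durability only for the import, then revert; older versions need the old ib_logfile* removed after a clean shutdown when the redo log size changes):

      [mysqld]
      innodb_buffer_pool_size        = 4G    # usually the single biggest win
      innodb_log_file_size           = 256M  # bigger redo logs = fewer checkpoints
      innodb_log_buffer_size         = 64M
      innodb_flush_log_at_trx_commit = 0     # skip the fsync per commit (import only!)
      innodb_doublewrite             = 0     # likewise: off for the import, back on after

    mysqldump output already wraps the dump in SET FOREIGN_KEY_CHECKS=0 and SET UNIQUE_CHECKS=0, which covers the other classic import speedups.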

    Read the article

  • How to add exception for backup MX to tumgreyspf?

    - by Waleed Hamra
    I have an Ubuntu Raring server running Postfix/Dovecot as an email server, with tumgreyspf doing greylisting and SPF checks. My problem is that I also have a backup MX server that is supposed to store my email temporarily should my main server ever fail. It usually declines mail when it finds the main server online and functional. The problem is that when it does need to do its job, tumgreyspf rejects all mail from the backup MX with an error like this:

      Jun 27 16:18:13 hamra postfix/smtpd[28732]: NOQUEUE: reject: RCPT from mxbackup.mydomain.com[x.x.x.x]: 550 5.7.1 <[email protected]>: Recipient address rejected: QUEUE_ID="" SPF Reports: 'SPF fail - not authorized'; from=<[email protected]> to=<[email protected]> proto=SMTP helo=<mxbackup.mydomain.com>

    Any ideas?
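    A hedged Postfix-level workaround: whitelist the backup MX before the policy check, since an OK from check_client_access stops further restriction processing. The x.x.x.x placeholder matches the log above, and the policy service path should match your existing master.cf entry; the cleaner long-term fix is making sure the backup MX is covered by the senders' SPF records:

      # /etc/postfix/main.cf (sketch; keep your existing entries and order)
      smtpd_recipient_restrictions =
          permit_mynetworks,
          reject_unauth_destination,
          check_client_access cidr:/etc/postfix/backup_mx.cidr,
          check_policy_service unix:private/tumgreyspf

      # /etc/postfix/backup_mx.cidr
      x.x.x.x/32    OK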

    Read the article

  • How to configure Time Machine to also back up a connected external hard drive?

    - by Another Registered User
    I've put a sweet SSD into my MacBook Pro, but it has limited space, so I had to move 60 GB of data to an external hard drive. But now, when I run Time Machine to back up my stuff, of course it only backs up what's on the system disk. Is there a way to have Time Machine also back up the connected external hard drive? Maybe there's a trick to make this work, like a fake folder that's actually the mounted drive? What could I do?
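    Time Machine excludes external drives by default, so it's usually just a matter of removing the exclusion: System Preferences > Time Machine > Options, then delete the drive from the excluded list. On 10.7 and later there's also a hedged one-liner (the volume name is a placeholder; the drive must stay connected and be HFS+ formatted):

      sudo tmutil removeexclusion /Volumes/MyExternal
      tmutil isexcluded /Volumes/MyExternal   # verify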

    Read the article

  • Debian crashed, file system is read-only and cannot backup - How Do I find/mount a USB drive?

    - by Spiros
    We have a Debian server (VM) here at work and it crashed after a power failure. I can only boot the system in maintenance mode, and the whole file system is set to read-only. I could run fsck from maintenance mode, but I would like to get a backup of some files first. Problem: I cannot access the network, since there is no network connectivity in maintenance mode, so I tried adding a USB flash drive to the computer, but I can't find it through the console. Question: how do you find and mount a USB drive on Debian? I have tried several resources from the Internet, but nothing worked. Is there any other way I could get a backup of my files? I cannot start networking since the file system is set to read-only. Any help would be appreciated.
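    A hedged sequence from the maintenance console; the device name is a guess, and dmesg tells you the real one:

      dmesg | tail -20            # plug the stick in, look for "sdb: sdb1" or similar
      fdisk -l                    # cross-check the partition list
      mkdir -p /mnt/usb
      mount /dev/sdb1 /mnt/usb    # adjust the device to match dmesg
      cp -a /etc /home /mnt/usb/  # reading works fine even on a read-only root fs
      umount /mnt/usb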

    Read the article

< Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >